The COVID-19 pandemic has shifted society unexpectedly and with tragic consequences. But amid the darkness, this sudden homebound pivot may have brought us into the future faster on at least one frontier: technology.
Technological innovation and adoption take time: not just for something to be invented, but for its use to reach critical mass in a society and become so ubiquitous and accepted that it is no longer thought of as “technology.” No one marvels at ballpoint pens or traffic lights or ATMs. They just are.
Humans have been slow to adopt technology and related techno-practices for the majority of the 195,000-plus years that humanity has existed. We tend to follow a pattern: someone invents something, there are varying degrees of concern about it being scary or taking over jobs, people start using it, more people use it, until finally we forget we’re using it or that it was ever novel or scary.
Brief bursts of innovation like the Renaissance, the Roman Empire, and the Tang Dynasty aside, this cycle used to take hundreds of years prior to the Industrial Revolution. For instance, there is an almost 600-year gap between the invention of the printing press and today’s global literacy average of 86%. Superstition, fear, power plays, inefficient supply chains for goods and information, and socio-economic disparity have all slowed our ability to discover or create something new and incorporate it into everyday life.
In the 20th century, and even more so in the first two decades of the 21st, the rate at which people adopt something new has increased exponentially, far outpacing earlier blitzes of innovation. Consider the following (sourced from Our World in Data):
- It took more than 100 years for the flush toilet to be found in half of U.S. homes;
- 70 years passed before 50% of American households had a car, and another 60 before ownership reached today’s 95%;
- The washing machine was available for 40 years before it made it into 50% of American homes;
- The computer wormed its way into 90% of homes, businesses, and hearts in less than 30 years;
- The internet gained widespread adoption within 20 years;
- Smartphones and social media became indispensable, daily facets of American life within 5 years of their launch; and
- Virtual assistants like Alexa and Google Home crossed the threshold into more than 50% of American homes in less than five years.
The Geospatial Revolution
We’re experiencing overlapping inventions, simultaneously taking in and adopting technology in exponentially faster waves. Humanity relies on technology more than ever, and COVID-19 has sped this up even more. In isolation, we have capitalized on our access to digital worlds and virtual infrastructure to continue working, shopping, playing, and seeing our loved ones.
Companies that were once resistant to working from home have had to shift almost entirely to a digital working infrastructure. The boom in remote collaboration and video call apps reached not just the workforce but also outlying users who are usually slower to adopt new technologies. Older and very young generations have found themselves suddenly logging into video chats and social media. The conventional, in-person classroom moved online and may stay online long after lockdowns end. This may well shift societal perceptions of online vs. in-person education.
The #futureofwork, education, and life is digital. Now that everyone has tasted what this shift can mean for the post-industrial work life, it is unlikely we will go back to the same workplace systems, especially with the promises of automation and artificial intelligence on our horizon.
Once adopted, technology is hard to stop using. Can you imagine life without the mapping app on your smartphone? And yet those mapping apps, along with the food delivery apps, Amazon services, and wayfinding that spiraled out from their mappy core, are just the beginning.
The geospatial revolution that spawned mapping apps never stopped. And now society and technology are on the cusp of an even wider spatial revolution: one where we don’t just use 2D maps on screens to navigate the world, but where the rooms and offices around us become machine-readable 3D maps onto which we can layer and access information through mixed reality. This combopack of mapping and mixed reality tech is known as spatial computing.
COVID-19 closed the world down just as investment in and development of spatial computing experienced an uptick. The confluence of 2D and 3D geospatial mapping with computer vision, mixed reality, gaming, and more had only just reached mainstream consciousness when the 2020 Consumer Electronics Show (CES) showed many that holograms and AI avatars were not just science fiction but were here or coming soon.
With today’s at-home lifestyles pushing many new players into virtual worlds (in games and social media) and into remote collaboration systems for work, the appetite for spatial computing has increased, leading to:
- Head-mounted displays for VR and gaming consoles sold out rapidly at the start of shelter-in-place orders, and continue to do so each time a new set launches;
- Attendance at virtual events and in games is booming; and
- The digitization of tourist spots, stores, and real estate is increasingly widespread, letting people explore the world and future purchases in increasingly realistic 3D from the comfort of their own couches.
Though the workplace has long been where most humans spend the majority of their time, the pandemic returned focus and time to our homes and made work an adjunct of family life (as it had been for the millennia of human history prior to the Industrial Revolution). As our homes reclaim their role as the primary stage in our lives, demand is high for new tools that map, augment, and annotate those spaces. Small and large companies alike are weathering COVID-19 by providing solutions and working these new technologies into our everyday isolated existence, one in which we are all, for once, keen to try these technologies without the pushback our species traditionally mounts when adopting new things.
We are unlikely to return to the same in-person, 9-to-5 office hours after discovering that numerous enterprises can be run successfully by a remote, geo-dispersed digital workforce.
Curious to delve further into the farther end of augmented reality?
Or maybe you’re interested in seeing how quickly spatial computing is carving out spaces in our workflows and playflows? Here’s a quick taste: use the Chrome browser on a recent smartphone to search ‘cheetah’ or ‘tiger’ and select the ‘View in 3D’ option that now appears as part of Google’s encyclopedia entry for that animal. Let it roam your living room for a while. The animation is still in its early days, but if your child ever asks you ‘what’s a <insert your choice of animal here>?’, would you ever consider NOT using this feature now that you know it exists? Will a 2D search, or a world viewed through 2D screens, be enough for us once we have had a taste of real 3D worlds?
It isn’t and it won’t.
3D content in a 3D world is much more powerful and compelling (and cognitively less taxing). Where a picture is worth a thousand words, a 3D art asset is worth a million, and an interactive piece of 3D data, captured from the right combination of ever-cheaper, miniaturized sensors and wrapped in a high-resolution texture, is worth a billion.
New interfaces and new ways of connecting and understanding 2D and 3D data about ourselves and the world around us are rolling out even in our darkest hour. They may well pave the way to a world with better contextual understanding of ourselves and the planet.
But to really utilize them, we will need not just to adopt these new, ever-more-frequent waves of innovation; we will need to learn how to constantly evolve our infrastructures, our businesses, and our governments to adopt and adapt alongside us.