A broken and bedraggled robot suddenly comes to life and projects the volumetric recording of a beleaguered princess pleading for help in 1977’s “Star Wars.”
Michael Caine sits at the head of a fancy wooden conference table by himself. Or so it appears until Colin Firth dons his augmented reality glasses to see each seat instantly filled with the lifelike avatars of the other agents waiting patiently for their meeting to begin in 2014’s “Kingsman.”
To better understand the native terrain of the planet Pandora, a former Marine can call up 3D imagery of his scouting trips for his colleagues back at base in 2009’s “Avatar.”
Targeted ads and models of products pop up for passersby walking down bustling city streets, ruining Tom Cruise’s attempt to remain unnoticed as the displays try to sell him specific products in 2002’s “Minority Report.”
What do all these snippets of science fiction have in common?
- They use 3D datasets and 3D visualization systems.
- These systems are a lot closer to science fact than science fiction.
This arena of technology, long considered fictional, combines things like 3D data capture, geospatial mapping, artificial intelligence, and mixed reality. It is increasingly being called “spatial computing,” and it is rolling out rapidly.
Funnily enough, though all of the sci-fi representations focus on humans being able to engage with the data, this confluence of technologies isn’t finally coming together because humans want the ability to see these things. It’s coming together because we’ve realized it’s also important for our machines to see and map alongside us.
3D mapping of people, objects, and rooms is what takes us from our blind Roombas to a more insightful, change-detection-capable Rosie the Maid (that’s from “The Jetsons,” for those unfamiliar with the Hanna-Barbera cartoon). Likewise, innovations in self-driving cars come from combining 3D-capture techniques like light detection and ranging (LiDAR), photogrammetry, and multispectral imaging with artificial intelligence and computer vision to understand what has been captured, so autonomous systems can roam the streets. The autonomous systems we are transitioning to depend on the ability to map and understand the space in which they operate.
Mapping the world in more ways allows new kinds of operators to make use of those maps, so we’re also creating more content to fill those systems. And in so doing, 3D imagery is following a similar path to 2D photography.
A hundred years ago, cameras were expensive, fancy pieces of technology few had access to, but the more people found value in photos of the world and of themselves, the more photos were taken. And the cheaper, more advanced, and easier to use those devices became, the more platforms were created to share that content with each other (#Instagram). Most people don’t yet have a useful bit of LiDAR on their smartphone or tucked into their pocket, but the cheaper those sensors become and the more arenas of everyday life their content can be used in, the more 3D data we will see.
Most office buildings and monuments around you are probably already 3D mapped in one form or another. Building information modelling (BIM) has been a mainstay of the architecture, engineering, and construction industries for ages, but now we’re moving from artistically created 3D content to realistically captured 3D content. From there, it’s moving out of the corporate world and into our everyday lives. Real estate groups, for example, are expanding that effort to include digital 3D walk-ups of homes for sale or rent. 3D data collection is running rampant in the best possible ways. Why? Because it’s easier and cooler to share information in 3D than in 2D with a voiceover. If a 2D picture is worth a thousand words, how much more is a 3D model of the same thing worth, given how much more sense our 3D-focused brains can make of it?
How much easier would it be to give someone specific directions if you could pop up a 3D map of the city on the restaurant table in between you and mark it out for them? How much more engaging would it be to explain to your grandmother how awesome your vacation was by literally showing her — by augmenting her living room with your favorite spatial memories? How much more productive and cost-effective would it be to work remotely with your international team as if you were all in the same conference room collaborating on your next big thing?
Spatial computing isn’t a brand-new thing; it’s a combination of several old ones, evolving alongside industrial and academic research in artificial intelligence and computer vision.
Spatial computing is what happens when we:
- Put a lot of tech puzzle pieces together to form a whole.
- Stop recording and sharing the world only in 2D; stop looking at the world through 2D screens and start using the world itself as our interface to map and share the information we can collect from it (in 2D, 3D, and any other format).
- Convert our information into 3D worlds of knowledge.
- Layer the digital directly onto the physical.
- Digitize time and space at an expanded rate, then use that data to augment our reality and make better sense of what is going on in it.
Curious to learn more? Stay tuned for more coverage of this space/time.