From Star Wars to The Matrix to Blade Runner, new technologies mean that sci-fi scenarios may not be as far-fetched as we once imagined. The New Realities – including virtual reality (VR), augmented reality (AR), and mixed reality (MR) – mean that we can ‘holoport’ ourselves across geographic locations, use our brains to interface with computers, and combine these technologies with AI to take them into new dimensions.
Many are already familiar with AR, which uses a smartphone, tablet or smart glasses to overlay digital 2D content onto physical objects, people and environments. Head-Up Displays (HUDs), or HUD glass, are computer-augmented wearables or interface screens that present information, images, and data within a user’s focal viewpoint – as seen on some car windshields, where they can be activated by voice to provide local information and directions.
VR headsets completely immerse the senses of sight and hearing into an artificial, 3D, digital environment. The stereoscopic images trick the brain into believing the simulation exists tangibly and images instantly adjust based on head position and movement, as they would in real life. It is worth noting that 360° video isn’t the same as VR, even though it’s viewed through a VR headset. 360° video is actual footage of a real place and time that offers surround viewing, but is not stereoscopic and has limited interaction.
Mixed reality, or MR, is sometimes classified under AR, but it has differentiating features that are key to why it is likely to become the most important of the new realities. MR is characterised by interactive 360°, 3D imagery, or virtual 3D holograms, overlaid onto real environments. People can still see their natural surroundings, only with additional layers of holographic digital content that is shareable and interactive. MR gives us the ability to ‘holoport’, sending 360°, 3D holograms of ourselves to other parts of the world to communicate in real time as if we were all in the same room, no matter where we are physically.
All of the content of the new realities lives in the metaverse. Meaning ‘beyond universe’, the metaverse is the name of the virtual digital landscape onto which the new realities are projected. Forget about going to Mars: the metaverse is an infinite blank canvas just waiting to be populated. Accessed only through AR, VR and MR, it has unlimited real estate, no geographic borders, and at present, no regulations beyond self-governance and societal acceptability.
5G is having a significant impact on these new technologies, enabling higher image quality and lower latency that improve the overall user experience. We already have the ability to socialise across geographic locations in the metaverse, and now we are seeing the rise of ‘X-commerce’, or experience commerce, where the new realities are disrupting both retail and advertising. In the metaverse we can go shopping with a friend from Hong Kong and a friend from California in real time, ‘try on’ clothes from different stores together, have a gamified retail experience, and actually feel fabrics and textures with haptic technology, all from the comfort of our own homes.
The convergence between the new realities and artificial intelligence (AI) is already beginning as headsets like Microsoft’s HoloLens 2 are starting to be designed to include custom AI chips that give the technology new capabilities. From instant translations to more voice-activated services, AI can help headsets and the metaverse learn customer preferences, customise suggestions, and anticipate needs.
As wearable computers, headsets integrate into the internet of things (IoT). Smart headsets can connect consumers to all the other smart items in the home, allowing you to control the physical world from within the virtual one.
VR and AI are also converging in training applications to generate scenarios based on real experiences and data. Virtual customer service agents in the metaverse will leverage AI to respond to customer inquiries and issues, and virtual companions will also come to exist. When integrated with biosensors, AI may determine when new realities users need to take a break or have a drink of water, even pausing the experience as a result. Using a person’s bio-data – such as heart rate, blood pressure, sweat response and brainwaves – AI algorithms can generate completely new and individual experiences for users in the metaverse. This means that no two people would have the same digital reality experience.
To make virtual experiences indistinguishable from the real world, some believe the next step of human evolution is integrating these technologies into the brain. Neuroreality refers to brain-computer interfaces (BCIs) – the use of brainwaves to generate and navigate experiences through thought alone, processing both conscious and subconscious responses. Companies like Elon Musk’s Neuralink and Boston-based start-up Neurable are already leaders in this area, whether through implants or external electrodes. Soon we may all be able to turn parts of the brain on and off like we would a mobile phone. One thing we’ll have to start thinking about is the fact that BCIs and neuroreality will challenge basic human rights, as thoughts will no longer be private, and people could become vulnerable to exploitation in new ways.
These technologies are changing and challenging the ways we communicate, collaborate, socialise, market, and learn. They have the potential to democratise training, skills and experiences, similar to what the internet did for sharing information. Yet innovation always brings new and unforeseen risks, from mental and physical implications, to new dangers around data and personal privacy. While the new realities offer plenty of new commercial opportunities, these technologies also have the power to change us individually and as a society. It’s on all of us to make sure we develop these technologies and apply them responsibly.