AR in Action: Changing the Way We Interact with the World

Posted By: Katelyn Ruwe

At the MIT Media Lab this past month, AR in Action hosted a range of speakers, from developers to futurists. In the demo area, presenters showed off impressive and unique AR experiences, ranging from video games to an AR sound experience. In the main hall, speakers from Microsoft, Magic Leap, and other major players in augmented reality shared their visions for the future of AR.

While many speakers made forecasts about what we can expect from AR as the technology progresses, a few key points stood out to me:

Out with smartphones, in with AR: We have all heard the complaints about people walking while staring at their devices or hunching over their smartphones. While these complaints may seem trivial, the so-called “iHunch” actually points to a fundamental weakness in our current technologies. Thus far, humans have had to conform to new technology, whether by hunching over computers and phones or by communicating through typing rather than speech. By converging the physical and digital worlds, AR will conform to us, letting users take in digital information the same native way they take in the physical world.

Empathetic technology: This native, human-centric way of interacting with technology will lead to more empathetic communication and collaboration. AR breaks people and objects out from behind 2D screens, allowing for more genuine and productive interactions. AR can also be combined with wearable sensors like the Empatica E4 to share exactly what users are seeing, hearing, and feeling.

While the potential of AR is at an all-time high, the technology still has some limitations, such as:

Headsets: As Jacob Lowenstein put it, “no one wants to wear a computer on their head.” The AR experiences we see in movies like Kingsman and Minority Report are not too far away from a software perspective, but the hardware is trickier. The current state of AR requires users to either hold their phones in front of them or don bulky headsets, though the hardware continues to improve with each new iteration.

Computer vision and graphics: AR is already good at recognizing horizontal planes and placing virtual objects on them, but it still struggles to recognize and interact with the rest of the physical environment. Some common limitations are listed below, with a short code sketch after the list:

Occlusion: It is currently a challenge for AR to recognize when a physical object should block a digital object from view. Refining occlusion will allow for more realistic AR experiences.

Depth: Right now, most phone cameras struggle to sense depth, which prevents them from building a truly accurate map of a user’s physical surroundings.

Lighting: AR cannot yet fully sense and recreate the physical world’s lighting. These lighting mismatches are a particular issue for consumer AR applications, as they limit the ability to realistically render a digital object in a physical space.
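To make the plane-detection strength (and the lighting gap) concrete, here is a minimal sketch using Apple’s ARKit and SceneKit. The conference demos did not share code, so this is purely illustrative; the class name and visualization choices are my own assumptions. It detects horizontal planes, overlays a translucent quad on each one, and prints the coarse ambient light estimate, which is roughly the extent of the lighting information current mobile frameworks expose:

```swift
import ARKit
import SceneKit
import UIKit

// Illustrative sketch, not from the conference: detect horizontal planes
// and read ARKit's coarse ambient light estimate.
final class PlaneDemoViewController: UIViewController, ARSCNViewDelegate {
    private let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Horizontal plane detection is the most mature part of the stack.
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = .horizontal
        sceneView.session.run(config)
    }

    // Called whenever ARKit recognizes a new horizontal surface.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

        // Visualize the detected plane as a translucent quad.
        let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                             height: CGFloat(planeAnchor.extent.z))
        plane.firstMaterial?.diffuse.contents = UIColor.cyan.withAlphaComponent(0.3)
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles.x = -.pi / 2  // SCNPlane is vertical by default
        node.addChildNode(planeNode)

        // Lighting is the weak spot: the framework supplies only a coarse
        // ambient estimate, not a reconstruction of the scene's light sources.
        if let estimate = sceneView.session.currentFrame?.lightEstimate {
            print("Ambient intensity: \(estimate.ambientIntensity) lumens")
        }
    }
}
```

Google’s ARCore exposes equivalent concepts (Plane, LightEstimate) on Android, so the same strengths and limitations apply across platforms.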

It is becoming increasingly clear that AR has the potential to radically change the way we interact with the world around us. At the conference, it was amazing to see how quickly the technology is evolving. AR continues to prove that it is more than a gimmick, and after attending AR in Action, I am excited to see what else 2018 will bring.

Courtesy: PTC Blogs