Earlier this week, I gave a talk at VRDC about the Four Futures of Entertainment and VR. While the talk was more theoretical than most of the others at the developer-heavy event, it did wind up with some practical tips for what exactly needs to happen to enable the fourth future of VR — a world where we seamlessly interact in virtual worlds and perhaps even partake in spaces that resemble Star Trek’s famous holodeck.
The first step is the improvement of artificial intelligence (AI) as it relates to natural language processing. IBM (with Watson) and Google are both leaders in the space, and as the technology evolves, more players will surely enter. These advances will allow people to have real “virtual” conversations — right now, while you can ask someone a question in VR and get a response, the response is programmed and the question needs to be asked a certain way to trigger it. In addition, the experience cannot account for any fluctuations in tone of voice, and those of us who communicate in the real world know how much the way someone says something reveals their intent and feeling.
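A minimal sketch of why today's scripted VR dialogue feels brittle: the question and the canned response here are hypothetical, but the mechanism — an exact-match lookup that breaks as soon as the question is reworded — is the limitation described above.

```python
# Toy illustration of scripted VR dialogue: responses are pre-programmed
# and only trigger when the question is phrased a specific way.

SCRIPTED_RESPONSES = {
    "what year did you cross the delaware": "December 1776, on Christmas night.",
}

def scripted_reply(question: str) -> str:
    # Exact-match lookup: normalize lightly, then look for the precise phrasing.
    key = question.lower().strip(" ?")
    return SCRIPTED_RESPONSES.get(key, "I don't understand the question.")

print(scripted_reply("What year did you cross the Delaware?"))
# A natural rewording gets the fallback, because nothing interprets intent:
print(scripted_reply("When was the Delaware crossing?"))
```

Real natural language processing replaces that lookup table with a model that maps many phrasings (and, eventually, tones of voice) to the same underlying intent.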
Once you can have an actual conversation in VR, a whole new world of communication opens up. A history class doesn’t need to be a series of lectures or films or books — rather, you can slip on a headset and ask George Washington directly about his experience. Of course, historical figures will still be limited by how much of a record of their life exists, but there’s also a massive opportunity to start creating a new and more immersive record in real time. In fifty years, students could have conversations with today’s public figures based on things they actually said at the time, rather than on what was recorded in letters.
You can also have conversations with real people, although things get a little trickier there. If you could call up a virtual version of your favorite musician, for instance, and have a chat with them every night, how does that blur the lines between fantasy and reality? It’s obviously fairly far off, but it represents something that developers need to think about.
The next step is the creation of photo-real avatars, something MacInnes Scott is working on. Right now, immersive VR is somewhat limited by game engine technology, and it’s hard to realistically interface with someone who looks like a cartoon rendering of an actual person. There will still be animated VR experiences, just as there are animated films and TV shows, but avatars will continue to improve until it feels like you are really interacting with another human.
Finally, eye-tracking as it relates to emotions will be needed to create natural interactions. Google recently bought Eyefluence, which works on just this thing, and it makes complete sense. Just as tone of voice conveys so much, where you look is often an indicator of true feelings. Looking directly into someone’s eyes as you talk is very different from staring over their shoulder, or at the floor.
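The eye-contact distinction above can be sketched in a few lines. This is a toy example, not any real eye-tracking API: the vectors and the angle threshold are illustrative assumptions, showing how a tracked gaze direction could be compared against the direction to a conversation partner's eyes.

```python
import math

# Toy sketch: is a tracked gaze vector pointed at the other person's eyes?
# The 5-degree threshold is an arbitrary illustrative choice.

def is_eye_contact(gaze_dir, to_eyes, max_angle_deg=5.0):
    # Angle between the gaze direction and the direction to the eyes.
    dot = sum(g * e for g, e in zip(gaze_dir, to_eyes))
    norm = math.sqrt(sum(g * g for g in gaze_dir)) * math.sqrt(sum(e * e for e in to_eyes))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= max_angle_deg

# Looking straight at the eyes vs. dropping the gaze toward the floor:
print(is_eye_contact((0, 0, 1), (0, 0, 1)))       # True
print(is_eye_contact((0, -0.7, 0.7), (0, 0, 1)))  # False (gaze dropped ~45 degrees)
```

An emotion-aware system would layer interpretation on top of signals like this — sustained eye contact, glances over the shoulder, a downcast gaze — rather than just rendering where the eyes point.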
The fourth future of VR — which, again, is only a temporary future — will come one day, hopefully soon, if advances in the above technologies can be made.
Thanks to Kevin Cornish of Moth+Flame VR for his suggestions and guidance for this piece.