Over the last weekend I took part in the VR Game Jam in Copenhagen, Denmark. To the best of my knowledge this is the biggest on-site VR game jam in the world. Thanks to the organisers' strong industry connections, the lineup of speakers was excellent. This article is a quick summary of current challenges in VR, based on how they manifested during the jam and in the presentations leading up to it.
Given the novelty of working VR systems, it is no surprise that the design space is still very much unexplored. Game jams are an excellent opportunity to experiment, yet a lot of people – including me – need to become much more familiar with the hardware and the software before they can design more than simplistic experiences. Most game engines offer more or less stable support for VR equipment by now. It is easy to get started, but a number of issues manifest once developers start to work on actual projects. A simple example: if a scene plays at night, the difference in brightness between the head-mounted display and a computer monitor means that lighting can only be calibrated on the actual device; since both views render the same lighting, the monitor shows little more than a pitch-black image.
Designing with consideration for bodily reactions is another design challenge that is unexpected. VR tricks the brain, and it does so in a very unsubtle way. Designers have to be constantly alert to the fact that they are much closer to the player than in a traditionally presented video game. At the same time, people differ in how they react to visual stimuli. What makes one person slightly uncomfortable might be completely nauseating for the next. There is a rough understanding of what is acceptable to a wide variety of players, but in the actual implementation a lot depends on details. One group at the VR Game Jam made a game to be played in an elevator. When you went up in the physical world, you also went up in the virtual one. As an experiment, the group also tried going down virtually while you went up in reality. Surprisingly, there was a significant difference between ascending in the real world while descending virtually and the reverse. The experiment highlighted how much we do not understand about the interplay of the sense of balance and visual perception.
Our own team created a very simple scene – more an experience than a game. Similarly, most of the realised projects were limited in their ambition when it came to game design. The Crescent Bay demo presented by Oculus was all but non-interactive. In our case, we did not find a suitable input device to realise our initial idea for interaction. I wonder if it was the same for Oculus.
I have to admit that I have tried the Razer Hydra but never had the opportunity to test Valve's HTC Vive controller. It may well be that this problem is already solved; if that is your opinion, please skip the next paragraph.
Just as the mouse redefined how we work with computers, a dominant input method will emerge that defines how we interact with VR objects. The requirements are clear: the controller has to work accurately in 3D space and feature minimal latency. In a perfect world, the VR setup would not require any adaptations to the living room. Sadly, all existing accurate position-tracking systems require cameras in the room. Now what if the controller – and the headset – featured a camera and could accurately create a representation of the space around it?
Samsung's GearVR is an interesting proposition in that it requires no external peripherals or host computers. Its touchpad and motion sensors are a lightweight solution that fulfils the minimal requirements excellently. With the iPhone having increased more than 20x in performance over the last five years (see e.g. Geekbench), mobile compute is a problem that will be solved in the near future. Whether GearVR or Google Cardboard, turning your personal mobile phone into an ad-hoc VR device is a prospect that leads me to the next challenge VR is facing.
When home computers first became popular, they were not, in themselves, very social devices. We made them social by gathering around them and sharing the experience of playing. VR is prohibitive in that the experience is not shareable in the real world; only in the virtual world can we gather and play together. Seeing how mobile phones are used for virtual as well as on-site socialising (sharing a headset, playing games in hot-seat mode), it is clear that a technology denying this ability will face challenges when it comes to adoption. Making the first-person experience visible to people in the same room will be crucial for the network effects of the technology.
Jed Ashforth from Sony mentioned that his company is aware of this challenge and is actively working on techniques to mitigate it. One solution is to have the screen show the game scene from the viewpoint of the player. I would find it even more interesting if the screen showed the scene from a different angle than the player's, allowing for interesting asynchronous multiplayer gameplay. I'm sure in time people will come up with excellent game design ideas for this situation. Apropos excellent design ideas: developing for VR is expensive. The reason why so many indie games are 2D is economic. Most small studios cannot afford to create three-dimensional worlds. Even where 3D graphics are achievable, players would unfavourably compare indie games to AAA games, unless indie developers go for a unique art style. I'm personally curious how indies will solve this problem. What genres will they adopt? Maybe they will build virtual tabletop games.
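The spectator-screen idea mentioned above boils down to rendering the same scene state through a second camera each frame. As a minimal sketch of the underlying math – all names and poses here are illustrative, not taken from any particular engine – the same world can be viewed through two look-at matrices, one driven by the head-tracked pose and one fixed somewhere in the room:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a 4x4 row-major view matrix for a camera at `eye` looking at `target`."""
    f = normalize([t - e for t, e in zip(target, eye)])  # forward
    s = normalize(cross(f, up))                          # right
    u = cross(s, f)                                      # true up
    return [
        [s[0],  s[1],  s[2],  -dot(s, eye)],
        [u[0],  u[1],  u[2],  -dot(u, eye)],
        [-f[0], -f[1], -f[2],  dot(f, eye)],
        [0.0,   0.0,   0.0,    1.0],
    ]

# Same scene state, two viewpoints per frame:
player_view = look_at(eye=(0.0, 1.7, 0.0), target=(0.0, 1.7, -5.0))    # head-tracked pose
spectator_view = look_at(eye=(4.0, 3.0, 4.0), target=(0.0, 1.0, 0.0))  # fixed room camera
```

In an actual engine this amounts to little more than a second camera object rendering to the monitor while the head-tracked camera renders to the headset; the interesting part is the game design built on top of that asymmetry.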
Virtual tabletop games – where you assume a third-person view instead of experiencing the world in first person – will be the new 2D game. Technically, I can also see 2D games working quite well when projected onto a virtual screen at simulated cinema distance. Still, the virtual tabletop Sim City-like scene in the Crescent Bay demo left me with the strongest impression. The tiny citizens of that roughly table-sized three-dimensional world felt more alive than the T-Rex in the scene before. I could feel more empathy for them than for the alien I encountered on a barren planet or the robots fighting each other with magic batons. This one scene demonstrated to me how much impact VR can have – and how huge the design challenge is. Let's work on it.