Virtual reality brings some fantastic opportunities for people with disabilities. New experiences, therapeutic benefits, even accessibility for people who have better head control than hand control. But it also brings considerable new barriers, with great potential to lock people out from these benefits.
Some of the barriers that VR presents are unavoidable; there are people who will simply never be able to take part in VR as it currently exists, from people who will never be able to overcome simulation sickness to people who simply aren’t physically able to have a bulky device on their head.
But other barriers are avoidable, through the right design considerations - through accessibility.
It is still early days for this iteration of VR, so this post doesn’t aim to provide all of the answers or form a concrete set of guidelines. There will be issues that are not covered here, and there is still huge room for innovation, for discovering new and better design patterns.
It does however aim to ask some questions, point out a few VR-specific barriers to access, and show a few possible solutions, allowing you to include more people as a result.
Lord Wibbley, via Reddit
Simulation sickness occurs due to some of your senses telling your brain that one thing is happening, while other senses are busy telling your brain that something else is happening – a sensory mismatch between your visual system and your vestibular system. It is similar to motion sickness, but the opposite. Motion sickness occurs when your visual system says you are stationary but your vestibular system says you are moving. Simulation sickness occurs when your visual system says you are moving but your vestibular system says you are stationary.
Simulation sickness is the area of accessibility that has seen by far the biggest effort. That’s not really surprising, considering the role it played in the decline of the VR industry in the early 90s.
It is a prevalent and important issue. Some people will never be able to play due to it, regardless of how well designed the hardware or software is. Even just putting on a headset and looking around is enough to trigger simulation sickness in some people.
Some - but not all - people are able to adapt through acclimatisation. However I am sceptical about the degree to which that’s going to happen. There will be people who simply won’t go back for another try if their first experience is a bad one. The chance that it might not be so awful next time doesn’t seem like a compelling enough reason.
I’ve had an awful experience myself. I found myself in a situation where I was carrying out an accessibility audit on a VR game, and only had one session to cover everything in. After starting to feel the onset I had to force myself on through it for another half hour, to ensure all areas of the experience were covered.
Never do this!
As a result I was laid low for a whole two days afterwards, and still have ongoing bad effects from it over a year later. Over the past year I’ve regularly suffered from car sickness, something I hadn’t experienced since I was a child, and I now find fast camerawork in movies and playing FPS games difficult too, neither of which was a problem before.
I’ve seen some other reports of the same long term effects from a bad experience, although they’re obviously uncommon and extreme cases. But you don’t need an experience that bad to be put off.
And of course it’s not only important for the individual players’ experience. It’s important for the whole VR industry, from a PR angle. A well known example is the negative coverage of Resident Evil’s showing at E3 taking attention away from many positive stories.
For many people, simulation sickness can be avoided if the right design considerations are in place. The goal is to avoid your eyes giving you a sense of movement when your inner ear and so on are telling you that no movement is taking place.
Techniques include (but are not limited to):
There are many other considerations and a great deal of literature covering them, so to save repeating it all, here are a few good ones:
Some simulation sickness considerations can be implemented as an integrated part of the experience; others can be implemented through options. Options like these have been seen in the form of a comfort mode from very early on, such as in The Gallery: Six Elements, whose early comfort mode replaced free analogue rotation while seated with snapping to fixed 30 degree increments.
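As a rough sketch of how that kind of snap rotation works (all names and threshold values here are illustrative assumptions, not from any real engine API): stick input is sampled each frame, and the camera jumps between fixed orientations rather than sweeping through the angles in between, so the eyes never report motion that the inner ear can’t feel.

```python
# Illustrative sketch of snap ("comfort") rotation. Hypothetical names;
# values chosen for the example, not taken from any shipped game.

SNAP_INCREMENT = 30.0   # degrees per snap, as in The Gallery: Six Elements
STICK_THRESHOLD = 0.6   # how far the stick must be pushed to trigger a snap

class SnapTurner:
    def __init__(self):
        self.yaw = 0.0      # current camera yaw in degrees
        self._armed = True  # require the stick to re-centre between snaps

    def update(self, stick_x: float) -> float:
        """Feed the horizontal stick axis (-1..1) once per frame; returns yaw.

        The camera jumps instantly between fixed orientations instead of
        sweeping continuously through the angles in between.
        """
        if abs(stick_x) < STICK_THRESHOLD:
            self._armed = True          # stick re-centred, allow the next snap
        elif self._armed:
            self._armed = False         # one snap per stick push
            direction = 1.0 if stick_x > 0 else -1.0
            self.yaw = (self.yaw + direction * SNAP_INCREMENT) % 360.0
        return self.yaw
```

Requiring the stick to return to centre between snaps matters: a held stick would otherwise spin the camera continuously, reintroducing exactly the perceived motion the mode is trying to remove.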
CloudHeadGames, via Reddit
However, bear in mind that there will be plenty of people who simply aren’t aware that they would need something like a comfort mode. So it is best to have it turned on by default, and to allow players who feel comfortable with more sickness-prone methods of movement to turn those methods on if they prefer the immersion that they bring.
Dan Crawley, via venturebeat.com
Simulation sickness has been a big issue in gaming for a long time outside of VR, in first and third person games in particular. Things like good default and configurable FOV, and toggles for motion blur and weapon bob, can make a huge difference.
Eric Qualls, via about.com
Despite its prevalence and its game-breaking effect, awareness amongst developers has remained consistently low. My hope is that game developers will take what they are learning in VR about visual/vestibular mismatch and apply it outside of VR too. If VR can drive greater awareness throughout the whole of the industry that would be a great thing, making gaming a much more enjoyable experience for many more people.
Johanna Roberts, via YouTube
Motor accessibility in gaming has traditionally meant the ability to operate a controller. Aside from a few gesture-related games, this has meant that the motor requirements of gaming have centred on your hands and arms.
With VR, the range and complexity of motor ability required in order to participate has increased considerably. Particularly with this current generation of bulky headsets. There are promising hardware developments, such as eye tracking enabled VR, but not yet with mass market commercial availability.
The types of motor ability that can come up against barriers in VR include:
Strength/fatigue – the ability to carry an unsupported weight on your head or in your hands, or to carry out repeated actions, and for how long
Range of motion – how far a head or a hand can be moved in any direction, and how far fingers can move too, particularly with controllers that place buttons on a range of sides, and controls located directly on the headset.
Accuracy – ability to make small, smooth or precise movements
Height – a wide range, including sitting in a wheelchair
Locomotion – ability to walk, lean, duck or kneel
Presence of limbs and digits – not everyone has two working hands with ten working fingers
Speed – ability to complete a task within a set time frame
Balance & balance impairment – particularly with older players
Orientation – forwards in real life might not always be forwards in-game, for example if someone can only play while lying down
All of these pose obvious challenges. Some barriers that VR poses for motor ability are insurmountable, but most are not. There are some really obvious solutions, and some more interesting ones too: offering a choice of input methods, configurable head height, and options for how to navigate the environment and for the range of motion required. If your concept really does hinge on any of those things, that’s fine; there’s nothing wrong with that. Games by definition need to have some kind of barriers, and any barrier is exclusionary. It’s about analysing the barriers in your mechanic, working out which ones are necessary and which aren’t, and optimising the experience to be as enjoyable as possible for as many people as possible.
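To make one of those options concrete, configurable head height can be as simple as calibrating the player’s real eye height once and then applying a constant vertical offset to the tracked position, so that a seated or wheelchair-using player occupies the in-game height the experience was designed around. A minimal sketch with hypothetical names:

```python
# Hypothetical sketch of configurable head height: a constant vertical offset
# between the player's calibrated real eye height and the in-game target.

class HeightAdjuster:
    def __init__(self, calibrated_eye_height: float, target_eye_height: float):
        # Heights in metres; calibration done once, e.g. from a settings menu.
        self.offset = target_eye_height - calibrated_eye_height

    def adjust(self, tracked_position):
        """Apply the offset to a tracked (x, y, z) head position."""
        x, y, z = tracked_position
        return (x, y + self.offset, z)
```

Head movement is still tracked 1:1; only the baseline shifts, which keeps the visual/vestibular relationship intact and so shouldn’t add any simulation sickness risk.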
A number of people are already doing excellent work in this area, which is encouraging to see at this stage of the industry:
Again, while the approaches taken above are fantastic, they’re still only first stabs. There’s huge opportunity for innovation and figuring out new ways to avoid unnecessary exclusion.
As is the case outside of VR, the key thing is offering options. For example, the option to use a standard controller instead of hand tracking or walking is useful not only for people who don’t have the accuracy and range of motion required for a full Vive-style experience, but also for people who are more prone to injury, a risk increased by not being able to see the angle and position of your limbs. The initial set of PSVR games do a great job of offering a choice between motion controls and a traditional controller.
Choice between standard controller and motion controllers in The London Heist
It is important to recognise that what you see as the optimal way to experience the game isn’t the only way to experience it, and that someone being able to experience 80% of your vision is far better than them being able to experience 0% of it.
Even if your vision is one of room-scale VR with 360 degree head movement and full hand tracking, that person who’s playing without any horizontal head movement or locomotion at all, just using one stick and two buttons on a controller, may still actually be the person who gains more from the experience than anyone else.
Sound is an important part of the VR experience, but one that has a rigidly enforced format. Even with top notch 3D/binaural sound, the delivery mechanism is always the same – stereo headphones. Unilateral hearing loss (difficulty hearing that affects one ear more than the other) is common, with obvious implications for enforced and rigidly separated stereo.
khannr2, via reddit
Mono toggles (playing both stereo channels through both ears) are a highly valued feature outside of VR, but they do mean that you lose any sense of direction.
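The downmix behind such a toggle is trivial. A minimal illustration (not production audio code), assuming floating point sample buffers:

```python
# Minimal sketch of a mono downmix: average the left and right channels so
# that each ear receives the full mix. Halving the sum keeps samples in
# range, at some cost in loudness.

def downmix_to_mono(left, right):
    """Return (left, right) output buffers carrying the same mono mix."""
    mono = [(l + r) / 2.0 for l, r in zip(left, right)]
    return mono, mono
```

The point is less the arithmetic than where it sits: applied as a final mix stage, a single toggle covers every sound in the game at once.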
In real life unilateral hearing loss is in part mitigated by echo, for example someone who is deaf in their left ear still being able to hear a little of sounds coming from the left when they echo back off surfaces to the right. That might be an interesting area to pursue, for game engines in particular. There is some interesting work being done on more accurate audio representation, most recently in Gears of War 4, but there’s still huge scope for innovation.
But the really big issue for hearing loss and VR is currently subtitling (textual equivalent for speech) / captioning (textual equivalent for other important sounds). Namely the lack of it.
Karmagon, via Reddit
A feature that is so close to standard throughout the games industry is now conspicuously missing from so many VR games, with obvious dire implications both for hearing-related accessibility and for localisation costs. It’s an important lesson in not taking accessibility for granted; things can so easily slip backwards like this.
So why has this happened?
There seems to be a pretty simple answer. Captions in other media aren’t a big design issue, you put them across the bottom of the screen. But in VR, there is no bottom of the screen. So that presents a design challenge.
We could have them floating across the bottom of the player’s vision. To do so they need to be right up close to the player to avoid occlusion issues with objects in the environment.
Subtitles floating in front of the player in The Vanishing of Ethan Carter
This however presents an issue: a phenomenon called vergence-accommodation conflict, explained here with far more eloquence and detail than I could manage by Adrienne Hunter.
So with the floating option pretty much out of the window due to eye strain, headaches and nausea, what else is there? Another approach is to make subtitles/captions contextual, attaching them to the source of the audio, which most commonly means a character in the game.
There are no issues with vergence-accommodation conflict here. But then you run up against another problem: what if you’re looking the other way? You have no way of knowing that someone is speaking, no reason to turn around to read the text. So you end up with a system that is far, far inferior to systems outside of VR, meaning this isn’t an ideal solution either.
And that often seems to be about as far as the design consideration gets - no obvious solution, so just leave them out.
It’s not an unsolvable problem though. There are answers, such as the following early prototype of VR subtitling from Melbourne dev Joe Wintergreen. It combines the two approaches, contextually positioned when within the player’s view, but snapping to a fixed point when outside of the player’s view, allowing them to then choose whether to turn to look at the source.
As with the motor accessibility work this is just an early stab, and there’s plenty of room for innovation. Simple enhancements would be adding in the speaker’s name for when more than one audio source is present at the same time, giving some indication of what direction the sound is coming from (aligning to the left/right of the view, or adding arrows), and following some of the usual best practices for subtitle design, particularly paying close attention to size and contrast.
But this is the most promising approach that I’ve personally seen, avoiding vergence-accommodation conflict and making the text available even when the speakers aren’t visible.
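That combined behaviour can be sketched in a few lines. Everything here is hypothetical (not from the prototype itself), including the assumed 90 degree field of view: the caption follows the speaker while they’re in view, and otherwise snaps to a fixed anchor along with a hint of which way to turn.

```python
import math

# Hypothetical sketch of the hybrid caption approach: contextual placement
# while the speaker is visible, a fixed anchor plus turn hint when not.
# Angles are in degrees; the half field of view value is an assumption.

VIEW_HALF_ANGLE = 45.0

def caption_anchor(view_yaw, speaker_yaw):
    """Return ('speaker', offset) when the speaker is within view, otherwise
    ('fixed', side) where side is -1.0/+1.0 hinting which way to turn."""
    # signed shortest angle from view direction to speaker, in [-180, 180)
    delta = (speaker_yaw - view_yaw + 180.0) % 360.0 - 180.0
    if abs(delta) <= VIEW_HALF_ANGLE:
        return ("speaker", delta)
    return ("fixed", math.copysign(1.0, delta))
```

The wrap-around handling is the only subtle part: the modulo keeps the comparison correct when, say, the player faces 350 degrees and the speaker sits at 10.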
And of course speech is just part of the audio equation. As with non-VR gaming, but again particularly important here because of the enforced stereo, don’t rely on audio alone to communicate important information. Using multiple cues, i.e. communicating through visuals as well as sound, not only makes the information available to people with hearing loss, it’s also often just good design for all players, offering extra reinforcement and quicker recognition.
Cathy Vice, IndieGamerChick.com
Photosensitive epilepsy is rare, but still a critically important area of accessibility, because although it only accounts for a fraction of cases of epilepsy, the results are severe. Generally accessibility is about avoiding unnecessary bad experiences or exclusion, or, in the case of simulation sickness, some pretty extreme discomfort. But epilepsy results in very real physical harm.
For this reason the manufacturers go to some extreme lengths to discourage people with epilepsy from using VR equipment. But here’s the thing – there’s only one way that you discover that you have epilepsy, which is by having seizures. Ubisoft’s mandatory in-house epilepsy testing was brought in as a result of a child having their first ever seizure while playing Rayman Raving Rabbids.
It’s important to note that games do not give you epilepsy, but they can trigger seizures, including in people who have a predisposition but have never had a seizure before and have no idea that they are at risk. For example, the National Society for Epilepsy estimates that every year in the UK alone 150 people have their first seizure while playing games.
There is no research yet on the impact of VR on seizure likelihood. But a factor in being triggered is the proportion of your field of view that is taken up by a visual effect. When watching a TV, there’s a hard limit on how much can be taken up, as the screen only takes up a relatively small portion of your vision. But in VR, the proportion of your vision that can be taken up by a trigger flash/flicker/pattern can be 100%.
A full tonic-clonic seizure (loss of consciousness followed by muscle spasms) while you’re wearing bulky equipment strapped to your eyes and a cable dangling around your neck doesn’t sound like a good combination.
While it’s impossible for any game to be epilepsy safe, there’s a standard set of common triggers to keep in mind to reduce seizure risk, relating to two things – flickering/flashing, and high contrast repeated patterns. Some companies simply don’t allow them in any game, others provide options to disable them. For some triggers there are thresholds based on the amount of time they last for or how much of the screen they take up.
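The flicker-related thresholds lend themselves to automated checking. The simplified sketch below flags content whose luminance flashes more than three times in any one-second window, the limit used in broadcast and WCAG guidance. It is a heuristic only, not a substitute for dedicated analysis tools, and the parameter values are my assumptions (real guidance also factors in screen area and red flashes, which this ignores).

```python
def count_flashes(luminance, min_delta=0.1):
    """Count flashes in a per-frame luminance sequence (values 0..1).

    A transition is a luminance change of at least min_delta; a flash is a
    pair of opposing transitions, e.g. a brightening then a darkening.
    """
    transitions = []
    for prev, cur in zip(luminance, luminance[1:]):
        d = cur - prev
        if abs(d) >= min_delta:
            sign = 1 if d > 0 else -1
            # merge consecutive same-direction changes into one transition
            if not transitions or transitions[-1] != sign:
                transitions.append(sign)
    return len(transitions) // 2

def flash_rate_risky(luminance, fps, max_flashes_per_second=3):
    """True if any one-second window exceeds the allowed flash count."""
    window = int(fps)
    for start in range(max(1, len(luminance) - window + 1)):
        if count_flashes(luminance[start:start + window]) > max_flashes_per_second:
            return True
    return False
```

A check like this could run over captured footage during QA, flagging sequences for a designer to review rather than making any safety guarantee itself.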
But until there’s some good research available, because of the potentially higher risk and the potentially more serious injury implications, it seems sensible to err on the side of caution: drop the idea of options for now and simply avoid all of the common triggers all of the time, regardless of how much of the screen they take up.
Oculus take a very clear standpoint on it –
Small text and UI size is a common complaint across gaming in general, with games rarely meeting the 28px@1080p minimum for 20/20 vision, let alone any degree of vision loss. Players are often left with one simple workaround – sit closer to the screen.
In VR sitting closer to the screen obviously isn’t possible, and nor is playing with your entire display zoomed in - at least not without greatly increasing simulation sickness risk. So this translates into two things:
Leaning in closer to a menu in Crystal Rift
These kinds of considerations are also helpful for people who would otherwise be able to see fine, but are using a headset that isn’t compatible with wearing glasses. This is a really common issue, and one that is unique to VR. Small text size in particular is one of the most common accessibility complaints in games in general, let alone when you’re unable to use your glasses.
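One way to reason about text size in VR is to translate the 28px@1080p rule of thumb into a visual angle, and from there into world-space units. Assuming (my assumption, not a standard) that a 1080p screen viewed at a comfortable distance spans roughly 40 degrees of vision, 28px works out at about one degree, which gives a minimum world-space height for text at any distance:

```python
import math

# Sketch built on an assumed viewing setup: a 1080p screen spanning ~40
# degrees of vision makes the 28px minimum roughly a 1 degree visual angle.

MIN_TEXT_ANGLE_DEG = 28.0 / 1080.0 * 40.0  # ~1.04 degrees

def min_text_height(distance_m):
    """Minimum world-space text height in metres at the given distance."""
    return 2.0 * distance_m * math.tan(math.radians(MIN_TEXT_ANGLE_DEG / 2.0))
```

By that estimate, text two metres away needs to be several centimetres tall, and the requirement grows linearly with distance, which is also an argument for placing important text on surfaces that players can lean in towards or bring closer.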
Crosshairs, or the option to turn crosshairs on, can also be useful. As well as providing a frame of reference for simulation sickness, they’re also useful for people with a greater level of vision loss in one eye than the other, who then end up looking slightly off-centre when in VR. And of course as this relates to people with impaired vision, it is also useful to offer options for the size and design of the crosshair.
And there’s also the far end of the scale, full blindness. Blindness + VR might seem like a non-starter, on the assumption that the point of VR is it being a visual medium. But actually VR games based solely on audio do exist, and blind mainstream gaming is also a thing. There are even solid bases of blind gamers playing Grand Theft Auto V and Resident Evil 6, due to a combination of assists (for example getting in a car by pressing the button anywhere in the general vicinity of the car, and auto aim, just having to face in the general direction of a sound and hit lock on), excellent and detailed sound design, and simple environments.
The breakthrough with GTAV was the introduction of first person mode, a seemingly trivial thing but one that meant people reliant on sound alone could tell where sounds were relative to themselves at all times. With VR, the combination of full head tracking and the advances being made in immersive 3D sound can amplify that ability, particularly as VR environments are often straightforward to navigate, with comfort modes, teleportation-based movement and so on. Hand tracking helps too: being able to point directly at the source of a sound, rather than having to line up a crosshair that you can’t see.
There are some pretty strong limitations, complexity of environment being a big one and UI interaction being another. But it’s worth thinking about, and worth speaking with blind gamers about if you think there might be any possibilities (just searching for ‘blind gamer’ on Twitter will find some). With the right mechanic, you might be surprised by just how much can be done for relatively low effort.
One last thing that’s worth bearing in mind for vision is a condition that is uniquely relevant to VR (and AR) – stereoblindness. Most of the time the inability to see in stereo won’t be an issue, particularly if you take into account the point above about crosshairs, but it’s worth bearing in mind if you have anything that unnecessarily relies on being able to perceive differences in depth. An obvious reason for not being able to see in stereo is reduced/no vision in one eye, so always ensure elements such as captions are displayed across both screens.
There is another very interesting aspect to stereoblindness, which is that being stereoblind in real life doesn’t always mean you will be stereoblind in VR, as in this moving story, and this one, with some interesting consequences.
As mentioned at the start, this isn’t intended to be a comprehensive list of all barriers or all solutions. And there are of course still all the other general accessibility good practices for game development, many of which are still relevant to VR. But hopefully the above is enough to get a few people thinking. And if something is considered in the early days that we’re in now, it’s easier for the solution to become a widespread design pattern.
It’s important to remember that even if a number of the barriers above are ringing bells for your own game, there’s nothing fundamentally wrong with your design. You’re still building something that people will love. It’s a question of expanding that out, increasing the number of people who are going to be able to enjoy the experience, and keeping an open mind about what forms that enjoyment can take.
I’m yet to meet someone working in VR who isn’t a passionate evangelist for the medium, wanting to share their enthusiasm with others and bring more people on board. So many of the games being worked on at the moment will be people’s first experiences of VR, and if we can start cracking some of these issues now, many more of those first experiences will be everything that people are hoping for.
Republished from ian-hamilton.com/blog. Includes input from Barrie Ellis (OneSwitch), Tara Voelker (Gaikai), Jesse Anderson (IllegallySighted), Joe Parlock (LetsPlayVideoGames), Brian Van Buren (Tomorrow Today Labs), Adrienne Hunter (Tomorrow Today Labs), Andrew Normand (University of Melbourne), Kimberly Voll (Riot Games), and Katie Goode (Triangular Pixels)