by John Kolencheryl, Tech Director of I Expect You To Die, Schell Games
I Expect You To Die (IEYTD) is an escape-the-room style puzzle game in which you step into the shoes of a super spy with telekinetic powers. This blog post will focus on how the engineering team tackled some key challenges in VR to create a unique and immersive world of espionage.
The biggest feature change for IEYTD since the release of our Oculus Share demo was the integration of hands. The game was originally built for the mouse only, a control scheme we still support and are proud of today. However, after we got our first Oculus Touch hardware and did some early prototypes, we were convinced that this was how the game was meant to be played. It made the players feel more powerful and elevated their sense of presence.
In IEYTD, the core idea is that the player interacts with objects in the environment to solve puzzles. With the mouse, you are constantly using telekinesis, during which you can aim at objects using a reticle and interact with them. This mode allows players to interact with objects that are farther away from them. As we transitioned to hands, we did not want to lose this ability, but at the same time, we wanted players to experience the beauty of holding an object in VR. As a result, we came up with two modes for hands: Local Grab Mode and Telekinesis Mode.
The ability to grab objects using virtual hands is a powerful experience, and supporting this mechanic was crucial to IEYTD’s gameplay. In the game, the player’s virtual hands are represented as a pair of spy gloves along with animations to show different hand poses. In this mode, when the player grabs an object, the hands fade out completely. This is done for a couple of reasons. The first, and more obvious reason, is scope. There is a large variety of objects that can be picked up and manipulated in IEYTD, so creating a unique animation and grab pose for each of them was not feasible. Secondly, the way an object is held varies from person to person and with the context in which it is used.
As the player’s hands explore the environment, the objects that are “grabbable” become highlighted. We predict this with the help of a trigger volume and a raycast from the player’s head to their hand. The trigger volume maintains a list of grabbable candidates, and the raycast helps us select the object that is closest to the player’s head. The assumption we make here is that the grabbable candidate closest to the player’s head is the one most likely to be interacted with.
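The candidate-selection heuristic above can be sketched in a few lines of engine-agnostic Python. This is an illustration, not the actual IEYTD (Unity) code: the trigger volume is assumed to have already produced the candidate list, the raycast is approximated by an aiming cone around the head-to-hand direction, and the function and parameter names are invented for this example.

```python
import math

def select_grab_candidate(head, hand, candidates, max_angle_deg=25.0):
    """Pick the most likely grab target from trigger-volume candidates.

    Mirrors the heuristic from the post: aim a ray from the player's
    head through their hand, keep candidates near that ray, and choose
    the one closest to the head. The cone angle is an illustrative guess.
    """
    ray = tuple(h - p for h, p in zip(hand, head))
    ray_len = math.sqrt(sum(c * c for c in ray))
    if ray_len == 0.0:
        return None
    ray = tuple(c / ray_len for c in ray)

    best, best_dist = None, float("inf")
    for name, pos in candidates:
        to_obj = tuple(p - h for p, h in zip(pos, head))
        dist = math.sqrt(sum(c * c for c in to_obj))
        if dist == 0.0:
            continue
        # Angle between the head->hand ray and the head->object direction.
        cos_a = sum(r * t for r, t in zip(ray, to_obj)) / dist
        if cos_a < math.cos(math.radians(max_angle_deg)):
            continue  # outside the aiming cone approximating the raycast
        if dist < best_dist:  # closest-to-head candidate wins
            best, best_dist = name, dist
    return best
```

In-engine this would be a physics raycast against the candidates rather than a cone test, but the tie-breaking rule is the same: among everything the hand could plausibly mean, prefer the object nearest the head.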
The TK mode is our solution to allow players to interact with objects that are not within their reach, allowing our puzzles to be more flexible and making it possible to create more diverse environments. It also solidifies the game’s super spy theme because it adds an additional “spy-like” element to the players’ arsenal of tools.
For TK mode, our goal was to translate the reticle-based interaction from the mouse controls to the hands. To achieve this, we parent a reticle to each hand and assign it a forward vector. The reticle is then positioned in 3D space along this vector using a raycast against the level’s collision geometry. This implementation worked in most cases; however, there were a couple of undesirable consequences.
The first one was that the reticle moved considerably. Since it was parented to your hand, the slightest hand movement would cause it to move. At greater distances, this movement was magnified, and ironically, made it difficult to accurately target far away objects. To address this issue, we add a “smoothed reticle” that follows the original reticle using a damping curve. The amount of smoothing applied is inversely proportional to the displacement of the original reticle. As a result, small displacements of the reticle are smoothed out and large displacements are immediate.
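One way to realize this displacement-dependent damping is a blend factor that scales with how far the raw reticle jumped each frame. The sketch below is a minimal, one-dimensional Python illustration (the in-game version would operate on 3D positions, and the constants here are guesses, not shipped values):

```python
def smooth_reticle(smoothed, target, snap_distance=0.5, min_alpha=0.08):
    """One update step for the damped reticle.

    The blend factor grows with the raw reticle's displacement: tiny
    hand jitter is heavily damped, while a displacement at or beyond
    `snap_distance` tracks immediately. 1D for clarity; use vectors
    in-engine. Both constants are illustrative.
    """
    displacement = abs(target - smoothed)
    alpha = min(1.0, max(min_alpha, displacement / snap_distance))
    return smoothed + alpha * (target - smoothed)
```

Because `alpha` saturates at 1.0, deliberate sweeps of the hand feel instant, while the sub-centimeter tremor that gets magnified at long range is filtered out.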
The second issue was a consequence of how we render the reticle in the game. In order to avoid clipping issues and to give a 2D feel to the reticle, we render it on top of all world geometry. While this solution works well when the reticle is parented to your head, it wasn’t the case with hands. The player’s perception of the reticle’s position was severely impacted. The problem was most noticeable when there were objects at different depths along the player’s line of sight. To the player, the reticle would appear to be on top of the object that was closest in their line of sight, when it was actually on top of an object at a different depth. This discrepancy caused confusion amongst players as to why they couldn’t interact with the object they were targeting. To solve this problem, we do a raycast from the head to the reticle and detect if there is an interactable object along its path. If an object is detected, we snap the reticle in front of it.
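The depth-snapping step can be sketched as follows. This is an engine-agnostic Python illustration, not the actual implementation: interactables are modeled as bounding spheres, the head-to-reticle raycast becomes a segment-versus-sphere test, and the small `epsilon` offset that places the reticle just in front of the hit is an assumed detail.

```python
import math

def snap_reticle(head, reticle, interactables, epsilon=0.05):
    """If the head->reticle segment passes through an interactable
    (modeled as a (center, radius) sphere), move the reticle just in
    front of the nearest such hit so its perceived depth matches the
    object the player is looking at."""
    direction = tuple(r - h for r, h in zip(reticle, head))
    seg_len = math.sqrt(sum(c * c for c in direction))
    if seg_len == 0.0:
        return reticle
    d = tuple(c / seg_len for c in direction)

    best_t = None
    for center, radius in interactables:
        oc = tuple(c - h for c, h in zip(center, head))
        t_proj = sum(a * b for a, b in zip(oc, d))
        if t_proj < 0.0 or t_proj > seg_len:
            continue  # sphere center not between head and reticle
        perp_sq = sum(c * c for c in oc) - t_proj * t_proj
        if perp_sq > radius * radius:
            continue  # segment misses the sphere
        t_hit = t_proj - math.sqrt(radius * radius - perp_sq)
        if best_t is None or t_hit < best_t:
            best_t = t_hit

    if best_t is None:
        return reticle  # nothing in the way; keep the raw position
    t = max(0.0, best_t - epsilon)
    return tuple(h + t * c for h, c in zip(head, d))
```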
A huge challenge with hands in VR is how they interact with the world from a physics standpoint. Game physics is an approximation and when you combine it with virtual hands, whose motion is not constrained by world geometry, terrible and often funny things happen. The hand collision model is made up of two aspects: Hand Physics and Held Object Physics.
In IEYTD, the hands do not have colliders; they only have trigger volumes to grab objects. Players can push their virtual hands into geometry and it will clip right through. At one point, we considered giving visual feedback when this happened, but eventually didn’t do it because we felt that it would alert the player even more to the fact that their hands were inside geometry. Lastly, we did not want players to be able to push objects forcefully out of closed containers, especially ones that were locked away and crucial to the puzzle.
Determining the physics behavior of objects held by the player is an interesting problem. Our first iteration was a model where held objects collided with the environment, and when the collision amount was above a certain threshold, the object dropped from the player’s hand. At first, this solution seemed to work well, but as the levels and interactions became more complex, it made object interaction frustrating. We had stacks of cash, cigars and grenades placed inside tight spaces. When the player grabbed these objects, especially during speed runs, the tight spaces would cause the physics system to constantly try to resolve penetration, resulting in severe physics glitches. And often, such penetration caused the collision amount to exceed the object drop threshold, resulting in the player dropping objects unintentionally. It was frustrating and became a hindrance to puzzle solving.
Our second approach was to turn off collision when the object was picked up by the hand. Early tests were promising because you no longer saw physics glitches when grabbing objects from tight spaces. It did come with the caveat that players could now stick objects into geometry, and upon release, they would bounce into unpredictable locations as the physics system tried to resolve the penetration. However, in our playtests this was fairly uncommon, and most of the time when it happened, players didn’t notice.
There was still one issue we had to solve with this approach. Since we turned off an object’s colliders when it was picked up, it no longer detected collisions. As a result, we lost the ability to break objects. This hurt puzzle solving, since breaking objects is a core puzzle mechanic, and it is also immersion-breaking for the game not to acknowledge such a forceful action. In order to address this issue, we turn on colliders on the held object based on the hand’s acceleration. Therefore, if the player moves the held object quickly, we turn on its colliders for a brief moment, thereby allowing it to send and receive collision events. This solution worked well since it was an intuitive action to perform when breaking objects.
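The acceleration-gated collider toggle can be sketched as a small per-frame state machine. This is a hedged Python illustration, not the actual Unity code: the acceleration threshold and window length are invented values, and hand velocity is 1D for brevity.

```python
class HeldObjectColliders:
    """Enable a held object's colliders for a short window whenever the
    hand accelerates hard, so fast swings can send and receive collision
    events (and break things) while slow handling clips freely.

    The threshold (m/s^2) and window (seconds) are illustrative guesses.
    """

    def __init__(self, accel_threshold=30.0, window=0.25):
        self.accel_threshold = accel_threshold
        self.window = window
        self.timer = 0.0
        self.prev_velocity = 0.0

    def update(self, hand_velocity, dt):
        """Call once per frame; returns True if colliders should be on."""
        accel = abs(hand_velocity - self.prev_velocity) / dt
        self.prev_velocity = hand_velocity
        if accel >= self.accel_threshold:
            self.timer = self.window  # fast swing: (re)arm the window
        else:
            self.timer = max(0.0, self.timer - dt)
        return self.timer > 0.0
```

The brief hold-open window matters: without it, colliders would flicker off the instant acceleration dips, right before the impact that was supposed to break the object.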
Lastly, the colliders are never turned off on the held object when the hand is in Telekinesis Mode. We experimented with turning them off, but it felt very unnatural to TK objects through geometry without any form of resistance, and it also increased the chance of losing objects inside geometry.
Audio plays a crucial role in delivering a good VR experience. At its core, IEYTD is a puzzle game; however, it is also a physics sandbox. Players are able to stack objects, throw them around, break them and even shoot them. As a result, these objects collide with the environment a lot, and it was important that they make believable collision sounds when they do.
In order to achieve this audio interaction, we designed a system that allowed us to tag environment colliders as Soft, Hard, Glass and Metal surfaces. The image to the right shows a paint over of the different surfaces in the Car Level. The sound designer created two collision sounds (normal and heavy) for each type of surface, and for every object that can be picked up by the player. The idea is then to adjust the volume levels of these two sounds on impact, based on their collision velocity, and then play them simultaneously. Our initial tests were positive, and with some additional tweaking, we were able to get the system to produce a good approximation of surface-based collision sounds. The following diagram is an overview of how the system works.
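The velocity-based mix of the two sounds can be sketched as a simple crossfade. This is a hedged illustration of the idea described above, not the shipped tuning: the speed breakpoints are invented, and a real implementation would also pick the sound pair from the tagged surface type (Soft, Hard, Glass, Metal) of whatever was hit.

```python
def collision_volumes(impact_speed, heavy_speed=4.0, max_speed=8.0):
    """Return (normal_volume, heavy_volume), both in [0, 1], for a
    collision at `impact_speed` (m/s). The two sounds play
    simultaneously: soft impacts use only the 'normal' sample, and the
    mix crossfades toward 'heavy' as impact speed rises. Breakpoints
    are illustrative guesses."""
    overall = min(1.0, impact_speed / max_speed)   # louder with speed
    heavy = min(1.0, max(0.0, (impact_speed - heavy_speed)
                              / (max_speed - heavy_speed)))
    normal = overall * (1.0 - heavy)
    return normal, heavy
```

Blending two pre-authored samples per surface is much cheaper than synthesizing impacts, yet still lets a grenade dropped on glass sound different from one hurled at metal.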
As mentioned earlier, IEYTD is also a physics sandbox. It allows players to have fun with physics while they try to solve a puzzle. We love this aspect of our game; however, it comes at a cost. Players can easily lose an object that is crucial to the puzzle and put the game in a stalled state.
We decided to solve this problem using a technique that games have done before: have an x-ray shader on objects that are hidden. At first, we had concerns about it breaking the player’s immersion, but then again you are a super spy with telekinetic powers, so x-ray vision didn’t seem too farfetched. We did not want all occluded objects to use the x-ray shader, since it would compromise objects that are hidden from a puzzle standpoint. Instead, we built a system that allowed us to define approximate areas that are not reachable by the player. We did this using trigger volumes and named them Hidden Volumes. When an object falls into this volume, an x-ray shader is applied to it, allowing the player to see it through geometry.
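Conceptually the Hidden Volume check is just a containment test run over the lost-object candidates each frame. The sketch below is an engine-agnostic Python illustration under assumed names, with the volumes modeled as axis-aligned boxes; in the real game these are trigger volumes and the “x-ray” result would drive a shader swap.

```python
class HiddenVolume:
    """Axis-aligned box marking a region the player cannot reach."""

    def __init__(self, mins, maxs):
        self.mins, self.maxs = mins, maxs

    def contains(self, point):
        return all(lo <= p <= hi
                   for lo, hi, p in zip(self.mins, self.maxs, point))


def xray_objects(hidden_volumes, objects):
    """Return the names of objects that should render with the x-ray
    shader because they have fallen into a Hidden Volume."""
    return {name for name, pos in objects
            if any(v.contains(pos) for v in hidden_volumes)}
```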
Next, we needed a way for the player to be able to interact with objects inside the Hidden Volume. In order to do this, we put the objects into a separate physics layer, thereby allowing us to create a layer-specific raycast to detect and prioritize their targeting. This solution allowed the players to pick up objects through geometry.
As the name suggests, the sole purpose of a Blind Volume is to reduce the visibility of the player when they peek into places that are either a crucial part of the puzzle or simply lack any geometry. Under the hood, they are just trigger volumes that are placed in the affected regions of the level. By default, these trigger volumes cause the screen to fade to black when the player’s head enters one. In other scenarios, it was more convenient to set up a single Blind Volume that caused the screen to fade to black when the player’s head exited it.
In IEYTD, we have a number of compartments containing objects that the players can pick up. In order to prevent them from grabbing these objects without opening the compartment door (or lid), we put the lid collider on a special physics layer. Using a raycast against this layer, from the player’s head to their hand, we detect if this collider is hit. If it is, then we deduce that the player is trying to access an object inside a compartment without opening it. When this happens, we simply disable the player’s ability to grab objects.
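The lid check reduces to a segment-versus-collider test on the special layer. Below is a hedged Python sketch with the lid colliders modeled as axis-aligned boxes and a standard slab test standing in for the engine’s layered raycast; all names are illustrative.

```python
def segment_hits_box(mins, maxs, start, end):
    """Slab test: does the segment start->end pass through the AABB?"""
    t0, t1 = 0.0, 1.0
    for lo, hi, a, b in zip(mins, maxs, start, end):
        d = b - a
        if d == 0.0:
            if a < lo or a > hi:
                return False  # parallel to this slab and outside it
            continue
        ta, tb = (lo - a) / d, (hi - a) / d
        if ta > tb:
            ta, tb = tb, ta
        t0, t1 = max(t0, ta), min(t1, tb)
        if t0 > t1:
            return False
    return True


def can_grab(head, hand, lid_boxes):
    """Grabbing is disabled when the head->hand ray passes through a
    closed compartment lid (each lid given as a (mins, maxs) box)."""
    return not any(segment_hits_box(mins, maxs, head, hand)
                   for mins, maxs in lid_boxes)
```

Because the test runs from the head rather than the hand, it catches the exact cheat it targets: a hand that has clipped inside a compartment whose lid still sits between the player’s eyes and the loot.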
The development of IEYTD has been a challenging and rewarding experience. The techniques described here are the result of countless playtests and iterations, and knowledge shared by the VR community. VR is a powerful medium that allows us developers to entice players in new and exciting ways. And, it does a pretty darn good job of transporting them to a world we’ve created. Here’s hoping that IEYTD gets people closer to their dreams of being a sophisticated and responsible Super Spy.
For more information on the making of I Expect You To Die, check out CEO Jesse Schell’s Gamasutra article from June 2015.