Q&A: A chat with Magic Leap's interaction experts
During Magic Leap's inaugural L.E.A.P. Conference in Los Angeles, we sat down with Brian Schwab, director of the company's Interaction Lab, and Aleissia Laidacker, interaction director, to talk about the future of mixed reality.
With the advances coming with the Magic Leap One, we wanted to see how Magic Leap’s interaction experts saw the future of games.
Edited for length and clarity.
What are the most important aspects of interactive design on Magic Leap that developers coming from a traditional 2D environment into the 3D MR environment need to grasp?
Brian: I mean, that's a fairly big question. I would say there are some tech challenges, don't get me wrong. There's a host of new types of sensors that people haven't had to deal with before, so there are definitely some inputs they have to do a little research on.
Hopefully some of the Magic Kit samples that we put out there will give them that ability to just try things, to feel what the quality of eye tracking is and feel what meshing is like. Those sorts of things. But then likewise, there's also design considerations like you say. Like you really need to stop trying to replace the world with your own world. This world needs to be a primary actor.
I think with mixed reality, the less-is-more idea is actually super important, because the fewer pixels you use, the more magical they feel; like they're in a real-world setting as opposed to this crazy setting that you built. There's a bunch of things, though.
In traditional mediums I'm making this crazy-compelling world, and you're going there. And this [mixed reality] is almost the opposite -- it's sort of like, "hey, I'm going to incorporate my stuff into your world." I'm going to make it compellingly pushed out towards you, instead of pulling you in. That's a very hard thing for a lot of devs to get their heads around. They have a suite of tools that let them pull you in immersively, and play your heart strings and make you feel whatever you need to feel. And now, they need to be a guest in your world. They need to still enrapture you and pull you into some kind of an experience. But it needs to be more on your terms than it is on their terms.
When I'm given a blank space, like in Create, my first response to it is inaction because there weren't any guide posts. When you've been talking to developers, has that been an issue?
Brian: I think Create's kind of a special case because it's not really a game, it's more of like a toy box. People, even gamers, I think need some structure, usually. You know like when you were five, and somebody just put a box of toys out, you'd dive in and play. You start doing stuff. But like the average 20- or 30- or whatever-year-old, when you put a box of toys in front of them they're like, "oh, you've got a lot of toys in here." Whereas if you put a Lego playset in front of them and said "hey, let's futz around," people start to play. Just the tiniest bit of structure and people will go for it. So I think Create has a little bit of an extra step, because they're just putting a bunch of toys in front of you and saying "your go."
So, yes. With developers, what we do is help people take their first step. That can be somewhat difficult because they're used to standing still while playing. Especially with games like Dr. Grordbort's Invaders -- you've got a gun in your hand, but it was difficult to get players to the point where they would move. Usually, once they started moving they got it quickly. You start realizing how fun it is to move around and shoot. But that can be a challenge.
Dr. Grordbort's Invaders
What are some of the challenges you've found in scaling games to different environments without hurting the experience?
Brian: It's a huge challenge, and Dr. G specifically had to overcome a lot of those things. They wanted their robots to be six feet tall in some cases, right? So if you're going to make a portal that is for a 6-foot-tall robot to step out of, you need a large chunk of wall or a large chunk of empty floor. And like you said, if I have a small place I might not have that. I might have one space like that or two spaces like that, but I don't have four. So if you played through all of Dr. G, you'll notice that there are some portals that are quite a bit smaller, where the robot squeezes out and then he stands up. They had to find really neat, clever ways to use smaller spaces, so that the full-size robot can come out of that portal and it makes sense.
Aleissia, in what ways do you see biometric feedback being used as an input or controller in gaming?
Aleissia: One thing, personally, coming from working in games, I did a lot of character AI. I'm really passionate about the whole character side. So I really want to see a lot of those inputs being used for how we control our NPC characters in video games.
I mean, we've been hearing this for years, right? That NPCs have no agency; they're kind of there to serve the game player. Whereas we're looking at it now much more along the lines of "how do I actually create a digital character that has its own goals and is truly an intelligent character?" And for me, and Brian as well, we've always been saying this: the only way to do that is with a lot more contextual information about the person in front of me.
So let's say for example I've got a digital character here, and we're just chatting, but it's still a digital character that's right there. Let's say my partner walks into the room, and I turn and I'm like "hey how's it going." The digital character needs to be aware of that, and actually respond to that.
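The behavior Aleissia describes can be thought of as event-driven: the character's current activity gets interrupted by contextual events from the real environment. A minimal, hypothetical sketch of that idea is below; the class, states, and event names are illustrative assumptions, not Magic Leap APIs.

```python
# Hypothetical sketch: an NPC that reacts to contextual events in the
# user's real environment (e.g. another person entering the room).
# None of these names correspond to actual Magic Leap APIs.

from dataclasses import dataclass, field

@dataclass
class NPC:
    name: str
    state: str = "chatting"              # current behavior
    log: list = field(default_factory=list)

    def on_event(self, event: str) -> None:
        # A person entering the room interrupts the conversation and
        # triggers an acknowledgement, the way a human conversation
        # partner would pause and look over.
        if event == "person_entered" and self.state == "chatting":
            self.state = "acknowledging"
            self.log.append(f"{self.name} turns toward the newcomer")
        elif event == "person_left" and self.state == "acknowledging":
            self.state = "chatting"
            self.log.append(f"{self.name} resumes the conversation")

npc = NPC("Guide")
npc.on_event("person_entered")   # NPC notices and acknowledges
npc.on_event("person_left")      # NPC returns to the conversation
```

The point of the sketch is that the NPC's logic consumes signals about the room, not just game-world state, which is exactly the extra contextual information the headset's sensors can supply.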
"I think with mixed reality, the less-is-more idea is actually super important, because the fewer pixels you use, the more magical they feel."
On the character side, I think it's super powerful, and it's going to open up a whole new side of how we see NPC characters in video games, and I'm really excited. On the interaction/input side, one thing that I know we're excited about is accessibility. I worked in video games on gameplay systems, and for years it was all about this mastery of being so awesome at clicking and mashing buttons. That's because the only way I interacted with video games was through this very limited set of interactions.
But now we have a whole lot of interactions. One of the pitfalls we've seen sometimes is when game devs come to mixed reality and they're like, "Cool, a lot more things. But I'm still going to use my interactions to do one thing as an output." And we're always stressing that it is much more about natural things. Natural inputs are blended inputs. Me interacting with a thing is not just going to be "I look at it." It's going to be "I look at it, I probably point at it." I say, "hey, move this to here." It's a combination of inputs, and we want devs to definitely think about the possibilities of the space.
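The "blended input" idea above can be sketched as a small fusion step: several simultaneous natural inputs (gaze, pointing, voice) resolve into a single intent. The input names and the fusion rule here are illustrative assumptions, not a real mixed-reality SDK.

```python
# Hypothetical sketch of "blended input": resolving one user intent
# from several simultaneous natural inputs (gaze, pointing, voice).
# The channel names and fusion rule are illustrative assumptions.

from typing import Optional

def resolve_target(gaze: Optional[str], point: Optional[str]) -> Optional[str]:
    # Prefer agreement between channels; fall back to whichever is present.
    if gaze and point:
        return gaze if gaze == point else point  # pointing wins a conflict
    return gaze or point

def blended_command(gaze: Optional[str], point: Optional[str],
                    speech: str) -> Optional[str]:
    # Combine the resolved target with the spoken verb into one intent.
    target = resolve_target(gaze, point)
    if target and "move" in speech:
        return f"move {target} to indicated location"
    return None

# The user looks at the lamp, points at it, and says "hey, move this to here".
print(blended_command("lamp", "lamp", "hey move this to here"))
# → "move lamp to indicated location"
```

The design point is that no single channel carries the whole command: speech supplies the verb, while gaze and gesture together disambiguate the object, which is what "I look at it, I probably point at it, I say move this to here" describes.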
Asking them to look just beyond the idea of a basic controller and try to use storytelling almost as a controlling device?
Aleissia: Yeah. Some game devs, when I've talked about this -- a few of them who are, like, hardcore gamer devs -- say, "I don't want to do that. I enjoy making gameplay systems where beating the level and being better than you comes down to flicking a joystick slightly better." But that's why none of my friends play the types of games that I play: they feel scared off by the game controller.
There are going to be a lot of game devs who come in and don't necessarily embrace natural inputs right away, because they enjoy the limitations and constraints that game controllers have right now. But that's what we're trying to do: see what all the different interaction spaces are, and how you can still have a really fun time, and a challenging time, playing different types of games in mixed reality.