Speaker Q&A: Arno Hartholt discusses the use of Virtual Humans and VR/AR for clinicians
Arno Hartholt is Director for R&D Integration at the USC Institute for Creative Technologies and will be at VRDC 2017 to present his talk, "Immersive Medical Care with VR/AR and Virtual Humans," which will discuss how to apply VR/AR and other powerful capabilities to heal, inform, and teach in the medical domain. Here, Arno gives us some information about himself and his work.
Tell us about yourself and your work in VR
I work at the USC Institute for Creative Technologies (ICT) as the Director for R&D Integration. ICT is a non-profit research institute focused on creating immersive experiences that train, educate, heal, and entertain. We are somewhat unusual in that we do basic and applied research in addition to developing and fielding prototypes. My background is in Computer Science, and besides coordinating between our researchers and developers, I also focus on how we can best design and develop these kinds of experiences using multi-disciplinary teams.
My first experience with VR was around 2008. We were developing Gunslinger, a mixed-reality, interactive experience set in the Wild West. We explored several ways of developing it, and one of them was in VR. We built a prototype of a digital character with procedural animations in VR, which was an eye-opening experience. You could walk around this character, even jump or squat down, and she would always keep looking at you; such a feeling of presence.
Given the state of the technology at the time, we ultimately went with a mixed-reality setting: a real Hollywood-style physical set with embedded projector screens that extended the real space and showed the characters you could interact with, including the barman and the bad guy. You played the Ranger and were basically the lead in a typical Wild West movie scene. It’s still up and running and quite cool.
Currently, I’m co-heading Bravemind, a VR tool that clinicians use to help treat Veterans who have PTSD. It’s the brainchild of Dr. Skip Rizzo, who has been working on it since 2004. It helps Veterans through Prolonged Exposure Therapy and is a very rewarding project to work on.
Without spoiling it too much, tell us what you’ll be talking about at VRDC
There will be two main topics. The first is AR/VR, with an emphasis on the VR work that has been done in the clinical realm, including the treatment of phobias and PTSD. VR is a particularly great tool here, because you have complete control over the virtual environment: you can determine what the patient sees and hears as well as record their reactions. I’ll be delving quite deep into the design and development of Bravemind. The second topic is virtual humans, which are interactive, autonomous characters who can perceive you and respond verbally and nonverbally. They can be used in a variety of ways, including as mentors, tutors, and actors. It’s the combination of virtual humans and VR/AR that is fascinating to explore.
What excites you most about VR?
I get excited by the presence of others in VR/AR. This can be a fully autonomous agent (e.g. a virtual human) or an avatar driven by another human being. The ability to populate virtual worlds with characters we can interact with in a meaningful way is very powerful to me.
What do you think is the biggest challenge to realizing VR’s potential?
There’s obviously the hardware that needs to be improved, including resolution and interaction devices. But I think the biggest challenge is figuring out how we can create meaningful interactions for these new platforms. Granted, we’ve had the same challenge for more traditional games or simulations as well, but with VR and AR where they are now, we are on the cusp of creating incredibly compelling worlds. However, these worlds will feel increasingly empty and trivial if we are unable to populate them with characters that we can meaningfully interact with.
What exactly is a virtual human, and how are they designed to interact with “real” humans?
Virtual humans are like digital actors. They can hear you, see you, “think” about what you are doing, and then respond using both body language and regular speech. They can play a variety of roles, including teacher, role-player, or mentor. That means you can use them pretty much anywhere you’d want to talk to a real human being, like practicing job interviews or having a tutor who helps you learn a new skill. They can be developed for a range of platforms, including mobile, the web, AR/VR, or a large mixed-reality environment, which in turn dictates how exactly you interact with them, but you typically talk into a microphone and look at the character on a screen.
Your talk mentions the use of VR/AR and virtual humans outside of just games and entertainment. Where do you see the future of virtual humans in the medical field?
Virtual humans are great for training. They’re consistent, they’re available 24/7, they’re easily tweaked, they can do things real humans can’t, and you can easily collect all the interaction data for analysis. They can be used as virtual patients, for instance, so that medical students can practice a variety of skills. They’re obviously not as good as real humans in terms of AI, so they work best as a tool to augment existing training with real people. They can also serve as a human-computer interaction paradigm, where, say, patients can talk to their own virtual doctor and ask common questions. This is clearly not going to replace a real doctor, but it can be a great tool for more routine interactions.
Gamasutra, VRDC, and GDC are sibling organizations under parent UBM Americas.