Alan Wake and Quantum Break developer Remedy is working with tech giant Nvidia to create a streamlined motion capture and animation system.
The new technique uses a deep learning neural network -- which runs on Nvidia's eight-GPU DGX-1 server -- to generate accurate 3D facial animations based on videos of voice actors recording lines.
As reported by Ars Technica, Remedy began the experimental process by feeding the network information on existing animations so it had a basic understanding of the final outcome.
Then, after supplying it with around five to ten minutes of facial capture footage, the network was ready to begin producing animations of its own. Once suitably trained, it's also apparently able to create new animations using nothing but audio input.
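To make the idea concrete, here is a heavily simplified sketch of that kind of pipeline: learn a mapping from per-frame audio features to facial animation parameters, then drive a face from audio alone. Everything here is invented for illustration -- the dimensions, the synthetic training data, and the use of a plain least-squares fit in place of Remedy and Nvidia's actual deep neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen purely for illustration.
N_FRAMES = 200   # training frames (stand-in for minutes of capture footage)
AUDIO_DIM = 26   # e.g. per-frame audio features for the spoken line
FACE_DIM = 8     # e.g. facial blendshape weights per frame

# Synthetic "ground truth": a fixed linear relation plus noise stands in
# for the artist-authored animations the real network was trained on.
true_W = rng.normal(size=(AUDIO_DIM, FACE_DIM))
audio = rng.normal(size=(N_FRAMES, AUDIO_DIM))
faces = audio @ true_W + 0.01 * rng.normal(size=(N_FRAMES, FACE_DIM))

# "Training": fit the audio-to-face mapping (least squares, not deep learning).
W, *_ = np.linalg.lstsq(audio, faces, rcond=None)

# "Inference": generate animation parameters for a new frame from audio alone.
new_audio = rng.normal(size=(1, AUDIO_DIM))
predicted_face = new_audio @ W
print(predicted_face.shape)  # (1, 8)
```

The real system replaces the linear fit with a deep network and raw video/audio with learned features, but the shape of the problem is the same: pairs of (audio, animation) frames in, a predictor of animation from audio out.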
Even at this stage, it looks like a surprisingly effective technique that has the potential to drastically speed up the animation process, giving artists more time to focus on other areas of production.
"Complex facial animation for digital doubles like that in Quantum Break can take several man-years to create," said Antti Herva, lead character technical artist at Remedy.
"After working with Nvidia to build video- and audio-driven deep neural networks for facial animation, we can reduce that time by 80 percent in large scale projects and free our artists to focus on other tasks. We're convinced AI will revolutionize content creation."
There's no doubting the appeal of the tech, but given that it's still in the early stages of development, there's no word on when (or in what form) it'll be released into the wider world.