Researchers at Adobe and Scotland’s University of Edinburgh are developing systems that will help video game characters move like humans.
As the teams note, the motion-capture technology commonly used in video games can’t account for every way that a digital character might interact with the world.
With that in mind, the researchers are using a deep neural network to create more realistic depictions of movement.
To start, the teams studied and collected a variety of motions — picking up objects, climbing, sitting down — all performed by a live actor on a motion-capture stage. From there, the neural network can take what it has learned and adapt it to almost any situation or environment.
In addition to producing more realistic animations, the team notes, this network can help reduce file sizes. That could prove especially useful as games shift toward streaming, such as on Google’s upcoming Stadia platform.
The researchers will present their findings at the ACM SIGGRAPH Asia conference in Brisbane, Australia next month.