Facebook’s artificial intelligence can extract playable characters from real-world videos.
As detailed in a newly published preprint titled “Vid2Game: Controllable Characters Extracted from Real-World Videos,” Facebook’s AI “generates novel image sequences of that person … [and the] generated video can have an arbitrary background, and effectively capture both the dynamics and appearance of the person.”
To do this, Facebook used two neural networks that track the featured person’s poses as they move. The character is then “cropped out” of the video and can be driven by any “low-dimensional” signal, such as a joystick or keyboard.
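The two-network loop described above can be sketched roughly as follows. This is an illustrative toy, not the paper’s implementation: the function names (`pose2pose`, `pose2frame`), the pose layout, and the stand-in network bodies are all assumptions made here for clarity.

```python
import numpy as np

POSE_DIM = 34  # assumed layout: 17 joints x (x, y)

def pose2pose(pose: np.ndarray, control: np.ndarray) -> np.ndarray:
    """First network (hypothetical stand-in): predict the next pose from the
    current pose and a low-dimensional control signal, here by shifting
    every joint by the 2-D control vector."""
    return pose + np.tile(control, POSE_DIM // 2)

def pose2frame(pose: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Second network (hypothetical stand-in): render the posed character
    onto an arbitrary background, here by marking joint locations as
    white pixels."""
    frame = background.copy()
    h, w = frame.shape[:2]
    for x, y in pose.reshape(-1, 2):
        frame[int(y) % h, int(x) % w] = 255
    return frame

# Game loop: a keyboard/joystick signal drives the character frame by frame.
pose = np.zeros(POSE_DIM)
background = np.zeros((64, 64), dtype=np.uint8)
for control in [np.array([1.0, 0.0])] * 5:  # five "move right" inputs
    pose = pose2pose(pose, control)         # update the character's pose
    frame = pose2frame(pose, background)    # composite it onto the scene
```

The point of the split is that only the pose network needs to understand the control signal; the rendering network just has to map a pose onto whatever background is supplied.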
Researchers trained the AI on three videos, each five to eight minutes long, featuring a tennis player outdoors, a person swinging a sword indoors, and a person walking.
This helped the AI learn to analyze different dynamic elements depending on the environment and the action being performed.