
Samsung deepfake tech creates animated avatars from a single still image

Can we believe anything anymore?

MegaPortraits is a new research project from Samsung Labs that can create realistic, high-definition deepfakes from nothing more than a single source photo.

The Samsung Labs team says the system can create megapixel-resolution avatars from a single source frame, including paintings like the Mona Lisa, as seen in the video below. The deepfakes look absolutely unreal, and when I say unreal I mean they look absolutely real, with lifelike head and neck movements and rich facial expressions.

The technology works by combining the source image with the motion of a ‘driver.’ The driver is another person whose head movements and facial expressions are applied to the source image to create the deepfake.
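To picture how that driving step works, here is a tiny, illustrative Python sketch: one still photo is paired with every frame of a driver video, and a model produces output frames that keep the photo’s appearance but borrow each frame’s motion. The function below is a hypothetical placeholder, not Samsung’s actual code.

import torch

def apply_driver_motion(source_photo, driver_frame):
    # Placeholder for the real neural synthesis step, which uses learned
    # encoders and a renderer rather than simple blending.
    return (source_photo + driver_frame) / 2

source_photo = torch.rand(3, 256, 256)                        # the single still image (e.g. a painting)
driver_video = [torch.rand(3, 256, 256) for _ in range(30)]   # frames of the person supplying the motion

# Each output frame keeps the source's identity but copies the driver's
# head pose and facial expression at that instant.
animated = [apply_driver_motion(source_photo, frame) for frame in driver_video]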

“Our training setup is relatively standard. We sample two random frames from our dataset at each step: the source frame and the driver frame. Our model imposes the motion of the driving frame (i.e., the head pose and the facial expression) onto the appearance of the source frame to produce an output image,” the MegaPortraits abstract explains.

“The main learning signal is obtained from the training episodes where the source and the driver frames come from the same video, and hence our model’s prediction is trained to match the driver frame.”
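In practice, that training setup can be sketched in a few lines of PyTorch-style code. The toy model below is purely illustrative, a hypothetical stand-in for the much larger MegaPortraits networks, but it shows the shape of the loop the abstract describes: sample a source frame and a driver frame from the same video, impose the driver’s motion on the source’s appearance, and penalize the difference between the output and the driver frame.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical, simplified stand-in for the real architecture: an appearance
# encoder for the source frame, a motion encoder for the driver frame, and a
# decoder that combines the two.
class TinyPortraitModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.appearance = nn.Conv2d(3, 16, 3, padding=1)   # encodes "who" (source)
        self.motion = nn.Conv2d(3, 16, 3, padding=1)       # encodes "how" (driver)
        self.decoder = nn.Conv2d(32, 3, 3, padding=1)      # renders the output frame

    def forward(self, source, driver):
        feats = torch.cat([self.appearance(source), self.motion(driver)], dim=1)
        return torch.sigmoid(self.decoder(feats))

model = TinyPortraitModel()
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)

# One training step on dummy data: the source and driver frames come from the
# same (here, random) video, so the driver frame doubles as the ground truth.
source = torch.rand(1, 3, 256, 256)
driver = torch.rand(1, 3, 256, 256)

output = model(source, driver)      # driver's motion imposed on source's appearance
loss = F.l1_loss(output, driver)    # prediction is trained to match the driver frame

optimizer.zero_grad()
loss.backward()
optimizer.step()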

The team, including researchers Nikita Drobyshev, Jenya Chelishev, Taras Khakhulin, Aleksei Ivakhnenko, Victor Lempitsky, and Egor Zakharov, wrote a full research paper on the project, titled MegaPortraits: One-shot Megapixel Neural Head Avatars. Read more about it here.

Image credit: Samsung Labs

Source: Samsung Labs
