New research from Nvidia uses artificial intelligence to create slow-motion video.
When it comes to filming slow-motion video, it’s all about the frames. Cinematic video typically runs at 24 frames per second (fps), TV shows and YouTube videos are often filmed at 30 fps, and games usually run at 60 fps.
To get slow motion, however, you need a much higher frame rate that can be broken down to one of those standard rates. Otherwise, the video will look choppy.
Nvidia’s solution can turn a standard 30 fps video into a 240 fps video. If you were to play that footage back at the cinematic 24 fps, a single second of film would be stretched to ten seconds.
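The arithmetic behind that ten-second figure is simple: play back every captured frame at the lower rate and see how long it lasts. A quick sketch (the frame rates are the ones from the article):

```python
# One second captured (or interpolated) at 240 fps yields 240 frames.
capture_fps = 240   # Nvidia's interpolated frame rate
playback_fps = 24   # standard cinematic frame rate

frames_captured = capture_fps * 1  # frames from one second of recording

# Played back at 24 fps, those frames take this many seconds to show.
slowdown_seconds = frames_captured / playback_fps
print(slowdown_seconds)  # 10.0
```

The same ratio explains why a phone's 240 fps mode gives an 8x slowdown when played at 30 fps.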
How the AI creates seven frames
The AI developed by the researchers can generate up to seven intermediate frames. It does so by taking two input frames and warping each of them to the desired point in time between the two. The AI then adaptively fuses the warped frames together using a convolutional neural network (CNN). Warping alone doesn’t handle motion well, however, so the research team developed a second CNN to refine the approximations of motion between frames.
The CNN also predicts visibility maps, highlighting areas where movement has exposed regions that were previously hidden. The visibility maps are applied before the first CNN fuses the images, which helps reduce artifacts.
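The fusion step described above can be illustrated with a simplified NumPy sketch. This is not Nvidia’s implementation: in the real system, CNNs predict both the motion (used to warp the frames) and the visibility maps, whereas here the warped frames and the map are just synthetic arrays, and only the visibility-weighted blend is shown.

```python
import numpy as np

# Hypothetical stand-ins for the two input frames after they have been
# warped to the intermediate time t (the real system warps them using
# CNN-refined motion estimates; here they are flat grayscale frames).
warped0 = np.full((4, 4), 0.2)
warped1 = np.full((4, 4), 0.8)

# Visibility map for the first frame: 1.0 where its pixels remain
# visible at time t, 0.0 where motion has occluded them. The second
# frame's map is taken as the complement.
vis0 = np.ones((4, 4))
vis0[:, 2:] = 0.0  # pretend the right half is occluded in frame 0

def fuse(warped0, warped1, vis0, t):
    # Weight each warped frame by its visibility and by how close it
    # is in time to the target instant, then normalise the blend.
    w0 = (1.0 - t) * vis0
    w1 = t * (1.0 - vis0)
    return (w0 * warped0 + w1 * warped1) / (w0 + w1 + 1e-8)

# Midpoint frame: left half drawn from frame 0, right half from frame 1.
frame_t = fuse(warped0, warped1, vis0, t=0.5)
```

Letting the visibility maps gate the blend is what avoids ghosting: an occluded pixel contributes nothing, so the fused frame draws that region entirely from the frame where it is actually visible.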
The results are surprisingly good, although not always accurate. Before it sees any commercial application, the process will need further refinement.
The full details can be read in a research paper on the preprint arXiv server. The research was published by PhD student Huaizu Jiang and his professor Erik Learned-Miller of the University of Massachusetts, Amherst, along with Ming-Hsuan Yang of the University of California, Merced. They worked with Nvidia researchers Deqing Sun, Jan Kautz and Varun Jampani.
Real-world applications
The technique could be a great solution for slow-motion video, especially on mobile devices. While some phone cameras are capable of shooting 240 fps video, doing so is incredibly power-intensive and requires a lot of storage. It’s also plain impractical to record at 240 fps all the time.
Nvidia’s solution could be beneficial in mobile applications. Picture a Google Photos-like experience. Right now, Photos can automatically tweak images and present the results to you in the Assistant panel. Imagine that kind of seamless experience, but with slow-motion video. You record your kid blowing out the candles on their birthday cake. Later, your phone notifies you that it made a video for you. When you open it, you find a super slow-motion clip of your kid blowing out the candles, interpolated from the video recorded earlier.
However, the system has limits. The researchers trained the AI on existing slow-motion video so it could learn to create intermediate frames accurately. Unfortunately, that may limit how well the system works with typical consumer video.