NVIDIA researchers can now turn 30fps video into 240fps slo-mo footage using AI

NVIDIA researchers have developed a new method that uses artificial intelligence to create 240fps slow-motion video from 30fps footage. Detailed in a paper published to arXiv (hosted by the Cornell University Library), the system was trained on more than 11,000 videos using NVIDIA Tesla V100 GPUs and the cuDNN-accelerated PyTorch deep learning framework.

This archive of videos, all shot at 240fps, taught the system to predict the intermediate frames that are missing from footage shot at only 30fps.
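
The core idea behind this kind of slow-motion synthesis, flow-based frame interpolation, can be sketched in a few lines of PyTorch. The snippet below is an illustrative toy, not NVIDIA's implementation: the placeholder flow field, the `backward_warp` and `interpolate` helpers, and the simple linear blending are assumptions made for demonstration, whereas the actual system uses trained neural networks to estimate the flows and handle occlusions.

```python
# A minimal sketch of flow-based frame interpolation, NOT NVIDIA's actual network.
# It only illustrates the core idea: warp two consecutive 30fps frames toward an
# intermediate time t and blend them. The flow field here is a placeholder; a real
# system predicts it (and handles occlusions) with a trained model.
import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Sample `frame` (N, C, H, W) at positions displaced by `flow` (N, 2, H, W)."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).float()          # (H, W, 2), x then y
    grid = base.unsqueeze(0) + flow.permute(0, 2, 3, 1)   # displaced sampling positions
    # Normalize pixel coordinates to [-1, 1], as grid_sample expects.
    grid[..., 0] = 2.0 * grid[..., 0] / (w - 1) - 1.0
    grid[..., 1] = 2.0 * grid[..., 1] / (h - 1) - 1.0
    return F.grid_sample(frame, grid, align_corners=True)

def interpolate(frame0, frame1, flow_0to1, t):
    """Estimate the frame at time t in (0, 1) between frame0 and frame1."""
    # Linearly scale the 0->1 flow to approximate flows from time t back to each
    # endpoint, then blend the two warped frames by temporal distance.
    warped0 = backward_warp(frame0, -t * flow_0to1)
    warped1 = backward_warp(frame1, (1.0 - t) * flow_0to1)
    return (1.0 - t) * warped0 + t * warped1

# Seven in-between frames per original frame pair takes 30fps footage to 240fps.
f0, f1 = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
flow = torch.zeros(1, 2, 256, 256)                        # placeholder flow field
in_betweens = [interpolate(f0, f1, flow, t=i / 8) for i in range(1, 8)]
```

With the placeholder zero flow, the sketch degenerates to a simple cross-fade between the two frames; the quality NVIDIA demonstrates comes precisely from learning accurate intermediate flows and visibility maps rather than blending naively.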

This isn't the first time something like this has been done. A post-production plug-in called Twixtor has offered frame interpolation for almost a decade, but it doesn't come anywhere close to NVIDIA's results in terms of quality and accuracy. Even in scenes with a great amount of detail, the interpolated frames show minimal artifacts.

The researchers also note that while some smartphones can shoot 240fps video, it's not necessarily worth the processing power and storage when a system such as theirs can get you 99% of the way there. 'While it is possible to take 240-frame-per-second videos with a cell phone, recording everything at high frame rates is impractical, as it requires large memories and is power-intensive for mobile devices,' the researchers wrote in the paper.

The research and findings detailed in the paper will be presented at the annual Computer Vision and Pattern Recognition (CVPR) conference in Salt Lake City, Utah, this week.

2018-6-21 19:13