Most smartphone cameras, even those with optical image stabilization, rely solely on electronic stabilization for video footage. Google's new Pixel 2 devices, however, combine optical and electronic stabilization for ultra-smooth handheld footage and panning.
And a new post on the Google Research Blog explains in quite some detail how the system works.
As you would expect from a software company like Google, advanced algorithms that draw on the company's machine-learning expertise are the key to the solution.
Motion information is collected from the optical stabilizer and the device's built-in gyroscope. The Pixel 2 then runs a filtering algorithm that pushes video frames into a deferred queue, analyzes the accompanying motion data, and uses machine learning to predict where and how the camera is going to move next.
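The Google post doesn't publish code, but the deferred-queue idea can be sketched in a few lines. The Python snippet below is purely illustrative: the class, the parameters and the simple Gaussian smoothing are our assumptions standing in for Google's learned motion model, and the camera angle is reduced to a single scalar. It shows how buffering frames lets the stabilizer compare each frame's real camera angle against a "virtual" camera path computed from both past and future motion:

```python
from collections import deque
import math

# Illustrative sketch only, not Google's shipped algorithm: frames are held
# back in a queue so the stabilizer can "see" future motion before deciding
# how much to warp each frame. A Gaussian weighting over past and future
# gyro-derived angles stands in for the learned motion prediction.

LOOKAHEAD = 15   # frames of future context buffered before a frame is emitted
SIGMA = 5.0      # smoothing strength, in frames

_WEIGHTS = [math.exp(-(i * i) / (2 * SIGMA * SIGMA))
            for i in range(-LOOKAHEAD, LOOKAHEAD + 1)]
_TOTAL = sum(_WEIGHTS)
_WEIGHTS = [w / _TOTAL for w in _WEIGHTS]


class DeferredStabilizer:
    """Buffers frames and camera angles so each output frame 'sees' its future."""

    def __init__(self):
        self._frames = deque()   # deferred video frames
        self._angles = deque()   # measured camera angle per frame (OIS + gyro)

    def push(self, frame, angle):
        """Feed one frame and its measured camera angle.

        Returns (frame, correction) once enough lookahead has accumulated,
        otherwise None; the first LOOKAHEAD frames serve only as context.
        """
        self._frames.append(frame)
        self._angles.append(angle)
        if len(self._frames) < 2 * LOOKAHEAD + 1:
            return None
        # Virtual (stabilized) camera angle: weighted average over past and
        # future real camera angles around the centre frame of the window.
        virtual = sum(w * a for w, a in zip(_WEIGHTS, self._angles))
        centre_frame = self._frames[LOOKAHEAD]
        correction = virtual - self._angles[LOOKAHEAD]  # warp to apply
        self._frames.popleft()
        self._angles.popleft()
        return centre_frame, correction
```

Calling push() for every incoming frame therefore returns frames with a fixed delay of LOOKAHEAD frames, each paired with the warp needed to move it onto the smoothed virtual camera path.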
The system can correct for more types of motion than conventional stabilization systems, including wobble, rolling shutter and even focus hunting. Virtual camera motion is introduced to mask the strong variations in sharpness that occur when the device is moved very quickly.
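That last trick can also be sketched in the same illustrative style (again an assumption, not the shipped algorithm): when the measured camera speed is high enough that motion blur is unavoidable, the virtual camera is blended back toward the real camera path, so the remaining blur looks consistent with the motion the viewer sees rather than appearing as random changes in sharpness. The function name and thresholds here are hypothetical.

```python
def blend_virtual(real_angle, virtual_angle, speed,
                  blur_threshold=0.5, full_blur_speed=2.0):
    """Relax the virtual camera toward the real path as camera speed rises.

    speed: per-frame angular speed, in the same units as the angles.
    Below blur_threshold the fully smoothed path is used; above
    full_blur_speed the virtual camera simply follows the real one.
    """
    t = (speed - blur_threshold) / (full_blur_speed - blur_threshold)
    t = max(0.0, min(1.0, t))  # clamp the blend factor to [0, 1]
    return (1.0 - t) * virtual_angle + t * real_angle
```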
The system might still have scope for improvement, but with a video score of 96, including a very high sub-score of 93 for stabilization, the Pixel 2 is already performing very well in the DxOMark Mobile ranking, and has us looking forward to future generations of Google's AI-powered hybrid stabilization system.
For more detail, read the original article on the Google Research Blog.