Google used a 360

Photo: dpreview.com

Google has published an overview on its AI Blog that details how it developed the technology that powers the Portrait Light feature found on its newer Pixel smartphones. While the entire process is performed on the device using machine learning, not everything about the process is artificial.

Google explains how it used real-world examples to create two machine learning algorithms that ‘help create attractive lighting at any moment for every portrait — all on your mobile device.’

The first is Automatic Light Placement. This machine learning model attempts to replicate a photographer’s job of assessing the lighting in a scene and compensating with artificial light accordingly to achieve the best possible portrait.

Estimating the high dynamic range, omnidirectional illumination profile from an input portrait. The three spheres at the right of each image, diffuse (top), matte silver (middle), and mirror (bottom), are rendered using the estimated illumination, each reflecting the color, intensity, and directionality of the environmental lighting. Photo and caption by Google.

To do this, Google says it first ‘estimate[s] a high dynamic range, omnidirectional illumination profile for a scene based on an input portrait’ using technology Google researchers detailed in a white paper earlier this year. The result is a sphere that ‘infers the direction, relative intensity, and color of all light sources in the scene coming from all directions, considering the face as a light probe.’ Google also estimates the head position of the subject using MediaPipe Face Mesh, a neural network-powered tool that tracks the position of a subject’s face in real time using 468 3D ‘face landmarks.’
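MediaPipe Face Mesh is an open-source library, so the landmark-detection step it handles can be sketched directly. Below is a minimal Python example of extracting the 468 3D face landmarks from a single image; the file name and the comments about downstream use are illustrative assumptions, not a description of Google's internal pipeline.

# Minimal sketch: detect the 468 3D face landmarks with the open-source
# MediaPipe Face Mesh solution. "portrait.jpg" is a hypothetical input file.
import cv2
import mediapipe as mp

image_bgr = cv2.imread("portrait.jpg")
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB

with mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as face_mesh:
    results = face_mesh.process(image_rgb)

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark  # 468 normalized (x, y, z) points
    print(len(landmarks))  # -> 468
    # These landmark positions are what a head-pose estimate can be built from,
    # which is then combined with the inferred illumination profile.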

Google uses this information to determine where the synthetic light should be positioned, drawing on real-world studio portrait examples. Specifically, Google says it tries to recreate ‘a classic portrait look, enhancing any pre-existing lighting directionality in the scene while targeting a balanced, subtle key-to-fill lighting ratio of about 2:1.’
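The 2:1 key-to-fill target is easy to illustrate numerically, even though the real placement model is learned from studio examples rather than hard-coded. The sketch below is only a toy illustration of the ratio idea; the function, its inputs, and the alignment with the dominant estimated light direction are all assumptions.

import numpy as np

def place_synthetic_key_light(fill_intensity, dominant_light_dir, key_to_fill_ratio=2.0):
    # Toy illustration: choose a key-light intensity roughly twice the estimated
    # fill level, and reinforce the scene's existing lighting directionality by
    # aligning the synthetic key light with the strongest estimated light source.
    key_intensity = key_to_fill_ratio * fill_intensity
    key_direction = dominant_light_dir / np.linalg.norm(dominant_light_dir)
    return key_intensity, key_direction

# Example: a fill level of 0.3 and a dominant light coming from camera-left.
intensity, direction = place_synthetic_key_light(0.3, np.array([-0.7, 0.2, 0.7]))
print(intensity)  # 0.6, i.e. a ~2:1 key-to-fill ratio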

Using all of the data Portrait Light has parsed thus far, Google then uses the second tool, Data-Driven Portrait Relighting, to ‘add the illumination from a directional light source to the original photograph.’

To make the lighting look as realistic as possible, Google trained the machine learning model using ‘millions of pairs of portraits both with and without extra light.’ To create this dataset, Google used the Light Stage computational illumination system, pictured below, a ‘spherical lighting rig [that] includes 64 cameras with different viewpoints and 331 individually-programmable LED light sources.’

Google photographed multiple subjects with varying face shapes, genders, skin tones, hairstyles and more to create a diverse dataset for the neural network to work with. As for how each person was photographed using Light Stage, Google breaks down the process:

‘We photographed each individual illuminated one-light-at-a-time (OLAT) by each light, which generates their reflectance field — or their appearance as illuminated by the discrete sections of the spherical environment. The reflectance field encodes the unique color and light-reflecting properties of the subject’s skin, hair, and clothing — how shiny or dull each material appears. Due to the superposition principle for light, these OLAT images can then be linearly added together to render realistic images of the subject as they would appear in any image-based lighting environment, with complex light transport phenomena like subsurface scattering correctly represented.’

Left: Example images from an individual’s photographed reflectance field, their appearance in the Light Stage as illuminated one-light-at-a-time. Right: The images can be added together to form the appearance of the subject in any novel lighting environment. Photos and caption by Google.
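Because of the superposition principle described above, relighting from a reflectance field boils down to a weighted sum of the OLAT captures, with the weights taken from the target lighting environment sampled at each Light Stage light direction. A rough sketch, assuming the OLAT frames are stacked in a NumPy array (the shapes and data layout are assumptions, not Google's actual format):

import numpy as np

def relight_from_olat(olat_images, light_weights):
    # olat_images:   (num_lights, H, W, 3) linear-radiance OLAT captures
    # light_weights: (num_lights, 3) RGB intensity of the target environment
    #                sampled at each of the Light Stage's light directions
    # Superposition of light: sum_i weight_i * OLAT_i, per color channel.
    return np.einsum("nhwc,nc->hwc", olat_images, light_weights)

# Toy example with 331 lights and a tiny 4x4 image.
olat = np.random.rand(331, 4, 4, 3)
weights = np.random.rand(331, 3)
print(relight_from_olat(olat, weights).shape)  # (4, 4, 3)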

To ensure the process of applying lighting effects to the image is as efficient and effective as possible, the team trained the relighting model to output a low-resolution quotient image. This makes the process less resource-intensive and ‘encourages only low-frequency lighting changes, without impacting high-frequency image details.’

The pipeline of our relighting network. Given an input portrait, we estimate per-pixel surface normals, which we then use to compute a light visibility map. The model is trained to produce a low-resolution quotient image that, when upsampled and applied as a multiplier to the original image, produces the original portrait with an extra light source added synthetically into the scene. Image and caption by Google.
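The quotient-image step itself is simple to express: the network predicts a low-resolution ratio between the relit and original images, which is then upsampled and applied as a per-pixel multiplier. A hedged sketch of just that final step (the resolutions and the bilinear upsampling choice are assumptions for illustration):

import numpy as np
import cv2

def apply_quotient_image(original, low_res_quotient):
    # original:         (H, W, 3) input portrait in linear color
    # low_res_quotient: (h, w, 3) predicted relit/original ratio at low resolution
    H, W = original.shape[:2]
    # Upsampling the quotient keeps the lighting change low-frequency, so the
    # high-frequency detail in the original image passes through untouched.
    quotient_full = cv2.resize(low_res_quotient, (W, H), interpolation=cv2.INTER_LINEAR)
    return original * quotient_full

# Toy example: a 512x512 portrait and a 64x64 predicted quotient image.
portrait = np.random.rand(512, 512, 3).astype(np.float32)
quotient = np.random.rand(64, 64, 3).astype(np.float32)
print(apply_quotient_image(portrait, quotient).shape)  # (512, 512, 3)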

All of this information alone, though, doesn't account for various parts of a subject's face being closer to or farther from the synthetic light source. To keep the synthetic lighting as realistic as possible, Google applies Lambert's law to the input image to create a ‘light visibility map’ for the desired synthetic lighting direction, which results in a final product that more accurately represents studio lighting.
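Lambert's law makes that visibility map straightforward to compute once per-pixel surface normals have been estimated: the added intensity falls off with the cosine of the angle between the surface normal and the synthetic light direction. A minimal sketch, with the normal map and light direction as assumed inputs:

import numpy as np

def light_visibility_map(normals, light_dir):
    # normals:   (H, W, 3) unit surface normals estimated from the portrait
    # light_dir: (3,) vector pointing from the surface toward the synthetic light
    light_dir = light_dir / np.linalg.norm(light_dir)
    # Lambert's law: reflected intensity is proportional to max(0, n . l), so
    # pixels facing away from the synthetic light receive no extra illumination.
    return np.clip(np.einsum("hwc,c->hw", normals, light_dir), 0.0, None)

# Toy example: flat normals facing the camera, light placed up and to the left.
normals = np.zeros((4, 4, 3)); normals[..., 2] = 1.0
print(light_visibility_map(normals, np.array([-0.5, 0.5, 0.7])))  # ~0.70 everywhere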

Google notes the entire process results in a model that is capable of running on mobile devices and takes up just under 10MB, which is impressive considering how much is going on behind the scenes.
