Ever wondered how the Google Pixel 2 camera takes such great portrait shots? Well, the researchers behind the tech submitted a research paper earlier this week that explains how it works.
Neal Wadhwa and his team wrote the comprehensive paper for ACM Transactions on Graphics, a bimonthly peer-reviewed scientific journal. The paper, to be published in the August 2018 issue, outlines the team's work in creating a single-camera shallow depth-of-field (DOF) effect.
Wadhwa's team wanted to provide an easy user experience that combines the best features of a DSLR and a smartphone. The goals were fast processing, high-resolution output, a one-button capture experience and a convincing DOF result.
The system they came up with uses two technologies in tandem, but can fall back to either one alone.
A neural network that recognizes people
The first is a neural network the team trained to segment people and their accessories out of images. The segmentation uses face detection to locate people in the image. The network then infers a low-resolution 'mask,' a map that hides some parts of an image and reveals the rest.
In this case, the mask is roughly shaped like the person photographed. If you were to see it, the person shape would be white and the rest of the image black. Once the low-resolution mask is set, the system upscales it to full resolution using edge-aware filtering, a technique that smooths the mask while keeping its boundary aligned with real edges in the photo.
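The paper describes its own purpose-built edge-aware filtering for this step. To get a feel for the general idea, here's a minimal joint bilateral upsampling sketch in Python with NumPy. Everything in it, from the function name to the parameter values, is illustrative rather than taken from the paper:

```python
import numpy as np

def edge_aware_upsample(mask_lo, image, sigma_spatial=2.0, sigma_range=0.1):
    """Upsample a low-resolution mask to image resolution with a joint
    bilateral filter, so the mask's boundary snaps to intensity edges in
    the full-resolution photo. `image` is assumed to be a float array
    scaled to [0, 1]. Illustrative only, not the paper's filter."""
    h, w = image.shape[:2]
    # Nearest-neighbour upscale as a crude starting point.
    ys = np.arange(h) * mask_lo.shape[0] // h
    xs = np.arange(w) * mask_lo.shape[1] // w
    mask = mask_lo[ys][:, xs].astype(np.float64)

    gray = image.mean(axis=2) if image.ndim == 3 else image
    r = int(3 * sigma_spatial)
    out = np.zeros_like(mask)
    weight = np.zeros_like(mask)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            # np.roll wraps at the borders; a real filter would pad instead.
            shifted_mask = np.roll(mask, (dy, dx), axis=(0, 1))
            shifted_gray = np.roll(gray, (dy, dx), axis=(0, 1))
            # Spatial weight falls off with distance; range weight falls off
            # with how different the guide image looks at the two locations.
            w_s = np.exp(-(dy * dy + dx * dx) / (2 * sigma_spatial ** 2))
            w_r = np.exp(-(gray - shifted_gray) ** 2 / (2 * sigma_range ** 2))
            out += w_s * w_r * shifted_mask
            weight += w_s * w_r
    return out / np.maximum(weight, 1e-8)
```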
All this allows the system to determine which pixels in the image belong to the person and which do not. The blur effect can then be applied to the non-person pixels only.
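Once a full-resolution mask exists, the simplest possible version of that blur is a straight blend between the sharp photo and a blurred copy. A rough sketch, assuming a float image and a mask where 1.0 means 'person':

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def composite_blur(image, mask, sigma=8.0):
    """Blend a sharp foreground with a blurred copy of the image using
    the person mask (1.0 = person, 0.0 = background). `image` is an
    (H, W, 3) float array; `mask` is (H, W). Sigma is arbitrary here."""
    blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))
    m = mask[..., None]  # broadcast the mask across colour channels
    return m * image + (1.0 - m) * blurred
```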
Measuring distance with split pixels
The second piece, if the hardware is available, uses a dual-pixel sensor (typically used on phones to provide fast autofocus) to create a map of depth in the image. Dual-pixel sensors split each individual pixel in half, producing two slightly different views of the scene. This creates a slight disparity between the views that depends on an object's distance from the camera's focal plane. In other words, the farther something is from the plane of focus, the greater the disparity.
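In code, the core of that idea is just measuring a tiny horizontal shift between the two half-images. Here's a toy integer-shift version; the paper's actual matching works at subpixel precision, since dual-pixel disparities span only a few pixels at most:

```python
import numpy as np

def dual_pixel_disparity(left, right, max_shift=4, patch=8):
    """Estimate a coarse disparity per tile by testing small horizontal
    shifts of the right half-image against the left, keeping the shift
    with the lowest sum of squared differences. Real dual-pixel
    disparities are fractions of a pixel, so this integer version is
    purely illustrative."""
    h, w = left.shape
    disp = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(max_shift, w - patch - max_shift + 1, patch):
            ref = left[i:i + patch, j:j + patch]
            errs = [np.sum((ref - right[i:i + patch, j + s:j + s + patch]) ** 2)
                    for s in range(-max_shift, max_shift + 1)]
            disp[i // patch, j // patch] = np.argmin(errs) - max_shift
    return disp
```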
The system reads that disparity information and constructs a depth map, then uses the depth to blur the background accurately. As a result, objects farther from the plane of focus are more blurred.
Applying blur to images
However, when it comes to applying the blur, the system actually departs from reality. On a real camera, there's a single distance at which objects are perfectly in focus, the focal plane. While the system could hold to that real-world standard, it doesn't have to.
One example in the paper shows a photo of a dog. If the system applied a physically correct blur mapping, the dog's nose would be out of focus. Instead, the team decided to map the blur smartly to ensure the subject is always properly in focus.
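One way to express that trick is a 'dead zone' around the subject's disparity, inside which nothing gets blurred. The numbers below are invented for illustration; the paper tunes its own mapping:

```python
import numpy as np

def blur_radius_from_disparity(disp, subject_disp, keep_band=0.5, gain=6.0):
    """Map disparity to a blur radius with a dead zone around the
    subject's disparity, so the whole subject (nose included, in the dog
    example) renders sharp. keep_band and gain are invented values."""
    offset = np.abs(disp - subject_disp)
    # Anything within keep_band of the subject gets zero blur; beyond
    # that, blur grows linearly with distance from the subject.
    return gain * np.maximum(offset - keep_band, 0.0)
```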
Another example is how the system shapes bokeh, the photography term for the shape and quality of the defocused areas in a photo. On a traditional camera with a six-bladed aperture, the bokeh would be hexagonal. The team chose to render the ideal circular bokeh instead.
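Rendering circular bokeh amounts to spreading each defocused pixel into a uniform disc. A small sketch of such a kernel (the radius, like everything else here, is illustrative):

```python
import numpy as np
from scipy.signal import fftconvolve

def disc_kernel(radius):
    """A uniform circular kernel. Convolving defocused regions with this
    disc turns bright points into round highlights, i.e. circular bokeh.
    A hexagonal kernel would mimic a six-bladed aperture instead."""
    r = int(np.ceil(radius))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    k = (x ** 2 + y ** 2 <= radius ** 2).astype(np.float64)
    return k / k.sum()

# Usage on a single colour channel (radius chosen arbitrarily):
# blurred = fftconvolve(channel, disc_kernel(9.0), mode="same")
```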
The system in practice
The system can use these components in any combination, for a total of three methods of creating a fake DOF effect. The first combines segmentation and dual-pixel depth; Portrait Mode on the Pixel 2 rear camera works this way. The rear camera's dual-pixel sensor provides depth information for accurate background blurring, while segmentation picks out the people in the image so they aren't blurred.
The second method uses segmentation alone. Taking a photo with the Pixel 2 selfie camera works this way. That camera doesn't have split pixels, but it doesn't really need them: in most selfies, a uniformly blurred background doesn't look out of place.
Finally, the third method uses only dual-pixel depth mapping. This comes into play when taking photos using the Pixel 2 rear camera without Portrait Mode. The phone applies subtle blur to objects based on the disparity information.
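Putting the pieces together, the decision logic across the three methods might look something like the sketch below. It reuses the hypothetical helpers from the earlier snippets and is one reading of the paper's logic, not Google's actual code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def variable_blur(image, radii, max_sigma=10.0, levels=4):
    """Cheap spatially varying blur: blend a small stack of uniformly
    blurred copies, weighted by each pixel's requested blur radius.
    A stand-in for the paper's real rendering step."""
    sigmas = np.linspace(0.0, max_sigma, levels)
    stack = np.stack([gaussian_filter(image, sigma=(s, s, 0)) if s > 0 else image
                      for s in sigmas])                      # (levels, H, W, 3)
    t = np.clip(radii / max_sigma, 0.0, 1.0) * (levels - 1)
    lo = np.floor(t).astype(int)
    hi = np.minimum(lo + 1, levels - 1)
    frac = (t - lo)[..., None]
    iy, ix = np.indices(radii.shape)
    return (1 - frac) * stack[lo, iy, ix] + frac * stack[hi, iy, ix]

def synthetic_dof(image, mask=None, disparity=None):
    """The three modes described above, using the illustrative helpers
    composite_blur and blur_radius_from_disparity from earlier snippets."""
    if mask is not None and disparity is not None:
        # Rear camera, Portrait Mode: depth drives the blur amount and the
        # person mask protects the subject from being blurred.
        radii = blur_radius_from_disparity(disparity, subject_disp=0.0)
        return variable_blur(image, radii * (1.0 - mask))
    if mask is not None:
        # Selfie camera: no split pixels, so blur the background uniformly.
        return composite_blur(image, mask)
    if disparity is not None:
        # Rear camera without Portrait Mode: subtle depth-driven blur only.
        radii = blur_radius_from_disparity(disparity, subject_disp=0.0)
        return variable_blur(image, 0.5 * radii)
    return image
```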
However, all of this hardly scratches the surface of how the technology functions. For a full breakdown, you can read the research paper here.