How Google Pixel 3’s Camera Works Wonders With Just One Rear Lens

When Samsung revealed the Galaxy Note 9 back in August, it showed off new AI-powered camera features, like flaw detection and a scene optimizer that tunes the exposure and color of a shot before you’ve even captured it. When Apple launched the iPhone XS and XS Max last month, it talked a lot about how the new phones’ AI-specific neural processor enabled better photos, especially Portrait shots.

Now it’s Google’s turn to boast about its AI-enhanced smartphone camera, and to show how its software smarts and access to vast networks of data give it a leg up on the competition.

Earlier today Google announced its new Pixel 3 and Pixel 3 XL smartphones. The new phones were expected (and had been leaked weeks beforehand), but since Google makes the vast majority of its revenue from digital advertising, any new hardware launch from the company piques a particular kind of interest. Google may not sell nearly as many phones as its flagship competitors do, but it knows that if it’s going to compete at all in the high-end smartphone market, it has to have a killer camera. The cameras on last year’s Pixel 2 and Pixel 2 XL were widely acknowledged to be excellent. How was Google going to make this year’s phones exceptional?

The answer, for Google, was clear: Anything you can do in AI, we can do better. The challenge was “not to launch gimmicky features, but to be very thoughtful about them, with the intent to let Google do things for you on the phone,” said Mario Queiroz, vice president of product management at Google.

At the same time, being thoughtful about using AI in photography also means being careful not to insert biases. This is something Google has had to reckon with in the past, when its image-labeling technology made a terrible mistake, underscoring the challenges of using software to categorize photos. Google doing more things for you, as Queiroz put it, means it’s making more decisions about what a “good” photo looks like.

Third Time’s a Charm

The company’s work on the Pixel 3 camera started before the Pixel 2 phone even launched, according to Isaac Reynolds, a product manager on the Google Pixel camera team. “If the phone starts somewhere between 12 to 24 months in advance [of shipping], the camera starts six to eight months before that,” he says. “We’ve been thinking about the Pixel 3 camera for a long time, certainly more than a year.”

During that time, the Pixel camera team identified several features (as many as 10, though not all would make it into the phone) that Google’s computational photography researchers were working on. “It’s not, ‘Hey, let’s assign a team to this particular project.’ We have a whole team that’s already researching these things,” says Sabrina Ellis, director of product management for Pixel. “For example, low light is an entire area of research for us. And the question becomes, ‘Is this something that would be a great feature for users or not?’”

Eventually, the Pixel team narrowed down the list to the camera features that were both technically possible and actually useful. For example, new features called Top Shot, Photobooth, Super Res Zoom, and Motion Auto Focus all use artificial intelligence and machine learning to either identify or compensate for our human fallibility. (Turns out, we’re not very good at standing still while taking photos.)

To be sure, some of the improvements to the Google Pixel 3 camera come from hardware upgrades. The front-facing camera now consists of two 12-megapixel lenses, better for wide-angle selfies; a slider tool below the viewfinder lets you adjust how wide you want the shot to go. The 12.2-megapixel rear camera has been improved, and the camera sensor is a “newer generation sensor,” though Reynolds conceded that it “has a lot of the same features.” The Pixel 3 also has a flicker sensor, which is supposed to mitigate the flicker effect you get when you’re shooting a photo or video under certain indoor lighting.

Some of the “new” features might not seem all that new, at least in the broader smartphone market. You can now adjust the depth effect on a Portrait photo after it’s been captured on the Pixel 3, something Apple and Samsung already offer on their flagship phones. A synthetic fill flash brightens selfies snapped in the dark; Apple has done this for a while too. The Pixel’s dynamic range has been improved again, but these days, HDR-done-right is a baseline feature on flagship phones, not a standout one.

There’s also the fact that the Google Pixel 3 still has a single-lens rear camera, while all of its high-end smartphone competitors have gone with double or even triple the number of lenses. Google argues it doesn’t really need another lens (“we found it was unnecessary,” Queiroz says) because of the company’s expertise in machine learning technology. Pixel phones already extract enough depth information from the camera’s dual-pixel sensor, then run machine learning algorithms, trained on over a million photos, to produce the desired photo effect.
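Google hasn’t published the Pixel’s portrait pipeline, but the final step it describes, blurring the background according to estimated depth, is easy to illustrate. Here’s a minimal sketch in Python/NumPy that assumes you already have a depth map in [0, 1] (a stand-in for what a dual-pixel disparity model would produce); synthetic_bokeh and its parameters are invented for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_bokeh(image, depth, focus_depth=0.2, max_sigma=8.0):
    """Blur each pixel in proportion to its distance from the focal plane.

    image: HxWx3 float array in [0, 1]
    depth: HxW float array in [0, 1] (stand-in for a dual-pixel estimate)
    """
    # Desired blur strength per pixel: zero at the focal plane,
    # growing toward max_sigma for the most out-of-focus regions.
    sigma_map = max_sigma * np.abs(depth - focus_depth)

    # Precompute a few progressively blurred copies of the image, then
    # pick, per pixel, the copy whose blur is closest. A real pipeline
    # would blend between layers and respect occlusion edges.
    sigmas = np.linspace(0.0, max_sigma, 5)
    stack = np.stack([image] + [gaussian_filter(image, sigma=(s, s, 0))
                                for s in sigmas[1:]])
    layer = np.abs(sigmas[:, None, None] - sigma_map[None]).argmin(axis=0)
    return np.take_along_axis(stack, layer[None, :, :, None], axis=0)[0]
```

Set focus_depth near the subject’s estimated depth and the subject stays sharp while the background melts away: the portrait look, no second lens required.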

It’s exactly the kind of answer you’d expect from a company that specializes in software. It’s also a convenient answer when camera components are some of the key parts driving up the cost of fancy smartphones.

All Eyes on AI

But there are some features launching with the Pixel 3 that do appear to be clear beneficiaries of Google’s AI prowess, specifically of Google’s Visual Core, a co-processor that Google developed with Intel to serve as a dedicated AI chip for the Pixel camera. The Visual Core first rolled out with the Pixel 2 smartphone, a signal that Google was willing to invest in and customize its own chips to make something better than an off-the-shelf component. It’s what powers the Pixel’s commendable HDR+ mode.

This year, the Visual Core has been updated, and it handles more camera-related tasks. Top Shot is one of them. It captures a Motion Photo, then automatically selects the best still image from the bunch. It looks for open eyes and big smiles, and rejects shots with windswept hair or faces blurred by too much movement.
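Google hasn’t published how Top Shot weighs those signals, but the selection step boils down to scoring every frame in the burst and keeping the winner. Here’s a toy sketch in Python; the FrameFeatures fields and the weights are invented stand-ins for the on-device models Google describes.

```python
from dataclasses import dataclass

@dataclass
class FrameFeatures:
    eyes_open: float       # 0..1, fraction of detected faces with open eyes
    smile_strength: float  # 0..1, average smile confidence across faces
    sharpness: float       # 0..1, higher means less motion blur

def top_shot(frames: list) -> int:
    """Return the index of the 'best' frame in a Motion Photo burst.

    The weights are invented for illustration; the real system uses
    on-device models trained on annotated faces.
    """
    def score(f: FrameFeatures) -> float:
        return 0.4 * f.eyes_open + 0.3 * f.smile_strength + 0.3 * f.sharpness

    return max(range(len(frames)), key=lambda i: score(frames[i]))

# Example: the middle frame wins because eyes are open and it's sharp.
burst = [FrameFeatures(0.5, 0.9, 0.6),
         FrameFeatures(1.0, 0.8, 0.9),
         FrameFeatures(1.0, 0.4, 0.3)]
print(top_shot(burst))  # -> 1
```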

Photobooth is another one. The new feature is based on technology from the Google Clips camera, a tiny static camera that automatically captures moments throughout your day, or during an event like a birthday party. Photobooth only takes front-facing photos, but it works a little like Clips: You select the mode, raise the camera, and once the camera sees your face in the frame and sees you make an expression, it starts auto-snapping a bunch of photos.

If you’re trying to take a picture in the dark (so dark that your smartphone photos would normally look like garbage, as one Google product manager described it to me), the Pixel 3’s camera will suggest something called Night Sight. This feature isn’t launching with the phone, but is expected to arrive later this year. Night Sight requires a steady hand because it uses a longer exposure, but it fuses together a bunch of photos to create a nighttime photo that doesn’t look, well, like garbage. All of this without using the phone’s flash, too.
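Google hasn’t detailed Night Sight’s internals, but the “fuses together a bunch of photos” step is classic burst photography: align the frames to cancel hand shake, then average them, which cuts noise by roughly the square root of the number of frames. Below is a bare-bones sketch; align_to and merge_burst are hypothetical names, the alignment here is a crude global shift via phase correlation, and a real pipeline would align per tile and reject moving objects.

```python
import numpy as np

def align_to(reference: np.ndarray, frame: np.ndarray) -> np.ndarray:
    """Estimate a global integer-pixel shift by phase correlation, undo it."""
    f0 = np.fft.fft2(reference.mean(axis=2))
    f1 = np.fft.fft2(frame.mean(axis=2))
    corr = np.fft.ifft2(f0 * np.conj(f1))
    dy, dx = np.unravel_index(np.abs(corr).argmax(), corr.shape)
    # Interpret shifts larger than half the image as negative wrap-around.
    h, w = corr.shape
    dy = dy - h if dy > h // 2 else dy
    dx = dx - w if dx > w // 2 else dx
    return np.roll(frame, shift=(dy, dx), axis=(0, 1))

def merge_burst(frames: list) -> np.ndarray:
    """Average an aligned burst: noise drops ~sqrt(len(frames))."""
    ref = frames[0]
    aligned = [ref] + [align_to(ref, f) for f in frames[1:]]
    return np.mean(aligned, axis=0)
```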

Super Res Zoom, another feature new to the Pixel 3, isn’t just a software tweak; it requires a lens that’s a little sharper than the camera’s sensor, ensuring the resolution isn’t limited by the sensor. It enhances the resolution of a photo you’ve zoomed way in on by using machine learning to adjust for the movement of your hand. (If you put the smartphone on a tripod or stable surface, you can actually see the frame moving slightly, as the camera mimics your hand movement.)
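The trick behind Super Res Zoom is that hand tremor shifts each frame of a burst by a sub-pixel amount, so collectively the burst samples the scene on a finer grid than any single frame. This toy sketch assumes the per-frame sub-pixel shifts are already known (estimating them robustly is the hard part, and where the machine learning comes in); it simply drops each frame’s samples onto a 2x grid.

```python
import numpy as np

def super_res(frames, shifts, factor=2):
    """Place burst frames onto a finer grid using known sub-pixel shifts.

    frames: list of HxW grayscale arrays
    shifts: list of (dy, dx) sub-pixel offsets per frame, e.g. (0.5, 0.0)
    factor: upsampling factor of the output grid
    """
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dy, dx) in zip(frames, shifts):
        # Each low-res sample lands at its shifted spot on the fine grid.
        fy = np.clip(np.round((ys + dy) * factor).astype(int), 0, h * factor - 1)
        fx = np.clip(np.round((xs + dx) * factor).astype(int), 0, w * factor - 1)
        np.add.at(acc, (fy, fx), frame)
        np.add.at(weight, (fy, fx), 1.0)
    weight[weight == 0] = 1.0  # leave never-sampled cells at zero
    return acc / weight
```

With four frames shifted by half-pixel offsets, every cell of the 2x grid receives a real sample, which is the same effect the Pixel gets for free from your shaking hand.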

There are almost too many new features to take full advantage of. Without having actually used the Pixel 3 yet, it’s hard to know which of these are truly useful and which are gimmicks, the very thing Queiroz said Google was trying to avoid.

Picture Perfect

This relatively new trend in computational photography, the use of AI and machine learning to compensate for a lack of hardware or for human imperfection, raises some questions about the existence of bias in the machine learning models Google is using. Google’s photo data sets have already been shown to have bias, as have others. One thing stood out to me as I got a sneak peek at Google’s new Pixel cameras: There were an awful lot of references to photos with smiling, happy faces.

Top Shot looks for photos that might be considered decent by any photo standards, but it also looks for that group shot where you’re all smiling. Photobooth won’t start auto-snapping photos until you’ve made some sort of expression, like a smile or a goofy face. Google uses AI to make photos look better overall, for sure, but in doing so it’s also making subtle determinations about what a good photo is.

“If AI is just being used to make photos look better, then everyone likes it,” said Venkatesh Saligrama, a professor at Boston University’s school of engineering who has researched gender biases in machine learning. “On the other hand, if it’s using information more broadly, to say this is what they like and what they don’t like, and altering your photography that way, then it might not be something you want out of the system.”

“It could be applying broader cultural influences, and in some cases that may not be good,” Saligrama added.

Reynolds, the Pixel camera product manager, says his team likens some of the new features to building a “shot list” of what photos most people might want to take in a given situation, say, at a wedding. “Everyone goes into a wedding having a shot list, and when we built Top Shot, we had those sorts of lists in mind,” he said. “And somewhere on that shot list is also a very serious pose, a dramatic photo. But I think we decided to focus on that group photo where everyone is smiling at the same time.”

Google also has specific machine learning models that can detect surprise, or amusement, in certain scenarios, Reynolds said. It has annotated over 100 million faces. It knows these things.

For the most part, this technology may very well translate into wow-worthy photos on the Google Pixel 3. It may surpass the already-impressive Google Pixel 2 camera. Or it may just nudge the future of smartphone photography forward slightly, in a year when every major smartphone camera is pretty darn good. One thing’s certain: Google’s doing it the Google way.

