Smarter cameras can produce better results
Last week, I spent many hours poring over images from the newly released Google Pixel. I was checking them against photos from the iPhone 7 and Samsung Galaxy S7 Edge in what has become a twice-yearly exercise: trying to discern which new phone has the best camera. The conclusion was that the Pixel is pretty much the best, but the difference between the three was closer than ever.
One area where the Pixel seemed to be lacking, however, was in low light. In the images from the iPhone and the Galaxy S7 you’d see a mix of evenly distributed noise and some noise reduction, but the Pixel’s images featured scattered, splotchy patches.
One of the few things that were communicated to us before we were handed review units was that Google recommended we leave the “Auto HDR+” feature on while shooting. It sounded like hype, and the mode produced some garish results in daylight, so while I was testing I turned Auto HDR+ off. I turned it off on the iPhone and the S7 Edge, too — I wanted an even baseline across all three phones.
But the difference with the Pixel is that its Auto HDR+ mode isn’t just an added feature; it’s part of the core function of the camera. That’s something we uncovered in our interview with Marc Levoy, the head of Google’s computational photography team. Levoy sounded extremely proud of what Auto HDR+ was capable of in low light — even without optical image stabilization, which the Pixel lacks. From that article:
“Mathematically speaking, take a picture of a shadowed area — it’s got the right color, it’s just very noisy because not many photons landed in those pixels,” says Levoy. “But the way the mathematics works, if I take nine shots, the noise will go down by a factor of three — by the square root of the number of shots that I take. And so just taking more shots will make that shot look fine. Maybe it’s still dark, maybe I want to boost it with tone mapping, but it won’t be noisy.”
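Levoy’s square-root claim is just the statistics of averaging independent noise, and you can verify it yourself. The quick simulation below (a sketch of the principle, not Google’s actual HDR+ pipeline — the pixel value, noise level, and frame count are arbitrary) averages nine noisy readings of the same pixel and shows the noise dropping by roughly a factor of three:

```python
import random
import statistics

def noisy_shot(true_value, noise_sigma):
    """One simulated pixel reading: the true brightness plus Gaussian sensor noise."""
    return true_value + random.gauss(0, noise_sigma)

def averaged_shot(true_value, noise_sigma, n_frames):
    """Average n_frames independent readings of the same pixel, as in frame stacking."""
    return sum(noisy_shot(true_value, noise_sigma) for _ in range(n_frames)) / n_frames

random.seed(0)
TRIALS = 20_000   # enough repetitions to measure the noise level reliably
SIGMA = 10.0      # arbitrary noise level for the simulation

single_noise = statistics.stdev(noisy_shot(100.0, SIGMA) for _ in range(TRIALS))
stacked_noise = statistics.stdev(averaged_shot(100.0, SIGMA, 9) for _ in range(TRIALS))

print(f"single-shot noise:  {single_noise:.2f}")
print(f"nine-shot noise:    {stacked_noise:.2f}")
print(f"improvement factor: {single_noise / stacked_noise:.2f}")  # ~3, i.e. sqrt(9)
```

Note that the average keeps the same brightness as a single shot — only the noise shrinks — which is why, as Levoy says, the result may still be dark and need a tone-mapping boost, but it won’t be noisy.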
Now that sounded like a good case for using Auto HDR+. So over the last few days I took a few more shots in low light with the Pixel. I’ve lined them up against images of the same scenes taken by the iPhone 7.
The difference isn’t massive, but the Pixel definitely has the edge in both of those cases. It captured more detail than the iPhone in both the highlights and in the shadows.
Levoy also said that “by taking multiple images and aligning them, we can afford to keep the colors saturated in low light. Most other manufacturers don’t trust their colors in low light, and so they desaturate, and you’ll see that very clearly on a lot of phones — the colors will be muted in low light, and our colors will not be as muted.”
Of course, low light is still the most challenging environment for any camera, and the Pixel will still struggle from time to time. And while Levoy says that the Pixel is fine without optical image stabilization because it takes “a number of shorter exposures and merge[s] them,” as someone with a very unsteady hand, I’d still appreciate having it. And for as impressive as Auto HDR+ was in low light, it didn’t win out every time.
This is all to say that Google was right: the Pixel can perform really well in low light, better than its competitors in some regards. I’m still not pleased with the inconsistency of Auto HDR+ in daylight situations — especially because the mode activates every time you open the camera app even if you turn it off. But seeing what kind of difference the computational photography approach can make in this one particular situation has me foaming at the mouth wondering what Google might be able to do with its mobile cameras down the road.
As for the phones you can buy now, the iPhone 7 and the S7 Edge have excellent cameras. Google’s is still just a little bit better.