When a new Android smartphone is announced, it’s not uncommon to see commentators explaining how its camera is “better” than the latest iPhone’s camera, based on its technical specifications. Then, unexpectedly, former Google SVP Vic Gundotra - no stranger to cameras and photography - announced on his Facebook page that, thanks to the iPhone 7, “the end of the D-SLR, for most people, has already arrived”.

Image by Scott Webb

Gundotra worked on Google Plus, a social media platform that puts photography front and centre.
It featured innovative smart editing of images and could automatically identify facial features. Gundotra also spent 15 years at Microsoft. So, when he came out in favour of the iPhone 7, people listened. And many vocally disagreed.
The comments section was alive with partisan consumers quoting tech specs or DxO tests, in an attempt to prove that beautiful images were a numbers game. Gundotra doubled down on his claim that iPhone had won the smartphone camera race, adding that he couldn't see himself abandoning the format for years to come.
It quickly became clear that the angry commenters and Gundotra were, on some level, speaking at cross purposes. It has long been a trait of the angry tech fan to say “this number is higher, therefore this product is better”. Vic, on the other hand, was talking about the benefits of “computational photography (Portrait mode, as Apple calls it)”.
What did Apple do to put themselves so far ahead of the game? The clues were all in his blog post, and here’s the core of it: “the greatest innovation isn't even happening at the hardware level - it's happening at the computational photography level.” The differentiator, he believes, has become the way that AI interacts with hardware.
So when Vic said, “If you don't mind being a few years behind, buy an Android,” he was talking about a new approach to making cameras. One in which good hardware specs are no longer enough.
What’s required is a marriage of hardware and software. And this, Vic states, requires an unfractured platform in which the manufacturer has granular control over both the hardware and the software. Google and Android, he claims, simply can’t do this.
Some of the commenters dismissed Vic’s opinion because he was, they believed, overemphasising Apple’s Portrait mode. This mode blurs the background of a portrait shot to create the look of a D-SLR photo.
In the case of portrait mode, Apple uses its dual lens technology to create a depth map. This gives a better result than the edge detection that less-advanced cameras use for simulated bokeh. Just as crucially, Apple is using machine learning, which means the results continue to get better over time.
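To make the depth-map idea concrete, here is a minimal sketch in Python of how a depth map can drive a synthetic background blur. Everything here is illustrative - the function names, the naive box blur, and the single depth threshold are assumptions for the sketch, not Apple's actual pipeline, which blends blur strength continuously with depth and refines edges with machine learning.

```python
import numpy as np

def box_blur(img, k=5):
    """Naive box blur: average each pixel over a (2k+1) x (2k+1) window, per channel."""
    h, w = img.shape[:2]
    pad = np.pad(img, ((k, k), (k, k), (0, 0)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += pad[k + dy:k + dy + h, k + dx:k + dx + w]
    return out / (2 * k + 1) ** 2

def synthetic_bokeh(img, depth, threshold):
    """Blur only the pixels whose depth exceeds the threshold (the 'background').

    img:   H x W x 3 array of pixel values
    depth: H x W array, larger values = further from the camera
    """
    blurred = box_blur(img)
    background = (depth > threshold)[..., np.newaxis]  # broadcast over channels
    return np.where(background, blurred, img.astype(float))
```

A dual-lens depth map makes the `depth` input reliable per pixel, which is why the result beats single-lens edge detection: the mask comes from measured distance rather than guessed outlines.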
Apple understands that to the majority of consumers, and even informed consumers like Gundotra, it’s the result that’s important; whether it's achieved with hardware or software isn't the issue for most people. Increasingly, advances in photographic technology are coming in the form of software upgrades, as well as hardware ones.
The iPhone 7 also features an Apple-designed image signal processor, which is used to detect faces and bodies, adjust exposure, correct white balance, capture wide colour gamuts, and reduce noise. This kind of hardware, paired with software that learns over time, is the kind of innovation that can make a huge difference to an enthusiast like Gundotra. However, it may not even appear on a traditional tech spec breakdown, which would explain why the comments section tended to overlook it.
It's important to remember that smartphones are among the most commonly used image-taking devices, and machine learning is already creating the perception that Apple is winning that particular race. So where is AI going to take us next?
Apple’s Photos app has been using machine learning for years to identify and sort objects and scenes. This makes it easier for users to search and categorise their images.
The next step would be to implement the same technology in the iPhone Camera app so that it can be used in real time to adjust exposure, white balance, and HDR settings.
The camera would be able to "learn" how best to shoot and edit a range of shots. The more shots that are taken, the smarter the software would become.
If this marks the advent of the smart camera, what kinds of features can we expect? Well, it’s now common for consumer cameras to have features like face recognition. A smart camera would not only learn to more accurately recognise faces of different ages and ethnicities - it would do so without having to be told to.
It could also learn to identify pets. To understand where the sky and land are in a landscape shot. To identify and correctly expose snowy scenes. To understand when it is shooting a sporting event. To help you shoot a sunset. To optimise itself for shooting objects such as fireworks, foliage, documents, cars, and more.
If a smart camera were able to identify these scenes, people, and objects, it could change settings on the fly, without the user having to browse nested menus and select the appropriate mode manually, as is common with consumer cameras today.
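The "change settings on the fly" step can be sketched very simply: once a classifier has labelled the scene with enough confidence, the camera looks up a preset instead of asking the user to pick a mode. The scene labels, preset values, and confidence threshold below are all hypothetical, chosen only to show the shape of the logic - they are not real iPhone settings.

```python
# Hypothetical presets: labels and values are illustrative only.
SCENE_PRESETS = {
    "snow":      {"exposure_compensation": +1.0, "white_balance_k": 6500},
    "sunset":    {"exposure_compensation": -0.3, "white_balance_k": 3200},
    "fireworks": {"exposure_compensation": -1.0, "white_balance_k": 5000},
}
DEFAULT_PRESET = {"exposure_compensation": 0.0, "white_balance_k": 5500}

def settings_for(scene_label, confidence, min_confidence=0.8):
    """Apply a scene preset automatically once the classifier is confident enough;
    otherwise fall back to neutral defaults rather than guess."""
    if confidence >= min_confidence and scene_label in SCENE_PRESETS:
        return SCENE_PRESETS[scene_label]
    return DEFAULT_PRESET
```

The interesting design choice is the confidence gate: a smart camera that silently applies the wrong preset is worse than one that does nothing, so low-confidence predictions fall through to the defaults.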
In a sense the age of the smart camera has already begun; machine learning is already here. However, it is likely that we are bordering on a historical moment in which AI and machine learning become so prevalent and sophisticated that cameras (including camera apps in smart devices) start to be labelled as smart cameras.
Perhaps this is what Vic Gundotra meant when he said: “if you truly care about great photography, you own an iPhone.”