jacobgoto's blog

jacobgoto's daily life

How the iPhone uses machine learning

Smart photo-management software is one thing, but artificial intelligence and machine learning arguably have an even bigger impact on how images are captured in the first place. Yes, lenses keep getting faster and sensors can always get a little bigger, but we are already pushing against the limits of physics when it comes to squeezing optical systems into slim mobile devices. Even so, phones today can in some situations take photos that look better than ones from a lot of dedicated camera gear, at least before post-processing. That is because traditional cameras cannot compete on another piece of hardware that is just as important to photography: the system on a chip, which includes the CPU, the image signal processor, and, increasingly, a neural processing unit (NPU).


This is the hardware that enables what is called computational photography, a broad term that covers everything from the fake depth-of-field effect in portrait mode to the algorithms that help drive the incredible image quality of the Google Pixel. Not all computational photography involves artificial intelligence, but artificial intelligence is undoubtedly a major part of it.


Apple uses this technology to drive its dual-camera portrait mode. The iPhone's image signal processor uses machine learning to recognize people with one camera, while the second camera creates a depth map that helps isolate the subject and blur the background. Recognizing people through machine learning was nothing new when the feature was introduced in 2016, since photo-organization software was already doing it. But doing it in real time, at the speed required by a smartphone camera, was a breakthrough.
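
To make the idea concrete, here is a minimal sketch of the general technique described above: combining a person-segmentation mask (the kind a machine learning model would produce) with a depth map (the kind a second camera would provide) to keep the subject sharp while blurring the background. This is not Apple's actual pipeline; the function name, the threshold parameter, and the use of NumPy/OpenCV are all illustrative assumptions.

```python
# Illustrative only: a simplified "portrait mode" blend, not Apple's implementation.
import numpy as np
import cv2  # OpenCV, used here for Gaussian blur


def fake_portrait_mode(image: np.ndarray,
                       person_mask: np.ndarray,
                       depth_map: np.ndarray,
                       subject_depth_threshold: float = 0.5) -> np.ndarray:
    """Blend a sharp foreground with a blurred background.

    image: HxWx3 uint8 photo.
    person_mask: HxW float in [0, 1], 1 where a person was detected (assumed input).
    depth_map: HxW float in [0, 1], smaller values = closer to the camera (assumed input).
    """
    # Treat a pixel as foreground if it is recognized as a person
    # or is closer than the assumed subject depth.
    foreground = np.clip(
        person_mask + (depth_map < subject_depth_threshold), 0.0, 1.0
    ).astype(np.float32)

    # Soften the mask edges so the sharp/blurred transition is not a hard cut-out.
    foreground = cv2.GaussianBlur(foreground, (21, 21), 0)

    # Heavily blur the whole frame to simulate background bokeh.
    blurred = cv2.GaussianBlur(image, (51, 51), 0)

    # Per-pixel blend: foreground stays sharp, background comes from the blurred copy.
    alpha = foreground[..., None]
    out = alpha * image.astype(np.float32) + (1.0 - alpha) * blurred.astype(np.float32)
    return out.astype(np.uint8)
```

The point of the sketch is the division of labor the paragraph describes: one signal says "this is a person," another says "this is close to the camera," and the image pipeline only has to blend a sharp and a blurred version of the frame. Doing that per frame, in real time, is where the dedicated silicon earns its keep.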