Deep Fusion

The revolution continues.


In the previous column I referred to the three revolutions in photography since its invention.

Now we are beginning to see one of the most significant steps forward for minuscule sensors in Apple’s catchily named Deep Fusion technology.

Phil Shills.

While this technology is all about merging disparate images to make a better whole, the software driving the process would not be possible without Apple’s A13 chip, designed in Cupertino on the ARM architecture and fabricated by TSMC. Since the original iPhone, one of Apple’s best decisions has been to design its processing chips in-house, permitting a laser-like focus on design not available with off-the-shelf silicon. Those chips are invisible to users but make much of the magic underlying computational photography possible. The in-house design also makes it harder for thieves like Samsung and others to steal Apple’s intellectual property, a particular expertise of certain Far East nations.

Deep Fusion will become available in iOS 13.2 (13.0, 13.1 and 13.1.1, with my iPad a victim of all three, are horribly buggy) and will work only on the latest iPhone 11 models. The idea is not new. Hasselblad offers merging of multiple pixel-shifted images in some of its ridiculously priced medium format digital cameras, as does Sony, and maybe others, in some of their full frame bodies. NASA has been using the technique for decades to enhance images from poor early sensors. But with all that processing power in the A13 – Apple claims 8.5 billion transistors, and who am I to argue? – Cupertino goes for a far more complex solution. Before the shutter button is even touched the camera is already buffering frames (that is how the phone “knows” to do it), capturing four short exposures and four standard ones; press the button and it adds one long exposure when you thought it was all over. The sharpest of the short exposures is merged with the long exposure and the A13’s Neural Engine does the work of delivering the best definition. The complexity notwithstanding, all of this happens invisibly and automatically, taking but one second.
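To make the merging step concrete, here is a minimal sketch, in Python with NumPy, of the general multi-frame technique described above: pick the sharpest of the buffered short exposures and blend it with the long exposure, favouring the short frame wherever local detail is high. The frame count, the gradient-based sharpness measure and the blending weights are illustrative assumptions of mine, not Apple’s pipeline, which runs on the Neural Engine with far more sophisticated, machine-learned criteria.

```python
# A toy multi-frame fusion sketch in the spirit of Deep Fusion.
# Assumptions (not Apple's method): sharpness = mean gradient magnitude,
# blending weight = normalised local gradient energy of the chosen frame.
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Crude sharpness score: mean gradient magnitude of the frame."""
    gy, gx = np.gradient(frame.astype(np.float64))
    return float(np.mean(np.hypot(gx, gy)))

def fuse(short_frames: list[np.ndarray], long_frame: np.ndarray) -> np.ndarray:
    """Merge the sharpest short exposure with the single long exposure."""
    best = max(short_frames, key=sharpness)         # least motion blur wins
    gy, gx = np.gradient(best.astype(np.float64))
    detail = np.hypot(gx, gy)
    weight = detail / (detail.max() + 1e-6)         # 0..1, high where texture is
    # Texture comes from the short frame; smooth, low-noise tonality
    # comes from the long exposure.
    return weight * best + (1.0 - weight) * long_frame.astype(np.float64)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shorts = [rng.random((64, 64)) for _ in range(4)]   # stand-ins for buffered frames
    long_exposure = rng.random((64, 64))
    fused = fuse(shorts, long_exposure)
    print(fused.shape, fused.dtype)
```

On the phone all of this happens in dedicated silicon at the sensor’s full 12mp resolution, which is exactly why the A13’s horsepower matters; the sketch merely shows why having several exposures to choose from beats a single shot.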

The test images disclosed so far leave no doubt that the definition in iPhone 11 images is adequate for huge prints. Heck, I was making decent 13″ x 19″ prints from my iPhone 4 millennia ago. Sure, they had to be taken in moderate light and at relatively low contrast, but definition was not an issue. No one needs a 50mp monster sensor unless he is employed as a spook or trying to impress his mates. Now we have definition galore, much better processing for broad dynamic range and the superb Night Mode, which takes low light photography to a new level. The Night Mode images disclosed to date are simply breathtaking.

So when I write that the sort of computational photography made possible by high-end chips in the latest iPhones will kill MFT and, for that matter, most digital cameras, there is a growing body of evidence to support that opinion.

My iPhone 11 Pro? Well, I have just taken delivery of a new belt holder and protective case (the latter also stores a driver’s license, medical and credit cards), but I cannot buy the new iPhone until the sales of my MFT hardware are completed. The cash thus raised will pay for the new cell phone, ridding me of a lot of clutter at no net cash outlay.

Update: For test results of Deep Fusion, click here.