At the beach.
Canon 5D, 24-105mm kit zoom.
In the Valley of the Sun.
The garden late yesterday – after the heat recedes everything comes right back into bloom.
Nikon D700, 200mm Nikkor-Q.
The revolution continues.
Click here for an index of all iPhone articles.
In the previous column I referred to the three revolutions in photography since its invention.
Now we are beginning to see one of the most significant steps forward for minuscule sensors in Apple’s catchily named Deep Fusion technology.
While this technology is all about merging disparate images to make a better whole, the software driving the process would not be possible without Apple’s A13 CPU, designed in Cupertino on the ARM architecture and fabricated by TSMC. After the creation of the original iPhone, one of Apple’s best ideas was the decision to design its processing chips in-house, permitting a laser-like focus on design not available with off-the-shelf silicon. Those CPUs are invisible to users but make much of the magic underlying computational photography possible. The in-house design also makes it harder for thieves like Samsung and others to steal Apple’s intellectual property, a particular expertise of certain far east nations.
Deep Fusion will become available in iOS 13.2 (13.0, 13.1 and 13.1.1, with my iPad a victim of all three, are horribly buggy) and will only work on the latest iPhone 11 models. The idea is not new. Hasselblad provides the ability to merge multiple pixel-shifted images in some of its ridiculously priced medium format digital cameras, as does Sony, and maybe others, in some of their FF bodies. NASA has been using the technique for decades to enhance images from poor early sensors. But with all that processing power in the A13 CPU – Apple claims 8.5 billion transistors, and who am I to argue? – Cupertino goes for a far more complex solution. Three frames are taken before the shutter button is touched (the camera continuously buffers frames, so those shots already exist by the time you press), three more when it’s activated, and then one more long exposure when you thought it was all over. The best of the six short exposures is merged with the long exposure, and the magic CPU does the work of delivering the best definition from the pair. The complexity notwithstanding, all of this happens invisibly and automatically, taking but one second.
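Apple’s actual pipeline is proprietary and runs on dedicated silicon, but the broad idea – keep the sharpest short exposure, then blend it with the cleaner long exposure wherever the long frame has less to offer – can be sketched in a few lines. Everything below (the function name, the gradient-energy sharpness measure, the variance-based blending) is an illustrative assumption, not Apple’s algorithm:

```python
import numpy as np

def fuse_frames(short_frames, long_frame, patch=8):
    """Toy multi-frame fusion: pick the sharpest short exposure,
    then blend it with the long exposure patch by patch, favouring
    whichever patch carries more local detail (higher variance)."""

    def sharpness(img):
        # Overall gradient energy as a crude sharpness score.
        gy, gx = np.gradient(img.astype(float))
        return float((gx ** 2 + gy ** 2).mean())

    best_short = max(short_frames, key=sharpness)

    h, w = best_short.shape
    fused = np.empty_like(best_short, dtype=float)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            s = best_short[y:y + patch, x:x + patch].astype(float)
            l = long_frame[y:y + patch, x:x + patch].astype(float)
            # Detailed patches lean on the sharp short frame,
            # smooth patches on the low-noise long exposure.
            ws, wl = s.var() + 1e-6, l.var() + 1e-6
            fused[y:y + patch, x:x + patch] = (ws * s + wl * l) / (ws + wl)
    return fused.astype(best_short.dtype)
```

Fed six short frames and one long frame, it returns a single image of the same size and type – a one-second job for an A13, a rather longer one for this Python loop.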
The test images disclosed so far leave no doubt that the definition in iPhone 11 images is adequate for huge prints. Heck, I was making decent 13″ x 19″ prints from my iPhone 4 millennia ago. Sure, they had to be taken in medium lighting and relatively low contrast, but definition was not an issue. No one needs a 50mp monster sensor, unless employed as a spook or trying to impress his mates. Now we have definition galore, much better processing for broad dynamic range and the superb Night Mode, which takes low light photography to a new level. The images of the latter disclosed to date are simply breathtaking.
So when I write that the sort of computational photography made possible by high end CPUs in the latest iPhones will kill MFT and, for that matter, most digital cameras, there’s a growing body of evidence to support that opinion.
My iPhone 11 Pro? Well, I just took delivery of a new belt holder and protective case (the latter also stores a driver’s license, medical and credit cards), but I cannot buy the new iPhone until sales of my MFT hardware are completed. The cash thus raised will pay for the new cell phone, relieving me of a lot of clutter at no net cash cost.
Update: For test results of Deep Fusion, click here.
Time waits for no one.
There have been three revolutions since the invention of photography in the early 19th century.
The first was miniaturization, credited to German engineer Oskar Barnack, who invented the Leica in 1913. The Leica user could take 36 images on one roll of film, loadable in daylight thanks to the cassette design, and enjoy decent quality, not really that much worse in pocket-sized prints than that delivered by the monster cameras which preceded this piece of design genius. Leica continues to this day but long ago ceased making cameras, remaining in existence as a purveyor of overpriced jewelry.
The second revolution was the invention of the digital sensor camera, for which we can thank Kodak. Being abject fools, they concluded that film would last forever and abandoned digital, making one of the worst business decisions of the 20th century in the process. They went bankrupt. Steven Sasson was the Brooklyn-born engineer behind this revolution (not the bankruptcy) and, while his first design was anything but compact, rapid development of sensors fixed that.
Digital sensor sizes are now whatever you want, from monsters divining distant galaxies in outer space to pinheads in digital cameras much loved by the Kremlin.
The third revolution was the creation of the cell phone camera, and while the nutty genius running Apple could not claim the company had invented cell phone photography, he very much packaged the whole thing in a device that would become ubiquitous and user friendly, and that now delivers more images than all of its predecessors did in aggregate.
Now while no one could accuse today’s automaton CEO of Apple of ever having had an original idea, time marches on and, despite Apple’s characterless leadership, really good small sensors are now available in the latest iPhone, the four-lensed iPhone 11 Pro. What the business community expected to be just one more modest product refresh turns out to be at the cutting edge of the cell phone photography revolution. And that cutting edge performance is delivered with really small sensors.
The percentage of photographers needing large sensors in large bodies is minuscule, a statistic which predicts the rapid demise of all traditional cameras, be they medium format, FF, APS-C or MFT in format. Medium format and FF digital will retain infinitesimal market shares for specialized commercial and scientific purposes, but otherwise their time is done. And as for APS-C and MFT, with the latest cell phone cameras equalling or beating their output in terms of quality and versatility, those formats will shortly be indistinguishable from toast. A very tough time to be Canon, or even worse, Nikon. As for Panasonic and Sony, they can stick to making ever larger TV sets.
The cutting edge cell phone photographer has access to depth maps with his images, selective focus of his choice, multiple lenses with the wide in the Pro being very wide indeed and a Night Mode so spectacular that Walter Mandler must be spinning in his grave. Oh! and did I mention 4K video, all in a wafer thin package which also just happens to make phone calls and works seamlessly with the internet? And that photographer does all of this with the most sophisticated CPU and brilliant software design available, neither feature found in traditional cameras.
The new iPhone’s three forward facing lens design is nothing new; Leica and many movie cameras have sported lens turrets for ages. Back then, of course, ‘selfies’ were out, so no fourth lens for you.
But now the lens in the ‘turret’ is chosen with a touch, nothing moves, stabilization is provided for all but the ultrawide option, and you can choose the depth of field after taking the picture.
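Choosing depth of field after the fact rests on the depth map saved with the image: pixels near the chosen focus plane stay sharp, and blur grows with depth distance. As a minimal sketch, assuming a grayscale image and a normalized depth map (the function name, the box blur and the linear radius rule are my illustrative inventions, not Apple’s method):

```python
import numpy as np

def refocus(image, depth, focus_depth, max_blur=5):
    """Toy after-the-fact selective focus: blur each pixel in
    proportion to how far its depth-map value lies from the
    chosen focus plane, using a box blur of varying radius."""
    h, w = image.shape
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            # Blur radius grows with distance from the focus plane;
            # pixels exactly on the plane get radius 0 (untouched).
            r = int(abs(depth[y, x] - focus_depth) * max_blur)
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out.astype(image.dtype)
```

Re-running it with a different `focus_depth` against the same stored depth map is all that “choosing focus later” amounts to; the phone simply does it with far better blur kernels and in real time.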
The above is all by way of a preamble to my upcoming purchase of an iPhone 11 Pro. I’m sticking with Apple, as only an insane person would trust Google/Android with his data, and, yes, all my MFT hardware is being sold as I write. I’m getting with the plan before the gear becomes completely worthless. It’s been a fun decade since that groundbreaking Panasonic G1.
I’ll keep a few items of Nikon FF digital and film gear, as they are already worthless, and as I still have vestiges of nostalgia in my psyche. But, as a street snapper, I can see no reason to actually use this megalithic gear. And I try very hard, in an increasingly uncluttered life, to avoid owning things I do not use.
The old cliché has it that “the best camera is the one you have with you”. You always have your cell phone with you.
On the Embarcadero, SF.
The San Francisco-Oakland Bay Bridge is in the background.
Panny G1, kit lens.