Ok, don’t panic. I can hear all of my photographer friends out there slamming their computers, tablets and phones shut. It can’t be happening: photography evolving into something that uses math, algorithms and logic to deliver the “decisive moment”? Say it ain’t so! Oh, but it is, and I think we will be better off for it. At least I hope we will.
I’ve been hearing and reading quite a bit about this thing called computational photography. It is such a new field that what’s in and what’s out, or even the language with which it is communicated, is not yet well defined. But it can be applied to any form of optical capture, whether in the science lab or in the artist’s studio.
Just as digital photography revolutionized the medium by converting light into numbers through sensors and processors, computational photography manipulates those numbers “in camera” through layers of new software, providing the photographer with new options, like correcting capture problems after the fact or applying a wide variety of creative effects.
It’s actually been around in the engineering and computer science universe for more than a decade, but, practically speaking, it is only now having a huge impact in pro and consumer photography, particularly in the latest smartphones.
Computational photography allows for things like light field manipulation to adjust focus or motion blur after image capture. It includes light field analysis and separation of direct and indirect light to allow for different lighting options for your subject after image capture. It can help penetrate haze and fog to produce a crisp clear image.
Low light performance is also a big beneficiary, with software deciding what parts of the image need to be exposed at what value, and software directing the camera to quickly collect enough images to meet the need and combine those into a final image. And then there is the blending of a flash filled scene with a no flash version of the same scene automatically with one shutter press to bring out the best of both worlds.
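The low-light trick described above, merging a quick burst of frames into one cleaner image, can be sketched in a few lines. This is a deliberately simplified illustration, not any camera maker's actual pipeline: the "scene," the noise level, and the plain frame average are all assumptions standing in for a real burst and a real alignment-and-merge algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "scene": a flat mid-gray patch the camera is photographing.
scene = np.full((64, 64), 0.5)

# Simulate a burst of 8 short, noisy low-light exposures of that scene.
burst = [scene + rng.normal(0.0, 0.1, scene.shape) for _ in range(8)]

# Merging (here, a plain average of already-aligned frames) cuts the
# random noise by roughly the square root of the number of frames.
merged = np.mean(burst, axis=0)

single_noise = np.std(burst[0] - scene)  # noise in one frame
merged_noise = np.std(merged - scene)    # noise after merging
```

In practice, a real pipeline also has to align the frames (your hands move between shots) and weight them cleverly, but the core payoff is the same: more frames in, less noise out.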
In digital cameras, we have already seen it applied to familiar things like in-camera panorama stitching and wide dynamic range outputs. It can also create in-camera background blur after the fact for that perfect bokeh.
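That after-the-fact background blur is, at its core, a masking operation: estimate which pixels belong to the subject, then blur everything else. Here is a minimal sketch of the idea, with an invented random "image," a hand-drawn depth mask, and a naive box blur standing in for the depth estimation and lens-quality blur a real camera would use.

```python
import numpy as np

def box_blur(img, k=5):
    # Naive box blur: average each pixel with its (2k+1)x(2k+1) neighborhood.
    padded = np.pad(img, k, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += padded[k + dy : k + dy + img.shape[0],
                          k + dx : k + dx + img.shape[1]]
    return out / (2 * k + 1) ** 2

rng = np.random.default_rng(1)
image = rng.random((32, 32))            # stand-in for a captured photo
mask = np.zeros((32, 32), dtype=bool)   # depth mask: True = subject
mask[10:22, 10:22] = True

# "Portrait mode": keep the subject sharp, blur only the background.
blurred = box_blur(image)
bokeh = np.where(mask, image, blurred)
```

The hard part in a real phone is producing that mask, typically from a second camera or a learned depth model; once you have it, the blur itself is simple.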
But it’s really smartphones where computational photography is beginning to shine. Smartphones benefit because of a simple reality: users want smartphones to be thinner and lighter with each new generation. That constrains the advances that can be made in optical hardware, sensors and processors, or at least slows them down dramatically. So if you want more photographic bang for your buck, it has to come some other way.
I recently purchased a new smartphone. Instead of one rear camera, it has three, each dedicated to specific optical capture tasks. That’s because the phone is so thin, no single camera could have handled it all. The real magic happens through software, where captured images go through 30 to 40 processing steps before I ever see a result on the screen. The phone’s “artificial intelligence” assesses the images taken with each of the cameras and decides how to combine them to produce the best result based on what it perceives to be the subject of the photograph. And even then, I can change the result in a variety of ways to both correct for mistakes and emphasize what I want to see.
You must be intrigued by now. So why haven’t we heard more about this? The next big thing becomes a big thing in one of two ways: either visionary users call for upgrades, or visionary developers market the heck out of a new idea and create demand. Remember Steve Jobs’s famous “one more thing”?
The difference for photography though is that there is a natural tension between technology and art. I don’t know many photographers who would be attracted by being offered “computational” anything. They understand light, shadow and how to manage them with their camera, but most, in my experience, have limited interest in the mechanics of the camera itself (except, strangely, the megapixel count). This is even more true for the average smartphone user, who just wants to get the shot and maybe do a few creative things with it by making some choices from a menu or moving a few sliders.
So, my photographer friends, you may already have computational photography power in your hands and not even know it. If you have purchased a new digital camera recently or, like me, a new higher-end smartphone, chances are you are already benefiting from this new age of mathematical creativity.
And yet, the irony is that you may have it but never have tried it. Many photographers turn off the automatic settings in their cameras (and now maybe even their smartphones), and shoot exclusively in manual mode. As I mentioned in a previous post, you should give the automatic settings in your camera or smartphone a workout from time to time. You might be surprised by what you discover (or by what you can fix when you think you might have blown it).