What You See is Not What You Get

I seem to be on a weird and wacky schedule these days – I routinely forget what day of the week or what month it is.  But I am also getting busier, with online clubs and activities now going strong, in-person family visits a regular thing (which means driving) and solo outings wrapping up for the fall (somewhat desperately before the next lockdown comes).  I don’t really feel like I am in control, although in reality, control is exactly what I do have.

But I digress, so back to photography.  Have you ever stopped to consider the magical process that allows us to go from camera to screen to print?  With all of us staring at screens so much more these days, I started to wonder about the specifics.  I guess I have time on my hands and I am a nerd.  So here’s what I found out…

Just displaying colour is amazingly complex, for our eyes as well as for camera and screen.  The first problem is defining it.  None of us sees the same scene in the same way.  At the extremes, some of us are colour-blind to certain colours, while others have smaller deficits, as I do: the level of colour detail I see is diminished.  What is distinct black, gold and green to you might just be my copper colour.  So right out of the gate, we are not even imagining the same result as we capture it.  How, then, is it possible to “get it right” on the screen or when we share it with others?

Scarborough Bluffs Pano

Our eyes see a remarkable range of tones and can take in very light and very dark areas in the same view without a problem.  Cameras record far less tonal information, sometimes half of what we see, meaning that dark tones become completely black and light tones completely white far sooner than our eyes would allow.  Images straight out of the camera may look OK, but they are often too dark, too light or too dull.
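
To make that concrete, here is a toy sketch in Python with invented numbers: any scene brightness beyond what the sensor can record collapses to the same pure white, and the distinction between those tones is gone for good.

```python
import numpy as np

# Toy illustration of highlight clipping.  The numbers are invented:
# relative scene brightnesses, with 1.0 as the brightest value the
# sensor can record.
scene = np.array([0.001, 0.05, 0.40, 2.0, 8.0])
recorded = np.minimum(scene, 1.0)

print(recorded)   # [0.001 0.05 0.4 1.0 1.0] – the two brightest tones
                  # are now identical; no edit can tell them apart again
```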

And then there are the many translations that occur as light passes through the lens, hits the sensor, is recorded as electrical impulses, converted to numbers, reconstructed by RAW-processing software and then edited, taking us further and further away from that original perception.  For example, digital sensors capture only three colours out of a continuous spectrum of visible and invisible light, and they arrange and transmit those colours using whatever mathematical interpretations have been built into the camera and into the receiving software.  In every case, the in-camera processor and the software on the computer have to estimate the colours that were not captured, applying a technique called “demosaicing” to fill in the story.

I’ve heard the term demosaicing many times, but never understood what it was.  It turns out that each pixel (really a photocell) of your camera’s sensor typically records only one colour: red, green or blue.  But our RAW processors and post-processing software depend on having three colour values for each pixel – a red value, a green value and a blue value.  Each of those values is really a brightness or tone: how much red, green or blue is present at that spot.  In combination, the three values can give us any colour imaginable, and certainly any colour we think we actually saw.
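
As a rough sketch of what that means, here is how the common “RGGB” Bayer arrangement could be simulated in Python with NumPy: each photosite keeps only one of the three colour values.  The layout and function names are my own illustration, not any particular camera’s design.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate what an RGGB sensor records: one colour value per photosite.
    rgb is an (H, W, 3) array of the 'true' scene colours."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red filter over these photosites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green (twice as many green sites)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue
    return mosaic   # two thirds of the colour information is simply not recorded
```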

Scarborough Bluffs Initial

In the absence of recorded colour and tone values, the RAW processor and associated post-processing software have to “fill in” the missing colour information.  The algorithms to do so are very sophisticated and amazingly accurate.  This process of estimating three colour and tone values for each pixel from the single value that was recorded is known as demosaicing.
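
Real RAW converters use sophisticated, edge-aware algorithms, but the simplest possible version – bilinear interpolation, where each missing value is just the average of its nearest recorded neighbours – can be sketched in a few lines.  This continues the toy RGGB example above and is an illustration, not what any particular converter actually does.

```python
import numpy as np
from scipy.ndimage import convolve

# Averaging kernels for bilinear demosaicing of an RGGB mosaic.
K_G  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
K_RB = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

def bilinear_demosaic(mosaic):
    """mosaic: 2-D array of raw values in RGGB layout.
    Returns an (H, W, 3) image with the missing values interpolated."""
    r = np.zeros_like(mosaic, dtype=float)
    g = np.zeros_like(mosaic, dtype=float)
    b = np.zeros_like(mosaic, dtype=float)
    r[0::2, 0::2] = mosaic[0::2, 0::2]
    g[0::2, 1::2] = mosaic[0::2, 1::2]
    g[1::2, 0::2] = mosaic[1::2, 0::2]
    b[1::2, 1::2] = mosaic[1::2, 1::2]
    # Recorded values pass through unchanged; missing ones become the
    # average of their nearest recorded neighbours.
    return np.dstack([convolve(r, K_RB, mode="mirror"),
                      convolve(g, K_G,  mode="mirror"),
                      convolve(b, K_RB, mode="mirror")])
```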

I should mention that some cameras now have a sensor design that allows all three colour/tone values to be recorded for each photocell.  In these new sensors, each photocell carries all three colour filters, which is amazing for something smaller than the tip (not even the head) of a pin.  There is some debate on how accurate the colour capture is, given that light is passing through not one, but three filters.

Bluffs Edit

Once housed on our computers, the data is converted to viewable pixels and is then “corrected” and enhanced by us, extending reality even further.  Those enhancements are stored as numerical adjustments, or metadata, to be applied whenever the image is viewed or exported.
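
A toy sketch of that idea, with invented field names and deliberately simplified maths: the original pixels are never rewritten; the “edit” is just a small recipe of numbers stored alongside the file and applied when the image is rendered or exported.

```python
import json
import numpy as np

# Invented, simplified adjustment recipe – real editors store far more.
recipe = {"exposure_ev": 0.6, "contrast": 1.15}

with open("ScarboroughBluffs.edits.json", "w") as f:
    json.dump(recipe, f)                                 # the "sidecar" metadata

def render(pixels, recipe):
    """Apply the stored adjustments on the fly, e.g. when viewing or exporting."""
    out = pixels * (2.0 ** recipe["exposure_ev"])        # exposure in stops
    out = (out - 0.5) * recipe["contrast"] + 0.5         # crude contrast around mid-grey
    return np.clip(out, 0.0, 1.0)
```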

But in creating these interpretations, we use monitors that are themselves so variable in colour quality, gamut and intensity.  I recently replaced an entry-level 70% sRGB 1920×1080 monitor with a 100% Adobe RGB 3840×2160 4K high-resolution monitor.  All of a sudden, my image colours are exploding from the screen.  I am able to make different editing choices now because of this.  But I wonder whether those choices will be helpful or harmful to images that I share with others, who might see them on an sRGB display much like my old one.
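
To see why that matters, here is a small sketch using the published Adobe RGB (1998) and sRGB conversion matrices (working on linear values and ignoring the gamma curves for simplicity): the most saturated green Adobe RGB can describe lands outside the 0–1 range of sRGB, so an sRGB-only display has to clip it to a different colour.

```python
import numpy as np

# Linear-light conversion matrices (D65) from the colour space specifications.
ADOBE_TO_XYZ = np.array([[0.5767309, 0.1855540, 0.1881852],
                         [0.2973769, 0.6273491, 0.0752741],
                         [0.0270343, 0.0706872, 0.9911085]])
XYZ_TO_SRGB  = np.array([[ 3.2404542, -1.5371385, -0.4985314],
                         [-0.9692660,  1.8760108,  0.0415560],
                         [ 0.0556434, -0.2040259,  1.0572252]])

adobe_green = np.array([0.0, 1.0, 0.0])            # most saturated Adobe RGB green
srgb_linear = XYZ_TO_SRGB @ (ADOBE_TO_XYZ @ adobe_green)

print(srgb_linear)                 # ~[-0.40  1.00 -0.04]: outside sRGB's range
print(np.clip(srgb_linear, 0, 1))  # what an sRGB-only screen is forced to show
```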

And then I may choose to print an image, bringing yet another set of numbers into play for ink, paper and printer.  We use colour “profiles” and paper “profiles” to ensure our vision of the image is rendered appropriately on the new medium.  But most of us don’t get it right on the first try.  When you add up all of the hand-offs and translations, it’s amazing to me that any printed image ends up looking at all like what we actually saw when we were standing there.
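
One way to reduce the trial and error is “soft proofing”: previewing on screen roughly how the image will look once squeezed into the printer and paper profile.  Here is a sketch using Pillow’s ImageCms module; the file names and the paper profile are hypothetical stand-ins, and sRGB is used here in place of a properly measured monitor profile.

```python
from PIL import Image, ImageCms

img = Image.open("ScarboroughBluffs.tif").convert("RGB")    # hypothetical file

monitor = ImageCms.createProfile("sRGB")                    # stand-in for a real monitor profile
paper = ImageCms.getOpenProfile("PixmaPro10_GlossyII.icc")  # hypothetical paper/printer profile

# Build a proofing transform: render through the paper profile, then back
# to the monitor profile so the result can be previewed on screen.
proof = ImageCms.buildProofTransform(
    inputProfile=monitor, outputProfile=monitor, proofProfile=paper,
    inMode="RGB", outMode="RGB",
    renderingIntent=ImageCms.INTENT_PERCEPTUAL,
)
preview = ImageCms.applyTransform(img, proof)
preview.show()
```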

ScarboroughBluffs_20181125_0132 (Affinity Photo)

The best makers realize that and adjust the scene with their own artistic interpretation.  This, to me, more than anything, explains the difference between a beginner and an accomplished photographer.  While the mechanics of the gear and the capture of the subject are key, I now realize that being able to convey a specific vision in a finished image is so much more important.  It explains why Ansel Adams was such a master of dodging and burning, and why Joe McNally puts dozens of speedlites around a scene.  It explains why Adam Gibbs spends an hour recreating the dappled light falling on a gnarly collection of moss-covered branches in a Vancouver-area park.  It also explains why the same shoreline shot by me and by the photographer I most admire at my camera club will look so stunningly different.

There is a lot of talk today about artificial intelligence turning our photographs into artificial, even mechanical, interpretations of what we really saw.  But given the number of translations and handoffs that already occur from lens to print, I think the argument over AI is, well, artificial too.  Take a moment to pay attention to that journey the next time you take an image from capture to destination.  Compare your result to someone else’s from the same situation or location.  I think you’ll be surprised.
