

Thursday, 25 March, 2010

The Oncoming Storm
What goes into making a photo? The simplistic answer is that you just record the light that enters the camera.

The problem with that is that a camera responds to light in a very different way from how the human eye and the human brain respond to light. If you pump the same number of photons of a certain wavelength into a camera at a certain position, they will hit the same pixel on the sensor (or the same position on a piece of film, if you’re old school) and be recorded in the same way, give or take some random noise, which is mostly insignificant. But if you pump the same number of photons of a certain wavelength into an eyeball, the human brain will process that signal very differently, depending on what else is around it in the scene, how well adapted the eye is to the present illumination level, the presence of strong light sources or colours elsewhere in the visual field, and so on.
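(For the technically inclined: here is a minimal sketch, in Python with made-up numbers, of that point about the camera. A sensor pixel just counts the photons that hit it – with some Poisson “shot” noise – and maps that count through a fixed response. Nothing about the rest of the scene enters into it.)

```python
import numpy as np

rng = np.random.default_rng(42)

def pixel_value(mean_photons, full_well=10_000, bit_depth=12):
    """Toy model of one sensor pixel: count the arriving photons (with
    Poisson 'shot' noise), clip at the full-well capacity, and quantise
    to a raw value. The result depends only on the light hitting this
    pixel, not on anything else in the scene."""
    photons = rng.poisson(mean_photons)        # random photon arrivals
    electrons = min(photons, full_well)        # sensor saturation
    return round(electrons / full_well * (2**bit_depth - 1))

# The same illumination gives (nearly) the same raw value every time.
print([pixel_value(5000) for _ in range(3)])   # e.g. values near 2048
```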

This causes the common phenomenon of seeing something spectacular – a sunset is a good and common example – and taking dozens of photos of it because, well, it’s just so amazing! But then when you look at the photos later, they’re all kind of blah. They have a dark, almost black foreground and a washed-out sky, and the colours aren’t nearly as vivid as you remember seeing. The camera isn’t lying – it just records what the light was actually doing. It’s your brain that was lying at the time your eyes were seeing the sunset. The human visual system is wonderfully adaptive. It can make out details in extremely high contrast scenes that current cameras struggle, or fail, to deal with. That’s the first problem.

The second problem is that the physical objects we use to reproduce photos – prints or display screens – don’t have anywhere near the brightness contrast or the range of colours that humans can actually perceive. There are colours that you can see in real life that cannot be generated by a consumer-level display screen. The result of these two problems is that photos straight off a camera sensor often bear only a superficial relation to the contrasts and colours we saw when we took the photo.
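To make the gamut point a bit more concrete – and this is only an illustrative sketch, with approximate numbers – a pure spectral green around 520 nm lies outside the sRGB triangle that consumer displays aim for, so converting it to sRGB gives a negative red component: no valid combination of the screen’s red, green and blue can reproduce that colour.

```python
import numpy as np

# Approximate CIE 1931 chromaticity of a pure 520 nm spectral green.
x, y = 0.074, 0.834
X, Y, Z = x / y, 1.0, (1 - x - y) / y     # XYZ with luminance Y = 1

# Standard linear XYZ -> sRGB matrix (D65 white point).
M = np.array([[ 3.2406, -1.5372, -0.4986],
              [-0.9689,  1.8758,  0.0415],
              [ 0.0557, -0.2040,  1.0570]])

rgb = M @ np.array([X, Y, Z])
print(rgb)   # red channel comes out strongly negative: outside the sRGB gamut
```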

This problem is addressed by post-processing. This is not a new thing associated with digital photos. The old masters of film photography knew this, and used darkroom techniques to produce prints that were based on what was recorded on the negative, but modified to give a better representation of how their eye remembered the scene in the field. Dodging and burning (which some of you may be familiar with in digital image processing applications) began as darkroom techniques to alter contrast levels locally in a photograph. Ansel Adams, who created some of the most memorable black and white film images in the history of photography, used these techniques extensively. His photos were so striking and memorable and lifelike because he manipulated the data on the negatives to produce a print that the eye would recognise as close to what it would see in reality, rather than one constrained by the limited range of photographic film.
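If you’ve only ever met dodging and burning as tool icons in image software, the digital version is nothing mysterious: it’s just a local, mask-weighted exposure change. Here’s a minimal sketch in Python (the function and its arguments are my own invention, not any particular application’s API):

```python
import numpy as np

def dodge_burn(image, mask, stops):
    """Locally lighten (dodge, stops > 0) or darken (burn, stops < 0).
    `image` is an H x W x 3 float array in [0, 1]; `mask` is an H x W
    float array in [0, 1] selecting the region, usually feathered so
    the adjustment blends in smoothly."""
    gain = 2.0 ** stops                    # exposure change in photographic stops
    adjusted = image * gain                # globally adjusted copy
    # Blend: full adjustment where mask == 1, untouched where mask == 0.
    out = image * (1 - mask[..., None]) + adjusted * mask[..., None]
    return np.clip(out, 0.0, 1.0)
```

A soft-edged mask – say a blurred ellipse over a too-dark foreground – plays the role of the card or cut-out shape the darkroom printer waved between the enlarger and the paper.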

And the same principle applies to digital photos. The JPEGs you get out of digital cameras are processed to adjust the contrast levels and colour saturation so that when you display the image it looks roughly how it looked in reality. This is done automatically for the most part, with most people blissfully unaware. It’s only if you examine the raw image data off the sensor that you notice how different it is from what the scene actually looked like. And if you are an advanced digital photographer who shoots raw and processes the images yourself to produce nice-looking results, you know this, and you know that some judicious tweaking can produce much more pleasing photos.
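For the curious, here is a hedged sketch of the sort of thing that “judicious tweaking” involves – not any camera maker’s actual pipeline, just an illustration. Demosaiced raw data is roughly linear in the amount of light, so making it look natural usually means applying a gamma/tone curve for contrast and then nudging the saturation:

```python
import numpy as np

def simple_develop(linear_rgb, gamma=2.2, contrast=1.2, saturation=1.3):
    """Very rough 'develop' step for linear, raw-ish H x W x 3 data in
    [0, 1]: gamma-encode, stretch contrast around mid-grey, then push
    saturation by scaling each channel away from the luminance."""
    img = np.clip(linear_rgb, 0.0, 1.0) ** (1.0 / gamma)     # gamma encoding
    img = np.clip((img - 0.5) * contrast + 0.5, 0.0, 1.0)    # contrast stretch
    luma = img @ np.array([0.2126, 0.7152, 0.0722])          # Rec. 709 luminance
    img = luma[..., None] + (img - luma[..., None]) * saturation
    return np.clip(img, 0.0, 1.0)
```

Real raw converters do far more than this (white balance, noise reduction, sharpening, colour profiles), but the shape of the job is the same: bend the recorded data towards what the scene looked like to you.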

The point of this is that digital post-processing is often seen as “cheating” somehow, making the photo into something it never was. It can be that, certainly. But frequently some post-processing is needed simply to make a photo as recorded more closely match what we saw with our eyes when we decided to take the shot.

Oncoming Storm: original
When I saw this photo (right) in my collection after a trip to the beach, my first thought was, “Bleah, how dull. Why did I even take that shot?” But I loaded it up into Photoshop and played around a bit. I’m not claiming the shot at the top of this post is a perfect representation of what I saw with my eyes (being displayed on a screen, it never can be), but it’s definitely a closer match to what my brain told me I was looking at when I decided to take the photo.

I’m sure some people will claim they prefer the “unprocessed” version, saying the processed one looks “too fake”. Fine. I think it better represents what I saw that day. It is perhaps a little more enhanced for dramatic effect, but that’s also part of what makes photography an art form, rather than just a mechanical process. I can make a decision on how to present the photo, knowing that no way that I can present it actually matches the experience of being there.

The point here is that digital post-processing of photos shouldn’t be looked down upon as “messing with reality”. The image as recorded in the camera is already “messed with”. What you can do is take that data and turn it into something you want to look at and that reminds you of what you saw – in your mind – when you took it. And isn’t that what photography is about?