“Why can’t you just take the footage out of the camera and use that?”
It’s a question I’ve heard many times, and you can never blame anyone for asking it. Modern digital cameras, although more affordable than any that have gone before, can cost thousands of pounds and have been designed with photography and filmmaking explicitly in mind. They are incredibly customisable and geared up to produce cinematic images, and when we filmmakers arrive on a set, or at a client’s residence, we spend a considerable amount of time preparing what we are about to shoot. We take into account available light, additional light, colour, texture, audio and hundreds of other elements. All this on top of hours of deep consideration, bespoke to that particular project.
Yet despite all this, even when we shoot something with absolute precision, we take the footage out of the camera, put it in our computers and…
…spend a lot of time making it different, better, new.
Yes, it’s fair to say that why we do that is confusing. And having just written all that, briefly in the mind-set of someone looking in from the outside, I find myself almost as confused.
If you want to understand why colour correction and other forms of video editing are important, and why steps like colour correction and colour grading exist in the first place, the best place to begin, I think, is perception.
When you get out of bed in the morning and your eyes adjust to the light, much brighter than it was when your eyes were closed, you find yourself involved with a number of additional tasks that are instantly on your mind. These come as the result of millions of years of evolution that has helped us to not fall off cliffs half asleep or wander into a burning cave (I’m not sure why there’d be fire inside a cave, but please bear with me): where can I stand without stepping on something hazardous? Where am I? (That one depends on how much you may or may not have drunk the night before.) What time is it?
What you’re not even thinking about is the vast range of other complex tasks that your brain is performing simultaneously, trying to preserve your life while it also makes the world make sense to you.
Fortunately, getting up in the morning doesn’t involve doing a manual white balance, or deciding which lenses to use. If it did, I think it’s safe to say that we’d all be late for work.
So, the somewhat simplified conclusion here is that what we see with a camera and what we see with our eyes are similar, but never quite the same. More to the point, it’s how we arrive at matching those images that is the really hard part, and it’s made more difficult by the fact that we all perceive light and dark slightly differently, both when we record a scene and when we review it.
This is why what we get from the camera needs attention before the world sees it: the human brain is capable of adjusting to and processing an amazingly complex three-dimensional world, while committing experiences to memory and doing a thousand other things. Our cameras, on the other hand, have the inarguably impressive task of not only seeing what we put in front of them in a way that appears life-like to us, but also recording those images in a way that allows them to be replayed over and over again without any degradation in quality. And, even more to the point, in a way that we can adjust in a near-infinite number of ways.
This explains why camera technology is expensive, and why we must spend time matching the vision we have in our heads to what we see on the screen before us.
All these things considered, we can think of the footage from the camera as a good but not perfect memory of what took place before us. The challenge is how we manipulate that footage to form a perfect, or at least more appropriate, memory. As cameras and lenses get more expensive, so, generally speaking, do their capacities for getting closer to what we originally perceived or imagined. But they still need a lot of help, for yet another reason: what a digital camera shows us at the time of recording and what is actually being recorded are two separate things. What we see at the time is a strong representation of what is being recorded, simplified so that we can watch it happen in real time. What we see on the computer screen later is what was actually captured. As a result, there will always be a difference between what we thought we recorded and what we actually did, and until someone designs a USB cable that can plug directly into the human brain, that is likely to remain the case.
The gap between these two things is why camera operators spend years learning how to record things as well as they can the first time around. Years are spent understanding colour science, the mechanics of light, and a myriad of other things. Only by getting close to perfection that first time around can you make the gap slightly easier to cross.
Yet one of the things we love most about filmmaking, I think, is modifying, tweaking and adjusting what comes out of the camera. Yes, you can take footage from a camera and use it in a finished piece, but once you discover what can be done, and how footage can be manipulated, stretched and changed, it becomes harder to argue that this is always a good way of working. There are exceptions, of course: sometimes, as in the case of a documentary, you may want to leave the raw beauty of a scene untouched, so as to convey the moment exactly as it was, in its purest form. Equally, a firework display might cause a freak anomaly in the footage which is better left as is.
The real challenge for a filmmaker, then, is not to blindly take footage and change it, but to make complex decisions about how it should best be treated, and to do so in a way that proves both artistically appropriate and economically efficient: two very different, at times brutally opposed, concerns, which are given different weight depending on the project at hand. That might mean any number of approaches, from taking footage almost as it was shot, to making vast amendments to the blacks, or lifting shadows to reveal information not obvious to the eye at the time of shooting.