The movies we just looked at were all shot on celluloid film. It’s only very recently that digital camera systems have started to reach the level where they can emulate the look of film - though that debate is still open among the purists. But to this filmmaker, digital acquisition represents freedom the likes of which has never been seen before. So let’s start by asking: how do digital cameras capture color?

The answer is actually similar to the way color film captures color - by breaking down and recording the light as three primary colors: red, green, and blue. But that’s about where the similarity ends.

I don’t want to get too technical here, so we’ll just cover the essentials. Earlier professional digital cameras used a specially designed dichroic prism that split the light into red, green, and blue and sent each onto its own sensor - a digital analog of the Technicolor three-strip process.

Color separation prism

But as sensors grew from 1/3-inch and 2/3-inch chips to 35mm-film-sized chips and beyond, manufacturers started using a single chip with a color filter array that assigns different color-channel duties to individual photosites. A common arrangement is called a Bayer pattern.

Bayer pattern sensor

Look closely at the pattern of red, green, and blue pixels - notice how there are twice as many green photosites as red or blue. So even though a sensor advertises a particular resolution, the Bayer pattern means that half of those pixels are dedicated to capturing green, a quarter to red, and a quarter to blue.
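
If you want to see those proportions in code, here’s a tiny sketch that builds one common 2x2 Bayer tile and counts the photosites per channel (the exact tile layout varies by manufacturer - this one is purely for illustration):

```python
import numpy as np

# One common Bayer tile: row 1 = G R, row 2 = B G, repeated across the sensor.
height, width = 4, 8  # a tiny "sensor" for illustration

bayer = np.empty((height, width), dtype="<U1")
bayer[0::2, 0::2] = "G"   # even rows, even columns
bayer[0::2, 1::2] = "R"   # even rows, odd columns
bayer[1::2, 0::2] = "B"   # odd rows, even columns
bayer[1::2, 1::2] = "G"   # odd rows, odd columns

for channel in "RGB":
    share = (bayer == channel).sum() / bayer.size
    print(channel, f"{share:.0%}")   # R 25%, G 50%, B 25%
```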

Now before you feel like your eyes are being cheated of color data - relax - these systems are deliberately modeled after our own biological limitations. Our eyes are not as sensitive to changes in color as they are to changes in brightness - luminosity. Because of this quirk, engineers designed digital cameras to deliver more resolution in luminosity, using the green pixels not so much to serve up the green channel as to help build the luminance channel.
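
To put a number on how much green dominates, here’s a quick sketch using the standard Rec 709 luma weights - green carries roughly 70% of the brightness signal, which is why it gets the most photosites:

```python
def luma_709(r, g, b):
    """Luma (Y') from R'G'B' values in the 0.0-1.0 range, Rec 709 weights."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(luma_709(1.0, 0.0, 0.0))  # pure red   -> 0.2126
print(luma_709(0.0, 1.0, 0.0))  # pure green -> 0.7152
print(luma_709(0.0, 0.0, 1.0))  # pure blue  -> 0.0722
```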

So as the raw data comes off the sensor, the camera uses mathematical algorithms to interpolate the missing pixels in each color channel, then converts the red, green, and blue channels into what’s called the YCbCr colorspace. That color can then be further compressed for storage in a process called chroma subsampling - reducing the color resolution while keeping the luminance resolution the same.
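
Here’s a rough sketch of that last step - converting RGB into Y, Cb and Cr planes and then 4:2:0-subsampling the two chroma planes. Real cameras add gamma handling, range scaling and smarter filtering, so treat this as an illustration only:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Split an (H, W, 3) image with values 0-1 into Y', Cb, Cr planes (Rec 709 weights)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556   # scale factors follow from the Rec 709 weights
    cr = (r - y) / 1.5748
    return y, cb, cr

def subsample_420(plane):
    """4:2:0 - average each 2x2 block, halving the chroma resolution both ways."""
    h, w = plane.shape
    return plane[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rgb = np.random.rand(8, 8, 3)               # stand-in for a demosaiced frame
y, cb, cr = rgb_to_ycbcr(rgb)
cb420, cr420 = subsample_420(cb), subsample_420(cr)
print(y.shape, cb420.shape, cr420.shape)    # (8, 8) (4, 4) (4, 4)
```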

Example of different flavors of chroma subsampling

For watching video back, compression is not a terrible thing. Perception of resolution is not a simple numbers game - more pixels doesn’t necessarily mean a better image, since things like motion dramatically affect our ability to sense fine detail.
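
By the way, the ratios in those flavors (the J:a:b notation) translate directly into how many chroma samples survive per 4x2 block of pixels - a quick sketch:

```python
# J:a:b notation: a = chroma samples in the first row of 4 pixels,
# b = additional chroma samples in the second row (0 means row 2 reuses row 1's).
flavors = {"4:4:4": (4, 4), "4:2:2": (2, 2), "4:2:0": (2, 0)}

luma_samples = 8  # 4 pixels wide x 2 rows - luma is never subsampled
for name, (row1, row2) in flavors.items():
    chroma = row1 + row2
    print(f"{name}: {chroma}/{luma_samples} chroma samples "
          f"({chroma / luma_samples:.0%} of full color resolution)")
# 4:4:4 -> 100%, 4:2:2 -> 50%, 4:2:0 -> 25%
```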

But, from a production standpoint, compressing color data leads to less flexibility when color correcting and grading in post production.

DSLR

Consumer cameras and DSLRs record what’s called 8-bit 4:2:0 chroma-subsampled video, which can “break” when heavily color graded because there isn’t much color data available. The 8-bit part refers to the number of possible levels in each channel - 8 bit is 2 to the eighth power, for 256 possible levels in each of the color channels. 8 bit is fine for playback; you’re watching this video in 8-bit color.
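
The arithmetic is simple enough to check yourself:

```python
# 8-bit: 2 to the eighth power levels per channel.
levels = 2 ** 8
print(levels)         # 256 levels in each of R, G and B
print(levels ** 3)    # 16,777,216 possible color combinations across the three channels
```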

BMCC

Contrast this with the Blackmagic line of cameras. The Blackmagic Cinema Camera can record a much more robust 10-bit 4:2:2 compressed file in either ProRes or DNxHD. 10 bit means 2 to the 10th power, or 1,024 levels of color in each channel - four times as many levels. And 4:2:2 has twice the color resolution of 4:2:0 and is considered robust enough to stand up to most professional needs. Further up the quality line, the Blackmagic Cinema Camera is even capable of 12-bit RAW files - 12 bit meaning 2 to the 12th power, or 4,096 levels of color in each channel, and RAW meaning the data comes straight off the sensor before interpolation or any other mathematical mumbo jumbo. Of course, you’ll start to pay dearly in terms of storage space when you shoot at that kind of pristine quality (about 7 gigabytes per minute of footage).
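
And a back-of-envelope check on those numbers - the resolution and frame rate below are assumptions based on the original Blackmagic Cinema Camera’s published 2.5K specs, so treat the result as a rough estimate rather than a quoted figure:

```python
# Levels per channel at the higher bit depths.
for bits in (10, 12):
    print(f"{bits}-bit: {2 ** bits} levels per channel")

# Rough uncompressed 12-bit RAW data rate (assumed 2432x1366 sensor at 24 fps).
width, height, bits, fps = 2432, 1366, 12, 24
bytes_per_frame = width * height * bits / 8        # one value per photosite in RAW
gb_per_minute = bytes_per_frame * fps * 60 / 1e9
print(f"~{gb_per_minute:.1f} GB per minute")       # roughly 7 GB/min
```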

Now before we move on from this technical overview of color and digital cameras, we need to talk a little about colorspace. Digital video uses a colorspace called Rec 709 - an HDTV standard first approved in 1990. Rec 709 uses a standard gamma curve and is fine for your final output.

But camera sensors have now exceeded the dynamic range that Rec 709 can hold. So what’s the solution? Use a flatter logarithmic curve to represent the data - commonly called LOG.

There are many flavors of LOG color, but they all function essentially the same way - they flatten the curve with which information is captured in camera. This encodes the extra detail that would normally fall outside the dynamic range of Rec 709. When viewed on a Rec 709 monitor, LOG images look flat and washed out, but you gain the flexibility of recovering shadow and highlight detail in post.
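
To make that flattening concrete, here’s a sketch comparing the Rec 709 transfer curve with a generic log curve. The log formula below is purely illustrative - every manufacturer (S-Log, LogC, Blackmagic Film and so on) uses its own math:

```python
import math

def rec709_oetf(x):
    """Rec 709 opto-electronic transfer function: scene light 0-1 -> code value 0-1."""
    return 4.5 * x if x < 0.018 else 1.099 * x ** 0.45 - 0.099

def generic_log(x, c=500.0):
    """An illustrative log curve - not any real camera's LOG formula."""
    return math.log10(1 + c * x) / math.log10(1 + c)

for scene in (0.01, 0.05, 0.18, 0.5, 1.0):   # scene-linear values, 0.18 = middle grey
    print(f"{scene:>5}: Rec 709 {rec709_oetf(scene):.2f}   LOG {generic_log(scene):.2f}")
# The log curve spreads shadow and mid-tone detail across far more code values,
# which is why LOG footage looks flat and washed out until it is graded.
```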

Rec 709 vs. LOG

Here we’re comparing the Video mode (Rec 709) and the LOG-like Film mode from the Blackmagic Cinema Camera - when corrected, we can recover much of the detail from the blown-out windows.