When doing those experiments with the lightbulb - I just eyeballed what I thought was “sharp enough”. Mathematically speaking there is only one point in space where the image is perfectly sharp. But our eyes and our recording medium have some leeway - and that leeway is called the **circle of confusion**.

Let’s imagine a single point of light as it travels through the lens. When that point of light emanates from the plane of focus, the lens converges its rays into a cone that lands on the sensor as a single point. As the light source moves forward or backward, the cone shifts in space, and what hits the imaging sensor is no longer a single point of light but a spot of light.

The question is **how big can that spot of light be before it becomes noticeable as a blur?** And that’s where the Circle of Confusion comes in. The Circle of Confusion is the **maximum size a spot of light can be while remaining indistinguishable from a single point to the final viewer** - bigger than the circle of confusion and we see a blur - smaller and we see what looks like a focused single dot.

The mathematical equation for the size of the Circle of Confusion is:

CoC (mm) = viewing distance (cm) / desired final-image resolution (lp/mm) for a 25 cm viewing distance / enlargement / 25

As a shortcut, many use the “Zeiss formula” - calculated as d/1730 where d is the diagonal measure of the original image (the camera format). For full-frame 35 mm format (24 mm × 36 mm, 43 mm diagonal) this comes out to be 0.025 mm. A more widely used CoC is d/1500, which yields about **0.029mm for a Full Frame** 35mm sensor and about **0.018mm for an APS-C sensor**.
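Both versions of the calculation are easy to sanity-check in code. Here’s a minimal Python sketch of the two formulas above - the function names are my own, and the 8× enlargement / 5 lp/mm example numbers are just illustrative assumptions:

```python
def coc_from_viewing(viewing_distance_cm, resolution_lp_mm, enlargement):
    """CoC (mm) = viewing distance (cm) / resolution (lp/mm at 25 cm)
    / enlargement / 25."""
    return viewing_distance_cm / resolution_lp_mm / enlargement / 25

def coc_shortcut(diagonal_mm, divisor=1730):
    """Shortcut: CoC = format diagonal / divisor
    (1730 for the 'Zeiss formula'; 1500 is also common)."""
    return diagonal_mm / divisor

# Full frame: 24 x 36 mm, ~43 mm diagonal
print(round(coc_shortcut(43), 3))         # ~0.025 mm (d/1730)
print(round(coc_shortcut(43, 1500), 3))   # ~0.029 mm (d/1500)

# Viewing-based version: 25 cm viewing distance, 5 lp/mm, 8x enlargement
print(coc_from_viewing(25, 5, 8))         # 0.025 mm
```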

Cinematographers projecting film onto a big screen use a slightly different set of numbers - the ASC Manual puts the circle of confusion of **35mm film (which is actually about the same size as an APS-C sensor) at 1/1000th of an inch or 0.025mm** and 5/10,000th of an inch or **0.013mm for 16mm film**.

This number can then be plugged into the (rather nasty) depth of field equations to derive depth of field charts.
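For the curious, here’s one common form of those equations sketched in Python - first the hyperfocal distance, then the near and far limits. The 50 mm / f/2.8 / 3 m example values are my own illustration, not from the charts:

```python
def depth_of_field(f_mm, N, s_mm, coc_mm):
    """Return (near, far) focus limits in mm.
    f_mm: focal length, N: f-number, s_mm: subject distance,
    coc_mm: circle of confusion."""
    H = f_mm**2 / (N * coc_mm) + f_mm                      # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float('inf')
    return near, far

# 50 mm lens at f/2.8, focused at 3 m, full-frame CoC of 0.029 mm
near, far = depth_of_field(50, 2.8, 3000, 0.029)
print(near, far)   # roughly 2.7 m to 3.3 m
```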

But those numbers are for celluloid film and very high megapixel cameras. In the digital video world, the sensor is sectioned off into pixels, and those pixels put a lower limit on the size of our circle of confusion. And if you’re still a little confused about the whole circle of confusion thing, this little light experiment might clear things up a bit.

Going back to our basic single element lens - I replaced the lightbulb with a flashlight. Now imagine the grid on the paper to be the pixels on the camera sensor. If the light cone falls inside one pixel box, that pixel will activate - it doesn’t matter how small the light cone is; there is no way to capture anything smaller. So in **this experiment,** the width of one pixel acts as our circle of confusion.

Here the light is focused at 18 inches. As I move the flashlight forward, the light cone gets bigger and starts to spill over into the pixels around it. Now we’re getting a blur. The near limit is somewhere around 11 inches and the far limit around 30 inches, which gives us a depth of field, at this sensor pixel size, of about 19 inches.

Now watch what happens as we reduce the size of the pixel - thereby reducing the circle of confusion.

Our focus is still at 18 inches but our near is only 13 inches and far is 21 inches - giving us an 8 inch depth of field.

Going even tighter with the pixel grid - we get a near of 14 inches and a far of 19 inches - giving us a 5 inch depth of field.

So as we increase the resolution and reduce our circle of confusion - we make the depth of field shallower. This may be rather intuitive. It’s really hard to see what’s in focus when you're looking at a standard definition monitor but if you look at an HD monitor you'll see all the focusing problems.

But there’s one more takeaway… imagine we’re shooting an HD image - 1920x1080. As we step down in sensor size from Full Frame to APS-C (which is close to Super 35mm), or even down to Micro 4/3rds (whose diagonal is about half that of full frame), our depth of field actually gets *shallower*.

Let me repeat that because it’s a big point. **Given identical lenses, apertures and same-sized prints, the smaller sensor with its smaller circle of confusion will create a shallower depth of field. Smaller sensors have shallower depth of field.**

Even though we set the circle of confusion to the same size as the pixel in our experiment, this holds even at the much higher resolutions used in photography, where the pixel is MUCH SMALLER than the circle of confusion. Remember that the Circle of Confusion is what we consider "sharp enough" - because a smaller sensor has to be enlarged *more* to get the same size print, the tolerance for what is "sharp enough" is tighter - and therefore the depth of field is shallower!
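You can see the whole argument play out numerically. This sketch reuses the standard depth of field equations with the same (assumed) 50 mm lens at f/2.8 focused at 3 m, changing only the CoC between the full-frame and APS-C values from earlier:

```python
def dof_total(f_mm, N, s_mm, coc_mm):
    """Total depth of field (far - near) in mm for the given
    focal length, f-number, subject distance, and CoC."""
    H = f_mm**2 / (N * coc_mm) + f_mm
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm)
    return far - near

# Same lens, same aperture, same subject distance - only the CoC changes
ff = dof_total(50, 2.8, 3000, 0.029)     # full frame CoC
apsc = dof_total(50, 2.8, 3000, 0.018)   # APS-C CoC (more enlargement)
print(ff > apsc)   # True: the smaller sensor's tighter CoC gives less depth of field
```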

This is a constant source of camera forum debate, but if we follow the physics and the definitions of depth of field, the conclusion is clear. And we’re not done yet - two different sensors will also give different **fields of view**, which we'll address when dealing with crop factor and lens equivalency.