Diffraction-Limited Effective Resolutions
Sometimes I like, or need, to turn things on their heads and look at them from a different angle. Most photographers have likely come across the concept of the diffraction-limited aperture at some point while researching lenses.
The same concept can be flipped around to compute the maximum effective resolution a given aperture can produce on a given size sensor.
I would note there’s a certain amount of fuzziness to this, as the standard diffraction-limited aperture calculation isn’t necessarily accurate in practice. It’s not wrong, per se. What it does is work on the fundamental assumption that the sensor’s actual resolving power is equal to its pixel pitch. That holds true for a monochrome sensor, a Foveon-style sensor that stacks all the colors for each pixel vertically, or a 3-chip system, and in each case only without an optical low-pass filter.
On the other hand, Bayer pattern sensors, and virtually all sensors with optical low-pass filters, can’t actually resolve at their native resolution. The Bayer pattern alone reduces the usable resolving power to somewhere between the native resolution and half of it, depending on how good the debayering algorithm is. With a lower actual resolving power, the f-number at which diffraction becomes a problem increases.
Which brings me to this tool: instead of computing the diffraction-limited aperture, I’m computing the effective resolution of a given aperture on various sensor sizes for red, green, and blue light. If the “image resolution” is higher than the resolution of your camera, then you’re not diffraction limited. If it’s lower, then you are.
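The calculation itself is simple enough to sketch in a few lines of Python. This is my reconstruction, not the tool’s actual source: it assumes the resolvable spot spacing is the Rayleigh criterion, 1.22 × wavelength × f-number, and counts how many such spots fit across the sensor. The tool itself may define the spot size slightly differently.

```python
# Sketch of a diffraction-limited effective-resolution calculation.
# Assumption: one resolvable "pixel" has an edge length equal to the
# Rayleigh spacing, 1.22 * wavelength * f-number.

def effective_megapixels(f_number, sensor_w_mm, sensor_h_mm, wavelength_nm=550):
    """Approximate diffraction-limited resolution in megapixels."""
    spot_um = 1.22 * (wavelength_nm / 1000.0) * f_number  # spot size in microns
    px_w = sensor_w_mm * 1000.0 / spot_um  # resolvable spots across the width
    px_h = sensor_h_mm * 1000.0 / spot_um  # resolvable spots across the height
    return px_w * px_h / 1e6

# APS-C (~23.6 x 15.7 mm), green light (550 nm):
print(f"f/8:  {effective_megapixels(8, 23.6, 15.7):.1f} MP")
print(f"f/11: {effective_megapixels(11, 23.6, 15.7):.1f} MP")
```

With these assumptions, green light on an APS-C sensor works out to roughly 13 MP at ƒ8 and about 7 MP at ƒ11, in line with the figures discussed in the comments below.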
Comments
Hi, and thank you for making these tools available.
I was curious how you are calculating the LF 4×5 and 8×10 information at the bottom of the tool? I.e., does your methodology assume a Bayer-type sensor for those as well? The reason I ask is that I was wondering how much LF sheet film could capture in theory.
The listed resolutions assume a “monochromatic sensor” with square pixels with an edge length equal to the diameter of the diffraction spot calculated for the selected wavelength of light. There’s no compensation for Bayer, or any other color-sampling pattern, at all.
The discussion of Bayer patterns is there largely because I don’t consider a color-sampling pattern at all. So practically, you’d need to factor that in when figuring out the real-world sensor resolution that would be needed for a given calculated MP.
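As a rough sketch of what that factoring-in might look like: the article notes that a Bayer sensor’s usable resolving power falls somewhere between its native resolution and half of it, so dividing the calculated MP by an assumed demosaicing efficiency gives a ballpark for the native resolution required. The efficiency value here is a hypothetical fudge factor, not anything the tool uses.

```python
# Hypothetical adjustment for Bayer demosaicing loss. The efficiency
# factor (0.5 .. 1.0, per the article's range) is an assumption.

def native_mp_needed(diffraction_mp, demosaic_efficiency=0.7):
    """Native Bayer-sensor megapixels needed to resolve diffraction_mp."""
    return diffraction_mp / demosaic_efficiency

# e.g. a worst-case demosaicer (0.5) would need double the native MP:
print(f"{native_mp_needed(12.0, 0.5):.1f} MP native for 12 MP resolved")
```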
This is truly fascinating. Some years ago I worked out (using my eye) that for an APS-C camera, a Canon with about a 10MP sensor, it did not seem useful to go beyond ƒ8 when trying to make a deep-focus landscape shot. ƒ11 was simply less sharp overall, to the point where smaller details in the distance, although they might be in better extended focus and resolve more from the added depth of field, were not rendered any better (actually a bit worse) than at ƒ8, because overall resolution had decreased from the diffraction of the smaller aperture. Your chart indicates that at ƒ8 the APS-C has 12MP of resolution, but at ƒ11 only about 6MP. WOW, I’ve taught this to my students but never knew there was a mathematical proof…