Let's face it, there is very little in photography more abused, confused and downright misleading than the pantheon of babble about sensor size. Articles abound, frequently conflicting, many created with the best of intent, and yet the end result is never clear. In a move reminiscent of Quixote, I will take on that windmill.

Preamble
Light is governed by physics, as are optics, electronics, semiconductors and everything else we encounter, yet information continues to propagate proposing that the laws of physics are not inviolate and that magical "stuff" happens inside a camera. Nope.
Photography is all about capturing light reflected from something that is of interest to the creating artist. In film, we had acetates coated with light-sensitive chemistry that we exposed to the light, then processed into an image, positive or negative, that after work in the darkroom became a print. In digital, we collect photons on photo-sensitive sensors built from millions of orderly deployed pixel-collectors. The information is recorded as a series of bits (this is the digital part), and that information is processed by computers and software to produce an image, first on the screen and then, by further processing, occasionally a print.
The entire purpose of the sensor is to capture what we think we see and store it in a manner such that it can be processed into a representation that is aligned with what we think we saw.
Sensor : A sensor is the opto-electronic collection of ordered photo-receptors that gather photons and represent their characteristics digitally. Light is composed additively of red, green and blue wavelengths, and sensors have filters built in to selectively gather these wavelengths. Then a computer combines the data from all the photo-receptors into a file format that can be further processed into a user-understandable image. There is no image without this post processing, and thus anything we get out of a sensor that we recognize has already been processed by sophisticated computer technology to convert a stream of bits into something that makes sense to the viewer.
Sensor Size : A photo sensor has dimensions. Generally, the larger the physical area of the sensor, the more photo-receptors the manufacturer can lay out on the sensor framework. The theory is that the more photo receptors, the greater the level of fine detail there is to be collected as bits, which should then convert to a higher quality image.
Megapixels : Literally this is millions of photo receptors, with the premise that more is better. This is not necessarily true, but manufacturers and advertisers tend to avoid letting facts cloud up their messages.
Photo Receptor Size : This is usually documented by the scientific people at a manufacturer and then never talked about. Fundamentally, the larger the photo receptor, the greater the luminance range it can record. This is always evolving as manufacturers work to improve the range and ability of photo receptors. As of this writing, the small photo receptors found in a high-density sensor such as Nikon's D810 have the same luminance recording capability as the photo receptors found in sensors with one third the megapixel count made just five years ago. In general, the smaller the photo receptor, the more power that must be fed to it to capture luminance at low levels. The more power fed to a receptor, the greater the probability of electronic "noise".
Noise : In film, noise was synonymous with grain, with grain being the literal size of the silver halide crystal needed to record information. High ISO meant bigger grain than low ISO. There was also a loss of contrast and colour range as ISO increased, but most people equated noise with grain size. In digital, noise is the after effect of pushing more power into the sensor to increase its ability to collect information in conditions of low luminance.
EV or Exposure Value
The concept of EV is a measure of light. As used here, it has nothing to do with ISO, shutter speed or aperture. EV sensitivity is purely a measure of a film or sensor's ability to record light. In film, the chemical composition and grain were changed to increase the EV sensitivity at the low end. In digital, more power is pushed to the sensor photo receptors to increase EV sensitivity. There is a threshold where this becomes a losing proposition, governed by the signal to noise ratio. When the noise becomes so prevalent that its contribution to the signal gets too large for acceptable image quality, the manufacturer sets an EV "floor" for the sensor. The physical sensor may actually have the capability to collect information at a much lower EV value, but the manufacturer makes a business decision that going beyond a certain noise level would impact quality and reputation.
There are two fundamental sensor architectures at time of writing.
CCD or Charge Coupled Device is a sensor type that arrived at the dawn of digital. CCD sensors are characterized by an extremely accurate "look" with superb colour fidelity and tonal range when the luminance level is decent. CCD sensors still exist and are used in the majority of medium and large format digital sensors to this day. They do not handle low luminance levels very well because they need more power applied to the photo receptors, and at a relatively moderate ISO (by today's standards) they exhibit an unacceptable signal to noise ratio. With acceptable luminance, they continue to produce excellent results.
CMOS or Complementary Metal Oxide Semiconductor sensors are the most common in digital cameras today. They tend to require less power in general and have better signal to noise ratios than a CCD sensor at higher ISOs (typically ISO 800 or higher). They too produce excellent photon capture characteristics, but they can also suffer from interference pattern creation, also referred to as moiré. This has typically been combated by introducing an AA or anti-aliasing filter in front of the sensor. This filter reduces the likelihood of interference enormously but will contribute some nominal softening of the overall image because of the layout of the filter. Some people claim that they can see the difference on the LCD on the back of their cameras. This is very nice for them, and I wonder where they get their tights and capes dry cleaned. Some manufacturers choose to offer CMOS sensors without an AA filter. This is a situation of caveat emptor. If the photographer is making images less likely to suffer interference, they may elect to choose a camera without this filter. Post production software has also gotten so very good that many interference patterns can be handled well in post production automatically, but it is important to understand that a manipulation is occurring somewhere if an interference pattern is being corrected, either in front of the sensor, or in the computer.
For the purposes of this article, I will be confining most of the dialogue to sensors defined as Full Frame, Canon Crop, Nikon (et al.) Crop, Micro 4/3 and 1". All other sensor sizes follow the same mathematical rules, but would make the article more complicated than I wish. My Canon user friends should relax because I both know of, and own, Canon APS-H sensor cameras. I did not forget; I simply chose not to include this rarely used sensor size in the article.
A full frame sensor is one that is ostensibly the same size as a negative from a 35mm camera that shot film and had a standard sized gate. I specify this because there were cameras that used 35mm film but did not produce a standard sized image, so cropped capture has existed for a very long time. The capture area of such a sensor is 36mm x 24mm, the same size as the 35mm negative from a standard SLR camera. This produces a 3:2 image ratio, most commonly recognizable in the 4x6 prints we used to get from the photo lab. Please note that M43 sensors use a 4:3 ratio, and other sensor formats use other ratios.
A crop sensor is basically a physically smaller sensor than one built to the full frame area. They exist for cost reasons. In the early days of digital sensors, producing a "full frame" sensor was prohibitively expensive and prone to flaws at the edges. So manufacturers that wanted to be digital simply made smaller, more robust and cheaper sensors. Canon's foray produced a 1.6x crop sensor. All this means is that if a full frame sensor could see 100% of the lens' image circle, the crop sensor only saw 62.5% of its linear dimensions. The lens produced the same image; the sensor simply saw less of it. When Nikon, and most others, built their digital sensors they ended up with a crop factor of 1.5x, meaning that the sensor could see 66.7% of the lens' image circle's linear dimensions. I will talk about image circle shortly, so bear with me for the moment. We call these sensors by the common name APS-C, but note that not all APS-C sensors are the same size. Note the FUNDAMENTAL difference here. The crop sensor is no less capable of recording light; it simply sees less of a full frame image circle, thus a crop sensor is not by size any better or worse than a full frame sensor. This is a common misrepresentation made in advertising, books and by camera store sales people. If you hear this or read it, your BS detector should now be active.
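Those crop-factor percentages are simple arithmetic. A quick sketch (in Python, purely illustrative) shows where they come from, and why it is worth distinguishing linear coverage from area coverage:

```python
# The crop factor is the ratio of full frame linear size to crop sensor
# linear size, so the fraction of the image circle's linear dimension a
# crop sensor sees is 1 / crop_factor, and the fraction of the full
# frame AREA it covers is 1 / crop_factor squared.

def linear_fraction(crop_factor):
    """Fraction of the full frame linear dimensions a crop sensor sees."""
    return 1.0 / crop_factor

def area_fraction(crop_factor):
    """Fraction of the full frame sensor area a crop sensor covers."""
    return 1.0 / crop_factor ** 2

for name, cf in [("Canon APS-C", 1.6), ("Nikon APS-C", 1.5), ("Micro 4/3", 2.0)]:
    print(f"{name}: {linear_fraction(cf):.1%} linear, {area_fraction(cf):.1%} area")
# Canon's 1.6x works out to 62.5% linear (the figure quoted above),
# but only about 39% of the full frame area.
```

The 62.5% and 66.7% figures above are linear; by area, the crop sensors capture considerably less of the image circle.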
A micro four-thirds sensor is a conscious design to achieve a balance between reduced weight and bulk and image quality. A camera that sits on a shelf or in a bag unused because it is too cumbersome is a complete waste of time and money. Many would-be photographers bought very good cameras and lenses and then never used them because they were too bulky, or too heavy, or attracted too much attention. A smaller camera is a less obtrusive camera, and a small mirrorless camera is much quieter than a big DSLR with the echo of mirror box slap. Now sadly, some mirrorless manufacturers don't get that silence is golden and artificially add digital shutter click noise to make their cameras more "real". If you must, fine, but give me a very clear option to tell the darn thing to shut up. Sony, I am speaking to you. M43 is also a consortium design, so you can mix and match lenses between body manufacturers. Sort of. It's important to note that Olympus makes some wonderful glass, but because stabilization is in the camera body, there is no in-lens stabilization, and that's a loss when you mount that lens on a Panasonic Lumix body that expects the stabilization to be in the lens. So while M43 might look like it is completely open, be careful out there. Because the M43 sensor is smaller, and the image circle that encloses it is smaller, the depth of field at any aperture number is greater than on, say, a full frame lens, and this allows for blindingly fast autofocus, or at least it should. More coming up on that.
I've created a graphic that shows the relative sensor size areas for different sensors. The graphic is not to scale but does clearly show the difference in size.
Lenses, as you will have noticed, are round. Their elements are round. The combination of elements and their construction produces an image that is round. This round image is defined as the image circle, and in the classical full frame image model, the image circle is big enough to hold a 36mm x 24mm frame at any angle of rotation. I've included a graphic to show this.
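That geometry is just the Pythagorean theorem: the smallest image circle that holds the frame at any rotation has a diameter equal to the frame's diagonal. A minimal sketch, using the full frame dimensions from above:

```python
import math

def min_image_circle_diameter(width_mm, height_mm):
    """Smallest image circle diameter (mm) that encloses the frame
    at any angle of rotation: the frame's diagonal."""
    return math.hypot(width_mm, height_mm)

print(min_image_circle_diameter(36, 24))  # full frame: about 43.3 mm
```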
A cropped sensor simply sees less of the overall image circle. The focal length of the lens is NOT different, nor is the aperture, nor is the depth of field at any given focus point. Again, if you hear such comments, your BS detector should be active.
Making lenses capable of producing an image circle large enough to cover any rotation of a 36mm x 24mm frame produces lenses of a certain size, and larger elements cost more to make. Thus manufacturers seeking to increase market penetration and share decided to make lenses that produced an image circle only big enough to accommodate their sensor size. Nikon calls their lenses of this type DX. Canon calls theirs EF-S lenses. When we look at the micro 4/3 space occupied predominantly by Olympus and Panasonic and marvel at how small and light their lenses are, it is because they need to produce a considerably smaller image circle. The circle is so much smaller that considerably shorter focal lengths are used to deliver a given angle of view, which in turn changes the depth of field behaviour. This is why it is often so difficult to create a simple equation for DSLR to M43 mirrorless comparisons.
Some makers made it impossible to mount smaller image circle lenses on their full frame bodies, most notably Canon. Others, such as Nikon, permit the mounting but force their full frame sensors into DX mode. While marketed as "customer flexibility", it really exists to prevent a customer from making an image where the edges are horribly vignetted because the image circle doesn't encompass a 36mm x 24mm area. Whether this is customer flexibility, or simply problem avoidance, is the reader's choice to make.
The lens' image circle has everything to do with depth of field, and focal length. My Nikkor 210mm lens for my Sinar 4x5 inch camera has no analogue in a 35mm world. The focal length defines the distance from the optical centre of the lens to the film/sensor plane when focused at infinity. Similarly, a 12mm lens producing an image circle proper for a M43 camera does not share the characteristics of a Nikon 12mm DX lens or a 12mm Canon EF lens. When the image circle size changes, so does EVERYTHING else.
Another area that is frequently misunderstood and miscommunicated is focal length when different sensor sizes are involved.
Focal length and image circle are bound together. What this means is very simple. A 100mm lens built to generate an image circle to provide coverage for a full frame sensor will not produce the same image rendition as a 100mm lens built to generate an image circle for an APS-C sensor. Yes, they both have focal lengths of 100mm, but that is only relative to the sensor that they are creating the image circle for. This is why when you mount that 105mm full frame Micro-Nikkor lens on your D7100 it looks more like 157.5mm from a magnification perspective. It is still a 105mm lens with all the characteristics of a 105mm lens, including depth of field, and perspective compression or expansion. Because the APS-C sensor sees less of the total image circle that the lens is designed to produce, it looks at first glance like a longer focal length. Some people say that the effective focal length is 157.5mm in this scenario. They are WRONG. It looks partly like 157.5mm but has not got the depth of field of a true 157.5mm lens, nor does it have the perspective compression of a 157.5mm lens. When you hear people talk about effective focal length, cut them a bit of slack because this is so poorly understood, but know for yourself that it is incorrect. What the two lenses have is a common angle of view.
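The "common angle of view" point can be illustrated with the standard thin-lens approximation, AOV = 2·arctan(d / 2f), where d is a sensor dimension and f the focal length. A sketch under those assumptions (the 23.6mm DX sensor width below is a typical value, not a figure from this article):

```python
import math

def angle_of_view_deg(focal_mm, sensor_dim_mm):
    """Angle of view in degrees across one sensor dimension,
    thin-lens approximation, focused at infinity."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_mm)))

# 105mm on a Nikon DX (APS-C) sensor, roughly 23.6mm wide
dx = angle_of_view_deg(105, 23.6)
# 157.5mm on a full frame sensor, 36mm wide
ff = angle_of_view_deg(157.5, 36.0)
print(f"{dx:.2f} vs {ff:.2f} degrees")  # nearly identical angles of view
```

The two angles of view agree to within the rounding of the crop factor, which is exactly the equivalence described above; it says nothing about depth of field.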
Angle of View
This is one specification that gets ignored and it is REALLY important, possibly more important than focal length when we own cameras with different sensor sizes.
The reason is simple, when we say we want a wide angle or a telephoto many of us think we are asking for focal length. What we are really asking for is angle of view. I've produced a table for you that lists a series of focal length lenses and what angle of view is produced by a lens of that focal length made for each sensor.
What you can conclude is that for the same focal length, the larger the sensor, the wider the angle of view; conversely, the smaller the sensor, the shorter the focal length required to achieve the same angle of view.
We are often consumed by how "fast" our lenses are, and in general we are referring to maximum aperture. A lens with a maximum aperture of f:/1.4 is much "faster" than a lens with an f:/2.8 maximum aperture. Specifically, the first lens can admit two more EV than the second lens when the aperture is fully open. You may see this as two stops more light, or two stops faster, and you will be correct. An EV is a stop. An opening of f:/2.8 admits the same amount of light on any lens with that aperture opening. You may then conclude that f:/2.8 is always f:/2.8 and be correct. However, when the lens produces a different sized image circle, even though the amount of light transmitted is the same, the depth of field created by the lens at any given aperture is NOT the same. But there is a corollary to that to be explored in the next section.
Depth of Field
What is depth of field? Again, depth of field is a frequently misunderstood concept and very frequently misexplained. Simply put, depth of field is the range of distance from the sensor plane where the subject area will be in focus. We are already aware that large apertures have shallow (small) depth of field and small apertures have more depth of field. This is consistent regardless of focal length, but it is important to understand the following two governing concepts.
Focal length has an impact on depth of field. The shorter the focal length of a lens mounted to a sensor, the more depth of field that will be delivered at any given aperture. Thus a 24mm lens at f:/5.6 will always produce more depth of field than a 300mm lens at f:/5.6. By corollary, longer focal lengths produce less depth of field at any given aperture than a shorter focal length.
But what happens to depth of field at a given focal length and aperture when the image circle size changes? This is where more confusion breeds. The smaller the image circle created by the lens, the more depth of field produced at a given aperture and equivalent focal length. For example a 12mm lens on a M43 sensor that produces an M43 image circle set at f:/5.6 and focused at a specific point will have more depth of field than a full frame lens on a full frame sensor at the same focus point and the same aperture. This is why point and shoot cameras with minuscule sensors have most everything in focus all the time. The sensors are so small that even at medium apertures, most everything is in focus.
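The trend can be sketched with the common approximation DOF ≈ 2·N·c·s²/f², where N is the f-number, s the focus distance, and c the circle of confusion, which shrinks with sensor size. This is a rough model, valid away from macro distances and below the hyperfocal distance, and the c values below are conventional assumptions, not figures from this article:

```python
def approx_dof_mm(f_mm, N, s_mm, c_mm):
    """Approximate total depth of field (mm): 2*N*c*s^2 / f^2.
    Valid when the focus distance is large relative to the focal
    length and small relative to the hyperfocal distance."""
    return 2 * N * c_mm * s_mm ** 2 / f_mm ** 2

# Same angle of view, same f:/5.6, same 3 m focus distance:
ff  = approx_dof_mm(24, 5.6, 3000, 0.030)  # 24mm on full frame, c ~ 0.030mm
m43 = approx_dof_mm(12, 5.6, 3000, 0.015)  # 12mm on Micro 4/3, c ~ 0.015mm
print(ff, m43)  # the M43 combination yields roughly twice the depth of field
```

Halving the focal length quadruples the depth of field, and the smaller circle of confusion claws back half of that, leaving the M43 combination with about double the depth of field at the same angle of view.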
We hear this argument in favour of full frame around the beaten-into-the-ground term "bokeh". The math is simple. The larger the image circle, the smaller the amount of depth of field at any given aperture. So a full frame lens has shallower depth of field at f:/5.6 than a lens built for a crop sensor at f:/5.6 at the same focal length and the same focus distance. Want super shallow depth of field? Go to big openings on lenses with big image circles. And if you mount that full frame lens to a camera with a smaller sensor, like using an EF lens on an EF-S sensored body, the depth of field DOES NOT CHANGE with the sensor size. The image circle has not changed at all; the sensor merely sees less of it. So if you hear that the depth of field does change, your BS detector should be pinging.
In the table below, I list four sensors and a lens of the focal length needed to give the same angle of view for each sensor. As we saw in the Angle of View table, we have to go to shorter focal lengths with smaller sensors to get the same angle of view. This table balances that to show how much depth of field a given focus distance and a given aperture produce when the lenses deliver equivalent angles of view. As we should expect, the smaller the sensor, the more depth of field at a given angle of view, focus distance and aperture.
So What Does This All Mean?
It actually means what we have always known for decades. Lenses that appear at a glance to have the same characteristics may not, depending on the image circle that they are built to create. I have a Nikkor 210mm f:/5.6 lens. It produces an image circle large enough to fully enclose a 4" x 5" sheet of film. The focal length is completely irrelevant to any other camera or any other required image circle. The maximum aperture of f:/5.6 passes as much light as any lens of any focal length or any image circle at f:/5.6 but the 210mm focal length is irrelevant. When you go out and purchase a 55-250 Canon EF-S zoom lens it delivers an image circle just large enough to enclose the space occupied by Canon's crop sensor. The focal lengths relative to that sensor are actually 55-250mm. When you mount a Nikon 70-200 f:/2.8 FX lens on the D7100, the lens was built to produce an image circle for a full frame sensor so the Nikon DX sensor only sees a piece of it as noted earlier. The lens is still a 70mm-200mm lens with the same depth of field and perspective effect, but because of the crop it "looks like" it is a 105-300mm lens.
This is why cinematographers love to mount DSLR full frame lenses on their Super 35 sensor cine cameras. A lens built to deliver an image circle perfect for the Super 35 sensor (pretty close to APS-C) can never have the shallow depth of field of a full frame designed lens at a given aperture. There is a look that a particular lens gives, and it always gives the same look, although you may "see" it differently because the sensor is cropping away part of the image circle.
I shot two quick sequences in the studio. In each case the camera was mounted to a Really Right Stuff BH-55 Ball Head on a Really Right Stuff VS-288 Slider on a Really Right Stuff TL-34 Leg Set. The cameras were in physically the same position for all the shots, the only thing changed was the focal length indicated on the lens barrel. The 1Dx uses a full frame sensor, the 7D Mark II crops into the image circle by a factor of 1.6x.
Sequence 1 - Canon 1Dx, Canon 24-105 all 1/60 f:/8 TTL Flash
Sequence 2 - Canon 7D Mark II, Canon 24-105 all 1/60 f:/8 TTL Flash
We see what we expect to see. The crop sensor appears to increase the focal length for each setting. The important thing to remember here is that IT DOES NOT CHANGE THE FOCAL LENGTH; it crops into the image by the crop factor, in this case 1.6x. The depth of field is identical at 105mm on the lens regardless of whether the sensor is cropped or not, because it is the same lens rendering the same image. The crop sensor simply sees less of it.
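For the record, the "looks like" focal lengths in Sequence 2 are just the lens barrel markings multiplied by the 1.6x crop factor. A sketch of the arithmetic, nothing more:

```python
def ff_equivalent(focal_mm, crop_factor):
    """Full-frame-equivalent focal length: same angle of view,
    NOT a change in the lens's actual focal length."""
    return focal_mm * crop_factor

# The 24-105 on the 1.6x 7D Mark II frames like a 38.4-168 on the 1Dx
print(f"{ff_equivalent(24, 1.6):.1f}-{ff_equivalent(105, 1.6):.1f}")  # 38.4-168.0
```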
Does It Matter, Really?
Of course it matters. A lens built to match the image circle size for the sensor requirement will be lighter and smaller than a lens built to produce a much bigger image circle.
A sensor with 20,000,000 pixels squeezed onto an APS-C sized sensor will have physically smaller photo receptors than a sensor with 20,000,000 pixels squeezed onto a full frame sensor. Larger photo receptors are typically more light sensitive in low EV situations and have better signal to noise ratios. Thus comparing the MP ratings of a crop and a full frame sensor is pretty pointless if you disregard the size of the photo receptors.
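That receptor-size difference is quick to estimate if we assume square pixels spread evenly over the sensor (an idealization; the APS-C dimensions below are typical Nikon-style values, not figures from this article):

```python
import math

def pixel_pitch_um(sensor_w_mm, sensor_h_mm, megapixels):
    """Approximate pixel pitch in microns, assuming square pixels
    spread evenly over the sensor area."""
    pixels_wide = math.sqrt(megapixels * 1e6 * sensor_w_mm / sensor_h_mm)
    return sensor_w_mm * 1000.0 / pixels_wide

ff    = pixel_pitch_um(36.0, 24.0, 20)   # 20 MP full frame
aps_c = pixel_pitch_um(23.6, 15.7, 20)   # 20 MP APS-C (typical dimensions)
print(f"{ff:.2f} um vs {aps_c:.2f} um")  # roughly 6.6 um vs 4.3 um
```

Same megapixel count, but each full frame receptor is roughly half again as wide, and so collects considerably more light.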
One of the big reasons that we see mirrorless cameras such as the micro four thirds lineups taking off is that the lenses are smaller and lighter. So in fact are the camera bodies, partly because the sensor is smaller and requires less space, and partly because the manufacturer doesn't need to make space for the mirror and mirror box. This elimination also allows the rear element of the lens to get much closer to the sensor, and this further reduces size and weight.
Much of the focus being placed by manufacturers is on smaller and lighter bodies and lenses. There is a downward limitation on full frame because of the required image circle, and that's part of the reason we see EF-S and DX and all the other cropped image circle lenses. They are cheaper to make, and can be sold for less, perhaps causing more people to take up the hobby. Those people are possibly less demanding than a serious amateur and definitely less demanding than a professional, and will tolerate lighter, less robust materials. They also will be much more tolerant of optical compromises that keep costs down because they only ever look at pictures on a smartphone or a computer display, which with rare exceptions is so poor they could not see the difference anyway. This is not a slight against hobbyists and amateurs; the smart ones have figured out their use cases and spend accordingly.
Ok Smarty, What Glass Do I Buy?
If you are shooting M43, buy the best lens you can find with the construction you need. All M43 lenses are built to deliver an M43 image circle. That's why you find all manner of "odd" focal lengths. A 9mm must be an ultra wide, right? Nope. It is definitely wide, but has no more angle of view than a full frame 18mm lens, as we have seen in our discussion of angle of view. If you are shooting a crop sensor and have no need to ever use that lens on a full frame sensor and the quality is acceptable, the lens for the crop sensor will be smaller and lighter and likely less expensive than the full frame variant. And this smaller lens can still give you the angle of view that you want when you choose it. If you will shoot a full frame sensored camera, you will buy full frame lenses. You will pay more, and the weight will be greater. However, because this is what most Pros shoot, you will also usually find that the full frame lenses have the best optics, the best focusing helicoids, the best internal stabilization and the best construction. And to reiterate, you will pay more. When the time comes to sell, though, you will also discover that your full frame lenses hold value MUCH better than crop sensor lenses.
What About Third Party Lenses?
Most credible third parties make lenses with image circle delivery for both crop sensor and full frame sensors. The full frame lenses will usually work on a crop sensor body while the crop sensor lenses will not deliver a full frame image circle and may not mount to a full frame body at all. If that sounds just like the manufacturer lenses, it is. If the third party lens delivers the quality you need and the angle of view you need, it may be the perfect choice.
I have read so many varied perspectives on this subject. Some say that it doesn't matter: if all you shoot is a crop sensor, who cares about anything else. Others say that they want to understand the differences, even if doing so may not change their use cases or what they end up buying. This article is for those people; what the long-ago Buffalo tailor used to say in his ads counts for me - "an educated consumer is my best customer". As an educator, a mentor, a coach and a critic, the more you know, the easier it is to work with and help you.
When you understand the different variables, you also see why examining the EXIF data from someone else's image of different subjects with different distances and different settings does absolutely nothing to help you make better images. Use your own EXIF to educate yourself, but don't expect examining someone else's to be of any value at all. Your sensor choice, focal length choice, settings choices, subject choices and distance choices make for a unique situation where you can create your own artwork.