Ok, I lied, this one is gonna be big and spin your head…
I had planned on starting small and working up to more complex concepts, but then I decided to address bit depth as my first topic. So this first real post gives background on the subject, and there's a lot that needs to be said here that I'll refer back to later.
In conversations about photography, we refer to bit-depth or cameras supporting 12, 14, or even 16-bit raw files. Cameras generally all put out 8-bit JPEGs and TIFFs, so we’re talking about raw files when we talk about bit-depth.
What is bit depth? Well, let's start with a description of what every one of us sees on a daily basis: 8 bits.
Every computer/TV screen you see is optimized for 8 bits per channel, in the sRGB color space. Eight bits describe 256 values per channel, from black through the brightest shade of that color. Like so:
This “ramp” of green has a total of 255 values of green and one value for true black. Each “pixel” on your computer screen has three subpixels of red, green, and blue, and each gets assigned one of these values. Black gets its own value so that each subpixel can be completely off.
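If you want to see what that ramp actually is under the hood, it's just a list of numbers. Here's a quick Python sketch (plain tuples, no image library, just to show the values involved):

```python
# Build the 256 values of an 8-bit green ramp: value 0 is true black,
# values 1 through 255 are ever-brighter greens.
ramp = [(0, g, 0) for g in range(256)]  # (R, G, B) triples

print(len(ramp))   # 256 entries total
print(ramp[0])    # (0, 0, 0): true black, all subpixels off
print(ramp[-1])   # (0, 255, 0): the brightest green sRGB can show
```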
Why is this important? Because the human eye can barely distinguish between value 255 and 254 on this ramp. Really, here they are, try it:
Can you see a difference? Can you see which part of this patch is 254 and which is 255? Unless your monitor is severely broken or improperly calibrated, no, you will not see the division between the two colors very clearly. Why is this important? Well, to illustrate what bit depth means. If the above ramp were 16 bits of green, you wouldn’t get a brighter green or a darker black; you’d just get the same range of values chopped up into finer “bits”. A total of 65,536 values, to be precise.
Why on earth would you want that precision? Read on…
First, a word about nomenclature. “8-bit” color in Adobe Photoshop is really 8 bits per channel, for a total of 24 bits of color information (three channels: red, green, blue). 32-bit color on your monitor is actually 24-bit (32 bits is just easier from a programming standpoint). Most output formats (your screen, printer, movie projectors) are only capable of recreating 24 bits, and usually far less than that. Your typical desktop inkjet or laser printer usually doesn’t reproduce blacks, highlights, or purple-blues accurately. Your monitor can’t display some yellow-oranges. Even though the numbers exist for these colors, you’ll never see them in an output format, so 8 bits per channel is enough to make things look pretty on your monitor or in print.
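The per-channel vs. total-bits arithmetic trips people up, so here it is spelled out as a few lines of Python:

```python
# "8-bit" color really means 8 bits per channel, across 3 channels.
bits_per_channel = 8
channels = 3  # red, green, blue

values_per_channel = 2 ** bits_per_channel     # 256 shades per channel
total_bits = bits_per_channel * channels       # 24 bits of color per pixel
total_colors = values_per_channel ** channels  # 16,777,216 possible colors

print(values_per_channel, total_bits, total_colors)
```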
But camera sensors can detect finer gradations than your eye can. Not a lot finer, but a measurable amount. Probably around 12 bits per channel for the current generation of sensors. That’s the equivalent of 4096 values, for those counting.
Why not more? Because of noise, but we’ll come back to that…
So why would we want to capture more colors than our eye can see? Because capture is imprecise, and cameras don’t have computers as complex as our brains to turn the images into something useful. Raw files allow us to record what the sensor sees and sort the rest out later with a much more powerful processor and with our eyes aiding the adjustments. Raw files allow you to use your brain to help the camera do what it cannot on its own.
Here’s a hypothetical example: You take a picture of a grassy field. There are two blades of grass lit by bright sunlight. It’s so bright that you can barely see them when you open the raw file on your computer. What do you do? If you add contrast (make the darker value darker and the lighter value lighter) you’ll be able to tell one blade of grass from the other. Here’s our two pure green colors with copious amounts of contrast added:
All of a sudden you can see that edge now, huh? That could be the difference between 4094 and 4095 in a file with 12 bits of color per channel (a typical raw file). (The values are 255 and 239, by the way.) But in a file with only 8 bits per channel, those two values would end up smooshed into one value of 255, and no amount of processing on a computer would recover that information. In effect, raw files contain more information. That’s part of why they’re so much bigger.
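Here's that smooshing in action, as a small Python sketch. The values and the contrast stretch are hypothetical (a real raw converter does something far more sophisticated), but the quantization step is the standard trick of dropping the four least-significant bits:

```python
def to_8bit(v12):
    """Quantize a 12-bit value (0-4095) down to 8 bits (0-255)."""
    return v12 >> 4  # drop the 4 least-significant bits

# Two barely different bright greens, as a 12-bit raw file records them:
a, b = 4094, 4095
print(to_8bit(a), to_8bit(b))  # both become 255: the distinction is gone

def stretch(v12, low=4000, high=4095):
    """Hypothetical contrast stretch: map [low, high] onto the full 0-4095 range."""
    v = max(low, min(high, v12))
    return (v - low) * 4095 // (high - low)

# Stretch first, *then* quantize: the edge survives.
print(to_8bit(stretch(a)), to_8bit(stretch(b)))  # two distinct values now
```

The order of operations is the whole point: if you convert to 8 bits first, both greens are already the same number and no stretch can ever pull them apart again.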
So there you have it. In theory, raw files contain much more information than the average JPEG or TIFF straight from the camera, letting you recover with post-processing information that might otherwise be lost.
In theory.
…but in practice?
That’s what the next post will be about!
Love the blog! One unnecessary comment: “32-bit color on your monitor is actually 24-bit (32 bits is just easier from a programming standpoint)” the last 8 bits in that set is Alpha; photoshop files also record 256 levels of transparency per pixel. I don’t know if you already knew that and didn’t feel like explaining or what, but figured I’d pipe up just in case. 🙂
Thanks Micah, you just made that so simple there’s no way I can forget it this time. Btw, I’m really glad you have a blog and are laying down the photo-knowledge-law. Yeah!
So is it worth it to shoot in 14 bit over 12? Seems like 12 is good enough…
Indeed, up to the current generation of sensors, it is my belief that there is no tangible difference between 12-bit and anything higher. The analogue information just isn’t there to be recorded by the extra precision that more than 12-bits provides for.
It seems a bit green.
Thanks for a bit of insight. 🙂