You often hear people referring to digital footage as being “4:4:4” or “4:2:0.” If you have no idea what they’re talking about, you’re not alone. In fact, many people who think they understand those terms actually don’t.
What we’re talking about here is called Chroma Subsampling, and there’s a LOT of confusion about this topic. Most of it stems from the fact that there have been two different approaches to chroma subsampling, and both of them are written out the same way: 4:x:x. However, as brevity is the soul of wit, I’m only going to cover the more modern and prevalent system.
Let’s start at the beginning. An electronic image is composed of little squares called pixels. Each pixel can have luminance – luma – which tells the pixel how bright or dark to be, and chrominance – chroma – which tells the pixel what color to be. If you don’t have any chroma data, your image will be grayscale – black and white. But if you don’t have any luma data, you won’t have any image at all.
Now, to have a reasonably good picture, every pixel needs to have its own luma data. But some clever engineers figured out a long time ago that every pixel does NOT need to have its own chroma data. You can save a lot of space by forcing chunks of pixels to share the same chroma sample – basically, to be the same color. And that process is called chroma subsampling.
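The idea above can be sketched in a few lines of Python. This is a toy illustration, not any real codec: it takes a made-up chroma plane and forces each 2×2 block of pixels to share one value (the block average).

```python
def subsample_chroma(chroma, block=2):
    """Average chroma over block x block tiles so each tile shares one value."""
    h, w = len(chroma), len(chroma[0])
    out = [row[:] for row in chroma]
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = [chroma[y + dy][x + dx]
                    for dy in range(block) for dx in range(block)]
            avg = sum(tile) // len(tile)
            for dy in range(block):
                for dx in range(block):
                    out[y + dy][x + dx] = avg  # whole tile is now one color
    return out

# A 4x4 grid of invented chroma values
chroma = [
    [10, 20, 30, 40],
    [10, 20, 30, 40],
    [90, 80, 70, 60],
    [90, 80, 70, 60],
]
print(subsample_chroma(chroma))
# Each 2x2 quad collapses to its average: 15, 35, 85, 65
```

Notice that the output stores a quarter as many distinct chroma values – that's where the space savings come from.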
Different flavors of chroma subsampling are written out using the format “J:a:b.”
The first number, “J,” tells us how many pixels wide the reference block for our sampling pattern is going to be.
Sometimes it’s eight or three, but usually it’s four pixels wide.
The second number tells us how many pixels in the top row get chroma samples, and the third number tells us how many pixels in the bottom row get chroma samples.
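The J:a:b bookkeeping is easy to turn into arithmetic. Assuming the reference block described above – J pixels wide, two rows tall, with every pixel keeping its own luma and both chroma channels (Cb and Cr) getting “a” samples on the top row and “b” on the bottom – you can compute how much data each scheme costs relative to no subsampling at all:

```python
def samples_per_block(j, a, b):
    """Total stored samples in one J-wide, 2-row reference block."""
    luma = 2 * j          # every pixel keeps its own luma sample
    chroma = 2 * (a + b)  # Cb and Cr each get a + b samples
    return luma + chroma

def size_vs_444(j, a, b):
    """Data size relative to 4:4:4, which stores 3 samples per pixel."""
    return samples_per_block(j, a, b) / (3 * 2 * j)

print(size_vs_444(4, 4, 4))  # 1.0   -> no savings at all
print(size_vs_444(4, 2, 2))  # ~0.667 -> two-thirds the data
print(size_vs_444(4, 2, 0))  # 0.5   -> half the data of 4:4:4
```

So before any other compression is applied, 4:2:0 footage already carries only half the raw data of 4:4:4.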
As you can see here, if every pixel in the 4×2 grid gets a chroma sample, there’s actually no subsampling going on, and the scheme is 4:4:4. This is what’s used in high-end HD cameras like the Panavision Genesis and Sony F35.
Now let’s take a look at 4:2:2. Every two pixels on the top row share a chroma sample, and every two pixels on the bottom row share a chroma sample. We’ve definitely lost a lot of detail, but we can still get an idea of the original image. This is the subsampling used in Panasonic cameras that record in DVCPRO HD, and Sony cameras that record in XDCAM HD422, as well as in editing codecs like Apple ProRes 422.
Now let’s take another step down and look at 4:2:0. Our “a” number is still 2, so every two pixels on the top row still share a chroma sample … But the “b” number is zero, which means that the pixels in the bottom row don’t get anything of their own. So, they have to share with whatever’s above them. You can see how much information is lost here. This is the subsampling used in DVCAM, HDV, Apple Intermediate Codec, and most flavors of MPEG, including the ones generated by Canon DSLRs.
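That borrowing can be sketched directly. The snippet below is a simplified, nearest-neighbor reconstruction of one 4:2:0 block – each stored chroma sample is stretched across two top-row pixels, and the bottom row copies the row above, so every 2×2 quad ends up the same color. (Real decoders often interpolate between samples instead of copying, but the sharing is the same.)

```python
def reconstruct_420(stored_top):
    """Expand one row of stored 4:2:0 chroma samples into a full 4x2 block."""
    top = []
    for sample in stored_top:
        top.extend([sample, sample])  # two top-row pixels share each sample
    return [top, top[:]]              # bottom row borrows from the row above

# Two stored samples cover all eight pixels of a 4x2 block
print(reconstruct_420([100, 200]))
# [[100, 100, 200, 200], [100, 100, 200, 200]]
```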
Looking at this diagram, you can see one of the main reasons why formats with heavy chroma subsampling give you blocky artifacts. What you’re seeing is actually chunks of pixels that are sharing chroma data and being forced to be the same color, to save space. And, of course, this isn’t even taking into account the other aspects of image compression, which can make this blockiness even worse.
This really becomes an issue when you talk about pulling a chromakey. Think about trying to pull the green pixels out of a shot of smoke, or wispy hair. It would be fairly easy if each pixel had its own chroma sample.
But it gets much harder when pixels are sharing samples, because the green pixels aren’t necessarily at the exact edge anymore. This is why you get those jagged lines around the edges of chromakeys with subsampled footage.
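Here’s a quick way to see that effect in numbers. The values below are made-up “green-ness” levels along a soft edge, and 128 is an arbitrary key threshold – the point is that once pairs of pixels share a chroma sample (4:2:2-style), the keyed edge can only land on two-pixel boundaries:

```python
# A smooth green falloff across eight pixels (invented values)
full = [255, 255, 200, 120, 40, 0, 0, 0]

# Force each pair of pixels to share one chroma sample (their average)
shared = []
for i in range(0, len(full), 2):
    avg = (full[i] + full[i + 1]) // 2
    shared.extend([avg, avg])

threshold = 128  # arbitrary "is this pixel green?" cutoff
print([v > threshold for v in full])    # edge falls between pixels 2 and 3
print([v > threshold for v in shared])  # edge snaps out to pixels 3 and 4
```

The keyed edge moved by a pixel, and it can only ever move in two-pixel steps – multiply that across every edge in the frame and you get those jagged, stair-stepped key lines.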
Of course, there are a lot of other factors that figure into the quality of an image, and chroma subsampling is only one of them. However, it’s one of the least straightforward to understand, so I hope this has been helpful.