Codec stands for COmpressor/DECompressor. Compressing the data makes it smaller so it's easier to transport, while decompressing makes it bigger again so the data can be viewed and edited in different formats.
When you think of a video file, what is the first thing that comes to mind? .mov? .avi or .wmv? Well, while these are video files, they are what we in the industry call "containers". The containers are what the codecs go into. Common examples of codecs are H.264, ProRes, and DNxHD. Let me explain what these are for.
Types of codec
First there is the capture codec, which is commonly H.264. The capture codec takes what the camera records and processes it into a viewable image. So when you are shooting a scene, the camera is doing something like 200 different things just so you can view the shot once you stop recording.
Then there is your editing codec, which from the list above is normally DNxHD. When editing, you usually want speed over quality so that it doesn't take FOREVER to render out all of the images just to get a small fragment for the edit. It also helps your export speed, especially if you are exporting to an HD format like 720p or 1080p.
After you export your video, two different things can happen with the video:
It can be converted into a delivery codec like H.264, which is very compatible with DVDs and the Internet. It strikes a balance between speed and quality so that the video still looks good but doesn't take a decade to load.
Or
It can be archived. The most common archival codec is also DNxHD, but tuned to prefer quality over speed. You really want to archive the video at a better quality or larger format than you'll ever need, so that if the client needs the video again you can give them any combination of size and quality without having to go back and re-export the video to get the right file size.
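The four roles above can be boiled down to a small lookup table. This is just a sketch of the workflow described in this post (the names and structure are mine), not an exhaustive list of codec options:

```python
# Map each stage of the video workflow to a typical codec choice,
# as described above. "priority" notes what each stage optimizes for.
CODEC_ROLES = {
    "capture":  {"codec": "H.264", "priority": "efficient in-camera recording"},
    "editing":  {"codec": "DNxHD", "priority": "speed over quality"},
    "delivery": {"codec": "H.264", "priority": "balance of speed and quality"},
    "archival": {"codec": "DNxHD", "priority": "quality over speed"},
}

def codec_for(stage):
    """Return the typical codec for a workflow stage."""
    return CODEC_ROLES[stage]["codec"]

print(codec_for("editing"))   # DNxHD
print(codec_for("delivery"))  # H.264
```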
BIT DEPTH
Ok so bit depth is a bit more complicated... ish. Bit depth is the number of values between light and dark. So if I have a gradient from white to black, there is a set number of steps in that gradient. The higher the bit depth, the better the quality; lower the bit depth and you will start to see bands appear in the gradient.
That is because the computer now has to fit all of those shades into a smaller number of categories.
The most common bit depth to record in is 8-bit. That is really 2^8, which equals 256. And each of the three channels in the RGB color range gets its own 256 values to work with.
2^8= 256
/ | \
R G B
But then there is 10-bit.
2^10= 1024
 / | \
R  G  B
Which is 4x more values per channel than 8-bit can capture. Now, you really don't need that much for most work; it's just unnecessary to have that much color. Don't get me wrong, more colors are all well and good, but 8-bit does just fine and can produce amazing shots and colors.
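Here's the arithmetic from the diagrams above worked out in a few lines (nothing codec-specific, just the powers of two):

```python
def levels(bit_depth):
    """Number of values per color channel for a given bit depth."""
    return 2 ** bit_depth

print(levels(8))                 # 256 levels each for R, G, and B
print(levels(10))                # 1024 levels per channel
print(levels(10) // levels(8))   # 10-bit has 4x the levels per channel
print(levels(8) ** 3)            # 16777216 total RGB combinations at 8-bit
```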
Chroma Subsampling
Alright, so chroma subsampling is when you tell individual pixels to take the color of another pixel while keeping their own brightness. There are three different ratios that are used: 4:4:4, 4:2:2, and 4:2:0.
With 4:4:4 there is no sub-sampling at all.
[ ] [ ]
[ ] [ ]
All the pixels keep their own colors and brightness. So this one is very easy to understand.
When using 4:2:2 we start out with 4 individual pixels
[ ] [ ]
[ ] [ ]
And as we sample these pixels, we basically throw away the color from two of the pixels; each one keeps its own brightness but takes on the color of its neighbor.
[ ] <---[x]
[ ] <---[x]
So when you 4:2:2 sample, those two pixels take their new color from the other two, causing a sort of fade between the two colors. You can't really tell any difference between 4:4:4 and 4:2:2 unless you zoom in to the pixel level and look closely.
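Here's a tiny sketch of the 4:2:2 idea on one row of pixels, where I'm pretending each pixel is a simple (brightness, color) pair. Real codecs work in YCbCr with separate chroma planes (and usually average the pair's chroma rather than copying one), so treat this as the concept only:

```python
def subsample_422_row(row):
    """4:2:2 sketch: brightness stays per-pixel, color is shared across
    each horizontal pair. Assumes an even number of pixels in the row."""
    out = []
    for i in range(0, len(row), 2):
        luma_a, chroma_a = row[i]
        luma_b, _ = row[i + 1]          # this pixel's own color is thrown away
        out.append((luma_a, chroma_a))
        out.append((luma_b, chroma_a))  # keeps its brightness, borrows color
    return out

row = [(90, "red"), (40, "blue"), (60, "green"), (10, "yellow")]
print(subsample_422_row(row))
# [(90, 'red'), (40, 'red'), (60, 'green'), (10, 'green')]
```

Notice every pixel kept its brightness value; only half the color values survived, which is where the space savings come from.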
Spatial compression
This is when either the capture or edit codec runs an algorithm that basically cuts an image up into blocks to save space. So if you have one clip with a lot of colors, the algorithm will find all the colors that match within a certain space, box them off, and save them separately.

If you have a lot of colors this can be bad, because some of these algorithms allow an unlimited number of boxes. That means the boxes can get smaller and smaller and there can be millions of them; because they are so small they don't affect the image at all, but they take up A LOT of space.

Then there are algorithms with a finite number of boxes. These boxes can become quite large and will show on the image. But having a finite number of boxes can be good: if the image doesn't have very many colors, or has large areas of solid color, it can save space by compressing just one or two boxes instead of looking for very small differences in color.
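Here's a toy version of the block idea, assuming a grayscale image stored as a list of rows: split it into fixed-size tiles and store a uniform tile as a single value instead of every pixel. This is a simplification for illustration, not an actual codec algorithm:

```python
def compress_blocks(img, block=2):
    """Split img into block x block tiles; store a solid-color tile as one
    value, otherwise keep all its pixels. Assumes the image dimensions
    divide evenly by the block size."""
    tiles = []
    for y in range(0, len(img), block):
        for x in range(0, len(img[0]), block):
            pixels = [img[y + dy][x + dx]
                      for dy in range(block) for dx in range(block)]
            if len(set(pixels)) == 1:      # solid color: one value covers it
                tiles.append(("solid", pixels[0]))
            else:                          # detailed: keep every pixel
                tiles.append(("raw", pixels))
    return tiles

flat = [[5, 5, 5, 5],
        [5, 5, 5, 5]]
print(compress_blocks(flat))  # two solid tiles instead of eight pixels
```

A flat image collapses to a couple of tile records, while a detailed image still has to store every pixel, which mirrors the trade-off described above.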
Bit rate
Let's move on to bit rate. Bit rate is how much data a certain codec uses at a time, which can be expressed in bits per second (b/s).
Let's say that you start with an 8 Mb/s video and you want to see how many megabytes that would be after it's rendered out.
Well, we know that 1 byte = 8 bits, so we can convert:

8 Mb/s / 8 = 1 MB/s

(the capital B meaning bytes)
And there are 60 seconds in a minute, so we multiply 1 MB/s by 60:

1 MB/s x 60 s = 60 MB per minute
Let's say you want the video to be 5 minutes long, so we would again multiply, but by 5 this time:

60 MB/min x 5 min = 300 MB
So because you have an 8 Mb/s bit rate, after everything is saved and exported you will have a 300 MB video.
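The whole calculation fits in a couple of lines (the function name is mine, just for illustration):

```python
def video_size_mb(bitrate_mbps, minutes):
    """File size in megabytes for a bit rate in megabits/second and a
    duration in minutes."""
    megabytes_per_second = bitrate_mbps / 8   # 8 bits per byte
    return megabytes_per_second * minutes * 60

print(video_size_mb(8, 5))  # 300.0 MB, matching the example above
```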
Thanks for reading! I hope I explained everything well enough, but if you are still having trouble understanding, please check out this video. It talks about everything I did, plus a little more, and maybe a little better.