What does 16-bit mean in the computer?
16-bit means that something uses a piece of data that has a length of 16 bits (or 2 bytes in a typical computer).
A 16-bit unsigned integer ranges from 0 to 65535.
A 16-bit signed integer ranges from -32768 to 32767.
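As a small sketch (in Python, with illustrative variable names), both ranges follow directly from the number of bits:

```python
BITS = 16

# Unsigned: all 16 bits store magnitude, so values run from 0 to 2^16 - 1.
unsigned_min = 0
unsigned_max = 2**BITS - 1          # 65535

# Signed (two's complement): one bit pattern is effectively spent on the sign,
# so values run from -2^15 to 2^15 - 1.
signed_min = -(2**(BITS - 1))       # -32768
signed_max = 2**(BITS - 1) - 1      # 32767

print(unsigned_min, unsigned_max)   # 0 65535
print(signed_min, signed_max)       # -32768 32767
```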
2^16 equals 65536
(How to Read Math).
Many things in computers jump from 8 bits (1 byte) to 16 bits (2 bytes), which means what was previously 256 possible values (0 to 255) becomes 65536 (0 to 65535). In some cases, the jump is from 8 bits to 10 bits instead.
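A quick way to see the jump, sketched in Python with a hypothetical value of 300: a number that overflows an 8-bit slot still fits comfortably in a 16-bit one.

```python
value = 300                      # hypothetical value that exceeds 255

as_8_bit = value & 0xFF          # keep only the low 8 bits  -> 44 (wrapped around)
as_16_bit = value & 0xFFFF       # keep only the low 16 bits -> 300 (fits fine)

print(as_8_bit, as_16_bit)       # 44 300
print(2**8, 2**16)               # 256 vs. 65536 possible values
```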
Examples
Color Depth
Images and videos generally have a color depth of 8 bits per color channel, totaling 24 bits for the three RGB channels combined. Although this is sufficient for displaying images, when editing images and videos the mathematical operations applied to the color values accumulate rounding errors that can produce banding artifacts.
The crux of the issue is that each channel holds a value that only goes from 0 to 255 for each pixel. If an effect is supposed to change a color only very slightly, by 0.6 of a level toward red, for example, that change gets rounded to a full +1. When multiple effects are stacked, the error becomes perceptible, especially with effects like blur.
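A minimal sketch of the rounding problem, assuming a made-up chain of ten identical +0.6 adjustments applied to one red channel value:

```python
adjustment = 0.6   # hypothetical tiny per-step change toward red
steps = 10

# 8-bit workflow: the channel must be stored as a whole number after every
# step, so each +0.6 gets rounded up to +1.
red_8bit = 100
for _ in range(steps):
    red_8bit = min(255, round(red_8bit + adjustment))

# Higher-precision workflow: keep the value unrounded while editing,
# and only round once at the very end.
red_float = 100.0
for _ in range(steps):
    red_float += adjustment
red_final = round(red_float)

print(red_8bit)    # 110 -> ten rounding errors stacked up
print(red_final)   # 106 -> the mathematically correct result
```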
Some applications like Photoshop and Krita allow you to work at higher color depths such as 16-bit, which eliminates this problem. The image displayed on screen is still limited by the color depth of the screen, so if your monitor is 8-bit you SEE 8-bit, but the edits are performed in the 16-bit space. Interestingly, there are monitors with a 10-bit color depth, which may not sound like much but is still 4 times more precise than 8-bit.
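As a rough illustration (assuming simple linear scaling and a made-up channel value), here is how a value from the 16-bit editing space maps down to what an 8-bit or 10-bit monitor can actually show:

```python
# Hypothetical red channel value in the 16-bit editing space (0..65535).
red_16bit = 33000

# An 8-bit monitor only has 256 levels per channel (0..255).
red_8bit_display = round(red_16bit * 255 / 65535)    # -> 128

# A 10-bit monitor has 1024 levels per channel (0..1023),
# four times the precision of 8-bit.
red_10bit_display = round(red_16bit * 1023 / 65535)  # -> 515

print(red_8bit_display, red_10bit_display)
```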