If you spend enough time using computers, you'll eventually come across the same numbers over and over. That might make you wonder: why do these numbers show up all the time? What is their significance? In this article, I'll list some of these numbers and explain why they exist.
0, 1, and 2: the typical computer stores data as bits, which are "binary digits." This means each bit encodes a value that is either 0 or 1. We can also say that a maximum of 2 choices exist: 0 or 1.
The number 0 is especially significant, for multiple reasons:
0 is falsy in all programming languages I know of.
0 as a memory address is (typically) the null pointer, which never points to a valid memory address. Note: some implementations of null can use a different address than zero.
0 is a numeric ID that is (typically) never assigned in a database. In an RDBMS table with an automatically incrementing primary key, the first entry gets the ID 1, never 0. Many ORMs make use of this concept: the ID of a record that has not been stored in the database is set to 0, so whether the ID is 0 or not can be used to determine which records have been persisted and which have not.
0 is the index of the first item in an array in most languages (except, notably, Lua). An array as a data structure is just a region of memory where items are tightly packed, so the first item sits right at the start of the array. Array access works by adding an offset in bytes to the array's starting address: to get the address of the 2nd item, you calculate (start address) + (size of an item in bytes); for the 3rd item, you add the size multiplied by 2; for the 4th, the size multiplied by 3, and so on. Most programming languages decided it was too confusing to subtract 1 from "4th" to get the multiplier 3, so instead they start indexing from 0, and the offset is simply the index times the item size. The 1st item becomes the 0th item (see the sketch after this list).
The 0th item is guaranteed to exist in any non-empty sequence indexed from zero, so it's possible for this first item to be treated specially in some programs. For example, if you have several different color palettes of different sizes, some with 16 colors, some with 256 colors, all of them will have a zeroth color. You could have a program that assumes the zeroth color is always the transparent color.
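Here's a minimal C sketch of the indexing idea above (the array and its values are made up for illustration): indexing with arr[i] is the same as adding i times the item size to the array's starting address.

```c
#include <stdio.h>

int main(void) {
    int arr[4] = {10, 20, 30, 40}; /* each item is sizeof(int) bytes wide */

    /* the 3rd item lives at (start address) + 2 * (item size in bytes) */
    int *third = (int *)((char *)arr + 2 * sizeof(int));

    printf("%d %d\n", arr[2], *third); /* prints "30 30": arr[2] is the same item */
    return 0;
}
```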
8: a single bit can't encode enough data for the majority of operations you'd like to do with data. Computers are optimized to work with packs of bits instead. The smallest pack the typical computer can use is the byte, which (typically) is equal to 8 bits. This is sometimes called an octet of bits, e.g. 01010011. Octets are sometimes written with a space after the fourth digit: 0101 0011.
256: the number of permutations in an octet. That is, if you count how many uniquely different bit sequences you can have with 8 bits (0000 0000, 0000 0001, 0000 0010, and so on), the total comes to 256. This can be calculated as 2^8 (How to Read Math).
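A tiny C sketch of that count, just as a sanity check (the enumeration is only for illustration):

```c
#include <stdio.h>

int main(void) {
    /* enumerate every distinct 8-bit pattern and count them */
    int count = 0;
    for (unsigned pattern = 0x00; pattern <= 0xFF; pattern++)
        count++;

    printf("%d\n", count);  /* 256 */
    printf("%d\n", 1 << 8); /* 2^8, the same number */
    return 0;
}
```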
Notably, some pixel art is limited to 256 colors because it uses an 8-bit palette, inspired by retro video games that were similarly limited on old hardware. The Super Nintendo, for example, could display only 256 colors on the screen at a time.
From 0 to 255: the range of decimal numbers we can encode using 8 bits. Since we have 256 permutations, and 0000 0000 represents the decimal number zero (the minimum), 1111 1111 represents the number 255 (the maximum). Note that this assumes we're interpreting the bit sequence as a plain binary number (this is also called an unsigned single-byte integer, or uint8). Theoretically, we could interpret the bits however we want.
From -128 to 127: a range of decimal numbers we can encode using one byte, if we want to include negative numbers as well. This is also called a signed single-byte integer, or int8. The most common implementation (two's complement) uses the first bit to indicate the sign: 0 for non-negative numbers (+ sign), 1 for negative numbers (- sign). This means:
0111 1111 is 127 in both uint8 and int8.
1000 0000 is -128 in int8.
1111 1111 is -1 in int8.
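A small C sketch that checks those three patterns by reading the same byte as uint8_t and as int8_t (this relies on the platform using two's complement, which all mainstream platforms do):

```c
#include <stdio.h>
#include <stdint.h>

static void show(uint8_t bits) {
    /* the same 8 bits, read as an unsigned and as a signed number */
    printf("unsigned: %3u   signed: %4d\n", (unsigned)bits, (int)(int8_t)bits);
}

int main(void) {
    show(0x7F); /* 0111 1111 -> 127 unsigned, 127 signed */
    show(0x80); /* 1000 0000 -> 128 unsigned, -128 signed */
    show(0xFF); /* 1111 1111 -> 255 unsigned, -1 signed */
    return 0;
}
```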
Note: 8 bits is more than enough to implement ASCII text encoding (ASCII itself only needs 7); some languages use the term char (or uchar) to refer to an 8-bit structure, from text "character." One byte is also the minimum length of a single code point in UTF-8 encoding.
-1: the number -1 is a special value used to represent a false or undefined result when 0 is itself a meaningful value. For example, in JavaScript, the function indexOf returns the zero-based index of an item found in an array. This means that if the item you search for is the first item, 0 is returned. What happens if the item isn't found? It returns -1.
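Here's a minimal C sketch of the same sentinel pattern (the function name find_index is made up for illustration):

```c
#include <stdio.h>

/* return the zero-based index of `needle`, or -1 if it isn't in the array */
static int find_index(const int *items, int count, int needle) {
    for (int i = 0; i < count; i++)
        if (items[i] == needle)
            return i;
    return -1; /* 0 is a valid index, so "not found" needs a different value */
}

int main(void) {
    int items[] = {5, 8, 13};
    printf("%d\n", find_index(items, 3, 5));  /* 0: found at the first slot */
    printf("%d\n", find_index(items, 3, 99)); /* -1: not found */
    return 0;
}
```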
The number -1 is commonly found in low-level languages like C and Zig that feature strict typing.
If you set an unsigned number to -1 in Zig, that causes an error, because unsigned numbers can't have negative values. However, if you do it in C, the value wraps around: the bits of the signed integer -1 end up stored in the unsigned variable. The bits of -1 would be 1111 1111 for int8, for example. If we take those bits and interpret them as uint8 instead, instead of -1 we get the maximum possible number we can represent in the unsigned structure: 255 (1111 1111). Consequently, using -1 with an unsigned number has the effect of setting it to its maximum value. In Zig, this kind of wraparound is only possible with the explicit wrapping operators (the ones written with a % suffix, such as +% and -%).
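A short C sketch of that effect (the exact maximums assume the usual 8-bit and 32-bit type widths):

```c
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    uint8_t  small = -1; /* wraps around to the maximum 8-bit value */
    uint32_t big   = -1; /* wraps around to the maximum 32-bit value */

    printf("%" PRIu8 "\n", small); /* 255 */
    printf("%" PRIu32 "\n", big);  /* 4294967295 */
    return 0;
}
```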
Fun fact: there's a popular story that in the original Civilization, leaders had a variable for how aggressive they were, and Gandhi's value was so low that decreasing it further caused the number to wrap around to the maximum value instead, supposedly explaining why he uses nukes so much in the game! (The developers have since said this particular bug never existed, but it remains a classic illustration of unsigned wraparound.)
16: the number 16 is 8*2, i.e. it's twice the number of bits you would have in a byte. In general, all sorts of things jump from 8-bit to 16-bit in computers.
For example, the typical digital image or video uses 8 bits per color channel per pixel. This can cause banding when editing the image, especially because of the perceptual gamma used in the default color profile of most image editors. A solution is to use 16 bits per channel instead when editing photos and videos. A static image won't look different on your monitor if it's 8-bit, but edits won't cause artifacts due to compounding rounding errors.
16 is also the number of digits we have when we use hexadecimal numbers (base-16 numbers): 0123456789ABCDEF. It just happens that 16^2 is 256, which means we can represent any number from 0 to 255, and consequently any octet, using two hexadecimal digits. For example: FF is 1111 1111. In some cases, 0b is used as a prefix for binary notation, while 0x is used for hexadecimal notation, e.g. 0xFF == 0b11111111.
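A quick C sketch showing that hex and decimal are just two spellings of the same byte (0b binary literals aren't standard in older C, so the binary form lives in a comment):

```c
#include <stdio.h>

int main(void) {
    unsigned char byte = 0xFF; /* 1111 1111 in binary */

    printf("%u\n", (unsigned)byte); /* 255 in decimal */
    printf("%X\n", (unsigned)byte); /* FF in hexadecimal */
    return 0;
}
```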
65536: the number 65536 equals the total number of permutations of a 16-bit (2-byte) sequence. It equals 2^16.
From 0 to 65535: the range of unsigned numbers we can encode in 16 bits.
From -32768 to 32767: the range of signed numbers we can encode in 16 bits.
10: some monitors use 10 bits per color channel instead of 8 or 16. That gives them 1024 levels per channel instead of 256, four times as many, which reduces visible banding in smooth gradients.
1024: the number 1024 equals 2^10.
1024 is the number of bytes in a kibibyte, the number of kibibytes in a mebibyte, the number of mebibytes in a gibibyte, and so on (see the sketch below).
Some applications, like OpenToonz, make use of a 10-bit palette with 1024 indexed colors instead of an 8-bit palette with 256.
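Here's a small C sketch of how those 1024-based units stack up:

```c
#include <stdio.h>

int main(void) {
    unsigned long long kib = 1024ULL;       /* 2^10 bytes in a kibibyte */
    unsigned long long mib = 1024ULL * kib; /* 2^20 bytes in a mebibyte */
    unsigned long long gib = 1024ULL * mib; /* 2^30 bytes in a gibibyte */

    printf("%llu %llu %llu\n", kib, mib, gib); /* 1024 1048576 1073741824 */
    return 0;
}
```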
24: this is the number of bits in 3 bytes.
In an 8-bit RGB color, each channel (red, green, blue) is encoded as one byte, which means the whole triplet is 24 bits long, i.e. this is the data size of a single pixel. These are often represented in hexadecimal notation, e.g. 0xFF0000 is the color red (255, 0, 0).
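A short C sketch pulling the three channels back out of a 24-bit hex color with shifts and masks:

```c
#include <stdio.h>

int main(void) {
    unsigned int color = 0xFF0000; /* red */

    unsigned int r = (color >> 16) & 0xFF; /* highest byte */
    unsigned int g = (color >> 8)  & 0xFF; /* middle byte */
    unsigned int b =  color        & 0xFF; /* lowest byte */

    printf("(%u, %u, %u)\n", r, g, b); /* (255, 0, 0) */
    return 0;
}
```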
32: this is the number of bits in 4 bytes. This is the data size of an RGBA pixel.
For a long time, we had 32-bit CPUs, so integers were generally 32 bits long.
4294967296: the number of permutations of a 32-bit sequence. That's four billion and something.
Note: the typical computer has a separate memory address for each byte of memory. On a 32-bit CPU, this address is 32 bits long. This means the maximum amount of addressable memory is 4294967296 bytes, i.e. 4 gibibytes (about 4.3 GB). This is the reason why those computers couldn't handle more than 4 GB of RAM (see the sketch below).
Note: IP addresses, specifically IPv4 addresses, use 4 bytes to store the address (e.g. 255.255.255.255), which means there can only be four billion and something unique IPv4 addresses. Considering there are more than 4 billion people on the planet, and many of them have multiple devices connected to the Internet these days, there is a very real danger of literally running out of addresses! IPv6 was designed to solve this by using more bits.
127.0.0.1: this is the loopback IPv4 address, typically aliased "localhost." This is the address used to connect to your own computer when you have both a client and a server running on the same machine and need them to transmit data using Internet technologies for some reason (e.g. you're developing a website on your computer). Data sent to localhost never leaves your PC: it doesn't go out to the Internet and back, it's "looped back" by the computer itself.
0xBAADF00D: this is a 32-bit value used by Microsoft for uninitialized memory. As you can see, you can spell some words using hexadecimal digits and l33t-speak. There are various other examples of this.
64: the number of bits in 8 bytes.
18446744073709551616: the number of permutations of a 64-bit sequence.
128: the number of bits in 16 bytes.
340282366920938463463374607431768211456: the number of permutations of a 128-bit sequence.
Note: IPv6 uses 128-bit addresses, written as quartets of hexadecimal digits separated by colons, e.g. ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff.
::1: the abbreviation for IPv6's loopback address. It equals 0000:0000:0000:0000:0000:0000:0000:0001.
Observations
No science!: most online calculators I tried started converting large numbers to scientific notation, e.g. 3.4028237e+38. The e+38 means 3.4... is multiplied by 10 to the 38th power, which is hard to visualize. They also omit most of the digits! Thankfully, Wolfram Alpha [wolframalpha.com] gave me an actual number instead of this illegible nonsense!