The most commonly used units of data storage capacity are the bit, the capacity of a system that has only two states, and the byte (or octet), which is equivalent to eight bits. Multiples of these units can be formed with the SI prefixes (power-of-ten prefixes) or the newer IEC binary prefixes (power-of-two prefixes).
In 1928, Ralph Hartley observed a fundamental storage principle,[1] which was further formalized by Claude Shannon in 1945: the information that can be stored in a system is proportional to the logarithm of the number N of possible states of that system, denoted log_b N. Changing the base of the logarithm from b to a different number c has the effect of multiplying the value of the logarithm by a fixed constant, namely log_c N = (log_c b) log_b N.
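This logarithmic relationship can be checked numerically. The following Python sketch is illustrative only (the function name is not from any standard library); it computes the capacity of an N-state system in bits and verifies the change-of-base identity above.

```python
import math

# Information capacity, in bits, of a system with n equally likely states.
def bits_of_information(n_states: int) -> float:
    return math.log2(n_states)

print(bits_of_information(8))    # 3.0 -- an 8-state system can hold 3 bits
print(bits_of_information(256))  # 8.0 -- the capacity of one octet

# Change-of-base identity from above: log_c N = (log_c b) * log_b N
N, b, c = 8, 2, 10
assert math.isclose(math.log(N, c), math.log(b, c) * math.log(N, b))
```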
Therefore, the choice of the base b determines the unit used to measure information. In particular, if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states.
When b is 2, the unit is the shannon, equal to the information content of one "bit" (a portmanteau of binary digit[2]). A system with 8 possible states, for example, can store up to log_2 8 = 3 bits of information. Other units that have been named include:
Base b = 3
the unit is called a trit, and is equal to log_2 3 (≈ 1.585) bits.[3]
Base b = 10
the unit is called a ban, decit, or hartley (after Ralph Hartley), and is equal to log_2 10 (≈ 3.322) bits.
Base b = e (the base of natural logarithms)
the unit is called a nat, nit, or nepit (from Neperian), and is equal to log_2 e (≈ 1.443) bits.[1]
The trit, ban, and nat are rarely used to measure storage capacity, but the nat, in particular, is often used in information theory, because natural logarithms are mathematically more convenient than logarithms in other bases.
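For illustration, the conversion factors quoted above can be collected into a small lookup table. The following Python sketch (the function and dictionary names are illustrative, not standard) converts a quantity of information between bits, trits, bans, and nats.

```python
import math

# Size of one unit of each kind, expressed in bits (shannons),
# using the conversion factors quoted above.
UNIT_IN_BITS = {
    "bit": 1.0,
    "trit": math.log2(3),      # ≈ 1.585
    "ban": math.log2(10),      # ≈ 3.322
    "nat": math.log2(math.e),  # ≈ 1.443
}

def convert(amount: float, from_unit: str, to_unit: str) -> float:
    """Convert a quantity of information between the units listed above."""
    return amount * UNIT_IN_BITS[from_unit] / UNIT_IN_BITS[to_unit]

print(convert(8, "bit", "nat"))  # ≈ 5.545 -- one byte expressed in nats
print(convert(1, "ban", "bit"))  # ≈ 3.322 -- one ban expressed in bits
```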
Historically, a byte was the number of bits used to encode a character of text in the computer, which depended on computer hardware architecture, but today it almost always means eight bits – that is, an octet. An 8-bit byte can represent 256 (2^8) distinct values, such as non-negative integers from 0 to 255, or signed integers from −128 to 127. The IEEE 1541-2002 standard specifies "B" (upper case) as the symbol for byte (IEC 80000-13 uses "o" for octet in French,[nb 1] but also allows "B" in English). Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units. Most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes, rather than individual bits.
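As an illustration of the two readings of an 8-bit byte mentioned above, the following Python sketch (assuming the usual two's-complement encoding for signed values; the helper names are illustrative) interprets the same bit pattern as an unsigned and as a signed value.

```python
# The same 8-bit pattern read as an unsigned value (0 to 255) or as a
# two's-complement signed value (-128 to 127).
def as_unsigned(bits: int) -> int:
    return bits & 0xFF

def as_signed(bits: int) -> int:
    value = bits & 0xFF
    return value - 256 if value >= 128 else value

print(as_unsigned(0b11111111))  # 255
print(as_signed(0b11111111))    # -1
print(as_signed(0b10000000))    # -128
print(as_signed(0b01111111))    # 127
```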
A group of four bits, or half a byte, is sometimes called a nibble, nybble or nyble. This unit is most often used in the context of hexadecimal number representations, since a nibble has the same number of possible values as one hexadecimal digit.[7]
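A minimal Python sketch of the nibble/hexadecimal correspondence (the helper name is illustrative):

```python
# Split a byte into its high and low nibbles; each nibble (4 bits) has
# 16 possible values, one for each hexadecimal digit.
def nibbles(byte_value: int) -> tuple[int, int]:
    return (byte_value >> 4) & 0xF, byte_value & 0xF

value = 0xA7
high, low = nibbles(value)
print(high, low)       # 10 7
print(f"{value:02X}")  # A7 -- one hexadecimal digit per nibble
```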
Computers usually manipulate bits in groups of a fixed size, conventionally called words. The number of bits in a word is usually defined by the size of the registers in the computer's CPU, or by the number of data bits that are fetched from its main memory in a single operation. In the IA-32 architecture, more commonly known as x86-32, a word is 32 bits, but other past and current architectures have used words of 4, 8, 9, 12, 13, 16, 18, 20, 21, 22, 24, 25, 29, 30, 31, 32, 33, 35, 36, 38, 39, 40, 42, 44, 48, 50, 52, 54, 56, 60, 64, 72[8] bits, among other sizes.
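The word size of the machine a program runs on can be estimated, loosely, from the pointer width. The following Python sketch illustrates this; note that this is only a proxy, and the architectural definition given above may differ from what a runtime reports.

```python
import struct
import sys

# Pointer width is a common proxy for the native word size: 64 bits on most
# current desktop and server CPUs, 32 bits on older or embedded ones.
print(struct.calcsize("P") * 8)      # e.g. 64
print(sys.maxsize.bit_length() + 1)  # usually matches the pointer width
```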
Computer memory caches usually operate on blocks of memory that consist of several consecutive words. These units are customarily called cache blocks, or, in CPU caches, cache lines.
Terms for large quantities of bits can be formed using the standard range of SI prefixes for powers of 10, e.g., kilo = 10^3 = 1000 (as in kilobit or kbit), mega = 10^6 = 1000000 (as in megabit or Mbit) and giga = 10^9 = 1000000000 (as in gigabit or Gbit). These prefixes are more often used for multiples of bytes, as in kilobyte (1 kB = 8000 bit), megabyte (1 MB = 8000000 bit), and gigabyte (1 GB = 8000000000 bit).
However, for technical reasons, the capacities of computer memories and some storage units are often multiples of some large power of two, such as 2^28 = 268435456 bytes. To avoid such unwieldy numbers, people have often repurposed the SI prefixes to mean the nearest power of two, e.g., using the prefix kilo for 2^10 = 1024, mega for 2^20 = 1048576, giga for 2^30 = 1073741824, and so on. For example, a random access memory chip with a capacity of 2^28 bytes would be referred to as a 256-megabyte chip. The table below illustrates these differences.
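In addition to the table, the difference can be seen with a few lines of arithmetic. The following Python sketch contrasts the power-of-two and strict decimal readings of the capacity from the example above.

```python
# The capacity from the 256-megabyte example, expressed both ways.
capacity = 2 ** 28         # 268435456 bytes

print(capacity / 2 ** 20)  # 256.0 -- "256 MB" in the power-of-two sense (256 MiB)
print(capacity / 10 ** 6)  # 268.435456 -- about 268.4 MB under strict decimal SI prefixes
```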
In the past, uppercase K has been used instead of lowercase k to indicate 1024 instead of 1000. However, this usage was not consistently applied.
On the other hand, for external storage systems (such as optical discs), the SI prefixes are commonly used with their decimal values (powers of 10). Several attempts have been made to resolve the confusion by providing alternative notations for power-of-two multiples. The International Electrotechnical Commission (IEC) issued a standard for this purpose by defining a series of binary prefixes that use 1024 instead of 1000 as the main radix.[9]
The JEDEC memory standard JESD88F notes that the definitions of kilo (K), giga (G), and mega (M) based on powers of two are included only to reflect common usage, but are otherwise deprecated.[10]
256 bytes: page (on the Intel 4004,[23] 8080, and 8086 processors,[41] as well as many other 8-bit processors – typically much larger on many 16-bit/32-bit processors)
^ However, if the SI guideline to include a space before the unit is ignored, the IEC 80000-13 abbreviation "o" for octet can be confused with the postfix "o" used to indicate octal numbers in Intel convention.
^ The ISO/IEC standard is ISO/IEC 80000-13:2008. This standard cancels and replaces subclauses 3.8 and 3.9 of IEC 60027-2:2005. The only significant change is the addition of explicit definitions for some quantities. ISO Online Catalogue
^ a b Steinbuch, Karl W.; Wagner, Siegfried W., eds. (1967) [1962]. Written at Karlsruhe, Germany. Taschenbuch der Nachrichtenverarbeitung (in German) (2 ed.). Berlin / Heidelberg / New York: Springer-Verlag OHG. pp. 835–836. LCCN 67-21079. Title No. 1036.
^ a b Steinbuch, Karl W.; Weber, Wolfgang; Heinemann, Traute, eds. (1974) [1967]. Written at Karlsruhe / Bochum. Taschenbuch der Informatik - Band III - Anwendungen und spezielle Systeme der Nachrichtenverarbeitung (in German). Vol. 3 (3 ed.). Berlin / Heidelberg / New York: Springer Verlag. pp. 357–358. ISBN 3-540-06242-4. LCCN 73-80607.
^ Bertram, H. Neal (1994). Theory of magnetic recording (1 ed.). Cambridge University Press. ISBN 0-521-44973-1. […] The writing of an impulse would involve writing a dibit or two transitions arbitrarily closely together. […]
^ ab"Terms And Abbreviations / 4.1 Crossing Page Boundaries". MCS-4 Assembly Language Programming Manual - The INTELLEC 4 Microcomputer System Programming Manual(PDF) (Preliminary ed.). Santa Clara, California, US: Intel Corporation. December 1973. pp. v, 2-6, 4-1. MCS-030-1273-1. Archived(PDF) from the original on 2020-03-01. Retrieved 2020-03-02. […] Bit - The smallest unit of information which can be represented. (A bit may be in one of two states I 0 or 1). […] Byte - A group of 8 contiguous bits occupying a single memory location. […] Character - A group of 4 contiguous bits of data. […] programs are held in either ROM or program RAM, both of which are divided into pages. Each page consists of 256 8-bit locations. Addresses 0 through 255 comprise the first page, 256-511 comprise the second page, and so on. […] (NB. This Intel 4004 manual uses the term character referring to 4-bit rather than 8-bit data entities. Intel switched to use the more common term nibble for 4-bit entities in their documentation for the succeeding processor 4040 in 1974 already.)
^ a b Raymond, Eric S. (1996). The New Hacker's Dictionary (3 ed.). MIT Press. p. 333. ISBN 0-262-68092-0.
^ Böszörményi, László; Hölzl, Günther; Pirker, Emanuel (February 1999). Written at Salzburg, Austria. Zinterhof, Peter; Vajteršic, Marian; Uhl, Andreas (eds.). Parallel Cluster Computing with IEEE 1394-1995. Parallel Computation: 4th International ACPC Conference including Special Tracks on Parallel Numerics (ParNum '99) and Parallel Computing in Image Processing, Video Processing, and Multimedia. Proceedings: Lecture Notes in Computer Science 1557. Berlin, Germany: Springer Verlag.