Binary system
The binary system has base 2, and its digits are drawn from the set {0, 1}.
The International System of Units and the term byte
In the early days of computing, units were quoted as multiples of 1000, but in the 1960s, 1000 began to be conflated with 1024, since computer memory is organized on a binary basis rather than a decimal one. The problem arose when naming these units, because the prefix names of the International System of Units (SI) were adopted. Given the similarity of the quantities, the base-1000 prefixes that apply to SI units (such as the meter, the gram, the volt, or the ampere) were reused.
However, it is etymologically incorrect to use these decimal-base prefixes to name binary-base multiples, as in the case of the kilobyte: 1024 is close to 1000, but not equal to it.
To clarify the distinction between decimal and binary prefixes, the International Electrotechnical Commission (IEC), a standardization body, proposed in 1998 a separate set of prefixes, formed by contracting the SI prefix names with the word "binary".
Thus, a set of 2^{10} bytes (1024 bytes) should be called a kibibyte (KiB), a contraction of "kilo binary byte".
This convention, expressed in the standards IEC 60027-2 and IEC 80000-13:2008, has been adopted by Apple's "Snow Leopard" operating system and by Ubuntu. Others, like Microsoft, follow the definition found in dictionaries such as Oxford's, keeping the use of "kilobyte" for 1024 bytes.
In computing it has been suggested to use the capital prefix K to distinguish the binary quantity from the decimal one, but this has never been standardized, since the symbol "K" in the SI represents the unit of temperature, the kelvin. Moreover, this suggestion cannot be extended to prefixes of greater magnitude: in the case of MB (megabyte), the SI already uses both the uppercase M (mega: million) and the lowercase m (milli: thousandth).
Information units (of the byte)

| International System (decimal) | Value (SI) | ISO/IEC 80000-13 (binary) | Value (ISO/IEC) |
|---|---|---|---|
| kilobyte (kB) | 10^3 | kibibyte (KiB) | 2^{10} |
| megabyte (MB) | 10^6 | mebibyte (MiB) | 2^{20} |
| gigabyte (GB) | 10^9 | gibibyte (GiB) | 2^{30} |
| terabyte (TB) | 10^{12} | tebibyte (TiB) | 2^{40} |
| petabyte (PB) | 10^{15} | pebibyte (PiB) | 2^{50} |
| exabyte (EB) | 10^{18} | exbibyte (EiB) | 2^{60} |
| zettabyte (ZB) | 10^{21} | zebibyte (ZiB) | 2^{70} |
| yottabyte (YB) | 10^{24} | yobibyte (YiB) | 2^{80} |
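The gap between the decimal and binary columns above widens at each step. A minimal Python sketch (the unit names and values are taken directly from the table) makes the divergence visible:

```python
# Decimal (SI) vs. binary (IEC) multiples of the byte, as in the table above.
SI = {"kB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
IEC = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

# The relative gap grows with each prefix: a TiB is almost 10% larger than a TB.
for (s, s_val), (b, b_val) in zip(SI.items(), IEC.items()):
    print(f"1 {b} = {b_val} bytes = {b_val / s_val:.4f} {s}")
```

Running it shows, for example, that 1 KiB is 1.024 kB while 1 TiB is about 1.0995 TB, which is why disk capacities reported in decimal units look smaller than their binary labels suggest.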
Binary units in computer science
In pure mathematics, a value has no limit on the space needed for its representation; machines, however, generally work with a fixed number of bits.
Bit
The smallest unit of information on a machine is called a bit. With one bit, only one of two possible values can be represented, for example: zero or one, false or true, white or black, down or up, no or yes, etc.
Nibble
A nibble is a collection of 4 bits. It would not be a particularly interesting data type except that a nibble can hold one BCD (binary-coded decimal) digit and can also represent one hexadecimal digit.
Byte
A byte is a collection of 8 bits. In all microprocessors, a reference to a memory location is never smaller than one byte (most use multiples of bytes), so the byte is considered the smallest locatable (addressable) unit of data.
The bits of a byte are normally numbered from 0 to 7. Bit 0 is called the lowest-order or least significant bit; bit 7 is the highest-order or most significant bit.
A byte also consists of 2 nibbles: bits 0, 1, 2, and 3 form the low-order nibble, and bits 4, 5, 6, and 7 form the high-order nibble. Since a byte is made up of two nibbles, any byte value can be represented with two hexadecimal digits.
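The nibble/hexadecimal correspondence can be sketched with two bit operations (the sample value 0xB7 is chosen arbitrarily for illustration):

```python
value = 0xB7  # an arbitrary byte: 183 in decimal, 1011 0111 in binary

low_nibble = value & 0x0F          # mask keeps bits 0-3
high_nibble = (value >> 4) & 0x0F  # shift brings bits 4-7 down to 0-3

# Each nibble maps to exactly one hexadecimal digit of the byte.
print(f"{value:#04x} -> high nibble = {high_nibble:x}, low nibble = {low_nibble:x}")
# → 0xb7 -> high nibble = b, low nibble = 7
```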
Word
A word is a group of 16 bits; bit 0 is the lowest-order bit and bit 15 is the highest-order bit. A word can be divided into 2 bytes, likewise called low- and high-order. A word can also be considered as a group of 4 nibbles.
A group of 32 bits is considered a double word
A quad word is considered a group of 64 bits
Modern computers typically have a word size of 16, 32, or 64 bits. Many other sizes have been used in the past, such as 8, 9, 12, 18, 24, 36, 39, 40, 48, and 60 bits. Some of the early computers were decimal rather than binary, typically having a word size of 10 or 12 decimal digits, and some early computers had no fixed word length at all.
Sometimes the word size is set to a particular value for compatibility with older computers. The microprocessors used in personal computers (for example, the Intel Pentium and AMD Athlon) are an example.
Their IA-32 architecture is an extension of the original Intel 8086 design, which had a 16-bit word size. IA-32 processors still support 8086 (x86) programs, so in the IA-32 context a "word" still means 16 bits, even though today (the default operand size being 32 bits) the processor operates more like a machine with a 32-bit word size. Similarly, in the newer x86-64 architecture, a word is still 16 bits, although 64-bit (quad-word) operands are more common.
Integer numbers
Only a finite set of integers can be represented. For example, with 8 bits we can represent 256 different values. In an unsigned (positive-only) scheme, these values would be numbered from 0 to 255.
It is also possible to use a signed integer scheme; in this case the two's complement system is used, where the highest-order bit is the sign bit: if that bit is zero, the number is positive; if it is one, the number is negative. A positive number is stored in its standard binary form; a negative number is stored in its two's complement form.
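The encoding and decoding described above can be sketched in Python, where integers are unbounded, so the fixed width must be simulated with a mask (the helper names are illustrative, not from any standard library):

```python
def to_twos_complement(n: int, bits: int = 8) -> int:
    """Encode a signed integer into its unsigned two's-complement bit pattern."""
    return n & ((1 << bits) - 1)  # masking maps e.g. -1 to 0b11111111

def from_twos_complement(pattern: int, bits: int = 8) -> int:
    """Decode an unsigned bit pattern back into a signed integer."""
    sign_bit = 1 << (bits - 1)
    # If the sign bit is set, the value is negative: subtract 2^bits.
    return pattern - (1 << bits) if pattern & sign_bit else pattern

print(format(to_twos_complement(-1), "08b"))  # → 11111111
print(from_twos_complement(0b10000000))       # → -128
```

With 8 bits, this scheme covers the range −128 to 127, in contrast to the 0-to-255 range of the unsigned scheme.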
Real numbers
Computer architectures solve the problem of representing real numbers by means of floating-point numbers. A floating-point number is divided into 3 bit fields: sign, significand (mantissa), and exponent.
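The three fields can be inspected directly. A minimal sketch, assuming the IEEE 754 double-precision layout (1 sign bit, 11 exponent bits, 52 significand bits) that Python floats use:

```python
import struct

def float_fields(x: float):
    """Split an IEEE 754 double into its sign, exponent, and significand bits."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))  # reinterpret as 64-bit int
    sign = bits >> 63                      # 1 bit
    exponent = (bits >> 52) & 0x7FF        # 11 biased-exponent bits
    significand = bits & ((1 << 52) - 1)   # 52 fraction bits
    return sign, exponent, significand

# -6.25 = -1.5625 x 2^2, so sign = 1 and the biased exponent is 1023 + 2 = 1025.
print(float_fields(-6.25))
```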
Conversions
Conversion from decimal to binary
The binary representation of a decimal number (the conversion of a number in base 10 to its equivalent in base 2) is calculated by successive division: the number is divided by 2, each resulting quotient is divided by 2 again, and so on until a quotient less than 2 is obtained. The base-2 representation is then the last quotient, followed by the last remainder, followed by the previous remainder, and so on back to the first remainder obtained.
Example: Convert 3737 to binary representation
| Number | Quotient | Remainder |
|---|---|---|
| \frac{3737}{2} | 1868 | 1 |
| \frac{1868}{2} | 934 | 0 |
| \frac{934}{2} | 467 | 0 |
| \frac{467}{2} | 233 | 1 |
| \frac{233}{2} | 116 | 1 |
| \frac{116}{2} | 58 | 0 |
| \frac{58}{2} | 29 | 0 |
| \frac{29}{2} | 14 | 1 |
| \frac{14}{2} | 7 | 0 |
| \frac{7}{2} | 3 | 1 |
| \frac{3}{2} | 1 | 1 |
So we have:
3737_{10} = 111010011001_{2}
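The successive-division procedure above translates directly into a short Python function (the function name is illustrative):

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to binary by repeated division by 2."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))  # collect each remainder
        n //= 2                        # the quotient becomes the next dividend
    # The remainders were collected last-digit-first, so reverse them.
    return "".join(reversed(remainders))

print(to_binary(3737))  # → 111010011001, matching the table above
```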
Convert from decimal to binary with a fractional part
The binary representation of a decimal number with a fractional part is calculated by repeatedly multiplying the fractional part by 2, each time removing the integer part of the result before multiplying again. The process stops when a result with no fractional part is obtained, when the digits begin to repeat periodically (in the case of periodic numbers), or when a number of digits fixed by the machine's precision is reached. The base-2 representation is the integer part converted as before, then the point, and finally the integer parts of the successive multiplications, in order.
Example: Convert 56.75 to binary representation with decimals
| Number | Quotient | Remainder |
|---|---|---|
| \frac{56}{2} | 28 | 0 |
| \frac{28}{2} | 14 | 0 |
| \frac{14}{2} | 7 | 0 |
| \frac{7}{2} | 3 | 1 |
| \frac{3}{2} | 1 | 1 |
So the integer part is:
56_{10} = 111000_{2}
| Number | Result | Integer part |
|---|---|---|
| 0.75 \cdot 2 | 1.5 | 1 |
| (1.5 - 1) \cdot 2 | 1 | 1 |
So the fractional part is:
0.75_{10} = 0.11_{2}
So we have:
56.75_{10} = 111000.11_{2}
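The successive-multiplication procedure for the fractional part can also be sketched in Python (the `max_digits` cutoff stands in for the machine-precision limit mentioned above):

```python
def fraction_to_binary(fraction: float, max_digits: int = 16) -> str:
    """Convert a fraction in [0, 1) to binary digits by repeated doubling."""
    digits = []
    while fraction > 0 and len(digits) < max_digits:
        fraction *= 2
        bit = int(fraction)        # the integer part is the next binary digit
        digits.append(str(bit))
        fraction -= bit            # keep only the fractional part and repeat
    return "".join(digits)

print(fraction_to_binary(0.75))  # → 11, so 56.75 becomes 111000.11
```

Note that many decimal fractions (0.1, for instance) have a periodic binary expansion, which is exactly why the `max_digits` cutoff is needed.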
Convert from binary to decimal
The decimal representation of a binary number would correspond to applying the formula:
b_1\cdot 2^{(n - 1)} + \cdots + b_n \cdot 2^0
Where n is the length of the string and b_i is the value at the i-th position of the string, counting from left to right.
Example: Convert 111010011001 to decimal representation
111010011001_{2} = 1 \cdot 2^{11} + 1 \cdot 2^{10} + 1 \cdot 2^9 + 0 \cdot 2^8 + 1 \cdot 2^7 + 0 \cdot 2^6 + 0 \cdot 2^5 + 1 \cdot 2^4 + 1 \cdot 2^3 + 0 \cdot 2^2 + 0 \cdot 2^1 + 1 \cdot 2^0 = 2048 + 1024 + 512 + 0 + 128 + 0 + 0 + 16 + 8 + 0 + 0 + 1 = 3737_{10}
So we have:
111010011001_{2} = 3737_{10}
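The formula b_1 \cdot 2^{n-1} + \cdots + b_n \cdot 2^0 can be evaluated without precomputing powers of 2, using Horner's scheme (each step doubles the running total, which shifts every earlier digit one binary place up):

```python
def binary_to_decimal(bits: str) -> int:
    """Evaluate b_1*2^(n-1) + ... + b_n*2^0, scanning left to right."""
    total = 0
    for bit in bits:
        total = total * 2 + int(bit)  # doubling shifts prior digits up one place
    return total

print(binary_to_decimal("111010011001"))  # → 3737
```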
Convert from binary to decimal with a fractional part
If the number also has decimals, it will be expressed with the following formula:
b_1\cdot 2^{(n - 1)} + \cdots + b_n \cdot 2^0+b_{n+1}\cdot 2^{-1} + \cdots + b_{n+m} \cdot 2^{-m}
Where n is the length of the integer part of the string, m is the length of the fractional part, and b_i is the value at the i-th position of the string, counting from left to right.
Example: Convert 111000.11 to decimal representation
111000.11_{2} = 1 \cdot 2^5 + 1 \cdot 2^4 + 1 \cdot 2^3 + 0 \cdot 2^2 + 0 \cdot 2^1 + 0 \cdot 2^0 + 1 \cdot 2^{-1} + 1 \cdot 2^{-2} = 32 + 16 + 8 + 0 + 0 + 0 + 0.5 + 0.25 = 56.75_{10}
So we have:
111000.11_{2} = 56.75_{10}
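The two-part formula above, with positive powers for the integer part and negative powers for the fractional part, can be sketched as one function (the function name is illustrative):

```python
def binary_to_decimal_fraction(bits: str) -> float:
    """Evaluate a binary string with an optional fractional part, e.g. '111000.11'."""
    integer_part, _, fraction_part = bits.partition(".")
    # Positive powers: b_1*2^(n-1) down to b_n*2^0.
    value = sum(int(b) * 2 ** (len(integer_part) - 1 - i)
                for i, b in enumerate(integer_part))
    # Negative powers: b_{n+1}*2^-1 down to b_{n+m}*2^-m.
    value += sum(int(b) * 2 ** (-(i + 1))
                 for i, b in enumerate(fraction_part))
    return value

print(binary_to_decimal_fraction("111000.11"))  # → 56.75
```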