Data describe empirical facts, events and entities. They represent the information that the programmer manipulates when constructing a solution or developing an algorithm

Data

A datum is anything that gives us knowledge about something: numbers, letters, symbols, images, sounds...

Data have to be stored in the machine's memory so that it can work with them

The encoding of the data in the machine memory will be based on the type of the data: integer, real, character, etc

Not only data but also instructions are stored

What follows examines how the machine internally represents information

The electronic circuits that make up the machine are only capable of distinguishing between two values (ON/OFF, YES/NO, TRUE/FALSE)

These two values are physically implemented as current flowing / no current flowing

Distinguishing among a greater number of voltage levels would make errors more likely

Because of this, internal encoding methods must use only two symbols: binary encoding

Alphanumeric characters (the set of alphabetic and numeric characters, generally assumed to also include the rest of the characters available in the system, such as graphic and control characters) are represented with codes from a representation table, also called a code table: ASCII, EBCDIC, etc

Thus, depending on the representation code used, we will say that the character A corresponds to a given decimal/binary/octal/hexadecimal code

Decimal System

The decimal system has a base of 10 and is represented by the set {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}

Digit

The name given to each of these symbols

When we combine several digits, we have a number

The value of a number depends not only on the value of each of its digits, but also on the position each digit occupies within the group

The value of any number in base 10 corresponds to applying the formula:

b_1\cdot 10^{(n-1)}+\cdots+b_n\cdot 10^0

Where n would be the length of the string, and b_i, the value corresponding to the i-th position of the string, starting from left to right

Example: Representation of the number 3737 in decimal system

3737=3\cdot 10^3+7\cdot 10^2+3\cdot 10^1+3\cdot 10^0

Representation with decimals

If the number also has decimals, it will be expressed with the following formula:

b_1\cdot 10^{(n-1)}+\cdots+b_n\cdot 10^0+b_{n+1}\cdot 10^{-1}+\cdots+b_{n+m}\cdot 10^{-m}

Where n would be the length of the integer part, m the length of the decimal part, and b_i the value corresponding to the i-th position of the string, starting from left to right

Example: Representation of the number 56.34 in decimal system

56.34=5\cdot 10^1+6\cdot 10^0+3\cdot 10^{-1}+4\cdot 10^{-2}
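
This positional rule is easy to check mechanically; below is a minimal Python sketch (positional_value is an illustrative name, not a standard function) that evaluates a digit string in an arbitrary base:

# Evaluate a digit string positionally: each digit times base^position.
def positional_value(digits, base=10):
    integer, _, fraction = digits.partition(".")
    value = sum(int(d) * base ** i for i, d in enumerate(reversed(integer)))
    value += sum(int(d) * base ** -(i + 1) for i, d in enumerate(fraction))
    return value

print(positional_value("3737"))   # 3737
print(positional_value("56.34"))  # 56.34 (up to float rounding)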

Binary System

The binary system has base 2 and is represented by the set {0, 1}

The International System of Units and the term byte

At the beginning of computing, units were shown as multiples of 1000, but in the 1960s 1024 began to be used interchangeably with 1000, since computer memory works on a binary rather than a decimal basis

The problem was when naming these units, since the names of the prefixes from the International System of Units were adopted

Given the similarity of the quantities, the base-1000 prefixes applied to International System units (such as the metre, the gram, the volt or the ampere) were used

However, etymologically it is incorrect to use these prefixes (decimal base) to name multiples in binary base

This is the case of the kilobyte, even though 1024 is close to 1000

To clarify the distinction between decimal and binary prefixes, the International Electrotechnical Commission (IEC), a standardization group, proposed in 1998 other prefixes, which consisted of abbreviated unions of the International System of Units with the word binary

Thus, a set of 2^{10} bytes (1024 bytes) should be called a kibibyte (KiB), a contraction of binary kilobyte

This convention, expressed in the standards IEC 60027-2 and IEC 80000-13:2008, has been adopted by Apple's "Snow Leopard" operating system and by Ubuntu

Others, such as Microsoft, adopt the definition found in dictionaries such as Oxford's, by maintaining the use of "kilobyte" for 1024 bytes

In the computer environment, it has been suggested to use the capital K prefix to distinguish the binary quantity from the decimal, but this issue has not yet been standardized, since the symbol “K” in the SI represents the unit of temperature, the kelvin

On the other hand, this suggestion could not be extended to other prefixes of greater magnitude given that, in the case of the MB (megabyte), the SI already uses both the uppercase M (mega: million) and the lowercase m (milli: thousandth)

Information units (of the byte)

International System (decimal)    ISO/IEC 80000-13 (binary)
Multiple (symbol)   Value         Multiple (symbol)   Value
kilobyte (kB)       10^3          kibibyte (KiB)      2^{10}
megabyte (MB)       10^6          mebibyte (MiB)      2^{20}
gigabyte (GB)       10^9          gibibyte (GiB)      2^{30}
terabyte (TB)       10^{12}       tebibyte (TiB)      2^{40}
petabyte (PB)       10^{15}       pebibyte (PiB)      2^{50}
exabyte (EB)        10^{18}       exbibyte (EiB)      2^{60}
zettabyte (ZB)      10^{21}       zebibyte (ZiB)      2^{70}
yottabyte (YB)      10^{24}       yobibyte (YiB)      2^{80}

Binary units in computer science

In pure mathematics a value has no limit on the space needed for its representation; machines, however, generally work with a fixed number of bits

Bit

The smallest unit of information on a machine is called a bit

With a bit, only one of two different possible values can be represented, for example: zero or one, false or true, white or black, down or up, no or yes, etc

Nibble

A nibble is a collection of 4 bits

It would not be a very interesting data type except that a nibble can represent a BCD digit and also a hexadecimal digit

Byte

A byte is a collection of 8 bits

In all microprocessors, references to a memory location are never smaller than one byte (most use multiples of bytes), so the byte is considered the smallest addressable unit of data

The bits are usually numbered from 0 to 7

Bit 0 is called the low-order or least significant bit; bit 7 is considered the high-order or most significant bit

A byte also consists of 2 nibbles: bits 0, 1, 2 and 3 form the so-called low-order nibble, and bits 4, 5, 6 and 7 form the high-order nibble

Since a byte is made up of two nibbles, any byte value can be represented with two hexadecimal digits

Word

A word is a group of 16 bits; bit 0 is the low-order bit and bit 15 is the high-order bit

A word can be divided into 2 bytes called, likewise, the low-order and high-order bytes

Also a word can be considered as a group of 4 nibbles

A double word is considered to be a group of 32 bits

A quadruple word is considered to be a group of 64 bits

Modern computers typically have a word size of 16, 32, or 64 bits

Many other sizes have been used in the past, such as 8, 9, 12, 18, 24, 36, 39, 40, 48, and 60 bits

The slab is an example of one of the earliest word sizes

Some early computers were decimal rather than binary, typically having a word size of 10 or 12 decimal digits and some early computers did not have a fixed word length

Sometimes the size of a word is defined to have a particular value for compatibility with older computers

Microprocessors used in personal computers (for example, Intel Pentium and AMD Athlon) are an example

Its IA-32 architecture is an extension of the original Intel 8086 design that had a word size of 16 bits

IA-32 processors still support 8086 (x86) programs, so the meaning of word in the IA-32 context remains the same and is still said to be 16 bits, despite the fact that nowadays (the default operand size being 32 bits) the processor operates more like a machine with a word size of 32 bits

Similarly, in the new x86-64 architecture, a word is still 16 bits, although 64-bit operands (quadruple words) are more common

Integer numbers

Only a finite set of integer numbers can be represented

For example, with 8 bits we can represent 256 different objects

If an unsigned (positive-only) integer scheme were used, each of these objects would be numbered from 0 to 255

It is also possible to use a scheme with negative integer numbers; in this case the two's complement system is used, where the highest-order bit is the sign bit: if that bit is zero the number is positive, and if it is one the number is negative

If the number is positive, it is stored as its standard binary value; if the number is negative, it is stored in two's complement form

Real numbers

Computer architecture solves the problem of representing real numbers by means of floating-point numbers

A floating-point number is divided into 3 groups of bits: sign, significand and signed exponent

Conversions

Convert from decimal system to binary

The binary representation of a decimal number (the transition from a number in base 10 to its corresponding number in base 2) is calculated by successively dividing the number (and then the quotients) by 2, until a quotient less than 2 is obtained

The representation in base 2 will be the last quotient, followed by the last remainder, followed by the previous remainder, and so on, up to the first remainder obtained

Example: Convert 3737 to binary representation

Number Quotient Remainder
\frac{3737}{2} 1868 1
\frac{1868}{2} 934 0
\frac{934}{2} 467 0
\frac{467}{2} 233 1
\frac{233}{2} 116 1
\frac{116}{2} 58 0
\frac{58}{2} 29 0
\frac{29}{2} 14 1
\frac{14}{2} 7 0
\frac{7}{2} 3 1
\frac{3}{2} 1 1

So we have that:

3737_{(10} = 111010011001_{(2}
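
The repeated-division procedure in the table can be sketched in a few lines of Python (to_binary is an illustrative helper; the built-in bin gives the same result):

# Decimal to binary by repeated division; remainders are read in reverse.
def to_binary(n):
    bits = []
    while n > 0:
        n, remainder = divmod(n, 2)
        bits.append(str(remainder))
    return "".join(reversed(bits)) or "0"

print(to_binary(3737))  # 111010011001
print(bin(3737)[2:])    # same result with the built-in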

Convert from decimal to binary with decimals

The binary representation of a decimal number with decimals (the transition from a number in base 10 to its corresponding number in base 2) is calculated by successively multiplying the fractional part (and then the fractional parts of the results) by 2, until a number without decimals is obtained, until a quantity repeats periodically (in the case of periodic numbers), or until reaching a number of digits predefined by the machine precision

The representation in base 2 will be the integer part (converted to binary as above), then the point is added, and finally the integer parts of the results of the successive multiplications

Example: Convert 56.75 to binary representation with decimals

Number Quotient Remainder
\frac{56}{2} 28 0
\frac{28}{2} 14 0
\frac{14}{2} 7 0
\frac{7}{2} 3 1
\frac{3}{2} 1 1

So we have that the integer part is:

56_{(10} = 111000_{(2}

Number Result Integer part
0.75 \cdot 2 1.5 1
(1.5 - 1) \cdot 2 1 1

So we have that the decimal part is:

0.75_{(10} = 0.11_{(2}

So we have that:

56.75_{(10} = 111000.11_{(2}
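
The successive-multiplication step for the fractional part can be sketched as follows (fraction_to_binary is an illustrative helper; max_digits stands in for the machine-precision cutoff mentioned above):

# Fractional part to binary by repeated multiplication by 2,
# collecting the integer part of each product.
def fraction_to_binary(fraction, max_digits=23):
    bits = []
    while fraction and len(bits) < max_digits:
        fraction *= 2
        bit, fraction = divmod(fraction, 1)
        bits.append(str(int(bit)))
    return "".join(bits)

print(fraction_to_binary(0.75))  # 11, i.e. 0.75 = 0.11 in binary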

Convert from binary system to decimal

The decimal representation of a binary number would correspond to applying the formula:

b_1\cdot 2^{(n - 1)} + \cdots + b_n \cdot 2^0

Where n would be the length of the string and b_i the value corresponding to the i-th position of the string, starting from left to right

Example: Convert 111010011001 to decimal representation

111010011001_{(2}= 1 \cdot 2^{11} + 1 \cdot 2^{10} + 1 \cdot 2^9 + 0 \cdot 2^8 + 1 \cdot 2^7 + 0 \cdot 2^6 + 0 \cdot 2^5 + 1 \cdot 2^4 + 1 \cdot 2^3 + 0 \cdot 2^2 + 0 \cdot 2^1 + 1 \cdot 2^0 = 2048 + 1024 + 512 + 0 + 128 + 0 + 0 + 16 + 8 + 0 + 0 + 1 = 3737_{(10}

So we have that:

111010011001_{(2}=3737_{(10}

Convert from binary system to decimal with decimal places

If the number also has decimals, it will be expressed with the following formula:

b_1\cdot 2^{(n - 1)} + \cdots + b_n \cdot 2^0+b_{n+1}\cdot 2^{-1} + \cdots + b_{n+m} \cdot 2^{-m}

Where n would be the length of the integer part, m the length of the decimal part, and b_i the value corresponding to the i-th position of the string, starting from left to right

Example: Convert 111000.11 to decimal representation

111000.11_{(2}=1 \cdot 2^5 + 1 \cdot 2^4 + 1 \cdot 2^3 + 0 \cdot 2^2 + 0 \cdot 2^1 + 0 \cdot 2^0 + 1 \cdot 2^{-1} + 1 \cdot 2^{-2} = 32 + 16 + 8 + 0 + 0 + 0 + 0.5 + 0.25 = 56.75_{(10}

So we have that:

111000.11_{(2}=56.75_{(10}
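
Both formulas can be applied directly in Python (binary_to_decimal is an illustrative helper; int handles the integer part natively):

# Binary to decimal using the positional formula; negative powers of 2
# handle the fractional digits.
def binary_to_decimal(s):
    integer, _, fraction = s.partition(".")
    value = int(integer, 2)
    for i, bit in enumerate(fraction, start=1):
        value += int(bit) * 2 ** -i
    return value

print(binary_to_decimal("111010011001"))  # 3737
print(binary_to_decimal("111000.11"))     # 56.75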

BCD

Binary-Coded Decimal (BCD)

Binary-Coded Decimal (BCD), in computer science, is a standard for representing decimal numbers in the binary system, where each decimal digit is encoded with a sequence of 4 bits

With this special encoding of the decimal digits in the binary system, arithmetic operations such as addition, subtraction, multiplication, and division can be performed

Each decimal digit has a binary representation coded with 4 bits:

Decimal BCD
0 0000
1 0001
2 0010
3 0011
4 0100
5 0101
6 0110
7 0111
8 1000
9 1001

Decimal numbers are encoded in BCD with the bits that represent their digits

Example: BCD encoding of decimal number 59237

Decimal BCD
5 0101
9 1001
2 0010
3 0011
7 0111
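
Digit-by-digit BCD encoding is direct to express in Python (to_bcd is an illustrative helper):

# BCD encoding: each decimal digit becomes its own 4-bit group.
def to_bcd(number):
    return " ".join(format(int(digit), "04b") for digit in str(number))

print(to_bcd(59237))  # 0101 1001 0010 0011 0111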

Definition

In BCD, each digit representing a decimal digit (0, 1, ... 8 and 9) is represented by its four-bit binary equivalent (a nibble or quartet), nine being the highest number that can be represented by a single BCD digit

The following table shows the most commonly used BCD codes:

Decimal Natural Aiken Excess 3
0 0000 0000 0011
1 0001 0001 0100
2 0010 0010 0101
3 0011 0011 0110
4 0100 0100 0111
5 0101 1011 1000
6 0110 1100 1001
7 0111 1101 1010
8 1000 1110 1011
9 1001 1111 1100

BCD uses only 10 of the 16 possible combinations that can be formed with 4 bits, so the system loses representation capacity, although it makes the numbers easier to understand and convert

This is because BCD is used to represent only the digits, not the numbers in their entirety

This means that a number with more than one digit requires more than one BCD code

A simple way to operate on numbers in BCD is to add bit by bit, and if any 4-bit group exceeds the number 9, then 6 (0110) is added to it in binary so it wraps around, as if taking the sum modulo 10

Since computer systems began to store data in sets of eight bits (octets), there are two common ways of storing BCD data:

  • Wasting the upper four bits of each byte (as in EBCDIC, where they are set to a fixed zone pattern)
  • Storing two BCD digits per byte; this is the so-called "packed" BCD, in which the sign is also included first, usually as 1100 for + and 1101 for -

In this way, the number 127 would be represented as (11110001, 11110010, 11110111) in EBCDIC or as (11000001, 00100111) in packed BCD
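
The packed form with a leading sign nibble can be sketched as follows (to_packed_bcd is an illustrative helper that reproduces the 127 example above):

# Packed BCD: sign nibble first (1100 for +, 1101 for -), then one
# nibble per decimal digit, grouped into bytes.
# (Digit counts that do not fill a whole byte would need padding.)
def to_packed_bcd(number):
    sign = "1100" if number >= 0 else "1101"
    bits = sign + "".join(format(int(d), "04b") for d in str(abs(number)))
    return " ".join(bits[i:i + 8] for i in range(0, len(bits), 8))

print(to_packed_bcd(127))  # 11000001 00100111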

BCD is still widely used to store data, in binary arithmetic or in electronics

The numbers can easily be shown on seven-segment displays by sending each BCD quartet to a display

A personal computer's BIOS usually stores the date and time in BCD format, probably for historical reasons, as this avoided the need for conversion into ASCII

The advantage of BCD over classic binary representation is that there is no limit to the size of a number

Numbers represented in binary are generally limited by the largest number that can be represented with 8, 16, 32 or 64 bits

Conversely, using BCD, adding a new digit only involves adding a new 4-bit sequence

Convert from decimal to XS3 (Excess-3)

The conversion of decimal numbers to excess 3 (XS3) is made by successively adding 3 to each digit

The resulting amounts are converted to binary

Example: Convert Decimal 67 to XS3

We take each digit and add 3

Number Result
6 9
7 10

The resulting digits are converted to binary

Digit Binary
9 1001
10 1010

So we have that:

67_{(10} = 1001\hspace{2mm}1010_{(XS3}
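
The add-3-then-encode rule can be sketched as follows (to_xs3 is an illustrative helper):

# Excess-3: add 3 to each decimal digit, then encode the sum in 4 bits.
def to_xs3(number):
    return " ".join(format(int(digit) + 3, "04b") for digit in str(number))

print(to_xs3(67))  # 1001 1010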

BCD in electronics

BCD is very common in electronic systems where a numerical value must be displayed, especially in non-programmed digital systems (without a microprocessor or microcontroller)

Using the BCD code, the manipulation of the numerical data to be displayed is simplified, for example in a seven segment viewer

This in turn leads to a simplification in the physical design of the circuit (hardware)

If the numerical quantity were stored and manipulated in natural binary, the circuit would be much more complex than if the BCD were used

IBM and BCD

IBM used the terms binary-coded decimal and BCD for the six-bit binary code with which numbers, capital letters and special characters could be represented

A variant of BCD was used on most of IBM's early computers, including the IBM 1620 and the IBM 1400

With the introduction of System/360, BCD was replaced by eight-bit EBCDIC

The bit positions in six-bit BCD were generally labeled B, A, 8, 4, 2 and 1

To encode the numeric digits, A and B were zero

The letter A was coded as (B, A, 1), etc

Legal History

In 1972, the U.S. Supreme Court overturned a lower court's decision that had allowed a patent on converting BCD-encoded numbers to binary on a computer

This was one of the first important cases in determining the patentability of software and the algorithms employed in it

Octal System

The octal system has base 8 and is represented by the set {0, 1, 2, 3, 4, 5, 6, 7}

Conversions

Convert from decimal system to octal

The octal representation of a decimal number (the passage of a number in base 10 to its corresponding number in base 8) is calculated by successively dividing the number (and then the quotients) by 8, until a quotient less than 8 is obtained. The representation in base 8 will be the last quotient, followed by the last remainder, followed by the previous remainder, and so on, up to the first remainder obtained

Example: Convert 3737 to octal representation

Number Quotient Remainder
\frac{3737}{8} 467 1
\frac{467}{8} 58 3
\frac{58}{8} 7 2

So we have that:

3737_{(10} = 7231_{(8}
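
The same repeated-division sketch works with divisor 8 (to_octal is an illustrative helper; the built-in oct agrees):

# Decimal to octal by repeated division by 8.
def to_octal(n):
    digits = []
    while n > 0:
        n, remainder = divmod(n, 8)
        digits.append(str(remainder))
    return "".join(reversed(digits)) or "0"

print(to_octal(3737))  # 7231
print(oct(3737)[2:])   # same result with the built-in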

Convert from decimal system to octal with decimals

The octal representation of a decimal number with decimals (the passage from a number in base 10 to its corresponding number in base 8) is calculated by successively multiplying the fractional part (and then the fractional parts of the results) by 8, until a number without decimals is obtained, until a quantity repeats periodically (in the case of periodic numbers), or until reaching a number of digits predefined by the machine precision. The representation in base 8 will be the integer part (converted to octal as above), then the point is added, and finally the integer parts of the results of the successive multiplications

Example: Convert 56.75 to octal representation with decimals

Number Quotient Remainder
\frac{56}{8} 7 0

So we have that the integer part is:

56_{(10} = 70_{(8}

Number Result Integer part
0.75 \cdot 8 6 6

So we have that the decimal part is:

0.75_{(10} = 0.6_{(8}

So we have that:

56.75_{(10} = 70.6_{(8}

Convert from octal system to decimal

The decimal representation of an octal number would correspond to applying the formula:

b_1 \cdot 8^{(n - 1)} + \cdots + b_n \cdot 8^0

Where n would be the length of the string, and b_i the value corresponding to the i-th position of the string, starting from left to right

Example: Convert 7231 to decimal representation

7231_{(8}=7 \cdot 8^3 + 2 \cdot 8^2 + 3 \cdot 8^1 + 1 \cdot 8^0 = 7 \cdot 512 + 2 \cdot 64 + 3 \cdot 8 + 1 \cdot 1 = 3584 + 128 + 24 + 1 = 3737_{(10}

So we have that:

7231_{(8}= 3737_{(10}

Convert from octal system to decimal with decimals

If the number also has decimals, it will be expressed with the following formula:

b_1 \cdot 8^{(n - 1)} + \cdots + b_n \cdot 8^0+ b_{(n + 1)} \cdot 8^{-1} + \cdots+ b_{(n + m)} \cdot 8^{-m}

Where n would be the length of the integer part, m the length of the decimal part, and b_i the value corresponding to the i-th position of the string, starting from left to right

Example: Convert 70.6 to decimal representation

70.6_{(8}=7 \cdot 8^1 + 0 \cdot 8^0 + 6 \cdot 8^{-1} = 7 \cdot 8 + 0 \cdot 1 + 6 \cdot 0.125 = 56 + 0.75 = 56.75_{(10}

So we have that:

70.6_{(8}= 56.75_{(10}

Hexadecimal System

The hexadecimal system has base 16 and is represented by the set {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F}

Many values in computing are represented in hexadecimal

The value of a byte can be represented with two hexadecimal figures: from the decimal value 0 as 00 to the value 255 as FF

Every four binary digits correspond to one hexadecimal digit, which therefore also corresponds to the value of a nibble

Conversions

Convert from decimal number system to hexadecimal

The hexadecimal representation of a decimal number (the passage from a number in base 10 to its corresponding number in base 16) is calculated by successively dividing the number (and then the quotients) by 16, until a quotient less than 16 is obtained. The representation in base 16 will be the last quotient, followed by the last remainder, followed by the previous remainder, and so on, up to the first remainder obtained

To achieve the conversion, this table is also used to represent the values:

Conversion table
Decimal Hexadecimal
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
9 9
10 A
11 B
12 C
13 D
14 E
15 F

Example: Convert 3737 to hexadecimal representation

Number Quotient Remainder
\frac{3737}{16} 233 9
\frac{233}{16} 14 9

14\rightarrow E
9\rightarrow 9

So we have that:

3737_{(10} = E99_{(16}
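
With divisor 16, the remainders above 9 are mapped through the conversion table (to_hex is an illustrative helper; format(n, "X") is the built-in equivalent):

# Decimal to hexadecimal by repeated division by 16; remainders 10-15
# map to the letters A-F.
def to_hex(n):
    symbols = "0123456789ABCDEF"
    digits = []
    while n > 0:
        n, remainder = divmod(n, 16)
        digits.append(symbols[remainder])
    return "".join(reversed(digits)) or "0"

print(to_hex(3737))       # E99
print(format(3737, "X"))  # same result with the built-in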

Convert from decimal number system to hexadecimal with decimals

The hexadecimal representation of a decimal number with decimals (the passage from a number in base 10 to its corresponding number in base 16) is calculated by successively multiplying the fractional part (and then the fractional parts of the results) by 16, until a number without decimals is obtained, until a quantity repeats periodically (in the case of periodic numbers), or until reaching a number of digits predefined by the machine precision. The representation in base 16 will be the integer part (converted to hexadecimal as above), then the point is added, and finally the integer parts of the results of the successive multiplications

Example: Convert 56.75 to hexadecimal representation with decimal places

Number Quotient Remainder
\frac{56}{16} 3 8

3\rightarrow 3
8\rightarrow 8

So we have that the integer part is:

56_{(10} = 38_{(16}

Number Result Integer part
0.75 \cdot 16 12 12

12\rightarrow C

So we have that the decimal part is:

0.75_{(10} = 0.C_{(16}

So we have that:

56.75_{(10} = 38.C_{(16}

To convert from hexadecimal system to decimal

The decimal representation of a hexadecimal number would correspond to applying the formula:

b_1 \cdot 16^{(n - 1)} + \cdots + b_n \cdot 16^0

Where n would be the length of the string, and b_i the value corresponding to the i-th position of the string, starting from left to right

Example: Convert E99 to decimal representation

E\rightarrow 14
9\rightarrow 9

E99_{(16}=E \cdot 16^2 + 9 \cdot 16^1 + 9 \cdot 16^0 = 14 \cdot 256 + 9 \cdot 16 + 9 \cdot 1 = 3584 + 144 + 9 = 3737_{(10}

So we have that:

E99_{(16}=3737_{(10}

To convert from hexadecimal system to decimal with decimal places

If the number also has decimals, it will be expressed with the following formula:

b_1 \cdot 16^{(n - 1)} + \cdots + b_n \cdot 16^0 + b_{(n + 1)} \cdot 16^{-1} + \cdots+ b_{(n + m)} \cdot 16^{-m}

Where n would be the length of the integer part, m the length of the decimal part, and b_i the value corresponding to the i-th position of the string, starting from left to right

Example: Convert 38.C to decimal representation

3\rightarrow 3
8\rightarrow 8
C\rightarrow 12

38.C_{(16}=3 \cdot 16^1 + 8 \cdot 16^0 + C \cdot 16^{-1} = 3 \cdot 16 + 8 \cdot 1 + 12 \cdot 0.0625 = 48 + 8 + 0.75 = 56.75_{(10}

So we have that:

38.C_{(16}=56.75_{(10}

Sign-Magnitude

The sign-magnitude representation uses the most significant bit as a sign bit, making it easier to check whether the integer is positive or negative

There are several conventions for representing integers in a machine in their binary form, both positive and negative

They all treat the most significant bit (the leftmost one) of the word (here 1 word equals 8 bits) as a sign bit

In a word of n bits, the n-1 bits on the right represent the magnitude of the integer

The general case would be to apply the formula:


A=\begin{cases}\sum\limits_{i=0}^{n-2} 2^i \cdot a_i & \text{if }a_{n-1}=0 \\ -\sum\limits_{i=0}^{n-2} 2^i \cdot a_i & \text{if }a_{n-1}=1 \end{cases}


The sign-magnitude representation has several limitations:

  • Addition and subtraction require taking into account both the signs of the numbers and their relative magnitudes to carry out the operation in question
  • There are two representations for the number 0:


    \begin{cases} +0_{(10}=00000000_{(2} \\ -0_{(10}=10000000_{(2} \end{cases}

Because of these limitations, sign-magnitude representation is rarely used to represent integers on a machine

Instead, the most common scheme is the two's complement representation

Conversions

Example: Convert from decimal to sign-magnitude

We're going to convert \pm 18 to sign-magnitude, first we convert it to binary

Number Quotient Remainder
\frac{18}{2} 9 0
\frac{9}{2} 4 1
\frac{4}{2} 2 0
\frac{2}{2} 1 0

So we have that the binary part is:


18_{(10} = 10010_{(2}


Now we apply the sign bit within one word (remember that it is 8 bits), so we have:


\begin{cases} +18_{(10}=00010010_{(2} \\ -18_{(10}=10010010_{(2} \end{cases}
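
The sign-bit-plus-magnitude layout can be sketched as follows (to_sign_magnitude is an illustrative helper for one 8-bit word):

# Sign-magnitude in an 8-bit word: sign bit, then 7 magnitude bits.
def to_sign_magnitude(value, bits=8):
    sign = "1" if value < 0 else "0"
    return sign + format(abs(value), "0{}b".format(bits - 1))

print(to_sign_magnitude(+18))  # 00010010
print(to_sign_magnitude(-18))  # 10010010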

Two's complement

The two's complement representation, like the sign-magnitude representation, uses the most significant bit as a sign bit, making it easy to check whether the integer is positive or negative

It differs from the sign-magnitude representation in how the remaining bits are interpreted

Consider an n-bit integer A represented in two's complement. If A is positive, the sign bit a_{n-1} is 0. The remaining bits represent the magnitude of the number in the same way as in the sign-magnitude representation, i.e.:

\sum\limits_{i=0}^{n-2} 2^i \cdot a_i\text{ if }A\geq 0

The number 0 is identified as positive and therefore has a sign bit of 0 and a magnitude of all zeros. The range of positive integers that can be represented goes from 0 (all bits are 0) to 2^{n-1}-1 (all magnitude bits are 1). Any larger number would require more bits

For a negative number A, the sign bit, a_{n-1}, is 1. The remaining n-1 bits can take any of the 2^{n-1} combinations. Therefore, negative integers can be represented from -1 to -2^{n-1}, the sign bit carrying the weight:

-2^{n-1}\cdot a_{n-1}

It would be desirable to assign the bits of negative integers in such a way that their arithmetic manipulation could be done in a direct way, similar to that of unsigned integers. In the unsigned representation, when calculating the value of an integer from its bit expression, the weight of the most significant bit is 2^{n-1}

The general case would be to apply the formula:

A=-2^{n-1} \cdot a_{n-1} + \sum\limits_{i=0}^{n-2} 2^i \cdot a_i

Properties

  • Range of values
    -2^{n-1}\text{ to }2^{n-1}-1
  • Number of representations of 0
    One
  • Negation
    Take the Boolean complement of each bit and add 1 to the resulting bit pattern
  • Extending the bit length
    Extra positions are added to the left, filled with the value of the original sign bit (sign extension)
  • Overflow rule
    If two numbers of the same sign (both positive or both negative) are added, there is overflow if and only if the result has the opposite sign
  • Subtraction rule
    To subtract B from A, take the two's complement of B and add it to A

Conversions

Example: Convert from decimal to two's complement

We're going to convert \pm 18 to two's complement, first we convert it to binary

Number Quotient Remainder
\frac{18}{2} 9 0
\frac{9}{2} 4 1
\frac{4}{2} 2 0
\frac{2}{2} 1 0

So we have that the binary part is:

18_{(10} = 10010_{(2}

To find -18 we have to negate 18

We take the Boolean complement of each bit within one word (remember that it is 8 bits), so we have:

18_{(10} = 00010010_{(2}
00010010 \rightarrow 11101101

Add 1 to the resulting bit pattern

11101101+1=11101110

So we have that:

\begin{cases} 18_{(10} = 00010010_{(2} \\ -18_{(10} = 11101110_{(2} \end{cases}
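
In Python, masking with 2^bits - 1 yields the two's complement pattern directly (to_twos_complement is an illustrative helper):

# Two's complement in an 8-bit word: masking keeps the low 8 bits,
# which for a negative value is exactly complement-and-add-1.
def to_twos_complement(value, bits=8):
    return format(value & (2 ** bits - 1), "0{}b".format(bits))

print(to_twos_complement(18))   # 00010010
print(to_twos_complement(-18))  # 11101110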

Floating-point

The most important floating-point representation is that defined in IEEE 754 (ANSI/IEEE Std 754-1985)

This standard was developed to facilitate the portability of programs from one processor to another and to encourage the development of sophisticated numerical programs

This standard has been widely adopted and is used in practically all current processors and arithmetic coprocessors

There are two floating-point representations:

  • Single format (32-bit)
  • Double format (64-bit)

Single format (32-bit)

The single format stores one word (32 bits) as follows:

Sign Bit Exponent Mantissa
1 bit 8 bits 23 bits

Given E_1=\text{exponent}+127 (127 being the maximum positive value of the exponent), we have:

00000000_{(2}\leq E_1\leq 11111111_{(2}

If we denote by (E_1)_{(10} the decimal conversion of E_1, we have:

0\leq (E_1)_{(10}\leq 2^7+2^6+2^5+2^4+2^3+2^2+2^1+2^0=255

Or what would be the same:

-127\leq (E_1)_{(10} -127\leq 128

If we denote \alpha=(E_1)_{(10} -127, we have:

-127\leq\alpha\leq 128
(E_1)=\alpha+127

We denote s the sign bit and M the mantissa. As a result, two types of numbers arise, plus the special values:

  • Normal numbers
    -127 < \alpha < 128

    Conversion of a normal number: (-1)^s \cdot 1.M \cdot 2^\alpha

  • Subnormal numbers
    \alpha = -127

    Conversion of a subnormal number: (-1)^s \cdot 0.M \cdot 2^{-126}

  • \alpha = 128

    Gives rise to exceptions of the type +\infty, -\infty and NaN (not a number)

A hidden bit is used in the mantissa: the first digit of a normalized number is always a_1 = 1, so this bit does not need to be stored, thereby gaining a few more representable numbers

Number                      Decimal                     Binary
Maximum normal positive     3.40282347 \cdot 10^{38}    0 11111110 11111111111111111111111
Maximum normal negative     -3.40282347 \cdot 10^{38}   1 11111110 11111111111111111111111
Minimum normal positive     1.17549435 \cdot 10^{-38}   0 00000001 00000000000000000000000
Minimum normal negative     -1.17549435 \cdot 10^{-38}  1 00000001 00000000000000000000000
Maximum subnormal positive  1.17549421 \cdot 10^{-38}   0 00000000 11111111111111111111111
Maximum subnormal negative  -1.17549421 \cdot 10^{-38}  1 00000000 11111111111111111111111
Minimum subnormal positive  1.40129846 \cdot 10^{-45}   0 00000000 00000000000000000000001
Minimum subnormal negative  -1.40129846 \cdot 10^{-45}  1 00000000 00000000000000000000001
+0                          0.0                         0 00000000 00000000000000000000000
-0                          -0.0                        1 00000000 00000000000000000000000
+\infty                     +\infty                     0 11111111 00000000000000000000000
-\infty                     -\infty                     1 11111111 00000000000000000000000
NaN                         NaN                         (0 or 1) 11111111 (at least one 1)

Conversions

Example: converting 3737 to single format (32-bit)

It's positive, so the sign is 0

Number Quotient Remainder
\frac{3737}{2} 1868 1
\frac{1868}{2} 934 0
\frac{934}{2} 467 0
\frac{467}{2} 233 1
\frac{233}{2} 116 1
\frac{116}{2} 58 0
\frac{58}{2} 29 0
\frac{29}{2} 14 1
\frac{14}{2} 7 0
\frac{7}{2} 3 1
\frac{3}{2} 1 1

So we have that the binary part is:

3737_{(10} = 111010011001_{(2}

For the exponent we need to move the binary point 11 places to normalize the 12-digit string as 1.M (the hidden bit is not counted), therefore we must compute 127 + 11 = 138

Number Quotient Remainder
\frac{138}{2} 69 0
\frac{69}{2} 34 1
\frac{34}{2} 17 0
\frac{17}{2} 8 1
\frac{8}{2} 4 0
\frac{4}{2} 2 0
\frac{2}{2} 1 0

So we have that the exponent is:

138_{(10} = 10001010_{(2}

So we have that:

3737_{(10} = \text{0 10001010 11010011001000000000000}_{(2}
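
The standard struct module can confirm this bit pattern (a minimal sketch; ">f" packs a big-endian 32-bit float):

# Inspect the IEEE 754 single-precision bit pattern of 3737.0.
import struct

bits = format(struct.unpack(">I", struct.pack(">f", 3737.0))[0], "032b")
print(bits[0], bits[1:9], bits[9:])
# 0 10001010 11010011001000000000000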

Double format (64-bit)

The double format stores two words (64 bits) as follows:

Sign Bit Exponent Mantissa
1 bit 11 bits 52 bits

Given E_1=\text{exponent}+1023 (1023 being the maximum positive value of the exponent), we have:

00000000000_{(2}\leq E_1\leq 11111111111_{(2}

If we denote by (E_1)_{(10} the decimal conversion of E_1, we have:

0\leq (E_1)_{(10}\leq 2^{10}+2^9+2^8+2^7+2^6+2^5+2^4+2^3+2^2+2^1+2^0=2047

Or what would be the same:

-1023\leq (E_1)_{(10} -1023\leq 1024

If we denote \alpha=(E_1)_{(10} -1023, we have:

-1023\leq\alpha\leq 1024
(E_1)=\alpha+1023

We denote s the sign bit and M the mantissa. As a result, two types of numbers arise, plus the special values:

  • Normal numbers
    -1023 < \alpha < 1024

    Conversion of a normal number: (-1)^s \cdot 1.M \cdot 2^\alpha

  • Subnormal numbers
    \alpha = -1023

    Conversion of a subnormal number: (-1)^s \cdot 0.M \cdot 2^{-1022}

  • \alpha = 1024

    Gives rise to exceptions of the type +\infty, -\infty and NaN (not a number)

A hidden bit is used in the mantissa: the first digit of a normalized number is always a_1 = 1, so this bit does not need to be stored, thereby gaining a few more representable numbers

Number                      Decimal                     Binary
Maximum normal positive     1.7976931 \cdot 10^{308}    0 11111111110 1111111111111111111111111111111111111111111111111111
Maximum normal negative     -1.7976931 \cdot 10^{308}   1 11111111110 1111111111111111111111111111111111111111111111111111
Minimum normal positive     2.2250738 \cdot 10^{-308}   0 00000000001 0000000000000000000000000000000000000000000000000000
Minimum normal negative     -2.2250738 \cdot 10^{-308}  1 00000000001 0000000000000000000000000000000000000000000000000000
Maximum subnormal positive  2.2250738 \cdot 10^{-308}   0 00000000000 1111111111111111111111111111111111111111111111111111
Maximum subnormal negative  -2.2250738 \cdot 10^{-308}  1 00000000000 1111111111111111111111111111111111111111111111111111
Minimum subnormal positive  4.9406564 \cdot 10^{-324}   0 00000000000 0000000000000000000000000000000000000000000000000001
Minimum subnormal negative  -4.9406564 \cdot 10^{-324}  1 00000000000 0000000000000000000000000000000000000000000000000001
+0                          0.0                         0 00000000000 0000000000000000000000000000000000000000000000000000
-0                          -0.0                        1 00000000000 0000000000000000000000000000000000000000000000000000
+\infty                     +\infty                     0 11111111111 0000000000000000000000000000000000000000000000000000
-\infty                     -\infty                     1 11111111111 0000000000000000000000000000000000000000000000000000
NaN                         NaN                         (0 or 1) 11111111111 (at least one 1)

Conversions

Example: converting 3737 to double format (64-bit)

It's positive, so the sign is 0

Number Quotient Remainder
\frac{3737}{2} 1868 1
\frac{1868}{2} 934 0
\frac{934}{2} 467 0
\frac{467}{2} 233 1
\frac{233}{2} 116 1
\frac{116}{2} 58 0
\frac{58}{2} 29 0
\frac{29}{2} 14 1
\frac{14}{2} 7 0
\frac{7}{2} 3 1
\frac{3}{2} 1 1

So we have that the binary part is:

3737_{(10} = 111010011001_{(2}

For the exponent we need to move the binary point 11 places to normalize the 12-digit string as 1.M (the hidden bit is not counted), therefore we must compute 1023 + 11 = 1034

Number Quotient Remainder
\frac{1034}{2} 517 0
\frac{517}{2} 258 1
\frac{258}{2} 129 0
\frac{129}{2} 64 1
\frac{64}{2} 32 0
\frac{32}{2} 16 0
\frac{16}{2} 8 0
\frac{8}{2} 4 0
\frac{4}{2} 2 0
\frac{2}{2} 1 0

So we have that the exponent is:

1034_{(10} = 10000001010_{(2}

So we have that:

3737_{(10} = \text{0 10000001010 1101001100100000000000000000000000000000000000000000}_{(2}
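
The same check works for double precision (">d" packs a big-endian 64-bit float):

# Inspect the IEEE 754 double-precision bit pattern of 3737.0.
import struct

bits = format(struct.unpack(">Q", struct.pack(">d", 3737.0))[0], "064b")
print(bits[0], bits[1:12], bits[12:])
# 0 10000001010 1101001100100000000000000000000000000000000000000000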

Boolean algebra

Boolean algebra is the mathematical discipline used to analyze digital circuits and other digital systems

It is named after the English mathematician George Boole, who proposed the basic principles of this algebra in 1854 in his treatise An Investigation of the Laws of Thought on Which Are Founded the Mathematical Theories of Logic and Probabilities

In 1938, Claude Shannon, an assistant researcher in the M.I.T. Department of Electrical Engineering, suggested that Boolean algebra could be used to solve switching circuit problems

Shannon's techniques were subsequently used in the analysis and design of digital circuits. Boolean algebra is a useful tool in two areas:

  • Analysis
    It is a concise way of describing the operation of digital circuits
  • Design
    Given a desired function, Boolean algebra can be applied to develop a simplified implementation of that function

Boolean algebra uses variables and logical operations. A variable can take the value 1 when it is true or 0 when it is false. There are only three basic logical operations (AND, OR and NOT), which are represented symbolically:

AND OR NOT
A AND B A OR B NOT A
A\wedge B A\vee B \neg A

Truth tables

The truth table defines the basic logical operations by listing the output value for each possible combination of operand values. In addition to the basic operators of Boolean algebra, four other useful operators are typically listed: XOR, XNOR, NAND, and NOR, which can be obtained by combining the basic operators

Table AND

The AND operation is true if and only if the two operands are true

A B A\wedge B
0 0 0
0 1 0
1 0 0
1 1 1

Table OR

The OR operation is true if and only if one or both operands are true

A B A\vee B
0 0 0
0 1 1
1 0 1
1 1 1

Table NOT

The NOT operation inverts the value of the operand

A \neg A
0 1
1 0

Table XOR

The XOR operation is true if and only if exactly one of the operands is true

It is equivalent to performing:

(A \wedge \neg B) \vee (\neg A \wedge B) \equiv (A \vee B) \wedge (\neg A \vee \neg B)

A B A\oplus B
0 0 0
0 1 1
1 0 1
1 1 0

Table XNOR

The operation XNOR is the negation of the XOR operation

It is equivalent to performing:

(A \wedge B) \vee (\neg A \wedge \neg B)

A B \neg(A\oplus B)
0 0 1
0 1 0
1 0 0
1 1 1

Table NAND

The operation NAND is the negation of the AND operation

It is equivalent to performing:

\neg(A \wedge B)

A B \neg(A \wedge B)
0 0 1
0 1 1
1 0 1
1 1 0

Table NOR

The operation NOR is the negation of the OR operation

It is equivalent to performing:

\neg(A \vee B)

A B \neg(A \vee B)
0 0 1
0 1 0
1 0 0
1 1 0

Basic identities

There are two kinds of identities:

  • Basic rules (postulates)
    Asserted without proof
  • Other identities
    Can be derived from the basic postulates

The postulates define the way in which boolean expressions are interpreted

Basic rules
A \wedge B = B \wedge A A \vee B = B \vee A Commutative law
A \wedge (B \vee C) = (A \wedge B) \vee (A \wedge C) A \vee (B \wedge C) = (A \vee B) \wedge (A \vee C) Distributive law
1 \wedge A = A 0 \vee A = A Neutral element
A \wedge \neg A = 0 A \vee \neg A = 1 Complementary element
Other identities
0 \wedge A = 0 1 \vee A = 1
A \wedge A = A A \vee A = A
A \wedge (B \wedge C) = (A \wedge B) \wedge C A \vee (B \vee C) = (A \vee B) \vee C Associative law
\neg (A \wedge B) = \neg A \vee \neg B \neg (A \vee B) = \neg A \wedge \neg B De Morgan's theorem
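
Since each variable only takes the values 0 and 1, these identities can be verified exhaustively (a minimal sketch using Python's integer bit operators):

# Check the distributive law and De Morgan's theorem over every
# combination of truth values (0 and 1).
from itertools import product

for a, b, c in product([0, 1], repeat=3):
    assert (a & (b | c)) == ((a & b) | (a & c))   # distributive law
    assert (1 - (a & b)) == ((1 - a) | (1 - b))   # De Morgan's theorem

print("all identities hold")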