Binary number

From ScienceZero
Revision as of 08:40, 15 August 2014 by Bjoern (Talk | contribs)


The binary numeral system is a positional notation with a radix of 2. The symbols used are usually 0 and 1.

Each binary digit is worth double the digit to its right; the rightmost digit has the value 1.

 0001 = 1
 0010 = 2
 0100 = 4
 1000 = 8

Summing the values of the digits that are 1 gives the number in decimal. So for the binary number 0110 we get:

 0001       (digit is 0, contributes nothing)
 0010 = 2
 0100 = 4
 1000       (digit is 0, contributes nothing)
 --------
 0110 = 6


Since each digit is worth half the digit to its left and double the digit to its right, dividing or multiplying a number by two is as simple as shifting all bits one position to the right or left. This is often the key to efficient computer programming.

Signed binary numbers

There are three practical ways of representing negative numbers in binary:

  1. One's complement
  2. Two's complement
  3. Sign bit

In almost all modern designs, two's complement is used for integers and a sign bit is used for floating-point numbers.

To change the sign of a floating-point number you flip the sign bit. To change the sign of a two's complement integer you either subtract it from zero or negate all bits in the word and add 1.

Change the sign of a two's complement integer without subtraction:

 01100110 <- original number
 10011001 <- first negate all bits
 10011010 <- then add 1