Binary number


The binary numeral system is a positional notation with a radix of 2. The symbols used in writing are usually 0 and 1.

Each binary digit is worth double the digit to its right; the rightmost digit has the value 1.

 0001 = 1
 0010 = 2
 0100 = 4
 1000 = 8

Summing up the values for the digits that are 1 gives the number in decimal. So for the binary number 0110 we get:

 0001
 0010 = 2
 0100 = 4
 1000
 --------
 0110 = 6
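
The same summation can be written as a short C sketch; the function name binary_to_decimal and the example string "0110" are only illustrative:

 #include <stdio.h>
 #include <string.h>
 
 /* Sum the place values of the digits that are 1. */
 unsigned binary_to_decimal(const char *bits)
 {
     unsigned value = 0;
     unsigned place = 1;                      /* rightmost digit is worth 1 */
     for (size_t i = strlen(bits); i > 0; i--) {
         if (bits[i - 1] == '1')
             value += place;                  /* add this digit's value     */
         place *= 2;                          /* next digit is worth double */
     }
     return value;
 }
 
 int main(void)
 {
     printf("%u\n", binary_to_decimal("0110")); /* prints 6 */
     return 0;
 }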


Since each digit is worth half the value of the bit to its left and double the value of the bit to its right, it is simple to divide or multiply a number by two by shifting all bits one position to the right or left. This is often the key to efficient computer programming.
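
A minimal C sketch of shifting, using an unsigned value so both shifts are well defined:

 #include <stdio.h>
 
 int main(void)
 {
     unsigned x = 6;                 /* 0b0110 */
     printf("%u\n", x << 1);         /* shift left one place:  12 (multiply by 2) */
     printf("%u\n", x >> 1);         /* shift right one place:  3 (divide by 2)   */
     return 0;
 }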

Signed binary numbers

There are three practical ways of representing negative numbers in binary:

  1. One's complement
  2. Two's complement
  3. Sign magnitude

In almost all modern designs, two's complement is used for integers and sign magnitude is used for floating-point numbers.

To change the sign of a sign magnitude number you invert the sign bit. To change the sign of a two's complement number you subtract it from zero, or you invert all bits in the word and add 1.

Change the sign of an eight-bit two's complement integer without using subtraction:
 0b01100110 =  102
 0b10011001 = -103  <- first invert all bits
 0b10011010 = -102  <- then add 1
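
A C sketch of the invert-and-add-one method, using the same value 102; the int8_t type assumes an 8-bit two's complement integer, which is what current C implementations provide:

 #include <stdio.h>
 #include <stdint.h>
 
 int main(void)
 {
     int8_t x = 102;                       /* 0b01100110                  */
     int8_t negated = (int8_t)(~x + 1);    /* invert all bits, then add 1 */
     printf("%d\n", negated);              /* prints -102                 */
     return 0;
 }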

You can check the sign of a two's complement integer by checking the most significant (leftmost) bit: if it is 1, the integer is negative.
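
A C sketch of this sign test on the raw 8-bit pattern; the mask 0x80 selects the leftmost of the eight bits:

 #include <stdio.h>
 #include <stdint.h>
 
 /* An integer is negative when its most significant bit is 1. */
 int is_negative(uint8_t bits)
 {
     return (bits & 0x80) != 0;          /* 0x80 masks the leftmost bit */
 }
 
 int main(void)
 {
     printf("%d\n", is_negative(0x9A));  /* 0b10011010 = -102, prints 1 */
     printf("%d\n", is_negative(0x66));  /* 0b01100110 =  102, prints 0 */
     return 0;
 }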

There is one more negative two's complement number than there are positive numbers. An 8-bit number has the range -128 to 127, so some algorithms may give the wrong result since the average is not 0.
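
A C sketch showing this asymmetry: inverting and adding 1 to the bit pattern of -128 (0b10000000) gives back the same pattern, because +128 does not fit in 8 bits. Unsigned arithmetic is used so the wrap-around is well defined:

 #include <stdio.h>
 #include <stdint.h>
 
 int main(void)
 {
     uint8_t bits = 0x80;                     /* two's complement pattern of -128 */
     uint8_t negated = (uint8_t)(~bits + 1);  /* invert all bits, then add 1      */
     printf("0x%02X\n", negated);             /* prints 0x80: still -128          */
     return 0;
 }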

There are two possible zeros in sign magnitude numbers, -0 and +0. Remember to ignore the sign bit when comparing zeros so -0 equals +0.
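
A C sketch of such a comparison; the layout with the sign in bit 7 and the magnitude in the low 7 bits is an assumed encoding for illustration:

 #include <stdio.h>
 #include <stdint.h>
 
 /* Compare two sign magnitude bytes for equality.
    The magnitude is in the low 7 bits; the sign is bit 7.
    If both magnitudes are zero, the numbers are equal
    regardless of sign, so -0 equals +0. */
 int sm_equal(uint8_t a, uint8_t b)
 {
     if ((a & 0x7F) == 0 && (b & 0x7F) == 0)
         return 1;                          /* -0 == +0 */
     return a == b;
 }
 
 int main(void)
 {
     printf("%d\n", sm_equal(0x80, 0x00));  /* -0 vs +0: prints 1 */
     printf("%d\n", sm_equal(0x81, 0x01));  /* -1 vs +1: prints 0 */
     return 0;
 }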