Hamming code


In telecommunication, a Hamming code is an error-correcting code named after its inventor, Richard Hamming. Hamming codes can detect single-bit and double-bit errors, and can correct single-bit errors. In contrast, the simple parity code cannot detect errors where an even number of bits are flipped, nor can it correct any of the errors it does detect.


History

Hamming worked at Bell Labs in the 1940s on the Bell Model V computer, an electromechanical relay-based monster with cycle times measured in seconds. Input was fed in on punch cards, which would invariably have read errors. During weekdays, special code would find errors and flash lights so the operators could correct the problem. During after-hours periods and on weekends, when there were no operators, the machine simply moved on to the next job.

Hamming worked on weekends, and grew increasingly frustrated with having to restart his programs from scratch due to the unreliability of the card reader. Over the next few years he worked on the problem of error-correction, developing an increasingly powerful array of algorithms. In 1950 he published what is now known as Hamming Code, which remains in use in some applications today.

Parity

Parity adds a single bit that indicates whether the number of 1 bits in the preceding data is even or odd. If a single bit is changed in transmission, the parity of the message changes and the error can be detected. Parity checking is not very robust, however; if two bits are changed, the parity returns to its original value and the check bit cannot detect the error.

Moreover, parity does not indicate which bit contained the error, even when it can detect one. The data must be discarded entirely and re-transmitted from scratch. On a noisy transmission medium, this process could continue indefinitely.
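
To make the mechanism concrete, here is a minimal Python sketch of even parity (an illustration, not code from the original article; the function names are invented for this example):

    def add_parity(bits):
        # Append an even-parity bit so the total number of 1s is even.
        return bits + [sum(bits) % 2]

    def parity_ok(word):
        # True when the word, parity bit included, has an even number of 1s.
        return sum(word) % 2 == 0

    word = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
    print(parity_ok(word))            # True
    word[0] ^= 1                      # one flipped bit
    print(parity_ok(word))            # False: the error is detected
    word[1] ^= 1                      # a second flipped bit
    print(parity_ok(word))            # True: the double error goes unseen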

Two-of-five

In the 1940s Bell used a slightly more sophisticated code known as two-of-five. This code ensured that every block of five bits (known as a 5-block) had exactly two 1s. The computer could tell there was an error if a block of its input did not contain exactly two 1s. However, two-of-five could still only detect single-bit errors: if one bit flipped to a 1 and another flipped to a 0 in the same block, the two-of-five rule remained true and the error went undiscovered.
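
A minimal Python sketch of the two-of-five check (illustrative only; the function name is invented here):

    def valid_two_of_five(block):
        # A 5-bit block is valid only if it contains exactly two 1s.
        return len(block) == 5 and block.count(1) == 2

    print(valid_two_of_five([0, 1, 0, 1, 0]))   # True: a valid block
    print(valid_two_of_five([0, 0, 0, 1, 0]))   # False: a single flip is caught
    print(valid_two_of_five([0, 0, 1, 1, 0]))   # True: a 1->0 flip paired with
                                                # a 0->1 flip slips through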

Error correction

Another code in use at the time repeated every data bit several times in order to ensure that it got through. For instance, if the data bit to be sent was a 1, an n=3 repetition code would send "111". If the receiving end got "011", it was not only obvious that there was an error, but also where the error was. These sorts of codes were known as error-correcting codes for this reason: the receiving end could reconstruct the original message.

Such codes could not detect all errors, however. In our example, if the channel flipped two bits and the receiver got "001", the system would conclude that the original message was 0, not 1. In order to detect double-bit errors, the repetition would have to be increased: at n=4, all double-bit errors can be detected, although not corrected.
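
A minimal Python sketch of an n=3 repetition code with majority-vote decoding (an illustration of the scheme described above; the names are invented):

    def repeat_encode(bit, n=3):
        # Send the same bit n times.
        return [bit] * n

    def majority_decode(block):
        # Majority vote: whichever value appears most often wins.
        return int(2 * sum(block) > len(block))

    print(majority_decode([0, 1, 1]))   # 1: a single error in "111" is corrected
    print(majority_decode([0, 0, 1]))   # 0: two errors fool the vote entirely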

Moreover, the repetition code is extremely inefficient, in this case reducing throughput to one-third. In order to reduce the chance of multiple errors "fooling" the system, the efficiency would have to be reduced even further, to perhaps a quarter or a fifth.

Hamming codes

If more error-correcting bits are included with a message, and if those bits can be arranged such that different errored bits produce different error results, then bad bits can be identified. In a 7-bit message there are seven possible single-bit errors, so three error-control bits could potentially specify not only that an error occurred but also which bit caused it: three bits have eight possible states, one for each of the seven error positions plus one for "no error".

Hamming studied the existing coding schemes, including two-of-five, and generalized their concepts. To start with, he developed a nomenclature to describe the system, including the number of data bits and error-correction bits in a block. For instance, parity includes a single bit for any data word, so assuming ASCII words of 7 bits, Hamming described this as an (8,7) code, with eight bits in total, of which seven are data. The repetition example would be (3,1), following the same logic. The information rate is the second number divided by the first: for our repetition example, 1/3.

Hamming also noticed the problems with flipping two or more bits, and described this as the "distance" (it is now called the Hamming distance, after him). Parity has a distance of 2: a single flip is detectable, but any two bit flips will be invisible. The (3,1) repetition has a distance of 3: three bits in the same triple must flip before another valid code word is produced, while two flips in a triple lead to a mis-correction. A (4,1) repetition (each bit is repeated four times) has a distance of 4: flipping two bits in a block no longer goes undiscovered, though it still cannot be corrected.
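
In modern terms, the distance between two words is simply the number of positions in which they differ. A one-line Python illustration (not from the original article):

    def hamming_distance(a, b):
        # Count the positions at which two equal-length words differ.
        return sum(x != y for x, y in zip(a, b))

    print(hamming_distance("000", "111"))     # 3: distance of the (3,1) code
    print(hamming_distance("0000", "1111"))   # 4: distance of the (4,1) code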

Hamming was interested in two problems at once: increasing the distance as much as possible, while at the same time increasing the information rate as much as possible. During the 1940s he developed several encoding schemes that were dramatic improvements on existing codes. Key to all of his systems was to have the parity bits overlap, such that they managed to check each other as well as the data.

Hamming (7,4)

Today, "Hamming code" usually refers to the specific (7,4) code Hamming introduced in 1950. It adds three parity bits to every four data bits of the message. Hamming's (7,4) algorithm can correct any single-bit error and detect all double-bit errors. Since the medium would have to be uselessly noisy for two out of seven bits to be flipped, Hamming's (7,4) code is, in practice, effectively lossless.

The algorithm is simple:

  1. All bit positions that are powers of two are used as parity bits. (positions 1, 2, 4, 8, 16, 32, 64, etc.)
  2. All other bit positions are for the data to be encoded. (positions 3, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 17, etc.)
  3. Each parity bit calculates the parity for some of the bits in the code word. The position of the parity bit determines the sequence of bits that it alternately checks and skips (see the sketch after this list).
      Position 1: check 1 bit, skip 1 bit, check 1 bit, skip 1 bit, etc.
      Position 2: check 2 bits, skip 2 bits, check 2 bits, skip 2 bits, etc.
      Position 4: check 4 bits, skip 4 bits, check 4 bits, skip 4 bits, etc.
      Position 8: check 8 bits, skip 8 bits, check 8 bits, skip 8 bits, etc.
      Position 16: check 16 bits, skip 16 bits, check 16 bits, skip 16 bits, etc.
      Position 32: check 32 bits, skip 32 bits, check 32 bits, skip 32 bits, etc.
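
The following Python sketch implements the rules above for an arbitrary number of data bits (an illustration, not Hamming's original formulation; the function name and the even-parity convention are assumptions consistent with the worked example below):

    def hamming_encode(data):
        # Choose the number of parity bits r: the smallest r
        # with 2**r >= len(data) + r + 1.
        m = len(data)
        r = 0
        while 2 ** r < m + r + 1:
            r += 1
        n = m + r
        code = [0] * (n + 1)              # index 0 unused; positions are 1-based
        bits = iter(data)
        for pos in range(1, n + 1):
            if pos & (pos - 1) != 0:      # not a power of two: a data position
                code[pos] = next(bits)
        for i in range(r):
            p = 2 ** i                    # parity position 1, 2, 4, 8, ...
            # Even parity over every position whose binary form contains p;
            # this reproduces the check-p/skip-p pattern described above.
            code[p] = sum(code[q] for q in range(1, n + 1) if q & p) % 2
        return code[1:]

    print(''.join(map(str, hamming_encode([1, 0, 0, 1, 1, 0, 1]))))
    # -> 01110010101 (see the worked example below)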

Example

Consider a data word "1001101". Hamming code inserts parity bits at positions 1, 2, 4 and 8, so the word is expanded to "__1_001_101". Following the algorithm, the first parity bit adds up the bits in positions 1, 3, 5, 7, 9 and 11, which for this message returns a 0 under even parity. Continuing the algorithm, we get 1, 1 and 0 for the remaining parity bits. The resulting encoded message is "01110010101".

The clever part of the Hamming code is that the failed parity checks encode the position of the incorrect bit. For instance, suppose you received "01110010111". In this example, the parity checks for positions 2 and 8 fail because of the flipped bit in position 10; both of those parity bits check that position. Due to the way the code is constructed, 2 + 8 = 10, pointing directly at the incorrect bit.
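
The same rule gives a decoder: re-run every parity check on the received word and add up the positions of the checks that fail. A short Python sketch in the same illustrative style:

    def error_position(received):
        # Returns 0 if every check passes; otherwise the 1-based position
        # of the single flipped bit (the sum of the failed parity positions).
        n = len(received)
        code = [0] + list(received)       # shift to 1-based positions
        syndrome = 0
        p = 1
        while p <= n:
            if sum(code[q] for q in range(1, n + 1) if q & p) % 2:
                syndrome += p             # this parity check failed
            p *= 2
        return syndrome

    received = [0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1]   # "01110010111"
    print(error_position(received))                 # -> 10: flip bit 10 to repair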

See also: Hamming distance, Golay code


