HISTORY AND BACKGROUND
Data compression traces back to 1948, when Claude Shannon published his research on information theory.
Shannon derived an equation that gives the minimum number of bits needed to encode a message, based on the probabilities of its symbols; he called this minimum the “information content” of the message. The equation later became known as Shannon’s law.
Shannon’s law thus sets a lower bound on the size that any lossless compression algorithm can achieve for a given message.
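In modern notation, the quantity Shannon defined is written as the entropy H of a source whose i-th symbol appears with probability p_i:

```latex
% Entropy of a source: the minimum average number of bits per symbol
% achievable by any lossless code (Shannon, 1948).
H = -\sum_{i} p_i \log_2 p_i
```

For example, a source emitting three symbols with probabilities 1/2, 1/4, and 1/4 has H = 1.5, so no lossless code for it can average fewer than 1.5 bits per symbol.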
“In 1951, David Huffman and his MIT information theory classmates were given the choice of a term paper or a final exam. The professor, Robert M. Fano, assigned a term paper on the problem of finding the most efficient binary code. Huffman, unable to prove any codes were the most efficient, was about to give up and start studying for the final when he hit upon the idea of using a frequency-sorted binary tree, and quickly proved this method the most efficient.
In doing so, the student outdid his professor, who had worked with information theory inventor Claude Shannon to develop a similar code. Huffman avoided the major flaw of Shannon-Fano coding by building the tree from the bottom up instead of from the top down”. [i]
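As an illustration of the bottom-up, frequency-sorted construction the quote describes, here is a minimal Python sketch; the function name huffman_codes and the sample string are our own, chosen for the example:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code bottom-up, as described in the quote above."""
    freqs = Counter(text)
    # Each heap entry: (frequency, tiebreaker, {symbol: code-so-far}).
    # The integer tiebreaker keeps heapq from ever comparing the dicts.
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate single-symbol input
        return {sym: "0" for sym in heap[0][2]}
    tiebreak = len(heap)
    while len(heap) > 1:
        # Merge the two least frequent subtrees: prefix '0' onto one
        # side's codes and '1' onto the other's (the bottom-up step).
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

if __name__ == "__main__":
    for sym, code in sorted(huffman_codes("abracadabra").items()):
        print(sym, code)   # the frequent 'a' receives the shortest code
```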
In 1977, the Israeli computer scientists Abraham Lempel and Jacob Ziv created a substitutional compressor called LZ77; this algorithm became the basis for many other, improved substitutional compressors. It came to be used for text compression thanks to its ability to capture high-order relationships between words and phrases, and it is also used in archiving programs.
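A toy Python sketch of the substitutional idea behind LZ77 follows: repeated substrings are replaced by back-references into a sliding window of recently seen text. The function name, window size, and (offset, length, next_char) triple format are illustrative simplifications, not the exact scheme of the 1977 paper:

```python
def lz77_compress(data, window=4096):
    """Emit (offset, length, next_char) triples in the LZ77 style:
    repeated substrings become back-references into a sliding window."""
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        # Search the window for the longest match with the upcoming text.
        for j in range(max(0, i - window), i):
            length = 0
            # Stop one character early so a literal next_char always exists.
            while (i + length < len(data) - 1
                   and data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = i - j, length
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

if __name__ == "__main__":
    print(lz77_compress("abcabcabcd"))
    # -> [(0, 0, 'a'), (0, 0, 'b'), (0, 0, 'c'), (3, 6, 'd')]
```

A decoder simply replays the triples, copying `length` characters from `offset` positions back and appending the literal character.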
A year later they published LZ78, an improved scheme that is commonly used to compress binary data such as bitmaps.
In 1984, Terry Welch modified the LZ78 compressor for implementation in high-performance disk controllers. The result was the LZW algorithm that is commonly found today.
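A minimal Python sketch of the LZW idea follows: the encoder grows a dictionary of phrases as it reads the input, so the decoder can rebuild the same dictionary and, unlike LZ77, no explicit window offsets need to be transmitted. The function name and sample input are for illustration only:

```python
def lzw_compress(data):
    """Toy LZW encoder: the dictionary of phrases grows as input is read,
    so no explicit window or offsets need to be transmitted."""
    table = {bytes([b]): b for b in range(256)}   # start with all single bytes
    next_code, current, out = 256, b"", []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in table:
            current = candidate            # keep extending the known phrase
        else:
            out.append(table[current])     # emit code for longest known phrase
            table[candidate] = next_code   # learn the new, longer phrase
            next_code += 1
            current = bytes([byte])
    if current:
        out.append(table[current])
    return out

if __name__ == "__main__":
    # Codes above 255 are phrases the encoder learned along the way.
    print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))
```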
In 1987, John Nagle proposed the Fair Queuing technique.
In 1989, a group consisting of Lixia Zhang, Alan Demers, Srinivasan Keshav, and Scott Shenker developed the Weighted Fair Queuing technique, which was later improved by Cisco and named Class-Based Weighted Fair Queuing (CBWFQ).
“In 1996, the Frame Relay Forum Technical Committee approved an implementation agreement for data compression over Frame Relay, FRF.9. This agreement specifies a standard for compression to ensure interoperability. FRF.9 is a per virtual circuit compression mechanism for both switched virtual circuits (SVC) and permanent virtual circuits (PVC).
In 1997, Microsoft Corporation introduced the Microsoft Point to Point Compression (MPPC) scheme as a means of representing arbitrary Point to Point Protocol (PPP) packets in a compressed form. MPPC was ratified by the IETF and is known as RFC 2118.” [ii]