Coding Theory



Math 696: Special Topics
Course number 20786
Spring 2004
Meeting: MW, 4:00-5:15 pm.
Physics 147
San Diego State University


Professor: Mike O'Sullivan
Email: m.osullivan@math.sdsu.edu
Office: GMCS 579, ext. 594-6697
Office Hours: MWF 2:00-3:45; other times by appointment.

Detailed Information

SCHEDULE
ASSIGNMENTS
COMPUTER PROJECTS
Lecture notes: Polynomial rings and finite fields
Lecture notes: Reed-Solomon codes and the Berlekamp-Massey algorithm.
Lecture notes: Iterative decoding.
Lecture notes: Noisy coding theorem.

Course Description

This course concerns error-control coding, the effort to correct errors that occur in the transmission and storage of data. We will focus on three types of codes and the algorithms used to decode them: Reed-Solomon codes and the Berlekamp-Massey algorithm, convolutional codes and the Viterbi algorithm, and low-density parity-check codes and belief propagation algorithms. The first two are ubiquitous in modern communications systems, while low-density parity-check codes are a promising alternative that has dominated research conferences in the last few years. All three families were developed in the late 1950s and early 1960s, but low-density parity-check codes were not appreciated until the last decade.

These three families of codes lead to a great variety of interesting mathematics. The construction of Reed-Solomon codes and the algorithms for encoding and decoding them employ the algebraic theory of finite fields and polynomial rings. The code vectors take values in a finite field, and an error switches one field element to another. The Berlekamp-Massey algorithm computes a polynomial that identifies where the errors occurred.
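
To give a concrete feel for the algorithm, here is a minimal sketch of Berlekamp-Massey over a prime field GF(p). It is written in Python for readability (the course work itself will use Maple), and the field size and test sequence below are chosen purely for illustration. The algorithm finds the shortest linear recurrence, with connection polynomial C(x), that generates a given sequence; in Reed-Solomon decoding that sequence is the syndrome sequence and C(x) is the error-locator polynomial.

    def berlekamp_massey(s, p):
        """Shortest linear recurrence generating s over GF(p), p prime.
        Returns (L, C) with C a coefficient list, C[0] = 1, satisfying
        sum_{i=0..L} C[i]*s[n-i] = 0 for all valid n."""
        C, B = [1], [1]            # connection polynomial and its last saved copy
        L, m, b = 0, 1, 1
        for n in range(len(s)):
            # discrepancy between s[n] and what the current recurrence predicts
            d = s[n]
            for i in range(1, L + 1):
                d = (d + C[i] * s[n - i]) % p
            if d == 0:
                m += 1
                continue
            coef = d * pow(b, p - 2, p) % p    # d / b via Fermat's little theorem
            T = C[:]
            # C(x) <- C(x) - (d/b) * x^m * B(x)
            if m + len(B) > len(C):
                C += [0] * (m + len(B) - len(C))
            for i, Bi in enumerate(B):
                C[i + m] = (C[i + m] - coef * Bi) % p
            if 2 * L <= n:
                L, B, b, m = n + 1 - L, T, d, 1
            else:
                m += 1
        return L, C

    # A sequence satisfying s[n] = 3*s[n-1] + 2*s[n-2] over GF(7)
    p, s = 7, [1, 3]
    while len(s) < 8:
        s.append((3 * s[-1] + 2 * s[-2]) % p)
    print(berlekamp_massey(s, p))    # (2, [1, 4, 5]), i.e. C(x) = 1 + 4x + 5x^2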

There is also a beautiful algebraic theory for convolutional codes, but in engineering practice the algebraic aspects are minimized. Convolutional codes are usually binary and they are modeled by circuit diagrams. Errors are probabilistic: the decoder receives a probability that a given bit is a 0 or 1. The Viterbi algorithm then attempts to find the most likely codeword given the input probabilities.
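
To make the trellis search concrete, here is a hard-decision sketch (again in Python, purely for illustration) of the Viterbi algorithm for the standard rate-1/2, memory-2 convolutional code with octal generators (7, 5). The decoder below minimizes Hamming distance; the soft-decision version discussed in the course replaces that metric with log-likelihoods, but the dynamic-programming structure is identical.

    def encode(bits):
        """Encode with generators 1 + D + D^2 (octal 7) and 1 + D^2 (octal 5)."""
        prev1 = prev2 = 0                  # the two memory bits u[n-1], u[n-2]
        out = []
        for u in bits:
            out += [u ^ prev1 ^ prev2, u ^ prev2]
            prev2, prev1 = prev1, u
        return out

    def viterbi(received):
        """Find the trellis path whose output is closest in Hamming
        distance to the received bits (two received bits per step)."""
        INF = float('inf')
        cost = [0, INF, INF, INF]          # start in the all-zero state
        paths = [[], [], [], []]
        for t in range(len(received) // 2):
            r0, r1 = received[2*t], received[2*t + 1]
            new_cost, new_paths = [INF] * 4, [None] * 4
            for s in range(4):             # state s encodes (u[n-1], u[n-2])
                if cost[s] == INF:
                    continue
                prev1, prev2 = s >> 1, s & 1
                for u in (0, 1):
                    o0, o1 = u ^ prev1 ^ prev2, u ^ prev2
                    ns = (u << 1) | prev1  # next state
                    c = cost[s] + (o0 != r0) + (o1 != r1)
                    if c < new_cost[ns]:
                        new_cost[ns], new_paths[ns] = c, paths[s] + [u]
            cost, paths = new_cost, new_paths
        best = min(range(4), key=lambda s: cost[s])
        return paths[best]

    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    word = encode(msg)
    word[3] ^= 1                           # one channel error
    print(viterbi(word) == msg)            # True: the error is corrected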

Low-density parity-check codes are also studied primarily in the binary case. There is a highly efficient decoding algorithm for these codes, based on probabilistic reasoning, that is similar to the Viterbi algorithm and to algorithms used in artificial intelligence. Decoding can be modeled by messages, passed along the edges of a bipartite graph, that represent "beliefs" about the binary values of a codeword. Research into LDPC codes is very active; it includes analysis of the decoding algorithm, both experimental and theoretical, and the construction of good codes by a variety of methods: random, algebraic, geometric, and combinatorial.
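
The sketch below shows the message-passing mechanics in the log-likelihood domain (the sum-product, or belief-propagation, update rules). The parity-check matrix used in the example is that of the (7,4) Hamming code, which is of course not low-density, but it is small enough to trace by hand; the Python code and the channel values are illustrative only.

    import math

    def decode_bp(H, llr, iters=20):
        """Sum-product decoding.  H is the parity-check matrix (list of rows);
        llr[v] is the channel log-likelihood ratio log P(bit v = 0)/P(bit v = 1)."""
        m, n = len(H), len(H[0])
        r = [[0.0] * n for _ in range(m)]      # check-to-variable messages
        for _ in range(iters):
            # variable-to-check messages: channel belief plus the other checks
            q = [[0.0] * n for _ in range(m)]
            for c in range(m):
                for v in range(n):
                    if H[c][v]:
                        q[c][v] = llr[v] + sum(r[c2][v] for c2 in range(m)
                                               if c2 != c and H[c2][v])
            # check-to-variable messages (the "tanh rule")
            for c in range(m):
                for v in range(n):
                    if H[c][v]:
                        prod = 1.0
                        for v2 in range(n):
                            if v2 != v and H[c][v2]:
                                prod *= math.tanh(q[c][v2] / 2.0)
                        prod = max(min(prod, 0.999999), -0.999999)
                        r[c][v] = 2.0 * math.atanh(prod)
            # tentative decision from the total belief at each variable
            total = [llr[v] + sum(r[c][v] for c in range(m) if H[c][v])
                     for v in range(n)]
            word = [1 if t < 0 else 0 for t in total]
            if all(sum(H[c][v] * word[v] for v in range(n)) % 2 == 0
                   for c in range(m)):
                return word                    # all parity checks satisfied
        return word

    # Parity-check matrix of the (7,4) Hamming code (small, not low-density).
    H = [[1, 1, 0, 1, 1, 0, 0],
         [1, 0, 1, 1, 0, 1, 0],
         [0, 1, 1, 1, 0, 0, 1]]
    # All-zero codeword sent; the channel is mildly confused about bit 2.
    llr = [2.5, 2.5, -0.5, 2.5, 2.5, 2.5, 2.5]
    print(decode_bp(H, llr))                   # the all-zero codeword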

Course Materials

No textbook covers exactly what I want, so the course will be based on my lectures and course notes. A list of references and recommendations for articles and book chapters will be given as the course develops. Maple experimentation will be required. You may use Maple in the department computer labs or purchase it at a student price. Maplesoft has a special program to offer Maple 9 for just $75 to students in this course. Check
http://www.maplesoft.com/Forms/Promotion.asp?PromoID={47B6EC25-B5DF-49ED-A147-97F454B5A8F0}&Action=start

Course notes will be available in January.

Recommended References:

I will add to the following list of references as the course progresses.

R. Blahut, Theory and Practice of Error Control Codes, Addison-Wesley, 1983. Unfortunately out of print and not in our library. I have a copy.

Grading

There will be several written assignments, computer simulation of algorithms using Maple (some things may be done in Matlab), and a project to be presented to the class and submitted in writing.