Paul Gaudio and Jack McDonough -- Error-Correcting Codes: a Brief History, Basics, and Applications

Your paper and presentation were generally good (if a bit rough in execution) as an introduction to coding theory and some related combinatorial questions -- in particular the sphere-packing bound and the examples of perfect codes. The major issues I see:

a) I'm not sure you really appreciate that nearest-neighbor decoding, while a good theoretical tool, is completely impractical on codes where k is large (see comment 4 below). This means that your discussion of decoding only scratches the surface and is not representative of what people actually do. That's not a big issue, because it's beyond the scope of what you were asked to do in the project. But you should realize that it is not quite what you were claiming.

b) I would also have liked to see some discussion of the fact that the Hamming [7,4,3] code is only one member of an infinite family, with n = 2^m - 1, k = 2^m - m - 1, and d = 3. The code you looked at has m = 3, but you also get [15,11,3], [31,26,3], ... Hamming codes by taking m = 4, m = 5, ... . You could have described the patterns for their generator and parity check matrices without too much trouble, since the columns of the parity check matrix in each case are the 2^m - 1 nonzero vectors of length m, obtained from the binary expansions of the integers 1, 2, ..., 2^m - 1. (See the first sketch following these comments.)

Some specific comments:

0) Did you make the figure on p. 3 yourself? If not, you should give the source. Later, on p. 8, I'm pretty sure you did not make the figure, because you don't discuss anything about "deep holes" for codes in the text(!) Document your sources for figures as well as for text.

1) p. 2 -- "The vast majority of coding and information theory’s development occurred during the 1950’s"

> That's certainly true for the early development and the basics. However, research on coding methods is still going on today, and many new ideas have been found since the 1950's.

"... Bell Laboratories (the company founded by Alexander Graham Bell, the inventor of the telephone) ..."

> Technical point: Bell Labs was the research lab run by the Bell System, but it wasn't the whole company, which had a quite complicated corporate structure that changed from time to time. Before the break-up in 1984, AT&T (American Telephone and Telegraph) was the umbrella company that owned the regional telephone service providers in most of the US and ran Bell Labs as a subsidiary.

2) p. 4 -- "with the finite field F_q^k"

> Be careful: the finite field is F_q. F_q^k is the k-dimensional vector space over the field F_q, whose typical element is a vector (a_1, ..., a_k) with a_i in F_q for all i. (You were also saying this in your presentation, but it's not technically correct.)

3) p. 4 -- "k defines the information set of a code C (Little, Mosteig 1)."

> It would be better to say that the information set of a code consists of k binary digits ("bits").

4) p. 5 and 6 -- It's not really correct to say that Nearest Neighbor Decoding is the "most popular strategy." It's the theoretical tool used to analyze the error-correcting capacity of a code, but it's not the way decoding is done in practice in most cases (it's too time-consuming and complex if you have a code where k is large, so 2^k is huge). The decoding methods used in practice do not compare the received word directly with all the codewords to determine the nearest neighbor, because there are just too many of them. For example, if k = 64 (not untypical for the codes used in some applications), then 2^64 is approx. 1.8 x 10^19. Even if decoding doesn't have to happen in real time (and it does in lots of cases), you NEVER want to compare a received word with that many codewords(!) to determine the nearest neighbor. (The second sketch following these comments makes the scale of the problem concrete.)

5) p. 6 (and in presentation) -- From what you say about the theorem here, I can't quite tell whether you understand that "any error vector of weight at most t = [(d-1)/2]" and "wt(e) ≤ t = [(d-1)/2] (Pless 11-12)" are two completely equivalent ways of saying exactly the same thing. Why did you feel that you needed to say both of them?

6) p. 6 and 7 -- You are using several different notations, B(x,t), B_t(x), and S_t(x), for the same Hamming balls(!)

7) p. 6 and 7 -- The two "different" proofs of the disjointness of the Hamming balls of radius t = [(d-1)/2] about distinct codewords in a code of minimum distance d also follow essentially exactly the same approach. I'm not sure why you felt you needed both of them. (The short derivation following these comments is all that is really needed.)

8) p. 12 to the end -- The discussion of the binary Golay codes is good, but you never really say why the extended code (the one with n = 24) has d = 8, not d = 4. One way to see that in this case is to generate all 2^12 = 4096 codewords and compute their weights (brute force, but not that bad if you use a computer! See the last sketch below.)
Final Project Grade Computation

Bibliography: 8/10
Paper: 55/60
Presentation: 27/30

Total: 90/100