Your final project should consist of an expository paper roughly 3–5 pages in length related to an application of abstract or linear algebra. You should give a brief description of the topic along with relevant definitions, ideas, and results, but you need not include proofs. Be sure to include examples when relevant, and please construct your own examples rather than copying them from your sources. You should not assume your readers have more than a general mathematical background, beyond familiarity with what we have covered in this course. The paper should be understandable to a fellow member of the class.
Below is a list of possible final projects topics. Some references are provided as a starting point, but you may find it helpful to consult other texts, articles, or web sites. (Remember to cite your sources appropriately.) You are welcome to suggest your own topics if there is another area or application you are interested in, so please feel free to discuss your ideas with me.
Only one student can work on a given topic, and topics will be assigned as requested (first come, first served).
Use of LaTeX for typesetting is encouraged. See, for instance, the following tutorial for more details.
All topics (including ones listed below) must be approved by me by Thursday, April 6 (or earlier).
If you would like me to read through a rough draft and give you comments, please give me a rough draft by Thursday, April 20. (This is optional!)
The final project will be due on Monday, May 4 by 11 am.
A (balanced incomplete) block design is a collection of subsets of a fixed set that is, in some sense, balanced or symmetric. While this notion originated in the design of experiments, it also has applications in coding theory. See §2.1–2.2 and §3.2 of the textbook.
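As a small illustration, consider the Fano plane, the classic 2-(7, 3, 1) design: 7 points, 7 blocks of size 3, and every pair of points in exactly one block. A quick sketch (with one standard labeling of the points) checks this defining balance condition directly:

```python
from itertools import combinations

# The Fano plane: 7 points, 7 blocks ("lines") of size 3.
FANO = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5},
        {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]

def is_2_design(blocks, v, lam):
    """Check that every pair of the v points lies in exactly lam blocks."""
    return all(sum(pair <= b for b in blocks) == lam
               for pair in (set(p) for p in combinations(range(v), 2)))
```

Here `is_2_design(FANO, 7, 1)` returns `True`, while dropping any single block breaks the balance.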
Shannon's theorem (or the noisy-channel coding theorem) gives a theoretical bound on how much information can be sent through a noisy channel and shows the existence of good error correcting codes without constructing such codes explicitly. This project requires a bit of familiarity with probability. See §2.1–2.2 of van Lint, An Introduction to Coding Theory, available here.
Real-world identification numbers use a variety of error-detection schemes. These range from simple checksums to more complicated schemes, such as the Verhoeff algorithm, which is based on the dihedral group of order 10. See the relevant sections (and exercises) of Gallian, Contemporary Abstract Algebra, available here. (For a checksum scheme based on a more complicated algebraic structure, see the Damm algorithm.)
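For a concrete taste of the simplest kind of scheme (a sketch of the ISBN-10 checksum, not the Verhoeff algorithm): the last digit of an ISBN-10 is chosen so that the weighted sum of the digits is divisible by 11, which detects any single-digit error and any adjacent transposition.

```python
def isbn10_valid(isbn):
    """Check an ISBN-10: the weighted sum sum(i * d_i) over positions
    i = 1..10 must be divisible by 11 ('X' stands for the value 10)."""
    digits = [10 if c == 'X' else int(c) for c in isbn.replace('-', '')]
    return sum(i * d for i, d in enumerate(digits, start=1)) % 11 == 0
```

For instance, 0-306-40615-2 is a valid ISBN-10, but changing any one digit, or swapping two adjacent digits, makes the check fail.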
Three people enter a room, and a blue or red hat is put on each of their heads. Each of them must simultaneously either guess the color of their own hat or pass. They win if one of them guesses correctly and no one guesses incorrectly. What is the best strategy? The answer is related to coding theory. See the following New York Times article, as well as this article by Lenstra and Seroussi.
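The optimal three-player strategy can be checked by brute force: a player who sees two hats of the same color guesses the opposite color, and otherwise passes. A short sketch (colors encoded as 0 and 1) confirms this wins in 6 of the 8 equally likely configurations, i.e. with probability 3/4:

```python
from itertools import product

def play(hats):
    """Each player sees the other two hats. If both are the same color,
    guess the opposite color; otherwise pass (None)."""
    guesses = []
    for i in range(3):
        others = [hats[j] for j in range(3) if j != i]
        guesses.append(1 - others[0] if others[0] == others[1] else None)
    correct = any(g == h for g, h in zip(guesses, hats))
    wrong = any(g is not None and g != h for g, h in zip(guesses, hats))
    return correct and not wrong

wins = sum(play(h) for h in product((0, 1), repeat=3))  # wins == 6
```

The strategy loses exactly on the two all-same-color configurations, where all three players guess wrong at once; this "clustering of errors" is the connection to covering codes.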
The Vigenère cipher is a simple cipher dating back to the 16th century. Unfortunately, due to its simplicity, it can easily be broken with elementary cryptanalysis. See §7.1–7.2 of the textbook.
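The cipher itself fits in a few lines. A minimal sketch (letters A–Z only; each key letter acts as a shift, cycling through the key):

```python
def vigenere(text, key, decrypt=False):
    """Shift each letter of `text` (A-Z only) by the corresponding letter
    of `key` (A = 0, ..., Z = 25), cycling through the key."""
    sign = -1 if decrypt else 1
    shifts = [ord(k) - ord('A') for k in key.upper()]
    return ''.join(
        chr((ord(c) - ord('A') + sign * shifts[i % len(shifts)]) % 26 + ord('A'))
        for i, c in enumerate(text.upper()))
```

Encrypting ATTACKATDAWN with the key LEMON gives the classic example LXFOPVEFRNHR; decrypting with the same key recovers the plaintext.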
Elliptic curves arise as the zeroes of a certain type of cubic equation. Interestingly, one can define a group structure on the set of points of an elliptic curve in a nonobvious way. Over a finite field, this group can be used as the basis of a public-key cryptography system. See §9.4 and §9.7 of the textbook.
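The group law can be sketched concretely: to add two points, take the line through them, find its third intersection with the curve, and reflect over the x-axis. Over a finite field this becomes modular arithmetic. A minimal sketch, using the common toy curve y² = x³ + 2x + 2 over F₁₇ (the coefficient b does not appear in the addition formulas; Python 3.8+ for the modular inverse `pow(x, -1, p)`):

```python
def ec_add(P, Q, a, p):
    """Add two points on y^2 = x^3 + a*x + b (mod p).
    None represents the identity (the point at infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = identity
    if P == Q:
        m = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, p) % p         # chord slope
    x3 = (m * m - x1 - x2) % p
    y3 = (m * (x1 - x3) - y1) % p
    return (x3, y3)
```

On this curve, doubling P = (5, 1) gives 2P = (6, 3), and adding P to its reflection (5, 16) gives the identity.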
The Merkle-Hellman knapsack cryptosystem was a public-key cryptosystem based on the knapsack problem, which is known to be difficult in general but easy in certain special cases. Unfortunately, the system was shown to be insecure by Shamir in 1982. See §8.5 of Rosen, Elementary Number Theory and Its Applications.
The Cayley-Purser algorithm was a public-key cryptosystem described by 16-year-old Sarah Flannery (based on unpublished work of Michael Purser). Unfortunately, this system was later shown to be insecure. See Flannery, Cryptography: An Investigation of a New Algorithm vs. the RSA, available here.
The Advanced Encryption Standard (AES) is an encryption algorithm adopted by the US government and now used worldwide. It consists of multiple rounds of linear and nonlinear operations and is thus far thought to be safe from feasible attacks. See §10 of the textbook.
Though the RSA cryptosystem is currently thought to be secure, it is vulnerable to attack if not used properly. See this survey by Boneh. (For your project, you should choose one or two of these to discuss in detail.)
Recently there has been research into the effectiveness of RSA keys used in the real world. While most keys are secure, a small percentage can be attacked due to faulty key generation. See the following recent paper by Heninger, Durumeric, Wustrow, and Halderman.
Quantum cryptography refers to cryptographic systems that use features of quantum mechanics, such as the Heisenberg uncertainty principle, to ensure security. Such systems can, for instance, detect whether a message has been seen by an intruder. See §6.3 of Mollin, An Introduction to Cryptography, and the Nature article by Klarreich, Can You Keep a Secret?
A cryptographic hash is a one-way function used for many applications, such as digital signatures, message authentication, and proof-of-work systems (such as those used by Bitcoin and other cryptocurrencies). See the following survey by Mironov.
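A quick sketch using Python's standard library shows the key behavior: the digest looks random, and changing even one character of the input changes it completely (the "avalanche effect"), which is what makes finding two inputs with the same digest infeasible.

```python
import hashlib

def digest(msg):
    """SHA-256 digest of a string, as a 64-character hex string."""
    return hashlib.sha256(msg.encode()).hexdigest()

d1, d2 = digest("abc"), digest("abd")
matching = sum(a == b for a, b in zip(d1, d2))  # hex positions that agree
```

Here `d1` and `d2` agree in only about the number of positions two random hex strings would, even though the inputs differ in a single letter.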
Suppose Alice and Bob want to make a decision using a coin flip, but they are talking on the phone. Is there a way that they can do this so that it is impossible for either of them to cheat? See §11.5 of Rosen, Elementary Number Theory and Its Applications.
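Rosen's solution uses square roots modulo a composite number; as a simpler illustration of the same underlying idea, here is a hash-based "commitment" sketch: Alice commits to her call before Bob flips, revealing nothing, but she cannot change her bit afterwards.

```python
import hashlib
import secrets

def commit(bit):
    """Alice hashes her bit together with a random nonce. The commitment
    hides the bit, but binds her to it: she cannot find a different
    (bit, nonce) pair with the same hash."""
    nonce = secrets.token_hex(16)
    commitment = hashlib.sha256(f"{bit}:{nonce}".encode()).hexdigest()
    return commitment, nonce

def verify(commitment, bit, nonce):
    """Bob checks Alice's claimed bit and nonce against the commitment."""
    return hashlib.sha256(f"{bit}:{nonce}".encode()).hexdigest() == commitment
```

Alice sends the commitment, Bob announces his coin flip, and then Alice reveals her bit and nonce so Bob can verify she didn't change her call.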
Pollard's p–1 algorithm is a special-purpose factoring algorithm for the case when one of the prime factors p of n is such that p–1 has only small prime factors. See §4.6 of Rosen, Elementary Number Theory and Its Applications.
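A minimal sketch of the idea: build up a = 2^(B!) mod n for a bound B. If p − 1 divides k! for some k ≤ B and some prime factor p of n, then Fermat's little theorem gives a ≡ 1 (mod p), so gcd(a − 1, n) exposes p.

```python
from math import gcd

def pollard_p_minus_1(n, bound=100):
    """Try to find a nontrivial factor of n, hoping some prime factor p
    of n has p - 1 composed only of small primes."""
    a = 2
    for k in range(2, bound + 1):
        a = pow(a, k, n)        # after this step, a = 2^(k!) mod n
        d = gcd(a - 1, n)
        if 1 < d < n:
            return d
    return None                 # no factor found with this bound
```

For instance, n = 1403 = 23 · 61 factors quickly this way, since 61 − 1 = 60 has no prime factor larger than 5.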
You may instead discuss another common factorization algorithm, such as the quadratic sieve method (see the following article by Pomerance).
The Pohlig-Hellman algorithm is a special-purpose algorithm for solving the discrete log problem modulo p when p–1 has only small prime factors. See §8.2.2 and exercise 8.1 of Katz-Lindell, Introduction to Modern Cryptography, available here.
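A sketch of the idea in the simplest case (p − 1 squarefree, so no prime-power lifting step is needed): reduce the discrete log to one small subproblem for each prime factor q of p − 1, solve each by brute force, and recombine with the Chinese remainder theorem. The example values below (p = 31, g = 3) are just illustrative.

```python
from math import prod  # Python 3.8+

def crt(residues, moduli):
    """Chinese remainder theorem for pairwise coprime moduli."""
    N = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Ni = N // m
        x += r * Ni * pow(Ni, -1, m)
    return x % N

def pohlig_hellman(g, h, p, factors):
    """Solve g^x = h (mod p), where `factors` lists the distinct primes
    dividing p - 1 (assumed squarefree to keep the sketch short)."""
    residues = []
    for q in factors:
        gq = pow(g, (p - 1) // q, p)   # an element of order q
        hq = pow(h, (p - 1) // q, p)
        # brute-force the small subproblem: find x mod q
        xq = next(k for k in range(q) if pow(gq, k, p) == hq)
        residues.append(xq)
    return crt(residues, factors)
```

For instance, with p = 31 (so p − 1 = 30 = 2 · 3 · 5) and primitive root g = 3, the routine recovers x = 17 from h = 3¹⁷ mod 31 = 22.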
You may instead discuss another algorithm for the discrete log problem, such as the index calculus method. See §8.2.4 of Katz-Lindell above.
The Miller-Rabin primality test is a fast probabilistic algorithm for primality testing. See the Wikipedia article.
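A sketch of the test: write n − 1 = 2^s · d with d odd; for a random base a, compute a^d mod n and its repeated squares, and check for the pattern that a prime must produce. Composites divisible by 2, 3, 5, or 7 are filtered first so the random range below is always valid.

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic primality test. Never rejects a prime; a composite
    survives each round with probability at most 1/4."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    s, d = 0, n - 1
    while d % 2 == 0:
        s, d = s + 1, d // 2
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False        # a witnesses that n is composite
    return True                 # probably prime
```

Since a composite fools any single round with probability at most 1/4, twenty rounds leave less than a one-in-10¹² chance of a wrong answer.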
You may instead discuss another primality test, such as the AKS primality test. See the following article by Bornemann in the Notices of the American Mathematical Society.
Hidden Markov models are statistical models similar to Markov chains except that the states are not directly observable but instead cause probabilistic observations. They are useful for modeling more complex phenomena than Markov chains, such as speech recognition. See the following article by Rabiner and Juang.
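The central computation for an HMM, the probability of an observed sequence summed efficiently over all hidden state paths, is the forward algorithm, which amounts to repeated matrix-vector multiplication. A minimal sketch (probabilities as plain nested lists; the two-state example values in the usage below are made up):

```python
def forward(obs, init, trans, emit):
    """Forward algorithm: probability of the observation sequence `obs`
    under an HMM with start probabilities init[i], transition
    probabilities trans[i][j], and emission probabilities emit[i][o]."""
    n = len(init)
    # alpha[i] = P(observations so far, hidden state currently i)
    alpha = [init[i] * emit[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return sum(alpha)
```

For example, with two hidden states that never switch (`trans = [[1, 0], [0, 1]]`), equal start probabilities, and emissions `[[0.9, 0.1], [0.2, 0.8]]`, the sequence `[0, 0]` has probability 0.5 · 0.9² + 0.5 · 0.2² = 0.425.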
It is often desirable to perform random walks on graphs that "spread out" quickly. This leads to the notion of an expander graph, whose expansion is governed by the second-largest eigenvalue of its adjacency matrix. See the following notes by Nielsen.
How many ways are there to color the faces of a cube with n different colors? This question is trickier than expected because many colorings of a cube are equivalent under rotation. There is a group-theoretic method for answering such questions called Pólya counting. See §11.1–11.3 of the textbook.
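The core tool is Burnside's lemma: the number of distinct colorings equals the average, over the 24 rotations of the cube, of the number of colorings each rotation fixes. A brute-force sketch (the face labeling and the two generating rotations are one concrete choice; two quarter turns about perpendicular axes generate the whole rotation group):

```python
from itertools import product

# Faces: 0=up, 1=down, 2=front, 3=back, 4=left, 5=right.
# A permutation sends face i to position perm[i].
ROT_X = (2, 3, 1, 0, 4, 5)   # quarter turn about the left-right axis
ROT_Y = (0, 1, 5, 4, 2, 3)   # quarter turn about the up-down axis

def compose(p, q):
    """The permutation 'first q, then p'."""
    return tuple(p[q[i]] for i in range(6))

def rotation_group():
    """Generate all 24 rotations from the two generators."""
    group = {tuple(range(6))}
    frontier = [tuple(range(6))]
    while frontier:
        g = frontier.pop()
        for r in (ROT_X, ROT_Y):
            h = compose(r, g)
            if h not in group:
                group.add(h)
                frontier.append(h)
    return group

def count_colorings(n):
    """Burnside's lemma: orbits = average number of fixed colorings."""
    G = rotation_group()
    fixed = sum(1
                for c in product(range(n), repeat=6)
                for g in G
                if all(c[g[i]] == c[i] for i in range(6)))
    return fixed // len(G)
```

This reproduces the closed-form answer (n⁶ + 3n⁴ + 12n³ + 8n²)/24 that Pólya counting yields; for n = 2 colors it gives 10 distinct cubes.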
Twisty puzzles such as the Rubik's cube are often inherently mathematical in nature. One can often use tools from group theory to analyze and help solve such puzzles. For instance, see these notes by Tom Davis.
One common way to solve a linear system in two variables x and y is to solve for y in terms of x and then substitute, thereby eliminating y. It is similarly possible to eliminate y in a polynomial system of equations of the form f(x,y)=g(x,y)=0 using the theory of resultants. See §2 from Sturmfels, "Introduction to Resultants," available here.
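In one variable the resultant is quite concrete: it is the determinant of the Sylvester matrix built from the coefficients of f and g, and it vanishes exactly when f and g share a root. A small sketch (coefficient lists ordered from highest to lowest degree; the naive cofactor determinant is fine at this scale):

```python
def sylvester(f, g):
    """Sylvester matrix of polynomials f, g given as coefficient lists
    from highest to lowest degree (size = deg f + deg g)."""
    m, n = len(f) - 1, len(g) - 1
    M = [[0] * (m + n) for _ in range(m + n)]
    for i in range(n):               # n shifted rows of f's coefficients
        for j, c in enumerate(f):
            M[i][i + j] = c
    for i in range(m):               # m shifted rows of g's coefficients
        for j, c in enumerate(g):
            M[n + i][i + j] = c
    return M

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def resultant(f, g):
    return det(sylvester(f, g))
```

For example, f = x² − 1 and g = x − 2 give resultant 3, while replacing g with x − 1 (which shares the root 1 with f) gives 0.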
One approach to facial recognition uses principal component analysis to break down an image into eigenfaces. See the following paper by Turk and Pentland.