EECS 208 | Computational Principles for High-Dimensional Data Analysis
This 4-unit graduate course, EECS 208, introduces basic geometric and statistical concepts and principles of low-dimensional models for high-dimensional signal and data analysis, spanning basic theory, efficient algorithms, and diverse applications. We will discuss recovery theory, based on high-dimensional geometry and non-asymptotic statistics, for sparse, low-rank, and low-dimensional models, including compressed sensing theory, matrix completion, robust principal component analysis, and dictionary learning. We will introduce principled methods for developing efficient optimization algorithms for recovering low-dimensional structures, with an emphasis on scalable first-order methods for solving the associated convex and nonconvex problems. We will illustrate the theory and algorithms with numerous application examples drawn from computer vision, image processing, audio processing, communications, scientific imaging, bioinformatics, and information retrieval. The course will provide ample mathematical and programming exercises with supporting algorithms, code, and data. A final course project will give students additional hands-on experience with an application area of their choosing. Throughout the course, we will discuss strong conceptual, algorithmic, and theoretical connections between low-dimensional models and other popular data-driven methods such as deep neural networks (DNNs), providing new perspectives for understanding deep learning.
The course includes three hours of lectures (by the instructor) per week and a one-hour discussion session every 2–3 weeks (depending on the pace of the course). Homework includes both written and programming exercises. The final course project includes a midterm proposal and a final presentation and report.