Course format:
The course will consist of several modules, each focusing on a specific algorithmic problem or algorithmic setting. The goal will be to study theoretical tools for analyzing algorithms that work well in that specific domain.
A tentative list of modules is given below (we will not cover all of them, but choose a selection depending on the preferences of the lecturer and the audience). Under each module is a tentative list of topics/concepts.
Grading:
30% homework (around 3 HWs)
20% scribe notes (writing a summary of 1-2 lectures)
50% final project
Tentative list of modules and topics:
1. Algorithms for approximate nearest neighbor search.
Doubling dimension; geometric random graphs; Steiner hardness; spectral notions.
2. Models and algorithms for large graphs.
Random network models; core-periphery structure; highway dimension & skeleton dimension; applied notions of graph expansion.
3. Algorithms in the human brain.
Natural shapes and the visual system; algorithmic models for short-term memory; Hopfield networks and associative memory.
--
We will study a subset of the following topics (not all, due to time constraints):
4. Clustering.
Perturbation resilience; approximation stability.
5. Smoothed analysis.
Linear programming and the simplex method.
6. Deep learning: selected topics.
Overparameterization; local minima; tensor decompositions.
7. Randomized algorithms: selected topics.
Simple hash functions and input entropy; instance optimality; self-improving algorithms.