Prerequisites: 6.006, 6.042 and one of 6.045 or 6.046, or permission of instructor
Instructors: Profs. Ryan Williams (firstname.lastname@example.org) and Virginia Williams (email@example.com)
Schedule: MW 2:30-4, room 36-156
An overview of the theory of parameterized algorithms and the emergent "problem-centric" theory of fine-grained complexity, both of which reconsider how to measure the difficulty and feasibility of solving computational problems. Topics include: fixed-parameter tractability (FPT) and its characterizations, the W-hierarchy (W[1], W[2], W[P], etc.), 3SUM-hardness, APSP-equivalences, SETH hardness of problems, as well as connections to circuit complexity and other aspects of computing.
Extended description follows:
This course consists of two parts, both of which are intensively active research areas in theoretical computer science.
The first part deals with problem parameterizations. Many of the problems we want to solve are NP-hard or worse, but they need to be solved anyway. Over the years, multiple paradigms for coping with NP-hardness have been introduced, either by relaxing the problem or by strengthening the computational model. Within the last 20 years, a new paradigm arose in which one measures the time complexity of an algorithm not just in terms of the input length but also in terms of a small side parameter. A priori, the parameter can be anything, but the interesting case is when complex instances of the problem still have relatively small parameter values. The goal is to identify interesting parameterizations of hard problems for which we can design algorithms running in time polynomial in the input length but possibly exponential (or worse) in the small parameter. Such an algorithm is called "fixed-parameter tractable". The first part of the course will develop techniques for designing efficient fixed-parameter algorithms, and for showing that for various problems such algorithms are unlikely to exist.
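As a concrete illustration (a sketch for intuition, not official course material): the classic bounded-search-tree algorithm for Vertex Cover, parameterized by the solution size k, runs in O(2^k * m) time on a graph with m edges, polynomial in the input but exponential only in the small parameter. This is a prototypical fixed-parameter tractable algorithm; the function name below is our own.

```python
def vertex_cover_at_most_k(edges, k):
    """Decide whether the graph given by `edges` has a vertex cover
    of size at most k.

    Bounded search tree: pick any uncovered edge (u, v); one of its
    endpoints must be in any cover, so branch on both choices.
    Recursion depth is at most k with branching factor 2, giving
    O(2^k * m) time overall.
    """
    if not edges:
        return True          # no edges left: the empty set covers them
    if k == 0:
        return False         # edges remain but the budget is exhausted
    u, v = edges[0]
    # Branch 1: put u in the cover and delete all edges touching u.
    rest_u = [e for e in edges if u not in e]
    # Branch 2: put v in the cover instead.
    rest_v = [e for e in edges if v not in e]
    return (vertex_cover_at_most_k(rest_u, k - 1) or
            vertex_cover_at_most_k(rest_v, k - 1))
```

For example, a triangle has a vertex cover of size 2 but not of size 1, and the search tree verifies both facts after at most four branches.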
The second part of the course focuses not on "hard" problems but on those that have already been deemed "easy", i.e. problems known to have polynomial-time algorithms, about which NP-hardness therefore says nothing. The best known algorithms for many such problems have high running times and are rarely used in practice. Improving these running times has been a longstanding open problem, with little to no progress, so it is entirely plausible that these algorithms are optimal. However, proving unconditional lower bounds seems nearly impossible with current techniques. A new, conditional theory of hardness has recently been developed, based on several plausible conjectures. The theory develops reductions between seemingly very different problems, showing that the known algorithms are likely difficult to improve for a common underlying reason, even though the problems' known running times may be very different. The second part of the course will develop this "fine-grained complexity" theory. We will see many reductions, as well as connections to classical circuit complexity theory.
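To make the flavor concrete (again an illustrative sketch, with a function name of our own choosing): the 3SUM problem asks whether a list of numbers contains three entries summing to zero. The standard sort-and-two-pointers algorithm runs in O(n^2) time, and the 3SUM conjecture mentioned above asserts that no algorithm is substantially faster; many fine-grained hardness results are reductions from exactly this problem.

```python
def has_3sum(nums):
    """Return True if some three entries of `nums` (by index) sum to 0.

    Sort once, then for each i scan the suffix with two pointers.
    Total time O(n^2); the 3SUM conjecture asserts that no
    O(n^{2 - eps}) algorithm exists for any eps > 0.
    """
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        target = -a[i]       # looking for a[lo] + a[hi] == -a[i]
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[lo] + a[hi]
            if s == target:
                return True
            if s < target:
                lo += 1      # sum too small: advance the low pointer
            else:
                hi -= 1      # sum too large: retreat the high pointer
    return False
```

For instance, [-1, 0, 1, 2] contains the triple -1 + 0 + 1 = 0, while [1, 2, 3] contains no zero-sum triple.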
The course will be heavily research-oriented with lots of open problems, and it will introduce you to new ways of thinking about existing theory.