EECS Special Seminar: Virginia Williams "A Fine-grained Approach to Algorithms and Complexity"

Event Speaker: Virginia Williams

Event Location: 32-G882 (Hewlett Room)

Event Date/Time: Thursday, January 21, 2016 - 4:00pm

Abstract: Algorithms research has developed an impressive and
ever-growing toolbox of algorithmic techniques. Despite this, there are
many important computational problems for which none of the existing
techniques seem to work. The fastest known algorithms for these problems
are essentially the classical, often brute-force algorithms discovered
when these problems were first studied. Examples of such problems are
pervasive throughout computer science: (1) for the all-pairs shortest
paths problem (APSP) in n-node graphs, motivated by GPS and Internet
routing applications, all known algorithms run in n^{3-o(1)} time; (2)
the longest common subsequence (LCS) of two sequences of length n,
central in computational biology, seems to require n^{2-o(1)} time; (3)
the Boolean Satisfiability problem (SAT) on formulas with n variables
seems to require 2^{n-o(n)} time; and there are many more.
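
As a concrete illustration of the kind of classical algorithm referred to
above, here is a minimal Python sketch of the textbook dynamic program for
LCS; it fills an (n+1) x (m+1) table and therefore runs in quadratic time.

    # Textbook O(n*m) dynamic program for the longest common subsequence
    # of strings a and b; dp[i][j] holds the LCS length of a[:i] and b[:j].
    def lcs_length(a, b):
        n, m = len(a), len(b)
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                if a[i - 1] == b[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        return dp[n][m]

    print(lcs_length("AGGTAB", "GXTXAYB"))  # prints 4 (an LCS is "GTAB")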

These problems need to be solved in practice, where runtimes such as n^2
and n^3 are often not considered fast.
Why have substantial improvements over the classical algorithms been so
difficult to attain? Are all these problems hard for the same reason?

Proving that faster algorithms do not exist seems difficult: it is an
open problem whether even notoriously hard problems like SAT have
linear-time algorithms. Traditional approaches to proving hardness have
focused on placing problems in complexity classes such as AC0, L, P, NP,
PSPACE, etc. Such an approach, however, does not address the exact
running time needed to solve problems. The theory of NP-completeness
comes close: if a problem is NP-complete, it is considered unlikely to
have a polynomial-time algorithm, since otherwise many other very
difficult problems would have fast algorithms. NP-completeness, however,
cannot address questions such as whether n^2 is the fastest possible
running time for LCS. For this, a more fine-grained approach that
focuses on exact runtimes is needed. My research has helped shape
fine-grained complexity and has led to showing that many problems are
equivalent in this fine-grained sense. In this talk, I will give an
overview of the fine-grained approach and highlight a few of its recent
successes, both in proving lower bounds for problems and in developing
new algorithms.
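
The flavor of such a conditional lower bound can be sketched as follows,
assuming the Strong Exponential Time Hypothesis (SETH), a standard
hardness assumption in this area; the LCS statement below is one
well-known example of the kind of result the fine-grained approach yields.

    % SETH: for every eps > 0 there is a k such that k-SAT on n variables
    % cannot be solved in O(2^{(1-eps)n}) time.
    \[
      \textsf{SETH} \;\Longrightarrow\; \text{LCS on two length-$n$
      sequences requires } n^{2-o(1)} \text{ time.}
    \]
    % Contrapositive: an O(n^{2-\varepsilon})-time algorithm for LCS, for
    % any \varepsilon > 0, would yield an O(2^{(1-\delta)n})-time algorithm
    % for CNF-SAT for some \delta > 0, refuting SETH.

Read in the contrapositive, a fine-grained reduction of this kind means
that any substantially subquadratic LCS algorithm would be a breakthrough
for SAT.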

Bio:
Virginia V. Williams is an assistant professor of computer science at
Stanford University. She obtained her Ph.D. in 2008 from Carnegie
Mellon, where she was advised by Guy Blelloch. Before joining Stanford,
she spent time at the Institute for Advanced Study and UC Berkeley. Her
main area of interest is broadly the design and analysis of algorithms,
and more specifically graph and matrix algorithms. Her work on matrix
multiplication, in particular, was a spotlight paper at STOC'12 and was
also covered by the press.