Prereq: 6.867, or a solid grasp of machine learning
Constantinos Daskalakis, Aleksander Madry, Ankur Moitra, Asuman Ozdaglar, Pablo Parrilo,
Martin Rinard, Arvind Satyanarayan, Armando Solar-Lezama, David Sontag, Russ Tedrake, Antonio Torralba
Schedule: TR 12:30-2, Room 32-144
Today’s ML solutions achieve impressive, sometimes super-human, performance, suggesting that broad deployment of ML will revolutionize almost every aspect of our lives. However, much of ML development so far has focused on clean, controlled or simulated settings and has taken an average-case, “proof of concept” perspective. This mindset, while very useful up to now, turns out to be grossly inadequate for the often high-stakes nature of real-world ML deployment. There, the chief design goal is robustness to the variety of corruptions, whether benign or malicious, that abound in real-world contexts. It is also necessary to have a fine-grained grasp of the biases our models learn and of the impact of employing them in decision making. Finally, to be truly useful, our ML tools need to be understandable to humans and easy to work with, even for users with no ML expertise.
The current ML toolkit fails catastrophically with respect to these criteria. We therefore need a new, multi-pronged effort that revisits the tenets of the existing ML framework and builds the next generation of machine learning, one that can be deployed in the real world in a safe, secure, and responsible manner.
This course will begin with background lectures and then shift into a series of lectures, given by the co-instructors, on fundamental ideas and phenomena relevant to building truly deployable ML. Class projects will then require students to explore these topics in more depth and present their findings in the form of a presentation and a report.