Doctoral thesis: Computing moral hypotheticals

Event Speaker: Dylan Holmes
Event Location: via Zoom, see details below
Event Date/Time: Thursday, July 29, 2021 - 3:00pm

Abstract:
Our moral judgments depend on our ability to imagine what else might
have happened: we forgive harms that prevent greater harms, we excuse
bad outcomes when all others seem worse, and we condemn inaction when
good actions are within reach. To explain how we do this, I built a
computational model that reads and evaluates short text-based stories,
computing hypotheticals in order to make moral judgments.

I identify what specialized knowledge we need in order to know /which/
hypothetical alternatives to consider. I show how to connect abstract
knowledge about moral harms to the particular details in a story.
Finally, I show how the system can assess outcomes in a purely
qualitative, human-like way by decomposing outcomes into their harmful
components; I argue that---as in real life---many outcomes are
incomparable.
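
To make this idea concrete, here is a minimal illustrative sketch, not the implementation described in the thesis: each outcome is modeled as a set of named harm components, one outcome counts as worse only when its harms strictly contain the other's, and outcomes whose harm sets neither contain the other are reported as incomparable. The representation, the function name, and the example harms below are assumptions made purely for illustration.

# Illustrative sketch only (not the thesis's implementation): a qualitative,
# partial-order comparison of outcomes, each modeled as a frozenset of
# named harm components.

from typing import FrozenSet

Harm = str
Outcome = FrozenSet[Harm]

def compare(a: Outcome, b: Outcome) -> str:
    """Compare two outcomes purely qualitatively.

    Returns 'worse', 'better', 'equal', or 'incomparable'. Outcome a is
    judged worse than b only if a contains every harm in b plus at least
    one more; no numeric weighing of harms is ever performed.
    """
    if a == b:
        return "equal"
    if a > b:                  # a's harms strictly contain b's
        return "worse"
    if a < b:                  # b's harms strictly contain a's
        return "better"
    return "incomparable"      # neither harm set contains the other

# Hypothetical example harms, not drawn from the thesis:
theft = frozenset({"property loss"})
assault = frozenset({"physical injury"})
theft_and_assault = frozenset({"property loss", "physical injury"})

print(compare(theft_and_assault, theft))  # 'worse': strictly more harm components
print(compare(theft, assault))            # 'incomparable': no qualitative ordering

Treating the comparison as a partial order in this way is what lets the sketch report "incomparable" instead of forcing every pair of outcomes into a ranking.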

I support my theoretical claims with references to the cognitive science
and philosophical literature, and I demonstrate the system's
explanatory breadth with diverse examples including escalating
revenge, slap-on-the-wrist, preventive harm, self-defense, and
counterfactual dilemma resolution.

The key insight is that hypothetical context modulates understanding.
With this system, I shed light on what is needed to grasp hypothetical
context as effortlessly and automatically as we humans do. And I lay
the groundwork for moral reasoning systems that are as nuanced,
imaginative, and articulate as we humans are.
 
Thesis Supervisor: Prof. Randall Davis
Readers: Profs. Gerald Sussman and Peter Szolovits
 
 
To attend this defense, please contact the doctoral candidate at dxh at mit dot edu