My research employs the methods of model theory and proof theory to study several types of explanation. In particular, I am concerned with: constitutive explanatory terms such as ‘necessary for’, ‘sufficient for’, ‘what it is for’, and ‘just is’; causal explanatory terms such as ‘caused’ and ‘makes it the case that’; and normative explanatory terms such as ‘reason for’ in claims like, “The rainy weather is a reason for John to take an umbrella.” I am also interested in theories of subject-matter and relevance, counterfactual and indicative conditionals, and logics for tense and modality.

To facilitate the study of hyperintensional models for languages that include operators such as those mentioned above, I developed model-checker software that both finds countermodels and establishes the validity of inferences over all models up to a user-specified complexity.
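To give a rough sense of the countermodel-search strategy, here is a deliberately simplified, hypothetical sketch (not the actual software, which handles hyperintensional semantics): for classical propositional logic, an inference is valid just in case no valuation of the atoms makes every premise true and the conclusion false, so a bounded search can simply enumerate all valuations. The function name and formula encoding below are illustrative assumptions.

```python
# Hypothetical sketch of bounded countermodel search for classical
# propositional logic. Formulas are encoded as functions from a
# valuation (dict mapping atom names to booleans) to a boolean.
from itertools import product

def countermodel(atoms, premises, conclusion):
    """Return a valuation making every premise true and the conclusion
    false, or None if the inference is valid over all models on the
    given atoms."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return v  # falsifying model found
    return None  # exhausted all models: the inference is valid

# Modus ponens (A, A -> B, therefore B) is valid: no countermodel.
mp = countermodel(["A", "B"],
                  [lambda v: v["A"], lambda v: (not v["A"]) or v["B"]],
                  lambda v: v["B"])
# Affirming the consequent (B, A -> B, therefore A) is invalid.
ac = countermodel(["A", "B"],
                  [lambda v: v["B"], lambda v: (not v["A"]) or v["B"]],
                  lambda v: v["A"])
print(mp)  # None
print(ac)  # {'A': False, 'B': True}
```

For richer languages the search space is a set of structured models rather than truth-value assignments, but the shape is the same: enumerate candidate models up to a complexity bound and report the first that falsifies the inference.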

In addition to understanding how these common elements of human reasoning function, I am interested in applying my work on causal, normative, and conditional reasoning to AI safety, in order to support efforts to maintain oversight of AI-assisted decision-making in socially and morally sensitive sectors.

Separately, I am working to construct modern analogues of ancient Indian philosophies of the self, which are often stated in constitutive explanatory terms.

Papers

In Progress