My research develops a unified hyperintensional semantics and logic for a Language of Thought (LoT). Just as the material sciences have devised methods for refining the raw materials of the natural world, philosophical logic employs model theory and proof theory to engineer concepts that are better suited to theory building. In addition to these traditional methods, I have developed a programmatic methodology for implementing semantic theories with the model-checker in order to rapidly prototype and explore novel semantic theories (see software for details and this handout from a recent talk).

The Language of Thought

Whereas logic has traditionally focused on small language fragments, I am constructing a unified hyperintensional semantics and logic for the Language of Thought (LoT), which includes a wide variety of logical operators with which to reason. I have developed a semantics and logic for the following:

In addition to the operators above, I aim to include epistemic operators for reasoning under uncertainty, as well as normative operators for reasoning about relative values and imperatives in order to support alignment with other agents. For instance, here are some of the operators that I intend to include:

Given a unified semantics for the LoT operators, logical consequences may be evaluated using formal methods, as demonstrated by the model-checker. By contrast, developing a unified semantics and logic without a programmatic implementation makes logical consequences impractical to evaluate, since each must then be established by writing semantic proofs by hand. See software for further details on the programmatic methodology that I have developed to support a unified semantics and logic for the LoT operators.
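To make the contrast with hand-written semantic proofs concrete, here is a minimal sketch of the countermodel-search idea using the Z3 SMT solver. This is an illustrative stand-in, not the model-checker's actual API: atoms are interpreted over a small finite frame, a necessity operator is defined from that interpretation, and a candidate consequence counts as valid over the frame just in case the solver cannot find a model where the premises are true and the conclusion is false.

```python
# A minimal sketch of countermodel search with Z3 (illustrative only, not the
# model-checker's API): fix a small finite set of worlds, interpret atoms as
# world-indexed Booleans, define necessity as truth at every world, and ask
# the solver for a model where the premises hold but the conclusion fails.
from z3 import Bools, And, Or, Implies, Not, Solver, sat

N = 4                                               # number of worlds searched
A = Bools(" ".join(f"A{i}" for i in range(N)))      # truth value of A at world i
B = Bools(" ".join(f"B{i}" for i in range(N)))      # truth value of B at world i

def box(prop):
    """Necessity over the finite frame: true iff prop holds at every world."""
    return And([prop[i] for i in range(N)])

def countermodel(premises, conclusion):
    """Return a Z3 model falsifying the inference, or None if none is found."""
    s = Solver()
    s.add(And(premises), Not(conclusion))
    return s.model() if s.check() == sat else None

# Valid: box(A -> B), box(A) entail box(B) -- the search prints None.
print(countermodel([box([Implies(A[i], B[i]) for i in range(N)]), box(A)], box(B)))

# Invalid: box(A or B) does not entail box(A) -- the search prints a countermodel.
print(countermodel([box([Or(A[i], B[i]) for i in range(N)])], box(A)))
```

The same recipe extends to richer operators by enriching the models over which the solver searches; the point here is only that validity claims become mechanically checkable once the semantics is implemented programmatically.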

Formal Verification

In addition to providing a unified semantic framework for interpreting LoT operators, I am working to unify the logics employed in formal methods by identifying the following as subsystems of the LoT, given its expressive resources:

In addition to modeling system behavior and verifying software, this project is motivated by applications in AI safety. By analogy, formal verification is to generative AI as the critical mind is to the creative. By working to integrate these counterbalancing elements, I aim to contribute to the development of trustworthy AI systems that will support rather than thwart a safe digital future.

Publications

In Progress