Current methods of AI reasoning employ domain-specific heuristic strategies that remain inconsistent, unverifiable, and opaque. Any morally accountable agent worthy of our trust must be able to premeditate actions, estimate the risk of counterfactual alternatives, rank plans according to preferences and norms, enact chosen plans while monitoring their execution, compare intended and actual outcomes, and update its preferences and methods of estimation accordingly. To conduct general and transparent planning of this kind, trustworthy AI agents must be able to reason with circumstantial modal, tense, and counterfactual conditional operators, as well as causal, epistemic, and deontic operators, in order to reason under uncertainty about imperatives, preferences, and moral responsibility in coordination with human and other AI agents.
In order to equip AI agents with a capacity for self-reflection and higher cognitive reasoning, I developed the `model-checker` package to implement a programmatic semantics for the Logos, a unified formal language of thought. The next phase of this project will:
- Extend the Logos to include causal, epistemic, and deontic operators, implementing a semantic theory for each with the `model-checker`.
- Build the `proof-checker` package in LEAN to develop the metalogic and proof theory for the semantic theories implemented in the `model-checker`.
These resources provide foundations for building AI agents capable of meta-cognition in addition to deductive, abductive, and inductive reasoning that integrates with existing domain-specific heuristic methods of AI reasoning. Further details are provided below.
Model-Checker
In order to extend the methods of standard truth-conditional semantics, the `model-checker` draws on Z3's Python API to provide a general-purpose programmatic methodology for implementing modular semantic theories for formal languages. By streamlining the process of developing and systematically testing novel semantic theories to identify their logics, the `model-checker` makes it possible to rapidly prototype and combine semantic theories. These methods aim to support the growth and maturity of formal semantics as a discipline by easing the process of investigating increasingly complex theories for unified formal languages with an extensible collection of operators.
Although computational resources are commonplace, the ability to use them to develop and explore the implications of novel semantic theories remains limited. For instance, Prover9 and Mace4 are restricted to first-order and equational statements. For the purposes of developing new semantic theories and studying their corresponding logics, however, it is often useful to have greater flexibility in the semantic primitives used to define the models of the language. More information about the methodology for implementing a semantics can be found in the GitHub repository.
By providing well-documented, open-source tooling for developing and sharing novel semantic theories, this project makes this high-performance methodology accessible for open-source collaboration. The `TheoryLib` is a library of semantic theories that can be explored and modified, and users can contribute novel semantic theories to the `TheoryLib` to be shared publicly.
In addition to a general-purpose methodology for rapidly prototyping modular semantic theories, the `model-checker` includes a unified semantic theory for the Logos, which provides an extensible formal language of thought to equip AI agents with a capacity for consistent, transparent, and verifiable reasoning worthy of trust.
Proof-Checker
The `proof-checker` aims to support the development of the metalogic and proof theory for the Logos with the LEAN proof assistant. After a semantic theory for a given language has been developed and tested with the `model-checker`, the corresponding proof theory and metatheory can be efficiently developed with the `proof-checker` package to establish soundness before attempting to complete the logic. In addition to providing a domain-specific language, the `proof-checker` will allow users to query the `model-checker` from within LEAN before attempting to write a derivation.
Although computational systems cannot search the space of all models, the `model-checker` is able to prove that there are no finite countermodels up to a user-specified level of complexity, providing evidence that the logical consequence in question holds in full generality. The `proof-checker` may then be used to help establish which logical consequences hold in full generality, in addition to proving metalogical theorems of interest.