Current methods of AI reasoning employ domain-specific heuristic strategies that remain inconsistent, unverifiable, and opaque. Any morally accountable agent worthy of our trust must be able to premeditate actions, estimate the risk of counterfactual alternatives, rank plans according to preferences and norms, enact chosen plans while monitoring execution, compare intended versus actual outcomes, and update preferences and methods of estimation. To conduct transparent planning of this kind, trustworthy AI agents must be able to reason with circumstantial modal, tense, and counterfactual conditional operators, as well as causal, epistemic, and deontic operators for reasoning under uncertainty about imperatives, preferences, and moral responsibility in coordination with human and other AI agents.
In order to equip AI agents with a capacity for self-reflection and high-level cognitive reasoning, I developed the `model-checker` package to implement a programmatic semantics for the Logos, a unified formal language of thought. Further details about the `model-checker` are provided below. Building on the current implementation of the `model-checker`, the next phases of this project are summarized below:
- Extend the Logos to include causal, epistemic, and deontic operators, implementing semantic theories for each with the `model-checker`.
- Build the `proof-checker` package in LEAN to develop the metalogic and proof theory for the semantic theories implemented in the `model-checker`.
These projects seek to provide general-purpose tooling, methodology, and detailed documentation in order to make these resources accessible and to support open-source collaboration. Once these initial phases are well under way, the project will proceed to deploy these resources at scale, building AI agents capable of meta-cognition as well as deductive, inductive, and abductive reasoning that integrates with the existing domain-specific heuristic methods of reasoning appropriate to each context of application.
Model-Checker
In order to extend the methods of standard model-theoretic semantics, the `model-checker` draws on Z3's Python API to provide a general-purpose programmatic methodology for implementing modular semantic theories for formal languages. By streamlining the otherwise grueling process of developing and systematically testing novel semantic theories by hand in order to identify their logics, the `model-checker` makes it possible to rapidly prototype and combine semantic theories. These methods aim to support the growth and maturity of formal semantics as a discipline by easing the process of investigating increasingly complex theories for unified formal languages with an extensible collection of operators.
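To give a sense of the underlying methodology, the sketch below uses Z3's Python API directly. It does not reproduce the `model-checker`'s own interface; the frame, operators, and inference are placeholder assumptions chosen to show how a semantic clause can be encoded and a countermodel searched for.

```python
# A minimal sketch using Z3 directly (not the model-checker's API): encode a
# Kripke-style clause for a necessity operator over a small finite frame and
# search for a countermodel to a candidate inference.
from z3 import And, Bool, Implies, Not, Or, Solver, sat

N = 3  # number of worlds in the candidate countermodel
worlds = range(N)

# Accessibility relation and atomic valuations as Z3 Booleans.
R = [[Bool(f"R_{i}_{j}") for j in worlds] for i in worlds]
p = [Bool(f"p_{w}") for w in worlds]
q = [Bool(f"q_{w}") for w in worlds]

def box(phi, w):
    """Semantic clause: 'necessarily phi' is true at w iff phi is true at every accessible world."""
    return And([Implies(R[w][v], phi(v)) for v in worlds])

s = Solver()
# Does Box(p or q) entail (Box p or Box q)? Assert the premise and the negated
# conclusion at a designated world, then look for a satisfying model.
s.add(box(lambda v: Or(p[v], q[v]), 0))
s.add(Not(Or(box(lambda v: p[v], 0), box(lambda v: q[v], 0))))

if s.check() == sat:
    print("Countermodel found:", s.model())
else:
    print("No countermodel with", N, "worlds.")
```

Because each operator's clause is just a function over Z3 terms, clauses for further operators can be added or swapped without rewriting the rest of the theory, which is what makes the semantic theories modular.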
By providing well-documented, open-source tooling for developing and sharing novel semantic theories, this project makes a high-performance methodology accessible for decentralized collaboration. The `TheoryLib` is a library of semantic theories that can be explored and modified, and to which users can contribute novel semantic theories to be shared publicly.
Although computational resources are commonplace, the ability to use them to develop novel semantic theories and explore their implications remains limited. For instance, Prover9 and Mace are restricted to first-order and equational statements. For the purposes of developing new semantic theories and studying their corresponding logics, however, it is often useful to have greater flexibility in the semantic primitives used to define the models of the language. More information about the methodology for implementing a semantics can be found in the GitHub repository.
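As an illustration of this flexibility, the sketch below (again using Z3 directly, with an encoding chosen for illustration rather than taken from the package's internals) treats states as fixed-width bitvectors, defines fusion as bitwise OR and parthood in terms of fusion, and verifies a simple mereological principle by showing that no counterexample exists.

```python
# A sketch of a non-first-order style of semantic primitive: states as bitvectors,
# with fusion and parthood defined directly on the state space.
from z3 import BitVec, Not, Solver, unsat

N = 4  # width of the state space: 2**N possible states
x, y = BitVec("x", N), BitVec("y", N)

def fusion(a, b):
    """The fusion of two states combines their parts (bitwise OR)."""
    return a | b

def is_part_of(a, b):
    """A state a is part of b just in case fusing a with b yields b."""
    return fusion(a, b) == b

s = Solver()
# Claim: every state is part of its fusion with any other state.
# Assert a counterexample; unsat means the claim holds over this state space.
s.add(Not(is_part_of(x, fusion(x, y))))
result = s.check()
print("claim holds over all states" if result == unsat else f"counterexample: {s.model()}")
```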
In addition to a general-purpose methodology for rapidly prototyping modular semantic theories, the `model-checker` includes a unified semantic theory for the Logos, an extensible formal language of thought with which to equip AI agents with a capacity for consistent, transparent, and verifiable reasoning. The Logos aims to include a wide variety of operators, beginning with those indicated above, and is designed to be extended as research continues.
Proof-Checker
The `proof-checker` is a new project that aims to support the development of the metalogic and proof theory for the Logos with the LEAN proof assistant. After developing and testing a semantic theory for a given language with the `model-checker`, the corresponding proof theory and metatheory can then be efficiently developed with the `proof-checker` package in order to establish soundness before attempting to complete the logic. In addition to providing a domain-specific language (DSL), the `proof-checker` will allow users to query the `model-checker` from within LEAN before attempting to write a derivation.
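To give a sense of what establishing soundness in LEAN might look like, here is a minimal, illustrative Lean 4 sketch; the formula syntax, the single derivation rule, and all names are placeholders rather than the `proof-checker`'s actual DSL.

```lean
-- Illustrative only: a toy propositional syntax, a Boolean semantics, a single
-- derivation rule, and the corresponding soundness statement and proof.

inductive Form where
  | atom : Nat → Form
  | neg  : Form → Form
  | conj : Form → Form → Form

-- A model assigns truth values to atoms.
abbrev Model := Nat → Bool

def eval (M : Model) : Form → Bool
  | .atom n   => M n
  | .neg φ    => !(eval M φ)
  | .conj φ ψ => eval M φ && eval M ψ

-- A toy derivability relation with a single rule: left conjunction elimination.
inductive Deriv : Form → Form → Prop where
  | andElimL (φ ψ : Form) : Deriv (.conj φ ψ) φ

-- Soundness: whatever is derivable from φ is true in every model of φ.
theorem soundness {φ ψ : Form} (h : Deriv φ ψ) :
    ∀ M : Model, eval M φ = true → eval M ψ = true := by
  cases h with
  | andElimL a b =>
    intro M hM
    simp [eval] at hM
    exact hM.1
```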
Although computational systems cannot search the space of all models (typically a proper class), the `model-checker` is able to prove that there are no finite countermodels up to a user-specified level of complexity, providing evidence that the logical consequence in question holds in full generality. The `proof-checker` may then be used to help establish which logical consequences hold in full generality, in addition to proving metalogical theorems of interest.
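The sketch below illustrates the idea of bounded countermodel search with Z3 (the encoding and the bound are assumptions for illustration, not the `model-checker`'s interface): the number of worlds is increased up to a user-specified bound, and the absence of a countermodel at every size provides defeasible evidence that the inference is valid in general.

```python
# Bounded countermodel search: increase the size of the candidate models up to a
# user-specified bound and report whether any finite countermodel is found.
from z3 import And, Bool, Implies, Not, Solver, sat

def countermodel_exists(n):
    """Search for an n-world Kripke countermodel to: Box(p -> q) |= Box p -> Box q."""
    R = [[Bool(f"R_{i}_{j}") for j in range(n)] for i in range(n)]
    p = [Bool(f"p_{w}") for w in range(n)]
    q = [Bool(f"q_{w}") for w in range(n)]

    def box(phi, w):
        return And([Implies(R[w][v], phi(v)) for v in range(n)])

    s = Solver()
    s.add(box(lambda v: Implies(p[v], q[v]), 0))                         # premise at world 0
    s.add(Not(Implies(box(lambda v: p[v], 0), box(lambda v: q[v], 0))))  # conclusion fails
    return s.check() == sat

MAX_N = 5  # user-specified level of complexity
if any(countermodel_exists(n) for n in range(1, MAX_N + 1)):
    print("Invalid: a finite countermodel exists.")
else:
    print(f"No countermodel with up to {MAX_N} worlds: evidence of validity.")
```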