NARS
Workshop

August 10, 2025

Reykjavík University, Iceland

At AGI 2025

As one of the most sophisticated models of AGI, NARS (Non-Axiomatic Reasoning System) has attracted much interest from researchers, AI professionals, and students worldwide. The goal of the NARS project is to build thinking machines: it attempts to uniformly explain and reproduce many cognitive faculties, including reasoning, learning, and planning, so as to provide a unified theory, model, and system for AI.


Built on top of its open-source implementation (OpenNARS), several NARS-based and NARS-related projects have been undertaken and are currently under development. To encourage communication and collaboration among researchers, and to introduce these projects to the AGI community, a workshop will be held during AGI-25, as an integrated part of the conference, to discuss ongoing AGI research directly or indirectly related to NARS.


The half-day workshop (on-site and virtual) will be held during AGI-25 and is open to all conference attendees. It consists of presentations on specific topics and a general discussion of future research.


Submissions to the workshop are welcome. The presentation title and abstract should be emailed to pei.wang@temple.edu before July 31, 2025. Full papers are welcome, though not required. Accepted presentations and papers will be maintained on the workshop website, and future publication opportunities will be explored.



Key Information:

When:
August 10, 2025 (on-site and online), 2:00pm - 5:00pm (GMT)

Where:
AGI-2025
Reykjavík University
Menntavegur 1
101-102 Reykjavík
Iceland

Watch Online:
YouTube Broadcast

Contact:

Pei Wang (Workshop Organizer)
pei.wang@temple.edu

Bowen Xu (Workshop Leader)
bowen.xu@temple.edu

Christian Hahm (Website)
christian.hahm@temple.edu

TALKS

Pei Wang

Professor
Temple University

Talks:

  • Self-control in NARS

NARS uses mental operators to drive its internal sensors and actuators, enabling self-perception and letting the system influence its own cognitive processes. The execution and consequences of these operations form the system's internal experience, which mirrors certain functions of the human mind, including emotions and feelings. Self-control in NARS arises when specific steps in its working cycle are treated as deliberate operations, guided by this internal experience. Consequently, NARS's reasoning operates as a dual process, integrating both automatic (subconscious) and controlled (conscious) aspects.
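
As a rough, hypothetical illustration of this dual-process idea (not actual OpenNARS control code), the Python sketch below lets each working-cycle step either run automatically or, when internal appraisal flags it as salient, be treated as a deliberate operation whose execution is recorded as internal experience. All names and the salience heuristic are assumptions for illustration only.

```python
import random

class MiniDualProcessReasoner:
    """Toy model of NARS-style dual-process control (illustrative only)."""

    def __init__(self):
        self.internal_experience = []  # records of deliberately executed steps

    def select_task(self):
        # Stand-in for one step of the working cycle.
        return "task-" + str(random.randint(0, 9))

    def cycle(self, deliberate_threshold=0.7):
        # Stand-in appraisal: a real system would derive salience from
        # its internal experience (e.g., emotion-like evaluations).
        salience = random.random()
        task = self.select_task()
        if salience >= deliberate_threshold:
            # Controlled (conscious) path: the step is treated as a mental
            # operation, so its execution itself becomes internal experience.
            self.internal_experience.append(("select_task", task, salience))
        # Otherwise the step ran automatically (subconscious): no trace kept.
        return task

reasoner = MiniDualProcessReasoner()
for _ in range(10):
    reasoner.cycle()
print(reasoner.internal_experience)
```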


Christian Hahm

Ph.D. student
Temple University

Talks:

  • Automatic optimization of NARS using Evolutionary Algorithms

In this talk, we will discuss the use of evolutionary algorithms to evolve the meta level and the object level of NARS. Automatic optimization can free NARS designers from the labor of manually tweaking meta-level parameters for each specific problem domain and experiment, giving experimenters more time to focus on the high-level problems of NARS development while high-performing NARS systems are found automatically for their use cases. We will review a concrete genetic encoding developed for NARS, including experimental results in which NARS evolved to better control robots for certain tasks.
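
As a hedged sketch of the general approach (not the encoding presented in the talk), the following Python snippet evolves a vector of meta-level parameters with a simple genetic algorithm. The parameter names and the fitness function are placeholders; in practice, fitness would come from running a NARS instance on a benchmark such as a robot-control task.

```python
import random

# Hypothetical meta-level parameters; a real NARS implementation exposes
# its own set (memory capacities, forgetting rates, etc.).
PARAM_BOUNDS = {
    "concept_capacity":   (10.0, 1000.0),
    "forgetting_rate":    (0.0, 1.0),
    "evidential_horizon": (1.0, 10.0),
}

def random_genome():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_BOUNDS.items()}

def mutate(genome, rate=0.2):
    child = dict(genome)
    for k, (lo, hi) in PARAM_BOUNDS.items():
        if random.random() < rate:
            child[k] = random.uniform(lo, hi)
    return child

def fitness(genome):
    # Placeholder objective: replace with the score of a NARS run
    # configured with these parameters on the target task.
    return -abs(genome["forgetting_rate"] - 0.3)

def evolve(pop_size=20, generations=50):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop_size // 4]          # keep the best quarter
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - len(elite))]
    return max(population, key=fitness)

print(evolve())
```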


Tangrui Li

Ph.D. student
Temple University

Talks:

  • Integrating NARS and LLMs

Large Language Models (LLMs) have attracted significant attention due to their strong performance. However, their outputs often raise skepticism, as their internal logic is opaque: even when an output is correct, the reasoning path behind it may be flawed. To build trust, research increasingly aims to integrate logic to justify LLM outputs. Existing approaches, however, have limitations: 1) Self-generated chain-of-thought explanations: prompting LLMs to emit logical steps along with answers is appealing, but the explanation is frequently not causally connected to the result. 2) Post-hoc logical translation and analysis: mapping the model's output into mathematical logic and verifying it improves rigor, but often incurs prohibitive computational complexity and struggles with the vague or underspecified nature of natural language. Moreover, purely symbolic logic cannot account for the intuitive, tacit reasoning that resists full verbalization. To overcome these shortcomings, this presentation proposes two novel attempts, based on Non-Axiomatic Logic, that aim to make logical structure intrinsic to LLM outputs rather than added after the fact.

Method 1: Neural-symbolic backbone using Non-Axiomatic Logic
Inspired by the theory of Non-Axiomatic Logic (NAL) and the Non-Axiomatic Reasoning System (NARS), in which meaning emerges from relations among concepts rather than from fixed axiomatic definitions, we build a neuro-symbolic architecture that combines a Mixture-of-Experts (MoE) backbone model with NAL as a formal substrate, ensuring the logical consistency of outputs. A downside, however, is that while the LLM's outputs are logically consistent, humans still cannot easily interpret the hidden formal structure.
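
For concreteness, NAL attaches a truth value (frequency, confidence) to each statement and derives new truth values with fixed functions; the sketch below implements the standard deduction and revision truth functions from the NAL literature. It is background illustration only, not code from the project described above.

```python
K = 1.0  # evidential horizon, a NAL "personality" parameter

def deduction(f1, c1, f2, c2):
    """From (S -> M, <f1, c1>) and (M -> P, <f2, c2>), derive (S -> P)."""
    return f1 * f2, f1 * f2 * c1 * c2

def revision(f1, c1, f2, c2):
    """Merge two truth values for the same statement, given independent evidence."""
    w1 = K * c1 / (1.0 - c1)  # confidence -> amount of evidence
    w2 = K * c2 / (1.0 - c2)
    w = w1 + w2
    return (w1 * f1 + w2 * f2) / w, w / (w + K)

# Chain two fairly reliable rules, then revise with fresh evidence.
f, c = deduction(0.9, 0.9, 0.8, 0.9)   # -> (0.72, ~0.58)
print(revision(f, c, 0.9, 0.5))
```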

Method 2: Learning human-interpretable control via exploration
To address the interpretability issue raised in Method 1, the second approach emphasizes NAL's self-control and perception, using sensorimotor channels to interact with external environments. In a simplified setting, the system actively explores interactions and observes the outcomes generated by the LLM (e.g., the LLM controls a robot, the command is "go left", and the corresponding movement can be observed in the environment). Learning from these observations, the NARS-based framework acquires a grounding that enables comprehension of natural-language meaning in context. Over time, more complex linguistic inputs can be understood in terms of previously learned simpler sentences, effectively translating new language into logical structure via incremental learning. We aim to develop a system that, based on large-scale observation of LLM behavior and its consequences, enables structurally logical analysis of LLM outputs while progressively becoming human-interpretable. Such a system could also benefit multi-LLM collaboration by providing a shared logical background.
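
A minimal sketch of the exploration loop described above, with an entirely hypothetical environment and command vocabulary; it only illustrates how observed (command, outcome) pairs could be distilled into grounded associations, with an observed frequency standing in for a NAL-style truth value.

```python
import random
from collections import Counter, defaultdict

COMMANDS = ["go left", "go right"]  # hypothetical command vocabulary

def llm_act(command):
    """Stand-in for the LLM-driven actuator: returns the observed movement,
    with occasional noise."""
    intended = {"go left": "moved_left", "go right": "moved_right"}[command]
    return intended if random.random() > 0.1 else "no_motion"

# Exploration: issue commands and record the observed outcomes.
evidence = defaultdict(Counter)
for _ in range(200):
    cmd = random.choice(COMMANDS)
    evidence[cmd][llm_act(cmd)] += 1

# Grounding: keep the dominant outcome per command as its learned meaning.
grounding = {}
for cmd, outcomes in evidence.items():
    outcome, count = outcomes.most_common(1)[0]
    grounding[cmd] = (outcome, count / sum(outcomes.values()))

print(grounding)  # e.g. {'go left': ('moved_left', 0.92), ...}
```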


Bowen Xu

Ph.D. student
Temple University

Talks:

  • Constructive notions of space and object

In Artificial Intelligence (AI) research, space is often implicitly assumed as a prerequisite for perception; specifically, objective three-dimensional space is treated as a frame within which objects reside, and perception is conceived as the process of constructing internal representations that mirror external objects. While this assumption has proven effective in certain applications, we argue that it is inadequate for Artificial General Intelligence (AGI). By exploring the essence of the sense of space and of objects, we propose a normative theory that aims to enable AGI systems to understand space and objects without assuming a space of predetermined dimensionality or external objects. Several illustrative examples are provided to support the theory. The question of how to build a complete perception system grounded in this theory remains open, with the primary challenge being the design of efficient learning mechanisms. Nonetheless, the present work may offer a theoretical foundation for the development of embodied AGI systems capable of acquiring an understanding of space and objects.


David Ireland

Senior Research Scientist
Australian E-Health Research Center, CSIRO, Australia

Talks:

  • A Common LISP implementation of NARS

Presentation

Video


Jeffrey Freeman

Co-founder & CTO
CleverThis/CleverLibre

Talks:

  • DEFT - Domain-agnostic Evaluation, Freeman's Theorem

We present CleverAgents, a revolutionary framework that integrates reactive stream processing, stateful graph workflows, and automated knowledge graph extraction to create the first semantically-aware multi-agent LLM orchestration system. The framework's core innovation is its integration with CleverSwarm, a novel text-to-knowledge-graph conversion engine that decomposes all textual communications (both user prompts and LLM responses) into structured, queryable knowledge representations, enabling unprecedented routing fidelity while dramatically reducing computational overhead.

Unlike traditional agent frameworks that rely on expensive LLM-based routing decisions, CleverAgents leverages CleverSwarm's knowledge graph decomposition to perform semantic message routing through graph traversal and pattern matching operations. When text enters the system via CleverAgents' unified "routes" architecture, CleverSwarm automatically extracts entities, relationships, and semantic structures, creating a comprehensive knowledge graph that preserves all information while enabling efficient programmatic analysis. This semantic decomposition allows the framework to route messages based on conceptual content, relationship patterns, and knowledge domain overlap rather than computationally expensive natural language processing.

The system's Dynamic Stream-Graph Bridge mechanism operates on both reactive message flows and semantic knowledge structures, automatically upgrading from lightweight stream processing to stateful graph execution when knowledge complexity thresholds are exceeded. Our Knowledge-Aware Route Bridge monitors semantic density, entity relationship complexity, and conceptual depth to determine the optimal processing paradigm. Messages containing simple factual queries remain in efficient RxPy streams, while complex multi-domain reasoning tasks are elevated to LangGraph workflows with full context preservation.

CleverSwarm's knowledge graph integration enables intelligent agent specialization, where agents are matched to tasks based on the semantic overlap between their knowledge domains and incoming message content. The framework maintains persistent knowledge graphs for conversation context, enabling agents to access structured representations of previous interactions without expensive context window management. This approach reduces LLM API calls by up to 80% for routing and context retrieval operations while improving response accuracy through precise semantic understanding.
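
As a rough, hypothetical sketch of this routing idea (none of CleverAgents' actual APIs are assumed), the snippet below matches a message to the agent whose advertised knowledge domain overlaps most with the entities extracted from the message, falling back to a default agent when no overlap is strong enough:

```python
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical registry: each agent advertises its knowledge domain
# as a set of entities/concepts it specializes in.
AGENTS = {
    "med_agent":  {"patient", "dosage", "diagnosis", "trial"},
    "code_agent": {"function", "bug", "stack_trace", "refactor"},
    "geo_agent":  {"city", "border", "population", "climate"},
}

def extract_entities(text):
    # Stand-in for a text-to-knowledge-graph engine: a lowercase token
    # set; a real engine would emit typed entities and relations.
    return set(text.lower().split())

def route(message, threshold=0.05):
    """Route by conceptual overlap instead of an LLM-based routing call."""
    entities = extract_entities(message)
    best, score = max(((name, jaccard(entities, domain))
                       for name, domain in AGENTS.items()),
                      key=lambda pair: pair[1])
    return best if score >= threshold else "general_agent"

print(route("please check the stack_trace for the bug in this function"))
```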

The framework's Enhanced Template Registry extends beyond traditional parameterization to include knowledge graph schema templates, enabling systematic knowledge structure reuse across different deployment contexts. Behavioral-driven development testing validates semantic routing accuracy and knowledge preservation across diverse agent orchestration patterns.

Our evaluation demonstrates significant computational advantages in scenarios requiring both reactive responsiveness and deep semantic understanding, including: knowledge-driven conversational AI with persistent semantic memory, multi-domain research synthesis with concept-aware routing, and adaptive agent networks that scale processing complexity based on semantic rather than syntactic characteristics. The integration of structured knowledge representation with reactive processing creates a new paradigm for agent frameworks that optimizes both computational efficiency and semantic fidelity.

CleverAgents with CleverSwarm integration represents a fundamental advancement toward semantically-intelligent agent orchestration, providing the first framework to unify reactive processing, stateful workflows, and automated knowledge extraction in a single, computationally efficient system that scales with semantic complexity rather than text volume.

Schedule

All times are in Greenwich Mean Time (GMT).

Time            Event
2:00pm - 5:00pm NARS Workshop

Speakers & Topics:

Pei Wang: Self-control in NARS
Christian Hahm: Automatic optimization of NARS using Evolutionary Algorithms (theory, experiments, presentation)
Tangrui Li: Integrating NARS and LLMs
Bowen Xu: Constructive notions of space and object
David Ireland: A Common LISP implementation of NARS (presentation, video, code)
Jeffrey Freeman: DEFT - Domain-agnostic Evaluation, Freeman's Theorem


Organizing Committee



Dr. Pei Wang

Temple University, U.S.A.

Bowen Xu

Temple University, U.S.A.

Christian Hahm

Temple University, U.S.A.

Jeffrey Freeman

CleverLibre