**NARS in a nutshell** 🥜 Tangrui (Tory) Li - tuo90515@temple.edu, tangrui.li@temple.edu

------

To learn more about NARS, I first need you to **bear with some imperfections**. In propositional logic, if we have:

$$ A, \quad A \rightarrow B, \quad A \rightarrow C $$

we can derive \( B, C \). Though this takes almost no time, it requires "two steps": 1) reasoning on \( A, A \rightarrow B \), and 2) reasoning on \( A, A \rightarrow C \). NARS does it "step by step". You might wonder why we need to take a break between such simple steps. The point is that not all tasks are simple. Imagine that while you are working you hear the fire alarm: it is an emergency, and you need to quit your current task and handle it. So, to stay ready to process emergencies, NARS separates a complicated task into several "one-shot" small tasks, each of which can be processed in one system cycle.

But this raises another problem: if a task is separated into small pieces and other tasks are allowed to interrupt it, how do we make the reasoning continuous (progressive)? It is possible, and that is why we need the control mechanism. Classical mathematical logic has two parts: 1) grammar, and 2) rules and truth tables; we add a third part, the control mechanism, to process all ongoing tasks in a **reasonable** order. We are still working on this problem, and there is no fully reliable solution so far. The overall method is to assign each task a reasonable priority and to adjust it according to context and long-term history, thereby controlling the order of reasoning.

Then I would like to explain why NARS is non-axiomatic. When talking about axioms, we mean "something true by default". So it is not helpful to ask which axiom is true and which is false, since the truth of all axioms is **manually** defined. The only meaningful question is whether the *use* of some axiom is correct or incorrect.
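The "one small task per cycle, ordered by priority" idea can be sketched in plain Python. This is a toy illustration, not actual PyNARS code; the task names and priority numbers are made up:

```python
# Toy sketch of step-by-step reasoning under a priority-based control
# mechanism: each inference is one small task, and a high-priority task
# (the fire alarm) can cut in between steps.
import heapq

queue = []  # (negated priority, task) pairs; heapq pops the smallest first

def add_task(priority, task):
    heapq.heappush(queue, (-priority, task))

# The derivation of B and C is split into two one-shot tasks.
add_task(0.5, "reason on A, A -> B")
add_task(0.5, "reason on A, A -> C")

processed = []
while queue:
    _, task = heapq.heappop(queue)
    processed.append(task)  # one system cycle processes one small task
    if task == "reason on A, A -> B":
        # An emergency arrives mid-derivation and jumps the queue.
        add_task(0.9, "handle fire alarm")

print(processed)
# ['reason on A, A -> B', 'handle fire alarm', 'reason on A, A -> C']
```

Because the derivation of \( C \) is its own task rather than a continuation of one monolithic inference, the alarm can be handled between the two steps without losing the pending work.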
For example, "using Newton's mechanics on large objects is **correct**", and "using Newton's mechanics on minuscule objects is **incorrect**". This gives us a basic binary notion of correctness and incorrectness: one sentence may have positive and negative (binary) justifications. For example, "apple 🍎 is red" is one sentence. If we find an apple that is really red (call it \(A_a\)), this is a piece of positive evidence. And if we find an apple that is not red (call it \(A_b\)), this is a piece of negative evidence. If we put all these sentences together, we may find that "apple is red" is "derived" from "\(A_a\) is red \( \land \) \( A_a \) is apple", etc. Or, **if we have \( < A_a \rightarrow red >. \) and \( < A_a \rightarrow apple >. \), these pieces of evidence naturally support \( < apple \rightarrow red >. \)** (sentences like \(A_a\) provide positive evidence; sentences like \(A_b\) provide negative evidence). This is only one inference rule in NARS, but it at least shows that the truth-values in NARS are partly determined by the reasoning itself (by the distribution of evidence).

That is one reason why we choose term logic: the structure above is nothing but syllogism. It provides a **syntactical** way of reasoning while preserving evidence, so the system can also reason **semantically**.

By now, I think you understand the motivation of NARS and what we want. But you may have a new question: the reasoning ability of NARS, as a term-logic system, is widely questioned. Indeed, but we designed NAL's multi-layer structure (NAL-1 through NAL-9) to cover more complex cases such as sequences (in time or space), variables, quantifiers, etc. It is all about making NARS a blank system (it can have prior knowledge, if you want) in an open environment (possibly in communication with other agents or human beings), able to receive a constant stream of uncontrolled external information, respond appropriately, and accumulate external knowledge to improve itself.
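In NAL, this accumulated evidence is summarized into a two-component truth-value: a frequency (the proportion of positive evidence) and a confidence (how much total evidence backs it). A minimal Python sketch, assuming the standard NAL definitions \( f = w^+/w \) and \( c = w/(w+k) \) with the usual evidential horizon \( k = 1 \); the apple counts are made up:

```python
def truth_value(w_plus, w_minus, k=1):
    """Compute a NAL-style truth-value from evidence counts.

    w_plus:  amount of positive evidence (e.g. red apples observed)
    w_minus: amount of negative evidence (e.g. non-red apples observed)
    k:       evidential horizon constant (k = 1 is the common default)
    """
    w = w_plus + w_minus      # total evidence
    frequency = w_plus / w    # proportion of positive evidence
    confidence = w / (w + k)  # approaches 1 as evidence accumulates
    return frequency, confidence

# "<apple -> red>." after seeing 3 red apples and 1 non-red apple:
f, c = truth_value(3, 1)
print(f, c)  # 0.75 0.8
```

Note that no amount of evidence drives the confidence to exactly 1, which matches the non-axiomatic stance: no empirical sentence ever becomes an axiom.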
Here, we see that we only need to provide NARS with an interface that converts external information into Narsese; NARS then learns everything by itself, without being limited by the diversity of the external information or the length of the learning time. This is why we think NARS is a potential AGI. The above is a brief review of NARS; I know there is a lot not shown. But if you are interested, please let us know if you have any questions. 😊

------

Team website: https://cis.temple.edu/tagit/
Sample projects: https://cis.temple.edu/tagit/#projects
Latest version: https://github.com/MoonWalker1997/PyNARS
Stable version: https://github.com/opennars/opennars/releases/tag/v3.0.4