NeuroLISP: High-level symbolic programming with attractor neural networks

Gregory P. Davis, Garrett E. Katz, Rodolphe J. Gentili, James A. Reggia

Research output: Contribution to journal › Article › peer-review



Despite significant improvements in contemporary machine learning, symbolic methods currently outperform artificial neural networks on tasks that involve compositional reasoning, such as goal-directed planning and logical inference. This illustrates a computational explanatory gap between cognitive and neurocomputational algorithms that obscures the neurobiological mechanisms underlying cognition and impedes progress toward human-level artificial intelligence. Because of the strong relationship between cognition and working memory control, we suggest that the cognitive abilities of contemporary neural networks are limited by biologically implausible working memory systems that rely on persistent activity maintenance and/or temporal nonlocality. Here we present NeuroLISP, an attractor neural network that can represent and execute programs written in the LISP programming language. Unlike previous approaches to high-level programming with neural networks, NeuroLISP features a temporally local working memory based on itinerant attractor dynamics, top-down gating, and fast associative learning, and implements several high-level programming constructs such as compositional data structures, scoped variable binding, and the ability to manipulate and execute programmatic expressions in working memory (i.e., programs can be treated as data). Our computational experiments demonstrate the correctness of the NeuroLISP interpreter, and show that it can learn non-trivial programs that manipulate complex derived data structures (multiway trees), perform compositional string manipulation operations (PCFG SET task), and implement high-level symbolic AI algorithms (first-order unification). We conclude that NeuroLISP is an effective neurocognitive controller that can replace the symbolic components of hybrid models, and serves as a proof of concept for further development of high-level symbolic programming in neural networks.
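To make the abstract's reference to first-order unification concrete, here is a minimal sketch of the classical algorithm in Python. This is an illustration of the symbolic task only, not the paper's neural implementation: the term representation (strings prefixed with `?` for variables, tuples for compound terms) and the function names `walk` and `unify` are assumptions for this example, and the occurs check is omitted for brevity.

```python
def walk(term, subst):
    # Follow variable bindings until we reach a non-variable
    # or an unbound variable.
    while isinstance(term, str) and term.startswith('?') and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst=None):
    """Return a substitution (dict) unifying a and b, or None on failure.

    Variables are strings starting with '?'; compound terms are tuples
    of the form (functor, arg1, ..., argN). No occurs check is performed.
    """
    if subst is None:
        subst = {}
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, str) and a.startswith('?'):
        return {**subst, a: b}
    if isinstance(b, str) and b.startswith('?'):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        # Unify argument lists pairwise, threading the substitution through.
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None
```

For example, `unify(('f', '?x', 'b'), ('f', 'a', '?y'))` returns `{'?x': 'a', '?y': 'b'}`, while terms with clashing functors such as `('f', 'a')` and `('g', 'a')` fail with `None`. In the paper, programs computing this kind of symbolic algorithm are learned and executed by the attractor network itself rather than by a conventional interpreter.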

Original language: English (US)
Pages (from-to): 200-219
Number of pages: 20
Journal: Neural Networks
State: Published - Feb 2022


Keywords

  • Associative learning
  • Cognitive control
  • Compositionality
  • Programmable neural networks
  • Symbolic processing
  • Working memory

ASJC Scopus subject areas

  • Cognitive Neuroscience
  • Artificial Intelligence


