TY - JOUR
T1 - NeuroLISP
T2 - High-level symbolic programming with attractor neural networks
AU - Davis, Gregory P.
AU - Katz, Garrett E.
AU - Gentili, Rodolphe J.
AU - Reggia, James A.
N1 - Publisher Copyright:
© 2021 Elsevier Ltd
PY - 2022/2
Y1 - 2022/2
N2 - Despite significant improvements in contemporary machine learning, symbolic methods currently outperform artificial neural networks on tasks that involve compositional reasoning, such as goal-directed planning and logical inference. This illustrates a computational explanatory gap between cognitive and neurocomputational algorithms that obscures the neurobiological mechanisms underlying cognition and impedes progress toward human-level artificial intelligence. Because of the strong relationship between cognition and working memory control, we suggest that the cognitive abilities of contemporary neural networks are limited by biologically implausible working memory systems that rely on persistent activity maintenance and/or temporal nonlocality. Here we present NeuroLISP, an attractor neural network that can represent and execute programs written in the LISP programming language. Unlike previous approaches to high-level programming with neural networks, NeuroLISP features a temporally local working memory based on itinerant attractor dynamics, top-down gating, and fast associative learning, and implements several high-level programming constructs such as compositional data structures, scoped variable binding, and the ability to manipulate and execute programmatic expressions in working memory (i.e., programs can be treated as data). Our computational experiments demonstrate the correctness of the NeuroLISP interpreter, and show that it can learn non-trivial programs that manipulate complex derived data structures (multiway trees), perform compositional string manipulation operations (PCFG SET task), and implement high-level symbolic AI algorithms (first-order unification). We conclude that NeuroLISP is an effective neurocognitive controller that can replace the symbolic components of hybrid models, and serves as a proof of concept for further development of high-level symbolic programming in neural networks.
AB - Despite significant improvements in contemporary machine learning, symbolic methods currently outperform artificial neural networks on tasks that involve compositional reasoning, such as goal-directed planning and logical inference. This illustrates a computational explanatory gap between cognitive and neurocomputational algorithms that obscures the neurobiological mechanisms underlying cognition and impedes progress toward human-level artificial intelligence. Because of the strong relationship between cognition and working memory control, we suggest that the cognitive abilities of contemporary neural networks are limited by biologically implausible working memory systems that rely on persistent activity maintenance and/or temporal nonlocality. Here we present NeuroLISP, an attractor neural network that can represent and execute programs written in the LISP programming language. Unlike previous approaches to high-level programming with neural networks, NeuroLISP features a temporally local working memory based on itinerant attractor dynamics, top-down gating, and fast associative learning, and implements several high-level programming constructs such as compositional data structures, scoped variable binding, and the ability to manipulate and execute programmatic expressions in working memory (i.e., programs can be treated as data). Our computational experiments demonstrate the correctness of the NeuroLISP interpreter, and show that it can learn non-trivial programs that manipulate complex derived data structures (multiway trees), perform compositional string manipulation operations (PCFG SET task), and implement high-level symbolic AI algorithms (first-order unification). We conclude that NeuroLISP is an effective neurocognitive controller that can replace the symbolic components of hybrid models, and serves as a proof of concept for further development of high-level symbolic programming in neural networks.
KW - Associative learning
KW - Cognitive control
KW - Compositionality
KW - Programmable neural networks
KW - Symbolic processing
KW - Working memory
UR - http://www.scopus.com/inward/record.url?scp=85120749576&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85120749576&partnerID=8YFLogxK
U2 - 10.1016/j.neunet.2021.11.009
DO - 10.1016/j.neunet.2021.11.009
M3 - Article
C2 - 34894482
AN - SCOPUS:85120749576
SN - 0893-6080
VL - 146
SP - 200
EP - 219
JO - Neural Networks
JF - Neural Networks
ER -