TY - JOUR
T1 - Learning by non-interfering feedback chemical signaling in physical networks
AU - Anisetti, Vidyesh Rao
AU - Scellier, B.
AU - Schwarz, J. M.
N1 - Publisher Copyright:
© 2023 authors. Published by the American Physical Society. Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
PY - 2023/4
Y1 - 2023/4
N2 - Both non-neural and neural biological systems can learn. So rather than focusing on purely brain-like learning, efforts are underway to study learning in physical systems. Such efforts include equilibrium propagation (EP) and coupled learning (CL), which require storage of two different states - the free state and the perturbed state - during the learning process to retain information about gradients. Here, we propose a learning algorithm rooted in chemical signaling that does not require storage of two different states. Rather, the output error information is encoded in a chemical signal that diffuses into the network in the same way as the activation/feedforward signal. The steady-state feedback chemical concentration, along with the activation signal, stores the required gradient information locally. We apply our algorithm to a physical, linear flow network and test it on the Iris data set, achieving 93% accuracy. We also prove that our algorithm performs gradient descent. Finally, in addition to comparing our algorithm directly with EP and CL, we address the biological plausibility of the algorithm.
AB - Both non-neural and neural biological systems can learn. So rather than focusing on purely brain-like learning, efforts are underway to study learning in physical systems. Such efforts include equilibrium propagation (EP) and coupled learning (CL), which require storage of two different states - the free state and the perturbed state - during the learning process to retain information about gradients. Here, we propose a learning algorithm rooted in chemical signaling that does not require storage of two different states. Rather, the output error information is encoded in a chemical signal that diffuses into the network in the same way as the activation/feedforward signal. The steady-state feedback chemical concentration, along with the activation signal, stores the required gradient information locally. We apply our algorithm to a physical, linear flow network and test it on the Iris data set, achieving 93% accuracy. We also prove that our algorithm performs gradient descent. Finally, in addition to comparing our algorithm directly with EP and CL, we address the biological plausibility of the algorithm.
UR - http://www.scopus.com/inward/record.url?scp=85153477816&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85153477816&partnerID=8YFLogxK
U2 - 10.1103/PhysRevResearch.5.023024
DO - 10.1103/PhysRevResearch.5.023024
M3 - Article
AN - SCOPUS:85153477816
SN - 2643-1564
VL - 5
JO - Physical Review Research
JF - Physical Review Research
IS - 2
M1 - 023024
ER -