TY - GEN
T1 - Learning in a Continuous-Valued Attractor Network
AU - Sosis, Baram
AU - Katz, Garrett
AU - Reggia, James
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/7/2
AB - Learning a set of patterns in a content-addressable memory is an important aspect of many neurocomputational systems. Historically, this has often been done using Hebbian learning with attractor neural networks such as the standard discrete-valued Hopfield model. However, such systems are severely limited in their memory capacity: as an increasing number of patterns are stored as fixed points, spurious attractors ('false memories') increasingly arise and compromise the network's functionality. Here we adopt a new method for identifying the fixed points (both stored and false-memory patterns) learned by attractor networks in general, applying it to the special case of a continuous-valued analogue of the standard Hopfield model. We use computational experiments to show that this continuous-valued model functions as a content-addressable memory, characterizing its ability to learn training examples effectively and to store them at energy minima having substantial basins of attraction. We find that the new fixed-point locator method identifies not only learned memories but also many of the spurious attractors that occur. These results are a step towards systematically characterizing what attractor networks learn, and may enable more effective learning through techniques such as the selective anti-Hebbian unlearning of spurious memories.
KW - Attractor neural networks
KW - Directional fibers
KW - Hebbian learning
UR - http://www.scopus.com/inward/record.url?scp=85062244517&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85062244517&partnerID=8YFLogxK
DO - 10.1109/ICMLA.2018.00048
M3 - Conference contribution
AN - SCOPUS:85062244517
T3 - Proceedings - 17th IEEE International Conference on Machine Learning and Applications, ICMLA 2018
SP - 278
EP - 284
BT - Proceedings - 17th IEEE International Conference on Machine Learning and Applications, ICMLA 2018
A2 - Wani, M. Arif
A2 - Kantardzic, Mehmed
A2 - Sayed-Mouchaweh, Moamar
A2 - Gama, Joao
A2 - Lughofer, Edwin
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 17th IEEE International Conference on Machine Learning and Applications, ICMLA 2018
Y2 - 17 December 2018 through 20 December 2018
ER -