Learning in a Continuous-Valued Attractor Network

Baram Sosis, Garrett Katz, James Reggia

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

1 Scopus citation

Abstract

Learning a set of patterns in a content-addressable memory is an important aspect of many neurocomputational systems. Historically, this has often been done using Hebbian learning with attractor neural networks such as the standard discrete-valued Hopfield model. However, such systems are currently severely limited in terms of their memory capacity: as an increasing number of patterns are stored as fixed points, spurious attractors ('false memories') are increasingly created and compromise the network's functionality. Here we adopt a new method for identifying the fixed points (both stored and false memory patterns) learned by attractor networks in general, applying it to the special case of a continuous-valued analogue of the standard Hopfield model. We use computational experiments to show that this continuous-valued model functions as a content-addressable memory, characterizing its ability to learn training examples effectively and to store them at energy minima having substantial basins of attraction. We find that the new fixed point locator method not only identifies learned memories, but also many of the spurious attractors that occur. These results are a step towards systematically characterizing what is learned by attractor networks and may lead to more effective learning by allowing the use of techniques such as selective application of anti-Hebbian unlearning of spurious memories.
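To make the abstract's setting concrete, here is a minimal sketch of Hebbian learning in a continuous-valued Hopfield-style network: weights come from the standard outer-product rule, and recall iterates a tanh update until the state settles into an attractor. This is a generic illustration under assumed conventions (bipolar patterns, a `gain` parameter, synchronous updates), not the specific model or fixed-point-locator method of the paper.

```python
import numpy as np

def hebbian_weights(patterns):
    """Standard Hebbian outer-product rule over bipolar patterns;
    the diagonal is zeroed so units have no self-connections."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def run_to_attractor(W, x0, gain=5.0, steps=100):
    """Iterate the continuous-valued update x <- tanh(gain * W x);
    for symmetric W this typically converges to a fixed point."""
    x = x0.astype(float)
    for _ in range(steps):
        x = np.tanh(gain * (W @ x))
    return x

# Illustrative usage: store 3 random bipolar patterns in 64 units,
# corrupt one pattern slightly, and let the dynamics clean it up.
rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 64))
W = hebbian_weights(patterns)
probe = patterns[0].copy()
probe[:6] *= -1  # flip a few components of the stored pattern
recalled = run_to_attractor(W, probe)
```

With only a few stored patterns relative to the number of units, the recalled state lands in the basin of the stored pattern; as more patterns are added, spurious attractors of the kind the abstract describes begin to appear.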

Original language: English (US)
Title of host publication: Proceedings - 17th IEEE International Conference on Machine Learning and Applications, ICMLA 2018
Editors: M. Arif Wani, Moamar Sayed-Mouchaweh, Edwin Lughofer, Joao Gama, Mehmed Kantardzic
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 278-284
Number of pages: 7
ISBN (Electronic): 9781538668047
DOI: 10.1109/ICMLA.2018.00048
State: Published - Jan 15 2019
Event: 17th IEEE International Conference on Machine Learning and Applications, ICMLA 2018 - Orlando, United States
Duration: Dec 17, 2018 to Dec 20, 2018

Publication series

Name: Proceedings - 17th IEEE International Conference on Machine Learning and Applications, ICMLA 2018

Conference

Conference: 17th IEEE International Conference on Machine Learning and Applications, ICMLA 2018
Country: United States
City: Orlando
Period: 12/17/18 to 12/20/18

Keywords

  • Attractor neural networks
  • Directional fibers
  • Hebbian learning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Networks and Communications
  • Computer Science Applications
  • Computer Vision and Pattern Recognition
  • Safety, Risk, Reliability and Quality
  • Signal Processing
  • Decision Sciences (miscellaneous)


Cite this

Sosis, B., Katz, G., & Reggia, J. (2019). Learning in a Continuous-Valued Attractor Network. In M. A. Wani, M. Sayed-Mouchaweh, E. Lughofer, J. Gama, & M. Kantardzic (Eds.), Proceedings - 17th IEEE International Conference on Machine Learning and Applications, ICMLA 2018 (pp. 278-284). [8614073] (Proceedings - 17th IEEE International Conference on Machine Learning and Applications, ICMLA 2018). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICMLA.2018.00048