Learning in a Continuous-Valued Attractor Network

Baram Sosis, Garrett Katz, James Reggia

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Learning a set of patterns in a content-addressable memory is an important aspect of many neurocomputational systems. Historically, this has often been done using Hebbian learning with attractor neural networks such as the standard discrete-valued Hopfield model. However, such systems are currently severely limited in terms of their memory capacity: as an increasing number of patterns are stored as fixed points, spurious attractors ('false memories') are increasingly created and compromise the network's functionality. Here we adopt a new method for identifying the fixed points (both stored and false memory patterns) learned by attractor networks in general, applying it to the special case of a continuous-valued analogue of the standard Hopfield model. We use computational experiments to show that this continuous-valued model functions as a content-addressable memory, characterizing its ability to learn training examples effectively and to store them at energy minima having substantial basins of attraction. We find that the new fixed point locator method not only identifies learned memories, but also many of the spurious attractors that occur. These results are a step towards systematically characterizing what is learned by attractor networks and may lead to more effective learning by allowing the use of techniques such as selective application of anti-Hebbian unlearning of spurious memories.
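
The abstract's core setup, Hebbian storage of patterns as attracting fixed points of a continuous-valued Hopfield-style network, can be illustrated concretely. The sketch below is an assumption-laden toy, not the authors' implementation: it uses the standard outer-product Hebbian rule and tanh dynamics, the network size, gain, and cue-noise level are illustrative, and it checks fixed points by plain iteration rather than by the paper's directional-fiber locator.

import numpy as np

# Toy continuous-valued Hopfield-style network (illustrative parameters).
rng = np.random.default_rng(0)
n, num_patterns = 64, 4

# Patterns to store, with +/-1 entries as in the discrete Hopfield model.
patterns = rng.choice([-1.0, 1.0], size=(num_patterns, n))

# Hebbian (outer-product) learning; zero the diagonal so units have no
# self-connections.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)

def update(v, gain=10.0):
    # One synchronous step of the continuous dynamics: v <- tanh(gain * W v).
    return np.tanh(gain * (W @ v))

def settle(v, max_steps=500, tol=1e-8):
    # Iterate until v stops changing, i.e. v is (numerically) a fixed point.
    for _ in range(max_steps):
        v_next = update(v)
        if np.linalg.norm(v_next - v) < tol:
            return v_next
        v = v_next
    return v

# Content-addressable recall: corrupt a stored pattern, then let the
# network settle. Here roughly 15% of the cue's entries are sign-flipped.
flips = np.where(rng.random(n) < 0.15, -1.0, 1.0)
recalled = settle(patterns[0] * flips)

# Overlap near 1 means the cue fell inside the stored pattern's basin of
# attraction; a converged state far from every stored pattern would be a
# spurious attractor ("false memory").
print("overlap:", abs(recalled @ patterns[0]) / n)

With a lightly corrupted cue the dynamics typically settle near the stored pattern (overlap close to 1); settling somewhere else corresponds to the spurious attractors that the paper's fixed-point locator is shown to identify alongside the learned memories.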

Original language: English (US)
Title of host publication: Proceedings - 17th IEEE International Conference on Machine Learning and Applications, ICMLA 2018
Editors: M. Arif Wani, Moamar Sayed-Mouchaweh, Edwin Lughofer, Joao Gama, Mehmed Kantardzic
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 278-284
Number of pages: 7
ISBN (Electronic): 9781538668047
DOI: 10.1109/ICMLA.2018.00048
State: Published - Jan 15 2019
Event: 17th IEEE International Conference on Machine Learning and Applications, ICMLA 2018 - Orlando, United States
Duration: Dec 17 2018 - Dec 20 2018

Publication series

Name: Proceedings - 17th IEEE International Conference on Machine Learning and Applications, ICMLA 2018

Conference

Conference: 17th IEEE International Conference on Machine Learning and Applications, ICMLA 2018
Country: United States
City: Orlando
Period: 12/17/18 - 12/20/18

Keywords

  • Attractor neural networks
  • Directional fibers
  • Hebbian learning

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Networks and Communications
  • Computer Science Applications
  • Computer Vision and Pattern Recognition
  • Safety, Risk, Reliability and Quality
  • Signal Processing
  • Decision Sciences (miscellaneous)

Cite this

Sosis, B., Katz, G., & Reggia, J. (2019). Learning in a Continuous-Valued Attractor Network. In M. A. Wani, M. Sayed-Mouchaweh, E. Lughofer, J. Gama, & M. Kantardzic (Eds.), Proceedings - 17th IEEE International Conference on Machine Learning and Applications, ICMLA 2018 (pp. 278-284). [8614073] (Proceedings - 17th IEEE International Conference on Machine Learning and Applications, ICMLA 2018). Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.1109/ICMLA.2018.00048
