Training techniques to obtain fault-tolerant neural networks

Ching Tai Chiu, Kishan Mehrotra, Chilukuri K. Mohan, Sanjay Ranka

Research output: Conference contribution

42 Scopus citations

Abstract

This paper addresses methods for improving the fault tolerance of feedforward neural networks. The first method coerces weights toward low magnitudes during backpropagation training, since high-magnitude weights degrade fault tolerance; at the same time, hidden nodes are added dynamically to the network to ensure that the desired performance can still be obtained. The second method injects artificial faults into various components (nodes and links) of the network during training. The third method repeatedly removes nodes that do not significantly affect the network output, then adds new nodes that share the load of the more critical nodes in the network. Experimental results show that these methods achieve better robustness than standard backpropagation training and compare favorably with other approaches [1, 15].
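The abstract's first two techniques map naturally onto ordinary backpropagation code. The sketch below is a minimal illustration, not the authors' implementation: it trains a tiny network on XOR while (a) adding an L2-style penalty that coerces weights toward low magnitudes and (b) injecting random stuck-at-zero faults on hidden nodes at each training step. Every hyperparameter here (`lam`, `fault_prob`, the learning rate, the architecture) is an assumption chosen for illustration; the paper's dynamic node addition and its third technique (pruning and replacing critical nodes) are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny 2-8-1 network on XOR (illustrative setup, not the paper's).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

n_hidden = 8
W1 = rng.normal(scale=0.5, size=(2, n_hidden))
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))

lr = 0.5          # learning rate (assumed)
lam = 1e-3        # weight-magnitude penalty strength (assumed)
fault_prob = 0.2  # per-step chance a hidden node is stuck at zero (assumed)

for step in range(20000):
    # Forward pass with artificial stuck-at-zero faults on hidden nodes
    # (the abstract's second technique, restricted here to node faults).
    h = sigmoid(X @ W1)
    mask = (rng.random(n_hidden) > fault_prob).astype(float)
    h_faulty = h * mask            # faulted nodes output 0 this step
    out = sigmoid(h_faulty @ W2)

    # Backward pass (squared error); gradients flow only through live nodes.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * mask * h * (1 - h)

    # Weight update with an L2 penalty that coerces weights toward low
    # magnitudes, approximating the abstract's first technique.
    W2 -= lr * (h_faulty.T @ d_out + lam * W2)
    W1 -= lr * (X.T @ d_h + lam * W1)

# Evaluate fault tolerance: output error with each hidden node faulted.
h = sigmoid(X @ W1)
baseline = np.mean((sigmoid(h @ W2) - y) ** 2)
for i in range(n_hidden):
    m = np.ones(n_hidden)
    m[i] = 0.0
    err = np.mean((sigmoid((h * m) @ W2) - y) ** 2)
    print(f"node {i} faulted: MSE {err:.4f} (baseline {baseline:.4f})")
```

The loop at the end measures how much the output error grows when each hidden node is removed individually, which is one simple way to gauge the single-fault robustness the paper aims for: a fault-tolerant network should show only a small increase over the baseline error for any single faulted node.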

Original language: English (US)
Title of host publication: Digest of Papers - International Symposium on Fault-Tolerant Computing
Publisher: IEEE Computer Society
Pages: 360-369
Number of pages: 10
ISBN (Print): 0818655224
State: Published - 1994
Event: Proceedings of the 24th International Symposium on Fault-Tolerant Computing - Austin, TX, USA
Duration: Jun 15 1994 - Jun 17 1994

Publication series

Name: Digest of Papers - International Symposium on Fault-Tolerant Computing
ISSN (Print): 0731-3071

Other

Other: Proceedings of the 24th International Symposium on Fault-Tolerant Computing
City: Austin, TX, USA
Period: 6/15/94 - 6/17/94

ASJC Scopus subject areas

  • Hardware and Architecture
  • General Engineering
