Abstract
This paper presents three approaches to improving the fault tolerance of neural networks. In the first two, the traditional backpropagation training algorithm is itself modified so that the trained networks have improved fault tolerance; we achieve better results than others [1, 10] who have also explored this possibility. The first method coerces weights toward low magnitudes during training, since high-magnitude weights degrade fault tolerance; at the same time, additional hidden nodes are added dynamically to the network to ensure the desired performance can still be reached. The second method injects artificial faults into various components (nodes and links) of a network during training, which yields networks that perform well even when faults occur. The third method repeatedly eliminates the nodes of least sensitivity, 'splits' the most sensitive nodes, and retrains the system. This generally results in the best performance, although it requires a small amount of additional retraining after a network is built. Experimental results show that these methods achieve better robustness than standard backpropagation training and compare favorably with other approaches [1, 10].
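To make the first method concrete, the sketch below shows backpropagation with a weight-magnitude penalty and dynamic hidden-node growth. The penalty coefficient `decay`, the growth rule, and the function names are illustrative assumptions; the abstract does not give the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_low_magnitude(X, y, n_hidden=2, max_hidden=16, lr=0.5,
                        decay=1e-3, target_mse=0.02, epochs_per_stage=5000):
    """Backprop with a low-magnitude weight penalty; grow the hidden
    layer when the error target is not reached (hypothetical names)."""
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, n_hidden))
    W2 = rng.normal(0, 0.5, (n_hidden, 1))
    while True:
        for _ in range(epochs_per_stage):
            h = sigmoid(X @ W1)
            out = sigmoid(h @ W2)
            d_out = (out - y) * out * (1 - out)
            d_h = (d_out @ W2.T) * h * (1 - h)
            # gradients of MSE plus the weight-magnitude (decay) penalty
            W2 -= lr * (h.T @ d_out + decay * W2)
            W1 -= lr * (X.T @ d_h + decay * W1)
        mse = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))
        if mse <= target_mse or W1.shape[1] >= max_hidden:
            return W1, W2, mse
        # target not met: add one hidden node and keep training
        W1 = np.hstack([W1, rng.normal(0, 0.5, (n_in, 1))])
        W2 = np.vstack([W2, rng.normal(0, 0.5, (1, 1))])

# toy usage on XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)
W1, W2, mse = train_low_magnitude(X, y)
```

The decay term pulls every weight toward zero on each step, so no single high-magnitude weight becomes a critical point of failure; node growth compensates for the capacity this penalty removes.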
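The second method amounts to fault injection during training. A plausible sketch follows, assuming a stuck-at-zero model for node and link faults; the fault probabilities and the fault model itself are our assumptions, not details given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_with_faults(X, y, W1, W2, lr=0.5, epochs=20000,
                      p_node=0.1, p_link=0.1):
    """Backprop where, on each epoch, a random hidden node may be
    stuck at zero and a random link weight may be zeroed."""
    for _ in range(epochs):
        # node fault: mask one randomly chosen hidden unit
        mask = np.ones(W1.shape[1])
        if rng.random() < p_node:
            mask[rng.integers(W1.shape[1])] = 0.0
        # link fault: temporarily zero one input-to-hidden weight
        W1f = W1.copy()
        if rng.random() < p_link:
            W1f[rng.integers(W1.shape[0]), rng.integers(W1.shape[1])] = 0.0
        h = sigmoid(X @ W1f) * mask
        out = sigmoid(h @ W2)
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # update the real weights using gradients from the faulted pass
        W2 -= lr * (h.T @ d_out)
        W1 -= lr * (X.T @ d_h)
    return W1, W2
```

Because the loss is repeatedly evaluated on damaged copies of the network, training is pushed toward solutions whose performance does not hinge on any single node or link.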
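The third method ranks hidden nodes by sensitivity, prunes the least sensitive, and splits the most sensitive before retraining. The sketch below uses one plausible sensitivity measure, the rise in error when a node is stuck at zero; the paper's exact measure and schedule are not stated in the abstract.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(X, W1, W2, mask=None):
    h = sigmoid(X @ W1)
    if mask is not None:
        h = h * mask
    return sigmoid(h @ W2)

def node_sensitivity(X, y, W1, W2):
    # sensitivity of node j = rise in MSE when node j is stuck at zero
    base = np.mean((forward(X, W1, W2) - y) ** 2)
    sens = np.empty(W1.shape[1])
    for j in range(W1.shape[1]):
        mask = np.ones(W1.shape[1])
        mask[j] = 0.0
        sens[j] = np.mean((forward(X, W1, W2, mask) - y) ** 2) - base
    return sens

def prune_and_split(X, y, W1, W2):
    s = node_sensitivity(X, y, W1, W2)
    lo = int(np.argmin(s))                 # least sensitive: remove it
    keep = [j for j in range(W1.shape[1]) if j != lo]
    W1, W2 = W1[:, keep], W2[keep, :]
    hi = int(np.argmax(node_sensitivity(X, y, W1, W2)))  # most sensitive
    # duplicate node `hi`; halving both copies' outgoing weights keeps
    # the network function unchanged before retraining
    W1 = np.hstack([W1, W1[:, [hi]]])
    W2 = np.vstack([W2, W2[[hi], :]])
    W2[hi, :] *= 0.5
    W2[-1, :] *= 0.5
    return W1, W2  # then retrain with backprop and repeat as needed
```

Splitting this way spreads a critical node's role across two units, so a fault in either copy removes only half of that contribution, which is consistent with the small amount of post-construction retraining the abstract mentions.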
Original language | English (US)
---|---
Title of host publication | IEEE International Conference on Neural Networks - Conference Proceedings
Publisher | IEEE Computer Society
Pages | 333-338
Number of pages | 6
Volume | 1
State | Published - 1994
Event | Proceedings of the 1994 IEEE International Conference on Neural Networks. Part 1 (of 7), Orlando, FL, USA, Jun 27 1994 → Jun 29 1994
ASJC Scopus subject areas
- Software