## Abstract

The rate of convergence of net output error is very low when training feedforward neural networks for multiclass problems using the backpropagation algorithm. While backpropagation will reduce the Euclidean distance between the actual and desired output vectors, the differences between some of the components of these vectors may increase in the first iteration. Furthermore, the magnitudes of subsequent weight changes in each iteration are very small, so that many iterations are required to compensate for the increased error in some components in the initial iterations. Our approach is to use a modular network architecture, reducing a K-class problem to a set of K two-class problems, with a separately trained network for each of the simpler problems. Speedups of one order of magnitude have been obtained experimentally, and in some cases convergence was possible using the modular approach but not using a nonmodular network.
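The core idea of the modular architecture, recasting a K-class problem as K class-versus-rest problems, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names and toy labels are hypothetical, and each binary target vector would feed a separately trained two-class network.

```python
import numpy as np

def decompose_one_vs_rest(y, num_classes):
    """Split a K-class label vector into K binary target vectors,
    one per module: module k learns class k vs. all other classes."""
    return [(y == k).astype(int) for k in range(num_classes)]

def combine_modules(scores):
    """Combine the K module outputs (one score per module per sample)
    into a single K-class prediction by taking the highest-scoring module."""
    return np.argmax(scores, axis=0)

# Hypothetical 3-class labels for five samples
y = np.array([0, 2, 1, 0, 2])
targets = decompose_one_vs_rest(y, 3)
# targets[0] is the two-class target for module 0: [1, 0, 0, 1, 0]
```

Each module thus sees a simpler error surface than a single network with K output units, which is the source of the reported speedup.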

| Original language | English (US) |
|---|---|
| Pages (from-to) | 117-124 |
| Number of pages | 8 |
| Journal | IEEE Transactions on Neural Networks |
| Volume | 6 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jan 1995 |

## ASJC Scopus subject areas

- Software
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence