In this work, we study the dynamical properties of a machine learning technique called reservoir computing in order to gain insight into how representations of chaotic signals are encoded through learning. We train the reservoir on individual chaotic Lorenz signals. The Lorenz system is described by three coupled nonlinear differential equations and has three fixed points, all of which are unstable in the chaotic regime of the strange attractor. Examining the fixed points of the trained reservoir allows us to determine whether the inherent dynamics of the Lorenz system are transposed onto the reservoir dynamics during learning. We do so using a novel fixed point finding technique called directional fibers. Directional fibers are mathematical objects that systematically locate fixed points in high-dimensional spaces, and they have been shown to be competitive with, and complementary to, traditional fixed point solvers. We find that the reservoir, after its output weights are trained, contains a higher-dimensional projection of the Lorenz fixed points with matching stability, even though the training data did not include the fixed points themselves. This shows that the reservoir does indeed learn dynamical properties of the Lorenz attractor. The directional fiber additionally identifies fixed points in the reservoir space that lie outside the projected Lorenz attractor region; these amplify perturbations during prediction and contribute to the failure of long-term time series prediction.
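As a concrete reference point for the fixed-point structure discussed above, the sketch below computes the three Lorenz fixed points and checks their instability via the eigenvalues of the Jacobian. This is a minimal illustration, assuming the canonical chaotic parameters (sigma=10, rho=28, beta=8/3); the abstract itself does not state which parameter values the paper uses.

```python
import numpy as np

# Assumption: canonical chaotic Lorenz parameters; the paper only
# refers to "the chaotic regime" without listing values.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz_fixed_points(sigma, rho, beta):
    """Return the three fixed points of the Lorenz system:
    the origin and the two symmetric points C+/- at the centers
    of the attractor's lobes (valid for rho > 1)."""
    r = np.sqrt(beta * (rho - 1.0))
    return [np.array([0.0, 0.0, 0.0]),
            np.array([r, r, rho - 1.0]),
            np.array([-r, -r, rho - 1.0])]

def lorenz_jacobian(p, sigma, rho, beta):
    """Jacobian of the Lorenz vector field at point p = (x, y, z)."""
    x, y, z = p
    return np.array([[-sigma,  sigma,  0.0],
                     [rho - z, -1.0,   -x ],
                     [y,        x,    -beta]])

for p in lorenz_fixed_points(sigma, rho, beta):
    eigs = np.linalg.eigvals(lorenz_jacobian(p, sigma, rho, beta))
    # A fixed point is unstable if any eigenvalue has positive real part.
    print(p, "unstable:", bool(np.any(eigs.real > 0)))
```

At these parameter values all three fixed points are unstable, consistent with the chaotic regime described in the abstract.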
Encoding of a Chaotic Attractor in a Reservoir Computer: A Directional Fiber Investigation. / Krishnagopal, Sanjukta; Katz, Garrett; Girvan, Michelle; Reggia, James. 2019 International Joint Conference on Neural Networks, IJCNN 2019. Institute of Electrical and Electronics Engineers Inc., 2019. 8851853 (Proceedings of the International Joint Conference on Neural Networks; Vol. 2019-July).
Research output: Chapter in Book/Report/Conference proceeding › Conference contribution