Publisher's Synopsis
Neural networks have been applied successfully in the identification and control of dynamic systems. The universal approximation capabilities of the multilayer perceptron make it a popular choice for modeling nonlinear systems and for implementing general-purpose nonlinear controllers [HaDe99]. This book introduces three popular neural network architectures for prediction and control that have been implemented in the Neural Network Toolbox software, presents a brief description of each, and shows how you can use them: Model Predictive Control, NARMA-L2 (or Feedback Linearization) Control, and Model Reference Control (the NARMA-L2 control law is sketched after this synopsis).

Radial basis networks can require more neurons than standard feedforward backpropagation networks, but they can often be designed in a fraction of the time it takes to train standard feedforward networks. They work best when many training vectors are available.

Probabilistic neural networks can be used for classification problems. When an input is presented, the first layer computes the distances from the input vector to each of the training input vectors and produces a vector whose elements indicate how close the input is to each training input. The second layer sums these contributions for each class of inputs to produce, as its net output, a vector of probabilities. Finally, a compete transfer function on the output of the second layer picks the maximum of these probabilities and produces a 1 for the winning class and a 0 for the other classes.

Self-organization in networks is one of the most fascinating topics in the neural network field. Such networks can learn to detect regularities and correlations in their input and adapt their future responses to that input accordingly. The neurons of competitive networks learn to recognize groups of similar input vectors. Self-organizing maps learn to recognize groups of similar input vectors in such a way that neurons physically near each other in the neuron layer respond to similar input vectors. Self-organizing maps have no target vectors, since their purpose is to divide the input vectors into clusters of similar vectors; there is no desired output for these types of networks.

Learning vector quantization (LVQ) is a method for training competitive layers in a supervised manner (with target outputs). A competitive layer automatically learns to classify input vectors, but the classes it finds depend only on the distances between input vectors: if two input vectors are very similar, the competitive layer will probably put them in the same class. A strictly competitive layer has no mechanism for deciding whether any two input vectors belong to the same class or to different classes. LVQ networks, on the other hand, learn to classify input vectors into target classes chosen by the user.
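A note on why NARMA-L2 is also called feedback linearization (a sketch of the idea, in the spirit of the toolbox documentation; the symbols d, n, and the reference signal y_r are notation introduced here for illustration): the plant is approximated by a model that is affine in the current control u(k),

    yhat(k+d) = f[y(k), ..., y(k-n+1), u(k-1), ..., u(k-n+1)]
              + g[y(k), ..., y(k-n+1), u(k-1), ..., u(k-n+1)] * u(k)

where f and g are realized by neural networks. Because the model is linear in u(k), the control that drives the output toward the reference y_r follows by simple algebra:

    u(k) = ( y_r(k+d) - f[.] ) / g[.]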
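A minimal sketch of the radial basis design step described above, using the toolbox's newrb function; the data, error goal, and spread value below are illustrative assumptions, not values from the book:

    % Fit a noisy sine curve with a radial basis network (illustrative data)
    P = -1:0.1:1;                          % training inputs
    T = sin(2*pi*P) + 0.1*randn(size(P));  % noisy targets
    net = newrb(P, T, 0.01, 0.8);          % error goal 0.01, spread 0.8
    Y = sim(net, P);                       % network response to the inputs

newrb designs the network incrementally, adding one radial basis neuron at a time until the error goal is met, which is why design can be much quicker than iteratively training a standard feedforward network.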
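The two-layer classification just described can be exercised with the toolbox's newpnn function; the seven scalar inputs and three classes are an illustrative assumption:

    P = [1 2 3 4 5 6 7];    % training input vectors (scalars here)
    Tc = [1 2 3 2 2 3 1];   % class index for each training input
    T = ind2vec(Tc);        % class indices as target class vectors
    net = newpnn(P, T);     % layer 1 stores the training inputs
    Y = sim(net, P);        % summed contributions per class
    Yc = vec2ind(Y);        % compete picks the most probable class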
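A minimal clustering sketch with the toolbox's competlayer and selforgmap functions; the two synthetic clusters and the 4-by-4 map size are assumptions made for illustration:

    x = [randn(2,100), randn(2,100) + 3];  % two loose clusters, no targets
    net = competlayer(2);                  % competitive layer, two neurons
    net = train(net, x);                   % unsupervised training
    classes = vec2ind(net(x));             % cluster index per input vector

    som = selforgmap([4 4]);               % 4-by-4 self-organizing map
    som = train(som, x);
    winners = vec2ind(som(x));             % nearby neurons win for similar inputs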
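By contrast, an LVQ network is trained toward target classes chosen by the user; a minimal sketch with the toolbox's lvqnet function (the data and the choice of four hidden neurons are illustrative assumptions):

    x = [-3 -2 -2 0 0  0 2  2 3; ...
          0  1 -1 2 1 -1 1 -1 0];  % nine 2-D input vectors
    tc = [1 1 1 2 2 2 1 1 1];      % target class for each input
    t = ind2vec(tc);               % targets as class vectors
    net = lvqnet(4);               % four neurons in the competitive layer
    net = train(net, x, t);        % supervised training
    yc = vec2ind(net(x));          % predicted class for each input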
This book develops the following topics:
- "Neural Network Control Systems"
- "Neural Network Predictive Controller in Simulink"
- "NARMA-L2 Neural Controller in Simulink"
- "Design Model-Reference Neural Controller in Simulink"
- "Import-Export Neural Network Simulink Control Systems"
- "Radial Basis Neural Networks"
- "Probabilistic Neural Networks"
- "Generalized Regression Neural Networks"
- "Learning Vector Quantization Networks"
- "Self-Organizing and LVQ"
- "Cluster with a Competitive Neural Network"
- "Cluster with Self-Organizing Map Neural Network"
- "Adaptive Neural Network Filters"