RESEARCH PRODUCT

A NEURAL NETWORK PRIMER

Hervé Abdi

subject

Radial basis function network, Theoretical computer science, Ecology, Liquid state machine, Computer science, Time delay neural network, Applied Mathematics, Activation function, General Medicine, Topology, Agricultural and Biological Sciences (miscellaneous), Hopfield network, Recurrent neural network, Multilayer perceptron, Types of artificial neural networks

description

Neural networks are composed of basic units somewhat analogous to neurons. These units are linked to each other by connections whose strength is modifiable as a result of a learning process or algorithm. Each of these units independently integrates (in parallel) the information provided by its synapses in order to evaluate its state of activation. The unit response is then a linear or nonlinear function of its activation. Linear algebra concepts are used, in general, to analyze linear units, with eigenvectors and eigenvalues being the core concepts involved. This analysis makes clear the strong similarity between linear neural networks and the general linear model developed by statisticians. The linear models presented here are the perceptron and the linear associator. The behavior of nonlinear networks can be described within the framework of optimization and approximation techniques with dynamical systems (e.g., those used to model spin glasses). One of the main notions used with nonlinear unit networks is that of an attractor. When the task of the network is to associate a response with some specific input patterns, the most popular nonlinear technique consists of using hidden layers of neurons trained with back-propagation of error. The nonlinear models presented are the Hopfield network, the Boltzmann machine, the back-propagation network, and the radial basis function network.
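The unit dynamics described above (a weighted integration of synaptic inputs, followed by a linear or nonlinear response function) and the Hebbian outer-product rule underlying a linear associator can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names, the learning rate, and the choice of the logistic function as the nonlinearity are assumptions for the example.

```python
import math

def unit_response(inputs, weights, nonlinear=True):
    # Each unit integrates the information provided by its synapses
    # into a state of activation (a weighted sum) ...
    activation = sum(x * w for x, w in zip(inputs, weights))
    # ... and responds with a linear or nonlinear function of it.
    if nonlinear:
        return 1.0 / (1.0 + math.exp(-activation))  # logistic squashing (assumed choice)
    return activation

def hebbian_update(weights, inputs, target, rate=0.1):
    # Linear-associator learning: each connection strength changes in
    # proportion to the product of its input and the desired output.
    return [w + rate * target * x for w, x in zip(weights, inputs)]
```

For instance, a linear unit with weights [3, 4] given inputs [1, 2] has activation 3 + 8 = 11, while the same zero-weight unit under the logistic response outputs 0.5.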

https://doi.org/10.1142/s0218339094000179