RESEARCH PRODUCT
Using the Hermite Regression Formula to Design a Neural Architecture with Automatic Learning of the “Hidden” Activation Functions
Authors: Filippo Sorbello, Giovanni Pilato, Giorgio Vassallo, Salvatore Gaglio

Subject: Flexibility (engineering), Hermite polynomials, Artificial neural network, Computer science, Generalization, Activation function, Function (mathematics), Sigmoid function, Artificial intelligence, Algorithm, Regression
The gradient of a neural network's output function, evaluated at the training points, plays an essential role in its generalization capability. This paper presents a feed-forward neural architecture (αNet) that learns the activation functions of its hidden units during the training phase. The automatic learning is obtained through the joint use of the Hermite regression formula and the conjugate gradient descent (CGD) optimization algorithm with the Powell restart conditions. This technique yields a smooth output function of αNet in the neighborhood of the training points, improving both the generalization capability and the flexibility of the neural architecture. Experimental results, obtained by comparing αNet with traditional architectures that use sigmoidal or sinusoidal activation functions, show that αNet is very flexible and has good approximation and classification capabilities.
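The record does not include the paper's exact Hermite regression formula, but the core idea it describes — a hidden-unit activation expressed as a learnable linear combination of Hermite basis functions, with coefficients that any gradient-based optimizer (such as CGD) can update — can be sketched as follows. The class and parameter names here are hypothetical illustrations, not the paper's notation; the Gaussian envelope on the basis is one common choice for keeping the expansion well-behaved and is an assumption.

```python
import numpy as np

def hermite_basis(x, n_terms):
    """Evaluate the first n_terms physicists' Hermite polynomials at x,
    using the recurrence H_0 = 1, H_1 = 2x, H_{k+1} = 2x H_k - 2k H_{k-1}.
    Returns an array of shape (n_terms, len(x))."""
    H = [np.ones_like(x), 2.0 * x]
    for k in range(1, n_terms - 1):
        H.append(2.0 * x * H[k] - 2.0 * k * H[k - 1])
    return np.stack(H[:n_terms])

class HermiteActivation:
    """Hypothetical hidden-unit activation: a learnable linear combination
    of Gaussian-weighted Hermite polynomials. The coefficients c are the
    trainable parameters that a gradient method (e.g. CGD with Powell
    restarts, as in the paper) would adjust during training."""

    def __init__(self, n_terms=4, seed=0):
        rng = np.random.default_rng(seed)
        self.c = rng.normal(scale=0.1, size=n_terms)  # learnable coefficients

    def basis(self, x):
        # Gaussian envelope exp(-x^2/2) keeps the expansion bounded (assumption).
        return hermite_basis(x, len(self.c)) * np.exp(-x ** 2 / 2.0)

    def __call__(self, x):
        # Activation value: phi(x) = sum_n c_n * h_n(x)
        return self.c @ self.basis(x)

    def grad_c(self, x):
        # Gradient of phi(x) w.r.t. the coefficients is simply the basis
        # values, so updating c is an ordinary linear-in-parameters step.
        return self.basis(x)
```

Because the activation is linear in its coefficients, its shape can be fitted jointly with the network weights by any conjugate-gradient routine, which is consistent with the CGD-based training the abstract describes.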
year | journal | country | edition | language
---|---|---|---|---
2000-01-01 | | | |