Role of the training algorithm in model selection on neural networks
A neural network's ability to fit data is strongly affected by the network configuration, particularly the number of hidden neurons and input variables. As these parameters increase, the network's capacity to learn grows and its fit improves. Theoretically, if parameters are added incrementally, the error should decrease systematically, provided that the models are nested at each step of the process. In this work, we test the hypothesis that adding hidden neurons to nested models leads to systematic error reductions, regardless of the learning algorithm used; to illustrate the discussion, we use the airline-passengers and sunspots series of Box & Jenkins, with RProp and the Delta Rule as learning methods. The experimental evidence shows that the evaluated training methods behave differently from what theory predicts; that is, the assumption of error reduction is not fulfilled.
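The experiment the abstract describes — fitting a sequence of nested networks with an increasing number of hidden neurons and checking whether the training error falls monotonically — can be sketched as follows. This is a minimal NumPy illustration on a synthetic series, not the authors' code; `train_mlp` and the toy series are hypothetical stand-ins for the airline-passengers and sunspots data:

```python
import numpy as np

def train_mlp(X, y, n_hidden, lr=0.05, epochs=300, seed=0):
    """Train a one-hidden-layer MLP by plain gradient descent
    (the generalized delta rule via backprop). Returns final MSE."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
    b1 = np.zeros(n_hidden)
    w2 = rng.normal(0.0, 0.5, n_hidden)
    b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)          # hidden activations
        yhat = h @ w2 + b2                # linear output unit
        err = yhat - y
        # backpropagated gradients, averaged over the sample
        gw2 = h.T @ err / len(y)
        gb2 = err.mean()
        gh = np.outer(err, w2) * (1.0 - h**2)
        gW1 = X.T @ gh / len(y)
        gb1 = gh.mean(axis=0)
        w2 -= lr * gw2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    h = np.tanh(X @ W1 + b1)              # recompute fit with final weights
    return float(np.mean((h @ w2 + b2 - y) ** 2))

# lagged autoregressive design matrix from a toy quasi-periodic series
t = np.arange(60)
series = np.sin(0.3 * t) + 0.1 * np.cos(1.1 * t)
p = 3  # number of lags used as inputs
X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
y = series[p:]

# nested models: 1..5 hidden neurons, same inputs, same initial seed
errors = [train_mlp(X, y, h) for h in range(1, 6)]
monotone = all(e2 <= e1 + 1e-12 for e1, e2 in zip(errors, errors[1:]))
print(errors, monotone)
```

Because each fit is a local optimum of a non-convex loss, `monotone` is not guaranteed to be `True`, which is exactly the point the paper makes: the theoretical error-reduction property of nested models need not survive the training algorithm.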
Main Authors: | Sánchez, Paola; Velásquez, Juan |
---|---|
Format: | Digital revista |
Language: | spa |
Published: | Universidad de Ciencias Aplicadas y Ambientales U.D.C.A, 2011 |
Online Access: | https://revistas.udca.edu.co/index.php/ruadc/article/view/767 |
id | rev-ruadc-co-article-767 |
---|---|
record_format | ojs |
institution | UDCA CO |
collection | OJS |
country | Colombia |
countrycode | CO |
component | Revista |
access | En linea |
databasecode | rev-ruadc-co |
tag | revista |
region | America del Sur |
libraryname | Biblioteca de la UDCA de Colombia |
language | spa |
format | Digital |
author | Sánchez, Paola; Velásquez, Juan |
spellingShingle | Sánchez, Paola; Velásquez, Juan; Role of the training algorithm in model selection on neural networks |
author_facet | Sánchez, Paola; Velásquez, Juan |
author_sort | Sánchez, Paola |
title | Role of the training algorithm in model selection on neural networks |
title_short | Role of the training algorithm in model selection on neural networks |
title_full | Role of the training algorithm in model selection on neural networks |
title_fullStr | Role of the training algorithm in model selection on neural networks |
title_full_unstemmed | Role of the training algorithm in model selection on neural networks |
title_sort | role of the training algorithm in model selection on neural networks |
description | A neural network's ability to fit data is strongly affected by the network configuration, particularly the number of hidden neurons and input variables. As these parameters increase, the network's capacity to learn grows and its fit improves. Theoretically, if parameters are added incrementally, the error should decrease systematically, provided that the models are nested at each step of the process. In this work, we test the hypothesis that adding hidden neurons to nested models leads to systematic error reductions, regardless of the learning algorithm used; to illustrate the discussion, we use the airline-passengers and sunspots series of Box & Jenkins, with RProp and the Delta Rule as learning methods. The experimental evidence shows that the evaluated training methods behave differently from what theory predicts; that is, the assumption of error reduction is not fulfilled. |
publisher | Universidad de Ciencias Aplicadas y Ambientales U.D.C.A |
publishDate | 2011 |
url | https://revistas.udca.edu.co/index.php/ruadc/article/view/767 |
work_keys_str_mv | AT sanchezpaola roleofthetrainingalgorithminmodelselectiononneuralnetworks AT velasquezjuan roleofthetrainingalgorithminmodelselectiononneuralnetworks AT sanchezpaola elroldelalgoritmodeentrenamientoenlaselecciondemodelosderedesneuronales AT velasquezjuan elroldelalgoritmodeentrenamientoenlaselecciondemodelosderedesneuronales |
_version_ | 1763178472304279552 |
spelling | rev-ruadc-co-article-767 2021-07-13T07:56:40Z |
---|---|
Title (en) | Role of the training algorithm in model selection on neural networks |
Title (es) | El rol del algoritmo de entrenamiento en la selección de modelos de redes neuronales |
Authors | Sánchez, Paola; Velásquez, Juan |
Keywords | Redes Neuronales; Algoritmo de Entrenamiento; Artificial neural networks; Training algorithm |
Abstract (translated from the Spanish) | A neural network's fitting ability is often affected by the configuration used, especially the number of hidden neurons and input variables, since as the number of model parameters grows, the network's learning is favored and the fit therefore improves. Theoretically, a constructive process of parameter addition should lead to systematic error reductions, provided the models are nested at each step of the process. In this work, the hypothesis that adding hidden neurons to nested models should lead to error reductions, regardless of the training algorithm used, is validated; to illustrate the discussion, the Box & Jenkins airline-passengers and sunspots series and the Delta Rule and RProp training methods were used. The experimental evidence shows that the evaluated training methods exhibit behaviors different from those theoretically expected, violating the assumption of error reduction. |
Publisher | Universidad de Ciencias Aplicadas y Ambientales U.D.C.A |
Published | 2011-06-30 |
Type | info:eu-repo/semantics/article; info:eu-repo/semantics/publishedVersion |
Formats | application/pdf; text/html |
Article URL | https://revistas.udca.edu.co/index.php/ruadc/article/view/767 |
DOI | 10.31910/rudca.v14.n1.2011.767 |
Source | Revista U.D.C.A Actualidad & Divulgación Científica; Vol. 14 No. 1 (2011): Enero-Junio; 149-156 |
ISSN | 2619-2551; 0123-4226 |
Issue DOI | 10.31910/rudca.v14.n1.2011 |
Language | spa |
Full text | https://revistas.udca.edu.co/index.php/ruadc/article/view/767/839; https://revistas.udca.edu.co/index.php/ruadc/article/view/767/840 |

References:

- ADYA, M.; COLLOPY, F. 1998. How effective are neural networks at forecasting and prediction? A review and evaluation. J. Forecasting. 17:481-495.
- ANASTASIADIS, A.D.; MAGOULAS, G.D.; VRAHATIS, M.N. 2003. An efficient improvement of the Rprop algorithm. Proceedings of the First International Workshop on Artificial Neural Networks in Pattern Recognition, University of Florence. p.197-201.
- COTTRELL, M.; GIRARD, B.; GIRARD, Y.; MANGEAS, M.; MULLER, C. 1995. Neural modeling for time series: a statistical stepwise method for weight elimination. IEEE Transactions on Neural Networks. 6(6):1355-1364.
- CRONE, S.; KOURENTZES, N. 2009. Input-variable specification for neural networks - an analysis of forecasting low and high time series frequency. Proceedings of the International Joint Conference on Neural Networks, IJCNN'09. p.619-626.
- FAHLMAN, S. 1989. Faster-learning variations of back-propagation: an empirical study. In: Touretzky, D.; Hinton, G.; Sejnowski, T. (eds.) Proceedings of the 1988 Connectionist Models Summer School. p.38-51.
- FARAWAY, J.; CHATFIELD, C. 1998. Time series forecasting with neural networks: a comparative study using the airline data. Appl. Statist. 47:231-250.
- GHIASSI, M.; SAIDANE, H.; ZIMBRA, D.K. 2005. A dynamic neural network model for forecasting time series events. International J. Forecasting. 21:341-362.
- HAGAN, M.T.; DEMUTH, H.B.; BEALE, M.H. 1996. Neural Network Design. PWS Publishing, Boston, MA.
- HAMILTON, J.D. 1994. Time Series Analysis. Princeton University Press, Princeton, NJ. 820p.
- HORNIK, K.; STINCHCOMBE, M.; WHITE, H. 1989. Multilayer feedforward networks are universal approximators. Neural Networks. 2(5):359-366.
- MURATA, N.; YOSHIZAWA, S.; AMARI, S. 1994. Network information criterion - determining the number of hidden units for an artificial neural network model. IEEE Transactions on Neural Networks. 5:865-872.
- QI, M.; ZHANG, P.G. 2001. An investigation of model selection criteria for neural network time series forecasting. European J. Operational Research. 132:666-680.
- TANG, Z.; KOEHLER, J.G. 1994. Deterministic global optimal FNN training algorithms. Neural Networks. 7:1405-1412.
- VELÁSQUEZ, J.D.; DYNER, I.; SOUZA, R.C. 2008. Modelado del precio de la electricidad en Brasil usando una red neuronal autorregresiva. Ingeniare. Rev. Chilena Ingeniería. 16(3):394-403.
- ZHANG, P.G.; PATUWO, B.E.; HU, M.Y. 1998. Forecasting with artificial neural networks: the state of the art. International J. Forecasting. 14(1):35-62.
- ZHANG, G.P.; PATUWO, B.E.; HU, M.Y. 2001. A simulation study of artificial neural networks for nonlinear time-series forecasting. Computers & Operations Research. 28(4):381-396.
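The abstract contrasts the Delta Rule (plain gradient descent) with RProp, which adapts a separate step size per weight from the sign of successive gradients rather than their magnitude. A minimal sketch of that update rule, following the iRPROP⁻ variant and not the authors' implementation (all names here are illustrative):

```python
import numpy as np

def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One RProp-style update of per-weight step sizes (iRPROP- variant).

    Returns (delta_w, new_step, grad_to_remember). Step sizes grow by
    eta_plus while the gradient keeps its sign and shrink by eta_minus
    when it flips; on a flip the gradient is zeroed so no move is made.
    """
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)   # forget gradient on sign flip
    return -np.sign(grad) * step, step, grad

# toy demonstration: minimize f(w) = sum(w**2), whose gradient is 2*w
w = np.array([3.0, -2.0])
step = np.full_like(w, 0.1)
prev_g = np.zeros_like(w)
for _ in range(100):
    g = 2.0 * w
    dw, step, prev_g = rprop_step(g, prev_g, step)
    w += dw
print(w)
```

Because the move depends only on the gradient's sign, two RProp runs and a Delta Rule run from the same starting weights can land in different local minima, which is one mechanism behind the non-monotone errors the paper reports for nested models.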