

dc.creator: Segura, Enrique Carlos
dc.date.accessioned: 2019-02-19T21:46:12Z
dc.date.available: 2019-02-19T21:46:12Z
dc.date.issued: 2013-12-31
dc.identifier.citation: Segura, E. (2013). On the approximation of the inverse dynamics of a robotic manipulator by a neural network trained with a stochastic learning algorithm. INGE CUC, 9(2), 39-43. Retrieved from https://revistascientificas.cuc.edu.co/ingecuc/article/view/4
dc.identifier.issn: 0122-6517 (print), 2382-4700 (electronic)
dc.identifier.uri: http://hdl.handle.net/11323/2631
dc.description.abstract: The SAGA algorithm is used to approximate the inverse dynamics of a robotic manipulator with two rotational joints. SAGA (Simulated Annealing Gradient Adaptation) is a stochastic strategy for the additive construction of an artificial neural network of the two-layer perceptron type, based on three essential elements: a) updating the network weights using gradient information from the cost function; b) accepting or rejecting the proposed change through a classical simulated annealing technique; and c) growing the neural network progressively, as its structure proves insufficient, using a conservative strategy for adding units to the hidden layer. Experiments are performed and efficiency is analyzed in terms of the relation between mean relative errors (in the training and testing sets), network size, and computation time. The ability of the proposed technique to produce good approximations while minimizing the complexity of the network architecture, and hence the required computational memory, is emphasized. Moreover, the evolution of the minimization process as the cost surface is modified is also discussed.
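For context on what the network is approximating: the inverse dynamics of a rigid two-joint manipulator is conventionally written in the standard rigid-body form below (a textbook formulation, not taken from this record), mapping joint positions q, velocities \dot{q}, and accelerations \ddot{q} to the joint torques \tau:

\tau = M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q)

where M(q) is the 2x2 inertia matrix, C(q,\dot{q}) collects the Coriolis and centrifugal terms, and g(q) the gravity terms; the network learns the map (q, \dot{q}, \ddot{q}) \mapsto \tau from samples.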
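The three elements listed in the abstract (a gradient-based proposal, a simulated-annealing accept/reject step, and conservative hidden-layer growth) fit naturally into one training loop. The sketch below, in Python, is a minimal hypothetical reconstruction from the abstract alone: every name, schedule, and threshold here (saga_train, grow_patience, the cooling factor, the finite-difference gradient) is an assumption for illustration, not the author's actual SAGA implementation.

import numpy as np

# Hypothetical SAGA-style loop reconstructed from the abstract only:
# (a) gradient proposal, (b) simulated-annealing accept/reject,
# (c) conservative growth of the hidden layer. All constants are guesses.

rng = np.random.default_rng(0)

def forward(params, X):
    """Two-layer perceptron: one tanh hidden layer, linear output."""
    W1, b1, W2, b2 = params
    return np.tanh(X @ W1 + b1) @ W2 + b2

def cost(params, X, Y):
    """Mean squared error, standing in for the paper's cost function."""
    return np.mean((forward(params, X) - Y) ** 2)

def numeric_grad(params, X, Y, eps=1e-5):
    """Finite-difference gradient of the cost (keeps the sketch self-contained)."""
    grads = []
    for p in params:
        g = np.zeros_like(p)
        it = np.nditer(p, flags=["multi_index"])
        for _ in it:
            i = it.multi_index
            old = p[i]
            p[i] = old + eps; hi = cost(params, X, Y)
            p[i] = old - eps; lo = cost(params, X, Y)
            p[i] = old
            g[i] = (hi - lo) / (2 * eps)
        grads.append(g)
    return grads

def saga_train(X, Y, hidden=2, max_hidden=12, steps=2000,
               lr=0.05, T=1.0, cooling=0.999, grow_patience=300):
    n_in, n_out = X.shape[1], Y.shape[1]
    params = [rng.normal(0, 0.3, (n_in, hidden)), np.zeros(hidden),
              rng.normal(0, 0.3, (hidden, n_out)), np.zeros(n_out)]
    best, stalled = cost(params, X, Y), 0
    for _ in range(steps):
        # (a) propose a gradient-based update of all weights
        trial = [p - lr * g for p, g in zip(params, numeric_grad(params, X, Y))]
        # (b) accept or reject the proposal with the Metropolis criterion
        dE = cost(trial, X, Y) - cost(params, X, Y)
        if dE < 0 or rng.random() < np.exp(-dE / T):
            params = trial
        T *= cooling  # classical geometric cooling schedule (assumed)
        # (c) add one hidden unit, conservatively, when progress stalls
        c = cost(params, X, Y)
        if c < best - 1e-6:
            best, stalled = c, 0
        else:
            stalled += 1
        if stalled > grow_patience and params[0].shape[1] < max_hidden:
            W1, b1, W2, b2 = params
            params = [np.hstack([W1, rng.normal(0, 0.01, (n_in, 1))]),
                      np.append(b1, 0.0),
                      np.vstack([W2, rng.normal(0, 0.01, (1, n_out))]),
                      b2]
            stalled = 0
    return params

# Toy usage: a smooth 2-input, 1-output map standing in for one joint torque.
X = rng.uniform(-1, 1, (200, 2))
Y = np.sin(X[:, :1]) * np.cos(X[:, 1:])
params = saga_train(X, Y)
print("final MSE:", cost(params, X, Y))

Accepting occasional uphill moves is what distinguishes this from plain gradient descent; as T decays, the loop degenerates into near-deterministic gradient steps, which matches the abstract's framing of annealing as a gatekeeper on gradient proposals.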
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: Corporación Universidad de la Costa
dc.relation.ispartofseries: INGE CUC; Vol. 9, No. 2 (2013)
dc.source: INGE CUC
dc.subject: Neural network
dc.subject: Robotic manipulator
dc.subject: Multilayer perceptron
dc.subject: Stochastic learning
dc.subject: Inverse dynamics
dc.title: On the approximation of the inverse dynamics of a robotic manipulator by a neural network trained with a stochastic learning algorithm
dc.type: Article
dc.source.url: https://revistascientificas.cuc.edu.co/ingecuc/article/view/4
dc.rights.accessrights: info:eu-repo/semantics/openAccess
dc.identifier.eissn: 2382-4700
dc.identifier.pissn: 0122-6517
dc.type.hasversion: info:eu-repo/semantics/publishedVersion

