Object grasping using robotic imitation learning and force data

Carlos Andrés Peña Solórzano

Abstract


This article addresses object grasping in robotics, specifically the force required at the contact points between the hand and the object to achieve a good grasp.

We propose acquiring the force data with a data glove and encoding it through imitation learning. RGB and depth images are used to determine the location and orientation of the objects.

Several hand-object configurations are tested in simulation, comparing grasp quality when using the maximum, minimum, and trimmed-mean forces. The resulting variation in quality is small, and in some cases negligible, which leads to the conclusion that always selecting the maximum forces yields a grasp that adapts well to multiple configurations.
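The three force-aggregation strategies compared above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the contact layout, the number of demonstrations, and the trim fraction are assumptions made for the example.

```python
# Sketch: aggregating per-contact force samples from several demonstrations
# using the three statistics compared in the article: maximum, minimum,
# and trimmed mean.

def trimmed_mean(samples, trim_fraction=0.2):
    """Mean after discarding the lowest and highest trim_fraction of samples."""
    s = sorted(samples)
    k = int(len(s) * trim_fraction)
    core = s[k:len(s) - k] or s  # fall back if trimming removes everything
    return sum(core) / len(core)

def aggregate_contact_forces(demos):
    """demos: list of per-demonstration force vectors, one value per contact point.
    Returns the max, min, and trimmed-mean force at each contact point."""
    per_contact = list(zip(*demos))  # group samples by contact point
    return {
        "max": [max(c) for c in per_contact],
        "min": [min(c) for c in per_contact],
        "trimmed_mean": [trimmed_mean(c) for c in per_contact],
    }

# Example: three demonstrations, two contact points (forces in newtons)
demos = [[2.1, 3.0], [2.4, 2.8], [5.0, 3.1]]
print(aggregate_contact_forces(demos))
```

Selecting the `"max"` entry per contact point corresponds to the always-maximum-force strategy that the article finds robust across configurations.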

In addition, we present a low-cost force data acquisition system and an image processing stage that determines the location and orientation of the objects.
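One common way such an image processing stage can recover image-plane location and orientation is from the moments of a segmented object mask. The sketch below is a hypothetical illustration under that assumption, not the authors' pipeline; the binary mask would come from an earlier segmentation step (e.g. thresholding the depth image).

```python
# Hypothetical sketch: estimate an object's image-plane pose from a binary
# segmentation mask using first- and second-order image moments.
import math

def pose_from_mask(mask):
    """mask: 2D list of 0/1 values. Returns (cx, cy, theta) where (cx, cy)
    is the centroid and theta is the principal-axis angle in radians."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    n = len(pts)
    # Centroid (first-order moments)
    cx = sum(x for x, _ in pts) / n
    cy = sum(y for _, y in pts) / n
    # Central second-order moments
    mu20 = sum((x - cx) ** 2 for x, _ in pts) / n
    mu02 = sum((y - cy) ** 2 for _, y in pts) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pts) / n
    # Orientation of the principal axis
    theta = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return cx, cy, theta

# Example: a diagonal blob, oriented at 45 degrees
mask = [[1, 0, 0],
        [0, 1, 0],
        [0, 0, 1]]
print(pose_from_mask(mask))
```

Combining the centroid with the corresponding depth value would give a 3D location; full 3D orientation requires additional information from the depth image.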


Keywords


Grasping, robotics, imitation learning, programming by demonstration, force, image processing





DOI: https://doi.org/10.24050/reia.v0i0.613




Copyright (c) 2016 Revista EIA

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license.
