Robot Writing

Context
Handwriting is one of the most important motor skills we learn as humans. Children with handwriting difficulties can see their academic progress affected and often do less well at school. In this project, we propose to explore new methods for a robot to write with a pen. The project will use the Fetch robot and aim to evaluate how a robot could learn handwriting trajectories from human demonstrations.

Goals & Milestones
During this project, the student will:
- Learn about ROS
- Develop a ROS package that controls the Fetch robot arm given a series of handwriting strokes
- Explore different methods for the robot to learn letter writing from demonstrations
- Evaluate the implemented method against other state-of-the-art methods

Topics
ROS, Learning from Demonstration, Robotics, Handwriting

Prerequisites
Skills: Python, C++, ROS, Git.

References
See Zotero collection: https://www.zotero.org/groups/2419050/hri-unsw/collections/GQERYTFZ
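One building block of the handwriting-stroke milestone above would be a converter from 2D pen strokes on the page to 3D end-effector waypoints for the arm. The sketch below is purely illustrative: the function name, the writing-plane parameters, and the frame conventions are assumptions, not the project's actual API, and the resulting waypoints would still need to be sent to the Fetch arm via a Cartesian trajectory controller or MoveIt.

```python
# Sketch: map 2D handwriting strokes (lists of (x, y) points on the page,
# in millimetres) to 3D end-effector waypoints above a writing surface.
# All names and parameters are hypothetical, for illustration only.

def strokes_to_waypoints(strokes, origin=(0.5, 0.0, 0.75),
                         scale=0.001, lift=0.02):
    """Convert page-frame strokes to robot-base-frame waypoints.

    origin -- page origin in the robot base frame (metres)
    scale  -- millimetres on the page to metres in robot space
    lift   -- pen-lift height between strokes (metres)
    """
    ox, oy, oz = origin
    waypoints = []
    for stroke in strokes:
        if not stroke:
            continue
        # Approach the first point of the stroke with the pen lifted.
        x0, y0 = stroke[0]
        waypoints.append((ox + x0 * scale, oy + y0 * scale, oz + lift))
        # Trace the stroke with the pen on the surface.
        for (x, y) in stroke:
            waypoints.append((ox + x * scale, oy + y * scale, oz))
        # Lift the pen again at the end of the stroke.
        xe, ye = stroke[-1]
        waypoints.append((ox + xe * scale, oy + ye * scale, oz + lift))
    return waypoints
```

For example, a single 10 mm horizontal stroke `[[(0, 0), (10, 0)]]` yields four waypoints: a lifted approach, two on-surface points, and a lifted retreat. Separating pen-down tracing from pen-up transitions is what lets the robot write multi-stroke letters without dragging the pen between strokes.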

Voice for ROS

Context
Natural language is an important part of communication, since it offers an intuitive and efficient way of conveying ideas to another individual. Enabling robots to use language efficiently is essential for human-robot collaboration. In this project, we aim to develop an interface between a dialogue manager (e.g. DialogFlow) and ROS (Robot Operating System). This will allow powerful dialogue systems to be used in human-robot interaction scenarios. To demonstrate the potential of this new ROS package, we will implement a scenario combining tangible robots (Cellulo) with a voice assistant for upper-arm rehabilitation.

Goals & Milestones
During this project, the student will:
- Learn about Google DialogFlow and ROS
- Develop a ROS package that provides access to and manipulation of DialogFlow features
- Develop a Cellulo rehabilitation game
- Test the game in a pilot experiment

Topics
Voice Assistants, Human-Robot Interaction, ROS

Prerequisites
Skills: Python, C++, ROS, Git.

References
https://dialogflow.com/
https://www.ros.org/
http://wafa.johal.org/project/cellulo/
Hudson, C., Bethel, C. L., Carruth, D. W., Pleva, M., Juhar, J., & Ondas, S. (2017, October). A training tool for speech driven human-robot interaction applications. In 2017 15th International Conference on Emerging eLearning Technologies and Applications (ICETA) (pp. 1-6). IEEE.
Moore, R. K. (2017). Is spoken language all-or-nothing? Implications for future speech-based human-machine interaction. In Dialogues with Social Robots (pp. 281-291). Springer, Singapore.
Beirl, D., Yuill, N., & Rogers, Y. (2019). Using voice assistant skills in family life. In Lund, K., Niccolai, G. P., Lavoué, E., Gweon, C. H., & Baker, M. (Eds.), A Wide Lens: Combining Embodied, Enactive, Extended, and Embedded Learning in Collaborative Settings, 13th International Conference on Computer Supported Collaborative Learning (CSCL) 2019, Volume 1 (pp. 96-103). Lyon, France: International Society of the Learning Sciences.
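The core of such a DialogFlow-to-ROS bridge is a small translation layer that maps a recognised intent and its parameters onto a command the rehabilitation game understands. The sketch below is an assumption-laden illustration: the intent names, parameter keys, and command strings are all hypothetical, and the real package would obtain the parsed intent from the DialogFlow API and publish the resulting command on a ROS topic.

```python
# Sketch: translate a DialogFlow-style query result into a game command.
# Intent names, parameter keys, and command strings are hypothetical;
# the real bridge would publish these commands on a ROS topic.

def intent_to_command(query_result):
    """Map a parsed intent (dict) to a command string for the game.

    query_result -- e.g. {"intent": "move_robot",
                          "parameters": {"direction": "left"}}
    Returns a command string, or None if the intent is not recognised.
    """
    intent = query_result.get("intent")
    params = query_result.get("parameters", {})
    if intent == "move_robot":
        # Default direction if the user did not specify one.
        direction = params.get("direction", "forward")
        return "MOVE:" + direction
    if intent == "start_game":
        return "START"
    if intent == "stop_game":
        return "STOP"
    # Unrecognised intents are passed back to the dialogue manager.
    return None
```

Keeping this mapping in one pure function makes it easy to unit-test the voice interface without a running robot or a live DialogFlow session, which is useful before the pilot experiment.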