CSE Honours

E-Pen

Context
Handwriting is one of the most important motor skills we learn as humans. Children with handwriting difficulties can find their academic progress impacted and often succeed less at school. This is even more true for people who have to learn several handwriting scripts (e.g. Latin, Chinese, Arabic). In this project we propose to explore new methods to assess and train people's handwriting in a multiscript handwriting application. The project aims to develop a new, engaging handwriting analysis tool and to integrate the analysis into a gamified application. The backend of the application will perform the handwriting analysis through a library that takes into account various features of the handwriting logs (e.g. pen pressure, tilt, speed); a sketch of this kind of analysis follows below.

Goals & Milestones
During this project, the student will:
- Develop a library able to analyse strokes and handwriting (backend)
- Develop a JS app able to record handwriting data
- Integrate gamification into the app to build a learning game
- Evaluate the implemented application with end-users, assessing both usability and performance (learning outcomes)

Topics
Handwriting, JS, Algorithms

Prerequisites
Skills: JS, Python, Git.

References
See Collection https://wafa.johal.org/tags/handwriting/
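As an illustration of the stroke-level analysis the backend library could perform, here is a minimal sketch in Python. The log format (x, y, timestamp, pressure, tilt) and the chosen features are assumptions for illustration, not the project's final design.

```python
# Minimal sketch of stroke-level feature extraction from handwriting logs.
# The sample format and feature set are assumptions, not the project's design.
import math
from statistics import mean

def stroke_features(samples):
    """samples: list of dicts with keys x, y, t (seconds), pressure, tilt."""
    speeds = []
    for a, b in zip(samples, samples[1:]):
        dt = b["t"] - a["t"]
        if dt > 0:
            dist = math.hypot(b["x"] - a["x"], b["y"] - a["y"])
            speeds.append(dist / dt)
    return {
        "duration": samples[-1]["t"] - samples[0]["t"],
        "mean_speed": mean(speeds) if speeds else 0.0,
        "mean_pressure": mean(s["pressure"] for s in samples),
        "mean_tilt": mean(s["tilt"] for s in samples),
    }

# Example: a two-sample stroke
print(stroke_features([
    {"x": 0, "y": 0, "t": 0.00, "pressure": 0.4, "tilt": 30},
    {"x": 5, "y": 2, "t": 0.05, "pressure": 0.6, "tilt": 32},
]))
```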

Find Your Voice: Use of Voice Assistant for Learning

Context
Many children struggle to find their voice in social situations. They may be shy, suffer from social anxiety, be new to a culture (migrants) or have impairments in communication due to atypical development (e.g. autism spectrum disorder, speech or hearing disorders). The voices of these children often go unheard, as they find it hard to contribute to a conversation. The Find your Voice (FyV, http://wafa.johal.org/project/fyv/) project was initiated to investigate how joke telling could help children to speak up and gain confidence. We are also interested in storytelling and general conversation. Improvements in communication can have a significant impact on confidence, and help children:
- reduce stress
- improve self-confidence
- ease social interactions
- make friends more easily
- improve literacy and language

To help children develop the ability to communicate and to tell jokes or stories to their peers, we propose leveraging social robots (e.g. NAO) and voice assistants (e.g. Alexa, Olly and Google Home) to:
- Model how to tell jokes/stories and respond to other children during conversations.
- Practice joke/story telling with a ‘friendly’ and ‘non-judgmental’ audience.
- Practice turn taking during conversation.
- Learn jokes, stories and interesting facts to tell other children.

The overall goals of the project are:
- To enable children to improve their social communication skills by learning intonation and timing, through interacting with voice assistants
- To learn how to perform in front of peers and family
- To make children more confident in social situations

The FyV project involves partners in London and California.

Goals & Milestones
At UNSW, our main goal will be to develop a ‘Learning by Teaching’ application using a robot or voice assistant. This application will allow the user to teach a virtual agent (robot or voice assistant) a joke/story. As the agent learns by demonstration, the user can practice and refine how the story/joke is told until the voice assistant (and the child) is able to tell the joke/story in a satisfactory way (a minimal sketch of this teach-and-refine loop follows below).
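A purely illustrative sketch of the teach-and-refine loop described above: the child demonstrates the joke or story, the agent stores and replays the latest demonstration, and the loop repeats until the child is satisfied. All three helpers are hypothetical stand-ins for the recording and speech synthesis of the robot or voice assistant.

```python
# Hypothetical 'Learning by Teaching' loop; the helpers below stand in for
# audio capture, speech output and a satisfaction check on the real agent.

def record_from_child():
    return input("Tell me the joke/story: ")          # stands in for audio capture

def play_back(version):
    print("Agent retells:", version)                  # stands in for speech output

def child_is_satisfied():
    return input("Happy with how I told it? (y/n) ").lower().startswith("y")

def teach_session():
    version = None
    while True:
        version = record_from_child()                 # each demonstration replaces the last
        play_back(version)
        if child_is_satisfied():
            return version                            # the agent has 'learned' the joke/story

if __name__ == "__main__":
    teach_session()
```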

Human Action Recognition from AD Movies

Context
Action recognition is crucial for robots operating around humans. Indeed, robots need to assess human actions and intentions in order to assist people in everyday-life tasks and to collaborate efficiently. The field of action recognition has aimed to use the typical sensors found on robots to recognise agents, objects and the actions being performed. The typical approach is to record a dataset of various actions and label them, but these actions are often not natural, and it can be difficult to represent the variety of ways of performing actions with a lab-built dataset. In this project we propose to use audio-described (AD) movies to label actions. AD movies integrate a form of narration that allows visually impaired viewers to understand the visual elements shown on screen. This narration often describes the actions actually depicted in the scene.

Goals & Milestones
During this project, the student will:
- Develop a pipeline to collect and crop clips of AD movies for at-home actions. This extraction tool should be flexible and allow for the integration of new actions. It will, for instance, feature video and text processing to extract [Subject + Action + Object] type data (see the sketch below).
- Investigate methods for human action recognition (HAR)
- Implement a tree model combining HAR with YOLO to identify agents and objects
- Evaluate the HAR pipeline on the Toyota HSR robot

Topics
Human Action Recognition

Prerequisites
Skills: Python, C++, Git.

References
https://www-sciencedirect-com.wwwproxy1.library.unsw.edu.au/science/article/pii/S174228761930283X
https://openaccess.thecvf.com/content_cvpr_2015/papers/Rohrbach_A_Dataset_for_2015_CVPR_paper.pdf
https://dl-acm-org.wwwproxy1.library.unsw.edu.au/doi/abs/10.1145/3355390?casa_token=MrZSE8hoPFYAAAAA:rcwHYdISRyLM5OApuN_2SASbwgBsswxx2EPHy9mGP8NaqIdvBj0q5LIa9_ChdyI_Lzfi4GX0PWjhD54
https://prior.allenai.org/projects/charades
https://arxiv.org/pdf/1708.02696.pdf
https://arxiv.org/pdf/1806.11230.pdf
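To illustrate the text-processing side of the extraction tool, the following hedged sketch pulls [Subject + Action + Object] triples from audio-description sentences using spaCy dependency parsing (assuming the en_core_web_sm model is installed); the example sentences are invented, not taken from a real AD track.

```python
# Hedged sketch: extract [Subject + Action + Object] triples from AD text
# using spaCy dependency parsing. The real pipeline would feed in narration
# transcripts aligned with the cropped video clips.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_triples(text):
    triples = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subj = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
                obj = [c for c in token.children if c.dep_ in ("dobj", "obj")]
                if subj and obj:
                    triples.append((subj[0].text, token.lemma_, obj[0].text))
    return triples

print(extract_triples("She pours the coffee. He opens the fridge."))
# e.g. [('She', 'pour', 'coffee'), ('He', 'open', 'fridge')]
```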

Machine Learning for Social Interaction Modelling

Context
The field of social human-robot interaction is growing. Understanding how communication between humans (human-human) generalises to human-robot communication is crucial for building fluent and enjoyable interactions with social robots. Every day, more datasets featuring social interaction between humans and between humans and robots are made freely available online. In this project we propose to take a data-driven approach to building predictive models of social interaction between humans (HH) and between humans and robots (HR), using three different datasets. Relevant research questions include:
- Which multimodal features are transferable from HH to HR setups?
- Are there common features that discriminate human behaviour in HH and HR scenarios (e.g. ‘Do people speak less or slower with robots?’)?

Goals & Milestones
During this project, the student will:
- Explore the datasets (PinSoRo, MHHRI and P2PSTORY): type of data (video, audio, point cloud), available labels and annotations, etc.
- Extract relevant multimodal features from each dataset
- Evaluate predictive models for each dataset (e.g. for engagement)
- Explore transfer learning from one dataset to another (see the sketch below)

There is also potential to use UNSW's National Facility for Human-Robot Interaction Research to create a new dataset.

Topics
Machine Learning, Human-Robot Interaction

Prerequisites
Skills: Python, ROS, Git.

References
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0205999
https://www.cl.cam.ac.uk/research/rainbow/projects/mhhri/
https://www.media.mit.edu/projects/p2pstory/overview/
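The transfer-learning milestone could start from something as simple as the sketch below: train an engagement classifier on features from one corpus and test it on another. The feature matrices and labels are random placeholders standing in for the multimodal features extracted from PinSoRo, MHHRI or P2PSTORY.

```python
# Illustrative cross-dataset transfer check: fit on HH features, test on HR.
# Data here is random noise used only to show the evaluation scaffolding.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X_hh, y_hh = rng.normal(size=(500, 20)), rng.integers(0, 2, 500)   # HH dataset placeholder
X_hr, y_hr = rng.normal(size=(200, 20)), rng.integers(0, 2, 200)   # HR dataset placeholder

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_hh, y_hh)

print("within-domain (HH) accuracy:", accuracy_score(y_hh, clf.predict(X_hh)))
print("transfer (HH -> HR) accuracy:", accuracy_score(y_hr, clf.predict(X_hr)))
```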

Persuasive Robots - Exploring Behavioural Styles

Context
Social robots are expected to be encountered in our everyday life, playing roles such as assistant or companion (to mention a few). Recent studies have shown the potentially harmful impact of overtrust in social robots [1], as robots may collect sensitive information without the user's knowledge. Behavioural styles allow robots to express themselves differently within the same context: given a specific gesture, keyframe manipulation can be used to generate style-based variations of that gesture. Behavioural styles have been studied in the past to improve robots' behaviour during human-robot interaction [2]. In this project, we will explore how behavioural styles can influence engagement, trust and persuasion during human-robot interaction.

Goals & Milestones
- Implement behavioural styles for the Nao robot (voice and behaviour) and for a voice assistant (voice only); see the sketch below
- Design at least two behavioural styles based on human behaviour and personality styles
- Evaluate and compare these styles via experimentation
- Design a scenario similar to the one described in paper [3]
- Set up a data collection environment (posture, video and audio) in the HRI Lab facility of UNSW
- Select appropriate tasks and/or questionnaires to measure engagement, trust and/or persuasion
- Evaluate the system via an experiment with users
- Complete the data analysis

Topics
Robotics, HRI, Psychology

Prerequisites
Skills: Python, ROS and Git.

References
[1] https://media.kasperskycontenthub.com/wp-content/uploads/sites/43/2019/10/14081257/Robots_social_impact_eng.pdf
[2] Johal, W., Pesty, S., & Calvary, G. (2014, August). Towards companion robots behaving with style. In The 23rd IEEE International Symposium on Robot and Human Interactive Communication (pp. 1063-1068). IEEE.
[3] Bainbridge, W. A., Hart, J. W., Kim, E. S., & Scassellati, B. (2011). The benefits of interactions with physically present robots over video-displayed agents. International Journal of Social Robotics, 3(1), 41-52.
[4] Peters, R., Broekens, J., Li, K., & Neerincx, M. A. (2019, July). Robot Dominance Expression Through Parameter-based Behaviour Modulation. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents (pp. 224-226). ACM.
[5] Saunderson, S., et al. (2019). It Would Make Me Happy if You Used My Guess: Comparing Robot Persuasive Strategies in Social Human-Robot Interaction. IEEE Robotics and Automation Letters. DOI: 10.1109/LRA.2019.2897143
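A minimal sketch of style-based variation through keyframe manipulation is shown below: the same gesture is replayed with different amplitude and tempo scaling. The keyframe format and the two example styles are assumptions for illustration; on the Nao the resulting keyframes would still need to respect joint limits and be played through the robot's motion API, which is not shown.

```python
# Sketch of style modulation by keyframe manipulation: scale amplitude and
# tempo of a gesture. Linear angle scaling is a simplification; real joint
# limits would have to be enforced before sending to the robot.

def apply_style(keyframes, amplitude=1.0, tempo=1.0):
    """keyframes: list of (time_s, {joint_name: angle_rad}) tuples."""
    styled = []
    for t, joints in keyframes:
        styled.append((t / tempo, {name: angle * amplitude for name, angle in joints.items()}))
    return styled

wave = [(0.0, {"RShoulderPitch": -1.0, "RElbowRoll": 0.5}),
        (0.6, {"RShoulderPitch": -1.2, "RElbowRoll": 1.0}),
        (1.2, {"RShoulderPitch": -1.0, "RElbowRoll": 0.5})]

extroverted = apply_style(wave, amplitude=1.2, tempo=1.4)  # larger, faster
introverted = apply_style(wave, amplitude=0.7, tempo=0.8)  # smaller, slower
print(extroverted[1], introverted[1])
```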

Robot Writing

Context
Handwriting is one of the most important motor skills we learn as humans. Children with handwriting difficulties can find their academic progress impacted and often succeed less at school. In this project we propose to explore new methods for a robot to write using a pen. The project will use the Fetch Robot and aims to evaluate how a robot could learn handwriting trajectories from human demonstrations.

Goals & Milestones
During this project, the student will:
- Learn about ROS
- Develop a ROS package that controls the Fetch Robot arm given a series of handwriting strokes (see the sketch below)
- Explore the use of different methods for the robot to learn letter writing from demonstrations
- Evaluate the implemented method compared to other state-of-the-art methods

Topics
ROS, Learning by Demonstration, Robotics, Handwriting

Prerequisites
Skills: Python, C++, ROS, Git.

References
See Zotero Collection https://www.zotero.org/groups/2419050/hri-unsw/collections/GQERYTFZ
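One possible shape for the ROS package is sketched below: map 2D stroke points to Cartesian waypoints on a writing plane and ask MoveIt for a Cartesian path. The page-to-robot frame mapping, plane height and "arm" group name are assumptions to verify against the lab's Fetch setup, not a definitive implementation.

```python
# Hedged sketch: turn a 2D handwriting stroke into a MoveIt Cartesian path.
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

def stroke_to_waypoints(stroke, z_plane=0.75, scale=0.001):
    """stroke: list of (x, y) points in millimetres on the page."""
    waypoints = []
    for x, y in stroke:
        pose = Pose()
        pose.position.x = 0.6 + scale * y      # assumed mapping from page to robot frame
        pose.position.y = -scale * x
        pose.position.z = z_plane              # assumed writing-plane height (m)
        pose.orientation.w = 1.0               # pen-down orientation omitted for brevity
        waypoints.append(pose)
    return waypoints

if __name__ == "__main__":
    moveit_commander.roscpp_initialize([])
    rospy.init_node("handwriting_executor")
    arm = moveit_commander.MoveGroupCommander("arm")          # assumed Fetch group name
    plan, fraction = arm.compute_cartesian_path(
        stroke_to_waypoints([(0, 0), (10, 5), (20, 0)]),
        eef_step=0.005, jump_threshold=0.0)
    if fraction > 0.9:                                        # only execute mostly-complete paths
        arm.execute(plan, wait=True)
```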

Tangible e-Ink Paper Interfaces for Learners

Context
While digital tools are increasingly used in classrooms, teachers' common practice remains to use photocopied paper documents to share and collect learning exercises from their students. With the Tangible e-Ink Paper (TIP) system, we aim to explore the use of tangible manipulatives interacting with paper sheets as a bridge between digital and paper traces of learning. Featuring an e-Ink display, a paper-based localisation system and a wireless connection, TIPs are envisioned as a versatile tool across various curriculum activities.

Goals & Milestones
- Literature review on tangible user interfaces (TUIs) in education
- Implement and test a proof of concept of TIPs for learning (see the sketch below)
- Assemble 3 TIPs (3D printing of parts, soldering, etc.)
- Install libraries on the Raspberry Pi (e.g. libdots, used for paper-based localisation, and Bluetooth communication)
- Develop two demo applications using TIPs: one for individual work (on an A4 sheet of paper) and one for collaborative work (on at least an A2 sheet)

Topics
Tangible User Interfaces, HCI

Prerequisites and Learning Outcomes
Skills: Javascript, Python or C++; Git, Qt, Raspberry Pi

References
https://infoscience.epfl.ch/record/271833/files/paper.pdf
https://infoscience.epfl.ch/record/224129/files/paper.pdf
https://infoscience.epfl.ch/record/270077/files/Robot_Analytics.pdf
https://link.springer.com/chapter/10.1007/978-3-030-29736-7_57
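The proof of concept could follow a data flow like the purely illustrative sketch below: poll the paper-based localisation, compute the feedback for the current activity, and refresh the e-Ink display. All three helper functions are hypothetical placeholders; the real implementation would call into libdots and the display/Bluetooth libraries installed on the Raspberry Pi, whose APIs are not shown here.

```python
# Purely illustrative TIP data-flow sketch; every helper is a hypothetical
# placeholder for the real localisation, application and display code.
import time

def read_position():            # hypothetical: would wrap the libdots localisation
    return {"x": 0.0, "y": 0.0, "sheet_id": 1}

def activity_feedback(pos):     # hypothetical: exercise logic for the current activity
    return f"sheet {pos['sheet_id']}: ({pos['x']:.1f}, {pos['y']:.1f})"

def refresh_display(text):      # hypothetical: would wrap the e-Ink driver
    print("[e-ink]", text)

while True:
    refresh_display(activity_feedback(read_position()))
    time.sleep(0.5)             # e-Ink refresh is slow, so a low update rate suffices
```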

Tangible Human Swarm Interaction

Context
Visuo-motor coordination problems can impair children in their academic achievements and in their everyday life. Gross visuo-motor skills, in particular, are required in a range of social and educational activities that contribute to children's physical and cognitive development, such as playing a musical instrument, ball-based sports or dancing. Children with visuo-motor coordination difficulties are typically diagnosed with developmental coordination disorder or cerebral palsy and need to undergo physical therapy. The therapy sessions are often not engaging for children and are conducted individually. In this project, we aim to design new forms of interaction with a swarm to enhance visuo-motor coordination. We propose to develop a game that allows multiple children to play collaboratively on the same table.

Goals & Milestones
- Implement a set of basic swarm behaviours using 4 Cellulo robots (see the sketch below)
- Integrate collaborative and tangible interactions
- Test the system with participants. We plan to integrate a measure of cognitive load using eye-tracking data

Topics
HCI, Health, Games, Swarm Robotics

Prerequisites
Skills: Python, C++, JS

References
https://www.epfl.ch/labs/chili/index-html/research/cellulo/
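As a platform-agnostic illustration of one basic swarm behaviour, the sketch below makes each of the four robots steer towards the group's centroid (aggregation). Positions are simulated; on the real Cellulo robots the same update rule would be driven by their pose feedback and velocity commands, whose API is not shown here.

```python
# Simulated aggregation behaviour for a 4-robot swarm: each robot moves a
# fraction of the way towards the group centroid at every step.
import numpy as np

positions = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 80.0], [100.0, 80.0]])  # mm

def aggregation_step(positions, gain=0.1):
    centroid = positions.mean(axis=0)
    velocities = gain * (centroid - positions)   # steer each robot towards the centroid
    return positions + velocities

for _ in range(50):
    positions = aggregation_step(positions)
print(positions.round(1))   # all four robots converge near the centroid (50, 40)
```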

Tangible Robots for Collaborative Online Learning

Context
Online learning presents several advantages: it decreases cost, allows more flexibility and gives access to far-away training resources. However, studies have found that it also limits communication between peers and teachers, limits physical interaction, and requires a big commitment on the student's part to plan and stay assiduous in their learning.

Goals & Milestones
In this project, we aim to design and test a novel way to engage students in collaborative online learning by using haptic-enabled tangible robots. The project will consist of:
- developing a tool allowing the design of online activities in which two or more robots are connected (see the sketch below)
- implementing a demonstrator for this new library that embeds a series of small exercises highlighting the new capability of remote haptic-assisted collaboration
- evaluating the demonstrator with a user experiment

Topics
HCI, Haptics, Robots, Collaborative Work (Training/Gaming)

Prerequisites
Skills: C++, JS

References
See Zotero Collection https://www.zotero.org/groups/2419050/hri-unsw/collections/JXBHFMBC
Schneider, B., Jermann, P., Zufferey, G., & Dillenbourg, P. (2011). Benefits of a Tangible Interface for Collaborative Learning and Interaction. IEEE Transactions on Learning Technologies, 4(3), 222–232. https://doi.org/10.1109/TLT.2010.36
Asselborn, T., Guneysu, A., Mrini, K., Yadollahi, E., Ozgur, A., Johal, W., & Dillenbourg, P. (2018). Bringing letters to life: Handwriting with haptic-enabled tangible robots. Proceedings of the 17th ACM Conference on Interaction Design and Children, 219–230.
East, B., DeLong, S., Manshaei, R., Arif, A., & Mazalek, A. (2016). Actibles: Open Source Active Tangibles. Proceedings of the 2016 ACM International Conference on Interactive Surfaces and Spaces, 469–472. https://doi.org/10.1145/2992154.2996874
Guinness, D., Muehlbradt, A., Szafir, D., & Kane, S. K. (2019a). RoboGraphics: Dynamic Tactile Graphics Powered by Mobile Robots. The 21st International ACM SIGACCESS Conference on Computers and Accessibility, 318–328. https://doi.org/10.1145/3308561.3353804
Guinness, D., Muehlbradt, A., Szafir, D., & Kane, S. K. (2019b). RoboGraphics: Using Mobile Robots to Create Dynamic Tactile Graphics. The 21st International ACM SIGACCESS Conference on Computers and Accessibility, 673–675. https://doi.org/10.1145/3308561.3354597
Guinness, D., Muehlbradt, A., Szafir, D., & Kane, S. K. (2018). The Haptic Video Player: Using Mobile Robots to Create Tangible Video Annotations. Proceedings of the 2018 ACM International Conference on Interactive Surfaces and Spaces, 203–211. https://doi.org/10.1145/3279778.3279805
Guneysu, A., Johal, W., Ozgur, A., & Dillenbourg, P. (2018). Tangible Robots Mediated Collaborative Rehabilitation Design: Can we Find Inspiration from Scripting Collaborative Learning? Workshop on Robots for Learning R4L, HRI 2018.
Guneysu Ozgur, A., Wessel, M. J., Johal, W., Sharma, K., Ozgur, A., Vuadens, P., Mondada, F., Hummel, F. C., & Dillenbourg, P. (2018). Iterative design of an upper limb rehabilitation game with tangible robots. Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, 241–250.
Guneysu Ozgur, A., Wessel, M. J., Olsen, J. K., Johal, W., Özgür, A., Hummel, F. C., & Dillenbourg, P. (2020). Gamified Motor Training with Tangible Robots in Older Adults: A Feasibility Study and Comparison with Young. Frontiers in Aging Neuroscience, 12. https://doi.org/10.3389/fnagi.2020.00059
Ishii, H., & Ullmer, B. (1997). Tangible Bits: Towards Seamless Interfaces Between People, Bits and Atoms. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, 234–241. https://doi.org/10.1145/258549.258715
Johal, W., Tran, A., Khodr, H., Özgür, A., & Dillenbourg, P. (2019). TIP: Tangible e-Ink Paper Manipulatives for Classroom Orchestration. Proceedings of the 31st Australian Conference on Human-Computer-Interaction, 595–598. https://doi.org/10.1145/3369457.3369539
Loparev, A., Westendorf, L., Flemings, M., Cho, J., Littrell, R., Scholze, A., & Shaer, O. (2017). BacPack: Exploring the Role of Tangibles in a Museum Exhibit for Bio-Design. Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction, 111–120. https://doi.org/10.1145/3024969.3025000
Okerlund, J., Shaer, O., & Latulipe, C. (2016). Teaching Computational Thinking Through Bio-Design (Abstract Only). Proceedings of the 47th ACM Technical Symposium on Computing Science Education, 698. https://doi.org/10.1145/2839509.2850569
O'Malley, C., & Fraser, D. S. (2004). Literature review in learning with tangible technologies.
Ozgur, A. G., Wessel, M. J., Asselborn, T., Olsen, J. K., Johal, W., Özgür, A., Hummel, F. C., & Dillenbourg, P. (2019). Designing Configurable Arm Rehabilitation Games: How Do Different Game Elements Affect User Motion Trajectories? 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 5326–5330. https://doi.org/10.1109/EMBC.2019.8857508
Ozgur, A., Johal, W., Mondada, F., & Dillenbourg, P. (2017). Haptic-enabled handheld mobile robots: Design and analysis. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2449–2461.
Ozgur, A., Lemaignan, S., Johal, W., Beltran, M., Briod, M., Pereyre, L., Mondada, F., & Dillenbourg, P. (2017). Cellulo: Versatile handheld robots for education. 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 119–127.
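A minimal sketch of the remote haptic-assisted collaboration idea: two connected robots are coupled by a virtual spring-damper, so the force rendered on the local robot pulls it towards its remote peer's position. The network transport and the robots' force/position APIs are assumptions left out of this sketch.

```python
# Virtual spring-damper coupling between a local robot and its remote peer.
import numpy as np

def coupling_force(local_pos, remote_pos, local_vel, stiffness=0.5, damping=0.05):
    """Returns the 2D force to render on the local robot (arbitrary units)."""
    return (stiffness * (np.asarray(remote_pos) - np.asarray(local_pos))
            - damping * np.asarray(local_vel))

# Example: the remote learner's robot is 40 mm to the right, local robot nearly still.
print(coupling_force([0.0, 0.0], [40.0, 0.0], [2.0, 0.0]))   # ~[19.9, 0.0]
```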

Voice for ROS

Context
Natural language is an important part of communication since it offers an intuitive and efficient way of conveying ideas to another individual. Enabling robots to use language efficiently is essential for human-robot collaboration. In this project, we aim to develop an interface between a dialogue manager (i.e. DialogFlow) and ROS (Robot Operating System). This will allow powerful dialogue systems to be used in human-robot interaction scenarios. A scenario using tangible robots (Cellulo) combined with a voice assistant for upper-arm rehabilitation will be implemented to show the potential of this new ROS package.

Goals & Milestones
During this project, the student will:
- Learn about Google DialogFlow and ROS
- Develop a ROS package that enables access to and manipulation of DialogFlow features (see the sketch below)
- Develop a Cellulo rehabilitation game
- Test the game with a pilot experiment

Topics
Voice Assistants, Human-Robot Interaction, ROS

Prerequisites
Skills: Python, C++, ROS, Git.

References
https://dialogflow.com/
https://www.ros.org/
http://wafa.johal.org/project/cellulo/
Hudson, C., Bethel, C. L., Carruth, D. W., Pleva, M., Juhar, J., & Ondas, S. (2017, October). A training tool for speech driven human-robot interaction applications. In 2017 15th International Conference on Emerging eLearning Technologies and Applications (ICETA) (pp. 1-6). IEEE.
Moore, R. K. (2017). Is spoken language all-or-nothing? Implications for future speech-based human-machine interaction. In Dialogues with Social Robots (pp. 281-291). Springer, Singapore.
Beirl, D., Yuill, N., & Rogers, Y. (2019). Using Voice Assistant Skills in Family Life. In Lund, K., Niccolai, G. P., Lavoué, E., Gweon, C. H., & Baker, M. (Eds.), A Wide Lens: Combining Embodied, Enactive, Extended, and Embedded Learning in Collaborative Settings, 13th International Conference on Computer Supported Collaborative Learning (CSCL) 2019, Volume 1 (pp. 96-103). Lyon, France: International Society of the Learning Sciences.
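A hedged sketch of what the DialogFlow-ROS bridge could look like, assuming the google-cloud-dialogflow v2 Python client and rospy: recognised speech arrives on a ROS topic, is sent to DialogFlow for intent detection, and the agent's reply is published back to ROS. The topic names and project id are placeholders, and service-account credentials are assumed to be configured in the environment.

```python
# Sketch of a DialogFlow-ROS bridge node; topic names and project id are placeholders.
import uuid
import rospy
from std_msgs.msg import String
from google.cloud import dialogflow

PROJECT_ID = "my-agent-project"                      # placeholder GCP project id

rospy.init_node("dialogflow_bridge")
session_client = dialogflow.SessionsClient()         # uses credentials from the environment
session = session_client.session_path(PROJECT_ID, uuid.uuid4().hex)
reply_pub = rospy.Publisher("dialogflow/reply", String, queue_size=1)

def on_speech(msg):
    """Send recognised speech text to DialogFlow and publish the agent's reply."""
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=msg.data, language_code="en"))
    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input})
    reply_pub.publish(response.query_result.fulfillment_text)

rospy.Subscriber("speech_to_text", String, on_speech)   # placeholder input topic
rospy.spin()
```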