Projects

R3D3

The R3D3 project investigated situated natural-language dialogue systems that combine limited natural-language understanding with an understanding of the nonverbal behaviour of users in a real-life context. Inspired by Star Wars’ R2D2, we developed an even more advanced robot, our R3D3. R3D3 takes the form of a Receptionist Robot that can hold an intelligent dialogue with visitors to a museum, shop, or office building. These dialogues are “double” in the sense that they involve a third party in the form of an avatar or a person shown on a screen by the robot. The R3D3 project addressed scientific ICT challenges in the areas of computer vision, spoken language interaction, and data processing, as well as practical challenges in robot development.

Partners in the project were the departments of Human Media Interaction and Robotics and Mechatronics of the University of Twente, the Dutch Police Academy, VicarVision, Sentient, NEMO Science Center, and iretail.solutions.

SOCIAL TOUCH TECHNOLOGY

In several projects we are exploring the use of the haptic modality in human-robot interaction and in mediated human-human interaction. In the COMMIT/VIEWW project we have been working on the automatic recognition of touch gestures and on the development of a Tactile Sleeve for Social Touch. In another project, sponsored by TNO, we have investigated the role of temperature in social touch.
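As an illustration of what such touch-gesture recognition can involve, the sketch below classifies short pressure-grid recordings into gesture categories. The grid size, gesture labels, features, and classifier are illustrative assumptions, not the actual pipeline used in the project:

```python
# Minimal sketch of touch-gesture classification from a fabric pressure-sensor
# grid. The 8x8 grid, the gesture labels, and the features are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(frames: np.ndarray) -> np.ndarray:
    """Summarize a (time, 8, 8) pressure recording into a fixed-length vector."""
    per_frame = frames.reshape(len(frames), -1)
    return np.concatenate([
        per_frame.mean(axis=0),           # average pressure per sensor
        per_frame.max(axis=0),            # peak pressure per sensor
        [frames.sum(axis=(1, 2)).std()],  # temporal variation of total pressure
        [len(frames)],                    # gesture duration in frames
    ])

# Placeholder training data standing in for labeled recordings of gestures
# such as "pat", "stroke", and "squeeze".
rng = np.random.default_rng(0)
X = np.stack([extract_features(rng.random((30, 8, 8))) for _ in range(60)])
y = rng.choice(["pat", "stroke", "squeeze"], size=60)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([extract_features(rng.random((30, 8, 8)))]))
```

In practice the features and classifier would be trained and validated on real labeled touch recordings rather than the random placeholder data used here.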

DE-ENIGMA

The DE-ENIGMA project, funded by Horizon 2020 (the European Union’s Framework Programme for Research and Innovation), aimed to create and evaluate the effectiveness of a robot-based technology, developed to support autistic children in their learning. The main objective was to design effective and user-adaptable styles of robot behaviour for autistic children, leading to more personalised and effective therapies than previously available.

TASTY BITS AND BYTES

The Tasty Bits and Bytes (TBB) project has been awarded funding by the board of directors of COMMIT/. In this project we have investigated the use of mixed-reality technology to enhance experiences with food and drinks. A more detailed description can be found here; partners in the project are listed here.

GREAT

The project “Game-Based Rehabilitation Experiences to Augment Therapy” was part of the IUALL project and used an interactive LED floor (provided by Ledgo) for game-based gait rehabilitation. The floor functions both as a computer screen and as a multi-touch input device: each module has pressure sensors that send their position and pressure information to the game. The second component is a user-friendly interface on a tablet, which the therapist can use to select which game will be played next or to adapt the game to the player. For instance, therapists can change the step size, the level of cognitive challenge of the puzzles, and the time within which a step should be taken.
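To make this setup concrete, here is a minimal sketch of how a floor module might report a press and how therapist-adjustable settings could parameterize a game. The event format, pressure threshold, and setting names are hypothetical, not Ledgo’s actual protocol:

```python
# Illustrative sketch (not the actual floor protocol) of module press events
# and therapist-adjustable game settings.
from dataclasses import dataclass

@dataclass
class ModuleEvent:
    x: int           # module position in the floor grid
    y: int
    pressure: float  # normalized 0..1 (assumption)

@dataclass
class GameSettings:
    step_size_cm: int = 40       # distance between consecutive step targets
    puzzle_difficulty: int = 1   # level of cognitive challenge
    step_timeout_s: float = 3.0  # time allowed to take a step

def on_module_event(event: ModuleEvent, target: tuple[int, int],
                    settings: GameSettings) -> str:
    """Score a single step against the currently highlighted target tile."""
    if (event.x, event.y) == target and event.pressure > 0.2:
        return "hit"   # advance to the next target
    return "miss"      # give feedback, keep the target lit

print(on_module_event(ModuleEvent(2, 3, 0.8), target=(2, 3), settings=GameSettings()))
```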

AVATAR / LOITER …

In the AVATAR project, we developed and demonstrated virtual characters (embodied conversational agents) that can play the role of a police officer or a witness in a police interview. The virtual interview platform for training interview skills integrates the speech recognizer developed in the NLSpraak project, a social agent platform (ASAP) for behavior generation, a dialogue module, text-to-speech generation, and virtual humans.
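The sketch below illustrates how such a turn-based pipeline can be wired together. The interfaces are hypothetical stand-ins, not the actual ASAP or NLSpraak APIs:

```python
# Hedged sketch of the interview-training pipeline's data flow; the component
# interfaces below are hypothetical stand-ins for the integrated systems.
from typing import Protocol

class Recognizer(Protocol):
    def transcribe(self, audio: bytes) -> str: ...

class DialogueModule(Protocol):
    def respond(self, utterance: str) -> str: ...

class BehaviorRealizer(Protocol):
    def perform(self, text: str) -> None: ...  # TTS + virtual-human animation

def interview_turn(audio: bytes, asr: Recognizer,
                   dialogue: DialogueModule, agent: BehaviorRealizer) -> None:
    """One turn: trainee speech in, virtual witness/officer behavior out."""
    utterance = asr.transcribe(audio)    # speech recognition
    reply = dialogue.respond(utterance)  # dialogue module picks a response
    agent.perform(reply)                 # behavior generation renders it
```

Keeping recognition, dialogue, and behavior realization behind separate interfaces like this makes it straightforward to swap in a different recognizer or agent platform.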

For this project we collaborated with the VR company CleVR. The project was sponsored by the Nationale Politie.

The AVATAR project was a continuation of the successful COMMIT/IUALL work on developing serious games for social-skills training for police officers in training. The LOITER demo shows a serious game for training the social skills of police officers. Players of the game have to resolve a conflict with a group of loitering juveniles; through playing this game, police trainees can improve their social awareness. Players interact with virtual characters (the juveniles) in a 3D environment using a full-body immersive virtual-reality system. The virtual juveniles use artificial intelligence to respond to the player according to theories from social psychology. Thus, players’ choices in how to reason with the juveniles determine the outcome of the conflict.
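As a toy illustration of this idea, the sketch below maps a player’s conversational choices to shifts in a juvenile’s attitude, which in turn determines the reaction. The actions, scores, and thresholds are invented for illustration and are not the game’s actual model:

```python
# Toy sketch of socially driven NPC responses; the stance/attitude model below
# is an illustrative simplification, not the game's actual AI.
ATTITUDE_SHIFT = {          # how a player's approach moves a juvenile's attitude
    "polite_request": +1,
    "explain_rules": +1,
    "threaten": -2,
    "ignore": -1,
}

def juvenile_response(attitude: int, player_action: str) -> tuple[int, str]:
    """Update the juvenile's attitude and return its reaction."""
    attitude += ATTITUDE_SHIFT.get(player_action, 0)
    if attitude >= 2:
        return attitude, "complies and moves along"
    if attitude <= -2:
        return attitude, "escalates the conflict"
    return attitude, "talks back but stays put"

attitude = 0
for action in ["threaten", "polite_request", "explain_rules", "explain_rules"]:
    attitude, reaction = juvenile_response(attitude, action)
    print(action, "->", reaction)
```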

NL-SPRAAK / FIOD-SPRAAK

Since January 2016, part of the costs of developing models and tools has been covered by the Nationale Politie as part of the NLSpraak project. NLSpraak is a collaboration between the Nationale Politie (sponsor) and the University of Twente. In this project, models and tools are being developed for speech recognition of Dutch.

Day-to-day police work is partly characterized by a large amount of administrative work that does not always require an officer’s specific expertise, such as writing up interrogations, wiretaps, or interactions with citizens. The Nationale Politie continuously looks for ways to relieve police officers of administrative tasks as much as possible, so that they can concentrate on their core task: supporting citizens and catching criminals.

Automatic speech recognition is a technology that has the potential to speed up some of this administrative work and, in the future, possibly even take it over entirely. Within the NLSpraak project, the Nationale Politie, the PolitieAcademie, and the University of Twente are working together on a system in which speech recognition is used to support regular police tasks. The resulting tools and models will be released for general use wherever possible.

In FIODSpraak we investigate what opportunities the automatic transcription of interviews offers for criminal investigation. This project is carried out in collaboration with Telecats and is sponsored by the FIOD/Belastingdienst.
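As an illustration of what such automatic transcription involves, the sketch below transcribes a recording with the open-source Vosk toolkit (a Kaldi-based recognizer) and an off-the-shelf Dutch model. This is not the NLSpraak or FIODSpraak toolchain itself, and the model path and file names are placeholders:

```python
# Illustration only: transcribing an interview recording with the open-source
# Vosk toolkit and a Dutch model. The model directory and WAV file are
# placeholder assumptions, not the project's own resources.
import json
import wave
from vosk import Model, KaldiRecognizer

model = Model("models/vosk-model-nl")    # hypothetical Dutch model directory
wf = wave.open("interview.wav", "rb")    # 16 kHz mono PCM recording (assumption)
rec = KaldiRecognizer(model, wf.getframerate())

transcript = []
while True:
    data = wf.readframes(4000)
    if not data:
        break
    if rec.AcceptWaveform(data):         # a segment was finalized
        transcript.append(json.loads(rec.Result())["text"])
transcript.append(json.loads(rec.FinalResult())["text"])
print(" ".join(transcript))
```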

ARIA VALUSPA

The ARIA-VALUSPA (Artificial Retrieval of Information Assistants – Virtual Agents with Linguistic Understanding, Social skills, and Personalised Aspects) project created a framework that allows easy creation of Artificial Retrieval of Information Assistants (ARIAs) capable of holding multi-modal social interactions in challenging and unexpected situations. The system can generate search queries and return the requested information by interacting with humans through virtual characters. These virtual humans can sustain an interaction with a user for some time and react appropriately to the user’s verbal and non-verbal behavior when presenting the requested information and refining search results.

Using audio and video signals as input, both verbal and non-verbal components of human communication are captured. Together with a rich and realistic emotive personality model, a sophisticated dialogue management system decides how to respond to a user’s input, be it a spoken sentence, a head nod, or a smile. The ARIA uses special speech synthesizers to create emotionally colored speech and a fully expressive 3D face to render the chosen response. Backchannelling to indicate that the ARIA understood what the user meant, or returning a smile, are but a few of the many ways in which it can employ emotionally colored social signals to improve communication.
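The fragment below sketches, in strongly simplified form, how a dialogue manager can map verbal and non-verbal input events to combined speech and facial behavior. The event types and rules are illustrative assumptions, not ARIA-VALUSPA’s actual logic:

```python
# Simplified sketch of multimodal response selection in the spirit of the ARIA
# dialogue manager; event types and rules here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class UserEvent:
    modality: str   # "speech", "head_nod", or "smile"
    content: str = ""

def select_response(event: UserEvent) -> dict:
    """Map a verbal or nonverbal user event to speech plus facial behavior."""
    if event.modality == "smile":
        return {"speech": None, "face": "smile"}   # mirror the user's smile
    if event.modality == "head_nod":
        return {"speech": "mm-hm", "face": "nod"}  # backchannel acknowledgment
    if event.modality == "speech":
        answer = f"Here is what I found about {event.content!r}."  # query result
        return {"speech": answer, "face": "neutral", "emotion": "friendly"}
    return {"speech": None, "face": "neutral"}

print(select_response(UserEvent("speech", "opening hours")))
print(select_response(UserEvent("smile")))
```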

INTERACTIVE PLAYGROUND

The Interactive Tag Playground (ITP) is an instrumented open space that allows for interactive play. Several players are tracked, and their movements are analyzed and form the basis of several game mechanics. We use a floor projection to visualize each player’s role, but also add novel interactive elements such as power-ups and bonuses. The ITP is more than entertainment: it doubles as a tool to record and study how people interact with each other and with the environment. Our final aim is to automatically steer the interactions in such a way that all players remain engaged and physically active.
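As a minimal illustration of the core tag mechanic, the sketch below transfers the tagger role when the tracked tagger comes within a tag radius of another player. The radius and positions are illustrative assumptions, not the ITP’s actual parameters:

```python
# Minimal sketch of the core tag mechanic on tracked player positions.
import math

def update_tagger(positions: dict[str, tuple[float, float]],
                  tagger: str, tag_radius_m: float = 0.5) -> str:
    """Transfer the tagger role when the tagger gets close enough to a runner."""
    tx, ty = positions[tagger]
    for player, (px, py) in positions.items():
        if player != tagger and math.hypot(px - tx, py - ty) < tag_radius_m:
            return player   # tagged player becomes the new tagger
    return tagger           # nobody tagged this frame

# Example frame: p2 stands within 0.5 m of the tagger p1, so p2 is tagged.
positions = {"p1": (1.0, 1.0), "p2": (1.3, 1.1), "p3": (4.0, 2.0)}
print(update_tagger(positions, tagger="p1"))
```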