As you are well aware, recent years have seen growing interest in spoken-language-based interaction between human beings and 'intelligent' machines. Presaged by the release of Apple's Siri in 2011, speech-enabled devices such as Amazon Echo and Google Home are now becoming a familiar feature in people's homes. Coming years are likely to see the appearance of more embodied and situated social agents (such as robots), but, as yet, there is no clear theoretical basis, nor even a set of practical guidelines, for the appropriate integration of spoken language interaction with such entities.
One possible reason for this situation is that the spoken language processing and human-robot interaction communities are fairly distinct, with only modest overlap. As a consequence, spoken language technologists often work with arbitrary robots (or limit themselves to conversational agents), while roboticists typically use off-the-shelf spoken language components with little regard for their appropriateness. For example, the wrong voice on the wrong robot can reduce, rather than increase, usability (due to the uncanny valley effect).
It is our feeling that both communities would benefit from a deeper understanding of each other's methodologies and research perspectives. We therefore propose to organise an international event that would bring the speech and robotics communities closer together. However, rather than a conventional workshop or special session at a conference, we have in mind something more strategic, for example a Dagstuhl seminar (http://www.dagstuhl.de/en/program/dagstuhl-seminars/) or an IESC meeting (http://www.iesc.univ-corse.fr/en/home/). The aim would be to create an environment for open and flexible discussion between the two communities, allowing participants to review the critical open research questions, propose appropriate evaluation protocols for speech-based human-robot interaction, and investigate opportunities to collect and share relevant corpora.
In order to progress this initiative, we would appreciate your response to the following ...
[ Y/N ] I would like to be involved in the organisation of such an event.
[ Y/N ] I would be interested in participating in such an event.
[ Y/N ] I would be happy to receive further information about this event.
[ Y/N ] Please remove me from your mailing list.
Finally, please forward this e-mail to anyone you think might be interested in this initiative.
Roger K. Moore (University of Sheffield, UK)
Joseph Mariani & Laurence Devillers (LIMSI-CNRS, France)
Tatsuya Kawahara (Kyoto University, Japan)
Nestor Becerra Yoma (Universidad de Chile, Chile)
Matthias Scheutz (Tufts University, USA)
Prof ROGER K MOORE* BA(Hons) MSc PhD FIOA FISCA MIET
Chair of Spoken Language Processing
Vocal Interactivity Lab (VILab), Sheffield Robotics
Speech & Hearing Research Group (SPandH)
Department of Computer Science, UNIVERSITY OF SHEFFIELD
Regent Court, 211 Portobello, Sheffield, S1 4DP, UK
* Winner of the 2016 Antonio Zampolli Prize for "Outstanding Contributions to the Advancement of Language Resources & Language Technology Evaluation within Human Language Technologies"
Tel: +44 (0) 11422 21807
Fax: +44 (0) 11422 21810
Mob: +44 (0) 7910 073631
Editor-in-Chief: COMPUTER SPEECH AND LANGUAGE