The Multimodal Interaction Activity seeks to extend the Web so that users can dynamically select the mode of interaction best suited to their current needs, including any disabilities, and so that Web application developers can provide an effective user interface for whichever modes the user selects. With multimodal Web applications, users can provide input via speech, handwriting, and keystrokes, with output presented via displays, pre-recorded and synthetic speech, audio, and tactile mechanisms such as mobile phone vibrators and Braille strips.
The goal of the Multimodal Interaction Activity is to clearly define how to author concrete multimodal Web applications, for example, coupling a local GUI (e.g., an HTML user agent) with a remote speech interface (e.g., a VoiceXML user agent). The Multimodal Interaction Working Group serves as a central point of coordination within W3C for multimodal work, and collaborates with other related Working Groups, e.g., Voice Browser, Scalable Vector Graphics, Compound Document Formats, Web Applications, and Ubiquitous Web Applications.
The MMI WG held a Webinar on "Developing Portable Mobile Applications with Compelling User Experience using the W3C MMI Architecture" on January 31. The 90-minute Webinar was aimed at Web developers who may find it daunting to incorporate innovative input and output methods such as speech, touch, gesture, and swipe into their applications, given the diversity of mobile devices and programming techniques available today. The Webinar drew 134 attendees and prompted lively discussion of rich multimodal Web applications. As a result, the group decided to hold a workshop on "Rich Multimodal Application Development" on 22-23 July 2013 in the New York metropolitan area, US.
The Proposed Recommendation of Emotion Markup Language (EmotionML) 1.0 was published on 16 April 2013, and the group is now preparing the final W3C Recommendation. In addition, the second Working Draft of EMMA 1.1 was published on 27 June 2013. Changes from the previous draft are listed in the Status of This Document section of the specification.
The W3C Workshop on Rich Multimodal Application Development was held on 22-23 July 2013 in the New York metropolitan area, US. The draft minutes from the workshop are available on the W3C server, and a summary will be published shortly.
The group will publish the EmotionML Recommendation shortly, and will continue work on EMMA 1.1 and related WG Notes to support new features for multimodal applications, informed by the discussion at the W3C Workshop on Rich Multimodal Application Development. The group also plans to hold further Webinars based on feedback from the workshop.
The group will hold its face-to-face (F2F) meeting during TPAC 2013 in Shenzhen in November.
| Group | Chair | Team Contact | Charter |
| --- | --- | --- | --- |
| Multimodal Interaction Working Group (participants) | Deborah Dahl | Kazuyuki Ashimura | Chartered until 31 March 2014 |
This Activity Statement was prepared for TPAC 2013 per section 5 of the W3C Process Document. Generated from group data.
Kazuyuki Ashimura, Multimodal Interaction Activity Lead