
Multimodal Interaction Activity

News

  • 24 September 2013: W3C Webinar: Discovery in Distributed Multimodal Interaction
    The second MMI webinar on "Discovery in Distributed Multimodal Interaction" was held on September 24, 2013, at 11:00 a.m. ET.
    Prior to this second webinar, the MMI Working Group had held the W3C Workshop on Rich Multimodal Application Development on July 22-23 in the New York metropolitan area, US, where it identified that distributed and dynamic applications depend on the ability of devices and environments to find each other and learn what modalities they support. This second webinar therefore focused on device/service discovery for handling Modality Components of the MMI Architecture dynamically.
    The discussion during the webinar was relevant to anyone looking to take advantage of the dramatic increase in new interaction modes, whether for health care, financial services, broadcasting, automotive, gaming, or consumer devices.
    Several experts from the industry and analyst communities also shared their experiences and views on the explosive growth of opportunities for developing applications that provide enhanced multimodal user experiences. Read more at the Webinar site.
  • 22-23 July 2013: The W3C Workshop on Rich Multimodal Application Development was held in the New York metropolitan area, US.

More News ...

Why Multimodal Interaction?

Multimodal interaction offers significant ease-of-use benefits over uni-modal interaction, for instance when hands-free operation is needed, for mobile devices with limited keyboards, and for controlling other devices when a traditional desktop computer is unavailable to host the application user interface. Interest in multimodal interaction is being driven by advances in embedded and network-based speech processing, which are creating opportunities both for integrated multimodal Web browsers and for solutions that separate the handling of visual and aural modalities, for example by coupling a local HTML5 user agent with a remote speech service.

Target audience >>

Current Situation

The goal of the Multimodal Interaction Working Group is to provide standards that enable interaction using a wide variety of modalities. These include modalities that are available today, such as touch, keyboard, and speech, as well as emerging modalities such as handwriting, camera, and accelerometers. Because the set of interaction modalities is ever-expanding, the group has focused on a generic architecture that defines communication between modality components and an interaction manager, based on standard life-cycle events. This architecture is described in the Multimodal Architecture and Interfaces specification. The group is now launching a complementary work item to address registration and discovery of MMI Architecture components.
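
To make the life-cycle event mechanism concrete, the sketch below shows roughly how a StartRequest, sent from an Interaction Manager to a speech Modality Component, might be serialized in XML. The URIs, the context and request identifiers, and the referenced dialog document are invented placeholders, and the exact element and attribute names should be checked against the Multimodal Architecture and Interfaces specification.

  <mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
    <!-- The Interaction Manager asks the Modality Component to start processing -->
    <mmi:startRequest source="http://example.com/im"
                      target="http://example.com/speech-mc"
                      context="context-1" requestID="request-1">
      <!-- Placeholder URL of the markup the Modality Component should run -->
      <mmi:contentURL href="http://example.com/dialog.vxml"/>
    </mmi:startRequest>
  </mmi:mmi>

In this exchange pattern, the Modality Component would answer with a StartResponse carrying the same context and requestID values, and later report completion with a DoneNotification.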

The details of the interpretation of user input captured by the various modalities and sent to the Interaction Manager are expressed using the Extensible MultiModal Annotation (EMMA) specification. The Working Group is also addressing the underlying representation of two basic forms of user input -- ink and emotion -- for which no standards previously existed. The Ink Markup Language (InkML) standard describes how ink and gesture input can be represented in XML, and the Emotion Markup Language (EmotionML) specification describes an XML representation for emotion.
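
For concreteness, the fragments below sketch what minimal documents in these three languages can look like: an EMMA interpretation of a spoken request, an InkML pen trace, and an EmotionML annotation. The vocabularies, values, and coordinates are invented for illustration only; see the respective specifications for the full syntax.

  <!-- EMMA: one interpretation of the utterance "flights from boston to denver" -->
  <emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
    <emma:interpretation id="int1" emma:medium="acoustic" emma:mode="voice"
                         emma:confidence="0.8"
                         emma:tokens="flights from boston to denver">
      <origin>Boston</origin>
      <destination>Denver</destination>
    </emma:interpretation>
  </emma:emma>

  <!-- InkML: a single pen trace given as comma-separated x y points -->
  <ink xmlns="http://www.w3.org/2003/InkML">
    <trace>10 0, 9 14, 8 28, 7 42, 6 56</trace>
  </ink>

  <!-- EmotionML: a detected emotion category with a confidence value -->
  <emotionml xmlns="http://www.w3.org/2009/10/emotionml"
             category-set="http://www.w3.org/TR/emotion-voc/xml#big6">
    <emotion>
      <category name="happiness" confidence="0.9"/>
    </emotion>
  </emotionml>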

The work of the Multimodal Interaction Working Group applies to a wide variety of interactions -- not only interactions with the traditional desktop browser and keyboard, but also interactions in mobile contexts. It also applies to use cases where the devices involved, such as household appliances, automobiles, or televisions, have very diverse forms of displays and input controls.

The Working Group is chartered through 31 July 2013 under the terms of the W3C Patent Policy (5 February 2004 Version). To promote the widest adoption of Web standards, W3C seeks to issue Recommendations that can be implemented, according to this policy, on a Royalty-Free basis.

The Working Group is chaired by Deborah Dahl. The W3C Team Contact is Kazuyuki Ashimura.

We want to hear from you!

We are very interested in your comments and suggestions. If you have implemented multimodal interfaces, please share your experiences with us; we are particularly interested in reports on implementations and their usability for both end users and application developers. We welcome comments on any of our published documents on our public mailing list. To subscribe to the discussion list, send an email to www-multimodal-request@w3.org with the word subscribe in the subject header. To unsubscribe, send an email to www-multimodal-request@w3.org with the word unsubscribe in the subject header. Previous discussion can be found in the public mailing list archive.

How to join the Working Group

New participants are always welcome. If your organization is already a member of W3C, ask your W3C Advisory Committee Representative (member-only link) to fill out the online registration form to confirm that your organization is prepared to commit the time and expense involved in participating in the group. You will be expected to attend weekly Working Group teleconferences and all Working Group face-to-face meetings (about 2 or 3 times a year), and to respond in a timely fashion to email requests. Further details about joining are available on the Working Group (member-only link) page. Requirements for patent disclosures, as well as terms and conditions for licensing essential IPR, are given in the W3C Patent Policy.

More information about the W3C is available, as is information about joining the W3C.

Patent Disclosures

W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent.
