The Multimodal Interaction Working Group closed in February 2017
News
8 September 2015:
First Public Working Draft: EMMA: Extensible MultiModal Annotation markup language Version 2.0
The Multimodal Interaction
Working Group has published a Working Draft
of EMMA:
Extensible MultiModal Annotation markup language Version 2.0. This
specification describes markup for representing interpretations of
user input (speech, keystrokes, pen input, etc.) and productions of
system output together with annotations for confidence scores,
timestamps, medium, etc. It forms part of the proposals for the W3C
Multimodal Interaction Framework.
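For illustration, here is a minimal sketch of an EMMA document in the style of EMMA 1.0, which Version 2.0 extends. The utterance, slot values, confidence score, and timestamps are hypothetical:

    <emma:emma version="1.0"
               xmlns:emma="http://www.w3.org/2003/04/emma">
      <!-- One interpretation of a spoken utterance; all values here
           are hypothetical illustrations -->
      <emma:interpretation id="interp1"
                           emma:medium="acoustic"
                           emma:mode="voice"
                           emma:confidence="0.75"
                           emma:start="1087995961542"
                           emma:end="1087995963542"
                           emma:tokens="flights from boston to denver">
        <origin>Boston</origin>
        <destination>Denver</destination>
      </emma:interpretation>
    </emma:emma>

The emma:confidence, emma:start/emma:end, and emma:medium annotations correspond to the confidence scores, timestamps, and medium mentioned above; the application-specific payload (origin, destination) is carried inside the interpretation.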
23 June 2015:
New charter for the Multimodal Interaction Working Group approved
The W3C Multimodal Interaction Working Group has been rechartered to continue its work through 31 December 2016.
As we interact with technology through an ever more diverse range of devices, the need for standards for multimodal interaction becomes increasingly clear.
See also the
announcement sent to the MMI public list
for more information.
11 June 2015:
"Discovery & Registration of Multimodal Modality Components: State Handling"
is published as a First Public Working Draft
The Multimodal Interaction
Working Group has published a Working Draft
of Discovery
& Registration of Multimodal Modality Components: State
Handling. This document is addressed to people who want to develop Modality Components for multimodal applications distributed over a local network or “in the cloud”. In a multimodal system implemented according to the Multimodal Architecture specification, Modality Components must be discovered and registered so that the overall state of the distributed elements is preserved. Modality Components can then be composed with automation mechanisms so that the application adapts to the state of the surrounding environment. Learn more about
the Multimodal Interaction
Activity.
Multimodal interaction offers significant ease-of-use benefits over unimodal interaction: for instance, when hands-free operation is needed, on mobile devices with limited keyboards, and when controlling other devices where a traditional desktop computer is unavailable to host the application user interface. Interest is being driven by advances in embedded and network-based speech processing, which are creating opportunities both for integrated multimodal Web browsers and for solutions that separate the handling of visual and aural modalities, for example by coupling a local HTML5 user agent with a remote speech service.
The goal of the Multimodal Interaction Working Group is to
provide standards that will enable interaction using a wide
variety of modalities. These include modalities that are already widely available, such as touch, keyboard, and speech, as well as emerging modalities such as handwriting, camera, and accelerometer input. Because of the ever-expanding set of interaction
modalities, the group has focused on a generic architecture that
defines communication between modality components and an
interaction manager, based on standard life cycle events. This
architecture is described in the Multimodal Architecture
and Interfaces specification. The group is now launching a
complementary work item to address the areas of registration and
discovery of MMI Architecture components.
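As a brief illustration, the following is a minimal sketch of one such life cycle event, a StartRequest sent by an Interaction Manager to a Modality Component, based on the Multimodal Architecture and Interfaces specification; the URIs and identifiers are hypothetical:

    <mmi xmlns="http://www.w3.org/2008/04/mmi-arch" version="1.0">
      <!-- The Interaction Manager asks a Modality Component to start
           processing; Source, Target, Context, and RequestID values
           are hypothetical -->
      <StartRequest Source="http://example.com/im"
                    Target="http://example.com/speech-mc"
                    Context="http://example.com/context-1"
                    RequestID="request-1">
        <ContentURL href="http://example.com/dialog.vxml"/>
      </StartRequest>
    </mmi>

The Modality Component would reply with a matching StartResponse carrying the same Context and RequestID, which is how the Interaction Manager correlates requests and responses across distributed components.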
Interpretations of user input captured by the various modalities and sent to the Interaction Manager are expressed using the Extensible MultiModal Annotation (EMMA) specification.
The work of the Multimodal Interaction Working Group applies to a wide variety of interactions: not only interaction with the traditional desktop browser and keyboard, but also interaction in mobile contexts. It also covers use cases where the devices involved, such as household appliances, automobiles, or televisions, have very diverse forms of displays and input controls.
The Working Group is chartered through 31 December 2016 under the
terms of the W3C
Patent Policy (5 February 2004 Version). To promote the
widest adoption of Web standards, W3C seeks to issue
Recommendations that can be implemented, according to this policy,
on a Royalty-Free basis.
We are very interested in your comments and suggestions. If you
have implemented multimodal interfaces, please share your
experiences with us, as we are particularly interested in reports
on implementations and their usability for both end-users and
application developers. We welcome comments on any of our
published documents on our public mailing list. To subscribe to the discussion list, send an email to www-multimodal-request@w3.org with the word subscribe in the subject header. Previous discussion can be found in the public archive. To unsubscribe, send an email to www-multimodal-request@w3.org with the word unsubscribe in the subject header.
How to join the Working Group
New participants are always welcome. If your organization is
already a member
of W3C, ask your W3C
Advisory Committee Representative (member only
link) to fill out the online registration form to confirm
that your organization is prepared to commit the time and expense
involved in participating in the group. You will be expected to attend weekly Working Group teleconferences and all Working Group face-to-face meetings (about two or three times a year), and to respond in a timely fashion to email requests. Further details about
joining are available on the Working
Group (member only
link) page. Requirements for patent disclosures, as well as
terms and conditions for licensing essential IPR are given in the
W3C Patent
Policy.
W3C maintains a public list of any patent disclosures made
in connection with the deliverables of the group; that page also
includes instructions for disclosing a patent.