W3C MMI
Known as: MMI
The Multimodal Interaction Activity is a W3C initiative aiming to provide means (mostly XML) to support multimodal interaction scenarios on the…
Source: Wikipedia
Related topics
10 relations
Emotion Markup Language
Mobile phone
Multimodal Architecture and Interfaces
Personal computer
…
Broader (1)
Multimodal interaction
Papers overview
Semantic Scholar uses AI to extract papers important to this topic.
Multimodal Fusion and Fission within the W3C MMI Architectural Pattern
Dirk Schnelle-Walka, Carlos Duarte, Stefan Radomski
2017
Corpus ID: 63655922
The current W3C recommendation for multimodal interfaces provides a standard for the message exchange and overall structure of…

Discovery and Registration: Finding and Integrating Components into Dynamic Systems
B. Rodriguez, Jean-Claude Moissinac
2017
Corpus ID: 64187078
One of the major gaps in the current HTML5 web platform is the lack of an interoperable means for a multimodal application to…

SCXML on Resource Constrained Devices
Stefan Radomski, J. Heuschkel, Dirk Schnelle-Walka, M. Mühlhäuser
2017
Corpus ID: 63823283
Ever since their introduction as a visual formalism by Harel et al. in 1987, state-charts played an important role to formally…

The W3C MMI Architecture in the Context of the Smart Car
Dirk Schnelle-Walka, Stefan Radomski
SmartObject@IUI, 2017
Corpus ID: 18772046
With the GENIVI project, an open source approach to ease the development of scalable in-vehicle infotainment systems is available…

Formal verification of multimodal dialogs in pervasive environments
Stefan Radomski
2015
Corpus ID: 34414309
Providing reliable and coherent interfaces to end-users in pervasive environments with a wealth of connected sensors and…

Modern Standards for VoiceXML in Pervasive Multimodal Applications
Dirk Schnelle-Walka, Stefan Radomski, M. Mühlhäuser
2015
Corpus ID: 63406882
In this chapter, we will consider the language support of VoiceXML 2.1 to express flexible dialogs in pervasive environments…

Accessible TV based on the W3C MMI Architecture
K. Ashimura, Osamu Nakamura, M. Isshiki
Global Conference on Consumer Electronics, 2014
Corpus ID: 3362141
These days Web technology is applied to various CE devices, e.g., smart TVs. However, ordinary GUI is not necessarily the best…

Engineering interactive systems with SCXML
Dirk Schnelle-Walka, Stefan Radomski, T. Lager, Jim Barnett, D. Dahl, M. Mühlhäuser
Engineering Interactive Computing System, 2014
Corpus ID: 426814
The W3C is about to finalize the SCXML standard to express Harel state-machines as XML documents. In unison with the W3C MMI…

JVoiceXML as a modality component in the W3C multimodal architecture
Dirk Schnelle-Walka, Stefan Radomski, M. Mühlhäuser
Journal on Multimodal User Interfaces, 2013
Corpus ID: 255541157
Research regarding multimodal interaction led to a multitude of proposals for suitable software architectures. With all…

Extensible MultiModal Annotation (EMMA)
M. Froumentin
2004
Corpus ID: 41093007
This talk will introduce the W3C Multimodal Interaction Activity, whose goal is to design a framework of specifications to enable…

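Several of the entries above build on SCXML, the W3C standard for expressing Harel state machines as XML documents; SCXML is commonly used to implement the interaction manager in the W3C MMI architecture. As a rough, minimal sketch (not taken from any of the listed papers; the state and event names are invented for illustration), an SCXML document describing a two-state dialog could look like this:

  <scxml xmlns="http://www.w3.org/2005/07/scxml" version="1.0" initial="idle">
    <!-- hypothetical states: wait for user input, then handle it -->
    <state id="idle">
      <transition event="user.input" target="processing"/>
    </state>
    <state id="processing">
      <transition event="done" target="idle"/>
    </state>
  </scxml>

In the W3C MMI architectural pattern, such a state machine would typically act as the interaction manager that coordinates modality components via the architecture's life-cycle events.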