Multimedia and multimodal systems: commonalities and differences
@inproceedings{Anastopoulou2001MultimediaAM,
  title={Multimedia and multimodal systems: commonalities and differences},
  author={Stamatina Anastopoulou and Chris Baber and Mike Sharples},
  year={2001}
}
2. Definitions
Due to differences in terminology usage in the literature, the terms modality, medium and representation need to be defined. In a communication act, such as learning in a classroom, modality refers to the sensory or perceptual experience (e.g. visual, tactile, etc.) and is closely related to the individual. Medium is a means of conveying a representation (to a human), e.g. a diagram or a text. Representation sketches or stores information, e.g. a semantic net or the English language…
14 Citations
On-Line Assessment: An Audio-Visual Approach
- Computer Science
- 2013
This paper discusses the final conclusions and empirically derived guidelines from the investigation of the role of some multimodal metaphors in on-line assessment interfaces…
Culturally responsive and meaningful music education: Multimodality, meaning-making, and communication in diverse learning contexts
- Education, Sociology · Research Studies in Music Education
- 2021
Music is learned and taught in multiple ways dependent on the socio-cultural contexts in which learning occurs. The processes employed by music teachers have been extensively explored by music…
Semantic-associative visual content labelling and retrieval: A multimodal approach
- Computer Science · Signal Process. Image Commun.
- 2007
Developing Literacy and the Arts in Schools
- Education
- 2019
The teaching of the arts and literacy in schools is often at odds with one another. The desire for schools to improve results on high-stakes testing can lead to a narrow view of literacy rather than…
What is Multimodality?
- Computer Science · MMSR
- 2021
A new task-relative definition of (multi)modality in the context of multimodal machine learning that focuses on representations and information that are relevant for a given machine learning task is proposed.
Guidelines for edutainment in e-learning systems
- Computer Science · ICSE 2011
- 2011
The study tested the benefits of integration of various modalities representing natural recorded speech, earcons, and virtual avatars within the e-learning framework through three experiments using usability parameters that include efficiency, effectiveness, and user satisfaction.
Multimodal interface design for multimodal meeting content retrieval
- Computer Science · ICMI '04
- 2004
This thesis will investigate which modalities, and in which combinations, are best suited for use in a multimodal interface that allows users to retrieve the content of recorded and processed…
Multimodal Semantic-Associative Collateral Labelling and Indexing of Still Images
- Computer Science · 2007 International Workshop on Content-Based Multimedia Indexing
- 2007
A novel framework for multimodal semantic-associative collateral image labelling, aiming at associating image regions with textual keywords, is described, and the notion of collateral context is introduced, which is represented as a co-occurrence matrix of the visual keywords.
The Importance of Exploring How Culture and Society Impact on Music Learning and Teaching
- Sociology
- 2018
Music is heavily influenced by the society and culture in which it is produced. Consequently, the ways in which music is taught and learnt are also impacted on by social and cultural values and…
References
Integration and Synchronization of Input Modes during Multimodal Human-Computer Interaction
- Computer Science · CHI
- 1997
The present research analyzed multimodal interaction while people spoke and wrote to a simulated dynamic map system and revealed that the temporal precedence of writing over speech was a major theme, with pen input conveying location information first in a sentence.
Perceptual user interfaces
- Computer Science, Art
- 2000
This chapter describes the emerging Perceptual User Interfaces (PUI) field and then reports on three PUI-motivated projects: computer vision-based techniques to visually perceive relevant information about the user, applied to accommodate a wider range of scenarios, tasks, users and preferences.
Natural communication with information systems
- Computer Science · Proceedings of the IEEE
- 2000
An experimental multimodal system is developed to study several aspects of natural style human-computer communication and the technologies of image and gaze processing, hands-free conversation, and force feedback tactile transduction are combined and used simultaneously for manipulating objects in a shared workspace.
Media integration in multimodal interfaces
- Computer Science · Proceedings of First Signal Processing Society Workshop on Multimedia Signal Processing
- 1997
The specific requirements revealed by the practical experience when designing multimodal interfaces are described, which are designed keeping in mind that they will be used in an isolated way, not in combination with other devices.
MVIEWS: multimodal tools for the video analyst
- Computer Science · IUI '98
- 1998
MVIEWS is a system for annotating, indexing, extracting, and disseminating information from video streams for surveillance and intelligence applications, implemented within the Open Agent Architecture, a distributed multiagent framework that enables rapid integration of component technologies.
Artificial experts - social knowledge and intelligent machines
- Computer Science · Inside Technology
- 1990
Sociologist Harry Collins explains what computers can't do, but also studies the ordinary and extraordinary things that they can do, and argues that although machines are limited because they cannot reproduce in symbols what every community knows, people give them abilities because of the way they embed them in their society.
QuickSet: multimodal interaction for distributed applications
- Computer Science · MULTIMEDIA '97
- 1997
Philip R. Cohen, Michael Johnston, David McGee, Sharon Oviatt, Jay Pittman, Ira Smith, Liang Chen and Josh Clow, Center for Human Computer…
Introduction. Intelligence and Multimodality in Multimedia Interfaces: Research and Applications
- J. Lee (ed.). Menlo
- 1996
Intelligence and Multimodality in Multimedia Interfaces: Research and Applications
- Computer Science · AAAI 1997
- 1997