Abstract
Easy and efficient access to large amounts of data has become an essential aspect of everyday life. In this paper we investigate ways of supporting information representation through the combined use of multiple perceptual modalities, such as sight, touch, and kinesthesia. We present a theoretical framework for analyzing these approaches and exemplify our findings with case studies of three emerging projects. The results contribute to a larger discussion of multimodal information representation at the intersection of theory and practice.
| Original language | English |
| --- | --- |
| Article number | 5507 |
| Journal | First Monday |
| Volume | 20 |
| Issue number | 6 |
| Publication status | Published - 1 Jun 2015 |
Scopus Subject Areas
- Human-Computer Interaction
- Computer Networks and Communications
- Law