Modality (human–computer interaction)

{{Short description|Type of data}}
{{distinguish|Mode (user interface)}}
In the context of [[human–computer interaction]], a '''modality''' is the classification of a single independent channel of sensory [[input/output]] between a computer and a human. Such channels may differ based on sensory nature (e.g., visual vs. auditory),<ref name="HCI Overview2">{{cite journal|last1 = Karray|first1 = Fakhreddine|last2 = Alemzadeh|first2 = Milad|last3 = Saleh|first3 = Jamil Abou|last4 = Arab|first4 = Mo Nours|title = Human-Computer Interaction: Overview on State of the Art|journal = International Journal on Smart Sensing and Intelligent Systems|date = March 2008|volume = 1|issue = 1| pages=137–159 | doi=10.21307/ijssis-2017-283 |url = http://www.s2is.org/issues/v1/n1/papers/paper9.pdf|accessdate = April 21, 2015|archive-url = https://web.archive.org/web/20150430205510/http://s2is.org/Issues/v1/n1/papers/paper9.pdf|archive-date = April 30, 2015|url-status = dead}}</ref> or other significant differences in processing (e.g., text vs. image).<ref>{{cite arXiv | eprint=2301.13823 | author1=Jing Yu Koh | last2=Salakhutdinov | first2=Ruslan | last3=Fried | first3=Daniel | title=Grounding Language Models to Images for Multimodal Inputs and Outputs | date=2023 | class=cs.CL }}</ref>
A system is designated unimodal if it has only one modality implemented, and [[multimodal interaction|multimodal]] if it has more than one.<ref name="HCI Overview2" /> When multiple modalities are available for some tasks or aspects of a task, the system is said to have overlapping modalities. If multiple modalities are available for a task, the system is said to have redundant modalities. Multiple modalities can be used in combination to provide complementary methods that may be redundant but convey information more effectively.<ref>{{Cite book|title = Interactive Systems. Design, Specification, and Verification|last1 = Palanque|first1 = Philippe|last2 = Paterno|first2 = Fabio|publisher = Springer Science & Business Media|year = 2001|isbn = 9783540416630|pages = [https://archive.org/details/springer_10.1007-3-540-44675-3/page/n50 43]|url = https://archive.org/details/springer_10.1007-3-540-44675-3}}</ref> Modalities can be generally defined in two forms: computer-human and human-computer modalities.
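For illustration, the following minimal sketch encodes these distinctions; the <code>Modality</code> type, the <code>System</code> shape, and the task names are hypothetical and used only as an example, not part of any standard API:

<syntaxhighlight lang="typescript">
// Hypothetical model of a system's sensory channels, for illustration only.
type Modality = "vision" | "audition" | "tactition";

interface System {
  // Which modalities can carry each task (e.g. "notify user").
  tasks: Record<string, Modality[]>;
}

// A system is multimodal if more than one modality is implemented overall.
function isMultimodal(system: System): boolean {
  const used = new Set(Object.values(system.tasks).flat());
  return used.size > 1;
}

// A task has redundant modalities when more than one modality can convey it.
function redundantTasks(system: System): string[] {
  return Object.entries(system.tasks)
    .filter(([, modalities]) => modalities.length > 1)
    .map(([task]) => task);
}

const phone: System = {
  tasks: {
    "notify user": ["vision", "audition", "tactition"],
    "show map": ["vision"],
  },
};
console.log(isMultimodal(phone));   // true
console.log(redundantTasks(phone)); // ["notify user"]
</syntaxhighlight>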
 
==Computer–human modalities==
Computers utilize a wide range of technologies to communicate and send information to humans:
* Common modalities
** [[Visual perception|Vision]] – computer graphics typically through a screen
** [[Hearing (sense)|Audition]] – various audio outputs
** [[Haptic technology|Tactition]] – vibrations or other movement
 
* Uncommon modalities
** [[Taste|Gustation]] (taste)
** [[Equilibrioception]] (balance)
 
Any human sense can be used as a computer to human modality. However, the modalities of [[visual perception|seeing]] and [[hearing (sense)|hearing]] are the most commonly employed since they are capable of transmitting more information at a higher speed than other modalities, 250 to 300<ref name=Ziefle98>{{cite journal|last1=Ziefle|first1=M|title=Effects of display resolution on visual performance.|journal=Human Factors|date=December 1998|volume=40|issue=4|pages=554–68|pmid=9974229|doi=10.1518/001872098779649355}}</ref> and 150 to 160<ref>Williams, J. R. (1998). Guidelines for the use of multimedia in instruction, Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting, 1447–1451</ref> [[words per minute]], respectively. Though not commonly implemented as a computer-human modality, tactition can achieve an average of 125 wpm<ref>{{cite web|title=Braille|url=http://www.acb.org/node/67|website=ACB|publisher=American Council of the Blind|accessdate=21 April 2015}}</ref> through the use of a [[refreshable Braille display]]. Other more common forms of tactition are smartphone and game controller vibrations.
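As an illustrative sketch, a web application can address three of these output channels with the same message, assuming a browser that exposes the standard Web Speech synthesis and Vibration APIs and hardware that supports vibration:

<syntaxhighlight lang="typescript">
// Deliver one message over three computer-to-human modalities.
function notifyUser(message: string): void {
  // Vision: render the message as on-screen text.
  const banner = document.createElement("p");
  banner.textContent = message;
  document.body.appendChild(banner);

  // Audition: speak the message with the Web Speech synthesis API.
  window.speechSynthesis.speak(new SpeechSynthesisUtterance(message));

  // Tactition: a short vibration pattern, where the hardware supports it.
  if ("vibrate" in navigator) {
    navigator.vibrate([200, 100, 200]); // vibrate, pause, vibrate (milliseconds)
  }
}

notifyUser("Download complete");
</syntaxhighlight>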
 
==Human–computer modalities==
Computers can be equipped with various types of [[input devices]] and sensors to allow them to receive information from humans. Common input devices are often interchangeable if they have a standardized method of communication with the computer and [[Affordance|afford]] practical adjustments to the user. Certain modalities can provide a richer interaction depending on the context, and having options for implementation allows for more robust systems.<ref>{{Cite book|title = Berkshire Encyclopedia of Human-computer Interaction|last = Bainbridge|first = William|publisher = Berkshire Publishing Group LLC|year = 2004|isbn = 9780974309125|pages = 483|url = https://books.google.com/books?id=568u_k1R4lUC}}</ref>
 
* Simple modalities
** [[Keyboard (computing)|Keyboard]]
** [[Pointing device]]
** [[Touchscreen]]
 
* Complex modalities
** [[Computer vision]]
** [[Speech recognition]]
** [[Accelerometer|Motion]]
** [[Orientation (geometry)|Orientation]]
With the increasing popularity of [[smartphones]], the general public is becoming more comfortable with the more complex modalities. Motion and orientation are commonly used in smartphone mapping applications. Speech recognition is widely used in virtual assistant applications. Computer vision is now common in camera applications that are used to scan documents and QR codes.
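A minimal sketch of receiving two of these complex modalities in a web application on a smartphone, assuming a browser that exposes the standard <code>DeviceOrientationEvent</code> and Web Speech recognition APIs (the latter is still prefixed as <code>webkitSpeechRecognition</code> in some browsers):

<syntaxhighlight lang="typescript">
// Orientation modality: the browser reports the device's attitude
// (beta/gamma tilt angles) whenever the user moves the phone.
window.addEventListener("deviceorientation", (event: DeviceOrientationEvent) => {
  console.log(`front-back tilt: ${event.beta}°, left-right tilt: ${event.gamma}°`);
});

// Speech modality: the Web Speech API streams recognized phrases.
const SpeechRecognitionCtor =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
const recognizer = new SpeechRecognitionCtor();
recognizer.continuous = true;
recognizer.onresult = (event: any) => {
  const latest = event.results[event.results.length - 1];
  console.log(`heard: ${latest[0].transcript}`);
};
recognizer.start();
</syntaxhighlight>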
 
==Using multiple modalities==
{{main|Multimodal interaction}}
Having multiple modalities in a system gives more [[affordance]] to users and can contribute to a more robust system. Having more also allows for greater [[accessibility]] for users who work more effectively with certain modalities. Multiple modalities can be used as a backup when certain forms of communication are not possible. This is especially true in the case of redundant modalities, in which two or more modalities are used to communicate the same information. Certain combinations of modalities can add to the expression of a computer-human or human-computer interaction because each modality may be more effective at expressing some forms or aspects of information than others.
 
There are six types of cooperation between modalities, and they help define how a combination or fusion of modalities works together to convey information more effectively.<ref name=":0">{{Cite book|title = Multimodal Human Computer Interaction and Pervasive Services|last = Grifoni|first = Patrizia|publisher = IGI Global|year = 2009|isbn = 9781605663876|pages = 37|url = https://books.google.com/books?id=O8CqMtIKSWwC&source=gbs_navlinks_s}}</ref>
 
* '''Equivalence:''' information is presented in multiple ways and can be interpreted as the same information
* '''Specialization:''' when a specific kind of information is always processed through the same modality
* '''Redundancy:''' multiple modalities process the same information
* '''Complementarity:''' multiple modalities take separate information and merge it
* '''Transfer:''' a modality produces information that another modality consumes
* '''Concurrency:''' multiple modalities take in separate information that is not merged
 
Complementary-redundant systems are those which have multiple sensors to form one understanding or dataset; the more effectively the information can be combined without duplicating data, the more effectively the modalities cooperate. Having multiple modalities for communication is common, particularly in smartphones, and their implementations often work together towards the same goal, for example gyroscopes and accelerometers working together to track movement.<ref name=":0"/>
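A classic instance of such cooperation is sensor fusion on a smartphone. The following simplified sketch of a complementary filter (the variable names and the 0.98 weighting are illustrative choices, not taken from any particular device) combines a gyroscope's short-term precision with an accelerometer's drift-free gravity reference to estimate tilt:

<syntaxhighlight lang="typescript">
// Simplified complementary filter: fuse gyroscope and accelerometer
// readings into a single pitch-angle estimate (in radians).
let pitch = 0; // fused estimate of forward/backward tilt

function fuseSensors(
  gyroRate: number, // rotation rate about the x-axis, rad/s (gyroscope)
  accelY: number,   // acceleration along y, m/s^2 (accelerometer)
  accelZ: number,   // acceleration along z, m/s^2 (accelerometer)
  dt: number        // time since the last sample, seconds
): number {
  // Gyroscope: precise over short intervals, but its integral drifts over time.
  const gyroPitch = pitch + gyroRate * dt;

  // Accelerometer: noisy sample to sample, but gravity gives a drift-free tilt reference.
  const accelPitch = Math.atan2(accelY, accelZ);

  // Complementary weighting: trust the gyroscope short-term, the accelerometer long-term.
  const alpha = 0.98; // illustrative constant
  pitch = alpha * gyroPitch + (1 - alpha) * accelPitch;
  return pitch;
}

// Example: a level device rotating slowly forward, sampled every 10 ms.
console.log(fuseSensors(0.1, 0.0, 9.81, 0.01));
</syntaxhighlight>

The two sensors are complementary in the sense defined above: each contributes information the other lacks, and the fused estimate is more reliable than either modality alone.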
 
==See also==
* {{Annotated link|Multimodal learning}}
* {{Annotated link|Multisensory integration}}
* {{Annotated link|User interface}}
* [[Interactive Multimodal Information Management (IM)2|NCCR IM2: Swiss project on Multimodal interaction]]
==References==
{{Reflist}}
 
{{DEFAULTSORT:Modality (Human-Computer Interaction)}}
[[Category:Multimodal interaction]]
 
 
{{Comp-sci-stub}}