'''Enactive interfaces''' are interactive systems that allow the organization and transmission of knowledge obtained through action. Examples include interfaces that couple a human with a machine to do things usually done unaided, such as shaping a three-dimensional object using a graphic display on a screen,<ref name=Fukuda/> or using interactive video to let a student visually engage with mathematical concepts.<ref name=Held/> Enactive interface design can be approached through the idea of raising awareness of [[affordances]], that is, by optimizing the user's awareness of the actions the interface makes available.<ref name=Stoffregen/> This optimization involves visibility, affordance, and feedback.<ref name=Stone/><ref name=Zudilova/>
 
Enactive interfaces are a new type of [[Human–computer interaction|human-computer interface]] that allows enactive knowledge to be expressed and transmitted by integrating different sensory aspects. The driving concept of enactive interfaces is the fundamental role of motor action in storing and acquiring knowledge (action-driven interfaces). Enactive interfaces are therefore capable of conveying and understanding the gestures of the user, in order to provide an adequate response in perceptual terms. They can be considered a new step in the development of human-computer interaction because they are characterized by a closed loop between the user's natural gestures (the efferent component of the system) and the perceptual modalities activated (the afferent component). Enactive interfaces can be designed to exploit this direct loop and the capability of recognizing complex gestures.
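The efferent/afferent closed loop described above can be sketched in a few lines of code. This is only an illustrative outline, not an implementation from any real enactive-interface toolkit; the class and function names here are hypothetical.

```python
# Illustrative sketch of the closed loop between user gestures (efferent)
# and perceptual feedback (afferent). All names are hypothetical.

class EnactiveLoop:
    """Couples a gesture recognizer to a perceptual feedback renderer."""

    def __init__(self, recognizer, renderer):
        self.recognizer = recognizer  # maps raw motion samples to gestures
        self.renderer = renderer      # maps gestures to perceptual output

    def step(self, motion_sample):
        gesture = self.recognizer(motion_sample)  # interpret the user's action
        feedback = self.renderer(gesture)         # respond in perceptual terms
        return feedback

# Toy example: classify motion by its sign, answer with a feedback cue.
loop = EnactiveLoop(
    recognizer=lambda velocity: "push" if velocity > 0 else "pull",
    renderer=lambda gesture: {"push": "resist", "pull": "yield"}[gesture],
)
print(loop.step(1.5))   # a forward motion is met with resistance
```

In a real system the recognizer would process continuous sensor data and the renderer would drive haptic, visual, or auditory output, but the loop structure stays the same: each user action immediately shapes the next perceptual response.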
 
The development of such interfaces requires the creation of a common vision among different research areas such as [[computer vision]], [[Haptic perception|haptic]] and sound processing, with greater attention to the motor-action aspect of interaction. Prototypical systems that introduce enactive interfaces are reactive robots: robots that remain in contact with the human hand (like current game-console controllers such as the [[Wii Remote]]) and are capable of interpreting human movements and guiding the human through the completion of a manipulation task.
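One simple way such a reactive robot could guide a hand toward a goal is a proportional corrective force, a standard control-theory idea. The sketch below is a minimal, hypothetical simulation of that guidance loop, not code from any actual device.

```python
# Hypothetical sketch of a reactive guidance loop: the device applies a
# corrective force proportional to the hand's distance from the target.

def guidance_force(hand_pos, target_pos, gain=2.0):
    """Return a force pulling the hand toward the target position."""
    return gain * (target_pos - hand_pos)

# Simulate a few control steps in one dimension; the hand is assumed to
# follow the rendered force as a small displacement each time step.
hand, target, dt = 0.0, 1.0, 0.1
for _ in range(20):
    hand += guidance_force(hand, target) * dt
print(round(hand, 3))  # the hand converges toward the target at 1.0
```

The gain and the assumption that force translates directly into displacement are simplifications; a physical device would also account for the hand's own motion and render the force through actuators.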