3D human–computer interaction

{{short description|Form of human-machine interaction}}
[[File:Virtual-Fixtures-USAF-AR.jpg|thumb|[[Virtual Fixture]]s, a system for 3D human-computer interaction.]]
In [[computing]], '''3D human–computer interaction''' is a form of [[human–computer interaction]] where [[User (computing)|users]] are able to move and perform [[interaction]] in [[Three-dimensional space|3D space]]. Both the user and the computer process information where the physical position of elements in the 3D space is relevant. It largely encompasses [[virtual reality]] and [[augmented reality]].
 
The 3D space used for interaction can be the real [[physical space]], a [[virtual space]] representation simulated on the computer, or a combination of both. When the real physical space is used for data input, the user interacts with the machine using an [[input device]] that [[Positional tracking|detects the 3D position]] of the interaction, among other things. When it is used for data output, the simulated 3D virtual scene is projected onto the real environment through an [[output device]].
 
The principles of 3D interaction are applied in a variety of domains such as [[tourism]], [[Digital art|art]], [[Video game|gaming]], [[simulation]], [[education]], [[information visualization]], or [[scientific visualization]].<ref name="BowmanBook">{{cite book |last= Bowman |first= Doug A. |date=2004 |title= 3D User Interfaces: Theory and Practice |___location=Redwood City, CA, USA |publisher=Addison Wesley Longman Publishing Co., Inc. |isbn= 978-0201758672}}</ref>
 
== History ==
Research in 3D interaction and 3D display began in the 1960s, pioneered by researchers like [[Ivan Sutherland]], Fred Brooks, Bob Sproull, Andrew Ortony and Richard Feldman. But it was not until 1962 that [[Morton Heilig]] invented the [[Sensorama]] simulator.<ref name="US3050870A">{{cite patent |inventor-last=Heilig |inventor-first=Morton L |inventorlink=Morton Heilig |fdate=1961-01-10 |title=Sensorama simulator |pubdate=1962-08-28 |country-code=US |patent-number=3050870A |url = https://patentimages.storage.googleapis.com/90/34/2f/24615bb97ad68e/US3050870.pdf }}</ref> It provided 3D video feedback, as well as motion, audio and other sensory feedback, to produce a virtual environment.
 
The next stage of development was Dr. Sutherland’s completion of his pioneering work in 1968, the Sword of Damocles.<ref name="HMD">[[Ivan Sutherland|Sutherland, I. E.]] (1968). "[http://design.osu.edu/carlson/history/PDFs/p757-sutherland.pdf A head-mounted three dimensional display] {{Webarchive|url=https://web.archive.org/web/20160304013350/http://design.osu.edu/carlson/history/PDFs/p757-sutherland.pdf |date=2016-03-04 }}". ''Proceedings of AFIPS 68'', pp. 757-764</ref>
He created a head-mounted display that produced a 3D virtual environment by presenting a left and a right still image of that environment.
 
 
== 3D user interfaces ==
[[File:3D_User_Interaction.jpg|thumb|upright=2|Scheme of 3D user interaction phases]]{{About|both software and hardware 3D input/output devices|uniquely software|Graphical user interface#3D user interfaces|3D GUI|section=yes}}[[User interface]]s are the means for communication between users and systems. 3D interfaces include media for 3D representation of system state, and media for 3D user input or manipulation. Using 3D representations is not enough to create 3D interaction. The users must have a way of performing actions in 3D as well. To that effect, special input and output devices have been developed to support this type of interaction. Some, such as the 3D mouse, were developed based on existing devices for 2D interaction.
 
3D user interfaces are user interfaces in which 3D interaction takes place; that is, the user's tasks occur directly within a three-dimensional space. The user communicates commands, requests, questions, intent, and goals to the system, which in turn provides feedback, requests for input, information about its status, and so on.
 
==== 3D visual displays ====
This type of device is the most popular. Its goal is to present the information produced by the system through the human visual system in a three-dimensional way. The main features that distinguish these devices are: field of regard and [[field of view]], [[spatial resolution]], screen geometry, light transfer mechanism, [[refresh rate]] and [[ergonomics]].
 
Another way to characterize these devices is according to the different categories of [[depth perception]] cues used to help the user understand the three-dimensional information. The main types of displays used in 3D user interfaces are: monitors, surround-screen displays, workbenches, hemispherical displays, head-mounted displays, arm-mounted displays and autostereoscopic displays. [[Virtual reality headset]]s and [[Cave Automatic Virtual Environment]]s (CAVEs) are examples of fully immersive visual displays, where the user can see only the virtual world and not the real world. Semi-immersive displays allow users to see both. Monitors and workbenches are examples of semi-immersive displays.
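
The following Python sketch is purely illustrative (it is not taken from the cited sources): it shows one of the simplest ingredients of a stereoscopic visual display, deriving two eye viewpoints from a single tracked head pose by offsetting along the head's right axis. The fixed inter-pupillary distance and the function names are assumptions.
<syntaxhighlight lang="python">
import numpy as np

IPD = 0.064  # assumed inter-pupillary distance in metres

def eye_positions(head_position, head_right_vector, ipd=IPD):
    """Return (left_eye, right_eye) world positions for stereo rendering."""
    head_position = np.asarray(head_position, dtype=float)
    right = np.asarray(head_right_vector, dtype=float)
    right = right / np.linalg.norm(right)   # normalise the head's right axis
    half = 0.5 * ipd * right
    return head_position - half, head_position + half

# Example: head 1.6 m up, looking down -z, so its right axis is +x.
left_eye, right_eye = eye_positions([0.0, 1.6, 0.0], [1.0, 0.0, 0.0])
</syntaxhighlight>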
 
==== 3D audio displays ====
 
3D audio displays are devices that present information (in this case, sound) through the human auditory system, which is especially useful when supplying ___location and spatial information to the users. The objective is to generate and display spatialized 3D sound so the user can apply psychoacoustic skills to determine the ___location and direction of the sound. There are different localization cues: binaural cues, spectral and dynamic cues, [[head-related transfer function]]s, [[reverberation]], [[sound intensity]], and vision and environment familiarity. Adding a background audio component to a display also adds to the sense of realism.
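
As a minimal, illustrative sketch (not drawn from the cited sources), the following Python fragment approximates two of the binaural cues mentioned above: interaural time difference (ITD), via Woodworth's spherical-head formula, and interaural level difference (ILD), via a constant-power pan. The head radius, speed of sound and panning law are simplifying assumptions.
<syntaxhighlight lang="python">
import numpy as np

HEAD_RADIUS = 0.0875    # metres, assumed
SPEED_OF_SOUND = 343.0  # m/s

def itd_seconds(azimuth_rad):
    """Woodworth's spherical-head approximation of the interaural time difference."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + np.sin(azimuth_rad))

def spatialize(mono, azimuth_rad, sample_rate=44100):
    """Return a crude stereo signal: a delay for ITD plus a constant-power pan for ILD."""
    delay = int(round(itd_seconds(abs(azimuth_rad)) * sample_rate))
    pan = 0.5 * (1.0 + np.sin(azimuth_rad))                 # 0 = full left, 1 = full right
    left_gain, right_gain = np.cos(pan * np.pi / 2), np.sin(pan * np.pi / 2)
    # The ear farther from the source hears the sound slightly later.
    left = np.concatenate([np.zeros(delay if azimuth_rad > 0 else 0), np.asarray(mono, float)])
    right = np.concatenate([np.zeros(delay if azimuth_rad < 0 else 0), np.asarray(mono, float)])
    n = max(len(left), len(right))
    left, right = np.pad(left, (0, n - len(left))), np.pad(right, (0, n - len(right)))
    return np.stack([left_gain * left, right_gain * right], axis=1)
</syntaxhighlight>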
 
==== 3D haptic devices ====
 
These devices use the sense of touch to simulate the physical interaction between the user and a virtual object. There are three different types of 3D haptic devices: those that provide the user a sense of force, those that simulate the sense of touch, and those that do both. The main features that distinguish these devices are: haptic presentation capability, resolution and [[ergonomics]]. The human haptic system has two fundamental kinds of cues: tactile and kinesthetic. Tactile cues come from a wide variety of skin receptors located below the surface of the skin that provide information about texture, temperature, pressure and damage. Kinesthetic cues come from receptors in the muscles, joints and tendons that provide information about joint angle and the stress and length of muscles.
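
The following Python sketch is illustrative only (it is not from the cited sources) and shows the penalty-based spring-damper model commonly used by force-feedback devices to render contact with a rigid virtual wall. The stiffness and damping constants, and the wall geometry, are assumptions.
<syntaxhighlight lang="python">
import numpy as np

STIFFNESS = 800.0  # N/m, assumed virtual wall stiffness
DAMPING = 2.0      # N*s/m, assumed damping to reduce vibration artifacts

def wall_force(probe_pos, probe_vel, wall_height=0.0):
    """Force (N) on a haptic probe from a horizontal wall at y = wall_height."""
    penetration = wall_height - probe_pos[1]   # how far the probe is inside the wall
    if penetration <= 0.0:
        return np.zeros(3)                     # no contact, no force
    normal = np.array([0.0, 1.0, 0.0])         # wall pushes straight up
    force = STIFFNESS * penetration * normal   # spring term (Hooke's law)
    force -= DAMPING * probe_vel[1] * normal   # damper term opposing velocity
    return force
</syntaxhighlight>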
 
=== 3D user interface input hardware ===
 
These hardware devices are called input devices and their aim is to capture and interpret the actions performed by the user. The [[degrees of freedom]] (DOF) are one of the main features of these systems. Classical interface components (such as the mouse, the keyboard, and arguably the touchscreen) are often inappropriate for non-2D interaction needs.<ref name ="BowmanBook" /> These systems are also differentiated according to how much physical interaction is needed to use the device: purely active devices need to be manipulated to produce information, purely passive devices do not.
The main categories of these devices are standard (desktop) input devices, tracking devices, control devices, navigation equipment, [[gesture interface]]s, [[Mouse (computing)|3D mice]], and [[brain-computer interface]]s.
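
As a purely illustrative sketch (the names and fields are assumptions, not a standard API), the following Python fragment shows the kind of 6-DOF sample such an input device typically delivers each frame: three translational degrees of freedom and three rotational ones, the latter stored here as a unit quaternion.
<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass
class Pose6DOF:
    # position: 3 translational DOF
    x: float
    y: float
    z: float
    # orientation: 3 rotational DOF, encoded as a unit quaternion (w, x, y, z)
    qw: float
    qx: float
    qy: float
    qz: float

@dataclass
class TrackerSample:
    pose: Pose6DOF
    timestamp: float  # seconds; lets the application derive velocities

# A purely passive tracker streams such samples continuously; a purely active
# device (e.g. a 3D mouse) only reports when it is manipulated.
identity = Pose6DOF(0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0)
</syntaxhighlight>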
 
==== Desktop input devices ====
 
This type of device is designed for 3D interaction on a desktop. Many of them were originally designed for traditional two-dimensional interaction, but with an appropriate mapping between the system and the device they can work well in a three-dimensional way. There are different types: [[Keyboard (computer)|keyboards]], [[Mouse (computing)|2D mice]] and trackballs, pen-based tablets and styluses, and [[joystick]]s. Nonetheless, many studies have questioned the appropriateness of desktop interface components for 3D interaction,<ref name ="BowmanBook" /><ref>{{cite conference | last1=Chen | first1=Michael | last2=Mountford | first2=S. Joy | last3=Sellen | first3=Abigail | title=Proceedings of the 15th annual conference on Computer graphics and interactive techniques - SIGGRAPH '88 | chapter=A study in interactive 3-D rotation using 2-D control devices | publisher=ACM Press | ___location=New York, New York, USA | year=1988 | pages=121–129 | isbn=0-89791-275-6 | doi=10.1145/54852.378497 | chapter-url = https://www.microsoft.com/en-us/research/uploads/prod/2016/08/3-d-rotation-88.pdf| doi-access=free }}</ref><ref>{{cite journal | last1=Yu | first1=Lingyun | last2=Svetachov | first2=Pjotr| last3=Isenberg | first3=Petra |author3-link= Petra Isenberg | last4= Everts |first4=Maarten H.|last5=Isenberg | first5=Tobias | title=FI3D: Direct-Touch Interaction for the Exploration of 3D Scientific Visualization Spaces | journal=IEEE Transactions on Visualization and Computer Graphics | volume=16 | issue=6 | date=2010-10-28 | issn=1077-2626 | doi=10.1109/TVCG.2010.157 | pmid=20975204 | pages=1613–1622 | s2cid=14354159 | url=https://hal.inria.fr/inria-00587377/PDF/Yu_2010_FDT.pdf}}</ref> though this is still debated.<ref>{{cite conference | last1=Terrenghi | first1=Lucia | last2=Kirk | first2=David | last3=Sellen | first3=Abigail | last4=Izadi | first4=Shahram | title=Proceedings of the SIGCHI Conference on Human Factors in Computing Systems | chapter=Affordances for manipulation of physical versus digital media on interactive surfaces | publisher=ACM Press | ___location=New York, New York, USA | year=2007 | pages=1157–1166 | isbn=978-1-59593-593-9 | doi=10.1145/1240624.1240799 }}</ref><ref name="ComparisonArticle">{{cite conference | last1=Besançon | first1=Lonni | last2=Issartel | first2=Paul | last3=Ammi | first3=Mehdi | last4=Isenberg | first4=Tobias | title=Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems | chapter=Mouse, Tactile, and Tangible Input for 3D Manipulation | publisher=ACM Press | ___location=New York, New York, USA | year=2017 | pages=4727–4740 | isbn=978-1-4503-4655-9 | doi=10.1145/3025453.3025863 | chapter-url = https://hal.inria.fr/hal-01436206/document | arxiv=1603.08735 }}</ref>
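
One widely known example of such a 2D-to-3D mapping is the "virtual trackball" (arcball): each screen point is projected onto an imaginary sphere, and the drag between two points becomes a rotation about their cross product. The Python fragment below is an illustrative sketch of that idea (it is not taken from the cited studies); the sphere radius and function names are assumptions.
<syntaxhighlight lang="python">
import numpy as np

def to_sphere(x, y, radius=1.0):
    """Map normalised screen coords in [-1, 1] onto a virtual sphere/hyperbolic sheet."""
    d2 = x * x + y * y
    if d2 <= radius * radius / 2.0:
        z = np.sqrt(radius * radius - d2)            # point lies on the sphere
    else:
        z = (radius * radius / 2.0) / np.sqrt(d2)    # outside: use the hyperbolic sheet
    return np.array([x, y, z])

def drag_rotation(p_from, p_to):
    """Axis (unit vector) and angle (radians) of the rotation for a mouse drag."""
    a, b = to_sphere(*p_from), to_sphere(*p_to)
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    axis = np.cross(a, b)
    angle = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    return axis / (np.linalg.norm(axis) + 1e-9), angle

axis, angle = drag_rotation((0.0, 0.0), (0.3, 0.1))
</syntaxhighlight>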
 
==== Tracking devices ====
3D user interaction systems are based primarily on [[motion capture|motion tracking]] technologies, which obtain the necessary information from the user through the [[gesture recognition|analysis of their movements or gestures]].
 
Trackers detect or monitor head, hand or body movements and send that information to the computer. The computer then translates it and ensures that position and orientation are reflected accurately in the virtual world. Tracking is important in presenting the correct viewpoint, coordinating the spatial and sound information presented to users, as well as the tasks or functions that they can perform. 3D trackers have been identified as mechanical, magnetic, ultrasonic, optical, and hybrid inertial. Examples of trackers include [[Motion capture|motion trackers]], [[eye tracker]]s, and data gloves. A simple 2D mouse may be considered a navigation device if it allows the user to move to a different ___location in a virtual 3D space. Navigation devices such as the [[treadmill]] and [[bicycle]] make use of the natural ways that humans travel in the real world. Treadmills simulate walking or running and bicycles or similar equipment simulate vehicular travel. In the case of navigation devices, the information passed on to the machine is the user's ___location and movements in virtual space. [[Wired glove]]s and bodysuits allow gestural interaction to occur. These send hand or body position and movement information to the computer using sensors.
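
As an illustrative sketch (not from the cited sources) of what "translating" tracker data means in practice, the following Python fragment turns a tracked head position and orientation into a 4×4 view matrix so that the rendered viewpoint follows the user. The matrix convention and names are assumptions.
<syntaxhighlight lang="python">
import numpy as np

def view_matrix(head_position, head_rotation):
    """head_rotation: 3x3 matrix (head frame -> world); returns a 4x4 world -> eye matrix."""
    R = np.asarray(head_rotation, dtype=float)
    t = np.asarray(head_position, dtype=float)
    view = np.eye(4)
    view[:3, :3] = R.T          # the inverse of a rotation matrix is its transpose
    view[:3, 3] = -R.T @ t      # move the world so the tracked eye sits at the origin
    return view

# Example: head 1.7 m above the floor, not rotated.
V = view_matrix([0.0, 1.7, 0.0], np.eye(3))
</syntaxhighlight>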
 
For the full development of a 3D user interaction system, access to a few basic parameters is required. Every such technology-based system should know, at least partially, the relative position of the user, the absolute position, the angular velocity, rotation data, orientation and height. The collection of these data is achieved through spatial tracking systems and sensors in multiple forms, as well as the use of different techniques to obtain them.
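
A common way to combine several of these parameters is sensor fusion. The Python sketch below is illustrative only (not from the cited sources): a simple complementary filter blends angular velocity from a gyroscope (accurate short-term, but drifting) with the gravity direction from an accelerometer (noisy, but drift-free). The blend factor is an assumption.
<syntaxhighlight lang="python">
import numpy as np

ALPHA = 0.98  # assumed weight: trust the gyroscope short-term, the accelerometer long-term

def fuse_pitch_roll(pitch, roll, gyro_rates, accel, dt):
    """gyro_rates: (pitch_rate, roll_rate) in rad/s; accel: (ax, ay, az) in m/s^2."""
    # 1. Integrate the gyroscope: accurate over short intervals but drifts over time.
    pitch_gyro = pitch + gyro_rates[0] * dt
    roll_gyro = roll + gyro_rates[1] * dt
    # 2. Estimate absolute tilt from the gravity vector: noisy but drift-free.
    ax, ay, az = accel
    pitch_acc = np.arctan2(-ax, np.sqrt(ay * ay + az * az))
    roll_acc = np.arctan2(ay, az)
    # 3. Blend the two estimates.
    return (ALPHA * pitch_gyro + (1 - ALPHA) * pitch_acc,
            ALPHA * roll_gyro + (1 - ALPHA) * roll_acc)
</syntaxhighlight>
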
However, there are several systems that are closely adapted to the objectives pursued. The determining factor for them is that the systems are self-contained, i.e., all-in-one, and do not require a fixed prior reference. These systems are as follows:
 
===== Nintendo Wii Remote ("Wiimote") =====
[[File:Wiimote-Safety-First.jpg|thumb|Wiimote device]]
 
The [[Wii Remote]] device does not offer a technology based on 6-DOF, since it cannot provide absolute position. However, it is equipped with a multitude of sensors, which make it a great tool for interaction in 3D environments. It has gyroscopes to detect rotation of the user, ADXL330 accelerometers for obtaining the speed and movement of the hands, optical sensors for determining orientation, and electronic compasses and infra-red devices to capture position.
 
 
This type of device can be affected by external [[infra-red]] references such as light bulbs or candles, which cause errors in the accuracy of the position.
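
As a simplified, illustrative sketch (not based on Nintendo documentation), the following Python fragment shows how two infra-red dots from a sensor bar can be turned into a pointing position and a roll angle. The camera resolution, the mirroring convention and all names are assumptions for the example.
<syntaxhighlight lang="python">
import math

CAM_W, CAM_H = 1024, 768  # assumed IR camera resolution

def pointer_from_ir(dot1, dot2):
    """dot1, dot2: (x, y) pixel positions of the two sensor-bar LEDs in the IR image."""
    mid_x = (dot1[0] + dot2[0]) / 2.0
    mid_y = (dot1[1] + dot2[1]) / 2.0
    # Assuming the camera sees the scene mirrored, invert x to get screen coords in [0, 1].
    cursor = (1.0 - mid_x / CAM_W, mid_y / CAM_H)
    # Roll of the remote is the angle of the line joining the two dots.
    roll = math.atan2(dot2[1] - dot1[1], dot2[0] - dot1[0])
    return cursor, roll

cursor, roll = pointer_from_ir((400, 380), (620, 384))
</syntaxhighlight>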
 
===== Google Tango Devices =====
[[File:Google ATAP's Project Tango tablet (15387052663).jpg|thumb|Google's Project Tango tablet, 2014]] The [[Tango (platform)|Tango Platform]] is an augmented reality computing platform, developed and authored by the Advanced Technology and Projects (ATAP), a skunkworks division of Google. It uses computer vision and internal sensors (like gyroscopes) to enable mobile devices, such as smartphones and tablets, to detect their position relative to the world around them without using GPS or other external signals. It can therefore be used to provide 6-DOF input which can also be combined with its multi-touch screen.<ref>{{cite journal | last1=Besancon | first1=Lonni | last2=Issartel | first2=Paul | last3=Ammi | first3=Mehdi | last4=Isenberg | first4=Tobias | title=Hybrid Tactile/Tangible Interaction for 3D Data Exploration | journal=IEEE Transactions on Visualization and Computer Graphics | volume=23 | issue=1 | year=2017 | issn=1077-2626 | doi=10.1109/tvcg.2016.2599217 | pmid=27875202 | pages=881–890 | s2cid=16626037 | url = https://hal.inria.fr/hal-01372922/document }}</ref> The Google Tango devices can be seen as more integrated solutions than the early prototypes combining spatially-tracked devices with touch-enabled screens for 3D environments.<ref>{{cite conference | last1=Fitzmaurice | first1=George W. | last2=Buxton | first2=William | title=Proceedings of the ACM SIGCHI Conference on Human factors in computing systems | chapter=An empirical evaluation of graspable user interfaces | publisher=ACM Press | ___location=New York, New York, USA | year=1997 | pages=43–50 | isbn=0-89791-802-9 | doi=10.1145/258549.258578 | chapter-url = http://www.dgp.toronto.edu/~gf/papers/PhD%20-%20Graspable%20UIs/Thesis.gf.html | doi-access=free }}</ref><ref>{{cite conference | last1=Angus | first1=Ian G. | last2=Sowizral | first2=Henry A. | editor-last=Fisher | editor-first=Scott S. | editor2-last=Merritt | editor2-first=John O. | editor3-last=Bolas | editor3-first=Mark T. | title=Embedding the 2D interaction metaphor in a real 3D virtual environment | series=Stereoscopic Displays and Virtual Reality Systems II | publisher=SPIE | date=1995-03-30 | volume=2409 | pages=282–293 | doi=10.1117/12.205875 }}</ref><ref>{{cite conference | last1=Poupyrev | first1=I. | last2=Tomokazu | first2=N. | last3=Weghorst | first3=S. | title=Proceedings. IEEE 1998 Virtual Reality Annual International Symposium (Cat. No.98CB36180) | chapter=Virtual Notepad: handwriting in immersive VR | year=1998 | pages=126–132 | publisher=IEEE Comput. Soc | isbn=0-8186-8362-7 | doi=10.1109/vrais.1998.658467 | chapter-url = http://www8.cs.umu.se/kurser/TDBD12/HT02/papers/virtual%20notepad.pdf }}</ref>
 
===== Microsoft Kinect =====
[[File:Xbox-360-Kinect-Standalone.png|thumb|Kinect Sensor]]
 
 
===== Leap Motion =====
[[File:Leap Motion Controller.JPG|thumb|Leap Motion Controller]]
The [[Leap Motion]] controller is a hand-tracking system designed for small spaces, enabling a new form of interaction with 3D environments in desktop applications and offering great fluidity when browsing through three-dimensional environments in a realistic way.
It is a small device that connects to a computer via USB and uses two cameras with infra-red LEDs to analyse a hemispherical area of about 1 meter above its surface, recording up to 300 frames per second. The information is sent to the computer to be processed by the company's own software.
 
 
==== Travel ====
Travel is a conceptual technique that consists of the movement of the viewpoint (virtual eye, virtual camera) from one ___location to another. The orientation of the viewpoint is usually handled in immersive virtual environments by [[head tracking]].
There are five types of travel interaction technique:
* '''Physical movement''': uses the user's body motion to move through the virtual environment. This is an appropriate technique when an augmented feeling of presence is required, or when the user is expected to exert physical effort for a simulation.
* '''Manual viewpoint manipulation''': the user's hand movements determine the action in the virtual environment. One example is when the user moves their hands as if grabbing a virtual rope and then pulls themself up. This technique can be easy to learn and efficient, but can cause fatigue.
* '''Steering''': the user has to constantly indicate in which direction to move. This is a common and efficient technique. One example is gaze-directed steering, where the head orientation determines the direction of travel.
* '''Target-based travel''': the user specifies a destination point and the viewpoint moves to the new ___location. This travel can be executed by teleportation, where the user is instantly moved to the destination point, or the system can execute a smooth transition movement to the destination (see the sketch after this list). These techniques are very simple from the user's point of view, because they only have to indicate the destination.
* '''Route planning''': the user specifies the path that should be taken through the environment, and the system executes the movement. The user may draw a path on a map of the virtual environment to plan a route. This technique allows users to control travel while retaining the ability to do other tasks during motion.
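
The following Python sketch is illustrative only (not from the cited sources) and contrasts the two forms of target-based travel described above: instant teleportation versus a smooth, eased transition of the viewpoint. The fixed transition time is an assumption.
<syntaxhighlight lang="python">
import numpy as np

TRANSITION_TIME = 1.5  # seconds, assumed

def teleport(_current, destination):
    """Instant target-based travel: the viewpoint jumps to the destination."""
    return np.asarray(destination, dtype=float)

def smooth_step(current, destination, elapsed):
    """Viewpoint position 'elapsed' seconds into a smooth transition."""
    t = np.clip(elapsed / TRANSITION_TIME, 0.0, 1.0)
    t = t * t * (3.0 - 2.0 * t)   # ease in/out so the motion has no abrupt start or stop
    return (1.0 - t) * np.asarray(current, float) + t * np.asarray(destination, float)

pos = smooth_step([0.0, 0.0, 0.0], [10.0, 0.0, 5.0], elapsed=0.75)
</syntaxhighlight>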
 
==== Wayfinding ====
==== Selection ====
 
The task of selecting objects or 3D volumes in a 3D environment requires first being able to find the desired target and then being able to select it. Most 3D datasets/environments suffer from occlusion problems,<ref>{{cite conference | last=Shneiderman | first=B. | title=Proceedings 1996 IEEE Symposium on Visual Languages | chapter=The eyes have it: a task by data type taxonomy for information visualizations | year=1996 | pages=336–343 | publisher=IEEE Comput. Soc. Press | isbn=0-8186-7508-X | doi=10.1109/vl.1996.545307 | hdl=1903/466 | hdl-access=free }}</ref> so the first step of finding the target relies on manipulation of the viewpoint or of the 3D data itself in order to properly identify the object or volume of interest. This initial step is therefore tightly coupled with manipulations in 3D. Once the target is visually identified, users have access to a variety of techniques to select it.
 
Usually, the system provides the user with a 3D cursor represented as a human hand whose movements correspond to the motion of the hand tracker. This virtual hand technique<ref>{{cite journal | last1=Poupyrev | first1=I. | last2=Ichikawa | first2=T. | last3=Weghorst | first3=S. | last4=Billinghurst | first4=M. | title=Egocentric Object Manipulation in Virtual Environments: Empirical Evaluation of Interaction Techniques | journal=Computer Graphics Forum | volume=17 | issue=3 | year=1998 | issn=0167-7055 | doi=10.1111/1467-8659.00252 | pages=41–52 | citeseerx=10.1.1.95.4933 | s2cid=12784160 }}</ref> is rather intuitive because it simulates a real-world interaction with objects, but it is limited to objects inside the user's reach-area.
 
To overcome this limitation, many techniques have been suggested, such as the Go-Go technique.<ref>{{cite book | last1=Poupyrev | first1=Ivan | last2=Billinghurst | first2=Mark | last3=Weghorst | first3=Suzanne | last4=Ichikawa | first4=Tadao | title=Proceedings of the 9th annual ACM symposium on User interface software and technology - UIST '96 | chapter=The go-go interaction technique: Non-linear mapping for direct manipulation in VR | pages=79–80 | website=ACM Digital Library | doi=10.1145/237091.237102 | chapter-url=http://www.ivanpoupyrev.com/e-library/1998_1996/uist96.pdf | access-date=2018-05-18 | year=1996 | isbn=978-0897917988 | s2cid=1098140 }}</ref> This technique allows the user to extend the reach-area using a non-linear mapping of the hand: when the user extends the hand beyond a fixed threshold distance, the mapping becomes non-linear and the hand grows.
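
The Python sketch below illustrates the Go-Go arm mapping just described (one-to-one within a threshold distance, quadratic growth beyond it). It follows the form of the mapping given by Poupyrev et al., but the threshold and gain values here are illustrative assumptions, not the paper's.
<syntaxhighlight lang="python">
import numpy as np

D = 0.45  # metres: assumed threshold distance from the user's torso
K = 6.0   # assumed gain of the non-linear part

def virtual_hand_position(torso, real_hand):
    """Return the virtual hand position extended by the Go-Go mapping."""
    offset = np.asarray(real_hand, float) - np.asarray(torso, float)
    r = np.linalg.norm(offset)
    if r < 1e-9:
        return np.asarray(torso, float)
    direction = offset / r
    # One-to-one within reach, quadratic extension beyond the threshold D.
    r_virtual = r if r < D else r + K * (r - D) ** 2
    return np.asarray(torso, float) + r_virtual * direction

far_hand = virtual_hand_position([0.0, 1.2, 0.0], [0.0, 1.2, 0.7])
</syntaxhighlight>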
 
Another technique to select and manipulate objects in 3D virtual spaces consists of pointing at objects using a virtual ray emanating from the virtual hand.<ref>{{cite tech report |first=Mark R.|last=Mine |title=Virtual Environment Interaction Techniques |institution=Department of Computer Science, University of North Carolina |date= 1995 | url=http://www.cs.unc.edu/techreports/95-018.pdf}}</ref> When the ray intersects an object, that object can be manipulated. Several variations of this technique have been made, like the aperture technique, which uses a conic pointer originating at the user's eyes, estimated from the head ___location, to select distant objects. This technique also uses a hand sensor to adjust the conic pointer size.
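The following Python sketch illustrates the core of such ray-casting selection: a ray from the virtual hand is tested against each object's bounding sphere and the closest hit becomes the selected object. The sphere-based object representation and the names are assumptions for the example, not part of the cited report.
<syntaxhighlight lang="python">
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along the ray to a sphere, or None if the ray misses it."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    oc = np.asarray(origin, float) - np.asarray(center, float)
    b = np.dot(oc, d)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None                    # the ray misses the sphere
    t = -b - np.sqrt(disc)
    return t if t >= 0.0 else None     # ignore objects behind the hand

def pick(origin, direction, objects):
    """objects: list of (name, center, radius); returns the name of the closest hit."""
    hits = [(ray_sphere_hit(origin, direction, c, r), name) for name, c, r in objects]
    hits = [(t, name) for t, name in hits if t is not None]
    return min(hits)[1] if hits else None

selected = pick([0, 1.2, 0], [0, 0, -1], [("cube", [0, 1.2, -3], 0.5), ("ball", [1, 1, -5], 0.4)])
</syntaxhighlight>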
 
Many other techniques, relying on different input strategies, have also been developed.<ref>{{cite journal | last1=Argelaguet | first1=Ferran | last2=Andujar | first2=Carlos | s2cid=8565854 | title=A survey of 3D object selection techniques for virtual environments | journal=Computers & Graphics | volume=37 | issue=3 | year=2013 | issn=0097-8493 | doi=10.1016/j.cag.2012.12.003 | pages=121–136 | url=https://hal.archives-ouvertes.fr/hal-00907787/file/Manuscript.pdf }}</ref><ref>{{cite journal | last1=Besançon | first1=Lonni | last2=Sereno | first2=Mickael | last3=Yu | first3=Lingyun | last4=Ammi | first4=Mehdi | last5=Isenberg | first5=Tobias | title=Hybrid Touch/Tangible Spatial 3D Data Selection | journal=Computer Graphics Forum | publisher=Wiley | volume=38 | issue=3 | year=2019 | issn=0167-7055 | doi=10.1111/cgf.13710 | pages=553–567 | s2cid=199019072 | url=https://hal.inria.fr/hal-02079308/file/Besancon_2019_HTT.pdf }}</ref>
 
==== Manipulation ====
3D manipulation occurs both before a selection task (in order to visually identify a 3D selection target) and after a selection has occurred (to manipulate the selected object). 3D manipulations require 3 DOF for rotations (1 DOF per axis, namely x, y, z), 3 DOF for translations (1 DOF per axis), and at least 1 additional DOF for uniform zoom (or alternatively 3 additional DOF for non-uniform zoom operations).
 
3D manipulation, like navigation, is one of the essential tasks with 3D data, objects or environments. It is the basis of many widely used 3D software packages (such as Blender, Autodesk, [[VTK]]). These packages, available mostly on computers, are thus almost always combined with a mouse and keyboard. To provide enough DOFs (the mouse only offers 2), they rely on modifier keys in order to separately control all the DOFs involved in 3D manipulations. With the recent advent of multi-touch enabled smartphones and tablets, the interaction mappings of these packages have been adapted to multi-touch (which offers more simultaneous DOF manipulations than a mouse and keyboard). A survey conducted in 2017 of 36 commercial and academic mobile applications on Android and iOS, however, suggested that most applications did not provide a way to control the minimum 6 DOFs required,<ref name="ComparisonArticle" /> but that among those which did, most made use of a 3D version of the RST (Rotation Scale Translation) mapping: one finger is used for rotation around x and y, while two-finger interaction controls rotation around z and translation along x, y, and z.
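
As an illustrative sketch (not taken from the cited survey), the Python fragment below shows one possible 3D RST-style touch mapping of the kind described above: a single-finger drag produces rotations about x and y, while a two-finger gesture supplies z rotation (twist), x/y translation (pan) and z translation (pinch). The gains and names are assumptions.
<syntaxhighlight lang="python">
import math

ROT_GAIN = 0.01   # radians per pixel, assumed
PAN_GAIN = 0.002  # scene units per pixel, assumed
ZOOM_GAIN = 1.0   # assumed

def one_finger(dx, dy):
    """Single-finger drag -> rotation about the x and y axes."""
    return {"rot_x": dy * ROT_GAIN, "rot_y": dx * ROT_GAIN}

def two_fingers(p1_old, p2_old, p1_new, p2_new):
    """Two-finger gesture -> z rotation, x/y translation and z translation."""
    def centroid(a, b): return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    def angle(a, b): return math.atan2(b[1] - a[1], b[0] - a[0])
    def dist(a, b): return max(math.hypot(b[0] - a[0], b[1] - a[1]), 1e-6)
    c_old, c_new = centroid(p1_old, p2_old), centroid(p1_new, p2_new)
    return {
        "rot_z": angle(p1_new, p2_new) - angle(p1_old, p2_old),                        # twist
        "trans_x": (c_new[0] - c_old[0]) * PAN_GAIN,                                   # pan
        "trans_y": (c_new[1] - c_old[1]) * PAN_GAIN,
        "trans_z": math.log(dist(p1_new, p2_new) / dist(p1_old, p2_old)) * ZOOM_GAIN,  # pinch
    }
</syntaxhighlight>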
 
=== System Control ===
== See also ==
* [[Finger tracking]]
* [[Interaction technique]]
* [[Interaction design]]
* [[Human–computer interaction]]
* [[Cave Automatic Virtual Environment]] (CAVE)
* [[Fitts's law]]
 
== References ==
# {{cite conference|last=Fröhlich|first=B.|author2=Plate, J.|year=2000|title=The Cubic Mouse: A New Device for 3D Input|___location=New York|publisher=ACM Press|pages=526–531|doi=10.1145/332040.332491|book-title=Proceedings of ACM CHI 2000}}
# [http://www.dlr.de/sc/desktopdefault.aspx/tabid-8995/15534_read-38219/ Interaction Techniques]. ''DLR - Simulations- und Softwaretechnik''. Retrieved October 18, 2015
# {{cite conference|last=Keijser|first=J.|author2=Carpendale, S.|author3=Hancock, M.|author4=Isenberg, T.|year=2007|title=Exploring 3D Interaction in Alternate Control-Display Space Mappings|___location=Los Alamitos, CA|publisher=IEEE Computer Society|pages=526–531|book-title=Proceedings of the 2nd IEEE Symposium on 3D User Interfaces|url=https://innovis.cpsc.ucalgary.ca/innovis/uploads/Publications/Publications/Keijser_2007_E3I.pdf}}
# Larijani, L. C. (1993). The Virtual Reality Primer. United States of America: R. R. Donnelley and Sons Company.
# Rhijn, A. van (2006). [https://www.narcis.nl/publication/RecordID/oai:pure.tue.nl:publications%2F8f142821-8dce-4835-bc14-5a8661b37ec3 Configurable Input Devices for 3D Interaction using Optical Tracking]. Eindhoven: Technische Universiteit Eindhoven.
 
* [http://liinwww.ira.uka.de/bibliography/Misc/spatial.input.html#browse Bibliography on 3D Interaction and Spatial Input]
* [http://3dinterfacedesign.com/index.html The Inventor of the 3D Window Interface 1998] {{Webarchive|url=https://web.archive.org/web/20200703043424/http://3dinterfacedesign.com/index.html |date=2020-07-03 }}
* [http://research.cs.vt.edu/3di/ 3DI Group]
* [http://www.cescg.org/CESCG-2000/JFlasar/index.html 3D Interaction in Virtual Environments]
{{User interfaces}}
{{Operating system}}
 
{{DEFAULTSORT:3dui}}
[[Category:Human–computer interaction]]
[[Category:Virtual reality]]
[[Category:User interface techniques]]
[[Category:3D human-computer interaction]]