{{short description|Form of human-machine interaction}}
[[File:Virtual-Fixtures-USAF-AR.jpg|thumb|[[Virtual Fixture]]s, a system for 3D human–computer interaction.]]
'''3D interaction''', also known as '''3D human–computer interaction''', is a form of [[human–computer interaction]] in which [[User (computing)|users]] are able to move and perform [[Human–computer interaction|interaction]] in [[Three-dimensional space|3D space]]. Both the user and the computer process information for which the physical position of elements in 3D space is relevant. It largely encompasses [[virtual reality]] and [[augmented reality]].
 
The 3D space used for interaction can be the real [[physical space]], a [[virtual space]] representation simulated on the computer, or a combination of both. When the real physical space is used for data input, the human interacts with the machine by performing actions using an [[input device]] that [[Positional tracking|detects the 3D position]] of the human interaction, among other things. When it is used for data output, the simulated 3D virtual scene is projected onto the real environment through an [[output device]].
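
The input side of this arrangement can be sketched in a few lines of code: a pose detected in physical space is mirrored onto an element of the simulated scene. In the minimal Python sketch below, <code>TrackedDevice</code>, <code>VirtualCursor</code>, and their methods are illustrative placeholders, not a real tracking API.

<syntaxhighlight lang="python">
# Sketch of the input side of 3D interaction: a physical pose detected by a
# tracking device is mirrored onto an element of the simulated virtual scene.
# TrackedDevice and VirtualCursor are illustrative placeholders, not a real API.
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple       # (x, y, z) in metres, in the tracker's frame
    orientation: tuple    # unit quaternion (w, x, y, z)

class TrackedDevice:
    """Stand-in for an input device with positional tracking."""
    def read_pose(self) -> Pose:
        # A real driver would query the sensor; a fixed pose stands in here.
        return Pose((0.1, 1.2, -0.5), (1.0, 0.0, 0.0, 0.0))

@dataclass
class VirtualCursor:
    pose: Pose

def update(cursor: VirtualCursor, device: TrackedDevice) -> None:
    # The detected physical 3D position becomes the cursor's virtual position.
    cursor.pose = device.read_pose()

cursor = VirtualCursor(Pose((0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0)))
update(cursor, TrackedDevice())
print(cursor.pose)
</syntaxhighlight>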
 
== 3D user interfaces ==
[[File:3D_User_Interaction.jpg|thumb|upright=2|Scheme of 3D user interaction phases]]{{About|both software and hardware 3D input/output devices|uniquely software 3D user interfaces|3D GUI|section=yes}}[[User interface]]s are the means for communication between users and systems. 3D interfaces include media for 3D representation of system state and media for 3D user input or manipulation. Using 3D representations alone is not enough to create 3D interaction; users must also have a way of performing actions in 3D. To that end, special input and output devices have been developed to support this type of interaction. Some, such as the 3D mouse, were developed based on existing devices for 2D interaction.
 
3D user interfaces are user interfaces in which 3D interaction takes place; that is, the user's tasks occur directly within a three-dimensional space. The user communicates commands, requests, questions, intent, and goals to the system, which in turn provides feedback, requests for input, information about its status, and so on.
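
This dialogue can be pictured as a loop in which each user command produces a system response. The Python sketch below models one such loop; <code>read_user_input</code> and <code>apply</code> are invented stand-ins for a real input pipeline and system back end, not an established API.

<syntaxhighlight lang="python">
# Sketch of the user-system dialogue in a 3D interface. read_user_input and
# apply are invented stand-ins for the input pipeline and the system back end.
def read_user_input(pending):
    """Pop the next user command (selection, manipulation, query, ...)."""
    return pending.pop(0) if pending else None

def apply(state, command):
    """Update the system state and produce feedback for the user."""
    state.append(command)
    return state, f"acknowledged: {command}"

# Hypothetical stream of commands issued through a 3D input device.
pending = ["select object 3", "rotate 15 deg about y", "query status"]
state = []
while (command := read_user_input(pending)) is not None:
    state, feedback = apply(state, command)
    print(feedback)   # the system's side of the dialogue
</syntaxhighlight>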
 
===== Google Tango Devices =====
[[File:Google ATAP's Project Tango tablet (15387052663).jpg|thumb|Google's Project Tango tablet, 2014]] The [[Tango (platform)|Tango Platform]] is an augmented reality computing platform developed by the Advanced Technology and Projects (ATAP) group, a skunkworks division of Google. It uses computer vision and internal sensors (such as gyroscopes) to enable mobile devices, such as smartphones and tablets, to detect their position relative to the world around them without using GPS or other external signals. It can therefore provide 6-DOF input, which can also be combined with the device's multi-touch screen.<ref>{{cite journal | last1=Besancon | first1=Lonni | last2=Issartel | first2=Paul | last3=Ammi | first3=Mehdi | last4=Isenberg | first4=Tobias | title=Hybrid Tactile/Tangible Interaction for 3D Data Exploration | journal=IEEE Transactions on Visualization and Computer Graphics | volume=23 | issue=1 | year=2017 | issn=1077-2626 | doi=10.1109/tvcg.2016.2599217 | pmid=27875202 | pages=881–890 | s2cid=16626037 | url = https://hal.inria.fr/hal-01372922/document }}</ref> Google Tango devices can be seen as more integrated solutions than the earlier research prototypes that combined spatially tracked devices with touch-enabled screens for 3D environments.<ref>{{cite conference | last1=Fitzmaurice | first1=George W. | last2=Buxton | first2=William | title=Proceedings of the ACM SIGCHI Conference on Human factors in computing systems | chapter=An empirical evaluation of graspable user interfaces | publisher=ACM Press | ___location=New York, New York, USA | year=1997 | pages=43–50 | isbn=0-89791-802-9 | doi=10.1145/258549.258578 | chapter-url = http://www.dgp.toronto.edu/~gf/papers/PhD%20-%20Graspable%20UIs/Thesis.gf.html | doi-access=free }}</ref><ref>{{cite conference | last1=Angus | first1=Ian G. | last2=Sowizral | first2=Henry A. | editor-last=Fisher | editor-first=Scott S. | editor2-last=Merritt | editor2-first=John O. | editor3-last=Bolas | editor3-first=Mark T. | title=Embedding the 2D interaction metaphor in a real 3D virtual environment | series=Stereoscopic Displays and Virtual Reality Systems II | publisher=SPIE | date=1995-03-30 | volume=2409 | pages=282–293 | doi=10.1117/12.205875 }}</ref><ref>{{cite conference | last1=Poupyrev | first1=I. | last2=Tomokazu | first2=N. | last3=Weghorst | first3=S. | title=Proceedings. IEEE 1998 Virtual Reality Annual International Symposium (Cat. No.98CB36180) | chapter=Virtual Notepad: handwriting in immersive VR | year=1998 | pages=126–132 | publisher=IEEE Comput. Soc | isbn=0-8186-8362-7 | doi=10.1109/vrais.1998.658467 | chapter-url = http://www8.cs.umu.se/kurser/TDBD12/HT02/papers/virtual%20notepad.pdf }}</ref>
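
A minimal sketch of such hybrid input, in the spirit of the tactile/tangible work cited above: the tracked 6-DOF pose places a virtual object coarsely while a 2D touch drag refines its rotation. All names (<code>combine_inputs</code>, <code>device_pose</code>, <code>touch_delta</code>) are illustrative assumptions, not the actual Tango API.

<syntaxhighlight lang="python">
# Sketch of hybrid 6-DOF + multi-touch input. All names are illustrative;
# this is not the actual Tango API.
def combine_inputs(device_pose, touch_delta, obj):
    # Tangible part: adopt the device's tracked 6-DOF pose directly.
    obj["position"] = device_pose["position"]
    obj["orientation"] = device_pose["orientation"]
    # Tactile part: map a horizontal touch drag (pixels) to a yaw refinement.
    dx, dy = touch_delta
    obj["yaw_offset_deg"] = 0.25 * dx   # arbitrary gain: 0.25 degrees per pixel
    return obj

pose = {"position": (0.0, 1.1, -0.4), "orientation": (1.0, 0.0, 0.0, 0.0)}
print(combine_inputs(pose, (40, 0), {}))
</syntaxhighlight>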
 
===== Microsoft Kinect =====