{{short description|Form of human-machine interaction}}
[[File:Virtual-Fixtures-USAF-AR.jpg|thumb|[[Virtual Fixture]]s, a system for 3D human-computer interaction.]]
The 3D space used for interaction can be the real [[physical space]], a [[virtual space]] representation simulated on the computer, or a combination of both. When the real physical space is used for data input, the human interacts with the machine by performing actions with an [[input device]] that [[Positional tracking|detects the 3D position]] of the human interaction, among other things. When it is used for data output, the simulated 3D virtual scene is projected onto the real environment through an [[output device]].
== History ==
Research in 3D interaction and 3D display began in the 1960s, pioneered by researchers like [[Ivan Sutherland]], Fred Brooks, Bob Sproull, Andrew Ortony and Richard Feldman. But it was not until 1962 that [[Morton Heilig]] invented the [[Sensorama]] simulator.<ref name="US3050870A">{{cite patent |inventor-last=Heilig |inventor-first=Morton L |inventorlink=Morton Heilig |fdate=1961-01-10 |title=Sensorama simulator |pubdate=1962-08-28 |country=US |number=3050870A |url = https://patentimages.storage.googleapis.com/90/34/2f/24615bb97ad68e/US3050870.pdf }}</ref> It provided 3D video feedback, as well as motion and audio feedback, to produce a virtual environment.

The next stage of development was Ivan Sutherland's completion of his pioneering work in 1968, the Sword of Damocles.<ref name="HMD">[[Ivan Sutherland|Sutherland, I. E.]] (1968). "[http://design.osu.edu/carlson/history/PDFs/p757-sutherland.pdf A head-mounted three dimensional display] {{Webarchive|url=https://web.archive.org/web/20160304013350/http://design.osu.edu/carlson/history/PDFs/p757-sutherland.pdf |date=2016-03-04 }}". ''Proceedings of AFIPS 68'', pp. 757–764.</ref> He created a head-mounted display that produced a 3D virtual environment by presenting separate left-eye and right-eye still images of that environment.
== 3D user interfaces ==
[[File:3D_User_Interaction.jpg|thumb|3D user interaction]]
3D user interfaces are user interfaces where 3D interaction takes place; that is, the user's tasks occur directly within a three-dimensional space. The user communicates commands, requests, questions, intent, and goals to the system, which in turn provides feedback, requests for input, information about its status, and so on.
==== 3D visual displays ====
These devices are the most popular type, and their goal is to present the information produced by the system to the human visual system in a three-dimensional way. The main features that distinguish these devices are: field of regard and [[field of view]], [[spatial resolution]], screen geometry, light transfer mechanism, [[refresh rate]] and [[ergonomics]].

Another way to characterize these devices is according to the different categories of [[depth perception]] cues used to help the user understand the three-dimensional information. The main types of displays used in 3D user interfaces are: monitors, surround-screen displays, workbenches, hemispherical displays, head-mounted displays, arm-mounted displays and autostereoscopic displays. [[Virtual reality headset]]s and CAVEs ([[Cave Automatic Virtual Environment]]) are examples of fully immersive visual displays, where the user can see only the virtual world and not the real world. Semi-immersive displays allow users to see both; monitors and workbenches are examples of semi-immersive displays.
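One of the strongest depth cues on stereoscopic displays is binocular disparity: each eye receives a slightly different image of the scene. The following sketch illustrates the underlying similar-triangles geometry for a point on the viewing axis; the eye separation and viewing distances are illustrative values, not parameters of any particular display.

<syntaxhighlight lang="python">
# A minimal sketch of the binocular-disparity depth cue used by
# stereoscopic 3D displays. All values are illustrative assumptions.

def screen_disparity(eye_separation_m, screen_distance_m, point_distance_m):
    """On-screen horizontal disparity (m) for a point on the viewing axis.

    Positive values mean the point appears behind the screen plane,
    negative values mean it appears in front of it.
    """
    d = point_distance_m
    return eye_separation_m * (d - screen_distance_m) / d

# Example: 63 mm interpupillary distance, screen 0.7 m from the viewer.
for depth in (0.35, 0.7, 2.0):
    print(f"{depth:4.2f} m -> {screen_disparity(0.063, 0.7, depth) * 1000:+.1f} mm")
</syntaxhighlight>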
==== 3D audio displays ====
3D audio displays are devices that present information (in this case, sound) through the human auditory system, which is especially useful when supplying ___location and spatial information to the users. Their objective is to generate and display a spatialized 3D sound so that the user can use their psychoacoustic skills to determine the ___location and direction of the sound. There are different localization cues: binaural cues, spectral and dynamic cues, [[head-related transfer function]]s, [[reverberation]], [[sound intensity]], and vision and environment familiarity. Adding a background audio component to a display also adds to the sense of realism.
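As an illustration of a binaural cue, the following sketch computes the interaural time difference (the delay between a sound's arrival at the two ears) using Woodworth's classical spherical-head approximation; the head radius and azimuth values are illustrative, not taken from any particular system.

<syntaxhighlight lang="python">
# A minimal sketch of one binaural localization cue: the interaural
# time difference (ITD), approximated with Woodworth's spherical-head
# model. Head radius and azimuths are illustrative assumptions.

import math

def itd_seconds(azimuth_rad, head_radius_m=0.0875, speed_of_sound=343.0):
    """Woodworth approximation: ITD = (a/c) * (theta + sin(theta)),
    valid for source azimuths between 0 and 90 degrees."""
    return (head_radius_m / speed_of_sound) * (azimuth_rad + math.sin(azimuth_rad))

for deg in (0, 30, 60, 90):
    print(f"{deg:3d} deg -> {itd_seconds(math.radians(deg)) * 1e6:6.1f} microseconds")
</syntaxhighlight>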
==== 3D haptic displays ====
These devices use the sense of touch to simulate the physical interaction between the user and a virtual object. There are three different types of 3D haptic displays: those that provide the user with a sense of force, those that simulate the sense of touch, and those that do both. The main features that distinguish these devices are: haptic presentation capability, resolution and [[ergonomics]]. The human haptic system relies on two fundamental kinds of cues, tactile and kinesthetic. Tactile cues arise from the wide variety of skin receptors located below the surface of the skin that provide information about texture, temperature, pressure and damage. Kinesthetic cues arise from the many receptors in the muscles, joints and tendons that provide information about the angle of joints and the stress and length of muscles.
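A common way to render a kinesthetic force cue is to model contact with a virtual surface as a spring-damper. The following sketch shows that idea for a one-dimensional virtual wall; the stiffness and damping constants are illustrative assumptions, not values from any particular device.

<syntaxhighlight lang="python">
# A minimal sketch of kinesthetic (force) rendering for a haptic
# display: a virtual wall modeled as a spring-damper. The constants
# k and b are illustrative assumptions.

def wall_force(probe_pos, probe_vel, wall_pos=0.0, k=800.0, b=2.0):
    """Return the reaction force (N) sent back to the device.

    The wall occupies the region probe_pos < wall_pos. While the probe
    penetrates it, a spring (k, N/m) pushes the probe out and a damper
    (b, N*s/m) removes energy so the wall feels stiff but stable.
    """
    penetration = wall_pos - probe_pos
    if penetration <= 0.0:           # outside the wall: no contact force
        return 0.0
    return k * penetration - b * probe_vel

# Probe 5 mm inside the wall, still moving inward at 0.1 m/s.
print(wall_force(-0.005, -0.1))      # 800*0.005 - 2*(-0.1) = 4.2 N
</syntaxhighlight>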
=== 3D user interface input hardware ===
These hardware devices are called input devices and their aim is to capture and interpret the actions performed by the user. The [[degrees of freedom]] (DOF) are one of the main features of these systems. Classical interface components (such as mice, keyboards and arguably touchscreens) are often inappropriate for non-2D interaction needs.<ref name ="BowmanBook" /> These systems are also differentiated according to how much physical interaction is needed to use the device: purely active devices need to be manipulated to produce information, while purely passive devices do not.
The main categories of these devices are standard (desktop) input devices, tracking devices, control devices, navigation equipment, [[gesture interface]]s, [[Mouse (computing)|3D mice]], and [[brain–computer interface]]s.
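As an illustration of degrees of freedom, the following sketch shows the data a 6-DOF tracker typically reports each frame: three positional coordinates plus an orientation, stored here as a unit quaternion. The type and field names are illustrative, not drawn from any particular device's SDK.

<syntaxhighlight lang="python">
# A minimal sketch of a 6-DOF input sample: three translational and
# three rotational degrees of freedom. Names are illustrative.

from dataclasses import dataclass

@dataclass
class Pose6DOF:
    # three translational degrees of freedom: position in metres
    x: float
    y: float
    z: float
    # three rotational degrees of freedom, stored as a unit quaternion
    qw: float
    qx: float
    qy: float
    qz: float

# A tracker held 1.2 m up, rotated 90 degrees about the vertical axis.
pose = Pose6DOF(0.0, 1.2, 0.0, qw=0.7071, qx=0.0, qy=0.7071, qz=0.0)
print(pose)
</syntaxhighlight>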
==== Desktop input devices ====
[[File:Wiimote-Safety-First.jpg|thumb|Wiimote device]]
The [[Wii Remote]] does not offer a technology based on 6-DOF.
This type of device can be affected by external [[infra-red]] sources such as light bulbs or candles, causing errors in positional accuracy.
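The following sketch illustrates how sensor-bar style pointing can be derived from the 2D infra-red blobs reported by such a camera, and why a stray infra-red source appears as an extra blob that corrupts the estimate. The camera resolution and blob format are assumptions for illustration.

<syntaxhighlight lang="python">
# A minimal sketch of pointer estimation from infra-red blobs, as in
# sensor-bar style tracking. The resolution and blob format are
# illustrative assumptions; an extra blob from a candle or light bulb
# would shift the midpoint and hence the pointer, as described above.

def pointer_from_blobs(blobs, cam_w=1024, cam_h=768):
    """Map the midpoint of two IR blobs to a normalized screen
    position in [0, 1] x [0, 1]."""
    if len(blobs) < 2:
        return None                        # sensor bar not in view
    (x1, y1), (x2, y2) = sorted(blobs)[:2] # illustrative: pick two blobs
    mx, my = (x1 + x2) / 2, (y1 + y2) / 2
    # The camera sees the bar mirrored, so invert both axes.
    return 1.0 - mx / cam_w, 1.0 - my / cam_h

print(pointer_from_blobs([(300, 400), (500, 400)]))  # ~ (0.61, 0.48)
</syntaxhighlight>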
===== Google Tango Devices =====
[[File:Google ATAP's Project Tango tablet (15387052663).jpg|thumb|Google's Project Tango tablet, 2014]] The [[Tango (platform)|Tango Platform]] is an augmented reality computing platform, developed and authored by the Advanced Technology and Projects (ATAP), a skunkworks division of Google. It uses computer vision and internal sensors (like gyroscopes) to enable mobile devices, such as smartphones and tablets, to detect their position relative to the world around them without using GPS or other external signals. It can therefore be used to provide 6-DOF input which can also be combined with its multi-touch screen.<ref>{{cite journal | last1=Besancon | first1=Lonni | last2=Issartel | first2=Paul | last3=Ammi | first3=Mehdi | last4=Isenberg | first4=Tobias | title=Hybrid Tactile/Tangible Interaction for 3D Data Exploration | journal=IEEE Transactions on Visualization and Computer Graphics | volume=23 | issue=1 | year=2017 | issn=1077-2626 | doi=10.1109/tvcg.2016.2599217 | pmid=27875202 | pages=881–890 | s2cid=16626037 | url = https://hal.inria.fr/hal-01372922/document }}</ref> The Google Tango devices can be seen as more integrated solutions than the early prototypes combining spatially-tracked devices with touch-enabled-screens for 3D environments.<ref>{{cite conference | last1=Fitzmaurice | first1=George W. | last2=Buxton | first2=William | title=Proceedings of the ACM SIGCHI Conference on Human factors in computing systems | chapter=An empirical evaluation of graspable user interfaces | publisher=ACM Press | ___location=New York, New York, USA | year=1997 | pages=43–50 | isbn=0-89791-802-9 | doi=10.1145/258549.258578 | chapter-url = http://www.dgp.toronto.edu/~gf/papers/PhD%20-%20Graspable%20UIs/Thesis.gf.html | doi-access=free }}</ref><ref>{{cite conference | last1=Angus | first1=Ian G. | last2=Sowizral | first2=Henry A. | editor-last=Fisher | editor-first=Scott S. | editor2-last=Merritt | editor2-first=John O. | editor3-last=Bolas | editor3-first=Mark T. | title=Embedding the 2D interaction metaphor in a real 3D virtual environment | series=Stereoscopic Displays and Virtual Reality Systems II | publisher=SPIE | date=1995-03-30 | volume=2409 | pages=282–293 | doi=10.1117/12.205875 }}</ref><ref>{{cite conference | last1=Poupyrev | first1=I. | last2=Tomokazu | first2=N. | last3=Weghorst | first3=S. | title=Proceedings. IEEE 1998 Virtual Reality Annual International Symposium (Cat. No.98CB36180) | chapter=Virtual Notepad: handwriting in immersive VR | year=1998 | pages=126–132 | publisher=IEEE Comput. Soc | isbn=0-8186-8362-7 | doi=10.1109/vrais.1998.658467 | chapter-url = http://www8.cs.umu.se/kurser/TDBD12/HT02/papers/virtual%20notepad.pdf }}</ref>
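The principle behind such visual-inertial tracking can be illustrated with a complementary filter that blends a fast but drifting gyroscope integral with a slower, drift-free vision-based estimate. The following sketch shows one filter step for a single rotation axis; it is not the Tango platform's actual algorithm, and all values are assumptions.

<syntaxhighlight lang="python">
# A minimal sketch of the sensor-fusion idea behind visual-inertial
# tracking: a complementary filter blending gyroscope integration
# with a vision-based estimate. Illustrative only; not Tango's
# actual algorithm.

def fuse_heading(prev_heading, gyro_rate, vision_heading, dt, alpha=0.98):
    """One filter step for a single rotation axis (radians).

    alpha close to 1 trusts the smooth gyro signal in the short term,
    while the vision term slowly pulls the estimate back, cancelling
    the gyro's accumulated drift.
    """
    gyro_heading = prev_heading + gyro_rate * dt
    return alpha * gyro_heading + (1.0 - alpha) * vision_heading

heading = 0.0
for _ in range(100):                       # 1 s of samples at 100 Hz
    heading = fuse_heading(heading, gyro_rate=0.5, vision_heading=0.5, dt=0.01)
print(round(heading, 3))
</syntaxhighlight>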
===== Microsoft Kinect =====
===== Leap Motion =====
[[File:Leap Motion Controller.JPG|thumb|Leap Motion Controller]]
The [[Leap Motion]] controller is a small device that connects to a computer via USB. Using two cameras and infra-red LEDs, it analyzes a roughly hemispherical area extending about 1 meter from its surface, capturing up to 300 frames per second; this information is sent to the computer to be processed by the company's proprietary software.
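The following sketch illustrates consuming hand-tracking frames of the kind described above, keeping only fingertip positions that fall inside the tracked hemisphere. The frame format and function names are hypothetical placeholders, not the vendor's actual SDK.

<syntaxhighlight lang="python">
# A minimal sketch of filtering hand-tracking data to a hemispherical
# interaction zone. The frame format is a hypothetical placeholder,
# not the vendor's actual SDK.

import math

def in_interaction_zone(point, radius_m=1.0):
    """True if a tracked point lies inside the hemisphere of the
    given radius above the device (device at the origin, y up)."""
    x, y, z = point
    return y >= 0.0 and math.sqrt(x * x + y * y + z * z) <= radius_m

def fingertips_in_zone(frame):
    """Keep only fingertip positions inside the tracked hemisphere."""
    return [p for p in frame["fingertips"] if in_interaction_zone(p)]

frame = {"fingertips": [(0.1, 0.3, 0.0), (0.2, -0.1, 0.0), (0.9, 0.6, 0.3)]}
print(fingertips_in_zone(frame))           # only the first point passes
</syntaxhighlight>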
* [[Interaction technique]]
* [[Interaction design]]
* [[Cave Automatic Virtual Environment]] (CAVE)
== References ==
{{DEFAULTSORT:3dui}}
[[Category:Human–computer interaction]]
[[Category:User interface techniques]]
[[Category:3D human-computer interaction]]