==== Desktop input devices ====
These devices are designed for 3D interaction on a desktop. Many of them were originally designed for traditional two-dimensional interaction, but with an appropriate mapping between the system and the device they can also work in three dimensions. There are several types: [[Keyboard (computer)|keyboards]], [[Mouse (computing)|2D mice]] and trackballs, pen-based tablets and styluses, and [[joystick]]s. A 2D mouse drag, for example, can drive 3D rotation through a virtual-trackball mapping, as sketched below. Nonetheless, many studies have questioned the appropriateness of desktop interface components for 3D interaction,<ref name="BowmanBook" /><ref>{{cite conference | last1=Chen | first1=Michael | last2=Mountford | first2=S. Joy | last3=Sellen | first3=Abigail | title=Proceedings of the 15th annual conference on Computer graphics and interactive techniques - SIGGRAPH '88 | chapter=A study in interactive 3-D rotation using 2-D control devices | publisher=ACM Press | ___location=New York, New York, USA | year=1988 | pages=121–129 | isbn=0-89791-275-6 | doi=10.1145/54852.378497 | chapter-url=https://www.microsoft.com/en-us/research/uploads/prod/2016/08/3-d-rotation-88.pdf}}</ref><ref>{{cite journal | last1=Yu | first1=Lingyun | last2=Svetachov | first2=Pjotr | last3=Isenberg | first3=Petra | last4=Everts | first4=Maarten H. | last5=Isenberg | first5=Tobias | title=FI3D: Direct-Touch Interaction for the Exploration of 3D Scientific Visualization Spaces | journal=IEEE Transactions on Visualization and Computer Graphics | volume=16 | issue=6 | date=2010-10-28 | issn=1077-2626 | doi=10.1109/TVCG.2010.157 | pmid=20975204 | pages=1613–1622 | s2cid=14354159 | url=https://hal.inria.fr/inria-00587377/PDF/Yu_2010_FDT.pdf}}</ref> though this is still debated.<ref>{{cite conference | last1=Terrenghi | first1=Lucia | last2=Kirk | first2=David | last3=Sellen | first3=Abigail | last4=Izadi | first4=Shahram | title=Proceedings of the SIGCHI Conference on Human Factors in Computing Systems | chapter=Affordances for manipulation of physical versus digital media on interactive surfaces | publisher=ACM Press | ___location=New York, New York, USA | year=2007 | pages=1157–1166 | isbn=978-1-59593-593-9 | doi=10.1145/1240624.1240799 }}</ref><ref name="ComparisonArticle">{{cite conference | last1=Besançon | first1=Lonni | last2=Issartel | first2=Paul | last3=Ammi | first3=Mehdi | last4=Isenberg | first4=Tobias | title=Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems | chapter=Mouse, Tactile, and Tangible Input for 3D Manipulation | publisher=ACM Press | ___location=New York, New York, USA | year=2017 | pages=4727–4740 | isbn=978-1-4503-4655-9 | doi=10.1145/3025453.3025863 | chapter-url=https://hal.inria.fr/hal-01436206/document | arxiv=1603.08735 }}</ref>
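As an illustration of such a mapping, the following sketch converts a 2D mouse drag into a 3D rotation axis and angle using a virtual-trackball projection, in the spirit of the 2D-control-device rotation techniques studied by Chen et al. The function names and the hyperbolic fall-off outside the sphere are illustrative choices, not any particular system's API.

<syntaxhighlight lang="python">
import numpy as np

def trackball_point(x, y, radius=1.0):
    # Project a 2D screen position (normalized to [-1, 1]) onto a
    # virtual sphere; outside the sphere, fall back to a hyperbolic
    # sheet so the mapping stays continuous (a common convention).
    d2 = x * x + y * y
    r2 = radius * radius
    z = np.sqrt(r2 - d2) if d2 <= r2 / 2 else r2 / (2 * np.sqrt(d2))
    return np.array([x, y, z])

def drag_to_rotation(p_prev, p_curr):
    # Rotation (axis, angle in radians) that carries the trackball
    # point under the previous mouse position to the current one.
    v0 = trackball_point(*p_prev)
    v1 = trackball_point(*p_curr)
    axis = np.cross(v0, v1)
    n = np.linalg.norm(axis)
    if n < 1e-9:                       # no drag: identity rotation
        return np.array([0.0, 0.0, 1.0]), 0.0
    cos_a = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))
    return axis / n, np.arccos(np.clip(cos_a, -1.0, 1.0))
</syntaxhighlight>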
==== Tracking devices ====
===== Google Tango devices =====
[[File:Google ATAP's Project Tango tablet (15387052663).jpg|thumb|Google's Project Tango tablet, 2014]] The [[Tango (platform)|Tango Platform]] is an augmented reality computing platform, developed and authored by Advanced Technology and Projects (ATAP), a skunkworks division of Google. It uses computer vision and internal sensors (such as gyroscopes) to enable mobile devices, such as smartphones and tablets, to detect their position relative to the world around them without using GPS or other external signals. It can therefore provide 6-DOF input, which can also be combined with the device's multi-touch screen.<ref>{{cite journal | last1=Besancon | first1=Lonni | last2=Issartel | first2=Paul | last3=Ammi | first3=Mehdi | last4=Isenberg | first4=Tobias | title=Hybrid Tactile/Tangible Interaction for 3D Data Exploration | journal=IEEE Transactions on Visualization and Computer Graphics | volume=23 | issue=1 | year=2017 | issn=1077-2626 | doi=10.1109/tvcg.2016.2599217 | pmid=27875202 | pages=881–890 | s2cid=16626037 | url=https://hal.inria.fr/hal-01372922/document }}</ref> Google Tango devices can be seen as more integrated solutions than the early prototypes that combined spatially tracked devices with touch-enabled screens for 3D environments.<ref>{{cite conference | last1=Fitzmaurice | first1=George W. | last2=Buxton | first2=William | title=Proceedings of the ACM SIGCHI Conference on Human factors in computing systems | chapter=An empirical evaluation of graspable user interfaces | publisher=ACM Press | ___location=New York, New York, USA | year=1997 | pages=43–50 | isbn=0-89791-802-9 | doi=10.1145/258549.258578 | chapter-url=http://www.dgp.toronto.edu/~gf/papers/PhD%20-%20Graspable%20UIs/Thesis.gf.html }}</ref><ref>{{cite conference | last1=Angus | first1=Ian G. | last2=Sowizral | first2=Henry A. | editor-last=Fisher | editor-first=Scott S. |
===== Microsoft Kinect =====
==== Selection ====
The task of selecting objects or 3D volumes in a 3D environment requires first being able to find the desired target and then being able to select it. Most 3D datasets/environments suffer from occlusion problems,<ref>{{cite conference | last=Shneiderman | first=B. | title=Proceedings 1996 IEEE Symposium on Visual Languages | chapter=The eyes have it: a task by data type taxonomy for information visualizations | year=1996 | pages=336–343 | publisher=IEEE Comput. Soc. Press | isbn=0-8186-7508-X | doi=10.1109/vl.1996.545307 | hdl=1903/466 | hdl-access=free }}</ref> so the first step of finding the target relies on manipulating the viewpoint or the 3D data itself in order to properly identify the object or volume of interest. This initial step is therefore tightly coupled with manipulation in 3D. Once the target is visually identified, users have access to a variety of techniques to select it.
Usually, the system provides the user with a 3D cursor represented as a human hand, whose movements correspond to the motion of the hand tracker. This virtual hand technique<ref>{{cite journal | last1=Poupyrev | first1=I. | last2=Ichikawa | first2=T. | last3=Weghorst | first3=S. | last4=Billinghurst | first4=M. | title=Egocentric Object Manipulation in Virtual Environments: Empirical Evaluation of Interaction Techniques | journal=Computer Graphics Forum | volume=17 | issue=3 | year=1998 | issn=0167-7055 | doi=10.1111/1467-8659.00252 | pages=41–52 | citeseerx=10.1.1.95.4933 | s2cid=12784160 }}</ref> is rather intuitive because it simulates real-world interaction with objects, but it is limited to objects within the user's reach.
To overcome this limit, many techniques have been suggested, such as the Go-Go technique.<ref>{{cite book | last1=Poupyrev | first1=Ivan | last2=Billinghurst | first2=Mark | last3=Weghorst | first3=Suzanne | last4=Ichikawa | first4=Tadao | title=The go-go interaction technique: non-linear mapping for direct manipulation in VR | pages=79–80 | website=ACM Digital Library | doi=10.1145/237091.237102 | chapter-url=http://www.ivanpoupyrev.com/e-library/1998_1996/uist96.pdf | access-date=2018-05-18 | chapter=The go-go interaction technique | year=1996 | isbn=978-0897917988 | s2cid=1098140 }}</ref> This technique extends the reachable area through a non-linear mapping of the hand: while the user's hand stays within a fixed threshold distance the virtual hand follows it directly, but beyond that threshold the mapping becomes non-linear and the virtual arm grows, as sketched below.
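A minimal sketch of this mapping, following the piecewise formula given by Poupyrev et al.: linear (classic virtual hand) within the threshold distance, quadratic growth beyond it. The parameter names are illustrative, and the coefficient <code>k</code> is a design choice left open by the paper.

<syntaxhighlight lang="python">
def gogo_hand_distance(r_real, d_threshold, k=0.5):
    """Distance of the virtual hand from the user's body, given the
    real hand's distance r_real. Inside d_threshold the mapping is
    the identity (classic virtual hand); beyond it the virtual arm
    grows quadratically, extending the reachable area. The original
    paper suggests a threshold of roughly 2/3 of the arm's length
    and a coefficient 0 < k < 1."""
    if r_real < d_threshold:
        return r_real
    return r_real + k * (r_real - d_threshold) ** 2
</syntaxhighlight>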
Another technique for selecting and manipulating objects in 3D virtual spaces consists in pointing at objects with a virtual ray emanating from the virtual hand.<ref>{{cite techreport | first=Mark R. | last=Mine | title=Virtual Environment Interaction Techniques | institution=Department of Computer Science University of North Carolina | date=1995 | url=http://www.cs.unc.edu/techreports/95-018.pdf}}</ref> When the ray intersects an object, that object can be manipulated. Several variations of this technique have been proposed, such as the aperture technique, which uses a conic pointer originating at the user's eyes, estimated from the head ___location, to select distant objects; a hand sensor is used to adjust the size of the conic pointer.
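A minimal sketch of such ray-based picking, assuming each selectable object is approximated by a bounding sphere and the nearest intersection along the ray wins. The scene representation and function name are illustrative, not a specific toolkit's API.

<syntaxhighlight lang="python">
import numpy as np

def pick_by_ray(origin, direction, objects):
    """Return the payload of the object whose bounding sphere the ray
    hits first, or None. `objects` is a list of (center, radius,
    payload) tuples -- an illustrative scene representation."""
    direction = direction / np.linalg.norm(direction)
    best, best_t = None, np.inf
    for center, radius, payload in objects:
        oc = origin - center
        b = np.dot(oc, direction)
        c = np.dot(oc, oc) - radius * radius
        disc = b * b - c                  # discriminant of |o + t*d - c|^2 = r^2
        if disc < 0.0:
            continue                      # ray misses this sphere
        t = -b - np.sqrt(disc)            # distance to nearest intersection
        if 0.0 <= t < best_t:
            best, best_t = payload, t
    return best
</syntaxhighlight>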
Many other techniques, relying on different input strategies, have also been developed.<ref>{{cite journal | last1=Argelaguet | first1=Ferran | last2=Andujar | first2=Carlos | s2cid=8565854 | title=A survey of 3D object selection techniques for virtual environments | journal=Computers & Graphics | volume=37 | issue=3 | year=2013 | issn=0097-8493 | doi=10.1016/j.cag.2012.12.003 | pages=121–136 | url=https://hal.archives-ouvertes.fr/hal-00907787/file/Manuscript.pdf }}</ref><ref>{{cite journal | last1=Besançon | first1=Lonni | last2=Sereno | first2=Mickael | last3=Yu | first3=Lingyun | last4=Ammi | first4=Mehdi | last5=Isenberg | first5=Tobias | title=Hybrid Touch/Tangible Spatial 3D Data Selection | journal=Computer Graphics Forum | publisher=Wiley | volume=38 | issue=3 | year=2019 | issn=0167-7055 | doi=10.1111/cgf.13710 | pages=553–567 | s2cid=199019072 | url=https://hal.inria.fr/hal-02079308/file/Besancon_2019_HTT.pdf }}</ref>
==== Manipulation ====