<!-- Please do not remove or change this AfD message until the discussion has been closed. -->
{{Article for deletion/dated|page=Virtual Human Interaction Lab|timestamp=20190203200820|year=2019|month=February|day=3|substed=yes|help=off}}
<!-- Once discussion is closed, please place on talk page: {{Old AfD multi|page=Virtual Human Interaction Lab|date=3 February 2019|result='''keep'''}} -->
<!-- End of AfD message, feel free to edit beyond this point -->
{{refcleanup|date=July 2011}}
{{Infobox laboratory
|image =
|established = 2003
|research_field = [[Virtual reality]]
|director = [http://comm.stanford.edu/faculty/bailenson/ Jeremy Bailenson]
|city = [[Stanford, California|Stanford]], CA, [[United States]]
|address = McClatchy Hall, Building 120 <br/>450 Serra Mall<br/>Stanford, CA 94309<br/>USA
|nickname = VHIL
}}
 
The '''Virtual Human Interaction Lab''' ('''VHIL''') is a research laboratory at [[Stanford University]]. It was founded in 2003 by Jeremy Bailenson, associate professor of communication at Stanford. The lab conducts research for the Communication Department.
 
==History==
Its founding director was Stanford professor Jeremy Bailenson.<ref name="Chronicle">Everett Cook, December 22, 2017 [https://www.sfchronicle.com/collegesports/article/Now-used-far-beyond-Stanford-virtual-reality-12451227.php Now used far beyond Stanford, virtual reality keeps growing at the Farm], ''[[San Francisco Chronicle]]''</ref> As of April 2014, it had an advanced [[virtual reality]] lab and setup,<ref name="NewYork">Farhad Manjoo, April 2, 2014 [https://www.nytimes.com/2014/04/03/technology/personaltech/virtual-reality-perfect-for-an-immersive-society.html If You Like Immersion, You’ll Love This Reality], ''[[The New York Times]]''</ref> which was used to teach visitors and students about various topics.<ref name="Nbc">Joe Rosato Jr., December 20, 2018 [https://www.nbcbayarea.com/news/local/Stanford-Takes-Students-on-Virtual-Undersea-Journey-503250741.html Stanford Takes Students on Virtual Undersea Journey], ''[[NBC]]''</ref> The lab's VR software is "free to any interested organization."<ref name="Today">Marco della Cava, April 8, 2016 [https://www.usatoday.com/story/tech/news/2016/04/08/virtual-reality-tested-tool-confront-racism-sexism/82674406/ Virtual reality tested by NFL as tool to confront racism, sexism], ''[[USA Today]]''</ref> According to the ''[[Los Angeles Times]]'', it was at the "forefront of the [virtual reality content] movement" in 2015, at which point Bailenson remained the head of the organization.<ref name="Times">Steven Zeitchik, April 24, 2015 [https://www.latimes.com/entertainment/movies/moviesnow/la-et-mn-vr-watch-stanford-bailenson-zuckerburg-tribeca-20150424-story.html Title], ''[[Los Angeles Times]]''</ref>
 
==VR projects==
*'''The Crystal Reef''' (2016) - Premiered at the 2016 [[Tribeca Film Festival]] and was later featured by ''[[TIME]]''. It is a 360-degree video VR experience about the effects of [[ocean acidification]].<ref name="Stanford">Stanford University [https://vhil.stanford.edu/the-crystal-reef/ The Crystal Reef], ''[[Stanford]]''</ref><ref name="Phys">Rob Jordan of Stanford University, November 30, 2018 [https://phys.org/news/2018-11-virtual-reality-powerful-environmental-tool.html Virtual reality could serve as powerful environmental education tool], ''[[Phys.org]]''</ref>
*'''The Stanford Ocean Acidification Experience''' (2016) - Also called ''SOAE'', the experience premiered at the 2016 Tribeca Film Festival (then called ''The Crystal Reef: Interactive'') in conjunction with the 360 video. SOAE is an immersive VR experience that allows users to become a scientist and interact with their environment. SOAE has been exhibited at the [[US Senate]] and the [[Palau National Congress]].<ref>{{Cite web|url=https://vhil.stanford.edu/soae/|title = The Stanford Ocean Acidification Experience}}</ref>
*'''Becoming Homeless: A Human Experience''' (2017) - Premiered at the 2017 Tribeca Film Festival,<ref>{{Cite web|url=http://vhil.stanford.edu/becominghomeless/|title=Becoming Homeless: A Human Experience}}</ref> it was originally developed for the lab's "Empathy At Scale" research project.<ref name="Kqed">Rachael Myrow, June 27, 2016 [https://www.kqed.org/arts/11744066/stanfords-virtual-reality-lab-cultivates-empathy-for-the-homeless Stanford's Virtual Reality Lab Cultivates Empathy for the Homeless], ''[[KQED Inc.|KQED]]''</ref>
*'''Coral Compass: Fighting Climate Change in Palau''' (2018) - Premiered at the 2018 Tribeca Film Festival. In this 360 experience, viewers travel to Palau; the piece looks at how the country's leaders are working with scientists to adapt to [[climate change]].<ref>{{Cite web|url=http://vhil.stanford.edu/coralcompass/|title = Coral Compass}}</ref>
 
==Research==
===Current research===
*Digital anonymity - in 2010, the group was studying how digital media users who anonymize themselves via their avatars may be perceived differently from media users who use avatars that resemble their physical-world selves.<ref name="Avatar">[http://news.stanford.edu/news/2010/february22/avatar-behavior-study-022510.html/ Avatar Behavior Study]</ref>
*Mediators and mimicry - researching how [[online dispute resolution]] (ODR) may help mediators strike a delicate balance between developing rapport and maintaining impartiality.
*Out-of-body experience - studies ways to create and measure self-presence, also described as an out-of-body experience.
*Augmented perspective taking - researching how immersion and interactivity can enhance the ability to understand other minds and how the virtual experience can influence our attitudes and behaviors.
*Self-endorsing - researching how using the self as the source of persuasive messages can influence attitudes and behaviors in various persuasive contexts.
*Automatic facial feature detection and analyses - this methodology uses just a small webcam and computer software to predict an individual's errors and performance quality based only on facial features that are tracked and logged automatically.
 
===Former topics===
The lab has studied topics such as:
 
*[[Proteus effect]]
*[[Transformed social interaction]]
 
*Facial Identity Capture<ref>[http://vhil.stanford.edu/pubs/2005/identity-capture.html/ Identity Capture]</ref> and presidential candidate preference - it was found that by morphing a subject's face in a 40:60 ratio with that of John Kerry or George W. Bush, the subject was more likely to prefer the candidate who shared their features. This study has implications concerning the use of a voter's image and face morphing during national elections to sway a voter's decision.
*Virtual aging's effect on financial decisions<ref name="Abc">{{cite web |title=Virtual Reality Study Encourages Subjects to Save for the Future |website=[[ABC News (United States)|ABC News]] |url=https://abcnews.go.com/Technology/virtual-reality-study-encourages-subjects-save-future/story?id=12358259/}}</ref>
*Eye witness testimony and virtual police lineups - in collaboration with the Research Center for Virtual Environments and Behavior, the [[National Science Foundation]], and the [[Federal Judicial Center]], VHIL examined witnesses' ability to identify suspects in a police lineup conducted in a virtual environment. VR lets witnesses examine the lineup in a 3D environment at different distances, and even view the suspect at a recreated scene of the crime.
*Diversity simulation - allowing participants to experience another race or gender

== References ==
== External links ==
*[http://www.stanfordvr.com/ Virtual Human Interaction Lab]
*[http://news.stanford.edu/news/2010/february22/avatar-behavior-study-022510.html/ Avatar Behavior Study]
*[http://abcnews.go.com/Technology/virtual-reality-study-encourages-subjects-save-future/story?id=12358259/ Virtual Reality Encourages Subjects to Save for the Future]
*[http://vhil.stanford.edu/pubs/2005/identity-capture.html/ Identity Capture]
 
[[Category:Stanford University]]