{{Short description|None}}
{{Use mdy dates|date=October 2021}}
The '''history of [[computer animation]]''' began as early as the 1940s and 1950s, when people began to experiment with [[computer graphics]] – most notably by [[John Whitney (animator)|John Whitney]]. It was only by the early 1960s, when [[digital computer]]s had become widely established, that new avenues for innovative computer graphics blossomed. Initially, uses were mainly for scientific, engineering and other research purposes, but artistic experimentation began to make its appearance by the mid-1960s – most notably by Dr. Thomas Calvert. By the mid-1970s, many such efforts were beginning to enter into public media. Much computer graphics at this time involved [[2D computer graphics|2D]] imagery, though increasingly, as computer power improved, efforts to achieve 3D realism became the emphasis. By the late 1980s, photo-realistic [[3D computer graphics|3D]] was beginning to appear in films, and by the mid-1990s had developed to the point where 3D animation could be used for entire feature film production.
 
==The earliest pioneers: 1940s to mid-1960s==
 
===John Whitney===
[[John Whitney (animator)|John Whitney Sr.]] (1917–1995) was an American animator, composer and inventor, widely considered to be one of the fathers of computer animation.<ref>[http://www.siggraph.org/artdesign/profile/whitney/whitney.html SIGGRAPH Whitney Profile page] {{Webarchive|url=https://web.archive.org/web/20120416170517/http://www.siggraph.org/artdesign/profile/whitney/whitney.html |date=April 16, 2012 }} (retrieved April 20, 2012)</ref> In the 1940s and 1950s, he and his brother James created a series of experimental films made with a custom-built device based on old anti-aircraft analog computers ([[Kerrison Predictor]]s) connected by [[servomechanism]]s to control the motion of lights and lit objects – the first example of [[motion control photography]]. One of Whitney's best known works from this early period was the animated title sequence from [[Alfred Hitchcock]]'s 1958 film ''[[Vertigo (film)|Vertigo]]'',<ref>[https://books.google.com/books?id=C-GeAgAAQBAJ&dq=%22tormented+inner+landscape%22+Jules+Lissajous+hollywood&pg=PT110 Alex Through the Looking-Glass: How Life Reflects Numbers and Numbers Reflect Life]</ref> which he collaborated on with graphic designer [[Saul Bass]]. In 1960, Whitney established his company Motion Graphics Inc, which largely focused on producing titles for film and television, while continuing further experimental works. In 1968, his pioneering motion control model photography was used on [[Stanley Kubrick]]'s film ''[[2001: A Space Odyssey (film)|2001: A Space Odyssey]]'', and also for the [[slit-scan photography]] technique used in the film's "Star Gate" finale.
 
===The first digital image===
Edward Zajac produced one of the first computer generated films at Bell Labs in 1963, titled ''A Two Gyro Gravity Gradient [[Spacecraft attitude control|attitude control]] System'', which demonstrated that a satellite could be stabilized to always have a side facing the Earth as it orbited.<ref>[http://dada.compart-bremen.de/node/4693 Edward Zajac on CompArt database] (retrieved 2012/04/20)</ref>
 
[[Ken Knowlton]] developed the [[Beflix]] (Bell Flicks) animation system in 1963, which was used to produce dozens of artistic films by artists [[Stan VanDerBeek]], Knowlton and [[Lillian Schwartz]].<ref>Knowlton, K. C., "Computer-Generated Movies," ''Science'', Vol. 150, (November 1965), pp. 1116–1120.</ref> Instead of raw programming, Beflix worked using simple "graphic primitives", like draw a line, copy a region, fill an area, zoom an area, and the like.
 
In 1965, Michael Noll created computer-generated stereographic 3D movies, including a ballet of stick figures moving on a stage.<ref>Noll, A. Michael, "Computer-Generated Three-Dimensional Movies", ''Computers and Automation'', Vol. 14, No. 11, (November 1965), pp 20–23.</ref> Some movies also showed four-dimensional hyper-objects projected to three dimensions.<ref>Noll, A. Michael, "A Computer Technique for Displaying n-Dimensional Hyperobjects", ''Communications of the ACM'', Vol. 10, No. 8, (August 1967), pp 469–473.</ref> Around 1967, Noll used the 4D animation technique to produce computer-animated title sequences for the commercial film short ''Incredible Machine'' (produced by Bell Labs) and the TV special ''The Unexplained'' (produced by Walt DeFaria).<ref>Noll, A. Michael, "Computer Animation and the Fourth Dimension", ''AFIPS Conference Proceedings'', Vol. 33, 1968 Fall Joint Computer Conference, ''Thompson Book Company'': Washington, D.C. (1968), pp. 1279–1283.</ref> Many projects in other fields were also undertaken at this time.
 
===Boeing-Wichita===
In the 1960s, [[William Fetter]] was a graphic designer for [[Boeing]] at [[Wichita, Kansas|Wichita]], and was credited with coining the phrase "Computer Graphics" to describe what he was doing at Boeing at the time (though Fetter himself credited this to colleague Verne Hudson).<ref>[http://courses.washington.edu/eatreun/html/history/h_nw.html University of Washington History: William Fetter] (retrieved 2012/04/20)</ref>
<ref>[http://www.elysiuminc.com/gpdis/2014/DX28_Boeing-Kasik-Senesac-Visualization-DX-Open.pdf Boeing-Wichita]</ref> Fetter's work included the 1964 development of ergonomic descriptions of the human body that are both accurate and adaptable to different environments, and this resulted in the first 3D animated [[Wire-frame model|"wire-frame"]] figures.<ref name="BoeingMan">{{cite web |url= https://www.boeing.com/features/innovation-quarterly/nov2017/feature-technical-computer-graphics.page |title= Something worth seeing |date= November 2017 |website= Boeing Innovation Quarterly |publisher= Boeing |access-date= April 9, 2019 |quote= In 1964, William Fetter, a Boeing technical illustrator, created the first digital model of a human body to evaluate engineering designs for ergonomic quality. Exploring reach and visual field issues, he plotted a series of individual models of "The Boeing Man," which later came to be known simply as "Boeman," and produced early computer animation sequences.}}</ref><ref name="BoeingMan2">{{cite web |url= https://secure.boeingimages.com/archive/William-Fetter%27s-Boeing-Man-2F3XC5YCZNC.html#/SearchResult&ITEMID=2F3XC5YCZNC&POPUPPN=1&POPUPIID=2F3XC5YCZNC |title= William Fetter's Boeing Man |website= Boeing Images |publisher= Boeing |access-date= April 9, 2019 |quote= William Fetter (1928–2002), a Boeing art director, was the first person to draw a human figure using a computer. This figure is known as the "Boeing Man." In 1960, Fetter coined the term "computer graphics" in a description of his work on cockpit design for the Boeing Company.}}</ref>
These human figures became some of the most iconic images in the early history of computer graphics, and were often referred to as the "Boeing Man". Fetter died in 2002.
 
===Ivan Sutherland===
[[Ivan Sutherland]] is considered by many to be the creator of interactive computer graphics, and an internet pioneer. He worked at the Lincoln Laboratory at MIT ([[Massachusetts Institute of Technology]]) in 1962, where he developed a program called ''Sketchpad I'', which allowed the user to interact directly with the image on the screen. This was the first [[graphical user interface]], and is considered one of the most influential computer programs an individual has ever written.<ref>[http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-574.pdf Sketchpad: A man-machine graphical communication system] (retrieved 2012/04/22)</ref>
 
==Mid-1960s to mid-1970s==
 
===The University of Utah===
[[University of Utah|Utah]] was a major center for computer animation in this period. The computer science faculty was founded by [[David C. Evans (computer scientist)|David Evans]] in 1965, and many of the basic techniques of 3D computer graphics were developed here in the early 1970s with [[DARPA|ARPA]] funding (''Advanced Research Projects Agency''). Research results included Gouraud, Phong, and Blinn shading, texture mapping, [[hidden-surface determination|hidden surface]] algorithms, curved [[subdivision surface|surface subdivision]], real-time line-drawing and raster image display hardware, and early virtual reality work.<ref>[http://www.cs.utah.edu/gdc/history/ Utah – Computer Graphics history] (retrieved 2012/04/22)</ref> In the words of Robert Rivlin in his 1986 book ''The Algorithmic Image: Graphic Visions of the Computer Age'', "almost every influential person in the modern computer-graphics community either passed through the University of Utah or came into contact with it in some way".<ref>The algorithmic image: graphic visions of the computer age, ''Harper & Row Publishers, Inc.'' New York, NY, USA 1986. {{ISBN|0914845802}}</ref>
 
==== Shaded 3D graphics ====
[[File:1967 512x512 Cube Rendering at Univ of Utah.png|thumb|An image of a cube generated at the University of Utah in 1967]]
In the mid-1960s, one of the most difficult problems in computer graphics was the [[Hidden-line removal|"hidden-line" problem]] – how to render a 3D model while properly removing the lines that should not be visible to the observer.<ref>{{Cite book |url=https://bitsavers.org/magazines/Datamation/196605.pdf |title=Datamation |date=May 1966 |pages=22–29}}</ref> One of the first successful approaches to this was published at the 1967 [[Fall Joint Computer Conference]] by Chris Wylie, David Evans, and Gordon Romney, and demonstrated shaded 3D objects such as cubes and [[Tetrahedron|tetrahedra]].<ref>{{Cite book |last1=Wylie |first1=Chris |last2=Romney |first2=Gordon |last3=Evans |first3=David |last4=Erdahl |first4=Alan |chapter=Half-tone perspective drawings by computer |date=1967-11-14 |title=Proceedings of the November 14-16, 1967, fall joint computer conference on - AFIPS '67 (Fall) |chapter-url=https://dl.acm.org/doi/10.1145/1465611.1465619 |___location=New York, NY, USA |publisher=Association for Computing Machinery |pages=49–58 |doi=10.1145/1465611.1465619 |isbn=978-1-4503-7896-3}}</ref> An improved version of this algorithm was demonstrated in 1968, including shaded renderings of 3D text, spheres, and buildings.<ref>{{Citation |last1=Romney |first1=Gordon W. |title=Real-time display of computer generated half-tone perspective pictures |date=1998-07-01 |work=Seminal graphics: pioneering efforts that shaped the field, Volume 1 |volume=1 |pages=283–288 |url=https://dl.acm.org/doi/10.1145/280811.281011 |access-date= |place=New York, NY, USA |publisher=Association for Computing Machinery |doi=10.1145/280811.281011 |isbn=978-1-58113-052-2 |last2=Watkins |first2=Gary S. |last3=Evans |first3=David C.}}</ref>
 
A shaded 3D computer animation of a colored [[Soma cube]] exploding into pieces was created at the University of Utah as part of Gordon Romney's 1969 PhD dissertation, along with shaded renderings of 3D text, 3D graphs, trucks, ships, and buildings.<ref>{{Cite book |last=Gordon W. Romney |url=https://archive.org/details/computerassisted0000unse_p7o2 |title=Computer Assisted Assembly and Rendering of Solids |date=August 1969 |publisher=University of Utah, Computer Science Dept. |others=Internet Archive}}</ref> The dissertation also coined the term "rendering" in reference to computer drawings of 3D objects. Another 3D shading algorithm was implemented by [[John Warnock]] for his 1969 dissertation.<ref>{{Cite thesis |last=Warnock |first=John Edward |title=A hidden surface algorithm for computer generated halftone pictures |date=June 1969 |degree=PhD |publisher=The University of Utah |url=https://dl.acm.org/doi/book/10.5555/905316 |doi=}}</ref>
[[File:1970 Church Rendering by Watkins at Univ of Utah.png|thumb|A color image of a church generated by the Watkins algorithm at the University of Utah in 1970]]
A truly real-time shading algorithm was developed by Gary Watkins for his 1970 PhD dissertation, and was the basis of the [[Gouraud shading]] technique, developed the following year.<ref>{{Cite book |last=Watkins |first=Gary |url=https://bitsavers.org/pdf/univOfUtah/UTECH-CSc-70-101_Watkins_Dissertation_Jun70.pdf |title=A real-time visible surface algorithm |date=June 1970 |publisher=The University of Utah}}</ref><ref>{{Cite thesis |last=Gouraud |first=Henri |title=Computer display of curved surfaces |date=1971 |degree=PhD |publisher=The University of Utah |url=https://dl.acm.org/doi/book/10.5555/905323 |doi=}}</ref> Robert Mahl's 1970 dissertation at the University of Utah described smooth shading of [[quadric surface]]s.<ref>{{Cite thesis |last=Mahl |first=Robert |url=https://collections.lib.utah.edu/details?id=704102 |title=Visible surface algorithms for quadric patches |date=December 1970 |publisher=The University of Utah}}</ref>
 
Further innovations in shaded 3D graphics at the University of Utah included a more realistic shading technique by [[Bui Tuong Phong]] for his dissertation in 1973 and texture mapping by [[Edwin Catmull]] for his 1974 dissertation.<ref>{{Cite book |last=Phong |first=Bui Tuong |url=https://collections.lib.utah.edu/details?id=712686 |title=Illumination of computer generated images |date=July 1973 |publisher=The University of Utah}}</ref><ref>{{Cite thesis |last=Catmull |first=Edwin Earl |url=https://collections.lib.utah.edu/details?id=2111909 |title=A subdivision algorithm for computer display of curved surfaces |date=December 1974 |publisher=The University of Utah}}</ref>
 
==== Virtual reality ====
Around 1972, a [[virtual reality headset]] known as the "Sorcerer's Apprentice" became operational at the University of Utah, which used [[head tracking]] and a device similar to [[Massachusetts Institute of Technology|MIT]]'s Lincoln Wand to track the user's hand in 3D space.<ref>{{Cite thesis |url=https://collections.lib.utah.edu/details?id=706529 |title=Graphical man/machine communications |date=December 1972 |publisher=The University of Utah |last1=Evans |first1=David }}</ref> This headset, like Ivan Sutherland's [[The Sword of Damocles (virtual reality)|"Sword of Damocles"]], was capable of simple, unshaded [[Wire-frame model|wireframe]] 3D graphics; however, the Sorcerer's Apprentice added the capability to create and manipulate 3D objects in real-time through the hand tracking device, termed the "wand". Commands to be performed by the 3D wand could be chosen by pointing the wand at a physical wall chart.<ref>{{Cite thesis |last=Vickers |first=Donald Lee |url=https://collections.lib.utah.edu/details?id=705942 |title=Sorcerer's apprentice: head-mounted display and wand |date=July 1974 |publisher=The University of Utah}}</ref>
 
==== Character rigging and keyframing ====
An important innovation in computer animation at the University of Utah was the creation of the program "KEYFRAME", which would allow a user to pose and [[Key frame|keyframe]] a [[Character rigging|rigged]] humanoid 3D character, create [[walk cycle]]s and other movements, [[Lip sync|lip-sync]] the character, all using a [[Computer mouse|mouse]]-based [[Graphical user interface|graphical interface]], and then render a shaded animation of the rigged character performing the walk cycle, hand movement, or other animation. This program, as well as one for creating a 3D animation of a football match, were created by Barry Wessler for his 1973 PhD dissertation.<ref>{{Cite book |last=Wessler |first=Barry David |url=https://collections.lib.utah.edu/details?id=712684 |title=Computer-assisted visual communication |date=July 1973 |publisher=The University of Utah}}</ref> The capabilities of the "KEYFRAME" program were demonstrated in a short film, ''Not Just Reality'', which featured walk cycles, lip syncing, facial expressions, and further movement of a shaded humanoid 3D character.<ref>{{Cite AV media |url=https://www.youtube.com/watch?v=0sl72MD6Ycc |title=Not Just Reality |date=2023-03-19 |last=jellyvista |access-date=2025-01-06 |via=YouTube}}</ref>
 
===Evans and Sutherland===
 
===Ohio State===
[[Charles Csuri]], an artist at The [[Ohio State University]] (OSU), started experimenting with the application of computer graphics to art in 1963. His efforts resulted in a prominent CG research laboratory that received funding from the [[National Science Foundation]] and other government and private agencies. The work at OSU revolved around animation languages, complex modeling environments, user-centric interfaces, human and creature motion descriptions, and other areas of interest to the discipline.<ref>[http://design.osu.edu/carlson/history/ACCAD-overview/overview.html A complete history of the Ohio State program] {{Webarchive|url=https://web.archive.org/web/20140605095322/http://design.osu.edu/carlson/history/ACCAD-overview/overview.html |date=June 5, 2014 }} (retrieved July 2, 2012)</ref><ref>"Computers and Art", by Charles Csuri and James Shaffer, ''AFIPS Conference Proceedings'', V33, FJCC, 1968.</ref><ref>[http://www.siggraph.org/artdesign/profile/csuri/ Charles Csuri profile at SIGGRAPH] {{Webarchive|url=https://web.archive.org/web/20141008033928/http://www.siggraph.org/artdesign/profile/csuri/ |date=October 8, 2014 }} (retrieved July 3, 2012)</ref>
 
===''Cybernetic Serendipity''===
 
===Scanimate===
The first machine to achieve widespread public attention in the media was [[Scanimate]], an analog [[computer animation]] system designed and built by Lee Harrison of the Computer Image Corporation in Denver. From around 1969 onward, Scanimate systems were used to produce much of the video-based animation seen on television in commercials, show titles, and other graphics. It could create animations in [[Real-time computer graphics|real time]], a great advantage over digital systems at the time.<ref>[http://scanimate.zfx.com Sieg, David W. (2003). Old-School Electronic Animation Central – Formerly the Scanimate Files.] {{webarchive|url=https://web.archive.org/web/20120515120200/http://scanimate.zfx.com/ |date=May 15, 2012 }} (Retrieved March 13, 2004)</ref> American animation studio [[Hanna-Barbera]] experimented with using Scanimate to create an early form of digital [[cutout animation|cutout style]]. A clip of artists using the machine to manipulate scanned images of ''Scooby-Doo'' characters, scaling and warping the artwork to simulate animation, is available at the [[Internet Archive]].<ref>{{cite AV media |people=Seig, David; Harrison, Lee |date=2004 |title=The Development of Computer Generated Animated Characters|type=DVD |language=English |url=https://www.worldcat.org/oclc/234090730 |access-date=July 29, 2022 |archive-url=https://web.archive.org/web/20220729201913/https://www.worldcat.org/title/scanimate-dvd-1/oclc/234090730 |archive-date=July 29, 2022 |oclc= 234090730}} [https://archive.org/details/SCANIMATEDVDCOMPLETO Alt URL]</ref>
 
===National Film Board of Canada===
The [[National Film Board of Canada]], already a world center for animation art, also began experimentation with computer techniques in 1969.<ref>"Retired NRC Scientists Burtnyk and Wein honoured as Fathers of Computer Animation Technology in Canada". ''Sphere'' (National Research Council of Canada) 4. 1996. (Retrieved April 20, 2011).</ref> The best known of these early pioneers was artist [[Peter Foldes]], who completed ''Metadata'' in 1971. This film comprised drawings animated by gradually changing from one image to the next, a technique known as "interpolating" (also known as "inbetweening" or "morphing"), which also featured in a number of earlier art examples during the 1960s.<ref name="NFBC-NRC">From [http://design.osu.edu/carlson/history/tree/nfbc.html "The Film Animator Today: Artists Without A Canvas"] {{Webarchive|url=https://web.archive.org/web/20120402221929/http://design.osu.edu/carlson/history/tree/nfbc.html |date=April 2, 2012 }} (retrieved April 22, 2012)</ref> In 1974, Foldes completed ''[[Hunger (1974 film)|Hunger / La Faim]]'', which was one of the first films to show solid filled (raster scanned) rendering, and was awarded the Jury Prize in the short film category at the [[1974 Cannes Film Festival]], as well as an Academy Award nomination. Foldes and the National Film Board of Canada employed pioneering keyframe computer technology developed at the [[National Research Council Canada|National Research Council]] of Canada (NRC) by scientist Nestor Burtnyk in 1969. Burtnyk and his collaborator Marceli Wein received an Academy Award in 1997 in recognition of their role in the field.<ref>{{Cite news |last=Deachman |first=Bruce |date=August 31, 2018 |title=And the Oscar goes to...: Ottawa scientists were pioneers in animation technology |url=https://ottawacitizen.com/news/local-news/and-the-oscar-goes-to-ottawa-scientists-were-pioneers-in-animation-technology |access-date=April 20, 2025 |work=Ottawa Citizen}}</ref> The NRC team also contributed high-profile animation sequences to the celebrated BBC documentary series ''The Ascent of Man'' (1973).<ref>{{Cite web |last=National Research Council staff |date=October 20, 2015 |title=Computer Animation - An Oscar Winning Performance |url=https://ingeniumcanada.org/channel/innovation/computer-animation-oscar-winning-performance |access-date=April 20, 2025 |website=Ingenium Channel}}</ref>
 
===Atlas Computer Laboratory and Antics===
The [[Atlas Computer Laboratory]] near Oxford was for many years a major facility for computer animation in Britain.<ref>[http://www.chilton-computing.org.uk/acl/home.htm Atlas Computer Laboratory, Chilton: 1961–1975] (retrieved June 3, 2009)</ref> The first entertainment cartoon made there was ''The Flexipede'', by Tony Pritchett, which was first shown publicly at the Cybernetic Serendipity exhibition in 1968.<ref>[http://animaland-ecotone.blogspot.com.es/2008/09/flexipede.html "The Flexipede"] by Tony Pritchett (retrieved April 22, 2012)</ref> Artist Colin Emmett and animator [[Alan Kitching]] first developed solid filled colour rendering in 1972, notably for the title animation for the [[BBC]]'s ''[[The Burke Special]]'' TV program.
 
In 1973, Kitching went on to develop software called "Antics", which allowed users to create animation without needing any programming.<ref>Alan Kitching, "Computer Animation, Some New Antics", ''BKSTS Journal'', December 1973, pp. 372–386.</ref><ref>[http://www.antics1.demon.co.uk/ATK_biog.html Biography of Alan Kitching at Antics Workshop] {{Webarchive|url=https://web.archive.org/web/20191229150251/http://www.antics1.demon.co.uk/ATK_biog.html |date=December 29, 2019 }} (retrieved July 23, 2012).</ref> The package was broadly based on conventional "cel" (celluloid) techniques, but with a wide range of tools including camera and graphics effects, interpolation ("inbetweening"/"morphing"), use of skeleton figures and grid overlays. Any number of drawings or cels could be animated at once by "choreographing" them in limitless ways using various types of "movements". At the time, only black & white plotter output was available, but Antics was able to produce full-color output by using the [[Technicolor]] Three-strip Process. Hence the name Antics was coined as an acronym for ''AN''imated ''T''echnicolor-''I''mage ''C''omputer ''S''ystem.<ref name="AK-BKSTS-73">[http://www.antics1.demon.co.uk/history.html#L3 "Computer Animation, Some New Antics"] {{Webarchive|url=https://web.archive.org/web/20080402093113/http://www.antics1.demon.co.uk/history.html#L3 |date=April 2, 2008 }}, ''BKSTS Journal'', December 1973 – full scanned article (retrieved April 22, 2012)</ref> Antics was used for many animation works, including the first complete documentary movie ''Finite Elements'', made for the Atlas Lab itself in 1975.<ref>[http://www.chilton-computing.org.uk/acl/applications/animation/p001.htm Atlas Computer Laboratory – Finite Elements] (retrieved April 22, 2012).</ref>
 
:From around the early 1970s, much of the emphasis in computer animation development was on ever-increasing realism in 3D imagery, and on visual effects designed for use in feature movies.
 
===First digital animation in a feature film===
The first feature film to use [[digital image processing]] was the 1973 film ''[[Westworld (film)|Westworld]]'', a science-fiction film written and directed by novelist [[Michael Crichton]], in which humanoid robots live amongst the humans.<ref>[http://www.beanblossom.in.us/larryy/cgi.html A Brief, Early History of Computer Graphics in Film] {{webarchive|url=https://web.archive.org/web/20120717074134/http://www.beanblossom.in.us/larryy/cgi.html |date=July 17, 2012 }} Larry Yaeger, August 16, 2002 (last update, retrieved March 24, 2010)</ref> John Whitney, Jr., and Gary Demos at [[Information International, Inc.]] digitally processed motion picture photography to appear [[Pixelization|pixelized]] to portray the Gunslinger android's [[Perspective (cognitive)|point of view]]. The cinegraphic block portraiture was accomplished using the Technicolor Three-strip Process to color-separate each frame of the source images, then scanning them to convert into rectangular blocks according to their tone values, and finally outputting the result back to film. The process was covered in the ''[[American Cinematographer]]'' article "Behind the scenes of Westworld".<ref>''[[American Cinematographer]]'' 54(11):1394–1397, 1420–1421, 1436–1437. November 1973.</ref>
 
===SIGGRAPH===
Sam Matsa, whose background in graphics started with the APT project at MIT with Doug Ross and Andy Van Dam, petitioned the [[Association for Computing Machinery]] (ACM) to form SICGRAPH (Special Interest Committee on Computer Graphics), the forerunner of [[ACM SIGGRAPH]], in 1967.<ref>{{Cite web | url=http://www.siggraph.org/publications/newsletter/v32n1/columns/machover.html | title=SIGGRAPH Computer Graphics Newsletter – Computer Graphics Pioneers | access-date=May 26, 2014 | archive-url=https://web.archive.org/web/20150924101810/http://www.siggraph.org/publications/newsletter/v32n1/columns/machover.html | archive-date=September 24, 2015 | url-status=dead }}</ref> In 1974, the first [[SIGGRAPH]] conference on computer graphics opened. This annual conference soon became the dominant venue for presenting innovations in the field.<ref>SIGGRAPH is an acronym for ''Special Interest Group on Computer GRAPHics and Interactive Techniques'' and is sponsored by the ''Association for Computing Machinery'' (ACM).</ref><ref>[https://web.archive.org/web/19961221040900/http://siggraph.org/ ACM SIGGRAPH – Official website]</ref>
 
==Towards 3D: mid-1970s into the 1980s==
 
===Early 3D animation in the cinema===
The first use of 3D wireframe imagery in mainstream cinema was in the sequel to ''Westworld'', ''[[Futureworld]]'' (1976), directed by Richard T. Heffron. This featured a computer-generated hand and face created by then University of Utah graduate students [[Edwin Catmull]] and [[Fred Parke]], which had initially appeared in their 1972 experimental short ''[[A Computer Animated Hand]].''<ref name="sltrib">{{cite news|url=http://www.sltrib.com/sltrib/mobile/53193670-90/film-catmull-computer-animation.html.csp|title=Pixar founder's Utah-made ''Hand'' added to National Film Registry|work=[[The Salt Lake Tribune]]|date=December 28, 2011|access-date=January 8, 2012}}</ref> The same film also featured snippets from the 1974 experimental short ''Faces and Body Parts''. The [[Academy Awards|Academy Award]]-winning 1975 short animated film ''[[Great (1975 film)|Great]]'', about the life of the [[Victorian era|Victorian]] engineer [[Isambard Kingdom Brunel]], contains a brief sequence of a rotating wireframe model of Brunel's final project, the iron steam ship [[SS Great Eastern]]. The third film to use this technology was ''[[Star Wars (film)|Star Wars]]'' (1977), written and directed by [[George Lucas]], with wireframe imagery in the scenes with the Death Star plans, the targeting computers in the [[X-wing]] fighters, and the ''[[Millennium Falcon]]'' spacecraft.
 
The [[Walt Disney Productions|Walt Disney]] film ''[[The Black Hole (1979 film)|The Black Hole]]'' (1979, directed by Gary Nelson) used wireframe rendering to depict the titular black hole, using equipment from Disney's engineers. In the same year, the science-fiction horror film ''[[Alien (film)|Alien]]'', directed by [[Ridley Scott]], also used wireframe graphics, in this case to render the navigation monitors in the spaceship. The footage was produced by Colin Emmett at the Atlas Computer Laboratory.<ref>[http://www.chilton-computing.org.uk/acl/applications/animation/p014.htm "My Work on the Alien", Bryan Wyvill] (retrieved June 30, 2012)</ref>
 
===Nelson Max===
 
===NYIT===
In 1974, Alex Schure, a wealthy New York entrepreneur, established the Computer Graphics Laboratory (CGL) at the [[New York Institute of Technology Computer Graphics Lab|New York Institute of Technology]] (NYIT). He put together the most sophisticated studio of the time, with state-of-the-art computers, film and graphic equipment, and hired top technology experts and artists to run it – [[Ed Catmull]], Malcolm Blanchard, [[Fred Parke]] and others all from Utah, plus others from around the country including [[Ralph Guggenheim]], [[Alvy Ray Smith]] and [[Ed Emshwiller]]. During the late 1970s, the staff made numerous innovative contributions to image rendering techniques, and produced much influential software, including the animation program ''Tween'', the paint program ''Paint'', and the animation program ''SoftCel''. Several videos from NYIT became quite famous: ''Sunstone'', by [[Ed Emshwiller]], ''Inside a Quark'', by Ned Greene, and [[The Works (film)|''The Works'']]. The latter, written by [[Lance Williams (graphics researcher)|Lance Williams]], was begun in 1978, and was intended to be the first full-length [[Computer-generated imagery|CGI]] film, but it was never completed, though a trailer for it was shown at SIGGRAPH 1982. In these years, many people regarded the NYIT CG Lab as the top computer animation research and development group in the world.<ref name="NYIT-progs">[https://www.cs.cmu.edu/~ph/nyit/masson/nyit.html Brief History of the New York Institute of Technology Computer Graphics Lab] (retrieved June 30, 2012)</ref><ref>[https://www.cs.cmu.edu/~ph/nyit/ A compilation of NYIT images and information can be found on Paul Heckbert's site](retrieved June 30, 2012)</ref>
 
The quality of NYIT's work attracted the attention of George Lucas, who was interested in developing a [[Computer-generated imagery|CGI]] visual effects facility at his company [[Lucasfilm]]. In 1979, he recruited the top talent from NYIT, including Catmull, Smith and Guggenheim, to start his division, which later spun off as [[Pixar]], founded in 1986 with funding by [[Apple Inc.]] co-founder [[Steve Jobs]].
 
===Framebuffer===
The first commercial framebuffer was produced in 1974 by [[Evans & Sutherland]]. It cost about $15,000, with a resolution of 512 by 512 pixels in 8-bit grayscale color, and sold well to graphics researchers without the resources to build their own framebuffer.<ref>[http://www.computerhistory.org/brochures/companies.php?alpha=d-f&company=com-42b9d8b7f4191 "Company: Evans and Sutherland Computer Corporation", at ''Computer History Museum, California''] (retrieved August 20, 2012).</ref> A little later, [[New York Institute of Technology Computer Graphics Lab|NYIT]] created the first full-color 24-bit [[RGB color space|RGB]] framebuffer by using three of the Evans & Sutherland framebuffers linked together as one device by a minicomputer. Many of the "firsts" that happened at NYIT were based on the development of this first raster graphics system.<ref name="NYIT-progs"/>
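
The framebuffer concept these machines introduced can be illustrated with a minimal NumPy sketch. This is only an illustration of the data structure, not the original hardware: the array sizes mirror the Evans & Sutherland unit (512 by 512 pixels, 8 bits each) and NYIT's approach of treating three such buffers as the red, green and blue channels of a single 24-bit image.
<syntaxhighlight lang="python">
# A framebuffer is simply a block of memory holding one value per pixel,
# which the display hardware scans out continuously.
import numpy as np

grayscale_fb = np.zeros((512, 512), dtype=np.uint8)   # one 8-bit, 512 x 512 buffer
grayscale_fb[100:200, 100:200] = 255                  # "draw" a white square into memory

# Three 8-bit buffers ganged together act as one 24-bit RGB framebuffer.
r = np.zeros((512, 512), dtype=np.uint8)
g = np.zeros((512, 512), dtype=np.uint8)
b = np.zeros((512, 512), dtype=np.uint8)
r[:, :256] = 255                                      # left half of the screen red
rgb_fb = np.stack([r, g, b], axis=-1)                 # shape (512, 512, 3): 24 bits per pixel

print(grayscale_fb.shape, rgb_fb.shape)
</syntaxhighlight>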
 
In 1975, the UK company [[Quantel]], founded in 1973 by Peter Michael,<ref>[https://www.independent.co.uk/news/people/profiles/radiohead-701095.html "Radiohead", biography of Sir Peter Michael], by Darius Sanai, ''The Independent'', September 27, 2000 (retrieved August 24, 2012).</ref> produced the first commercial full-color broadcast framebuffer, the Quantel DFS 3000. It was first used in TV coverage of the [[1976 Summer Olympics|1976 Montreal Olympics]] to generate a [[picture-in-picture]] inset of the Olympic flaming torch while the rest of the picture featured the runner entering the stadium. Framebuffer technology provided the cornerstone for the future development of digital television products.<ref>[http://www.quantel.com/ Quantel company web site] {{Webarchive|url=https://web.archive.org/web/20150910073331/http://www.quantel.com/ |date=September 10, 2015 }} (retrieved August 24, 2012).</ref>
 
By the late 1970s, it became possible for personal computers (such as the [[Apple II]]) to contain low-color framebuffers. However, it was not until the 1980s that a real revolution in the field was seen, and framebuffers capable of holding a standard video image were incorporated into standalone workstations. By the 1990s, framebuffers eventually became the standard for all personal computers.
 
===Fractals===
At this time, a major step toward the goal of increased realism in 3D animation came with the development of "''[[fractals]]''". The term was coined in 1975 by mathematician [[Benoit Mandelbrot]], who used it to extend the theoretical concept of fractional dimensions to geometric patterns in nature, and published in the English translation of his book ''Fractals: Form, Chance and Dimension'' in 1977.<ref>Mandelbrot, Benoît B, 1983. [https://books.google.com/books?id=0R2LkE3N7-oC "The Fractal Geometry of Nature"], ''Macmillan'', {{ISBN|978-0-7167-1186-5}} (retrieved February 1, 2012).</ref><ref>Albers; Alexanderson, 2008. "Benoit Mandelbrot: In his own words". Mathematical people: profiles and interviews. Wellesley, Mass: AK Peters. p. 214, {{ISBN|978-1-56881-340-0}}.</ref>
 
In 1979–80, the first film using fractals to generate the graphics was made by [[Loren Carpenter]] of Boeing. Titled ''[[Vol Libre]]'', it showed a flight over a fractal landscape, and was presented at SIGGRAPH 1980.<ref>[http://design.osu.edu/carlson/history/tree/carpenter.html Loren Carpenter – Biography] {{Webarchive|url=https://web.archive.org/web/20131221022531/http://design.osu.edu/carlson/history/tree/carpenter.html |date=December 21, 2013 }} (retrieved July 3, 2012)</ref> Carpenter was subsequently hired by Pixar to create the fractal planet in the ''Genesis Effect'' sequence of ''[[Star Trek II: The Wrath of Khan]]'' in June 1982.<ref>[http://vimeo.com/5810737 ''Vol Libre'' on Vimeo] (retrieved June 30, 2012)</ref>
 
===JPL and Jim Blinn===
Bob Holzman of [[NASA]]'s [[Jet Propulsion Laboratory]] in California established JPL's Computer Graphics Lab in 1977 as a group with technology expertise in visualizing data being returned from NASA missions. On the advice of Ivan Sutherland, Holzman hired a graduate student from Utah named [[Jim Blinn]].<ref>{{Cite journal |last=Holzman |first=Robert E. |date=1986-07-01 |title=Atoms to astronomy: Computer graphics at the Jet Propulsion Laboratory |url=https://doi.org/10.1007/BF01900326 |journal=The Visual Computer |language=en |volume=2 |issue=3 |pages=159–163 |doi=10.1007/BF01900326 |s2cid=2265857 |issn=1432-2315|url-access=subscription }}</ref><ref>Sutherland once allegedly commented that "There are about a dozen great computer graphics people, and Jim Blinn is six of them."</ref> Blinn had worked with imaging techniques at Utah, and developed them into a system for NASA's visualization tasks. He produced a series of widely seen "fly-by" simulations, including the [[Voyager program|Voyager]], [[Pioneer program|Pioneer]] and [[Galileo (spacecraft)|Galileo]] spacecraft fly-bys of Jupiter, Saturn and their moons. He also worked with [[Carl Sagan]], creating animations for his ''[[Cosmos: A Personal Voyage]]'' TV series. Blinn developed many influential new modelling techniques, and wrote papers on them for the [[IEEE]] (Institute of Electrical and Electronics Engineers), in their journal ''Computer Graphics and Applications''. Some of these included environment mapping, improved highlight modelling, "blobby" modelling, simulation of wrinkled surfaces, and simulation of clouds and dusty surfaces.
 
Later in the 1980s, Blinn developed CGI animations for an [[Annenberg Foundation|Annenberg/CPB]] TV series, ''[[The Mechanical Universe]]'', which consisted of over 500 scenes for 52 half-hour programs describing physics and mathematics concepts for college students. This he followed with production of another series devoted to mathematical concepts, called ''[[Project Mathematics!]]''.<ref>[http://design.osu.edu/carlson/history/tree/jpl.html Jet Propulsion Lab (JPL) by Wayne Carlson] {{Webarchive|url=https://web.archive.org/web/20150724105628/http://design.osu.edu/carlson/history/tree/jpl.html |date=July 24, 2015 }} (retrieved July 3, 2012)</ref>
 
===Motion control photography===
[[Motion control photography]] is a technique that uses a computer to record (or specify) the exact motion of a film camera during a shot, so that the motion can be precisely duplicated again, or alternatively on another computer, and combined with the movement of other sources, such as CGI elements. Early forms of motion control go back to [[John Whitney (animator)|John Whitney]]'s 1968 work on ''[[2001: A Space Odyssey (film)|2001: A Space Odyssey]]'', and the effects on the 1977 film ''[[Star Wars Episode IV: A New Hope]]'', by [[George Lucas]]' newly created company [[Industrial Light & Magic]] in California (ILM). ILM created a digitally controlled camera known as the [[Dykstraflex]], which performed complex and repeatable motions around stationary spaceship models, enabling separately filmed elements (spaceships, backgrounds, etc.) to be coordinated more accurately with one another. However, neither of these was actually computer-based—Dykstraflex was essentially a custom-built hard-wired collection of knobs and switches.<ref>[https://www.denofgeek.com/movies/13733/the-den-of-geek-interview-john-dykstra Interview with John Dykstra, inventor of Dykstraflex], (retrieved August 9, 2012).</ref> The first commercial computer-based motion control and CGI system was developed in 1981 in the UK by [[Moving Picture Company]] designer [[Bill Mather]].<ref>[https://web.archive.org/web/20131124134921/http://www.rtbot.net/motion_control_photography History of Motion Control Photography at RTBot] (retrieved August 9, 2012).</ref>
 
===3D computer graphics software===
[[3D computer graphics software]] began appearing for [[home computer]]s in the late 1970s. The earliest known example is ''[[3D Art Graphics]]'', a set of [[3D computer graphics]] effects, written by Kazumasa Mitazawa and released in June 1978 for the [[Apple II]].<ref>{{Cite web | url=https://www.brutaldeluxe.fr/projects/cassettes/japan/ | title=Brutal Deluxe Software}}</ref><ref>{{Cite web |url=http://www.neoncluster.com/projects-apple2/apple2-jcassettes.html |title=PROJECTS AND ARTICLES Retrieving Japanese Apple II programs |access-date=March 26, 2017 |archive-url=https://web.archive.org/web/20161005101914/http://www.neoncluster.com/projects-apple2/apple2-jcassettes.html |archive-date=October 5, 2016 |url-status=dead }}</ref>
 
In 1981, Quantel released the "[[Quantel Paintbox|Paintbox]]", the first broadcast-quality turnkey system designed for creation and composition of television video and graphics. Its design emphasized the studio workflow efficiency required for live news production. Essentially, it was a framebuffer packaged with innovative user software, and it rapidly found applications in news, weather, station promos, commercials, and the like. Although it was essentially a design tool for still images, it was also sometimes used for frame-by-frame animations. Following its initial launch, it revolutionised the production of television graphics, and some Paintboxes are still in use today due to their image quality and versatility.<ref>[https://wayback.archive-it.org/all/20121108191518/http://blog.quantel.eu/2011/03/the-quantel-paintbox-a-pioneering-computer-graphics-workstation/ "The Quantel Paintbox – a pioneering computer graphics workstation"], ''Quantel'', March 15, 2011 (retrieved August 24, 2012).</ref>
 
This was followed in 1982 by the [[Quantel Mirage]], or DVM8000/1 "Digital Video Manipulator", a digital real-time video effects processor. This was based on Quantel's own hardware, plus a [[Hewlett-Packard]] computer for custom program effects. It was capable of warping a live video stream by texture mapping it onto an arbitrary three-dimensional shape, around which the viewer could freely rotate or zoom in real-time. It could also interpolate, or morph, between two different shapes. It was considered the first real-time 3D video effects processor, and the progenitor of subsequent [[Digital video effect|DVE]] (Digital video effect) machines. In 1985, Quantel went on to produce "Harry", the first all-digital [[Non-linear editing system|non-linear editing]] and effects compositing system.<ref>[http://www.quantel.com/ Quantel company website] {{Webarchive|url=https://web.archive.org/web/20150910073331/http://www.quantel.com/ |date=September 10, 2015 }} (retrieved August 24, 2012).</ref>
 
===Osaka University===
In 1982, Japan's [[Osaka University]] developed the [[Supercomputing in Japan|LINKS-1 Computer Graphics System]], a [[supercomputer]] that used up to 257 [[Zilog Z8000|Zilog Z8001]] [[microprocessor]]s to render realistic [[3D computer graphics|3D]] [[computer graphics]]. According to the Information Processing Society of Japan: "The core of 3D image rendering is calculating the luminance of each pixel making up a rendered surface from the given viewpoint, [[Computer graphics lighting|light source]], and object position. The LINKS-1 system was developed to realize an image rendering methodology in which each pixel could be parallel processed independently using [[Ray tracing (graphics)|ray tracing]]. By developing a new software methodology specifically for high-speed image rendering, LINKS-1 was able to rapidly render highly realistic images." It was "used to create the world's first 3D [[planetarium]]-like video of the entire [[Universe|heavens]] that was made completely with computer graphics. The video was presented at the [[Fujitsu]] pavilion at the 1985 International Exposition in [[Tsukuba, Ibaraki|Tsukuba]]."<ref>{{Cite web | url=http://museum.ipsj.or.jp/en/computer/other/0013.html | title=LINKS-1 Computer Graphics System-Computer Museum}}</ref> The LINKS-1 was the world's most powerful computer, as of 1984.<ref>{{cite book | last=Defanti | first=Thomas A. | title=Advances in Computers | chapter=The Mass Impact of Videogame Technology | publisher=Elsevier | volume=23 | date=1984 | isbn=978-0-12-012123-6 | doi=10.1016/s0065-2458(08)60463-5 | doi-access=free | url=http://www.vasulka.org/archive/Writings/VideogameImpact.pdf#page=29 | access-date=August 30, 2025 | pages=93–140}}</ref>
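
The per-pixel independence exploited by the LINKS-1 design can be sketched in Python. The following is only an illustration of the principle, not the LINKS-1 software: a single hypothetical sphere is ray-traced with Lambertian shading, and because each pixel's luminance depends only on the viewpoint, light source and object position, every pixel can be handed to a separate worker.
<syntaxhighlight lang="python">
import numpy as np
from multiprocessing import Pool

SPHERE_C = np.array([0.0, 0.0, 3.0])    # assumed scene: one sphere in front of the eye
SPHERE_R = 1.0
LIGHT = np.array([5.0, 5.0, -5.0])      # point light position
WIDTH = HEIGHT = 32                     # tiny image, for illustration only

def shade_pixel(idx):
    """Luminance of one pixel - needs no data from any other pixel."""
    y, x = divmod(idx, WIDTH)
    # Ray from the eye at the origin through this pixel on an image plane at z = 1.
    d = np.array([x / WIDTH - 0.5, y / HEIGHT - 0.5, 1.0])
    d /= np.linalg.norm(d)
    # Ray/sphere intersection: quadratic in the ray parameter t.
    oc = -SPHERE_C
    b = 2.0 * np.dot(oc, d)
    c = np.dot(oc, oc) - SPHERE_R ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return 0.0                      # ray misses the sphere: background
    t = (-b - np.sqrt(disc)) / 2.0
    hit = t * d
    normal = (hit - SPHERE_C) / SPHERE_R
    to_light = LIGHT - hit
    to_light /= np.linalg.norm(to_light)
    return float(max(np.dot(normal, to_light), 0.0))   # Lambertian luminance

if __name__ == "__main__":
    with Pool() as pool:                # each pixel is farmed out independently
        pixels = pool.map(shade_pixel, range(WIDTH * HEIGHT))
    image = np.array(pixels).reshape(HEIGHT, WIDTH)
    print(image.round(2)[HEIGHT // 2])  # one scanline of the rendered sphere
</syntaxhighlight>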
 
===3D fictional animated films at the University of Montreal===
In the '80s, the [[University of Montreal]] was at the forefront of computer animation, producing three successful short 3D animated films with 3D characters.
 
In 1983, Philippe Bergeron, [[Nadia Magnenat Thalmann]], and [[Daniel Thalmann]] directed ''[[Dream Flight]]'', considered to be the first 3D-generated film to tell a story. The film was completely programmed using the MIRA graphical language,<ref>N. Magnenat Thalmann, D. Thalmann, '''The Use of High-Level 3-D Graphical Types in the MIRA Animation System''', IEEE Computer Graphics and Applications, Vol. 3, No 9, 1983, pp. 9–16</ref> an extension of the [[Pascal programming language]] based on [[Abstract Graphical Data Types]].<ref>N. Magnenat Thalmann, D. Thalmann, '''MIRA-3D: A Three-dimensional Graphical Extension of PASCAL''', Software-Practice and Experience, Vol. 13, 1983, pp. 797–808</ref> The film received several awards and was shown at the [[SIGGRAPH]] '83 Film Show.
 
In 1985, Pierre Lachapelle, Philippe Bergeron, Pierre Robidoux and [[Daniel Langlois]] directed ''[[Tony de Peltrie]]'', which featured the first animated human character to express emotion through [[facial expressions]] and body movements, touching the feelings of the audience.<ref>"Friday Flashback #60". eX-SI.</ref><ref>Philippe Bergeron, Pierre Robidoux, Pierre Lachapelle and Daniel Langlois: Tony de Peltrie (1985), Website The Daniel Langlois Foundation: Image du Futur collection.</ref> ''Tony de Peltrie'' premiered as the closing film of [[SIGGRAPH]] '85.
 
===Sun Microsystems, Inc===
The [[Sun Microsystems]] company was founded in 1982 by [[Andy Bechtolsheim]] with fellow graduate students at [[Stanford University]]. Bechtolsheim originally designed the SUN computer as a personal [[Computer-aided design|CAD]] workstation for the Stanford University Network (hence the acronym "SUN"). It was designed around the Motorola 68000 processor with the Unix operating system and virtual memory, and, like SGI, had an embedded frame buffer.<ref>[ftp://reports.stanford.edu/pub/cstr/reports/csl/tr/82/229/CSL-TR-82-229.pdf "The SUN Workstation Architecture"]{{dead link|date=May 2025|bot=medic}}{{cbignore|bot=medic}}, Andreas Bechtolsheim, Forest Baskett, Vaughan Pratt, March 1982, ''Stanford University Computer systems Laboratory Technical Report No. 229'' (retrieved July 28, 2009).</ref> Later developments included computer servers and workstations built on its own RISC-based processor architecture and a suite of software products such as the Solaris operating system and the Java platform. By the '90s, Sun workstations were popular for rendering in 3D CGI filmmaking—for example, [[Disney]]-[[Pixar]]'s 1995 movie ''[[Toy Story]]'' used a [[render farm]] of 117 Sun workstations.<ref>[[Toy Story#Animation|Animation and Rendering on ''Toy Story'']]</ref> Sun was a proponent of [[Open system (computing)|open systems]] in general and [[Unix]] in particular, and a major contributor to [[open source software]].<ref>[http://www.stanford.edu/group/wellspring/sun_spotlight.html "Wellspring of Innovation: Sun Microsystems Spotlight"] {{Webarchive|url=https://web.archive.org/web/20090517063315/http://www.stanford.edu/group/wellspring/sun_spotlight.html |date=May 17, 2009 }} Stanford.edu (retrieved July 28, 2009).</ref>
 
===National Film Board of Canada===
 
===First turnkey broadcast animation system===
Also in 1982, the first complete turnkey system designed specifically for creating broadcast-standard animation was produced by the Japanese company Nippon Univac Kaisha ("NUK", later merged with [[Burroughs Corporation|Burroughs]]), and incorporated the [[Antics 2-D Animation|Antics 2-D computer animation]] software developed by Alan Kitching from his earlier versions. The configuration was based on the [[VAX-11|VAX 11/780]] computer, linked to a [[Fernseh|Bosch 1-inch]] VTR, via NUK's own framebuffer. This framebuffer also showed realtime instant replays of animated vector sequences ("line test"), though finished full-color recording would take many seconds per frame.<ref>"Antics in Nippon Animation", by Alex Pousselle, ''Byte'' magazine, October 1983, pp 378–381.</ref><ref>"About The Cover", ''IEEE Computer Graphics'' magazine, March 1985, lead article on Antics, cover & pp 6–7.</ref><ref>"Animators' Tool", ''IEEE Computer Graphics'' magazine, December 1985, article on Antics by Margaret Neal, pp 5–7.</ref> The full system was successfully sold to broadcasters and animation production companies across Japan. Later in the '80s, Kitching developed versions of Antics for [[Silicon Graphics|SGI]] and [[Apple Mac]] platforms, and these achieved a wider global distribution.<ref>[http://www.antics1.demon.co.uk/studios.html Antics Studios in the '80s & '90s] {{Webarchive|url=https://web.archive.org/web/20140502010129/http://www.antics1.demon.co.uk/studios.html |date=May 2, 2014 }} (retrieved April 22, 2012)</ref>
 
===First solid 3D CGI in the movies===
The first cinema feature movie to make extensive use of solid 3D [[Computer-generated imagery|CGI]] was [[Walt Disney]]'s ''[[Tron]]'', directed by [[Steven Lisberger]], in 1982. The film is celebrated as a milestone in the industry, though less than twenty minutes of this animation were actually used—mainly the scenes that show digital "terrain", or include vehicles such as ''[[Light Cycle]]s'', tanks and ships. To create the CGI scenes, Disney turned to the four leading computer graphics firms of the day: [[Information International Inc]], [[Robert Abel and Associates]] (both in California), [[Mathematical Applications Group, Inc.|MAGI]], and [[Digital Effects]] (both in New York). Each worked on a separate aspect of the movie, without any particular collaboration.<ref>"The Making of Tron", Richard Patterson, ''[[American Cinematographer]]'', August 1982.</ref> ''[[Tron]]'' was a box office success, grossing $33&nbsp;million on a budget of $17&nbsp;million.<ref>[https://www.boxofficemojo.com/movies/?id=tron.htm "Tron"], at ''Box Office Mojo'' (retrieved July 23, 2012).</ref>
 
In 1984, ''[[Tron]]'' was followed by ''[[The Last Starfighter]]'', a [[Universal Pictures]] / [[Lorimar Film Entertainment|Lorimar]] production, directed by [[Nick Castle]], and was one of cinema's earliest films to use extensive [[Computer-generated imagery|CGI]] to depict its many starships, environments and battle scenes. This was a great step forward compared with other films of the day, such as ''[[Return of the Jedi]]'', which still used conventional physical models.<ref>{{cite journal|author=Shay, Jody|date=February 1987|title=Humpback to the Future|journal=[[Cinefex]]|issue=29}}</ref> The computer graphics for the film were designed by artist [[Ron Cobb]], and rendered by [[Digital Productions]] on a [[Cray X-MP]] supercomputer. A total of 27 minutes of finished CGI footage was produced—considered an enormous quantity at the time. The company estimated that using computer animation required only half the time, and one half to one third the cost of traditional visual effects.<ref>[http://design.osu.edu/carlson/history/lesson6.html#dp Ohio State University] CG history page, (retrieved June 30, 2012).</ref> The movie was a financial success, earning over $28&nbsp;million on an estimated budget of $15&nbsp;million.<ref>[https://www.boxofficemojo.com/movies/?id=laststarfighter.htm "The Last Starfighter"] at ''Box Office Mojo'', (retrieved June 30, 2012).</ref>
 
===Inbetweening and morphing===
The term "morphing" did not become current until the late '80s, when it specifically applied to computer inbetweening with photographic images—for example, to make one face transform smoothly into another. The technique uses grids (or "meshes") overlaid on the images, to delineate the shape of key features (eyes, nose, mouth, etc.). Morphing then inbetweens one mesh to the next, and uses the resulting mesh to distort the image and simultaneously [[dissolve (filmmaking)|dissolve]] one to another, thereby preserving a coherent internal structure throughout. Thus, several different digital techniques come together in morphing.<ref>[http://collaboration.cmc.ec.gc.ca/science/rpn/biblio/ddj/Website/articles/DDJ/1993/9307/9307a/9307a.htm "Morphing in 2-D and 3-D"] {{Webarchive|url=https://archive.today/20121216004434/http://collaboration.cmc.ec.gc.ca/science/rpn/biblio/ddj/Website/articles/DDJ/1993/9307/9307a/9307a.htm |date=December 16, 2012 }}, Valerie Hall, ''Curtin University of Technology'', Australia, 1993, (retrieved July 27, 2012).</ref> Computer distortion of photographic images was first done by [[NASA]], in the mid-1960s, to align [[Landsat program|Landsat]] and [[Skylab]] satellite images with each other. [[Texture mapping]], which applies a photographic image to a 3D surface in another image, was first defined by [[Jim Blinn]] and Martin Newell in 1976. A 1980 paper by [[Ed Catmull]] and [[Alvy Ray Smith]] on geometric transformations, introduced a mesh-warping algorithm.<ref>[http://dl.acm.org/citation.cfm?id=800250.807505 "3-D transformations of images in scanline order"], by Ed Catmull & Alvy Ray Smith, (retrieved July 27, 2012).</ref> The earliest full demonstration of morphing was at the 1982 [[SIGGRAPH]] conference, where Tom Brigham of [[NYIT]] presented a short film sequence in which a woman transformed, or "morphed", into a lynx.
 
The first cinema movie to use morphing was [[Ron Howard]]'s 1988 fantasy film ''[[Willow (1988 film)|Willow]]'', where the main character, Willow, uses a magic wand to transform an animal through a series of other animals and, finally, into a sorceress.
 
===3D inbetweening===
With 3D [[Computer-generated imagery|CGI]], the inbetweening of photo-realistic computer models can also produce results similar to morphing, though technically, it is an entirely different process (but is nevertheless often also referred to as "morphing"). An early example is Nelson Max's 1977 film ''Turning a sphere inside out''.<ref name="Nelson-Max"/> The first cinema feature film to use this technique was the 1986 ''[[Star Trek IV: The Voyage Home]]'', directed by [[Leonard Nimoy]], with visual effects by [[George Lucas]]'s company [[Industrial Light & Magic]] (ILM). The movie includes a dream sequence where the crew travel back in time, and images of their faces transform into one another. To create it, ILM employed a new [[3D scanning]] technology developed by [[Cyberware]] to digitize the cast members' heads, and used the resulting data for the computer models. Because each head model had the same number of key points, transforming one character into another was a relatively simple inbetweening.<ref name="shay-14">Shay, 14.</ref>
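
Because the digitized head models shared the same number of key points in the same order, the inbetweening amounts to a per-vertex linear blend, which can be sketched as follows. This is a minimal illustration of the principle rather than ILM's actual code, and the random arrays stand in for the Cyberware scan data.
<syntaxhighlight lang="python">
import numpy as np

def inbetween(verts_a: np.ndarray, verts_b: np.ndarray, t: float) -> np.ndarray:
    """Blend two (n, 3) vertex arrays with identical topology; t runs from 0 to 1."""
    assert verts_a.shape == verts_b.shape, "models must share vertex count and order"
    return (1.0 - t) * verts_a + t * verts_b

# Example: four inbetween frames between two stand-in "scanned head" models.
head_a = np.random.rand(1000, 3)   # hypothetical vertex positions for model A
head_b = np.random.rand(1000, 3)   # hypothetical vertex positions for model B
frames = [inbetween(head_a, head_b, t) for t in np.linspace(0.0, 1.0, 4)]
print(len(frames), frames[0].shape)
</syntaxhighlight>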
 
===The Abyss===
In 1989, [[James Cameron]]'s underwater action movie ''[[The Abyss]]'' was released. This was one of the first cinema movies to include photo-realistic [[Computer-generated imagery|CGI]] integrated seamlessly into live-action scenes. A five-minute sequence featuring an animated tentacle or "pseudopod" was created by ILM, who designed a program to produce surface waves of differing sizes and kinetic properties for the pseudopod, including reflection, refraction and a [[morphing]] sequence. Although short, this successful blend of CGI and live action is widely considered a milestone in setting the direction for further future development in the field.<ref>[https://www.nytimes.com/1989/08/06/movies/film-the-abyss-a-foray-into-deep-waters.html?sec=&spon=&pagewanted=all "A Foray into Deep Waters"], Aljean Harmetz, ''New York Times'', August 6, 1989, p. 15 (retrieved July 14, 2012).</ref>
 
===Walt Disney and CAPS===
''[[The Great Mouse Detective]]'' (1986) was the first [[Disney]] film to extensively use computer animation, a fact that Disney used to promote the film during marketing. CGI was used during a two-minute climax scene set inside [[Big Ben]], inspired by a similar climax scene in [[Hayao Miyazaki]]'s ''[[The Castle of Cagliostro]]'' (1979). ''The Great Mouse Detective'', in turn, paved the way for the [[Disney Renaissance]].<ref name="mouseplanet2">{{cite web|last=Korkis|first=Jim|title=How Basil Saved Disney Feature Animation: Part Two|url=https://www.mouseplanet.com/9549/How_Basil_Saved_Disney_Feature_Animation_Part_Two|website=Mouse Planet|date=March 2, 2011|access-date=June 22, 2016}}</ref><ref>{{cite news |last1=Motamayor |first1=Rafael |title=Revisiting 'The Great Mouse Detective', the Unsung Kickstarter of the Disney Renaissance (And One of Disney's Creepiest Movies) |url=https://www.slashfilm.com/the-great-mouse-detective-revisited-2/ |access-date=April 5, 2020 |work=[[/Film]] |date=April 2, 2020}}</ref>
 
The late 1980s saw another milestone in computer animation, this time in 2D: the development of [[Disney]]'s "[[Computer Animation Production System]]", known as "CAPS/ink & paint". This was a custom collection of software, scanners and networked workstations developed by [[The Walt Disney Company]] in collaboration with [[Pixar]]. Its purpose was to computerize the ink-and-paint and post-production processes of traditionally animated films, to allow more efficient and sophisticated post-production by making the practice of hand-painting [[cel]]s obsolete. The animators' drawings and background paintings are scanned into the computer, and animation drawings are inked and painted by digital artists. The drawings and backgrounds are then combined, using software that allows for camera movements, [[multiplane camera|multiplane]] effects, and other techniques—including compositing with 3D image material. The system's first feature film use was in ''[[The Little Mermaid (1989 film)|The Little Mermaid]]'' (1989), for the "farewell rainbow" scene near the end, but the first full-scale use was for ''[[The Rescuers Down Under]]'' (1990), which therefore became the first traditionally animated film to be entirely produced on computer—or indeed, the first 100% digital feature film of any kind ever produced.<ref>[[Waking Sleeping Beauty|''Waking Sleeping Beauty'' (documentary film), Don Hahn, 2009.]] Stone Circle Pictures/Walt Disney Studios Motion Pictures (retrieved August 2, 2012).</ref><ref>{{cite book |title=Disney A-Z: The Official Encyclopedia |url=https://archive.org/details/disneytozofficia00smit |url-access=registration |last=Smith |first=Dave |year=1996 |publisher=Hyperion |___location= New York|isbn=978-0-7868-6223-8 |page=[https://archive.org/details/disneytozofficia00smit/page/414 414]}}</ref>
 
==3D animation software in the 1980s==
The 1980s saw the appearance of many notable new commercial software products:
 
* 1982: [[Autodesk]] Inc was founded in California by [[John Walker (programmer)|John Walker]], with a focus on design software for the PC, led by their flagship [[Computer-aided design|CAD]] package ''[[AutoCAD]]''. Autodesk's first animation package, ''AutoFlix'' (1986), was for use with AutoCAD. Their first full 3D animation software was ''[[Autodesk 3ds Max|3D Studio]]'' for [[DOS]] in 1990, which was developed under license by [[Gary Yost]] of The Yost Group.<ref>[http://www.fourmilab.ch/autofile/ John Walker's online history of Autodesk], told through the letters and memos from and to the inner circle of the company (retrieved August 28, 2012).</ref><ref>[http://cgpress.org/archives/cgarticles/the_history_of_3d_studio_pt2 "The History of 3D Studio"] {{Webarchive|url=https://web.archive.org/web/20120918131839/http://www.maxunderground.com/the_history_of_3d_studio_pt2 |date=September 18, 2012 }}, Gary Yost interview (retrieved August 28, 2012).</ref>
* 1983: [[Alias Systems Corporation|Alias Research]] was founded in Toronto, Canada, by Stephen Bingham and others, with a focus on industrial and entertainment software for SGI workstations. Their first product, ''Alias-1'', shipped in 1985. In 1989, Alias was chosen to animate the pseudopod in [[James Cameron]]'s ''[[The Abyss]]'', which gave the software high-profile recognition in movie animation. In 1990 this developed into ''[[PowerAnimator]]'', often known just as ''Alias''.<ref>[http://proetools.com/blog/a-history-lesson-on-alias-3d-software/ "A History Lesson on Alias 3D Software"] (retrieved August 28, 2012).</ref>
* 1984: [[Wavefront Technologies|Wavefront]] was founded by [[Bill Kovacs]] and others, in California, to produce computer graphics for movies and television, and also to develop and market their own software based on SGI hardware. Wavefront developed their first product, ''Preview'', during the first year of business. The company's production department helped tune the software by using it on commercial projects, creating opening graphics for television programs. In 1988, the company introduced the ''Personal Visualiser''.<ref>[http://design.osu.edu/carlson/history/lesson8.html#wavefront "Commercial animation software companies – Wavefront"] {{Webarchive|url=https://web.archive.org/web/20140618212520/http://design.osu.edu/carlson/history/lesson8.html#wavefront |date=June 18, 2014 }}, Wayne Carlson, ''Ohio State University'', 2003 (retrieved August 28, 2012).</ref><ref>[https://www.imdb.com/company/co0143869/ "Wavefront Technologies"] at the ''Internet Movie Database'' (retrieved August 28, 2012).</ref>
* 1984: TDI (Thomson Digital Image) was created in France as a subsidiary of aircraft simulator company Thomson-CSF, to develop and commercialise its own 3D system, ''Explore'', first released in 1986.
* 1984: Sogitec Audiovisuel was a division of Sogitec avionics in France, founded by Xavier Nicolas for the production of computer animation films, using its own 3D software developed from 1981 by Claude Mechoulam and others at Sogitec.<ref>[http://histoire3d.siggraph.org/index.php?title=Sogitec,_les_d%C3%A9buts_de_l%27image_de_synth%C3%A8se_en_1981 "Sogitec, les débuts de l'image de synthèse en 1981"], by Alain Grach (retrieved August 28, 2012).</ref>
* 1986: [[Softimage (company)|Softimage]] was founded by National Film Board of Canada filmmaker Daniel Langlois in Montreal. Its first product was called the ''Softimage Creative Environment'', and was launched at SIGGRAPH '88. For the first time, all 3D processes (modelling, animation, and rendering) were integrated. Creative Environment (eventually to be known as ''[[Softimage 3D]]'' in 1988) became a standard animation solution in the industry.<ref>[http://www.digitalmedianet.com/HTM/RESEARCH/Meloni/corporate/3D/Softimage.htm "Corporate Profile on Softimage"] {{webarchive|url=https://web.archive.org/web/20120218152008/http://www.digitalmedianet.com/HTM/RESEARCH/Meloni/corporate/3D/Softimage.htm |date=February 18, 2012 }}, ''Digitalmedianet.com'' (retrieved August 28, 2012).</ref>
* 1987: [[Houdini (software)|Side Effects Software]] was established by Kim Davidson and Greg Hermanovic in Toronto, Canada, as a production/software company based on a 3D animation package called ''PRISMS'', which they had acquired from their former employer ''Omnibus''. Side Effects Software developed this procedural modelling and motion product into a high-end, tightly integrated 2D/3D animation software which incorporated a number of technological breakthroughs.<ref>[http://www.sidefx.com/ Side Effects Software company website] (retrieved August 28, 2012).</ref>
* 1989: the companies TDI and Sogitec were merged to create the new company ExMachina.
 
==CGI in the 1990s==
===Computer animation expands in film and TV===
The decade saw some of the first computer-animated television series. For example, ''[[Quarxs]]'', created by media artist [[Maurice Benayoun]] and comic book artist [[François Schuiten]], was an early example of a CGI series based on a real screenplay and not animated solely for demonstrative purposes.<ref>{{Cite web|date=July 1, 1991|title=The Quarxs|url=https://benayoun.com/moben/1991/07/01/the-quarxs/|access-date=June 7, 2021|website=MOBEN|language=en-US}}</ref> ''[[VeggieTales]]'', an American [[Christian media]] series, was also one of the first computer-animated series. [[Phil Vischer]] came up with the idea for ''VeggieTales'' while testing animation software as a medium for children's videos in the early 1990s.
 
The 1990s began with much of [[Computer-generated imagery|CGI]] technology sufficiently developed to allow a major expansion into film and TV production. 1991 is widely considered the "breakout year", with two major box-office successes, both making heavy use of CGI.
The first of these was [[James Cameron]]'s ''[[Terminator 2: Judgment Day]]'',<ref>[https://www.boxofficemojo.com/movies/?id=terminator2.htm "Terminator 2: Judgment Day"] at ''Box Office Mojo'' (retrieved July 25, 2012).</ref> the film that first brought CGI to widespread public attention. The technique was used to animate the two "Terminator" robots. The "T-1000" robot was given a "mimetic poly-alloy" (liquid metal) structure, which enabled this shapeshifting character to morph into almost anything it touched. Most of the key Terminator effects were provided by [[Industrial Light & Magic]], and this film was the most ambitious CGI project since the 1982 film ''[[Tron]]''.<ref name=Animatormag>{{cite web|author= Jefferson, David|url=http://www.animatormag.com/archive/issue-30/issue-30-page-14/#.TwpGoJfwb6R|title=Visual Effects on Terminator 2|publisher=Animatormag.com|date=Spring 1993|access-date=January 8, 2012}}</ref>
 
The other was [[Walt Disney|Disney]]'s ''[[Beauty and the Beast (1991 film)|Beauty and the Beast]]'',<ref>[https://www.boxofficemojo.com/movies/?id=beautyandthebeast.htm ''Beauty and the Beast''] at ''Box Office Mojo'' (retrieved July 25, 2012).</ref> the second traditional 2D animated film to be entirely made using [[Computer Animation Production System|CAPS]]. The system also allowed easier combination of hand-drawn art with 3D [[Computer-generated imagery|CGI]] material, notably in the "waltz sequence", where Belle and Beast dance through a computer-generated ballroom as the camera "[[Camera dolly|dollies]]" around them in simulated 3D space.<ref>(2006) Audio commentary by John Musker, Ron Clements, and Alan Menken. Bonus material from ''The Little Mermaid: Platinum Edition'' [DVD]. Walt Disney Home Entertainment.</ref> Notably, ''Beauty and the Beast'' was the first animated film ever to be nominated for a Best Picture Academy Award.<ref>{{Cite web |last=Musical |first=Shrek The |date=2022-11-27 |title=Beauty And The Beast: The First Animated Film To Be Nominated For Best Picture |url=https://www.shrekthemusical.co.uk/beauty-and-the-beast-the-first-animated-film-to-be-nominated-for-best-picture/ |access-date=2022-12-23 |website=STM - Shrek Blog |language=en-US}}</ref>
 
Another significant step came in 1993, with [[Steven Spielberg]]'s ''[[Jurassic Park (film)|Jurassic Park]]'',<ref>[https://www.boxofficemojo.com/movies/?id=jurassicpark.htm ''Jurassic Park'' at Box Office Mojo] (retrieved August 3, 2012).</ref> where 3D [[Computer-generated imagery|CGI]] dinosaurs were integrated with life-sized [[animatronic]] counterparts. The CGI animals were created by ILM, and in a test scene to make a direct comparison of both techniques, Spielberg chose the CGI. Also watching was [[George Lucas]], who remarked "a major gap had been crossed, and things were never going to be the same."<ref>[https://web.archive.org/web/20070930102341/http://www.time.com/time/magazine/article/0,9171,978307,00.html ''Behind the Magic of Jurassic Park'' Richard Corliss, ''TIME'', 1993-04-26] (retrieved August 3, 2012).</ref><ref>Shone, Tom. [https://books.google.com/books?id=_HMOHsjIb5cC&dq=It+was+like+one+of+those+moments+in+history%2C+like+the+invention+of+the+light+bulb+or+the+first+telephone+call&pg=PA218 ''Blockbuster: How Hollywood learned to stop worrying and love the summer''] Pg 218. Simon and Schuster, 2004 {{ISBN|0-7432-3568-1}}, {{ISBN|978-0-7432-3568-6}}</ref><ref>''The Making of Jurassic Park'', Shay, Don and Duncan, Jody, ''Ballantine Books'', 1993, Softcover p. 53, first paragraph.</ref>
 
[[Flocking (behavior)|Flocking]] is the behavior exhibited when a group of birds (or other animals) move together in a flock. A mathematical model of flocking behavior was first simulated on a computer in 1986 by [[Craig Reynolds (computer graphics)|Craig Reynolds]], and soon found its use in animation, beginning with ''[[Stanley and Stella in: Breaking the Ice]]''. ''Jurassic Park'' notably featured flocking, and brought it to widespread attention by mentioning it in the actual script{{Citation needed|reason=unverified claim|date=November 2017}}. Other early uses were the flocking bats in [[Tim Burton]]'s ''[[Batman Returns]]'' (1992), and the wildebeest stampede in [[Walt Disney|Disney]]'s ''[[The Lion King]]'' (1994).<ref>[http://www.gabbai.com/academic/complexity-and-the-aerospace-industry-understanding-emergence-by-relating-structure-to-performance-using-multi-agent-systems/ ''Complexity and the Aerospace Industry: Understanding Emergence by Relating Structure to Performance using Multi-Agent Systems''] {{Webarchive|url=https://web.archive.org/web/20141219110658/http://gabbai.com/academic/complexity-and-the-aerospace-industry-understanding-emergence-by-relating-structure-to-performance-using-multi-agent-systems |date=December 19, 2014 }}, Gabbai, J.M.E, 2005, University of Manchester Doctoral Thesis.</ref>
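Reynolds' model steers each "boid" by three local rules: separation, alignment and cohesion. The following is a minimal illustrative sketch of that idea (not any production crowd system; the weights and neighbourhood radius are arbitrary):

<syntaxhighlight lang="python">
# Minimal "boids"-style flocking sketch: separation, alignment, cohesion.
# Weights and radii are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(0, 100, (50, 2))     # 50 boids in a 2D plane
vel = rng.uniform(-1, 1, (50, 2))

def step(pos, vel, radius=10.0, w_sep=0.05, w_ali=0.05, w_coh=0.01, dt=1.0):
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        near = (dist > 0) & (dist < radius)
        if not near.any():
            continue
        separation = -offsets[near].mean(axis=0)          # steer away from close neighbours
        alignment  = vel[near].mean(axis=0) - vel[i]      # match neighbours' heading
        cohesion   = pos[near].mean(axis=0) - pos[i]      # steer toward the local centre
        new_vel[i] += w_sep * separation + w_ali * alignment + w_coh * cohesion
    return pos + new_vel * dt, new_vel

for _ in range(100):                   # advance the flock 100 animation frames
    pos, vel = step(pos, vel)
print(pos[:3])
</syntaxhighlight>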
 
With improving hardware, lower costs, and an ever-increasing range of software tools, [[Computer-generated imagery|CGI]] techniques were soon rapidly taken up in both film and television production.
In 1993, [[J. Michael Straczynski]]'s ''[[Babylon 5]]'' became the first major television series to use [[Computer-generated imagery|CGI]] as the primary method for their visual effects (rather than using hand-built models), followed later the same year by [[Rockne S. O'Bannon]]'s ''[[SeaQuest DSV]]''.
 
Also in the same year, the French company [[:fr:Fantôme (studio)|Studio Fantome]] produced the first full-length completely computer-animated TV series, ''[[Insektors]]'' (26×13'),<ref>[http://www.awn.com/fantome/english/fr_main.htm Studio Fantome at ''Animation World Network''] {{webarchive|url=https://web.archive.org/web/20121119035412/http://www.awn.com/fantome/english/fr_main.htm |date=November 19, 2012 }} (retrieved August 8, 2012).</ref><ref>[http://www.awn.com/fantome/english/fr_ser1.htm ''Insektors'' at ''Animation World Network'' International Emmy Award 1994, "Children and Young People"] {{webarchive|url=https://web.archive.org/web/20130619000318/http://www.awn.com/fantome/english/fr_ser1.htm |date=June 19, 2013 }} (retrieved August 8, 2012).</ref> though they had also produced an even earlier all-3D short series, ''Geometric Fables'' (50×5'), in 1991.<ref>[http://www.awn.com/fantome/english/fr_geom.htm ''Geometric Fables'' at ''Animation World Network''] {{Webarchive|url=https://web.archive.org/web/20121119053201/http://www.awn.com/fantome/english/fr_geom.htm |date=November 19, 2012 }} (retrieved August 8, 2012).</ref> A little later, in 1994, the Canadian CGI TV series ''[[ReBoot]]'' (48×23') was aired, produced by [[Mainframe Entertainment]] and [[Alliance Atlantis Communications]], two companies that also created ''[[Beast Wars: Transformers]]'', which was released two years after ''ReBoot''.<ref>[https://www.wired.com/wired/archive/5.03/reboot.html "Before Toy Story there was ... ReBoot"], by Rogier van Bakel, ''[[Wired (magazine)|Wired]]'' (retrieved August 8, 2012).</ref>
 
In 1995 came the first fully computer-animated feature film, [[Disney]]-[[Pixar]]'s ''[[Toy Story]]'', which was a huge commercial success.<ref>[https://boxofficemojo.com/movies/?id=toystory.htm ''Toy Story'' at Box Office Mojo] (retrieved July 18, 2012).</ref> The film was directed by [[John Lasseter]], a co-founder of Pixar and a former Disney animator, who had started at Pixar with short movies such as ''[[Luxo Jr.]]'' (1986), ''[[Red's Dream]]'' (1987), and ''[[Tin Toy]]'' (1988), the last of which was also the first computer-generated animated short film to win an Academy Award. Then, after long negotiations between Disney and Pixar, a partnership deal was agreed in 1991 with the aim of producing a full feature movie, and ''Toy Story'' was the result.<ref name="PaikInfinity103">{{cite book|last=Paik|first=Karen|title=To Infinity and Beyond!: The Story of Pixar Animation Studios|url=https://books.google.com/books?id=uDAGknVpUwgC&q=buzz+lightyear+to+infinity+and+beyond&pg=PA104|access-date=March 13, 2009|publisher=[[Chronicle Books]]|___location=San Francisco|year=2007|page=103|isbn=978-0-8118-5012-4}}</ref>
 
The following years saw a greatly increased uptake of digital animation techniques, with many new studios going into production, and existing companies making a transition from traditional techniques to CGI. Between 1995 and 2005 in the US, the average effects budget for a wide-release feature film leapt from $5&nbsp;million to $40&nbsp;million. According to Hutch Parker, President of Production at [[20th Century Fox]], {{As of|2005|lc=on}}, "50 percent of feature films have significant effects. They're a character in the movie." However, CGI productions have made up for the expenditure by grossing over 20% more than their real-life counterparts, and by the early 2000s, computer-generated imagery had become the dominant form of special effects.<ref>[https://www.wired.com/wired/archive/13.02/fxgods.html "F/X Gods" by Anne Thompson, Wired.com] (retrieved August 3, 2012).</ref>
 
[[Warner Bros. Animation|Warner Bros]]' 1999 film ''[[The Iron Giant]]'' was the first traditionally animated feature to have a major character, the title character, that was fully computer-generated.<ref>{{Citation|title=The Iron Giant (1999) – IMDb|url=http://www.imdb.com/title/tt0129167/trivia|access-date=March 30, 2021}}</ref>
===Motion capture===
[[Motion capture]], or '''"Mocap"''', records the movement of external objects or people, and has applications for medicine, sports, robotics, and the military, as well as for animation in film, TV and games. The earliest example would be in 1878, with the pioneering photographic work of [[Eadweard Muybridge]] on human and animal locomotion, which is still a source for animators today.<ref>[http://www.muybridge.org/ The Eadweard Muybridge Online Archive], access to most of Muybridge's motion studies, at printable resolutions, along with a growing number of animations (retrieved August 12, 2012).</ref> Before computer graphics, capturing movements to use in animation was done using [[rotoscoping]], where the motion of an actor was filmed, then the film was used as a guide for the frame-by-frame motion of a hand-drawn animated character. The first example of this was [[Max Fleischer]]'s ''[[Out of the Inkwell]]'' series in 1915, and a more recent notable example is [[Ralph Bakshi]]'s 1978 2D animated movie ''[[The Lord of the Rings (1978 film)|The Lord of the Rings]]''.
 
Computer-based motion capture started as a [[photogrammetric]] analysis tool in [[biomechanics]] research in the 1970s and 1980s.<ref>{{cite journal| doi=10.1016/j.cub.2005.08.016 | pmid=16111929 | volume=15 | issue=16 | title=Mechanics of animal movement | year=2005 | journal=Current Biology | pages=R616–R619 | last1 = Alexander | first1 = R. McNeill| s2cid=14032136 | doi-access=free }}</ref> A performer wears markers near each joint to identify the motion by the positions or angles between the markers. Many different types of markers can be used (lights, reflective markers, LEDs, infra-red, inertial, mechanical, or wireless RF), and may be worn as a form of suit, or attached directly to a performer's body. Some systems include details of face and fingers to capture subtle expressions, and this is often referred to as "[[performance capture]]". The computer records the data from the markers, and uses it to animate digital character models in 2D or 3D computer animation; in some cases this can include camera movement as well. In the 1990s, these techniques became widely used for visual effects.
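As a toy illustration of what is done with the recorded marker data, the sketch below recovers a single joint angle from three marker positions in one frame; real pipelines solve whole skeletons over time and filter noisy measurements, and all coordinates here are invented:

<syntaxhighlight lang="python">
# Toy sketch: recovering a joint angle from three captured marker positions.
# Real motion-capture pipelines solve entire skeletons per frame; this only
# shows the basic geometry.
import numpy as np

def joint_angle(parent, joint, child):
    """Angle (degrees) at `joint` formed by the markers `parent` and `child`."""
    a = np.asarray(parent) - np.asarray(joint)
    b = np.asarray(child) - np.asarray(joint)
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# One frame of (x, y, z) marker data: shoulder, elbow, wrist.
shoulder, elbow, wrist = (0.0, 1.4, 0.0), (0.3, 1.1, 0.0), (0.5, 1.3, 0.1)
print(round(joint_angle(shoulder, elbow, wrist), 1))  # elbow flexion this frame
</syntaxhighlight>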
 
Video games also began to use motion capture to animate in-game characters. As early as 1988, a form of motion capture was used to animate the [[2D computer graphics|2D]] main character of the [[Martech]] video game ''[[Vixen (video game)|Vixen]]'', performed by model [[Corinne Russell]].<ref>{{cite magazine|magazine=[[Retro Gamer]]|title=Martech Games – The Personality People|page=51|issue=133|first=Graeme|last=Mason|url=https://issuu.com/michelfranca/docs/retro_gamer____133}}</ref> Motion capture was later notably used to animate the [[3D computer graphics|3D]] character models in the [[Sega Model 2]] [[arcade game]] ''[[Virtua Fighter 2]]'' in 1994.<ref>{{cite web|last=Wawro|first=Alex|title=Yu Suzuki Recalls Using Military Tech to Make Virtua Fighter 2 |url=https://www.gamedeveloper.com/business/yu-suzuki-recalls-using-military-tech-to-make-i-virtua-fighter-2-i-|website=[[Gamasutra]]|access-date=August 18, 2016|date=October 23, 2014}}</ref> In 1995, examples included the [[Atari Jaguar]] CD-based game ''[[Highlander: The Last of the MacLeods]]'',<ref>[http://www.atarimax.com/freenet/freenet_material/6.16and32-BitComputersSupportArea/8.OnlineMagazines/showarticle.php?569 ''Atari Explorer Online''], Vol 04 Iss 09, January 1, 1996 (retrieved August 12, 2012).</ref><ref>[http://radoff.com/blog/2008/08/22/anatomy-of-an-mmorpg/ Jon Radoff, "Anatomy of an MMORPG"] {{webarchive|url=https://web.archive.org/web/20091213053756/http://radoff.com/blog/2008/08/22/anatomy-of-an-mmorpg/ |date=December 13, 2009 }} (retrieved August 12, 2012).</ref> and the arcade [[fighting game]] ''[[Soul Edge]]'', which was the first video game to use [[Motion capture#Passive markers|passive optical]] motion-capture technology.<ref>{{Cite web | url=http://www.motioncapturesociety.com/resources/industry-history | title=History of Motion Capture | access-date=September 14, 2014 | archive-url=https://web.archive.org/web/20120514044040/http://www.motioncapturesociety.com/resources/industry-history | archive-date=May 14, 2012 | url-status=dead }}</ref>
 
Another breakthrough came in 1997, when motion capture was used to create hundreds of digital characters for the film ''[[Titanic (1997 film)|Titanic]]''. The technique was used extensively in 1999 to create Jar Jar Binks and other digital characters in ''[[Star Wars: Episode I – The Phantom Menace]]''.
 
===Match moving===
[[Match moving]] (also known as '''motion tracking''' or '''camera tracking'''), although related to motion capture, is a completely different technique. Instead of using special cameras and sensors to record the motion of subjects, match moving works with pre-existing live-action footage, and uses computer software alone to track specific points in the scene through multiple frames, and thereby allow the insertion of CGI elements into the shot with correct position, scale, orientation, and motion relative to the existing material. The terms are used loosely to describe several different methods of extracting subject or camera motion information from a motion picture. The technique can be 2D or 3D, and can also include matching for camera movements. The earliest commercial software examples were ''3D-Equalizer'' from Science.D.Visions<ref>[http://www.sci-d-vis.com/ Science.D.Visions website] (retrieved August 14, 2012).</ref> and ''rastrack'' from Hammerhead Productions,<ref>[http://www.cgw.com/Publications/CGW/2000/Volume-23-Issue-8-August-2000-/Simply-Marvel-ous.aspx "Simply Marvel-ous"], by Debra Kaufman, ''Computer Graphics World'', August 8, 2000 (retrieved August 14, 2012).</ref> both starting in the mid-1990s.
 
The first step is identifying suitable features that the software tracking algorithm can lock onto and follow. Typically, features are chosen because they are bright or dark spots, edges or corners, or a facial feature—depending on the particular tracking algorithm being used. When a feature is tracked it becomes a series of 2D coordinates that represent the position of the feature across the series of frames. Such tracks can be used immediately for 2D motion tracking, or then be used to calculate 3D information. In 3D tracking, a process known as "calibration" derives the motion of the camera from the inverse-projection of the 2D paths, and from this a "reconstruction" process is used to recreate the photographed subject from the tracked data, and also any camera movement. This then allows an identical virtual camera to be moved in a 3D animation program, so that new animated elements can be composited back into the original live-action shot in perfectly matched perspective.<ref>[http://www.cgw.com/Publications/CGW/2001/Volume-24-Issue-9-September-2001-/Move-for-Move.aspx "Move for Move"], by Audrey Doyle, ''Computer Graphics World'', September 9, 2000 (retrieved August 14, 2012).</ref>
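The quantity that the calibration step minimises is the reprojection error: the distance between where a candidate camera would project the reconstructed 3D points and where the tracked 2D features actually lie. The sketch below shows only that projection-and-compare step for a simple pinhole camera, with invented values; real match-moving solvers optimise full camera paths and lens parameters.

<syntaxhighlight lang="python">
# Sketch of the pinhole projection that match-moving calibration inverts:
# the solver searches for the camera pose whose projected 3D points best
# match the tracked 2D features.  All values here are illustrative.
import numpy as np

def project(points3d, rotation, translation, focal_length):
    """Project Nx3 world points into 2D image coordinates."""
    cam = points3d @ rotation.T + translation       # world -> camera space
    return focal_length * cam[:, :2] / cam[:, 2:3]  # perspective divide

def reprojection_error(tracked2d, points3d, rotation, translation, focal_length):
    """Mean distance between tracked features and projected points."""
    predicted = project(points3d, rotation, translation, focal_length)
    return np.linalg.norm(predicted - tracked2d, axis=1).mean()

points3d = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 6.0], [0.0, 1.0, 4.0]])
R = np.eye(3)                                       # candidate camera rotation
t = np.array([0.1, 0.0, 0.0])                       # candidate camera translation
tracked = project(points3d, np.eye(3), np.zeros(3), focal_length=800.0)
print(reprojection_error(tracked, points3d, R, t, focal_length=800.0))
</syntaxhighlight>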
 
In the 1990s, the technology progressed to the point that it became possible to include virtual stunt doubles. Camera tracking software was refined to allow increasingly complex visual effects developments that were previously impossible. Computer-generated extras also came to be used extensively in crowd scenes with advanced flocking and crowd-simulation software. Being mainly software-based, match moving has become increasingly affordable as computers become cheaper and more powerful. It has become an essential visual effects tool and is even used to provide effects in live television broadcasts.<ref>[http://www.thepixelart.com/breakdown-best-matchmoving-and-tracking-applications/ "A Breakdown of Best Matchmoving and Tracking Applications"], by Topher Welsh, ''Pixel Art'', Friday, November 27, 2009 (retrieved August 14, 2012).</ref>
 
===Virtual studio===
In television, a [[virtual studio]], or '''virtual set''', is a studio that allows the real-time combination of people or other real objects and computer-generated environments and objects in a seamless manner. It requires that the 3D CGI environment is automatically locked to follow any movements of the live camera and lens precisely. The essence of such a system is that it uses some form of camera tracking to create a live stream of data describing the exact camera movement, plus some realtime CGI rendering software that uses the camera tracking data and generates a synthetic image of the virtual set exactly linked to the camera motion. Both streams are then combined with a video mixer, typically using [[chroma key]]. Such virtual sets became common in TV programs in the 1990s, with the first practical system of this kind being the ''Synthevision virtual studio'' developed by the Japanese broadcasting corporation [[NHK]] (Nippon Hoso Kyokai) in 1991, and first used in their science special, ''Nano-space''.<ref>[http://ivizlab.sfu.ca/arya/Papers/IEEE/Multimedia/1998/Jan/Image%20Compositing.pdf "Image Compositing Based on Virtual Cameras"], by Masaki Hayashi, ''IEEE'', January 1998 (retrieved August 18, 2012).</ref>
<ref>[http://www.nhk.or.jp/strl/publica/labnote/lab447.html "Virtual Studio System for TV Program Production"] {{Webarchive|url=https://web.archive.org/web/20041209163407/http://www.nhk.or.jp/strl/publica/labnote/lab447.html |date=December 9, 2004 }}, ''NHK Laboratories'', Note No. 447, by Kazuo Fukui, Masaki Hayashi, Yuko Yamanouchi (retrieved August 18, 2012).</ref> Virtual studio techniques are also used in filmmaking, but this medium does not have the same requirement to operate entirely in realtime. Motion control or camera tracking can be used separately to generate the CGI elements later, which are then combined with the live action as a [[post-production]] process. However, by the 2000s, computer power had improved sufficiently to allow many virtual film sets to be generated in realtime, as in TV, so it was unnecessary to composite anything in post-production.
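The final compositing step of a virtual studio can be illustrated with a simple chroma key: wherever the camera image is dominated by the backing colour, the rendered virtual set (generated for the same tracked camera pose) shows through. The threshold and images below are invented, and broadcast keyers also deal with soft edges and colour spill.

<syntaxhighlight lang="python">
# Minimal chroma-key composite: put the keyed camera image over a CGI
# background rendered for the same (tracked) camera pose.
import numpy as np

def chroma_key(camera_rgb, cgi_rgb, green_threshold=0.3):
    """Pixels where green strongly dominates are replaced by the CGI set."""
    r, g, b = camera_rgb[..., 0], camera_rgb[..., 1], camera_rgb[..., 2]
    is_green = (g - np.maximum(r, b)) > green_threshold
    mask = is_green[..., None]                      # 1 = show the virtual set
    return np.where(mask, cgi_rgb, camera_rgb)

camera = np.zeros((4, 4, 3)); camera[...] = [0.1, 0.9, 0.1]   # green-screen backing
camera[1, 1] = [0.8, 0.6, 0.5]                                 # a "presenter" pixel
cgi = np.full((4, 4, 3), [0.3, 0.3, 0.35])                     # rendered virtual set
print(chroma_key(camera, cgi)[1, 1])                           # presenter pixel is kept
</syntaxhighlight>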
 
===Machinima===
[[Machinima]] uses realtime 3D computer graphics rendering engines to create a cinematic production. Most often, video game engines are used for this. The [[Academy of Machinima Arts & Sciences]] (AMAS), a non-profit organization formed in 2002 and dedicated to promoting machinima, defines machinima as "animated filmmaking within a real-time virtual 3-D environment". AMAS recognizes exemplary productions through awards given at its annual Machinima Film Festival.<ref>"3D Game-Based Filmmaking: The Art of Machinima", by [[Paul Marino]], ''Paraglyph Press'', 2004, {{ISBN|1-932111-85-9}}.</ref><ref>[http://www.machinima.org/ The Academy of Machinima Arts & Sciences website] {{Webarchive|url=https://web.archive.org/web/20051124061328/http://www.machinima.org/ |date=November 24, 2005 }} (retrieved August 14, 2012).</ref> The practice of using graphics engines from video games arose from the animated software introductions of the 1980s "[[demoscene]]", [[Disney Interactive Studios]]' 1992 video game ''[[Stunt Island]]'', and 1990s recordings of gameplay in [[first-person shooter]] video games, such as [[id Software]]'s ''[[Doom (1993 video game)|Doom]]'' and ''[[Quake (video game)|Quake]]''. Machinima-based artists are sometimes called machinimists or machinimators.
 
==3D animation software in the 1990s==
There were many developments, mergers and deals in the 3D software industry in the '90s and later.
 
* [[Wavefront Technologies|Wavefront]] followed the success of ''Personal Visualiser'' with the release of ''Dynamation'' in 1992, a powerful tool for interactively creating and modifying realistic, natural images of dynamic events. In 1993, Wavefront acquired Thomson Digital Images (TDI), with their innovative product ''Explore'', a tool suite that included ''3Design'' for modelling, ''Anim'' for animation, and ''Interactive Photorealistic Renderer'' (IPR) for rendering. In 1995, Wavefront was bought by [[Silicon Graphics]], and merged with [[Alias Systems Corporation|Alias]].<ref>[http://design.osu.edu/carlson/history/lesson8.html#wavefront "Commercial animation software companies – Wavefront"] {{Webarchive|url=https://web.archive.org/web/20140618212520/http://design.osu.edu/carlson/history/lesson8.html#wavefront |date=June 18, 2014 }}, Wayne Carlson, Ohio State University (retrieved September 3, 2012).</ref>
* [[Alias Systems Corporation|Alias Research]] continued the success of ''[[PowerAnimator]]'' with movies like ''[[Terminator 2: Judgment Day]]'', ''[[Batman Returns]]'' and ''[[Jurassic Park (film)|Jurassic Park]]'', and in 1993 started the development of a new entertainment software, which was later to be named ''[[Autodesk Maya|Maya]]''. Alias found customers in animated film, TV series, visual effects, and video games, and included many prominent studios, such as [[Industrial Light & Magic]], [[Pixar]], [[Sony Pictures Imageworks]], [[Walt Disney]], and [[Warner Bros.]]. Other Alias products were developed for applications in architecture and engineering. In 1995, SGI purchased both Alias Research and Wavefront in a 3-way deal, and the merged company [[Alias Systems Corporation|Alias Wavefront]] was launched.<ref>[http://design.osu.edu/carlson/history/lesson8.html#aliasresearch "Commercial animation software companies – Alias Research"] {{Webarchive|url=https://web.archive.org/web/20140618212520/http://design.osu.edu/carlson/history/lesson8.html#aliasresearch |date=June 18, 2014 }}, Wayne Carlson, Ohio State University (retrieved September 3, 2012).</ref>
* [[Alias Systems Corporation|Alias Wavefront]]'s new mission was to focus on developing the world's most advanced tools for the creation of digital content. ''[[PowerAnimator]]'' continued to be used for visual effects and movies (such as ''[[Toy Story]]'', ''[[Casper (film)|Casper]]'', and ''[[Batman Forever]]''), and also for video games. Further development of the ''Maya'' software went ahead, adding new features such as motion capture, facial animation, motion blur, and "time warp" technology. [[Computer-aided design|CAD]] industrial design products like ''[[Autodesk AliasStudio|AliasStudio]]'' and ''Alias Designer'' became standardized on Alias|Wavefront software. In 1998, Alias|Wavefront launched ''[[Autodesk Maya|Maya]]'' as its new 3D flagship product, and this soon became the industry's most important animation tool. ''Maya'' was the merger of three packages—Wavefront's ''Advanced Visualizer'', Alias's ''Power Animator'', and TDI's ''Explore''. In 2003 the company was renamed simply "Alias". In 2004, SGI sold the business to a private investment firm, and it was later renamed to [[Alias Systems Corporation]]. In 2006, the company was bought by [[Autodesk]].<ref>[http://design.osu.edu/carlson/history/lesson8.html#aw "Commercial animation software companies – Alias|Wavefront"] {{Webarchive|url=https://web.archive.org/web/20140618212520/http://design.osu.edu/carlson/history/lesson8.html#aw |date=June 18, 2014 }}, Wayne Carlson, Ohio State University (retrieved September 3, 2012).</ref><ref>[https://web.archive.org/web/20040622205615/http://www.aliaswavefront.com/eng/about/history/index.shtml "About Alias"] at ''Wayback Machine'' (retrieved September 3, 2012).</ref>
* [[Softimage (company)|Softimage]] developed further features for ''Creative Environment'', including the ''Actor Module'' (1991) and ''Eddie'' (1992), including tools such as inverse kinematics, enveloping, metaclay, flock animation, and many others. Softimage customers include many prominent production companies, and Softimage has been used to create animation for hundreds of major feature films and games. In 1994, [[Microsoft]] acquired Softimage, and renamed the package ''[[Softimage 3D]]'', releasing a [[Windows NT]] port two years later.<ref>[http://www.microsoft.com/presspass/press/1996/jan96/3danimpr.mspx "3D – press release"] {{Webarchive|url=https://web.archive.org/web/20111229141042/http://www.microsoft.com/presspass/press/1996/jan96/3danimpr.mspx |date=December 29, 2011 }}, ''Microsoft'', 1996-1 (retrieved July 7, 2012).</ref><ref>[https://www.nytimes.com/1994/02/15/business/company-news-an-acquisition-by-microsoft.html "COMPANY NEWS; An Acquisition By Microsoft"], ''The New York Times'', February 15, 1994 (retrieved July 7, 2012).</ref> In 1998, after helping to port the products to Windows and financing the development of ''[[Autodesk Softimage|Softimage]]'' and ''Softimage|DS'', Microsoft sold the Softimage unit to [[Avid Technology]], who was looking to expand its visual effect capabilities. Then, in 2008, Autodesk acquired the brand and the animation assets of Softimage from Avid, thereby ending Softimage Co. as a distinct entity. The video-related assets of Softimage, including ''Softimage|DS'' (now ''Avid|DS'') continue to be owned by Avid.<ref>[http://www.prnewswire.co.uk/cgi/news/release?id=35215 "Pr Newswire Uk: Avid Technology To Acquire Softimage Subsidiary Of Microsoft Corporation"], ''Prnewswire.co.uk'' (retrieved July 7, 2012).</ref><ref>[http://design.osu.edu/carlson/history/lesson8.html#softimage "Commercial animation software companies – Softimage"] {{Webarchive|url=https://web.archive.org/web/20140618212520/http://design.osu.edu/carlson/history/lesson8.html#softimage |date=June 18, 2014 }}, Wayne Carlson, Ohio State University (retrieved September 3, 2012).</ref>
* [[Autodesk]] Inc's PC DOS-based ''[[Autodesk 3ds Max|3D Studio]]'' was eventually superseded in 1996 when The Yost Group developed [[Autodesk 3ds Max|3D Studio Max]] for Windows NT. Priced much lower than most competitors, ''3D Studio Max'' was quickly seen as an affordable solution for many professionals. Of all animation software, ''3D Studio Max'' serves the widest range of users. It is used in film and broadcast, game development, corporate and industrial design, education, medical, and web design. In 2006, Autodesk acquired [[Alias Systems Corporation|Alias]], bringing the ''StudioTools'' and ''Maya'' software products under the Autodesk banner, with ''3D Studio Max'' rebranded as ''[[Autodesk 3ds Max]]'', and ''Maya'' as ''[[Autodesk Maya]]''. Now one of the largest software companies in the world, Autodesk serves more than 4 million customers in over 150 countries.<ref>[http://www.maxunderground.com/the_history_of_3d_studio_pt2/2 "The History of 3D Studio – Gary Yost interview"] {{Webarchive|url=https://web.archive.org/web/20111123213316/http://www.maxunderground.com/the_history_of_3d_studio_pt2/2 |date=November 23, 2011 }} (retrieved July 7, 2012).</ref><ref>[http://www.the-area.com/maxturns20/history "History of Autodesk 3ds Max"] {{webarchive|url=https://web.archive.org/web/20110222174236/http://www.the-area.com/maxturns20/history |date=February 22, 2011 }} (retrieved August 28, 2012).</ref><ref>[http://design.osu.edu/carlson/history/lesson8.html#3ds "Commercial animation software companies – Autodesk"] {{Webarchive|url=https://web.archive.org/web/20140618212520/http://design.osu.edu/carlson/history/lesson8.html#3ds |date=June 18, 2014 }}, Wayne Carlson, Ohio State University (retrieved September 3, 2012).</ref>
|first=Paul
|author2=Tim Hawkins |author3=Chris Tchou |author4=Haarm-Pieter Duiker |author5=Westley Sarokin |author6=Mark Sagar
|title=Proceedings of the 27th annual conference on Computer graphics and interactive techniques - SIGGRAPH '00
|chapter=Acquiring the reflectance field of a human face
|pages=145–156
|publisher=ACM
|year=2000
|doi=10.1145/344779.344855
}}
</ref> which was the last missing piece of the puzzle to make [[digital look-alike]]s of known actors.
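The reflectance-field idea in the paper cited above relies on the additivity of light: a face photographed under one light direction at a time can be relit for a new environment as a weighted sum of those basis photographs. The sketch below shows only that linear combination, with placeholder data.

<syntaxhighlight lang="python">
# Sketch of reflectance-field relighting: an image under novel lighting is a
# weighted sum of photographs each lit from a single direction (a light
# stage's "one light at a time" basis).  The data here is a random placeholder.
import numpy as np

rng = np.random.default_rng(1)
num_lights, H, W = 32, 8, 8
basis_images = rng.random((num_lights, H, W, 3))   # one photo per light direction
env_weights = rng.random(num_lights)               # target environment's intensity
                                                   # sampled at each light direction

relit = np.tensordot(env_weights, basis_images, axes=1)  # weighted sum over lights
print(relit.shape)  # (8, 8, 3)
</syntaxhighlight>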
 
===Motion capture, photorealism, and the uncanny valley===
The first mainstream cinema film fully made with [[motion capture]] was the 2001 Japanese-American ''[[Final Fantasy: The Spirits Within]]'', directed by [[Hironobu Sakaguchi]], which was also the first to use photorealistic CGI characters.<ref>[https://web.archive.org/web/20051121073232/http://www.time.com/time/magazine/article/0,9171,997597,00.html ''Cinema: A Painstaking Fantasy''] Chris Taylor, Time, July 31, 2000 (retrieved August 8, 2012).</ref> The film was not a box-office success.<ref>[https://www.boxofficemojo.com/movies/?id=finalfantasy.htm ''Final Fantasy: The Spirits Within''] at Box Office Mojo (retrieved August 12, 2012).</ref> Some commentators have suggested this may be partly because the lead CGI characters had facial features that fell into the "[[uncanny valley]]".<ref>The uncanny valley is a hypothesis in the field of robotics and 3D computer animation, which holds that when human replicas look and act almost, but not perfectly, like actual human beings, it causes a response of revulsion among human observers. The "valley" refers to the dip in a graph of the comfort level of humans as a function of a robot's human likeness.</ref> In 2002, Peter Jackson's ''[[The Lord of the Rings: The Two Towers]]'' was the first feature film to use a real-time motion-capture system, which allowed the actions of actor [[Andy Serkis]] to be fed directly into the 3D CGI model of [[Gollum]] as it was being performed.<ref>Gollum: How We Made Movie Magic, a 2003 memoir by British actor Andy Serkis</ref>
 
Motion capture is seen by many as replacing the skills of the animator, and lacking the animator's ability to create exaggerated movements that are impossible to perform live. The end credits of [[Pixar]]'s film ''[[Ratatouille (film)|Ratatouille]]'' (2007) carry a stamp certifying it as "100% Pure Animation — No Motion Capture!" However, proponents point out that the technique usually includes a good deal of adjustment work by animators as well. Nevertheless, in 2010, the US Film Academy ([[Academy of Motion Picture Arts and Sciences|AMPAS]]) announced that motion-capture films would no longer be considered eligible for "Best Animated Feature Film" Oscars, stating "Motion capture by itself is not an animation technique."<ref>[http://www.oscars.org/press/pressreleases/2010/20100708.html "Rules Approved for 83rd Academy Awards"], AMPAS Press Release, July 8, 2010 (retrieved August 8, 2012)</ref><ref>[https://www.economist.com/blogs/prospero/2011/10/performance-capture-animation "Tintin and the dead-eyed zombies"], by Prospero, The Economist, October 31, 2011 (retrieved August 8, 2012)</ref>
The early 2000s saw the advent of [[virtual cinematography|fully virtual cinematography]], with its audience debut generally considered to be in the 2003 films ''[[The Matrix Reloaded]]'' and ''[[The Matrix Revolutions]]'', whose digital look-alikes were so convincing that it is often impossible to know whether an image is a human filmed with a camera or a digital look-alike shot with a [[computer simulation|simulation]] of a camera. The scenes built and imaged within virtual cinematography are the "Burly Brawl" and the final showdown between [[Neo (The Matrix)|Neo]] and [[Agent Smith]]. With conventional [[cinematography|cinematographic]] methods, the Burly Brawl would have been prohibitively time-consuming to make, requiring years of [[compositing]] for a scene of a few minutes. A human actor also could not have been used for the final showdown in ''The Matrix Revolutions'': Agent Smith's [[cheekbone]] is punched in by Neo, leaving the digital look-alike naturally unhurt.
 
==3D animation software in the 2000s==
* [[Blender (software)|Blender]] is a free and open-source 3D animation suite, used by professionals and enthusiasts alike.
* [[Poser (software)|Poser]] is another DIY 3D graphics program, especially aimed at user-friendly animation of [[wikt:soft|soft]] objects.
* [[Pointstream Software]] is a professional [[optical flow]] program from [[Arius3D]], makers of the [[Cartesian coordinate system|XYZ]] [[RGB]] [[3D scanner|scanner]]; it uses the [[pixel]] as its basic primitive, usually tracked over a [[multi-camera setup]], and was used in the production process of the ''Matrix'' sequels.
* [[Adobe Substance]] is a suite of software that allows artists to create 3D assets, models, materials, patterns, and [https://discover.therookies.co/2023/05/02/40-mind-blowing-digital-art-projects-created-with-adobe-substance-designer/ lighting].
 
==CGI in the 2010s==
{{update section|date=October 2022}}
At SIGGRAPH 2013, [[Activision]] and [[University of Southern California|USC]] presented a [[real-time computing|real-time]] digital face look-alike of "Ira", using the USC Light Stage X by Ghosh et al. for both [[reflectance capture|reflectance field]] and motion capture.<ref name="Deb2013">{{cite web
| last = Debevec
| first = Paul
| title = Digital Ira SIGGRAPH 2013 Real-Time Live
| url = http://gl.ict.usc.edu/Research/DigitalIra/
| access-date = July 31, 2013
| archive-date = February 21, 2015
| archive-url = https://web.archive.org/web/20150221212728/http://gl.ict.usc.edu/Research/DigitalIra/
| url-status = dead
}}</ref><ref name="Deb2013-2">
{{cite web
| last = Debevec
| url = http://www.debevec.org
| access-date = July 31, 2013}}
</ref> The end result, ''Digital Ira'',<ref name="Deb2013"/> both precomputed and [[real-time computer graphics|rendered in real time]] on a state-of-the-art [[graphics processing unit]], looks fairly realistic. Techniques previously confined to high-end virtual cinematography systems are rapidly moving into video games and [[leisure]] [[Application software|applications]].
 
==Further developments==
[[Category:Computer-related introductions in 1960]]
[[Category:Computer animation| ]]
[[Category:History of animation|computer animation]]
[[Category:History of computing|computer animation]]
[[Category:New media]]
[[Category:Multimedia]]