Digital puppetry is the manipulation and performance of digitally animated 2D or 3D figures and objects in a virtual environment that are rendered in real-time by computers. It is most commonly used in filmmaking and television production but has also been used in interactive theme park attractions and live theatre.
The exact definition of what is and is not digital puppetry is subject to debate among puppeteers and computer graphics designers, but it is generally agreed that digital puppetry differs from conventional computer animation in that it involves performing characters in real-time, rather than animating them frame by frame.
Digital puppetry is closely associated with character animation, motion capture technologies, and 3D animation, as well as skeletal animation. Digital puppetry is also known as virtual puppetry, performance animation, living animation, aniforms, live animation and real-time animation (although the latter also refers to animation generated by computer game engines). Machinima is another form of digital puppetry, and Machinima performers are increasingly being identified as puppeteers.
One of the earliest pioneers of digital puppetry was Lee Harrison III. He conducted experiments in the early 1960s that animated figures using analog circuits and a cathode ray tube. Harrison rigged up a body suit with potentiometers and created the first working motion capture rig, animating 3D figures in real-time on his CRT screen. He made several short films with this system, which he called ANIMAC.[1] Among the earliest examples of digital puppets produced with the system was a character called "Mr. Computer Image", who was controlled by a combination of the ANIMAC's body control rig and an early form of voice-controlled automatic lip sync.[2]
Perhaps the first truly commercially successful example of a digitally animated figure being performed and rendered in real-time is Waldo C. Graphic, a character created in 1988 by Jim Henson and Pacific Data Images for the Muppet television series The Jim Henson Hour. Henson had used the Scanimate system to generate a digital version of his Nobody character in real-time for the television series Sesame Street as early as 1970,[3] and Waldo grew out of experiments Henson conducted to create a computer-generated version of his character Kermit the Frog[4] in 1985.[5]
Waldo's strength as a computer-generated puppet was that he could be controlled by a single puppeteer (Steve Whitmire[6]) in real-time in concert with conventional puppets. The computer image of Waldo was mixed with the video feed of the camera focused on physical puppets so that all of the puppeteers in a scene could perform together. (It was already standard Muppeteering practice to use monitors while performing, so the use of a virtual puppet did not significantly increase the complexity of the system.) Afterward, in post-production, PDI re-rendered Waldo in full resolution, adding a few dynamic elements on top of the performed motion.[7]
Waldo C. Graphic can be seen today in Jim Henson's Muppet*Vision 3D at Disney's Hollywood Studios in Lake Buena Vista, Florida.
Another significant development in digital puppetry in 1988 was Mike Normal, which Brad DeGraf and partner Michael Wahrman developed to show off the real-time capabilities of Silicon Graphics' then-new 4D series workstations. Unveiled at the 1988 SIGGRAPH convention, it was the first live performance of a digital character. Mike was a sophisticated talking head driven by a specially built controller that allowed a single puppeteer to control many parameters of the character's face, including mouth, eyes, expression, and head position.[8]
The system developed by deGraf/Wahrman to perform Mike Normal was later used to create a representation of the villain Cain in the motion picture RoboCop 2, which is believed to be the first example of digital puppetry being used to create a character in a full-length motion picture.
Trey Stokes was the puppeteer for both Mike Normal's SIGGRAPH debut and RoboCop 2.
One of the most widely seen successful examples of digital puppetry in a TV series is Sesame Street's "Elmo's World" segment, in which a set of CGI furniture characters was performed in real-time on set alongside Elmo and the other physical puppets. As with the example of Henson's Waldo C. Graphic above, the digital puppets' video feed was seen live by both the digital and physical puppet performers, allowing the digital and physical characters to interact.[9]
Walt Disney Imagineering has also been an important innovator in the field of digital puppetry, developing new technologies to enable visitors to Disney theme parks to interact with some of the company's famous animated characters.[10] In 2004, they used digital puppetry techniques to create the Turtle Talk with Crush attractions at Epcot and Disney California Adventure Park. In the attraction, a hidden puppeteer performs and voices a digital puppet of Crush, the laid-back sea turtle from Finding Nemo, on a large rear-projection screen. To the audience, Crush appears to be swimming inside an aquarium and engages in unscripted, real-time conversations with theme park guests.
Disney Imagineering continued its use of digital puppetry with the Monsters Inc. Laugh Floor, an attraction in Tomorrowland at Walt Disney World's Magic Kingdom, which opened in spring 2007. Guests temporarily enter the "monster world" introduced in Disney and Pixar's 2001 film, Monsters, Inc., where they are entertained by Mike Wazowski and other monster comedians who are attempting to capture laughter, which they convert to energy. Much like Turtle Talk, the puppeteers interact with guests in real time, just as a live comedian would interact with an audience.
Disney also uses digital puppetry techniques in Stitch Encounter, which opened in 2006 at the Hong Kong Disneyland park. Disney has another version of the same attraction in Disneyland Resort Paris called Stitch Live!
Since 2014, the United States Army's Program Executive Office for Simulation, Training, Research, and Instrumentation (PEO STRI), a division of the US Army Simulation and Training Technology Center (STTC), has been experimenting with digital puppetry as a method of teaching advanced situational awareness to infantry squads.[11] A single improvisor using motion capture technology from Organic Motion Inc interacted with squads through several life-sized avatars of varying ages and genders, which were projected onto multiple walls throughout an urban operations training facility. The motion capture technology was paired with real-time voice shifting to achieve the effect.[12]
A digital puppet is controlled onscreen in real-time by a puppeteer who uses a telemetric input device known as a Waldo (after the short story "Waldo" by Robert A. Heinlein which features a man who invents and uses such devices), connected to the computer. The X-Y-Z axis movement of the input device causes the digital puppet to move correspondingly.
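The axis-to-motion mapping described above can be illustrated with a minimal sketch. All names, the 10-bit sensor range, and the sensitivity value here are hypothetical, chosen only to show the idea of normalizing raw device readings and applying them to a puppet's position each frame:

```python
# Hypothetical sketch of a Waldo-style input device driving a digital puppet:
# raw potentiometer readings are normalized, then applied to the puppet's
# X-Y-Z position every frame.

def read_waldo_axes(raw):
    """Normalize raw 10-bit readings (0-1023) to the range -1.0..1.0."""
    return tuple((value - 511.5) / 511.5 for value in raw)

def update_puppet(position, axes, sensitivity=10.0):
    """Move the puppet's X-Y-Z position in proportion to the device axes."""
    return tuple(p + a * sensitivity for p, a in zip(position, axes))

# Example frame: the device is pushed fully along X, near-centered on Y and Z.
puppet_pos = (0.0, 0.0, 0.0)
axes = read_waldo_axes((1023, 512, 512))
puppet_pos = update_puppet(puppet_pos, axes)
```

In a real system the same principle extends to many more channels (fingers, wrist rotation, triggers), each mapped to a different parameter of the puppet.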
Computer facial animation is primarily an area of computer graphics that encapsulates methods and techniques for generating and animating images or models of a character's face. The importance of human faces in verbal and non-verbal communication, together with advances in computer graphics hardware and software, has generated considerable scientific, technological, and artistic interest in computer facial animation.
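One common facial-animation technique is blendshape (morph target) interpolation, in which the displayed face is a weighted sum of a neutral mesh and sculpted expression offsets. The toy "mesh" and expression names below are made up purely for illustration:

```python
# Illustrative blendshape sketch: the face mesh is the neutral shape plus
# a weighted sum of per-expression vertex offsets (deltas).

neutral = [0.0, 0.0, 0.0]          # toy "mesh" of 3 vertex coordinates
deltas = {
    "smile": [0.0, 0.5, 0.0],      # offsets from neutral, per expression
    "jaw_open": [0.0, -1.0, 0.2],
}

def blend(weights):
    """Return the mesh produced by the weighted sum of expression deltas."""
    mesh = list(neutral)
    for name, weight in weights.items():
        for i, d in enumerate(deltas[name]):
            mesh[i] += weight * d
    return mesh

# Half-strength smile combined with a slightly open jaw:
face = blend({"smile": 0.5, "jaw_open": 0.25})
```

A puppeteer's controller typically drives these weights directly, which is what allows a face to be performed live rather than keyframed.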
An object (puppet) or a human body serves as the physical representation of a digital puppet and is manipulated by a puppeteer; the digital puppet mirrors the movements of the object or body in real time. Motion capture puppetry is commonly used, for example, by VTubers, who rig digital avatars to follow the movements of their heads.
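A VTuber-style head rig of this kind can be sketched as follows. The class, angle names, and smoothing factor are illustrative assumptions, not any particular tracking product's API; the smoothing step damps sensor jitter so the avatar's motion stays stable:

```python
# Hypothetical sketch: mirror a performer's tracked head rotation onto an
# avatar bone, with exponential smoothing to damp tracking jitter.

def smooth(previous, target, alpha=0.3):
    """Exponentially smooth one rotation angle (degrees)."""
    return previous + alpha * (target - previous)

class AvatarHead:
    def __init__(self):
        self.yaw = self.pitch = self.roll = 0.0

    def apply_tracking(self, yaw, pitch, roll):
        """Apply the performer's tracked head pose to the avatar."""
        self.yaw = smooth(self.yaw, yaw)
        self.pitch = smooth(self.pitch, pitch)
        self.roll = smooth(self.roll, roll)

head = AvatarHead()
for frame_pose in [(10.0, 0.0, 0.0)] * 20:  # performer holds head at 10° yaw
    head.apply_tracking(*frame_pose)
# after 20 frames the avatar has nearly converged on the tracked pose
```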
Virtual humans (or digital humans) are simulations of human beings on computers. The research domain is concerned with their representation, movement, and behavior; research has also shown that, in an advertising context, a human-like virtual human conveys higher message credibility than an anime-like one. A particular case of a virtual human is the virtual actor: a virtual human (avatar or autonomous) that represents an existing personality and acts in a film or series.
Aniforms is a technique in which a two-dimensional cartoon character is operated like a puppet and displayed to live audiences or in visual media. The concept was invented by the puppeteer Morey Bunin, who had worked with string marionettes and hand puppets, together with his spouse Charlotte. The distinctive feature of an Aniforms character is that it displays a physical form that appears "animated" on a real or simulated television screen. The technique was used in television production.
Machinima is a production technique that can be used to perform digital puppets. It involves creating computer-generated imagery (CGI) using the low-end 3D engines in video games: players act out scenes in real-time using characters and settings within a game, and the resulting footage is recorded and later edited into a finished film.[13]