Computer animation is the process used for digitally generating moving images. The more general term computer-generated imagery (CGI) encompasses both still images and moving images, while computer animation only refers to moving images. Modern computer animation usually uses 3D computer graphics.
Computer animation is a digital successor to stop motion and traditional animation. Instead of a physical model or illustration, a digital equivalent is manipulated frame by frame. Computer-generated animation also allows a single graphic artist to produce such content without actors, expensive set pieces, or props. To create the illusion of movement, an image is displayed on the computer monitor and repeatedly replaced by a new image that is similar to it but advanced slightly in time (usually at a rate of 24, 25, or 30 frames per second). This technique is identical to how the illusion of movement is achieved with television and motion pictures.
To trick the visual system into seeing a smoothly moving object, the pictures should be drawn at around 12 frames per second or faster (a frame is one complete image).[1] With rates above 75 to 120 frames per second, no improvement in realism or smoothness is perceivable due to the way the eye and the brain both process images. At rates below 12 frames per second, most people can detect jerkiness associated with the drawing of new images that detracts from the illusion of realistic movement.[2] Conventional hand-drawn cartoon animation often uses 15 frames per second in order to save on the number of drawings needed, but this is usually accepted because of the stylized nature of cartoons. To produce more realistic imagery, computer animation demands higher frame rates.
Films seen in theaters in the United States run at 24 frames per second, which is sufficient to create the illusion of continuous movement.
Computer-generated animation is an umbrella term for three-dimensional (3D) and two-dimensional (2D) computer animation, with subcategories such as asset-driven, hybrid, and digitally drawn animation. Creators animate using code or software instead of pencil-to-paper drawings. There are many techniques and disciplines in computer-generated animation; some are digital counterparts of traditional animation, such as keyframe animation, while others are only possible with a computer, such as fluid simulation.
CG animators can break physical laws by using mathematical algorithms to cheat mass, force, gravity, and more. Fundamentally, computer-generated animation is a powerful tool that can improve the quality of animation by using the power of computing to unleash the animator's imagination. For example, onion skinning lets 2D animators see the flow of their work all at once, while interpolation lets 3D animators automate the process of inbetweening.
Movie | Type of Computer Generated Animation | Impact |
---|---|---|
Toy Story 2 | Stylized 3D computer animation[3] | Pixar developed cutting-edge technology for fully 3D animation. 'Toy Story' is considered a turning point for 3D animation in general.[4] |
Godzilla Minus One | Digital VFX, photorealistic[5] | Toho Studios won an Oscar for its groundbreaking VFX, achieved on a small budget relative to most box-office movies.[6] |
The Breadwinner | 2D computer animation[7] | Was praised for its 2D animated style, showing the possibilities of what the format could portray. |
Interstellar | Hyper photorealistic CGI following scientific principles[8] | The VFX artists working on Interstellar published a paper about the science and mathematics that were used to create the famous 'Gargantua' black hole.[8] |
Klaus | Hybrid 2D and 3D computer animation[9] | The use of 3D lighting for 2D animation in this movie opened the door to many new animation styles for 2D animators. |
For 3D computer animations, objects (models) are built in the computer (modeled) and 3D figures are rigged with a virtual skeleton. The animator then moves the figure's limbs, eyes, mouth, clothes, etc. on key frames. In traditional animation, the differences between key frames are drawn by hand in a process known as tweening; in 3D computer animation, this is done automatically and is called interpolation. Finally, the animation is rendered and composited.
Before becoming a final product, 3D computer animations exist only as a series of moving shapes and systems within 3D software, and must be rendered. Rendering can happen as a separate process for animations developed for movies and short films, or in real time for video games. After an animation is rendered, it can be composited into a final product.
For 3D models, attributes can describe any characteristic of the object that can be animated. This includes transformation (movement from one point to another), scaling, rotation, and more complex attributes like blend shape progression (morphing from one shape to another). Each attribute gets a channel on which keyframes can be set. These keyframes can be used in more complex ways such as animating in layers (combining multiple sets of key frame data), or keying control objects to deform or control other objects. For instance, a character's arms can have a skeleton applied, and the joints can have transformation and rotation keyframes set. The movement of the arm joints will then cause the arm shape to deform.
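The per-attribute channel idea above can be sketched in Python. The names here (`Channel`, `set_key`) are illustrative, not the API of any particular 3D package:

```python
class Channel:
    """Holds keyframes (frame number -> value) for one animatable attribute."""
    def __init__(self):
        self.keys = {}

    def set_key(self, frame, value):
        self.keys[frame] = value

    def value_at(self, frame):
        # Hold the most recent earlier key; real software would interpolate.
        frames = sorted(self.keys)
        if not frames:
            return None
        earlier = [f for f in frames if f <= frame]
        return self.keys[earlier[-1]] if earlier else self.keys[frames[0]]

# Each animatable attribute of an object gets its own channel.
arm_joint = {
    "rotate_x": Channel(),
    "translate_y": Channel(),
}
arm_joint["rotate_x"].set_key(1, 0.0)    # key at frame 1: 0 degrees
arm_joint["rotate_x"].set_key(24, 90.0)  # key at frame 24: 90 degrees
```

Keying a control object simply means setting keys on the channels of an object whose values other objects read, such as a joint whose rotation deforms the arm mesh.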
3D animation software interpolates between keyframes by generating a spline between keys plotted on a graph which represents the animation. Additionally, these splines can follow bezier curves to control how the spline curves relative to the keyframes. Using interpolation allows 3D animators to dynamically change animations without having to redo all the in-between animation. This also allows the creation of complex movements such as ellipses with only a few keyframes. Lastly, interpolation allows the animator to change the framerate, timing, and even scale of the movements at any point in the animation process.
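A minimal sketch of this interpolation in Python: linear blending between two keyframes, plus the smoothstep cubic standing in for the ease curves that real spline evaluation provides. The function name and keyframe format are invented for illustration:

```python
def interpolate(k0, k1, frame, ease=False):
    """k0 and k1 are (frame, value) keyframes; returns the in-between value."""
    f0, v0 = k0
    f1, v1 = k1
    t = (frame - f0) / (f1 - f0)      # normalized time between the keys, 0..1
    if ease:
        t = t * t * (3 - 2 * t)       # smoothstep: slow in, slow out
    return v0 + (v1 - v0) * t

# A ball moving from x=0 at frame 1 to x=100 at frame 25:
mid_linear = interpolate((1, 0.0), (25, 100.0), 13)        # halfway: 50.0
mid_eased = interpolate((1, 0.0), (25, 100.0), 13, True)   # also 50.0 at midpoint
```

Because the in-betweens are computed rather than drawn, retiming the animation only requires moving the keyframes; every intermediate frame is regenerated automatically.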
Another way to automate 3D animation is to use procedural tools such as 4D noise. Noise is any algorithm that plots pseudo-random values within a dimensional space.[10] 4D noise can be used to do things like move a swarm of bees around; the first three dimensions correspond to the position of the bees in space, and the fourth is used to change the bee's position over time. Noise can also be used as a cheap replacement for simulation. For example, smoke and clouds can be animated using noise.
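The idea can be illustrated with 1D value noise: pseudo-random but repeatable values smoothly interpolated over a coordinate. Production 4D noise (for example Perlin or simplex noise) extends the same principle to four inputs (x, y, z, time); the hashing constants below are arbitrary choices for this sketch:

```python
import math

def hash_noise(i):
    """Deterministic pseudo-random value in [0, 1) for integer lattice point i."""
    n = (i * 374761393 + 668265263) & 0xFFFFFFFF
    n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
    return (n ^ (n >> 16)) / 4294967296.0

def value_noise(x):
    """Smoothly blend the pseudo-random values at the lattice points around x."""
    i = math.floor(x)
    t = x - i
    t = t * t * (3 - 2 * t)  # smoothstep so the curve has continuous slope
    return hash_noise(i) * (1 - t) + hash_noise(i + 1) * t

# Sampling the noise over time gives organic wandering motion, e.g. a bee's
# horizontal position: x(t) = value_noise(t * 0.1) * flight_width
```

Because the function is deterministic, the same input always yields the same "random" motion, which is what makes noise usable for repeatable animation.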
Node-based animation is useful for animating organic and chaotic shapes. By using nodes, an animator can build up a complex set of animation rules that can be applied either to many objects at once, or one very complex object. A good example of this would be setting the movement of particles to match the beat of a song.
There are many different disciplines of 3D animation, some of which include entirely separate artforms. For example, hair simulation for computer animated characters in and of itself is a career path which involves separate workflows,[11] and different software and tools. The combination of all or some 3D computer animation disciplines is commonly referred to within the animation industry as the 3D animation pipeline.[12]
Discipline | Explanation | Tools | Examples |
---|---|---|---|
Face Rigging | A facial rig is a rig that includes muscles, deformation, mesh displacement, and other techniques to enable the animation of facial expressions, and phonemes for lip syncing. | Autodesk Maya, Blender | In 'Avatar: The Way of Water', Weta meticulously designed the digital muscles in the faces of their characters so that their emotional range could be comparable to that of a human.[13] |
Facial Animation | The process of animating facial expressions, lip-syncing, and phoneme blend-shapes (shapes that the face morphs into). | Autodesk Maya, Blender, Autodesk 3DS Max | In Pixar's 'Turning Red', animators took influence from anime-style facial expressions to inform their animation.[14] |
Character Animation | Specifically the animation of characters. 3D character animation is its own specialty due to the complexity required to animate dancing, running, fighting, or high-fidelity motion such as playing basketball. | Autodesk Maya, Blender | Pixar's 'The Incredibles' won the 2004 Visual Effects Society Award for Outstanding Animated Character in an Animated Feature |
Cloth Simulation | Cloth simulation is a subset of simulation but specifically for things like clothes. In modern 3D computer animation, cloth simulation is becoming more and more advanced and widely used. | Houdini, Blender | Pixar's 'Coco' advanced the use of high fidelity clothes by designing new tools to combine cloth simulation with character animation.[15] |
2D computer graphics are still used for stylistic, low bandwidth, and faster real-time renderings.
Computer animation is essentially a digital successor to stop-motion techniques (but using 3D models) and to traditional animation techniques (using frame-by-frame animation of 2D illustrations).
For 2D figure animations, separate objects (illustrations) and separate transparent layers are used, with or without a virtual skeleton.
In 2D computer animation, moving objects are often referred to as "sprites." A sprite is an image that has a location associated with it. The location of the sprite is changed slightly, between each displayed frame, to make the sprite appear to move.[16] The following pseudocode makes a sprite move from left to right:
var int x := 0, y := screenHeight / 2;
while x < screenWidth
    drawBackground()
    drawSpriteAtXY(x, y)  // draw on top of the background
    x := x + 5            // move to the right
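The pseudocode above translates directly to runnable Python. Since the original implies no particular graphics library, the drawing calls are stubbed out here and the sprite positions are simply recorded, one per frame:

```python
SCREEN_WIDTH = 100
SCREEN_HEIGHT = 60

def draw_background():
    pass  # a real program would clear and redraw the backdrop here

def draw_sprite_at(x, y, frames):
    frames.append((x, y))  # a real program would blit the sprite image here

def animate():
    frames = []
    x, y = 0, SCREEN_HEIGHT // 2
    while x < SCREEN_WIDTH:
        draw_background()
        draw_sprite_at(x, y, frames)  # draw on top of the background
        x += 5                        # move 5 pixels right each frame
    return frames

positions = animate()
# The sprite is drawn at x = 0, 5, 10, ..., 95, always at mid-screen height.
```

Displayed in quick succession, these frames produce the illusion of the sprite gliding from left to right.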
Computer-assisted animation is usually classed as two-dimensional (2D) animation and is also known as digital ink and paint. Drawings are either hand-drawn (pencil to paper) or drawn interactively (on the computer) using various assisting tools, and are placed into specific software packages. Within the software package, the creator places drawings into different key frames, which fundamentally create an outline of the most important movements.[17] The computer then fills in the "in-between frames", a process commonly known as tweening.[18] Computer-assisted animation employs new technologies to produce content faster than is possible with traditional animation, while still retaining the stylistic elements of traditionally drawn characters or objects.[19]
Examples of films produced using computer-assisted animation are the rainbow sequence at the end of The Little Mermaid (the rest of the films listed use digital ink and paint in their entirety), The Rescuers Down Under, Beauty and the Beast, Aladdin, The Lion King, Pocahontas, The Hunchback of Notre Dame, Hercules, Mulan, Tarzan, We're Back! A Dinosaur's Story, Balto, Anastasia, Titan A.E., The Prince of Egypt, The Road to El Dorado, Spirit: Stallion of the Cimarron and Sinbad: Legend of the Seven Seas.
Early digital computer animation was developed at Bell Telephone Laboratories in the 1960s by Edward E. Zajac, Frank W. Sinden, Kenneth C. Knowlton, and A. Michael Noll.[22] Other digital animation was also practiced at the Lawrence Livermore National Laboratory.[23]
In 1967, a computer animation named "Hummingbird" was created by Charles Csuri and James Shaffer.[24] In 1968, a computer animation called "Kitty" was created with BESM-4 by Nikolai Konstantinov, depicting a cat moving around.[25] In 1971, a computer animation called "Metadata" was created, showing various shapes.[26]
An early step in the history of computer animation was the sequel to the 1973 film Westworld, a science-fiction film about a society in which robots live and work among humans.[27] The sequel, Futureworld (1976), used the 3D wire-frame imagery, which featured a computer-animated hand and face both created by University of Utah graduates Edwin Catmull and Fred Parke.[28] This imagery originally appeared in their student film A Computer Animated Hand, which they completed in 1972.[29][30]
Developments in CGI technologies are reported each year at SIGGRAPH,[31] an annual conference on computer graphics and interactive techniques that is attended by thousands of computer professionals each year.[32] Developers of computer games and 3D video cards strive to achieve the same visual quality on personal computers in real-time as is possible for CGI films and animation. With the rapid advancement of real-time rendering quality, artists began to use game engines to render non-interactive movies, which led to the art form Machinima.
CGI short films have been produced as independent animation since 1976.[33] Early examples of feature films incorporating CGI animation include the live-action films Star Trek II: The Wrath of Khan and Tron (both 1982),[34] and the Japanese anime film Golgo 13: The Professional (1983).[35] VeggieTales (1993) was the first American fully 3D computer-animated series sold direct-to-video; its success inspired other animated series, such as ReBoot (1994) and Transformers: Beast Wars (1996), to adopt a fully computer-generated style.
The first full-length computer-animated television series was ReBoot,[36] which debuted in September 1994; the series followed the adventures of characters who lived inside a computer.[37] The first feature-length computer-animated film is Toy Story (1995), which was made by Disney and Pixar:[38][39][40] following an adventure centered around anthropomorphic toys and their owners, this groundbreaking film was also the first of many fully computer-animated movies.[39]
The popularity of computer animation (especially in the field of special effects) skyrocketed during the modern era of U.S. animation.[41] Films like Avatar (2009) and The Jungle Book (2016) use CGI for the majority of the movie runtime, but still incorporate human actors into the mix.[42] Computer animation in this era has achieved photorealism, to the point that computer-animated films such as The Lion King (2019) are able to be marketed as if they were live-action.[43][44]
In most 3D computer animation systems, an animator creates a simplified representation of a character's anatomy, analogous to a skeleton or stick figure.[45] It is arranged into a default position known as a bind pose, or T-pose. The position of each segment of the skeletal model is defined by animation variables, or Avars for short. In human and animal characters, many parts of the skeletal model correspond to the actual bones, but skeletal animation is also used to animate other things, such as facial features (though other methods for facial animation exist).[46] The character "Woody" in Toy Story, for example, uses 712 Avars (212 in the face alone). The computer does not usually render the skeletal model directly (it is invisible), but it uses the skeletal model to compute the exact position and orientation of the character, which is eventually rendered into an image. Thus, by changing the values of Avars over time, the animator creates motion by making the character move from frame to frame.
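As a toy illustration of Avars driving a skeletal model, the snippet below poses a two-bone "arm" from two rotation variables. The Avar names and the arm itself are invented for this sketch; production rigs have hundreds of such variables:

```python
import math

def joint_position(origin, length, angle_deg):
    """Endpoint of a bone of the given length rotated by angle_deg about origin."""
    a = math.radians(angle_deg)
    return (origin[0] + length * math.cos(a),
            origin[1] + length * math.sin(a))

def pose_arm(avars):
    """Compute shoulder -> elbow -> wrist positions from two rotation Avars."""
    shoulder = (0.0, 0.0)
    elbow = joint_position(shoulder, 1.0, avars["shoulder_rot"])
    # The elbow angle is relative to the upper arm, so the rotations add up.
    wrist = joint_position(elbow, 1.0, avars["shoulder_rot"] + avars["elbow_rot"])
    return shoulder, elbow, wrist

# Changing Avar values over time produces motion, frame by frame:
frame_1 = pose_arm({"shoulder_rot": 0.0, "elbow_rot": 0.0})     # arm straight out
frame_24 = pose_arm({"shoulder_rot": 90.0, "elbow_rot": -45.0}) # arm raised, bent
```

The renderer never draws these joints; it uses the computed positions to deform the character's visible surface, exactly as the paragraph above describes.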
There are several methods for generating the Avar values to obtain realistic motion. Traditionally, animators manipulate the Avars directly.[47] Rather than set Avars for every frame, they usually set Avars at strategic points (frames) in time and let the computer interpolate or tween between them in a process called keyframing. Keyframing puts control in the hands of the animator and has roots in hand-drawn traditional animation.[48]
In contrast, a newer method called motion capture makes use of live action footage.[49] When computer animation is driven by motion capture, a real performer acts out the scene as if they were the character to be animated.[50] Their motion is recorded to a computer using video cameras and markers and that performance is then applied to the animated character.[51]
Each method has its advantages, and as of 2007, games and films were using either or both of these methods in productions. Keyframe animation can produce motions that would be difficult or impossible to act out, while motion capture can reproduce the subtleties of a particular actor.[52] For example, in the 2006 film Pirates of the Caribbean: Dead Man's Chest, Bill Nighy provided the performance for the character Davy Jones. Even though Nighy does not appear in the movie himself, the movie benefited from his performance by recording the nuances of his body language, posture, facial expressions, etc. Thus motion capture is appropriate in situations where believable, realistic behavior and action is required, but the types of characters required exceed what can be achieved through conventional costuming.
3D computer animation combines 3D models of objects and programmed or hand "keyframed" movement. These models are constructed out of geometrical vertices, faces, and edges in a 3D coordinate system. Objects are sculpted much like real clay or plaster, working from general forms to specific details with various sculpting tools. Unless a 3D model is intended to be a solid color, it must be painted with "textures" for realism. A bone/joint animation system is set up to deform the CGI model (e.g., to make a humanoid model walk). In a process known as rigging, the virtual marionette is given various controllers and handles for controlling movement.[53][54] Animation data can be created using motion capture, or keyframing by a human animator, or a combination of the two.[55]
3D models rigged for animation may contain thousands of control points — for example, "Woody" from Toy Story uses 700 specialized animation controllers. Rhythm and Hues Studios labored for two years to create Aslan in the movie The Chronicles of Narnia: The Lion, the Witch and the Wardrobe, which had about 1,851 controllers (742 in the face alone). In the 2004 film The Day After Tomorrow, designers had to design forces of extreme weather with the help of video references and accurate meteorological facts. For the 2005 remake of King Kong, actor Andy Serkis was used to help designers pinpoint the gorilla's prime location in the shots and used his expressions to model "human" characteristics onto the creature. Serkis had earlier provided the voice and performance for Gollum in J. R. R. Tolkien's The Lord of the Rings trilogy.
Computer animation can be created with a computer and animation software. Some impressive animation can be achieved even with basic programs; however, the rendering can require much time on an ordinary home computer.[56] Professional animators of movies, television, and video games can create photorealistic animation with high detail; this level of quality for movie animation would take hundreds of years to create on a home computer. Instead, many powerful workstation computers are used;[57] Silicon Graphics said in 1989 that the animation industry's needs typically drove graphical innovations in workstations.[58] Graphics workstations use two to four processors, are far more powerful than a typical home computer, and are specialized for rendering. Many workstations are networked together as a "render farm" to effectively act as a giant computer,[59] allowing a computer-animated movie to be completed in about one to five years (a process not composed solely of rendering). A workstation typically costs $2,000 to $16,000, with the more expensive stations able to render much faster due to their more technologically advanced hardware. Professionals also use digital movie cameras, motion/performance capture, bluescreens, film editing software, props, and other tools used for movie animation. Programs like Blender allow people who cannot afford expensive animation and rendering software to work in a similar manner to those who use commercial-grade equipment.[60]
The realistic modeling of human facial features is both one of the most challenging and most sought-after elements in computer-generated imagery. Computer facial animation is a highly complex field where models typically include a very large number of animation variables.[61] Historically, the first SIGGRAPH tutorials on the state of the art in facial animation, held in 1989 and 1990, proved to be a turning point in the field by bringing together and consolidating multiple research elements, and they sparked interest among a number of researchers.[62]
The Facial Action Coding System (with 46 "action units" such as "lip bite" or "squint"), which had been developed in 1976, became a popular basis for many systems.[63] As early as 2001, MPEG-4 included 68 Face Animation Parameters (FAPs) for lips, jaws, and other features. The field has since made significant progress, and the use of facial microexpressions has increased.[63][64]
In some cases, an affective space such as the PAD emotional state model can be used to assign specific emotions to the faces of avatars.[65] In this approach, the PAD model serves as a high-level emotional space, and the lower-level space is the MPEG-4 Facial Animation Parameters (FAPs). A mid-level Partial Expression Parameters (PEP) space is then used in a two-level structure: the PAD-PEP mapping and the PEP-FAP translation model.[66]
Realism in computer animation can mean making each frame look photorealistic, in the sense that the scene is rendered to resemble a photograph, or making the characters' animation believable and lifelike.[67] Computer animation can also be realistic with or without photorealistic rendering.[68]
One trend in computer animation has been the effort to create human characters that look and move with the highest degree of realism. A possible outcome when attempting to make pleasing, realistic human characters is the uncanny valley, the concept where the human audience (up to a point) tends to have an increasingly negative, emotional response as a human replica looks and acts more and more human. Films that have attempted photorealistic human characters, such as The Polar Express,[69][70][71] Beowulf,[72] and A Christmas Carol[73][74] have been criticized as "disconcerting" and "creepy".
The goal of computer animation is not always to emulate live action as closely as possible, so many animated films instead feature characters who are anthropomorphic animals, legendary creatures and characters, superheroes, or otherwise have non-realistic, cartoon-like proportions.[75] Computer animation can also be tailored to mimic or substitute for other kinds of animation, like traditional stop-motion animation (as shown in Flushed Away or The Peanuts Movie). Some of the long-standing basic principles of animation, like squash and stretch, call for movement that is not strictly realistic, and such principles still see widespread application in computer animation.[76]
The popularity of websites that allow members to upload their own movies for others to view has created a growing community of independent and amateur computer animators.[77] With utilities and programs often included free with modern operating systems, many users can make their own animated movies and shorts. Several free and open-source animation software applications exist as well. The ease at which these animations can be distributed has attracted professional animation talent also. Companies such as PowToon and Vyond attempt to bridge the gap by giving amateurs access to professional animations as clip art.
The oldest (most backward compatible) web-based animations are in the animated GIF format, which can be uploaded and seen on the web easily.[78] However, the raster graphics format of GIF animations slows the download and frame rate, especially with larger screen sizes. The growing demand for higher quality web-based animations was met by a vector graphics alternative that relied on the use of a plugin. For decades, Flash animations were a common format, until the web development community abandoned support for the Flash Player plugin. Web browsers on mobile devices and mobile operating systems never fully supported the Flash plugin.
By this time, internet bandwidth and download speeds increased, making raster graphic animations more convenient. Some of the more complex vector graphic animations had a slower frame rate due to complex rendering compared to some of the raster graphic alternatives. Many of the GIF and Flash animations were already converted to digital video formats, which were compatible with mobile devices and reduced file sizes via video compression technology. However, compatibility was still problematic as some of the video formats such as Apple's QuickTime and Microsoft Silverlight required plugins. YouTube was also relying on the Flash plugin to deliver digital video in the Flash Video format.
The latest alternatives are HTML5 compatible animations. Technologies such as JavaScript and CSS animations made sequencing the movement of images in HTML5 web pages more convenient. SVG animations offered a vector graphic alternative to the original Flash graphic format, SmartSketch. YouTube offers an HTML5 alternative for digital video. APNG (Animated PNG) offered a raster graphic alternative to animated GIF files that enables multi-level transparency not available in GIFs.
Computer animation uses different techniques to produce animations. Most frequently, sophisticated mathematics is used to manipulate complex three-dimensional polygons, apply "textures", lighting, and other effects to the polygons, and finally render the complete image. A sophisticated graphical user interface may be used to create the animation and arrange its choreography. Another technique, called constructive solid geometry, defines objects by conducting Boolean operations on regular shapes, and has the advantage that animations may be accurately produced at any resolution.
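The constructive-solid-geometry idea can be sketched with implicit shapes: each object is a function reporting whether a point lies inside it, and the Boolean operations combine such functions. The shapes and names below are illustrative only:

```python
def sphere(cx, cy, cz, r):
    """A sphere as an implicit inside-test function."""
    return lambda x, y, z: (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r ** 2

def union(a, b):
    return lambda x, y, z: a(x, y, z) or b(x, y, z)

def intersection(a, b):
    return lambda x, y, z: a(x, y, z) and b(x, y, z)

def difference(a, b):
    return lambda x, y, z: a(x, y, z) and not b(x, y, z)

# A lens shape: the overlap of two unit spheres whose centers are 1 apart.
lens = intersection(sphere(0, 0, 0, 1.0), sphere(1, 0, 0, 1.0))
```

Because the shapes are defined by functions rather than fixed meshes, the combined object can be sampled, and therefore rendered, at any resolution, which is the advantage noted above.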
Some notable producers of computer-animated feature films include:
Multiple high quality text-to-video models, AI systems that can generate video clips from prompted text, were released in 2022.