The Problem With Motion
Motion is central to animation, but our personal experiences with motion have given us a number of expectations about how it works.
The simplest way to approach the question of "what happened between time A and time B?" is to do a linear interpolation between the spatial points, simply dividing the interval into some number of frames. Such a straight linear interpolation between two viewing conditions generates a series of view-vectors as shown at left.
So far, so good, but now imagine that the next key frame is as shown at right (Frame 3). Linear interpolation to this view produces the next set of images. The problem arises at the transition from path a to path b. At this point the camera does an "instantaneous" change in direction. This instantaneous change is quite perceptible since we "know" that someone carrying a camera couldn't make that change in direction quite so quickly.
[In mathematics-ese we would say that position was continuous (meaning we didn't instantly move from one place to another), but that the first derivative with respect to time (i.e., velocity) was not; as a consequence, the second derivative (acceleration) would be infinite at the transition. Our visual-kinesthetic analysis of the animation is actually able to detect this, and finds it jarring.]
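The velocity discontinuity described above is easy to see numerically. Here is a minimal sketch (the keyframe positions and frame count are made up for illustration): a camera position is linearly interpolated through three keyframes, and the per-frame velocity is constant within each segment but jumps instantaneously at the middle keyframe.

```python
def lerp(p, q, t):
    """Linear interpolation between points p and q at parameter t in [0, 1]."""
    return tuple(a + (b - a) * t for a, b in zip(p, q))

# Three example keyframes forming a right-angle turn (positions are invented).
key_a, key_b, key_c = (0.0, 0.0), (10.0, 0.0), (10.0, 10.0)
frames_per_segment = 5

path = []
for seg_start, seg_end in [(key_a, key_b), (key_b, key_c)]:
    for i in range(frames_per_segment):
        path.append(lerp(seg_start, seg_end, i / frames_per_segment))
path.append(key_c)

# Per-frame velocity: constant within each segment, but it changes
# instantaneously (infinite acceleration) at the middle keyframe.
velocities = [tuple(b - a for a, b in zip(p, q))
              for p, q in zip(path, path[1:])]
print(velocities[4], velocities[5])  # (2.0, 0.0) (0.0, 2.0): an abrupt turn
```

The printed pair shows the camera moving steadily along x, then instantly moving along y instead, which is exactly the jarring transition from path a to path b.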
The solution to this problem is to smooth or spline the camera path (and the interpolation of focal point changes and cone-of-vision changes as well) so that they are more continuous. The primary difficulty with this is that the actual path will not exactly follow the linear path, so predicting the "camera" position gets a little difficult. As a consequence, when animating a fly-through of a building, we might accidentally pass the camera through the corner of a hallway rather than making a fairly sharp turn, as the tweening process smooths our turn at the corner. Adding more keyframes, or control points along the spline path, usually brings this problem under control.
The other condition under which linear interpolation fails is, of course, subject motion. Motion is not only continuous, its derivative is continuous. That is, objects having mass (weight) don't start or stop moving instantly. Nor do they change direction instantly. Objects don't simply spring into motion, which is what linear interpolation implies (the straight line at left, giving the first, evenly spaced, vertical sequence of tic marks); they accelerate into motion, and then decelerate to a stop (as shown by the curved line, and the second, unevenly spaced, vertical tic sequence). This issue is addressed by systems which provide a means of easing into and out of key frames.
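Easing can be sketched with the standard "smoothstep" cubic (the formula is a common one, not taken from the text): sampling it at even time steps reproduces the evenly versus unevenly spaced tic marks described above.

```python
def linear(t):
    """Linear timing: the object 'springs' into motion at full speed."""
    return t

def smoothstep(t):
    """Cubic ease: zero velocity at t = 0 and t = 1, fastest in the middle."""
    return t * t * (3 - 2 * t)

steps = [i / 10 for i in range(11)]
linear_marks = [round(linear(t), 3) for t in steps]
eased_marks = [round(smoothstep(t), 3) for t in steps]

print(linear_marks)  # evenly spaced tic marks
print(eased_marks)   # bunched at the ends: accelerate in, decelerate out
```

With easing, the first step covers only 0.028 of the distance instead of 0.1, so the object visibly accelerates into motion and decelerates to a stop.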
The Other Problem with Motion
Subject motion obviously means "one part of the model moves relative to the rest". If the whole model moves, there's no difference between subject motion and camera motion (except possibly for lighting effects). The question becomes one of describing these relative motions. To make a model component move in a dumb geometry editor, we must take responsibility for adjusting the positions of its parts, moving each piece between frames. What we want, of course, is to be able to create a smarter representation of the system, using higher level concepts than "position". We would like to "link" parts of the model, creating a "hinge" at the connection point, or a "pivot", etc. Mechanical systems (doors, car engines, etc.) have fairly clear mechanical connectivity in the form of hinges, axles, pivots, etc., and are therefore fairly easy to animate through a series of constraints applied to the geometry in conjunction with a driving force of some sort. In such a system, after we have defined the geometry and the nature of object interconnectedness, we could animate a door opening by setting the starting and ending angle of "Hinge #3" plus the number of frames.
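The door example can be sketched as follows (the `Hinge` class and its numbers are hypothetical, invented for illustration, not any real package's API): the animator specifies only the start angle, end angle, and frame count, and the constraint turns that into per-frame geometry updates.

```python
import math

class Hinge:
    """A hypothetical hinge constraint: rotates geometry about a pivot."""
    def __init__(self, pivot):
        self.pivot = pivot  # (x, y) point the door rotates about

    def apply(self, point, angle_deg):
        """Return the point rotated about the pivot by angle_deg."""
        a = math.radians(angle_deg)
        px, py = self.pivot
        x, y = point[0] - px, point[1] - py
        return (px + x * math.cos(a) - y * math.sin(a),
                py + x * math.sin(a) + y * math.cos(a))

hinge = Hinge(pivot=(0.0, 0.0))
door_edge = (1.0, 0.0)              # free edge of the door, 1 unit from hinge
start, end, frames = 0.0, 90.0, 30  # "Hinge #3: 0 to 90 degrees in 30 frames"

for frame in range(frames + 1):
    angle = start + (end - start) * frame / frames
    x, y = hinge.apply(door_edge, angle)  # the editor would redraw here

print(round(x, 3), round(y, 3))  # final frame: door edge swung to (0, 1)
```

The high-level instruction ("Hinge #3, 0 to 90 degrees, 30 frames") replaces thirty hand-adjusted positions.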
Even very complex motion systems, such as the human body, can be described in this way. As the old song says "the ankle bone's connected to the shin bone, the shin bone's connected to the knee bone, the knee bone's connected to the thigh bone, the thigh bone's connected to the hip bone...". Not only that, but each joint can only make certain motions (knees bend front-to-back, not side-to-side, and even then they don't bend as far forward as they do back). By progressively linking the foot to the ankle, to the shin, to the knee, to the thigh, to the hip, we create a hierarchical model: rotate the hip joint and the whole leg moves.
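The hierarchy can be sketched as a small forward-kinematics chain (the segment lengths and angles are invented for illustration): each child link inherits its parent's rotation, so rotating only the hip carries the knee and ankle with it.

```python
import math

def chain_positions(joint_angles_deg, segment_lengths):
    """Walk down the hierarchy, accumulating each parent's rotation."""
    x, y, total = 0.0, 0.0, 0.0   # hip at the origin
    points = [(x, y)]
    for angle, length in zip(joint_angles_deg, segment_lengths):
        total += math.radians(angle)    # child inherits parent rotation
        x += length * math.sin(total)
        y -= length * math.cos(total)   # leg hangs downward
        points.append((x, y))
    return points  # [hip, knee, ankle]

thigh, shin = 0.45, 0.40                         # illustrative lengths
straight = chain_positions([0, 0], [thigh, shin])
swung = chain_positions([30, 0], [thigh, shin])  # rotate only the hip joint

print(straight[-1])  # ankle directly below the hip
print(swung[-1])     # whole leg (knee and ankle) swings forward together
```

Changing the single hip angle moved every point below it in the hierarchy, which is exactly what makes this representation convenient to animate.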
This greatly simplifies animating the leg, since we need only specify a sequence of angular displacements for each limb. What should these motions be? The problem is that "walking" remains a very complex motion, involving sequential patterns of angular rotation of hip, knee, and ankle, as well as minor motions of the hips, all keeping the weight over the feet, adapting to sloped surfaces, etc. While we can control the individual settings of knee, ankle, etc., we don't have a model of how they should be related. In fact, the best "animated" animals (the "tin can lady" and various metallic cats, etc.) actually digitize their motion from the real thing (i.e., animators film a real person making the motion, with dots marked on their bodies at the critical points, then they digitize the motions from the film, and lastly, they apply the "real" (digitized) motion to their synthetic model), which gives it a very "lifelike" motion, as one would expect.
Much recent research has gone into making models for which the key frame (or "scripting") instructions are more along the lines of "Zimbo walks across to the left" instead of a zillion ankle/knee/hip rotations. Obviously, such a system is not a simple "geometry editor". Next time you see an animated figure, watch how its feet touch the ground. Good animations make it look like contact is really made, poor ones look like the foot landed on Jell-o or never touched at all. Again, view this movie for a good example of recent research into this problem.
Last updated: April, 2014