TAPESTRY: The Art of Representation and Abstraction
The Synthetic Camera
What's this about?
The object of the exercise is to generate a rendering of a 3D scene, just as you might take a photograph. The DATA within the computer program take the place of objects in the real world. What, then, takes the place of our EYE or CAMERA? What about the camera's lens? And the film?
The rendering process, or graphics pipeline, takes the place of the optics and sensory nerves of the eye, or of the lens, shutter, and film of the camera. However, as with a real camera, the image produced by the graphics pipeline depends on several factors (sketched in code after this list), including:
- The Position of the camera in space,
- The Orientation of the camera (the direction in which it is facing),
- The Projection type (analogous to the lens of the camera), and
- The kind of 'film' that stores the image (in this case, a product of the rendering process).
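Taken together, these four factors are everything a renderer needs to know about its "camera." A minimal sketch in Python (the names here are illustrative, not from any particular rendering library):

```python
from dataclasses import dataclass
from typing import Literal, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class SyntheticCamera:
    position: Vec3 = (0.0, 0.0, 10.0)    # where the camera sits in space
    focal_point: Vec3 = (0.0, 0.0, 0.0)  # what it looks at (its orientation)
    projection: Literal["perspective", "parallel"] = "perspective"  # the "lens"
    film: Tuple[int, int] = (640, 480)   # the "film": a pixel grid the renderer fills in
```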
"View Vector" (aka "Line of Sight") ≈ position + orientation
Imagine a vector from the eye or camera (the "eye point") to a point in the scene (the "focal point" or "center of view"). With one assumption (that the camera's "up" direction aligns with the world's "up" direction), this vector completely specifies the Position and Orientation for a camera. The focal point will turn out to be in the center of the image, and the view-vector's relationship to the model geometry determines whether the program produces a 1-point, 2-point or 3-point perspective (see below).
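This is the classic "look-at" construction. Here is a sketch of how the eye point, focal point, and world-up assumption pin down a complete orientation (plain Python with illustrative helper functions, not any particular library's API):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def camera_basis(eye, focal_point, world_up=(0.0, 1.0, 0.0)):
    # The view vector runs from the eye point to the focal point ...
    forward = normalize(tuple(f - e for f, e in zip(focal_point, eye)))
    # ... and the world-up assumption resolves the remaining roll about
    # that vector, completing the camera's orientation. (This breaks down
    # when the view vector points straight up or down, which is why many
    # systems ask for an explicit "up" hint.)
    right = normalize(cross(forward, world_up))
    up = cross(right, forward)
    return forward, right, up

# Eye at (0, 0, 10) looking at the origin: forward is (0, 0, -1),
# right is (1, 0, 0), and up matches the world's (0, 1, 0).
print(camera_basis((0.0, 0.0, 10.0), (0.0, 0.0, 0.0)))
```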
"Projection" ≈ Lens Length
A camera, mounted on a rigid tripod, pointed at a fixed scene, can still record radically different images if we change the lens attached to the camera. The lens changes the way in which light is focused on the film. That is, it changes the way light is projected. In the same way, a given view-vector can be used to produce different images of your model geometry through changes to the projection used. The common projections are perspective and parallel.
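To make the contrast concrete, here is a sketch of the two projections applied to a point already expressed in camera coordinates; the focal_length parameter is hypothetical shorthand for the lens (longer values narrow the field of view, much as a telephoto lens does):

```python
def project_perspective(x, y, z, focal_length=1.0):
    # Divide by depth: farther points land closer to the image
    # center, producing foreshortening.
    return (focal_length * x / z, focal_length * y / z)

def project_parallel(x, y, z):
    # Ignore depth entirely: parallel lines in the model stay
    # parallel in the image.
    return (x, y)

# The same point at two depths, one twice as far from the eye:
print(project_perspective(1.0, 1.0, 2.0))  # (0.5, 0.5)
print(project_perspective(1.0, 1.0, 4.0))  # (0.25, 0.25): smaller with distance
print(project_parallel(1.0, 1.0, 2.0))     # (1.0, 1.0)
print(project_parallel(1.0, 1.0, 4.0))     # (1.0, 1.0): size unchanged
```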
Cameras v. Synthetic Cameras v. Camera Analogies
Photographers may have noticed that several traditional concerns are missing from this discussion ...
- The depth of field (with only a few exceptions, renderings have infinite depth of field; everything is in focus). However, check out the "frustum" discussion.
- Shutter speed and f-stop. Since the digital scene doesn't change while the rendering is in progress, "motion blur" has to be added back in where needed (which is rare), and light levels are controlled almost exclusively through the number and intensity of the light sources.
- Lens flare and chromatic aberration simply don't arise.
These issues arise from the chemistry of film and the physics of lens optics. A few true "synthetic camera" rendering systems do take such things into account, or simulate them by lightening or darkening the image. The discussion here might be better understood as a "camera analogy," but that feels awkward.
Last updated: April 2014