What's this about?
A real camera lens focuses light on the film based on distance. Objects closer or farther than the focused distance are out of focus; this quality is referred to as "depth of field". A "pinhole" camera has infinite depth of field, as do most rendering systems. However, there is a depth-related feature that software can exercise and real cameras cannot: hither and yon clipping.
The View Volume, or "Frustum"
The figure above shows a typical 3-point perspective set-up (and resulting image). The eye-point is represented with the small sphere, and the view-vector is drawn as an arrow reaching into the model. A pyramid-shaped volume (the frustum of vision) shows the side-to-side and top-to-bottom boundaries of the scene; objects outside this volume are not seen. They are said to be "clipped" by the sides of the frustum. So far, this corresponds with the way a real camera works: a zoom lens just changes the angular size (or width) of that frustum, making objects bigger or smaller because more or less of the world is squeezed into a fixed-size image. Of course, we always see all data which is "in front of" the camera.
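The link between lens choice and frustum width can be made concrete with the standard angle-of-view formula. The sketch below (hypothetical values; the function name is mine) computes the horizontal angle of the frustum from the film width and the lens focal length:

```python
import math

def field_of_view(film_width: float, focal_length: float) -> float:
    """Horizontal angle of view (in degrees) of the frustum for a given lens.

    A longer focal length narrows the frustum (telephoto); a shorter
    one widens it (wide-angle), squeezing more of the world into the
    same fixed-size image.
    """
    return math.degrees(2 * math.atan(film_width / (2 * focal_length)))

# 35 mm film is 36 mm wide; compare a 50 mm "normal" lens to a 200 mm telephoto:
print(round(field_of_view(36, 50), 1))   # ~39.6 degrees
print(round(field_of_view(36, 200), 1))  # ~10.3 degrees
```

Doubling the focal length roughly halves the frustum angle, which is why zooming in makes objects larger without moving the camera.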
Notice that the frustum has a "top" (near the eye) and a "bottom" (far away). By using these as a "front" (hither) and "back" (yon) clipping plane, respectively, the rendering program can "clip" (not draw) objects between the eye and the front plane, or objects beyond the rear plane. Using this, we can "slice" our building, turning a one-point perspective into a section-perspective.
The camera analogy says ALL screen images arise from locating a camera in the model (setting the station point), specifying a focal point ("center of view" or "look-at point"), and selecting a lens for the camera. The lens determines whether we get an axonometric/elevation or a perspective. This next bit is a little tricky, and stretches the analogy.
First, note that the "cone of vision" relates to whether the lens is telephoto or normal/wide-angle. A "telephoto" lens sees a small part of the model, so the rays of light are mostly parallel, whereas a "normal" lens, with a wider angle of view, captures the diverging rays of a perspective projection. Simply switching lenses would mean a big change in image size (zooming in on the center, or out). BUT, if we imagine the picture plane located at the focal point, and use the normal-lens cone-of-vision to find the edges of the rectangle formed by intersecting the cone of vision with the picture plane, we can construct a parallel-projection box with the same image size. This is what most software does. To understand this better, examine the two screen captures from form-Z, which has a view-editing command ("Edit Cone-of-vision") that makes this particularly clear.
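The rectangle that the cone of vision cuts on the picture plane follows from basic trigonometry. This sketch (hypothetical function name and values; the 4:3 aspect ratio is an assumption) computes its size, which would then serve as the cross-section of the matching parallel-projection box:

```python
import math

def image_rect(fov_deg: float, distance: float, aspect: float = 4 / 3):
    """Width and height of the rectangle formed by intersecting a cone
    of vision (full angle fov_deg) with a picture plane placed
    `distance` units from the eye."""
    width = 2 * distance * math.tan(math.radians(fov_deg) / 2)
    return width, width / aspect

# Hypothetical set-up: a 40-degree cone of vision, picture plane at the
# focal point 30 units away. A parallel projection whose box has this
# same cross-section preserves the image size when lenses are "switched".
w, h = image_rect(40, 30)
print(round(w, 2), round(h, 2))  # 21.84 16.38
```

The key point is that the box's size depends on both the view angle and the distance to the focal point, which is why the software anchors the picture plane there before swapping projections.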
Last updated: April, 2014