What's it all about?
Recall that there are two very different ways to characterize how light interacts with surfaces in the built environment: diffuse and specular, as illustrated below.
The Diffuse Assumption
Everything that we've looked at up to this point has been built around the assumption of a perfectly diffuse reflection. In this scheme, incident light is reflected uniformly in all directions, regardless of the angle of incidence.
The Specular Assumption
In contrast, most science courses describe light in terms of a perfectly specular reflection, as illustrated in the diagram at left: an incident ray arrives at the surface, is partially absorbed (not shown), partially reflected as a reflected ray, and (possibly) partially transmitted as a refracted ray.
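The geometry of the specular case is compact enough to write down directly. Here is a minimal sketch of the reflected and refracted directions; the vectors are plain 3-tuples and the helper names are my own, not from any library:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def scale(v, s):
    return tuple(x * s for x in v)

def reflect(incident, normal):
    """Mirror the incident direction about the surface normal.
    Both vectors are assumed to be unit length, with the normal
    facing back toward the incoming ray."""
    return sub(incident, scale(normal, 2.0 * dot(incident, normal)))

def refract(incident, normal, n1, n2):
    """Bend the incident direction according to Snell's law
    (n1 sin t1 = n2 sin t2). Returns None on total internal reflection."""
    ratio = n1 / n2
    cos_i = -dot(incident, normal)
    sin2_t = ratio * ratio * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection: no refracted ray
    cos_t = math.sqrt(1.0 - sin2_t)
    return sub(scale(incident, ratio),
               scale(normal, cos_t - ratio * cos_i))
```

A ray arriving straight down onto a horizontal surface, for instance, reflects straight back up, and passes through unbent when the two indices of refraction are equal.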
Each of the algorithms that we have looked at (flat, Gouraud, Phong, etc.) is built on the assumption that light reflects diffusely off of the surfaces of the model, and that only light that moves directly from a surface to the eye (or camera) is of interest.
The Big Idea
What if we built a rendering algorithm around the specular assumption, rather than the diffuse one?
Doing this would make it possible to model shiny objects, mirrors, waxed floors, reflective ponds, etc. Of course, it's only interesting if we actually pay attention to light that has reflected between objects in the model (how else would we see the building reflected in the pond?). So, we must follow light as it bounces around within the model.
And, of course, we still need to consider the light arriving at each surface from the various light sources. Lambert's law (cosine shading) still applies, because it describes the intensity of light falling on a surface regardless of how that light eventually reflects.
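Lambert's cosine law itself is nearly a one-liner. A sketch, with illustrative names (both vectors assumed unit length):

```python
def lambert_intensity(normal, to_light, light_intensity=1.0):
    """Diffuse intensity falling on a surface: proportional to cos(theta),
    where theta is the angle between the surface normal and the
    direction from the surface point toward the light."""
    cos_theta = sum(n * l for n, l in zip(normal, to_light))
    # A surface facing away from the light receives nothing, not
    # "negative" light, hence the clamp at zero.
    return light_intensity * max(0.0, cos_theta)
```

A light directly overhead gives full intensity; one 60 degrees off the normal gives half; one behind the surface gives zero.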
The Problem
Unfortunately, a rendering algorithm built on the idea of actually following light rays from their sources to the eye quickly runs aground. Much of the light leaving each source never reaches our eye. We simply cannot trace every ray of light from each fixture until it, or some tiny fraction of it, reaches the eye of the viewer: computers don't handle effectively infinite numbers of rays. It would be easier to build the room and photograph it than to wait for the calculation to finish.
The Trick!
Remember that what we are ultimately looking at is a raster image, with a finite number of pixels. Why not think of the screen as a clear piece of glass between us and the 3D model? The question is then reformulated as figuring out what color each pixel of the image should be. Framed this way, it suggests a powerful idea:
Work backwards following the ray of light that's responsible for the color of a particular pixel, tracing the ray from the eye, through the screen, into the model, to see where it came from.
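Turning a pixel into such a backward ray can be sketched as follows, assuming a simple pinhole camera with the eye at the origin looking down the -z axis; all names and parameters here are illustrative:

```python
import math

def primary_ray(px, py, width, height, fov_degrees=60.0):
    """Ray from an eye at the origin, through the center of pixel
    (px, py) on a screen `width` x `height` pixels in size.
    Returns (origin, direction); direction is left unnormalized."""
    aspect = width / height
    half = math.tan(math.radians(fov_degrees) / 2.0)
    # Map the pixel center into [-1, 1] screen coordinates
    # (y is flipped because raster rows grow downward).
    x = (2.0 * (px + 0.5) / width - 1.0) * aspect * half
    y = (1.0 - 2.0 * (py + 0.5) / height) * half
    return (0.0, 0.0, 0.0), (x, y, -1.0)
```

The center pixel of the image yields a ray pointing straight down the viewing axis; pixels near the edges yield rays fanned outward by the field of view.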
A Little More Detail
The diagram illustrates a partially complete ray tracing. We see on the (oblique) screen a partial image of the cube data object. The turquoise pixel is just about to be rendered. Let's "listen in" on what happens next...
- The program begins by "shooting" a ray from the hypothetical eye of the observer, through the pixel in question, and into "the data".
- The ray is tested against all the polygons of the model to see if it intersects any. If it does not, the pixel is colored the background color.
- If the ray intersects one or more polygons, the one nearest the screen is selected. The ray's angle of incidence is calculated, and the surface's index of refraction is looked up.
- Now, TWO rays are created leaving the point of intersection. One is the reflected ray and the other is the refracted ray.
- If we are calculating shadows, a ray is also shot toward each light source and tested against the model to see whether any object blocks the light, casting a shadow on the point.
- For EACH active ray, return to the second step and start testing again. Stop the process after a certain number of loops, or when a ray strikes the background.
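The loop described above is naturally recursive. The steps can be sketched as a skeleton like the one below; `scene.nearest_hit` and the attributes on `hit` are hypothetical placeholders standing in for the intersection-testing and material machinery, not a real API:

```python
MAX_DEPTH = 5                    # stop after this many bounces
BACKGROUND = (0.1, 0.1, 0.1)     # color for rays that escape the model

def mix(a, b, t):
    """Linear blend of two RGB triples."""
    return tuple(x * (1.0 - t) + y * t for x, y in zip(a, b))

def trace(ray, scene, depth=0):
    """Follow one ray backwards through the scene and return a color.

    `scene.nearest_hit(ray)` is assumed to return the intersection
    nearest the screen, or None; `hit` is assumed to carry the point,
    normal, and material data needed to spawn the secondary rays."""
    if depth >= MAX_DEPTH:
        return BACKGROUND
    hit = scene.nearest_hit(ray)
    if hit is None:                      # ray struck nothing: background
        return BACKGROUND
    color = hit.direct_light(scene)      # Lambert term plus shadow rays
    if hit.reflectivity > 0.0:           # follow the reflected ray
        color = mix(color, trace(hit.reflected_ray(), scene, depth + 1),
                    hit.reflectivity)
    if hit.transparency > 0.0:           # follow the refracted ray
        color = mix(color, trace(hit.refracted_ray(), scene, depth + 1),
                    hit.transparency)
    return color
```

Each recursive call is one trip back to the second step of the list above, and the depth counter is the "certain number of loops" that keeps the recursion from running forever.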
Another reason the process takes so long is that there is little "coherence" to it. That is, the algorithm can't reuse much information from adjacent pixels; it computes each pixel essentially independently.
Much research has been done on ways to improve ray tracing speed, with much of it concentrated on the ray-polygon intersection test, where most of the computation time is spent. Various strategies that assign polygons to subregions of space make it possible to eliminate whole groups of polygons at once, based on the general direction in which the ray is traveling.
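One common building block for such spatial-subdivision strategies is a cheap ray-versus-axis-aligned-box test (the "slab" method): if the ray misses a region's bounding box, every polygon inside that region can be skipped without individual tests. A sketch, with illustrative names:

```python
def ray_hits_box(origin, direction, box_min, box_max):
    """Slab test: intersect the ray with the box's three pairs of
    axis-aligned planes and check that the resulting parameter
    intervals overlap in front of the ray's origin."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:               # ray parallel to this slab
            if o < lo or o > hi:
                return False             # and outside it: a clean miss
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:               # intervals no longer overlap
            return False
    return True
```

A ray aimed at the box reports a hit; one aimed past it, or pointing away from it, is rejected in a handful of comparisons, no matter how many polygons the box contains.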
Another unfortunate quality of ray tracing is that NONE of the calculation time helps you when you change the viewpoint slightly and start rendering a new frame, as in an animation. Again, you start over.
Last updated: April, 2014