OpenGL Rendering Basics
Excerpts from: OpenGL Programming Guide, 2nd ed.
Modified for C++.NET 2005
The amount of processing required to render three-dimensional computer
images is substantial. The OpenGL library of tools handles most
of the complexity for you. However, it is necessary to have some
understanding of what is being done in order to use these tools
properly. As we study OpenGL we will become familiar with the
different levels of operation shown in the sketch of the rendering
pipeline below:
For now it is sufficient to note that
we will describe graphical objects as vertices (locations of points in
a model space), and images will be rendered as two-dimensional arrays of
pixels (rasterization).
Sometimes we will call predefined collections of vertices (e.g., the
GLUT sphere, cone, or torus), and sometimes we will define objects
manually by entering the vertices of triangles, rectangles (quads), or
polygons. These object descriptions can be rendered immediately,
or they can be stored in a display
list and called as needed. The surface characteristics (textures) of objects can be based
on simple shading models, predefined material properties, or
applied to surfaces from reference images. Finally, we can view
the image as it is rendered, or we can display one image while another
is being drawn and then swap the buffers. This is called double buffering and is used in
graphics animation.
//
// openGL_12.cpp
//
#include "stdafx.h"
#include <GL/glut.h>

void display(void)
{
    /* clear all pixels */
    glClear(GL_COLOR_BUFFER_BIT);

    /* draw white polygon (rectangle) with corners at
     * (0.25, 0.25, 0.0) and (0.75, 0.75, 0.0) */
    glColor3f(1.0, 1.0, 1.0);
    glBegin(GL_POLYGON);
        glVertex3f(0.25, 0.25, 0.0);
        glVertex3f(0.75, 0.25, 0.0);
        glVertex3f(0.75, 0.75, 0.0);
        glVertex3f(0.25, 0.75, 0.0);
    glEnd();

    /* don't wait!
     * start processing buffered OpenGL routines */
    glFlush();
}

void init(void)
{
    /* select clearing (background) color */
    glClearColor(0.0, 0.0, 0.0, 0.0);

    /* initialize viewing values */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
}

/*
 * Declare initial window size, position, and display mode
 * (single buffer and RGBA). Open window with "hello"
 * in its title bar. Call initialization routines.
 * Register callback function to display graphics.
 * Enter main loop and process events.
 */
int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(250, 250);
    glutInitWindowPosition(100, 100);
    glutCreateWindow("hello");
    init();
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
Clearing the Window
Drawing on a computer screen is
different from drawing on paper in that the paper starts out white, and
all you have to do is draw the picture. On a computer, the memory
holding the picture is usually filled with the last picture you drew,
so you typically need to clear it to some background color before you
start to draw the new scene. The color you use for the background
depends on the application. For a word processor, you might clear to
white (the color of the paper) before you begin to draw the text. If
you're drawing a view from a spaceship, you clear to the black of space
before beginning to draw the stars, planets, and alien spaceships.
Sometimes you might not need to clear the screen at all; for example,
if the image is the inside of a room, the entire graphics window gets
covered as you draw all the walls.
You must also know how the colors of pixels are stored in the graphics
hardware, known as bitplanes. There are two methods of storage: either
the red, green, blue, and alpha (RGBA) values of a pixel are stored
directly in the bitplanes, or a single index value that
references a color lookup table is stored. RGBA color-display mode is
more commonly used, so most of the examples in this book use it. As an
example, these lines of code clear an RGBA mode window to black:
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT);
The first line sets the clearing color to black, and the next command
clears the entire window to the current clearing color. The single
parameter to glClear( ) indicates which buffers are to be cleared. In
this case, the program clears only the color buffer, where the image
displayed on the screen is kept. Typically, you set the clearing color
once, early in your application, and then you clear the buffers as
often as necessary. OpenGL keeps track of the current clearing color as
a state variable rather than requiring you to specify it each time a
buffer is cleared.
To clear both the color buffer and the depth buffer, you would use the
following sequence of commands:
glClearColor(0.0, 0.0, 0.0, 0.0);
glClearDepth(1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
In this case, the call to glClearColor( ) is the same as before, the
glClearDepth( ) command specifies the value to which every pixel of the
depth buffer is to be set, and the parameter to the glClear( ) command
now consists of the bitwise OR of all the buffers to be cleared.
OpenGL allows you to specify multiple buffers because clearing is
generally a slow operation, since every pixel in the window (possibly
millions) is touched, and some graphics hardware allows sets of buffers
to be cleared simultaneously. Hardware that doesn't support
simultaneous clears performs them sequentially. The difference between
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
and
glClear(GL_COLOR_BUFFER_BIT);
glClear(GL_DEPTH_BUFFER_BIT);
is that although both have the same final effect, the first example
might run faster on many machines. It certainly won't run more slowly.
Specifying a Color
With OpenGL, the description of the
shape of an object being drawn is independent of the description of its
color. Whenever a particular geometric object is drawn, it's drawn
using the currently specified coloring scheme. The coloring scheme
might be as simple as "draw everything in fire-engine red," or might be
as complicated as "assume the object is made out of blue plastic, that
there's a yellow spotlight pointed in such and such a direction, and
that there's a general low-level reddish-brown light everywhere else."
In general, an OpenGL programmer first sets the color or coloring
scheme and then draws the objects. Until the color or coloring scheme
is changed, all objects are drawn in that color or using that coloring
scheme. This method helps OpenGL achieve higher drawing performance
than would result if it didn't keep track of the current color. For
example, the pseudocode
set_current_color(red);
draw_object(A);
draw_object(B);
set_current_color(green);
set_current_color(blue);
draw_object(C);
draws objects A and B in red, and
object C in blue. The command on the fourth line that sets the current
color to green is wasted. Coloring, lighting, and shading are all large
topics with entire chapters or large sections devoted to them. To draw
geometric primitives that can be seen, however, you need some basic
knowledge of how to set the current color; this information is provided
in the next paragraphs.
To set a color, use the command
glColor3f( ). It takes three parameters, all of which are
floating-point numbers between 0.0 and 1.0. The parameters are, in order, the red,
green, and blue components of the color. You can think of these three
values as specifying a "mix" of colors: 0.0 means don't use any of that
component, and 1.0 means use all you can of that component. Thus, the
code
glColor3f(1.0, 0.0, 0.0);
makes the brightest red the system
can draw, with no green or blue components. All zeros makes black; in
contrast, all ones makes white. Setting all three components to 0.5
yields gray (halfway between black and white). Here are eight commands
and the colors they would set.
glColor3f(0.0, 0.0, 0.0);   /* black   */
glColor3f(1.0, 0.0, 0.0);   /* red     */
glColor3f(0.0, 1.0, 0.0);   /* green   */
glColor3f(1.0, 1.0, 0.0);   /* yellow  */
glColor3f(0.0, 0.0, 1.0);   /* blue    */
glColor3f(1.0, 0.0, 1.0);   /* magenta */
glColor3f(0.0, 1.0, 1.0);   /* cyan    */
glColor3f(1.0, 1.0, 1.0);   /* white   */
You might have noticed earlier that
the routine to set the clearing color, glClearColor( ), takes four
parameters, the first three of which match the parameters for
glColor3f( ). The fourth parameter is the alpha value, which will be
discussed later. For now, set the fourth parameter of
glClearColor( ) to 0.0, which is its default value.
Forcing Completion of Drawing
OpenGL provides the command
glFlush( ), which forces the client to send the network packet even
though it might not be full. Where there is no network and all commands
are truly executed immediately on the server, glFlush( ) might have no
effect. However, if you're writing a program that you want to work
properly both with and without a network, include a call to glFlush( )
at the end of each frame or scene. Note that glFlush( ) doesn't wait
for the drawing to complete; it just forces the drawing to begin
execution, thereby guaranteeing that all previous commands execute in
finite time even if no further rendering commands are executed.
If glFlush( ) isn't sufficient for
you, try glFinish( ). This command flushes the network as glFlush( )
does
and then waits for notification from the graphics hardware or network
indicating that the drawing is complete in the framebuffer. You might
need to use glFinish( ) if you want to synchronize tasks  for example,
to make sure that your threedimensional rendering is on the screen
before you use Display PostScript to draw labels on top of the
rendering. Another example would be to ensure that the drawing is
complete before it begins to accept user input. After you issue a
glFinish( ) command, your graphics process is blocked until it receives
notification from the graphics hardware that the drawing is complete.
Keep in mind that excessive use of glFinish( ) can reduce the
performance of your application, especially if you're running over a
network, because it requires round-trip communication. If glFlush( ) is
sufficient for your needs, use it instead of glFinish( ).
void glFinish(void);
Forces all previously issued OpenGL commands to complete. This command
doesn't return until all effects from previous commands are fully
realized.
Coordinate System Survival Kit
Whenever you initially open a window
or later move or resize that window, the window system will send an
event to notify you. If you are using GLUT, the notification is
automated; whatever routine has been registered to glutReshapeFunc( )
will be called. You must register a callback function that will
reestablish the rectangular region that will be the new rendering
canvas and define the coordinate system to which objects will be
drawn. Later you'll see how to define three-dimensional
coordinate systems, but right now, just create a simple, basic
two-dimensional coordinate system into which you can draw a few
objects. Call glutReshapeFunc(reshape), where reshape( ) is given as:
void reshape(int w, int h)
{
    glViewport(0, 0, (GLsizei) w, (GLsizei) h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0.0, (GLdouble) w, 0.0, (GLdouble) h);
}
The internals of GLUT will pass this
function two arguments: the width and height, in pixels, of the new,
moved, or resized window. glViewport( ) adjusts the pixel rectangle for
drawing to be the entire new window. The next three routines adjust the
coordinate system for drawing so that the lower-left corner is (0, 0),
and the upper-right corner is (w, h).
To explain it another way, think
about a piece of graphing paper. The w and h values in reshape( )
represent how many columns and rows of squares are on your graph paper.
Then you have to put axes on the graph paper. The gluOrtho2D( ) routine
puts the origin, (0, 0), all the way in the lowest, leftmost square,
and makes each square represent one unit. Now when you render the
points, lines, and polygons in the rest of this chapter, they will
appear on this paper in easily predictable squares. (For now, keep all
your objects two-dimensional.)
Describing Points, Lines, and Polygons
This section explains how to describe
OpenGL geometric primitives. All geometric primitives are eventually
described in terms of their vertices: coordinates that define the
points themselves, the endpoints of line segments, or the corners of
polygons. The next section discusses how these primitives are displayed
and what control you have over their display.
What Are Points, Lines, and Polygons?
You probably have a fairly good idea
of what a mathematician means by the terms point, line, and polygon.
The OpenGL meanings are similar, but not quite the same. One difference
comes from the limitations of computer-based calculations. In any
OpenGL implementation, floating-point calculations are of finite
precision, and they have round-off errors. Consequently, the
coordinates of OpenGL points, lines, and polygons suffer from the same
problems. Another more important difference arises from the limitations
of a raster graphics display. On such a display, the smallest
displayable unit is a pixel, and although pixels might be less than
1/100 of an inch wide, they are still much larger than the
mathematician's concepts of infinitely small (for points) or infinitely
thin (for lines). When OpenGL performs calculations, it assumes points
are represented as vectors of floating-point numbers. However, a point
is typically (but not always) drawn as a single pixel, and many
different points with slightly different coordinates could be drawn by
OpenGL on the same pixel.
Points
A point is represented by a set of
floating-point numbers called a vertex. All internal calculations are
done as if vertices are three-dimensional. Vertices specified by the
user as two-dimensional (that is, with only x and y coordinates) are
assigned a z coordinate equal to zero by OpenGL.
Advanced
OpenGL works in the homogeneous
coordinates of three-dimensional projective geometry, so for internal
calculations, all vertices are represented with four floating-point
coordinates (x, y, z, w). If w is different from zero, these
coordinates correspond to the Euclidean three-dimensional point (x/w,
y/w, z/w). You can specify the w coordinate in OpenGL commands, but
that's rarely done. If the w coordinate isn't specified, it's
understood to be 1.0. (See Appendix F for more information about
homogeneous coordinate systems.)
Lines
In OpenGL, the term line refers to a
line segment, not the mathematician's version that extends to infinity
in both directions. There are easy ways to specify a connected series
of line segments, or even a closed, connected series of segments. In
all cases, though, the lines constituting the connected series are
specified in terms of the vertices at their endpoints.
Polygons
Polygons are the areas enclosed by
single closed loops of line segments, where the line segments are
specified by the vertices at their endpoints. Polygons are typically
drawn with the pixels in the interior filled in, but you can also draw
them as outlines or a set of points. In general, polygons can be
complicated, so OpenGL makes some strong restrictions on what
constitutes a primitive polygon. First, the edges of OpenGL polygons can't
intersect (a mathematician would call a polygon satisfying this
condition a simple polygon). Second, OpenGL polygons must be convex,
meaning that they cannot have indentations. Stated precisely, a region
is convex if, given any two points in the interior, the line segment
joining them is also in the interior. OpenGL, however, doesn't restrict
the number of line segments making up the boundary of a convex polygon.
Note that polygons with holes can't be described. They are nonconvex,
and they can't be drawn with a boundary made up of a single closed
loop. Be aware that if you present OpenGL with a nonconvex filled
polygon, it might not draw it as you expect. For instance, on most
systems no more than the convex hull of the polygon would be filled. On
some systems, less than the convex hull might be filled.
The reason for the OpenGL
restrictions on valid polygon types is that it's simpler to provide
fast polygon-rendering hardware for that restricted class of polygons.
Simple polygons can be rendered quickly. The difficult cases are hard
to detect quickly. So for maximum performance, OpenGL crosses its
fingers and assumes the polygons are simple.
Rectangles
Since rectangles are so common in graphics applications, OpenGL
provides a filled-rectangle drawing primitive, glRect*( ). You can draw
a rectangle as a polygon, as described in "OpenGL Geometric Drawing
Primitives," but your particular implementation of OpenGL might have
optimized glRect*( ) for rectangles.
void glRect{sifd}(TYPE x1, TYPE y1, TYPE x2, TYPE y2);
void glRect{sifd}v(TYPE *v1, TYPE *v2);
Draws the rectangle defined by the
corner points (x1, y1) and (x2, y2). The rectangle lies in the plane
z = 0 and has sides parallel to the x- and y-axes. If the vector form of
the function is used, the corners are given by two pointers to arrays,
each of which contains an (x, y) pair. Note that although the rectangle
begins with a particular orientation in three-dimensional space (in the
xy plane and parallel to the axes), you can change this by applying
rotations or other transformations.
Specifying Vertices
With OpenGL, all geometric objects are ultimately described as an
ordered set of vertices. You use the glVertex*( ) command to specify a
vertex.
void glVertex{234}{sifd}[v](TYPE coords);
Specifies a vertex for use in
describing a geometric object. You can supply up to four coordinates
(x, y, z, w) for a particular vertex or as few as two (x, y) by
selecting the appropriate version of the command. If you use a version
that doesn't explicitly specify z or w, z is understood to be 0 and w
is understood to be 1. Calls to glVertex*( ) are only effective between
a glBegin( ) and glEnd( ) pair.
glVertex2s(2, 3);
glVertex3d(0.0, 0.0, 3.1415926535898);
glVertex4f(2.3, 1.0, -2.2, 2.0);
GLdouble dvect[3] = {5.0, 9.0, 1992.0};
glVertex3dv(dvect);
The first example represents a vertex
with three-dimensional coordinates (2, 3, 0). (Remember that if it
isn't specified, the z coordinate is understood to be 0.) The
coordinates in the second example are (0.0, 0.0, 3.1415926535898)
(double-precision floating-point numbers). The third example represents
the vertex with three-dimensional coordinates (1.15, 0.5, -1.1).
(Remember that the x, y, and z coordinates are eventually divided by
the w coordinate.) In the final example, dvect is a pointer to an array
of three double-precision floating-point numbers. On some machines, the
vector form of glVertex*( ) is more efficient, since only a single
parameter needs to be passed to the graphics subsystem. Special
hardware might be able to send a whole series of coordinates in a
single batch. If your machine is like this, it's to your advantage to
arrange your data so that the vertex coordinates are packed
sequentially in memory. In this case, there may be some gain in
performance by using the vertex array operations of OpenGL.
OpenGL Geometric Drawing Primitives
Now that you've seen how to specify vertices, you still need to know
how to tell OpenGL to create a set of points, a line, or a polygon from
those vertices. To do this, you bracket each set of vertices between a
call to glBegin( ) and a call to glEnd( ). The argument passed to
glBegin( ) determines what sort of geometric primitive is constructed
from the vertices.
glBegin(GL_POLYGON);
    glVertex2f(0.0, 0.0);
    glVertex2f(0.0, 3.0);
    glVertex2f(4.0, 3.0);
    glVertex2f(6.0, 1.5);
    glVertex2f(4.0, 0.0);
glEnd();
If you had used GL_POINTS instead of GL_POLYGON, the primitive would have
been simply the five points shown in Figure 2-6. Table 2-2 in the
following function summary for glBegin( ) lists the ten possible
arguments and the corresponding type of primitive.
void glBegin(GLenum mode);
Marks the beginning of a vertex-data list that describes a geometric
primitive. The type of primitive is indicated by mode, which can be any
of the values shown below:
Value              Meaning
GL_POINTS          individual points
GL_LINES           pairs of vertices interpreted as individual line segments
GL_LINE_STRIP      series of connected line segments
GL_LINE_LOOP       same as above, with a segment added between last and first vertices
GL_TRIANGLES       triples of vertices interpreted as triangles
GL_TRIANGLE_STRIP  linked strip of triangles
GL_TRIANGLE_FAN    linked fan of triangles
GL_QUADS           quadruples of vertices interpreted as four-sided polygons
GL_QUAD_STRIP      linked strip of quadrilaterals
GL_POLYGON         boundary of a simple, convex polygon
Line Details
With OpenGL, you can specify lines with different widths and lines that
are stippled in various ways: dotted, dashed, drawn with alternating
dots and dashes, and so on.
void glLineWidth(GLfloat width);
Sets the width in pixels for rendered
lines; width must be greater than 0.0 and by default is 1.0. The actual
rendering of lines is affected by the antialiasing mode, in the same
way as for points. (See "Antialiasing" in Chapter 6.) Without
antialiasing, widths of 1, 2, and 3 draw lines 1, 2, and 3 pixels wide.
With antialiasing enabled, noninteger line widths are possible, and
pixels on the boundaries are typically drawn at less than full
intensity. As with point sizes, a particular OpenGL implementation
might limit the width of nonantialiased lines to its maximum
antialiased line width, rounded to the nearest integer value. You can
obtain this floating-point value by using GL_LINE_WIDTH_RANGE with
glGetFloatv( ).
OpenGL Matrix Transformations
The OpenGL image rendering process is managed as a series of matrix
transformations. Each point (vertex) passes through a
series of transformations defined as matrix products. The viewing
transformations must precede the modeling transformations in your code,
but you can specify the projection and viewport transformations at any
point before drawing occurs. The figure below shows the order in
which these operations occur on your computer.
To specify viewing, modeling, and
projection transformations, you construct a 4 × 4 matrix M, which is then multiplied by the
coordinates of each vertex v
in the scene to accomplish the transformation

    v' = Mv

(Remember that vertices always have
four coordinates (x, y, z, w),
though in most cases w is 1 and for two-dimensional data z is 0.) Note that viewing and
modeling transformations are automatically applied to surface normal
vectors, in addition to vertices. (Normal vectors are used only in eye
coordinates.) This ensures that the normal vector's relationship to the
vertex data is properly preserved. The viewing and modeling
transformations you specify are combined to form the modelview matrix,
which is applied to the incoming object coordinates to yield eye
coordinates. Next, if you've specified additional clipping planes to
remove certain objects from the scene or to provide cutaway views of
objects, these clipping planes are applied.
After that, OpenGL applies the
projection matrix to yield clip coordinates. This transformation
defines a viewing volume; objects outside this volume are clipped so
that they're not drawn in the final scene. After this point, the
perspective division is performed by dividing coordinate values by w, to produce normalized device
coordinates.
Homogeneous Coordinates
OpenGL commands usually deal with two- and three-dimensional vertices,
but in fact all are treated internally as three-dimensional homogeneous
vertices comprising four coordinates. Every column vector (x, y, z, w)^T
represents a homogeneous vertex if at least one of its elements is
nonzero. If the real number a is nonzero, then (x, y, z, w)^T and (ax,
ay, az, aw)^T represent the same homogeneous vertex. (This is just like
fractions: x/y = (ax)/(ay).) A three-dimensional Euclidean space point
(x, y, z)^T becomes the homogeneous vertex with coordinates (x, y, z,
1.0)^T, and the two-dimensional Euclidean point (x, y)^T becomes (x, y,
0.0, 1.0)^T.
As long as w is nonzero, the homogeneous vertex (x, y, z, w)^T
corresponds to the three-dimensional point (x/w, y/w, z/w)^T. If w =
0.0, it corresponds to no Euclidean point, but rather to some idealized
"point at infinity." To understand this point at infinity, consider the
point (1, 2, 0, 0), and note that the sequence of points (1, 2, 0, 1),
(1, 2, 0, 0.01), and (1, 2.0, 0.0, 0.0001) corresponds to the
Euclidean points (1, 2), (100, 200), and (10000, 20000). This sequence
represents points rapidly moving toward infinity along the line 2x = y.
Thus, you can think of (1, 2, 0, 0) as the point at infinity in the
direction of that line.
Note: OpenGL might not handle
homogeneous clip coordinates with w < 0 correctly. To be sure that
your code is portable to all OpenGL systems, use only nonnegative w
values.
Transforming Vertices
Vertex transformations (such as rotations, translations, scaling, and
shearing) and projections (such as perspective and orthographic) can
all be represented by applying an appropriate 4 × 4 matrix to the
coordinates representing the vertex. If v represents a homogeneous
vertex and M is a 4 × 4 transformation matrix, then Mv is the
image of v under the transformation by M. (In computer-graphics
applications, the transformations used are usually nonsingular; in
other words, the matrix M can be inverted. This isn't required, but
some problems arise with singular transformations.) After
transformation, all transformed vertices are clipped so that x, y, and
z are in the range [-w, w] (assuming w > 0). Note that
this range corresponds in Euclidean space to [-1.0, 1.0].
Transforming Normals
Normal vectors aren't transformed in the same way as vertices or
position vectors. Mathematically, it's better to think of normal
vectors not as vectors, but as planes perpendicular to those vectors.
Then, the transformation rules for normal vectors are described by the
transformation rules for perpendicular planes. A homogeneous plane is
denoted by the row vector (a, b, c, d), where at least one of a, b, c,
or d is nonzero. If q is a nonzero real number, then (a, b, c, d) and
(qa, qb, qc, qd) represent the same plane. A point (x, y, z, w)^T is on
the plane (a, b, c, d) if ax + by + cz + dw = 0. (If w = 1, this is the
standard description of a Euclidean plane.) In order for (a, b, c, d)
to represent a Euclidean plane, at least one of a, b, or c must be
nonzero. If they're all zero, then (0, 0, 0, d) represents the "plane
at infinity," which contains all the "points at infinity."
If p is a homogeneous plane and v is a homogeneous vertex, then the
statement "v lies on plane p" is written mathematically as pv = 0,
where pv is normal matrix multiplication. If M is a nonsingular vertex
transformation (that is, a 4 × 4 matrix that has an inverse M^-1),
then pv = 0 is equivalent to pM^-1Mv = 0, so Mv lies on the plane pM^-1.
Thus, pM^-1 is the image of the plane under the vertex transformation M.
If you like to think of normal vectors as vectors instead of as the
planes perpendicular to them, let v and n be vectors such that v is
perpendicular to n. Then, n^Tv = 0. Thus, for an arbitrary nonsingular
transformation M, n^TM^-1Mv = 0, which means that n^TM^-1 is the transpose
of the transformed normal vector. Thus, the transformed normal vector
is (M^-1)^Tn. In other words, normal vectors are transformed by the
inverse transpose of the transformation that transforms points. Whew!
Transformation Matrices
Although any nonsingular matrix M represents a valid projective
transformation, a few special matrices are particularly useful. These
matrices are listed in the following subsections.
Translation
The call glTranslate*(x, y, z) generates T, where

        | 1  0  0  x |                 | 1  0  0  -x |
    T = | 0  1  0  y |    and   T^-1 = | 0  1  0  -y |
        | 0  0  1  z |                 | 0  0  1  -z |
        | 0  0  0  1 |                 | 0  0  0   1 |

Scaling
The call glScale*(x, y, z) generates S, where

        | x  0  0  0 |                 | 1/x  0   0   0 |
    S = | 0  y  0  0 |    and   S^-1 = |  0  1/y  0   0 |
        | 0  0  z  0 |                 |  0   0  1/z  0 |
        | 0  0  0  1 |                 |  0   0   0   1 |

Notice that S^-1 is defined only if x, y, and z are all nonzero.
Rotation
The call glRotate*(a, x, y, z) generates R as follows:
Let v = (x, y, z)^T, and u = v/||v|| = (x', y', z')^T. Also let

        |  0   -z'   y' |
    S = |  z'   0   -x' |
        | -y'   x'   0  |

and M = uu^T + (cos a)(I - uu^T) + (sin a)S. Then

        | m00  m01  m02  0 |
    R = | m10  m11  m12  0 |
        | m20  m21  m22  0 |
        |  0    0    0   1 |

where the mij are the entries of M. The R matrix is always defined. If
x = y = z = 0, then R is the identity matrix. You can obtain the inverse
of R, R^-1, by substituting -a for a, or by transposition. The
glRotate*( ) command generates a matrix for rotation about an arbitrary
axis. Often, you're rotating about one of the coordinate axes; the
corresponding matrices are as follows:

            | 1    0       0     0 |            |  cos a   0   sin a  0 |
    Rx(a) = | 0  cos a  -sin a   0 |    Ry(a) = |    0     1     0    0 |
            | 0  sin a   cos a   0 |            | -sin a   0   cos a  0 |
            | 0    0       0     1 |            |    0     0     0    1 |

            | cos a  -sin a   0   0 |
    Rz(a) = | sin a   cos a   0   0 |
            |   0       0     1   0 |
            |   0       0     0   1 |

As before, the inverses are obtained by transposition.
Perspective Projection
The call glFrustum(l, r, b, t, n, f) generates R, where

        | 2n/(r-l)     0       (r+l)/(r-l)       0      |
    R = |    0      2n/(t-b)   (t+b)/(t-b)       0      |
        |    0         0      -(f+n)/(f-n)  -2fn/(f-n)  |
        |    0         0           -1            0      |

R is defined as long as l ≠ r, t ≠ b, and n ≠ f.
An Example: Building an Icosahedron
To illustrate some of the
considerations that arise in approximating a surface, let's look at
some example code sequences. This code concerns the vertices of a
regular icosahedron (which is a Platonic solid composed of twenty faces
that span twelve vertices, each face of which is an equilateral
triangle). An icosahedron can be considered a rough approximation for a
sphere. Example 2-13 defines the vertices and triangles making up an
icosahedron and then draws the icosahedron.
#define X .525731112119133606
#define Z .850650808352039932

static GLfloat vdata[12][3] =
{
    {-X, 0.0, Z}, {X, 0.0, Z}, {-X, 0.0, -Z}, {X, 0.0, -Z},
    {0.0, Z, X}, {0.0, Z, -X}, {0.0, -Z, X}, {0.0, -Z, -X},
    {Z, X, 0.0}, {-Z, X, 0.0}, {Z, -X, 0.0}, {-Z, -X, 0.0}
};

static GLuint tindices[20][3] =
{
    {0,4,1}, {0,9,4}, {9,5,4}, {4,5,8}, {4,8,1},
    {8,10,1}, {8,3,10}, {5,3,8}, {5,2,3}, {2,7,3},
    {7,10,3}, {7,6,10}, {7,11,6}, {11,0,6}, {0,1,6},
    {6,1,10}, {9,0,11}, {9,11,2}, {9,2,5}, {7,2,11}
};

int i;
glBegin(GL_TRIANGLES);
for (i = 0; i < 20; i++)
{
    /* color information here */
    glVertex3fv(&vdata[tindices[i][0]][0]);
    glVertex3fv(&vdata[tindices[i][1]][0]);
    glVertex3fv(&vdata[tindices[i][2]][0]);
}
glEnd();
The strange numbers X and Z are chosen so that the distance from the
origin to any of the vertices of the icosahedron is 1.0. The
coordinates of the twelve vertices are given in the array vdata[ ][ ],
where the zeroth vertex is {-X, 0.0, Z}, the first
is {X, 0.0, Z}, and so on. The array tindices[][] tells how to link the
vertices to make triangles. For example, the first triangle is made
from the zeroth, fourth, and first vertex. If you take the vertices for
triangles in the order given, all the triangles have the same
orientation.
//
// openGL_icosahedron
//
#include "stdafx.h"
#include <gl/glut.h>

#define X .525731112119133606
#define Z .850650808352039932

static GLfloat vdata[12][3] =
{
    {-X, 0.0, Z}, {X, 0.0, Z}, {-X, 0.0, -Z}, {X, 0.0, -Z},
    {0.0, Z, X}, {0.0, Z, -X}, {0.0, -Z, X}, {0.0, -Z, -X},
    {Z, X, 0.0}, {-Z, X, 0.0}, {Z, -X, 0.0}, {-Z, -X, 0.0}
};

static GLuint tindices[20][3] =
{
    {0,4,1}, {0,9,4}, {9,5,4}, {4,5,8}, {4,8,1},
    {8,10,1}, {8,3,10}, {5,3,8}, {5,2,3}, {2,7,3},
    {7,10,3}, {7,6,10}, {7,11,6}, {11,0,6}, {0,1,6},
    {6,1,10}, {9,0,11}, {9,11,2}, {9,2,5}, {7,2,11}
};

void render(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < 20; i++)
    {
        glColor3f(1.0, 0.0, 0.0);    // red vertex
        glVertex3fv(&vdata[tindices[i][0]][0]);
        glColor3f(0.0, 1.0, 0.0);    // green vertex
        glVertex3fv(&vdata[tindices[i][1]][0]);
        glColor3f(0.0, 0.0, 1.0);    // blue vertex
        glVertex3fv(&vdata[tindices[i][2]][0]);
    }
    glEnd();
    glFlush();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_SINGLE | GLUT_RGBA);
    glutInitWindowPosition(100, 100);
    glutInitWindowSize(500, 500);
    glutCreateWindow("icosahedron");
    glutDisplayFunc(render);
    glutMainLoop();
    return 0;
}
The line that mentions color information should be replaced by a
command that sets the color of the ith face. If no code appears here,
all faces are drawn in the same color, and it'll be impossible to
discern the three-dimensional quality of the object.
A Simple Example: Drawing a Cube
The following OpenGL program draws a
cube that's scaled by a modeling transformation. The viewing
transformation, gluLookAt( ), positions and aims the camera towards
where the cube is drawn. A projection transformation and a viewport
transformation are also specified. The rest of this section walks you
through this example and briefly explains the transformation commands
it uses. The succeeding sections contain the complete, detailed
discussion of all OpenGL's transformation commands.
//
// transformed_cube
//
#include "stdafx.h"
#include <GL/glut.h>

void init(void)
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glShadeModel(GL_FLAT);
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 1.0, 1.0);
    glLoadIdentity();                       /* clear the matrix */
    /* viewing transformation */
    gluLookAt(0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
    glScalef(1.0, 2.0, 1.0);                /* modeling transformation */
    glutWireCube(1.0);
    glFlush();
}

void reshape(int w, int h)
{
    glViewport(0, 0, (GLsizei) w, (GLsizei) h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-1.0, 1.0, -1.0, 1.0, 1.5, 20.0);
    glMatrixMode(GL_MODELVIEW);
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(500, 500);
    glutInitWindowPosition(100, 100);
    glutCreateWindow(argv[0]);
    init();
    glutDisplayFunc(display);
    glutReshapeFunc(reshape);
    glutMainLoop();
    return 0;
}
The Viewing Transformation
Recall that the viewing
transformation is analogous to positioning and aiming a camera. In this
code example, before the viewing transformation can be specified, the
current matrix is set to the identity matrix with glLoadIdentity( ).
This step is necessary since most of the transformation commands
multiply the current matrix by the specified matrix and then set the
result to be the current matrix. If you don't clear the current matrix
by loading it with the identity matrix, you continue to combine
previous transformation matrices with the new one you supply. In some
cases, you do want to perform such combinations, but you also need to
clear the matrix sometimes.
After the matrix is initialized, the viewing transformation is
specified with gluLookAt( ). The arguments for this command indicate
where the camera (or eye position) is placed, where it is aimed, and
which way is up. The arguments used here place the camera at (0, 0, 5),
aim the camera lens towards (0, 0, 0), and specify the up-vector as (0,
1, 0). The up-vector defines a unique orientation for the camera. If
gluLookAt( ) were not called, the camera would have a default position
and orientation. By default, the camera is situated at the origin, points
down the negative z-axis, and has an up-vector of (0, 1, 0). So the
overall effect is that gluLookAt( ) moves the camera 5 units along the
z-axis.
The Modeling Transformation
You use the modeling transformation to position and orient the model.
For example, you can rotate, translate, or scale the model, or perform
some combination of these operations. In this example, glScalef( ) is
the modeling transformation that is used. The arguments for this
command specify how scaling should occur along the three axes. If all
the arguments are 1.0, this command has no effect. The cube is drawn
twice as large in the y direction. Thus, if one corner of the cube had
originally been at (3.0, 3.0, 3.0), that corner would wind up being
drawn at (3.0, 6.0, 3.0). The effect of this modeling transformation is
to transform the cube so that it isn't a cube but a rectangular box.
Try This
Change the gluLookAt( ) call to the modeling transformation
glTranslatef( ) with parameters (0.0, 0.0, -5.0). The result should
look exactly the same as when you used gluLookAt( ). Why are the
effects of these two commands similar? Note that instead of moving the
camera (with a viewing transformation) so that the cube could be
viewed, you could have moved the cube away from the camera (with a
modeling transformation). This duality in the nature of viewing and
modeling transformations is why you need to think about the effect of
both types of transformations simultaneously. It doesn't make sense to
try to separate the effects, but sometimes it's easier to think about
them one way rather than the other. This is also why modeling and
viewing transformations are combined into the modelview matrix before
the transformations are applied.
Also note that the modeling and viewing transformations are included in
the display( ) routine, along with the call that's used to draw the
cube, glutWireCube( ). This way, display( ) can be used repeatedly to
draw the contents of the window if, for example, the window is moved or
uncovered, and you've ensured that each time, the cube is drawn in the
desired way, with the appropriate transformations. The potential
repeated use of display( ) underscores the need to load the identity
matrix before performing the viewing and modeling transformations,
especially when other transformations might be performed between calls
to display( ).
The Projection Transformation
Specifying the projection
transformation is like choosing a lens for a camera. You can think of
this transformation as determining what the field of view or viewing
volume is and therefore what objects are inside it and to some extent
how they look. This is equivalent to choosing among wide-angle, normal,
and telephoto lenses, for example. With a wide-angle lens, you can
include a wider scene in the final photograph than with a telephoto
lens, but a telephoto lens allows you to photograph objects as though
they're closer to you than they actually are. In computer graphics, you
don't have to pay $10,000 for a 2000-millimeter telephoto lens; once
you've bought your graphics workstation, all you need to do is use a
smaller number for your field of view. In addition to the field-of-view
considerations, the projection transformation determines how objects
are projected onto the screen, as its name suggests. Two basic types of
projections are provided for you by OpenGL, along with several
corresponding commands for describing the relevant parameters in
different ways. One type is the perspective projection, which matches
how you see things in daily life.
Perspective makes objects that are farther away appear smaller; for
example, it makes railroad tracks appear to converge in the distance.
If you're trying to make realistic pictures, you'll want to choose
perspective projection, which is specified with the glFrustum( )
command in this code example. The other type of projection is
orthographic, which maps objects directly onto the screen without
affecting their relative size. Orthographic projection is used in
architectural and computer-aided design applications where the final
image needs to reflect the measurements of objects rather than how they
might look. Architects create perspective drawings to show how
particular buildings or interior spaces look when viewed from various
vantage points; the need for orthographic projection arises when
blueprint plans or elevations are generated, which are used in the
construction of buildings.
Before glFrustum( ) can be called to set the projection transformation,
some preparation needs to happen. As shown in the reshape( ) routine,
the command called glMatrixMode( ) is used first, with the argument
GL_PROJECTION. This indicates that the current matrix specifies the
projection transformation; the following transformation calls then
affect the projection matrix. As you can see, a few lines later
glMatrixMode( ) is called again, this time with GL_MODELVIEW as the
argument. This indicates that succeeding transformations now affect the
modelview matrix instead of the projection matrix.
Note that glLoadIdentity( ) is used
to initialize the current projection matrix so that only the specified
projection transformation has an effect. Now glFrustum( ) can be
called, with arguments that define the parameters of the projection
transformation. In this example, both the projection transformation and
the viewport transformation are contained in the reshape( ) routine,
which is called when the window is first created and whenever the
window is moved or reshaped. This makes sense, since both projecting
(the width to height aspect ratio of the projection viewing volume) and
applying the viewport relate directly to the screen, and specifically
to the size or aspect ratio of the window on the screen.
Try This
Change the glFrustum( ) call to the
more commonly used Utility Library routine gluPerspective( ) with
parameters (60.0, 1.0, 1.5, 20.0). Then experiment with different
values, especially for fovy and aspect.
The Viewport Transformation
Together, the projection
transformation and the viewport transformation determine how a scene
gets mapped onto the computer screen. The projection transformation
specifies the mechanics of how the mapping should occur, and the
viewport indicates the shape of the available screen area into which
the scene is mapped. Since the viewport specifies the region the image
occupies on the computer screen, you can think of the viewport
transformation as defining the size and location of the final processed
photograph (for example, whether the photograph should be enlarged or
shrunk). The arguments to glViewport( ) describe the origin of the
available screen space within the window, (0, 0) in this example, and
the width and height of the available screen area, all measured in
pixels on the screen. This is why this command needs to be called
within reshape( ): if the window changes size, the viewport needs to
change accordingly. Note that the width and height are specified using
the actual width and height of the window; often, you want to specify
the viewport this way rather than giving an absolute size.
Drawing the Scene
Once all the necessary transformations have been specified, you can
draw the scene (that is, take the photograph). As the scene is drawn,
OpenGL transforms each vertex of every object in the scene by the
modeling and viewing transformations. Each vertex is then transformed
as specified by the projection transformation and clipped if it lies
outside the viewing volume described by the projection transformation.
Finally, the remaining transformed vertices are divided by w and mapped
onto the viewport.
General-Purpose Transformation Commands
This section discusses some OpenGL
commands that you might find useful as you specify desired
transformations. You've already seen a couple of these commands,
glMatrixMode( ) and glLoadIdentity( ). The other two commands described
here, glLoadMatrix*( ) and glMultMatrix*( ), allow you to specify any
transformation matrix directly and then to multiply the current matrix
by that specified matrix. More specific transformation commands, such
as gluLookAt( ) and glScale*( ), are described in later sections. As
described in the preceding section, you need to state whether you want
to modify the modelview or projection matrix before supplying a
transformation command. You choose the matrix with glMatrixMode( ).
When you use nested sets of OpenGL commands that might be called
repeatedly, remember to reset the matrix mode correctly.
void
glMatrixMode(GLenum mode);
Specifies whether the modelview,
projection, or texture matrix will be modified, using the argument
GL_MODELVIEW, GL_PROJECTION, or GL_TEXTURE for mode. Subsequent
transformation commands affect the specified matrix. Note that only one
matrix can be modified at a time. By default, the modelview matrix is
the one that's modifiable, and all three matrices contain the identity
matrix. You use the glLoadIdentity( ) command to clear the currently
modifiable matrix for future transformation commands, since these
commands modify the current matrix. Typically, you always call this
command before specifying projection or viewing transformations, but
you might also call it before specifying a modeling transformation.
void
glLoadIdentity(void);
Sets the currently modifiable matrix
to the 4 × 4 identity matrix. If you want to specify explicitly a
particular matrix to be loaded as the current matrix, use
glLoadMatrix*( ). Similarly, use glMultMatrix*( ) to multiply the
current matrix by the matrix passed in as an argument. The argument for
both these commands is a vector of sixteen values (m1, m2, ... , m16)
that specifies a matrix M, stored in column-major order, as follows:

    M = | m1  m5  m9   m13 |
        | m2  m6  m10  m14 |
        | m3  m7  m11  m15 |
        | m4  m8  m12  m16 |
Remember that you might be able to maximize efficiency by using display
lists to store frequently used matrices (and their inverses) rather
than recomputing them.
If you're programming in C and you declare a matrix as m[4][4], then
the element m[i][j] is in the ith column and jth row of the OpenGL
transformation matrix. This is the reverse of the standard C convention
in which m[i][j] is in row i and column j. To avoid confusion, you
should declare your matrices as m[16].
void
glLoadMatrix{fd}(const TYPE *m);
Sets
the sixteen values of the current matrix to those specified by m.
void
glMultMatrix{fd}(const TYPE *m);
Multiplies
the matrix specified by the sixteen values pointed to by m by the
current matrix
and stores the result as the current matrix.
All matrix multiplication with OpenGL occurs as follows: Suppose the
current matrix is C and the
matrix specified with glMultMatrix*( ) or any of the transformation
commands is M. After
multiplication, the final matrix is always CM. Since matrix multiplication
isn't generally commutative, the order makes a difference.
Viewing and Modeling Transformations
Viewing and modeling transformations
are inextricably related in OpenGL and are in fact combined into a
single modelview matrix. One of the toughest problems newcomers
to computer graphics face is understanding the effects of combined
threedimensional transformations. As you've already seen, there are
alternative ways to think about transformations  do you want to move
the camera in one direction, or move the object in the opposite
direction? Each way of thinking about transformations has advantages
and disadvantages, but in some cases one way more naturally matches the
effect of the intended transformation. If you can find a natural
approach for your particular application, it's easier to visualize the
necessary transformations and then write the corresponding code to
specify the matrix manipulations. The first part of this section
discusses how to think about transformations; later, specific commands
are presented. For now, we use only the matrixmanipulation commands
you've already seen. Finally, keep in mind that you must call
glMatrixMode( ) with GL_MODELVIEW as its argument prior to performing
modeling or viewing transformations.
Thinking about Transformations
Let's start with a simple case of two
transformations: a 45-degree counterclockwise rotation about the origin
around the z-axis, and a translation down the x-axis. Suppose that the
object you're drawing is small compared to the translation (so that you
can see the effect of the translation), and that it's originally
located at the origin. If you rotate the object first and then
translate it, the rotated object appears on the x-axis. If you
translate it down the x-axis first, however, and then rotate about the
origin, the object is on the line y=x, as shown below. In general, the
order of transformations is critical. If you do transformation A and
then transformation B, you almost always get something different than
if you do them in the opposite order.
Now let's talk about the order in
which you specify a series of transformations. All viewing and modeling
transformations are represented as 4 × 4 matrices. Each
successive glMultMatrix*( ) or transformation command multiplies a new
4 × 4 matrix M by the
current modelview matrix C to
yield CM. Finally, vertices v are multiplied by the current
modelview matrix. This process means that the last transformation
command called in your program is actually the first one applied to the
vertices: CMv. Thus, one way
of looking at it is to say that you have to specify the matrices in the
reverse order. Like many other things, however, once you've gotten used
to thinking about this correctly, backward will seem like forward.
Consider the following code sequence, which draws a single point using
three transformations:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMultMatrixf(N);   /* apply transformation N */
glMultMatrixf(M);   /* apply transformation M */
glMultMatrixf(L);   /* apply transformation L */
glBegin(GL_POINTS);
glVertex3f(v);      /* draw transformed vertex v */
glEnd();
With this code, the modelview matrix
successively contains I, N, NM,
and finally NML, where I represents the identity matrix.
The transformed vertex is NMLv.
Thus, the vertex transformation is N(M(Lv))
 that is, v is multiplied
first by L, the resulting Lv is multiplied by M, and the resulting MLv is multiplied by N. Notice that
the transformations to vertex v
effectively occur in the opposite order than they were specified.
(Actually, only a single multiplication of a vertex by the modelview
matrix occurs; in this example, the N,
M, and L matrices are already multiplied
into a single matrix before it's applied to v.)
Grand, Fixed Coordinate System
Thus, if you like to think in terms of a grand, fixed coordinate system,
in which matrix multiplications affect the position, orientation, and
scaling of your model, you have to think of the multiplications as
occurring in the opposite order from how they appear in the code. Using
the simple example shown on the left side of the figure above (a
rotation about the origin and a translation along the xaxis), if you
want the object to appear on the axis after the operations, the
rotation must occur first, followed by the translation. To do this,
you'll need to reverse the order of operations, so the code looks
something like this (where R
is the rotation matrix and T
is the translation matrix):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMultMatrixf(T);   /* translation */
glMultMatrixf(R);   /* rotation */
draw_the_object();
Moving a Local Coordinate System
Another way to view matrix
multiplications is to forget about a grand, fixed coordinate system in
which your model is transformed and instead imagine that a local
coordinate system is tied to the object you're drawing. All operations
occur relative to this changing coordinate system. With this approach,
the matrix multiplications now appear in the natural order in the code.
(Regardless of which analogy you're using, the code is the same, but
how you think about it differs.) To see this in the
translation-rotation example, begin by visualizing the object with a
coordinate system tied to it. The translation operation moves the
object and its coordinate system down the x-axis. Then, the rotation
occurs about the (now-translated) origin, so the object rotates in
place in its position on the axis. This approach is what you should use
for applications such as articulated robot arms, where there are joints
at the shoulder, elbow, and wrist, and on each of the fingers. To
figure out where the tips of the fingers go relative to the body, you'd
like to start at the shoulder, go down to the wrist, and so on,
applying the appropriate rotations and translations at each joint.
Thinking about it in reverse would be far more confusing.
This second approach can be
problematic, however, in cases where scaling occurs, and especially so
when the scaling is nonuniform (scaling different amounts along the
different axes). After uniform scaling, translations move a vertex by a
multiple of what they did before, since the coordinate system is
stretched. Nonuniform scaling mixed with rotations may make the axes of
the local coordinate system nonperpendicular.
As mentioned earlier, you normally
issue viewing transformation commands in your program before any
modeling transformations. This way, a vertex in a model is first
transformed into the desired orientation and then transformed by the
viewing operation. Since the matrix multiplications must be specified
in
reverse order, the viewing commands need to come first. Note, however,
that you don't need to specify either viewing or modeling
transformations if you're satisfied with the default conditions. If
there's no viewing transformation, the "camera" is left in the default
position at the origin, pointed toward the negative z-axis; if there's
no modeling transformation, the model isn't moved, and it retains its
specified position, orientation, and size.
Since the commands for performing
modeling transformations can be used to perform viewing
transformations, modeling transformations are discussed first, even if
viewing transformations are actually issued first. This order for
discussion also matches the way many programmers think when planning
their code: Often, they write all the code necessary to compose the
scene, which involves transformations to position and orient objects
correctly relative to each other. Next, they decide where they want the
viewpoint to be relative to the scene they've composed, and then they
write the viewing transformations accordingly.
Modeling Transformations
The three OpenGL routines for
modeling transformations are glTranslate*( ), glRotate*( ), and
glScale*( ). As you might suspect, these routines transform an object
(or coordinate system, if you're thinking of it that way) by moving,
rotating, stretching, shrinking, or reflecting it. All three commands
are equivalent to producing an appropriate translation, rotation, or
scaling matrix, and then calling glMultMatrix*( ) with that matrix as
the argument. However, these three routines might be faster than using
glMultMatrix*( ). OpenGL automatically computes the matrices for you.
In the command summaries that follow, each matrix multiplication is
described in terms of what it does to the vertices of a geometric
object using the fixed coordinate system approach, and in terms of what
it does to the local coordinate system that's attached to an object.
Translate
void
glTranslate{fd}(TYPE x, TYPE y, TYPE z);
Multiplies the current matrix by a
matrix that moves (translates) an object by the given x, y, and z
values (or moves the local coordinate system by the same amounts).
The figure below shows the effects of glTranslate( ).
Note that using (0.0, 0.0, 0.0) as
the argument for glTranslate*( ) is the identity operation  that is,
it has no effect on an object or its local coordinate system.
Rotate
void
glRotate{fd}(TYPE angle, TYPE x, TYPE y, TYPE z);
Multiplies the current matrix by a
matrix that rotates an object (or the local coordinate system) in a
counterclockwise direction about the ray from the origin through the
point (x, y, z). The angle parameter specifies the angle of rotation in
degrees. The effect of glRotatef(45.0, 0.0, 0.0, 1.0), which is a
rotation of 45 degrees about the zaxis, is shown below:
Note that an object that lies farther from the axis of rotation is more
dramatically rotated (has a larger orbit) than an object drawn near the
axis. Also, if the angle argument is zero, the glRotate*( ) command has
no effect.
Scale
void
glScale{fd}(TYPE x, TYPE y, TYPE z);
Multiplies the current matrix by a
matrix that stretches, shrinks, or reflects an object along the axes.
Each x, y, and z coordinate of every point in the object is multiplied
by the corresponding argument x, y, or z. With the local coordinate
system approach, the local coordinate axes are stretched, shrunk, or
reflected by the x, y, and z factors, and the associated object is
transformed with them. The figure below shows the effect of
glScalef(2.0, 0.5, 1.0).
glScale*( ) is the only one of the
three modeling transformations that changes the apparent size of an
object: Scaling with values greater than 1.0 stretches an object, and
using values less than 1.0 shrinks it. Scaling with a -1.0 value
reflects an object across an axis. The identity values for scaling are
(1.0, 1.0, 1.0). In general, you should limit your use of glScale*( )
to those cases where it is necessary. Using glScale*( ) decreases the
performance of lighting calculations, because the normal vectors have
to be renormalized after transformation.
Note: A scale value of zero collapses
all object coordinates along that axis to zero. It's usually not a good
idea to do this, because such an operation cannot be undone.
Mathematically speaking, the matrix cannot be inverted, and inverse
matrices are required for certain lighting operations. Sometimes
collapsing coordinates does make sense, however; the calculation of
shadows on a planar surface is a typical application. In general,
if a coordinate system is to be collapsed, the projection matrix should
be used rather than the modelview matrix.
As a review of what we have learned, we return
to the waving sheet animation by Roger Wetzel. Many of the 3D
geometric operations covered in this lecture are used in this example
program. (The source is from Roger Wetzel at www.fatalfx.com.)
/*
 * FATAL FX' Dragnet 2k (C) 1996, 1998 Fatal FX
 * Version 1.1
 * by roger.wetzel@fatalfx.com
 */
#include <GL/glut.h>
#include <stdlib.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SPACE 35.0
#define EDGELEN 0.08
#define POINTSPERSIDE 32
#define STARTCOORD ((POINTSPERSIDE-1)*EDGELEN/2)
#define ANIMLEN 100

struct Point3d
{
    GLfloat x, y, z;
};

struct Point3d p[POINTSPERSIDE][POINTSPERSIDE];
GLfloat material1[] = {0.1, 0.5, 0.8, 1.0};
GLfloat material2[] = {0.8, 0.8, 0.8, 1.0};
GLfloat spinX = 0.0, spinY = 50.0, spinZ = 0.0;
GLfloat x = 0.0, y = 0.0, z = 0.7;
int dispList = 1;
int material = 0, anim = 1, spin = 1;

void normalize(float v[3])
{
    float d = sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    if (d != 0.0) /* avoid division by zero */
    {
        v[0] /= d;
        v[1] /= d;
        v[2] /= d;
    }
}

void normCrossProd(float v1[3], float v2[3], float out[3])
{
    out[0] = v1[1]*v2[2] - v1[2]*v2[1];
    out[1] = v1[2]*v2[0] - v1[0]*v2[2];
    out[2] = v1[0]*v2[1] - v1[1]*v2[0];
    normalize(out);
}

void changeMaterial(void)
{
    if (material)
        glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE, material1);
    else
        glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE, material2);
    material = !material;
}

void resetMaterial(void)
{
    material = 0;
    changeMaterial();
    material = 0;
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPushMatrix();
    glRotatef(spinY, 1.0, 0.0, 0.0);
    glRotatef(spinX, 0.0, 1.0, 0.0);
    glRotatef(spinZ, 0.0, 0.0, 1.0);
    glCallList(dispList);
    glPopMatrix();
    glFinish();
    glutSwapBuffers();
}

void idle(void)
{
    if (spin)
    {
        spinX += x;
        if (spinX >= 360.0) {spinX -= 360.0;}
        spinY += y;
        if (spinY >= 360.0) {spinY -= 360.0;}
        spinZ += z;
        if (spinZ >= 360.0) {spinZ -= 360.0;}
    }
    if (anim)
    {
        dispList--;
        if (!dispList) {dispList = ANIMLEN;}
    }
    glutPostRedisplay();
}

void motion(int xPos, int yPos)
{
    x = y = 0.0;
    spinX = (GLfloat)xPos;
    spinY = (GLfloat)yPos;
}

void initDisplayList(int anim)
{
    int i, j, k;
    GLfloat x, y, v1[3], v2[3], normal[3];

    for (y=STARTCOORD, j=0; j<POINTSPERSIDE; y-=EDGELEN, j++)
    {
        for (x=-STARTCOORD, k=0; k<POINTSPERSIDE; x+=EDGELEN, k++)
        {
            p[k][j].x = x;
            p[k][j].y = y;
            p[k][j].z = 0.30*sin(0.1*sqrt(x*SPACE*x*SPACE + y*SPACE*y*SPACE) +
                                 (GLfloat)anim*2.0*M_PI/(GLfloat)ANIMLEN);
        }
    }

    glNewList(anim+1, GL_COMPILE);
    resetMaterial();
    for (j=0; j<POINTSPERSIDE-1; j++)
    {
        for (i=0; i<POINTSPERSIDE-1; i++)
        {
            changeMaterial();
            glBegin(GL_QUADS);
            /* do the normal vector */
            v1[0] = p[j][i].x - p[j+1][i].x;
            v1[1] = p[j][i].y - p[j+1][i].y;
            v1[2] = p[j][i].z - p[j+1][i].z;
            v2[0] = p[j+1][i+1].x - p[j+1][i].x;
            v2[1] = p[j+1][i+1].y - p[j+1][i].y;
            v2[2] = p[j+1][i+1].z - p[j+1][i].z;
            normCrossProd(v1, v2, normal);
            glNormal3fv(normal);
            /* vertices */
            glVertex3f(p[j][i].x,     p[j][i].y,     p[j][i].z);
            glVertex3f(p[j+1][i].x,   p[j+1][i].y,   p[j+1][i].z);
            glVertex3f(p[j+1][i+1].x, p[j+1][i+1].y, p[j+1][i+1].z);
            glVertex3f(p[j][i+1].x,   p[j][i+1].y,   p[j][i+1].z);
            glEnd();
        }
    }
    glEndList();
}

void init(void)
{
    int i;
    GLfloat lightPosition[] = {3.0, 0.0, 1.5, 1.0};
    GLfloat matSpecular[] = {0.8, 0.8, 0.8, 1.0};
    GLfloat matShininess[] = {80.0};

    glClearColor(0.0, 0.0, 0.0, 0.0);
    glShadeModel(GL_SMOOTH);
    glEnable(GL_DEPTH_TEST);
    glMaterialfv(GL_FRONT, GL_SPECULAR, matSpecular);
    glMaterialfv(GL_FRONT, GL_SHININESS, matShininess);
    glLightfv(GL_LIGHT0, GL_POSITION, lightPosition);
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    for (i=0; i<ANIMLEN; i++) {initDisplayList(i);}
    dispList = 1;
}

void keyboard(unsigned char key, int xPos, int yPos)
{
    switch (key)
    {
        case 27:  exit(0);
        case '1': x--;     break;
        case '2': x = 0.0; break;
        case '3': x++;     break;
        case 'q': y--;     break;
        case 'w': y = 0.0; break;
        case 'e': y++;     break;
        case 'a': z--;     break;
        case 's': z = 0.0; break;
        case 'd': z++;     break;
        case 'x': anim = !anim; break;
        case 'z': spin = !spin; break;
    }
}

void reshape(int w, int h)
{
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, (GLfloat) w/(GLfloat) h, 1.0, 20.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0, 0.0, -3.5);
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(500, 500);
    glutInitWindowPosition(100, 100);
    glutCreateWindow("FATAL FX' Dragnet 2k [www.fatalfx.com]");
    init();
    glutDisplayFunc(display);
    glutReshapeFunc(reshape);
    glutKeyboardFunc(keyboard);
    glutMotionFunc(motion);
    glutIdleFunc(idle);
    glutMainLoop();
    return 0;
}
In the following, we will describe the OpenGL functions and parameters
used in the sample code above.
glMaterialfv(GLenum face, GLenum pname, const GLfloat *params)
This function specifies the material parameters for the lighting model.
face - specifies which faces are being updated. This parameter must be
one of the following: GL_FRONT, GL_BACK, or GL_FRONT_AND_BACK.
pname - specifies the material parameter of the faces that are being
updated. This parameter must be one of the following: GL_AMBIENT,
GL_DIFFUSE, GL_SPECULAR, GL_EMISSION, GL_SHININESS,
GL_AMBIENT_AND_DIFFUSE, or GL_COLOR_INDEXES.
params - specifies a pointer to the values that pname will be set to.
glRotatef(GLfloat angle, GLfloat x, GLfloat y, GLfloat z)
This function produces a rotation of angle degrees around the vector
(x,y,z). The current matrix is multiplied by a rotation matrix, with
the product replacing the current matrix. Equivalent to calling
glMultMatrix with the argument

    | xx(1-c)+c   xy(1-c)-zs  xz(1-c)+ys  0 |
    | yx(1-c)+zs  yy(1-c)+c   yz(1-c)-xs  0 |
    | xz(1-c)-ys  yz(1-c)+xs  zz(1-c)+c   0 |
    | 0           0           0           1 |

where c = cos(angle), s = sin(angle), and ||(x,y,z)|| = 1 (i.e. the
vector is normalized). If the matrix mode is either GL_MODELVIEW or
GL_PROJECTION, all objects drawn after glRotate is called are rotated.
Use glPushMatrix and glPopMatrix to save and restore the unrotated
coordinate system.
glCallList(GLuint list)
This function causes the named display list to be executed. The
commands saved in the display list are executed in order, just as if
they were called without using a display list. If list has not been
defined as a display list, glCallList is ignored. Since this function
can appear inside a display list, a limit is placed on the nesting
level of display lists during display-list execution (to avoid the
possibility of infinite recursion resulting from display lists calling
one another). This limit depends on the implementation, but it is at
least 64. A display list can be executed between a call to glBegin and
the corresponding call to glEnd, so long as the display list includes
only commands that are allowed in this code block.
glutPostRedisplay(void)
This function marks the current window as needing to be redisplayed.
During the next iteration through glutMainLoop, the window's display
callback will be called to redisplay the window's normal plane.
Multiple calls to glutPostRedisplay before the next display callback
opportunity generate only a single redisplay callback. This function
may be called within a window's display or overlay display callback to
re-mark that window for redisplay.
glNewList(GLuint list, GLenum mode)
This function creates or replaces a display list. Display lists are groups of GL commands that have been stored for subsequent execution. Display lists are created with glNewList. All subsequent commands are placed in the display list, in the order issued, until glEndList is called.
list - a positive integer that becomes the unique name for the display list. Names can be created and reserved with glGenLists and tested for uniqueness with glIsList.
mode - a symbolic constant that can assume one of two values:
GL_COMPILE - commands are only compiled
GL_COMPILE_AND_EXECUTE - commands are compiled and executed as they are added to the display list
Note: Some commands are not compiled into the display list but are executed immediately, regardless of the display-list mode. These will be noted and discussed in greater detail as they are implemented.
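Putting glNewList and glCallList together, a minimal sketch (the list name quadList and the quad it stores are made up for illustration) looks like this:

```cpp
// Compile a small quad into a display list once (e.g. from init()),
// then replay it every frame from the display callback.
GLuint quadList;                       // hypothetical display-list name

void buildList(void)
{
    quadList = glGenLists(1);          // reserve one unused list name
    glNewList(quadList, GL_COMPILE);   // record, but do not execute yet
        glBegin(GL_POLYGON);
            glVertex3f(0.25, 0.25, 0.0);
            glVertex3f(0.75, 0.25, 0.0);
            glVertex3f(0.75, 0.75, 0.0);
            glVertex3f(0.25, 0.75, 0.0);
        glEnd();
    glEndList();                       // close the list
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glCallList(quadList);              // replay the recorded commands
    glFlush();
}
```

Like all GL call sequences, this fragment requires an existing GL context.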
glLightfv(GLenum light, GLenum pname, const GLfloat *params)
This function sets the light source parameters. The three parameters are defined below:
light - Specifies a light. The maximum number of lights depends on the implementation, but at least eight lights are supported. They are identified by symbolic names of the form GL_LIGHTi where 0 <= i < GL_MAX_LIGHTS.
pname - Specifies a light source parameter for light. GL_AMBIENT, GL_DIFFUSE, GL_SPECULAR, GL_POSITION, GL_SPOT_DIRECTION, GL_SPOT_EXPONENT, GL_SPOT_CUTOFF, GL_CONSTANT_ATTENUATION, GL_LINEAR_ATTENUATION, and GL_QUADRATIC_ATTENUATION are accepted.
params - Specifies a pointer to the value or values that parameter pname of light source light will be set to.
glShadeModel(GLenum mode)
This function selects flat or smooth shading. Smooth shading is the default and causes the computed colors of vertices to be interpolated as the primitive is being rasterized. For curved surfaces and for flat surfaces illuminated by a localized light source, this means that each resulting pixel fragment is assigned a different color. Flat shading selects the computed color of just one vertex and assigns it to all the pixel fragments generated by rasterizing a single primitive. In either case, the computed color of a vertex is affected by the location and type of lighting specified (assuming that lighting is enabled). Otherwise the vertex color is set to the current color at the time the vertex was specified.
mode - Specifies a symbolic value representing a shading technique. Accepted values are GL_FLAT and GL_SMOOTH.
glEnable(GLenum cap) and glDisable(GLenum cap)
These functions enable and disable server-side GL capabilities. The initial (default) value for each capability (with the exception of GL_DITHER) is GL_FALSE. Both glEnable and glDisable take a single argument, cap, which can assume one of the following values:
GL_ALPHA_TEST - alpha testing (glAlphaFunc)
GL_AUTO_NORMAL - generates normal vectors when either GL_MAP2_VERTEX_3 or GL_MAP2_VERTEX_4 is used to generate vertices (glMap2)
GL_BLEND - blends the incoming RGBA color values with the values in the color buffers (glBlendFunc)
GL_CLIP_PLANEi - clips geometry against user-defined clipping plane i (glClipPlane)
GL_COLOR_LOGIC_OP - applies the currently selected logical operation to the incoming RGBA color and color buffer values (glLogicOp)
GL_COLOR_MATERIAL - has one or more material parameters track the current color (glColorMaterial)
GL_CULL_FACE - culls polygons based on their winding in window coordinates (glCullFace)
GL_DEPTH_TEST - does depth comparisons and updates the depth buffer. Even if the depth buffer exists and the depth mask is nonzero, the depth buffer is not updated if the depth test is disabled (glDepthFunc, glDepthRange)
GL_DITHER - dithers color components or indices before they are written to the color buffer
GL_FOG - blends a fog color into the post-texturing color (glFog)
GL_INDEX_LOGIC_OP - applies the currently selected logical operation to the incoming index and color buffer indices (glLogicOp)
GL_LIGHTi - includes light i in the evaluation of the lighting equation (glLightModel, glLight)
GL_LIGHTING - uses the current lighting parameters to compute the vertex color or index (glMaterial, glLightModel, glLight)
GL_LINE_SMOOTH - draws lines with correct filtering (glLineWidth)
GL_LINE_STIPPLE - uses the current line stipple pattern when drawing lines (glLineStipple)
GL_MAP1_COLOR_4 - calls to glEvalCoord1, glEvalMesh1, and glEvalPoint1 will generate RGBA values (glMap1)
GL_MAP1_INDEX - calls to glEvalCoord1, glEvalMesh1, and glEvalPoint1 will generate color indices (glMap1)
GL_MAP1_NORMAL - calls to glEvalCoord1, glEvalMesh1, and glEvalPoint1 will generate normals (glMap1)
GL_MAP1_TEXTURE_COORD_1 - enables generation of s texture coordinates
GL_MAP1_TEXTURE_COORD_2 - enables generation of s and t texture coordinates
GL_MAP1_TEXTURE_COORD_3 - enables generation of s, t and r texture coordinates
GL_MAP1_TEXTURE_COORD_4 - enables generation of s, t, r and q texture coordinates
GL_MAP1_VERTEX_3 - enables generation of x, y and z vertex coordinates
GL_MAP1_VERTEX_4 - enables generation of x, y, z and w vertex coordinates
GL_MAP2_COLOR_4 - enables generation of RGBA values
GL_MAP2_INDEX
GL_MAP2_NORMAL
GL_MAP2_TEXTURE_COORD_1
GL_MAP2_TEXTURE_COORD_2
GL_MAP2_TEXTURE_COORD_3
GL_MAP2_TEXTURE_COORD_4
GL_MAP2_VERTEX_3
GL_MAP2_VERTEX_4
- the remaining GL_MAP2_* capabilities are the two-dimensional evaluator analogs of the GL_MAP1_* capabilities above (glMap2)
GL_NORMALIZE - normal vectors specified with glNormal are scaled to unit length after transformation
GL_POINT_SMOOTH - draws points with proper filtering (glPointSize)
GL_POLYGON_OFFSET_FILL - if the polygon is rendered in GL_FILL mode, an offset is added to depth values of a polygon's fragments before the depth comparison is performed (glPolygonOffset)
GL_POLYGON_OFFSET_LINE - if the polygon is rendered in GL_LINE mode, an offset is added to depth values of a polygon's fragments before the depth comparison is performed (glPolygonOffset)
GL_POLYGON_OFFSET_POINT - an offset is added to depth values of a polygon's fragments before the depth comparison is performed, if the polygon is rendered in GL_POINT mode (glPolygonOffset)
GL_POLYGON_SMOOTH - draws polygons with proper filtering (if disabled, draws aliased polygons)
GL_POLYGON_STIPPLE - uses the current polygon stipple pattern when rendering polygons (glPolygonStipple)
GL_SCISSOR_TEST - discards fragments that are outside the scissor rectangle (glScissor)
GL_STENCIL_TEST - does stencil testing and updates the stencil buffer (glStencilFunc and glStencilOp)
GL_TEXTURE_1D - one-dimensional texturing is performed (unless two-dimensional texturing is also enabled) (glTexImage1D)
GL_TEXTURE_2D - two-dimensional texturing is performed (glTexImage2D)
GL_TEXTURE_GEN_Q - the q texture coordinate is computed using the texture generation function defined with glTexGen
GL_TEXTURE_GEN_R - the r texture coordinate is computed using the texture generation function defined with glTexGen
GL_TEXTURE_GEN_S - the s texture coordinate is computed using the texture generation function defined with glTexGen
GL_TEXTURE_GEN_T - the t texture coordinate is computed using the texture generation function defined with glTexGen
glMaterialfv(GLenum face, GLenum pname, const GLfloat *params)
This function specifies the material parameters for the lighting model. It takes three arguments:
face - specifies whether the GL_FRONT materials, the GL_BACK materials or the GL_FRONT_AND_BACK materials will be modified.
pname - specifies which of several parameters in one or both sets will be modified:
GL_AMBIENT
GL_DIFFUSE
GL_SPECULAR
GL_EMISSION
GL_SHININESS
GL_AMBIENT_AND_DIFFUSE
GL_COLOR_INDEXES
params - specifies what value or values will be assigned to the specified parameter. Material parameters are used in the lighting equation that may be applied to each vertex.
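As a sketch of how glLightfv, glMaterialfv, glShadeModel and glEnable fit together (the particular colors, light position and function name initLighting are made up for illustration), a typical initialization might contain:

```cpp
// Hypothetical lighting setup; assumes a GL context already exists
// (e.g. created with GLUT as in the demos later in these notes).
void initLighting(void)
{
    GLfloat lightPos[]  = { 1.0f, 1.0f, 1.0f, 0.0f };  // directional light
    GLfloat whiteDiff[] = { 1.0f, 1.0f, 1.0f, 1.0f };
    GLfloat matDiff[]   = { 0.8f, 0.2f, 0.2f, 1.0f };  // reddish material
    GLfloat matSpec[]   = { 1.0f, 1.0f, 1.0f, 1.0f };
    GLfloat shininess[] = { 50.0f };

    glShadeModel(GL_SMOOTH);                     // interpolate vertex colors
    glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, whiteDiff);
    glMaterialfv(GL_FRONT, GL_DIFFUSE, matDiff);
    glMaterialfv(GL_FRONT, GL_SPECULAR, matSpec);
    glMaterialfv(GL_FRONT, GL_SHININESS, shininess);
    glEnable(GL_LIGHTING);                       // use the lighting equation
    glEnable(GL_LIGHT0);                         // include light 0
    glEnable(GL_DEPTH_TEST);
}
```

This fragment needs a live GL context to run; it only illustrates the calling pattern.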
gluPerspective(GLdouble fovy, GLdouble aspect, GLdouble zNear, GLdouble zFar)
This function sets up a perspective projection matrix by specifying a viewing frustum into the world coordinate system. In general, the aspect ratio in gluPerspective should match the aspect ratio of the associated viewport. For example, aspect = 2.0 means that the viewer's angle of view is twice as wide (x direction) as it is high (y direction).
fovy - specifies the field of view angle, in degrees, in the y direction
aspect - specifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the ratio of x (width) to y (height)
zNear - specifies the distance from the viewer to the near clipping plane (always positive)
zFar - specifies the distance from the viewer to the far clipping plane (always positive)
The matrix generated by gluPerspective is multiplied by the current matrix, just as if glMultMatrix were called with the generated matrix. The perspective matrix can be loaded onto the current matrix stack by preceding the call to gluPerspective with a call to glLoadIdentity.
Given f defined as follows:
f = cotangent(fovy/2.0)
the generated matrix is

| f/aspect   0    0                            0                          |
| 0          f    0                            0                          |
| 0          0    (zFar+zNear)/(zNear-zFar)    2*zFar*zNear/(zNear-zFar)  |
| 0          0   -1                            0                          |
Note: The depth buffer precision is affected by the values specified for zNear and zFar. The greater the ratio of zFar to zNear, the less effective the depth buffer will be at distinguishing between surfaces that are near each other. If
r = zFar/zNear
approximately log_{2}r bits of depth buffer precision are lost. Because r approaches infinity as zNear approaches 0, zNear must never be set to 0.
glTranslatef(GLfloat x, GLfloat y, GLfloat z)
x, y, z - specify the x, y, and z coordinates of a translation vector.
This function multiplies the current matrix by a translation matrix, with the product replacing the current matrix, which produces the same result as calling glMultMatrix with the following matrix for its argument:

| 1  0  0  x |
| 0  1  0  y |
| 0  0  1  z |
| 0  0  0  1 |
Mathematical Foundations
Basic Definitions
1. A matrix M : R^{3} -> R^{3} is called a linear transformation and maps vectors to vectors by Y = MX. The term linearity refers to the property that M(cU + V) = cMU + MV for any scalar c and any vectors U and V.
2. The zero matrix is a matrix with all zero entries.
3. The identity matrix is the matrix I with 1 on the main diagonal entries and 0 for all other entries.
4. A matrix is said to be invertible if there exists a matrix M^{-1} such that MM^{-1} = M^{-1}M = I.
5. The transpose of a matrix M = [m_{ij}] is the matrix M^{T} = [m_{ji}]. That is, the rows and columns are interchanged in M^{T} (or the matrix is flipped about its main diagonal).
6. A matrix M is symmetric if M = M^{T}.
7. A matrix M is skew-symmetric if M^{T} = -M.
8. A diagonal matrix D = [d_{ij}] has the property that d_{ij} = 0 when i is not equal to j. Sometimes we use the notation D = diag{a,b,c}.
Scaling
If a diagonal matrix D = diag{d_{0}, d_{1}, d_{2}} has all positive entries, it is a scaling matrix. Each diagonal term represents how much stretching or shrinking occurs for the corresponding coordinate direction. Uniform scaling is D = sI = diag{s,s,s} for s > 0.
Rotation
A matrix R is a rotation matrix if its transpose and inverse are the same matrix, that is, R^{-1} = R^{T}, in which case RR^{T} = R^{T}R = I. The matrix has a corresponding unit-length axis of rotation U and angle of rotation f. The choice is not unique, since -U (with angle -f) is also an axis of rotation, and f + 2πk for any integer k is an angle of rotation. If U = (u_{0},u_{1},u_{2}), we can define the skew-symmetric matrix S by

|  0      -u_{2}   u_{1} |
|  u_{2}   0      -u_{0} |
| -u_{1}   u_{0}   0     |

The rotation corresponding to axis U and angle f is
R = I + (sin f) S + (1 - cos f) S^{2}
Translation
Translation of vectors by a fixed vector T in R^{3} is represented by the function Y = X + T for X and Y in R^{3}. It is not possible to represent this translation as a linear transformation of the form Y = MX for some constant matrix M. However, if the problem is embedded in a four-dimensional space, it is possible to represent translation with a linear transformation (called a homogeneous transformation).
ref: 3D Game Engine Design, by David H. Eberly, Morgan Kaufmann
Homogeneous Transformations
A vector (x,y,z) in R^{3} can be mapped uniquely onto a vector (x,y,z,1) in R^{4}. Other vectors (x,y,z,w) in R^{4} can be projected onto the hyperplane w=1 by (x,y,z,w) -> (x/w,y/w,z/w,1). An entire line of points through the origin (0,0,0,0) is projected onto the single point (x,y,z,1). All of R^{4} \ {0} is partitioned into equivalence classes, each class having representative projection (x,y,z,1). A 4-tuple in this setting is called a homogeneous coordinate. Two homogeneous coordinates that are equivalent are indicated to be so by (x_{0},y_{0},z_{0},w_{0}) ~ (x_{1},y_{1},z_{1},w_{1}).
Transformations can be applied to homogeneous coordinates to obtain other homogeneous coordinates. Such a 4x4 matrix H = [h_{ij}], 0 <= i <= 3 and 0 <= j <= 3, is called a homogeneous transformation as long as h_{33} = 1. Usually, homogeneous matrices are written as 2x2 block matrices,

H = | M      T |
    | S^{T}  1 |

where M is a 3x3 matrix, T is 3x1, S^{T} is 1x3 and 1 is a scalar. The product of a homogeneous coordinate (X, w) and a homogeneous transformation in block format is

H | X | = | MX + wT    |
  | w |   | S^{T}X + w |

Any 3x3 linear transformation M can be represented by the homogeneous matrix

| M      0 |
| 0^{T}  1 |
Translation by a vector T can also be represented by a homogeneous transformation,

| I      T |
| 0^{T}  1 |

The two transformations can be composed to represent Y = MX + T as

| M      T |
| 0^{T}  1 |

Assuming M is invertible, the equation can be solved for X = M^{-1}(Y - T). Thus, the inverse of a homogeneous matrix is

| M^{-1}  -M^{-1}T |
| 0^{T}    1       |

Perspective projection can also be represented by a homogeneous matrix where the lower-left entry is not the zero vector. We usually discuss the geometric pipeline in terms of products of homogeneous transformations. That notation is a convenience and is not particularly useful in an implementation unless the underlying hardware (and/or graphics package) has native support for vector and matrix operations in four dimensions (e.g. openGL and SGI).
Transformations in 2D
We will begin with a simple 2D rotation, starting from our source code for drawing a triangle (red-green-blue blended) and adding the necessary code for rotation. We first need to tell GLUT that we want a double buffer. Note the changes to glutInitDisplayMode() in main(). We will also need to clear the depth buffer (since we now have more than one buffer), which is accomplished by including GL_DEPTH_BUFFER_BIT in the parameter list of the glClear() function. Finally, we will want to draw in one display buffer while we show the other. This is handled by the glutSwapBuffers() function.
// simple 2D rotation demo

#include <GL/glut.h>

GLfloat yang = 1.2;

void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glRotatef(yang, 0.0, 1.0, 0.0);
    glBegin(GL_POLYGON);
        glColor3f(1.0, 0.0, 0.0);
        glVertex2f(0.0, 0.45);
        glColor3f(0.0, 1.0, 0.0);
        glVertex2f(-0.45, -0.45);
        glColor3f(0.0, 0.0, 1.0);
        glVertex2f(0.45, -0.45);
    glEnd();
    glutSwapBuffers();
    glFlush();
}

void init()
{
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glMatrixMode(GL_PROJECTION);
    gluOrtho2D(-1.0, 1.0, -1.0, 1.0);
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowSize(400, 400);
    glutInitWindowPosition(0, 0);
    glutCreateWindow("2D Rotation");
    glutDisplayFunc(display);
    glutIdleFunc(display);
    init();
    glutMainLoop();
    return 0;
}
Now let's translate and rotate the triangle. In order to maintain our view we will need to call glLoadIdentity() before each translation. Otherwise our translations accumulate and the triangle moves out of our viewing window after a few frames of animation. Calling glLoadIdentity() resets the transformation matrix, so our rotation is also being reset. To make the triangle appear to rotate we have to continually increase the size of the rotation angle up to 360.0 degrees. We can accomplish this by modifying the display() function in the following manner.
void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(0.4, -0.25, 0.0);
    glRotatef(yang, 0.0, 1.0, 0.0);
    glBegin(GL_POLYGON);
        glColor3f(1.0, 0.0, 0.0);
        glVertex2f(0.0, 0.45);
        glColor3f(0.0, 1.0, 0.0);
        glVertex2f(-0.45, -0.5);
        glColor3f(0.0, 0.0, 1.0);
        glVertex2f(0.45, -0.45);
    glEnd();
    yang = yang + 1.2;
    if (yang > 360.0)
        yang = 0.0;
    glutSwapBuffers();
    glFlush();
}
Before we move on to 3D transformations, let's make sure we understand the openGL graphics pipeline. Assume that we want to generate an animation in which two triangles are independently rotated about two different axes. Modify the display() function to show another triangle translated into the upper left portion of the viewing window and rotating about the x axis.
void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(0.4, -0.25, 0.0);
    glRotatef(yang, 0.0, 1.0, 0.0);
    glBegin(GL_POLYGON);
        glColor3f(1.0, 0.0, 0.0);
        glVertex2f(0.0, 0.45);
        glColor3f(0.0, 1.0, 0.0);
        glVertex2f(-0.45, -0.5);
        glColor3f(0.0, 0.0, 1.0);
        glVertex2f(0.45, -0.45);
    glEnd();
    glLoadIdentity();
    glTranslatef(-0.4, 0.25, 0.0);
    glRotatef(xang, 1.0, 0.0, 0.0);   // requires a second global: GLfloat xang = 0.0;
    glBegin(GL_POLYGON);
        glColor3f(1.0, 1.0, 0.0);
        glVertex2f(0.0, 0.45);
        glColor3f(0.0, 1.0, 1.0);
        glVertex2f(-0.45, -0.5);
        glColor3f(1.0, 0.0, 1.0);
        glVertex2f(0.45, -0.45);
    glEnd();
    yang = yang + 1.2;
    if (yang > 360.0)
        yang = 0.0;
    xang = xang - 0.5;
    if (xang < 0.0)
        xang = 360.0;
    glutSwapBuffers();
    glFlush();
}
In order to appreciate the importance of the second glLoadIdentity() call, run your program with and without this function call.
Transformations in 3D
We will replace the lower-right triangle in our 2D transformation program above with a pyramid.
glBegin(GL_TRIANGLES);
    glColor3f(1.0, 1.0, 0.0);
    glVertex3f( 0.0,  0.5,  0.0);
    glVertex3f(-0.5, -0.5,  0.5);
    glVertex3f( 0.5, -0.5,  0.5);
    glColor3f(0.0, 1.0, 0.0);
    glVertex3f( 0.0,  0.5,  0.0);
    glVertex3f( 0.5, -0.5,  0.5);
    glVertex3f( 0.5, -0.5, -0.5);
    glColor3f(0.0, 0.0, 1.0);
    glVertex3f( 0.0,  0.5,  0.0);
    glVertex3f( 0.5, -0.5, -0.5);
    glVertex3f(-0.5, -0.5, -0.5);
    glColor3f(1.0, 0.0, 0.0);
    glVertex3f( 0.0,  0.5,  0.0);
    glVertex3f(-0.5, -0.5, -0.5);
    glVertex3f(-0.5, -0.5,  0.5);
glEnd();
The four triangles have a common vertex (0.0, 0.5, 0.0) and each triangle shares a base vertex with each of its neighbors. We have left the bottom face (a square) open. We will rotate this pyramid about both the x and y axes, using the glRotatef() function,
glRotatef(yang, 1.0, 1.0, 0.0);
Running this program with no other modifications creates a strange effect. The faces of the pyramid are drawn in the order they are listed without regard for which is in the front or which faces are obscured. We have to enable GL_DEPTH_TEST to force openGL to display the solid object correctly. Depth testing can be enabled once in the init() function.
void init()
{
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glClearDepth(1.0);
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    gluOrtho2D(-1.0, 1.0, -1.0, 1.0);
}
The complete listing of the modified program is given below.
// simple 3D rotation demo

#include <GL/glut.h>

GLfloat yang = 0.0;
GLfloat xang = 0.0;

void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    glTranslatef(0.4, -0.25, 0.0);
    glRotatef(yang, 1.0, 1.0, 0.0);
    glBegin(GL_TRIANGLES);
        glColor3f(1.0, 1.0, 0.0);
        glVertex3f( 0.0,  0.5,  0.0);
        glVertex3f(-0.5, -0.5,  0.5);
        glVertex3f( 0.5, -0.5,  0.5);
        glColor3f(0.0, 1.0, 0.0);
        glVertex3f( 0.0,  0.5,  0.0);
        glVertex3f( 0.5, -0.5,  0.5);
        glVertex3f( 0.5, -0.5, -0.5);
        glColor3f(0.0, 0.0, 1.0);
        glVertex3f( 0.0,  0.5,  0.0);
        glVertex3f( 0.5, -0.5, -0.5);
        glVertex3f(-0.5, -0.5, -0.5);
        glColor3f(1.0, 0.0, 0.0);
        glVertex3f( 0.0,  0.5,  0.0);
        glVertex3f(-0.5, -0.5, -0.5);
        glVertex3f(-0.5, -0.5,  0.5);
    glEnd();
    glLoadIdentity();
    glTranslatef(-0.4, 0.25, 0.0);
    glRotatef(xang, 1.0, 0.0, 0.0);
    glBegin(GL_POLYGON);
        glColor3f(1.0, 1.0, 0.0);
        glVertex2f(0.0, 0.45);
        glColor3f(0.0, 1.0, 1.0);
        glVertex2f(-0.45, -0.5);
        glColor3f(1.0, 0.0, 1.0);
        glVertex2f(0.45, -0.45);
    glEnd();
    yang = yang + 1.2;
    if (yang > 360.0)
        yang = 0.0;
    xang = xang - 0.5;
    if (xang < 0.0)
        xang = 360.0;
    glutSwapBuffers();
    glFlush();
}

void init()
{
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glClearDepth(1.0);
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    gluOrtho2D(-1.0, 1.0, -1.0, 1.0);
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowSize(400, 400);
    glutInitWindowPosition(0, 0);
    glutCreateWindow("3D Rotation");
    glutDisplayFunc(display);
    glutIdleFunc(display);
    init();
    glutMainLoop();
    return 0;
}
Each of the faces of the pyramid has been set to a single color, and the bottom of the pyramid is left open. Also note that the other object (the blended 2D triangle) can intersect one of the faces of the pyramid. Since we enabled GL_DEPTH_TEST, the display of the intersecting faces is handled properly. This is a powerful feature of openGL.
Types of Transformations
There are three types of transformations that are of interest in computer graphics applications. These are linear, affine and projective.
Linear - Linear transformations preserve parallel lines and act on lines to produce another line (or possibly a point). The origin (zero vector) is transformed to the origin. Consider the examples of scaling and rotation.
Affine - Affine transformations are more general than linear transformations. Affine transformations preserve parallel lines and map lines to lines or points, but the zero vector is not necessarily preserved. These transformations can include translation.
Projective - In this transformation, parallel lines are not necessarily preserved, but lines are mapped to lines or points (i.e. straight lines are still projected as straight lines). The general camera model which we have studied uses projective transformations.
In general, since straight lines are mapped to straight lines in all of these transformations, we only need to save the vertices (endpoints) of the lines being transformed.