Reflections Using Render to Texture and Projective
Texture Mapping
A common way to create reflections in OpenGL is to
use the stencil buffer. Typically two rendering passes
per frame are used: the first renders the scene normally;
the second creates the reflection by re-rendering the
scene flipped about the plane of the reflecting surface.
Use of the stencil buffer ensures that the second-pass
reflection image is blended only over those parts of
the first-pass image that represent reflecting surfaces.
NeHe has a tutorial showing how to do this here.
The approach used in the glBase demo is similar but
involves rendering the reflection into a texture map.
Again two rendering passes are used: the first pass
renders the reflection into the reflection texture;
the second pass renders the scene normally, blending
the reflection texture onto the reflecting surfaces.
The advantage of this approach is that, when creating
reflections on water, it is easy to distort the texture
coordinates so that the reflection shimmers as the water
ripples. The disadvantage is that pixelation artefacts
become visible if the reflection texture is significantly
smaller than the frame buffer.
Creating the reflection texture
The diagram below shows how the reflection texture
is created.
It should be clear from the diagram that the image
of the figure seen by the view camera via reflection
in the horizontal plane is the same as the image of
the figure seen by the reflection camera looking through
the horizontal plane. Hence, to calculate the position
and orientation of the reflection camera we just flip
the position and orientation of the view camera about
the plane of the reflection. This holds for any reflection
plane, but keeping the reflection plane axially aligned
simplifies the calculation.
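For an axially aligned plane such as y = h, the flip is just a sign change and an offset. The following is a minimal sketch of the idea in C; the Camera struct and function name are illustrative, not the glBase demo's actual cam type:

```c
/* Minimal camera: position plus view direction, both in world space.
   (Illustrative struct; the glBase demo's cam type will differ.) */
typedef struct { float pos[3]; float dir[3]; } Camera;

/* Mirror a camera about the horizontal plane y = h: the y position is
   reflected through the plane and the y component of the view direction
   is negated; x and z are unchanged. */
Camera mirror_camera_y(Camera c, float h)
{
    Camera r = c;
    r.pos[1] = 2.0f * h - c.pos[1];
    r.dir[1] = -c.dir[1];
    return r;
}
```

A complete implementation would also mirror the camera's up vector in the same way. Note too that reflection reverses handedness, so the front-face winding should be swapped (e.g. with glFrontFace) while rendering the reflection pass.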
So, to create the reflection texture for the current
view camera position and orientation we
- position the reflection camera, as described above
- resize the viewport to have the same dimensions
as the reflection texture map
- set up a clip plane so that we won't render anything
behind the reflection plane
- render the scene into the reflection texture map
(more on this later).
In the glBase demo this is all done by the function
updateReflectionTexture(cam c).
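The clip-plane step above can be made concrete. OpenGL's glClipPlane keeps points where a·x + b·y + c·z + d ≥ 0, so for a horizontal reflection plane y = h the coefficients are (0, 1, 0, -h). A minimal sketch, with an illustrative function name:

```c
/* Build an OpenGL clip-plane equation that keeps only geometry on the
   reflected side of the horizontal plane y = h.  OpenGL keeps points
   where a*x + b*y + c*z + d >= 0, so to keep y >= h we use
   (a, b, c, d) = (0, 1, 0, -h). */
void reflection_clip_plane(double h, double eqn[4])
{
    eqn[0] = 0.0;
    eqn[1] = 1.0;
    eqn[2] = 0.0;
    eqn[3] = -h;
}

/* Usage, once a GL context is current:
       double eqn[4];
       reflection_clip_plane(water_height, eqn);
       glClipPlane(GL_CLIP_PLANE0, eqn);
       glEnable(GL_CLIP_PLANE0);
*/
```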
Projecting the reflection texture
Once the reflection texture has been created, the scene
is rendered normally and the reflection texture is blended
onto the reflecting surfaces. The question here is how
to generate the texture coordinates for applying the
reflection texture. Looking at the diagram again, we
can see that the reflection texture is perspectively
correct for a 'canvas' that is perpendicular to the
view direction of the reflection camera (and of the
view camera). This canvas is just the viewport that
we set up when rendering the reflection scene. One way
to map the reflection texture from this canvas onto
the reflection plane is to think of the reflection camera
as a slide projector that projects the reflection texture
onto the reflection surface. This technique is called
projective texture mapping.
Projective texture mapping is described in detail here
with a demo here. In a nutshell, it takes advantage of
the fact that OpenGL generalises the usual two-component
texture coordinate (s,t) to a four-component homogeneous
texture coordinate (s,t,r,q), which allows a modelview
and projection transform to be loaded into the texture
matrix. In the glBase demo app the same modelview and
projection transforms used to create the reflection
texture are loaded into the texture matrix when rendering
the reflection texture onto the water surface. This is
all done in the function loadProjectionMatrix(cam c).
The texture coordinates are then specified as three-dimensional
texture coordinates: they are the same values as those
passed to glVertex3f to specify the spatial coordinates
of the triangle vertices making up the water surface,
but with small perturbations added in to distort the
texture. This is done in the function drawReflection(cam c).
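The texture matrix for projective texturing typically has the form T = Bias · Projection · Modelview, where Projection and Modelview are the reflection camera's transforms and Bias remaps clip-space coordinates from [-1,1] to texture space [0,1]. A sketch of the Bias part, which is the only piece OpenGL does not already build for you (the matrix layout is standard; the helper function is illustrative):

```c
/* Column-major 4x4 matrix, as OpenGL expects.  Bias scales by 0.5 and
   translates by 0.5 in s, t and r, remapping [-1,1] to [0,1]. */
static const double bias[16] = {
    0.5, 0.0, 0.0, 0.0,
    0.0, 0.5, 0.0, 0.0,
    0.0, 0.0, 0.5, 0.0,
    0.5, 0.5, 0.5, 1.0,   /* translation lives in the last column */
};

/* out = m * v for a column-major 4x4 matrix and homogeneous vector. */
void mat4_mul_vec4(const double m[16], const double v[4], double out[4])
{
    for (int r = 0; r < 4; ++r)
        out[r] = m[r]     * v[0] + m[4 + r]  * v[1]
               + m[8 + r] * v[2] + m[12 + r] * v[3];
}
```

After the perspective divide by q, the resulting (s/q, t/q) falls in [0,1] for any point visible to the reflection camera, which is exactly the mapping the "slide projector" analogy requires.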
Render to texture
The only question remaining is how to render into a
texture using OpenGL. As far as I know OpenGL doesn't
actually support this operation directly, so you have
to render the scene either into the frame buffer's back
buffer or into an offscreen buffer and then use glCopyTexSubImage
to copy the pixels into a texture object.
In the first version of the demo I used the back-buffer
approach, but there was a problem in the way I implemented
it: I didn't take account of the fact that the frame
buffer will not always be big enough to match the dimensions
of the texture map. For example, with the first version
of the demo, if you resize the window to, say, half its
default width, half the reflection disappears, because
the code is trying to read a 512-pixel-wide image out
of a frame buffer that is now only about 200 pixels wide.
The new version fixes this problem in two ways:
- If your OpenGL supports the pixel buffer extensions,
they should be used to create an offscreen buffer
for rendering the reflection. The offscreen buffer
won't be resized when the frame buffer window is.
The downside is that more memory is used on the graphics
card.
- If pixel buffers aren't supported, or if you use
the P key to switch off use of pixel buffers, the
back buffer will be used, but the code now adjusts
the amount of the reflection texture map actually
used to make sure that it never exceeds the frame
buffer size.
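The second fix amounts to clamping the glCopyTexSubImage copy region to the frame-buffer size and scaling the texture coordinates by the fraction of the texture actually filled. A sketch, with illustrative names rather than the demo's actual code:

```c
/* When copying the back buffer into the reflection texture, the copy
   region must not exceed the current frame-buffer size.  Clamp the
   region and record what fraction of the texture is valid, so texture
   coordinates can be scaled to match. */
typedef struct { int w, h; double max_s, max_t; } CopyRegion;

CopyRegion clamp_copy_region(int tex_w, int tex_h, int fb_w, int fb_h)
{
    CopyRegion r;
    r.w = fb_w < tex_w ? fb_w : tex_w;
    r.h = fb_h < tex_h ? fb_h : tex_h;
    r.max_s = (double)r.w / tex_w;   /* scale s coordinates by this */
    r.max_t = (double)r.h / tex_h;   /* scale t coordinates by this */
    return r;
}
```

The clamped width and height are what get passed to glCopyTexSubImage, and max_s/max_t bound the texture coordinates used when drawing the water.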
Another point worth making is that you should make
sure that the texture and the frame buffer or pixel buffer
have the same pixel format; otherwise there will be
a performance hit converting from one format to the
other. If you stick to GL_RGB or GL_RGBA the driver
should do this automatically.
Performance
I asked anyone who downloaded the demo to send me performance
results with reflection update enabled and disabled
(the only difference is that with it disabled the call
to glCopyTexSubImage is not made). Some typical results
are shown below; note that these are for the first version
of the demo, without pixel buffer support.
System                         | FPS with reflection update ON | FPS with reflection update OFF
Pentium III 500 / GeForce 2 MX | 125                           | 130
Dual Celeron 300 / TNT 2       | 59                            | 60
What these results show is that, on some systems at
least, glCopyTexSubImage can be used to implement a
render to texture with very little performance hit.
However, performance depends on how well glCopyTexSubImage
has been implemented in the driver. For example, see
the table below:
System                     | FPS with reflection update ON | FPS with reflection update OFF
Athlon / GeForce 2 GTS     | 32                            | 300
Celeron 466 / Radeon 64DDR | 2                             | 250
These figures show a big performance hit for glCopyTexSubImage.
The difference on the GeForce 2 GTS system is that its
drivers are not the latest version; presumably with
the latest drivers performance would be around 290 FPS,
judging from the earlier TNT and GeForce MX figures.
The difference on the Radeon is that its drivers don't
accelerate glCopyTexSubImage at all when copying from
the frame buffer. However, it should be possible to
hit the Radeon's optimised path when using pixel buffers,
so I hope to have some revised performance figures soon.
'nuff said
Last update: 28th May 2001