3 February 2008

Inspired by SuperJer, I decided to try my hand at writing a raytracer from scratch. By scratch I mean I started with a GTK+ demo program that... opens a window. I used no graphics libraries other than the GTK+/Cairo facilities for taking a pixbuffer and rendering it to the screen.

Figuring out the math and debugging it was not all that bad, and I got to hone my program-design skills.

This happened over the course of almost a week, but I estimate I spent about 2-3 full days actually working on it.

*t = +3h*. It took me a couple of hours to figure out that
GTK+/Cairo was an acceptable tool which would allow me to take arbitrary
pixel data and paint it to the screen. This is just a blank canvas.

*t = +3.5h*. I figured out how colors are supposed to be
represented in the pixbuf array. Here are some cyan pixels.

*t = +4.5h*. Here's a test checkerboard pattern. Raytracing works
by taking a ray and throwing it into a scene at a particular angle. When
the ray collides with an object, we record what color that object is. Do
this a million times, once for each pixel, and you get a raytraced
image.

The lines are a bit curved because of the specific mapping I chose to convert pixel locations to angles in the scene.
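The per-pixel casting loop can be sketched like this. This is a hedged illustration, not the original code: the function names, the field of view, and the particular pixel-to-angle mapping are my own choices.

```cpp
#include <cmath>

// Illustrative sketch: map each pixel to a pair of viewing angles and
// build a unit ray direction from them. Mapping pixels to angles
// directly (rather than to points on a flat image plane) is one simple
// choice, and it bends straight lines slightly.
struct Vec3 { double x, y, z; };

// Ray direction for pixel (px, py) on a w-by-h canvas; fov is the
// horizontal field of view in radians.
Vec3 pixel_direction(int px, int py, int w, int h, double fov) {
    double yaw   = ((px + 0.5) / w - 0.5) * fov;          // left-right angle
    double pitch = (0.5 - (py + 0.5) / h) * fov * h / w;  // up-down angle
    return { std::sin(yaw) * std::cos(pitch),
             std::sin(pitch),
             std::cos(yaw) * std::cos(pitch) };
}
```

The camera here looks down the +z axis; the center pixel maps to (nearly) straight ahead, and each pixel's direction comes out unit length by construction.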

*t = +9.5h*. I moved the camera and added a reflecting sphere.
This looks really neat but is actually very easy. When a ray hits the
sphere, instead of recording a particular color, we compute the angle at
which the ray bounces off the sphere, and then look in *that*
direction to get a color.

Once we get that reflected color, we lighten it a bit, which is why the reflected image looks lighter than the original image.

It is sort of hairy to compute exactly where an arbitrary ray hits an arbitrary sphere, so I computed the intersection using binary search.
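Both pieces can be sketched briefly. The reflection direction is the standard formula r = d − 2(d·n)n, and the binary-search intersection marches along the ray until it first lands inside the sphere, then bisects. This is my reconstruction of the approach described above, not the original code; all names are illustrative.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Standard mirror reflection: d is the incoming direction, n the unit
// surface normal.
Vec3 reflect(Vec3 d, Vec3 n) { return add(d, scale(n, -2.0 * dot(d, n))); }

static bool inside(Vec3 p, Vec3 center, double r) {
    Vec3 d = add(p, scale(center, -1.0));
    return dot(d, d) < r * r;
}

// Distance along the unit ray direction to the sphere surface, found by
// stepping until the ray point is inside the sphere and then bisecting.
// Returns a negative value if no hit is found within max_t.
double sphere_hit(Vec3 origin, Vec3 dir, Vec3 center, double radius,
                  double max_t = 100.0, double step = 0.01) {
    for (double t = step; t < max_t; t += step) {
        if (inside(add(origin, scale(dir, t)), center, radius)) {
            double lo = t - step, hi = t;   // bracket containing the surface
            for (int i = 0; i < 40; ++i) {  // bisect down to ~1e-14
                double mid = 0.5 * (lo + hi);
                if (inside(add(origin, scale(dir, mid)), center, radius)) hi = mid;
                else lo = mid;
            }
            return 0.5 * (lo + hi);
        }
    }
    return -1.0;
}
```

Stepping plus bisection avoids solving the quadratic analytically, at the cost of many more evaluations per ray (and a step size that must be smaller than the thinnest object).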

*t = +10h*. I added a sky. If a ray doesn't hit any of the
objects in the scene, we give it a color based on its angle from the
horizontal. This really livens up the scene.
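A sky like this can be as simple as blending a horizon color toward a zenith color by the ray's angle above the horizontal. The two colors and the linear blend below are my own illustrative assumptions; only the angle-based idea comes from the text.

```cpp
#include <algorithm>
#include <cmath>

struct Color { double r, g, b; };

const double kPi = 3.14159265358979323846;

// dir_y is the vertical component of a unit ray direction.
Color sky_color(double dir_y) {
    double angle = std::asin(std::clamp(dir_y, -1.0, 1.0));  // -pi/2 .. pi/2
    double t = std::clamp(angle / (kPi / 2), 0.0, 1.0);      // 0 horizon, 1 zenith
    Color horizon{0.8, 0.9, 1.0}, zenith{0.2, 0.4, 0.9};
    return { horizon.r + (zenith.r - horizon.r) * t,
             horizon.g + (zenith.g - horizon.g) * t,
             horizon.b + (zenith.b - horizon.b) * t };
}
```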

*t = +10h*. I made the sphere translucent. Instead of just
reflecting rays that hit the sphere, we compute both the reflected ray as
well as looking to see what color we would see if the ray just passed
straight through the sphere.
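Combining the two looked-up colors is just a weighted mix. The 50/50-style weighting parameter below is an assumption for illustration, not the value the original code used.

```cpp
struct Color { double r, g, b; };

// Mix the color seen along the reflected ray with the color seen by the
// ray passing straight through; reflectivity is in [0, 1].
Color blend(Color reflected, Color transmitted, double reflectivity) {
    return { reflected.r * reflectivity + transmitted.r * (1.0 - reflectivity),
             reflected.g * reflectivity + transmitted.g * (1.0 - reflectivity),
             reflected.b * reflectivity + transmitted.b * (1.0 - reflectivity) };
}
```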

*t = +11h*. To render shadows on the checkerboard, when a ray
arrives at the surface, we draw another ray from that point directly to a
fixed light source vector. If there's a collision with another object,
the point is in shadow and rendered darker than usual. Otherwise, it's
lit as normal.
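The shadow test itself is tiny once collision detection exists. This is a hedged sketch: the `first_hit` callback stands in for whatever scene-intersection routine the real code has, and the self-shadowing offset is a standard trick, not necessarily what the original did.

```cpp
struct Vec3 { double x, y, z; };

// Returns true if `point` is shadowed. `first_hit(origin, dir)` is assumed
// to return the nearest hit distance along a ray, negative when nothing
// is hit.
template <typename HitFn>
bool in_shadow(Vec3 point, Vec3 light_dir, HitFn first_hit) {
    // Nudge the origin along the light direction so the surface being
    // shaded does not immediately occlude itself.
    Vec3 origin{ point.x + 1e-4 * light_dir.x,
                 point.y + 1e-4 * light_dir.y,
                 point.z + 1e-4 * light_dir.z };
    return first_hit(origin, light_dir) >= 0.0;
}
```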

*t = +13h*. I moved the brightest part of the sky off to the
right, rather than having it be directly above. The shadows disappeared
for a bit while I was restructuring my code, but later we'll use the same
point in the sky to compute shadows.

*t = +15h*. Shadows are back. I added another metallic green
sphere; the tint is just the color that all the reflected rays are mixed
with. Notice the reflection in the reflection, and the shadow in the
reflection, and the shadow in the reflection in the reflection... It's
theoretically possible for a ray to have infinitely many bounces,
although this is very unlikely for randomly arranged objects in a
scene.

I also made the checkerboard an infinite strip. Because it's just a straight strip it's still easy to characterize its shape mathematically and compute collisions.
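Hit-testing a strip like this reduces to a ray-plane intersection plus a bounds check on the lateral coordinate. The axis choice (strip in the y = 0 plane, running along z) is my own illustrative setup.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Distance along the ray to the strip, or -1 if the ray misses it.
double strip_hit(Vec3 origin, Vec3 dir, double half_width) {
    if (dir.y == 0.0) return -1.0;      // ray parallel to the plane
    double t = -origin.y / dir.y;       // solve origin.y + t * dir.y = 0
    if (t <= 0.0) return -1.0;          // plane is behind the ray
    double x = origin.x + t * dir.x;    // lateral position of the hit
    return (std::fabs(x) <= half_width) ? t : -1.0;
}
```

The checkerboard color at a hit then just depends on the parity of the hit point's integer cell coordinates.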

Around here I had to do quite a bit of refactoring to represent objects in more uniform ways regardless of their geometry and lighting characteristics.

*t = +16h*. All the previously gray squares have been changed to
mirrored squares. Nothing special here, the same technique is used to
render all the mirrored surfaces.

*t = +18h*. I added a cube. You can tell in the reflections that
it's the right shape, although it looks poor because all the faces are
the same color.

I had to clean up a bit so that the collision-detection code was general enough to deal with arbitrary planes.

*t = +19h*. Now we color the faces of the cube based on how much
they are pointing towards the light source, which is a more realistic
model of lighting.
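"How much a face points towards the light" is the cosine of the angle between the face normal and the light direction, clamped at zero for faces turned away. A minimal sketch, assuming both vectors are unit length:

```cpp
#include <algorithm>

struct Vec3 { double x, y, z; };

// Diffuse (Lambertian) brightness factor in [0, 1].
double lambert(Vec3 normal, Vec3 to_light) {
    double cosine = normal.x * to_light.x
                  + normal.y * to_light.y
                  + normal.z * to_light.z;
    return std::max(cosine, 0.0);
}
```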

*t = +24h*. Here is a refracting rectangular prism! The same sort
of method used for reflections still works: whenever a ray hits the block
we compute the angle of the refracted ray using Snell's law and then look
in that direction to figure out what color we would see. A little bit of
the reflected ray is also mixed in. We also model total internal
reflection, as you can see on the right-hand side and bottom of the
prism.

The prism has an index of refraction of 1.5, the same as that of glass.
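The refraction step can be sketched with the vector form of Snell's law, which also makes total internal reflection fall out naturally. This is a standard formulation, not necessarily how the original code is written: `d` is the unit incident direction, `n` the unit normal facing the incoming ray, and `eta` the ratio of refractive indices (from-side over to-side).

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Computes the refracted direction via Snell's law. Returns false on
// total internal reflection, in which case `out` is left untouched.
bool refract(Vec3 d, Vec3 n, double eta, Vec3& out) {
    double cos_i = -(d.x * n.x + d.y * n.y + d.z * n.z);
    double sin2_t = eta * eta * (1.0 - cos_i * cos_i);
    if (sin2_t > 1.0) return false;              // total internal reflection
    double k = eta * cos_i - std::sqrt(1.0 - sin2_t);
    out = { eta * d.x + k * n.x, eta * d.y + k * n.y, eta * d.z + k * n.z };
    return true;
}
```

With eta = 1/1.5 (air into glass) a normal-incidence ray passes straight through, while a ray leaving glass at a steep enough angle (eta = 1.5) is totally internally reflected.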

Just for fun, this is what the previous scene looks like when the index
of refraction is 20. When *n* is this high, each incoming ray bends until
it is nearly parallel to the surface normal as it passes through the
prism, then resumes its original angle on the other side.

I went off on a tangent to try out fog. This is just an exponential attenuation of the color based on how far away the object is. Notice that you can no longer see the ends of the infinite strip.
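Exponential fog blends the object's color toward a fog color by a factor exp(−density · distance). The fog color and density below are illustrative values, not the originals.

```cpp
#include <cmath>

struct Color { double r, g, b; };

Color apply_fog(Color c, double distance, double density = 0.05) {
    double f = std::exp(-density * distance);   // 1 up close, 0 far away
    Color fog{0.7, 0.7, 0.75};
    return { c.r * f + fog.r * (1.0 - f),
             c.g * f + fog.g * (1.0 - f),
             c.b * f + fog.b * (1.0 - f) };
}
```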

At this point I tried to do the "broken pencil" refraction demo that gets explained to you in high-school physics. Indeed, you can see the bending and displacement of the rays.

The block has an index of refraction of 1.33, the same as that of water.

*t = +27h*. This scene contains soft shadows. For each point on
the checkerboard, instead of drawing a single ray to the light source, we
draw many rays within a cone around the light source. The number of those
rays that are blocked (occluded) tells us how much in shadow that point
is.

The angular width of the cone is just slightly larger than the effective angular width of the sun.

Sampling such a large number of rays per pixel greatly increases the time needed to render the scene.
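The sampling step above can be sketched as follows. This is my own simplification: the jitter scheme (small uniform angular offsets around the light direction, left unnormalized for the tiny angles involved) and the `blocked` callback are illustrative stand-ins, not the original code.

```cpp
#include <random>

struct Vec3 { double x, y, z; };

// Fraction of jittered shadow rays that reach the light; 1.0 means fully
// lit, 0.0 means fully in shadow. `blocked(dir)` is assumed to report
// whether a shadow ray in direction `dir` hits any occluder.
template <typename BlockedFn>
double lit_fraction(Vec3 light_dir, double cone_half_angle, int samples,
                    BlockedFn blocked, unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> jitter(-cone_half_angle, cone_half_angle);
    int lit = 0;
    for (int i = 0; i < samples; ++i) {
        // Perturb the direction slightly; good enough for small cones.
        Vec3 d{ light_dir.x + jitter(rng),
                light_dir.y + jitter(rng),
                light_dir.z };
        if (!blocked(d)) ++lit;
    }
    return static_cast<double>(lit) / samples;
}
```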

*t = +28h*. I added some more cubes to the scene. Now the cubes
are colored using a combination of occlusion and their angle to the light
source.

I made the "sun" a lot wider, for fuzzier shadows. Notice on the leftmost block that the shadow is sharpest closest to the other block and fuzzier further away.

What's next?

- Translucent objects, modeling glass spheres or lenses (with refraction), and point light sources would probably be pretty easy to do at this point.
- I could probably do textures, provided I figured out some way to generate or load them.
- Imperfect reflections (e.g. on glass or water) could be done by adding a little bit of randomness to the simulation.
- Many other complex effects that require increasing the number of simulated rays would greatly increase the running time of the raytracer, so I am a little hesitant to head in that direction.

My code (extremely work-in-progress!) is available as a Git archive.
You'll need the `gtkmm` development libraries to build it. To
download it:

$ git clone http://web.psung.name/git/raytracer.git

Further instructions are in the `README` file.