The quick fix is to move the hit point a small amount along the surface normal (towards the incident side of the surface). This is what I usually do:

vOffsetHit = vHit + vNormal*0.00001

But why 0.00001? Originally I chose this number because it was larger than typical errors. This fixed some of the artifacts in my images where small holes appeared in my geometry.

However, this number only works if the scene geometry has a magnitude around 1 or so. What if the scene geometry is very large? Perhaps the mirror is placed at coordinate (100000, 100000, 100000). In this case an offset of 0.00001 is too small. Due to the limited precision of floating point numbers (~7 significant decimal digits), 100000 + 0.00001 gives the number 100000, where 100000.00001 was expected.
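This effect is easy to verify; a minimal check (hypothetical helper, not part of Gray) that tests whether an offset survives the addition:

```cpp
#include <cassert>

// At a magnitude of 100000, a float has no bits left to represent an
// offset as small as 0.00001: the addition rounds straight back to x.
bool OffsetIsLost(float x, float offset)
{
    return x + offset == x; // true when the offset is below half a ULP of x
}
```

At magnitude 1 the same offset survives, since 0.00001 is well above the float epsilon of ~1.19e-7.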

The solution is to compute a scaled epsilon value that guarantees that x + epsilon > x. I wrote a function called Epsilon that does exactly this. This is how it is defined:

// Difference between 1.0f and the minimum float greater than 1.0f
#define FLOAT_EPSILON 1.19209289550781e-007

float Epsilon(float x, float scale)
{
    float f = FLOAT_EPSILON * scale;
    return fabs(x) * f;
}

x is the number to compute epsilon from. scale is used to scale epsilon and must be > 1.

This is how it can be used:

vHit.x = some number
vOffsetHit.x = vHit.x + Epsilon(vHit.x, 10)

Now we have a guarantee that vOffsetHit.x is larger than vHit.x by an amount at least 10 times the smallest representable epsilon at that magnitude.

Simple really…

I have implemented a small unit test program that tests this function, and it seems to be working well.
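A minimal sketch of what such a test boils down to (this is not the actual test program, just the core check, reusing the Epsilon definition above):

```cpp
#include <cassert>
#include <cmath>

// Difference between 1.0f and the minimum float greater than 1.0f
#define FLOAT_EPSILON 1.19209289550781e-007

float Epsilon(float x, float scale)
{
    float f = FLOAT_EPSILON * scale;
    return fabs(x) * f;
}

// The guarantee we want: adding Epsilon(x, scale) actually moves x.
bool EpsilonMoves(float x, float scale)
{
    return x + Epsilon(x, scale) > x;
}
```

The interesting cases are values at very different magnitudes, including negative ones, since the offset scales with fabs(x).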

First, three arrays are created containing split candidates for the three axes x, y, and z. Each split candidate is defined like this:

struct SSplit
{
    real fSplit;   // Split position of event
    uint32 iItem;  // Atom the split belongs to
    bool bLeft;    // true if event is a left event, false if it is a right event
    uint32 nLeft;  // Number of objects left of the split position
    uint32 nRight; // Number of objects right of the split position
};

Next, three other arrays are created which simply hold pointers to the same split candidates. I have implemented a < operator for the pointers, so we can sort the pointers instead of the split candidates. Handling pointers is much more lightweight than handling the split candidates themselves.

Now the split candidate pointers are sorted using the standard C++ sort functions. This sorting is kept for the rest of the algorithm, to avoid having to re-sort the splits for every node that is computed.
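The pointer-sort idea can be sketched like this (hypothetical names, a comparison function standing in for the < operator mentioned above):

```cpp
#include <algorithm>
#include <vector>

struct SSplitCandidate
{
    float fSplit; // split position of the event
};

// Compare through the pointers, so std::sort shuffles small pointers
// around instead of the larger split-candidate structs themselves.
bool LessBySplit(const SSplitCandidate* a, const SSplitCandidate* b)
{
    return a->fSplit < b->fSplit;
}

void SortSplitPointers(std::vector<SSplitCandidate*>& splits)
{
    std::sort(splits.begin(), splits.end(), LessBySplit);
}
```

After this, iterating the pointer array visits the candidates in split-position order without ever moving the candidates in memory.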

Next, the recursive algorithm is called. For each set of split candidates, the best split candidate is chosen as the one with the lowest cost. The split candidates are then divided into a left side and a right side, and each branch is split recursively until the algorithm decides that a leaf node is cheaper than a branch.

The cost function computes the cost from the number of objects on each side of the split and the probability of hitting that side (surface area relative to the parent area), like this:

fCost = fCostTraversal + fCostIntersect*(nl*al + nr*ar)/fArea;

Since the split candidates are already sorted, the number of objects on the left and right side of each split candidate (used in the cost function) can be computed by simply counting the objects while iterating through them from left to right and from right to left.
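The cost evaluation itself is a one-liner; a hypothetical helper following the formula above:

```cpp
// Surface Area Heuristic cost of a split: traversal cost, plus the
// intersection cost of each side weighted by the probability of hitting
// that side (child surface area relative to the parent surface area).
float SplitCost(float fCostTraversal, float fCostIntersect,
                unsigned nl, unsigned nr, // object counts left/right
                float al, float ar,       // surface areas left/right
                float fArea)              // parent surface area
{
    return fCostTraversal + fCostIntersect * (nl * al + nr * ar) / fArea;
}
```

The builder evaluates this for every candidate and keeps the split with the lowest cost, comparing it against the flat cost of making a leaf.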

Each KdTreeNode takes up 12 bytes, which are spent like this:

struct KdTreeNode
{
    real m_fSplit; // Splitting position according to split plane m_eAxis

    union
    {
        KdTreeNode* m_pChild; // Pointer to two nodes
        KdTreeItem* m_pItems; // Pointer to items in leaf node
    };

    struct
    {
        unsigned int m_eType : 1;       // ENodeType, leaf=0, branch=1
        unsigned int m_eAxis : 2;       // ENodeAxis, x=0, y=1, z=2
        unsigned int m_iItemCount : 29; // Number of KdTreeItems in leaf
    };
};

To better visualize the voxel traversal, I implemented an interactive tool that lets me trace a single ray while showing only the voxels inspected in that trace. This tool is very valuable for debugging the KD-tree.

So first I compute bounding boxes for all models (meshes, spheres, or isosurfaces) and construct a KD-tree containing all of them. Then, if a model is a triangle mesh, the mesh is subdivided into its own KD-tree.

This implementation behaves well when models are small and placed apart (no overlap), so e.g. a field of bunnies renders very fast. But many scenes contain overlapping structures. The Sponza Atrium model makes Gray grind forever, because the meshes overlap a lot and because the camera is placed inside the model.

So since last winter (2009) I have been working on and off on a new Surface Area Heuristic KD-tree. My hope is that this new tree will make Gray perform much better in my current worst-case scenarios.

To use the tool, download it and run it with three functions of the parameters u and v, one for each of the x, y, and z coordinates. Here are the complete input arguments for the tool:

Parametric.exe x y z umin umax vmin vmax un vn outfile.xml format

x            function of u and v for the x coordinate
y            function of u and v for the y coordinate
z            function of u and v for the z coordinate
umin         minimum value of u
umax         maximum value of u
vmin         minimum value of v
vmax         maximum value of v
un           number of divisions along the u parameter
vn           number of divisions along the v parameter
outfile.xml  output model filename
format       output model file format (GRAY or K3D)

example:

Parametric.exe u sin(u/pi) v -2*pi 2*pi -2 4 100 100 test.xml GRAY

The tool is for Windows, but the C++ source code can be requested if you need to compile it for your own platform.

Download: Parametric.zip

I added some .bat files as examples. Below is the “Twisted Torus”:

An example is a flat plane given by:

x = u
y = v
z = 0

for u = [0,1] and v = [0,1]

This yields a small flat square (rather dull to look at).

More interesting are parametric surfaces that sweep through all three coordinates, x,y,z.

This is the formula for a torus:

x = (0.7+0.3*cos(v))*cos(u)
y = (0.7+0.3*cos(v))*sin(u)
z = 0.3*sin(v)

for u = [0, 2*pi] and v = [0, 2*pi]
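The formulas translate directly into code; a small hypothetical sketch evaluating one point on the torus (0.7 is the major radius, 0.3 the tube radius):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Evaluate the torus surface at parameters (u, v), following the
// formulas above: major radius 0.7, tube radius 0.3.
Vec3 TorusPoint(float u, float v)
{
    Vec3 p;
    p.x = (0.7f + 0.3f * cosf(v)) * cosf(u);
    p.y = (0.7f + 0.3f * cosf(v)) * sinf(u);
    p.z = 0.3f * sinf(v);
    return p;
}
```

A mesh generator simply evaluates this on a regular un-by-vn grid of (u, v) values and connects neighbouring points into quads or triangles.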

And the resulting image (using a glass material):

The tool is not very flexible, but it served my needs at the time.

Here’s a screenshot:

The actual tool can be downloaded here (for Windows): Lathe

To produce these images, I start with the light source and cast a large number of light rays into my 3D scene. If a ray hits a reflecting or refracting object, the light bounces in a new direction. When the light hits a diffuse, non-reflective object, the position of the “photon” is stored in a photon map.

The next step is to gather these photons again, but this time starting from the camera. When a camera ray hits a diffuse object, the number of photons in the neighbourhood of the hit point is computed. The density of photons (number of photons per area) gives an estimate of how much light has hit this point.
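The density estimate itself is simple; a hypothetical sketch, assuming the gather collects all photons within a fixed radius around the hit point:

```cpp
// Density estimate from a photon gather: the number of photons found
// within radius r, divided by the area of the gather disc (pi * r^2).
// A full radiance estimate would also weight each photon by its flux.
float PhotonDensity(unsigned nPhotons, float fRadius)
{
    const float fPi = 3.14159265f;
    return nPhotons / (fPi * fRadius * fRadius);
}
```

Bright regions are simply regions where many photons landed close together, so the disc around the hit point contains more of them.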

Here is an image. Notice the “shadow” cast by the pawn.

Isosurfaces are surfaces described by a formula rather than by thousands of small polygons. The benefit is that very complex models can be visualized using very little memory; the drawbacks are the lack of artistic control and the costly evaluation of each isosurface formula.

So what does an isosurface formula look like? One of the simplest is the sphere, given by f(x,y,z) = x*x + y*y + z*z - r*r. The isosurface is given by all the (x,y,z) triplets that evaluate to 0 when inserted into f(x,y,z). It is called an iso-surface because all evaluations give the same value, 0, on the surface.

The way I ray trace isosurfaces is by defining a bounding box around the object. When a ray hits the bounding box, Gray evaluates the formula f. If f is positive, I move forward along the ray and evaluate f again. When f yields a negative value, the iteration stops and the last position with a positive value is selected as a point on the surface.

The trick to optimizing this process is to find some upper bound on the derivative of the formula f (how fast f goes towards 0). This tells me how far I can move forward and still be sure that f is positive. The actual implementation in Gray iterates in smaller and smaller steps until it is within some accepted threshold.
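For the sphere isosurface, the forward march can be sketched like this (a simplified illustration with a fixed step size, not Gray's adaptive derivative-bounded version):

```cpp
// Implicit sphere of radius r centred at the origin:
// positive outside, negative inside, zero on the surface.
float Sphere(float x, float y, float z, float r)
{
    return x * x + y * y + z * z - r * r;
}

// March from origin o along direction d in fixed steps until f turns
// negative, then return the last t where f was still non-negative.
// Returns -1 if no surface was found within tMax.
float MarchToSurface(const float o[3], const float d[3], float r,
                     float tMax, float step)
{
    float tLast = -1.0f;
    for (float t = 0.0f; t <= tMax; t += step)
    {
        float f = Sphere(o[0] + t * d[0], o[1] + t * d[1],
                         o[2] + t * d[2], r);
        if (f < 0.0f)
            return tLast; // the surface lies between tLast and t
        tLast = t;
    }
    return -1.0f; // no hit within tMax
}
```

The adaptive version would replace the fixed step with the largest step the derivative bound allows, then refine around the sign change until the accepted threshold is reached.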

Enough talking. Here is an image:

This “rock” is a sphere, whose radius has been modulated with some fractal noise.