CSC2522 Project
Accurate Point-To-Patch Form Factors

Tasso Karkanis
095310782

Introduction

One of the steps in a radiosity rendering system is the computation of form factors. Inaccurate form factors cause undesirable visual artifacts in the radiosity solution. The two most popular algorithms for computing form factors are the hemi-cube method and, more recently, ray shooting. Both of these methods compute point-to-patch form factors, and both are approximate. For this project, an accurate but relatively inefficient method of computing point-to-patch form factors was designed and implemented.

The Algorithm

The new method was inspired by Nusselt's Analogy. A visibility mesh is computed by projecting the geometry of the scene onto a unit sphere about the viewing position and transforming the resulting geometry back into world coordinates. Finally, a line integral formula is applied to determine the form factors.
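For reference, one standard contour-integral form of the point-to-polygon form factor, with x the viewing position, n the surface normal at x, and v_1, ..., v_k the vertices of a visible polygon, is:

    F = \frac{1}{2\pi} \sum_{i=1}^{k} \gamma_i \, (\hat{\Gamma}_i \cdot n),
    \qquad
    R_i = \frac{v_i - x}{\lVert v_i - x \rVert}, \quad
    \gamma_i = \arccos(R_i \cdot R_{i+1}), \quad
    \hat{\Gamma}_i = \frac{R_i \times R_{i+1}}{\lVert R_i \times R_{i+1} \rVert},

with indices taken modulo k. The implementation may evaluate a different but equivalent variant of this formula.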

To handle scenes with occlusion, the geometry of the scene is inserted into a BSP tree. The tree is traversed in front-to-back order with respect to the viewing position. The vertices of each triangle are projected onto the unit sphere, and the projected triangle is stored as great-circle arcs between the vertices. If any previously projected triangle intersects the new one, their difference is computed: the front-to-back traversal of the BSP guarantees that the previously processed triangles lie in front of the new one, so any overlapping region of the new triangle is occluded. The visible remainder is stored and the occluded portion is discarded. When all triangles have been processed, the stored pieces are projected back into world coordinates and the form factors are evaluated.
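A minimal sketch of this pass in C++ follows. The types and names are hypothetical stand-ins rather than the actual interfaces, and sphericalDifference is only declared, since it is the difficult routine discussed in the next section:

    #include <cmath>
    #include <vector>

    struct Vec3 { double x, y, z; };

    static Vec3 operator-(const Vec3& a, const Vec3& b) {
        return Vec3{a.x - b.x, a.y - b.y, a.z - b.z};
    }

    static Vec3 normalize(const Vec3& v) {
        double l = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return Vec3{v.x / l, v.y / l, v.z / l};
    }

    struct Tri    { Vec3 v[3]; };  // world-space triangle
    struct SphTri { Vec3 v[3]; };  // projected triangle; edges are great-circle arcs

    // Returns the parts of 'tri' not covered by 'front' (possibly several pieces).
    // Declared only: this is the many-cased routine described below.
    std::vector<SphTri> sphericalDifference(const SphTri& tri, const SphTri& front);

    // 'scene' must already be in front-to-back order (from the BSP traversal).
    std::vector<SphTri> projectScene(const std::vector<Tri>& scene, const Vec3& eye) {
        std::vector<SphTri> visible;
        for (const Tri& t : scene) {
            // Project the triangle's vertices onto the unit sphere about the eye.
            SphTri s;
            for (int i = 0; i < 3; ++i) s.v[i] = normalize(t.v[i] - eye);

            // Everything already stored is nearer, so subtract it from the new
            // triangle; only the remainder can be visible.
            std::vector<SphTri> pieces{s};
            for (const SphTri& f : visible) {
                std::vector<SphTri> next;
                for (const SphTri& p : pieces) {
                    std::vector<SphTri> rest = sphericalDifference(p, f);
                    next.insert(next.end(), rest.begin(), rest.end());
                }
                pieces.swap(next);
            }
            visible.insert(visible.end(), pieces.begin(), pieces.end());
        }
        return visible;
    }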

Implementation

The algorithm was implemented in C++. Norman Chin's BSP code from Graphics Gems was used. The most difficult part of the implementation was the computation of triangle differences: although the problem seems simple, there are surprisingly many cases.
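A planar analogue conveys the structure of the problem. The sketch below computes the difference A \ B of two convex, counter-clockwise polygons by clipping A against each edge of B; the actual implementation must do the equivalent with great-circle arcs on the sphere, which is where the cases multiply. All names here are illustrative:

    #include <vector>

    struct Pt { double x, y; };
    using Poly = std::vector<Pt>;

    // Twice the signed area of (a, b, p): positive when p is left of a->b.
    static double side(const Pt& a, const Pt& b, const Pt& p) {
        return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
    }

    // Split a convex polygon by the directed line a->b into the part to the
    // left ('in') and the part to the right ('out').
    static void splitByLine(const Poly& poly, const Pt& a, const Pt& b,
                            Poly& in, Poly& out) {
        int n = (int)poly.size();
        for (int i = 0; i < n; ++i) {
            const Pt& p = poly[i];
            const Pt& q = poly[(i + 1) % n];
            double sp = side(a, b, p), sq = side(a, b, q);
            if (sp >= 0) in.push_back(p);
            if (sp <= 0) out.push_back(p);
            if ((sp > 0 && sq < 0) || (sp < 0 && sq > 0)) {  // edge crosses the line
                double t = sp / (sp - sq);
                Pt m{p.x + t * (q.x - p.x), p.y + t * (q.y - p.y)};
                in.push_back(m);
                out.push_back(m);
            }
        }
    }

    // A \ B for convex CCW polygons.  The part of A outside an edge of B cannot
    // be inside B and is emitted; the part inside is clipped against the
    // remaining edges.  Whatever survives every edge lies inside B: discard it.
    static std::vector<Poly> difference(Poly a, const Poly& b) {
        std::vector<Poly> result;
        for (int i = 0; i < (int)b.size() && !a.empty(); ++i) {
            Poly in, out;
            splitByLine(a, b[i], b[(i + 1) % (int)b.size()], in, out);
            if (out.size() >= 3) result.push_back(out);
            a = in;
        }
        return result;
    }

Even in the plane, a triangle difference can produce several pieces and several degenerate vertex configurations; on the sphere, the arc-intersection and point-inside tests add further cases.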

An obvious optimization was partially implemented. In the naive implementation, an overlap test is performed against every previously processed triangle for each new triangle. In order to reduce the number of unnecessary tests, the surface of the sphere was recursively partitioned. The base of the recursion is the six points where the rectilinear axes intersect the sphere. The recursion step is to average the three closest points and split the triangle they form into three new triangles using the original points and their average. Using this partition, it is possible to test only those triangles that are likely to overlap the new triangle, as in the sketch below.
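A sketch of such a partition in C++, reusing Vec3 and normalize from the sketch above; the structure and names are hypothetical, as the actual data layout is not given here:

    // One spherical triangle of the partition.  A cell refines into three
    // children by splitting at the renormalized average of its corners.
    struct Cell {
        Vec3 corner[3];
        Cell* child[3] = {nullptr, nullptr, nullptr};
        std::vector<int> tris;  // projected triangles overlapping this cell
    };

    static Vec3 average(const Vec3& a, const Vec3& b, const Vec3& c) {
        return normalize(Vec3{(a.x + b.x + c.x) / 3.0,
                              (a.y + b.y + c.y) / 3.0,
                              (a.z + b.z + c.z) / 3.0});
    }

    static void split(Cell& cell) {
        Vec3 m = average(cell.corner[0], cell.corner[1], cell.corner[2]);
        for (int i = 0; i < 3; ++i)
            cell.child[i] = new Cell{{cell.corner[i], cell.corner[(i + 1) % 3], m}};
    }

    // The base of the recursion: eight octants spanned by the six points where
    // the coordinate axes pierce the unit sphere.
    static std::vector<Cell> makeBaseCells() {
        Vec3 ax[6] = {{1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1}};
        int face[8][3] = {{0,2,4}, {2,1,4}, {1,3,4}, {3,0,4},
                          {2,0,5}, {1,2,5}, {3,1,5}, {0,3,5}};
        std::vector<Cell> cells(8);
        for (int f = 0; f < 8; ++f)
            for (int i = 0; i < 3; ++i)
                cells[f].corner[i] = ax[face[f][i]];
        return cells;
    }

An overlap query then descends from the base cells into only those children whose region can intersect the new triangle, so most stored triangles are never tested.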

Testing and Results

The implementation was tested with several small, hand-written cases and many large generated files. The generated files consisted of non-adjacent triangles, all of which share the same normal. The following timings were gathered on a 300 MHz Pentium II running Linux. Scenes of 50, 100, and 200 triangles were processed in 0.2, 2.8, and 300 seconds, respectively. A scene with 500 triangles took about 8 hours.

A generated scene with 200 triangles was used to analyze the effectiveness of the algorithm. The following is a plot of the form factor as computed by the new algorithm versus the form factor computed by a hemi-cube algorithm with a resolution of 100x100 pixels:

[Plot: form factor computed by the new algorithm vs. form factor computed by the hemi-cube algorithm (100x100)]

The following is a plot of the form factor as computed by the new algorithm versus the absolute normalized error of the hemi-cube algorithm:

[Plot: form factor computed by the new algorithm vs. absolute normalized error of the hemi-cube algorithm]

Note that the error generally decreases as the magnitude of the form factor increases. This trend is due to the fact that a larger form factor indicates that the triangle covers a larger number of pixels in the hemi-cube frame buffer, so its area is estimated more accurately. Clearly the extra cost of the accurate algorithm pays off when the form factor is small. This could be important in a radiosity solution, for example when a scene contains small, bright emitters.
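A rough estimate, rather than a measured result, is consistent with this trend: hemi-cube quantization error is concentrated along the boundary of the projected triangle, so a triangle covering N pixels has on the order of \sqrt{N} boundary pixels, and the relative error in its measured area scales roughly as

    \frac{O(\sqrt{N})}{N} = O(N^{-1/2}),

which shrinks as the form factor, and hence N, grows.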

Conclusions

The "exponential" behaviour of the algorithm forces one to conclude that it is not practical for application in its current implementation. It is, however, useful for comparing and evaluating approximate algorithms.

Future Work

The implementation could be further optimized. Currently, all input polygons are triangulated, as are the results of the triangle difference algorithm. This can dramatically increase the number of primitives that must be handled. It might be better to reformulate these parts so that general polygons can be handled directly.