Friday, May 4, 2012

Thesis


There is something that has been nagging at me for about 3 years: my master's thesis is still not complete. This is preventing me from being a legitimate holder of a diploma for an M.S. degree in Computer Science from DigiPen. I have worked on the thesis off and on for those 3 years. Usually I make a bit of progress, only to become frustrated and demotivated within a week, leaving it idle for a few months. I am committed to finishing it by the end of the year, and to help with that it was suggested that I keep a journal about it. I've gone a step further and decided to create blog posts about it, with the hope that externalizing the thoughts and process will keep ideas fresh and small goals in sight.

Dynamic 2D Patterns for Shading 3D Scenes

Although I have already implemented a simple version of this, I originally cut a lot of corners, and I'm now going back to implement it fully and correctly. The problem is that this requires quite a bit of work, even just at the beginning. The first step in the technique involves distributing sampling points across a mesh. This is itself an entire separate paper (Stratified Point Sampling). That technique has 3 main steps (there's a rough sketch after the list below):
1) Voxelize the mesh.
2) Determine a sample for each voxel.
3) Remove samples that are too close together.
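
To keep the moving parts straight in my head, here's a minimal sketch of how I think of those three steps fitting together. It takes some liberties: a flat hash grid instead of the paper's octree, a conservative triangle bounding-box overlap test instead of an exact triangle-cube test, and the triangle centroid as a crude stand-in for a proper per-voxel sample. Names like Vec3, Tri, voxelSize, and minDist are just my placeholders.

    // Sketch of the three sampling steps over a triangle soup.
    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Tri  { Vec3 a, b, c; };

    // Pack three 21-bit voxel indices into one hashable key (fine for modest grids).
    static int64_t cellKey(int x, int y, int z)
    {
        return ((int64_t)(x & 0x1FFFFF) << 42) |
               ((int64_t)(y & 0x1FFFFF) << 21) |
                (int64_t)(z & 0x1FFFFF);
    }

    // Steps 1 and 2: walk the voxels overlapped by each triangle's bounding box
    // and keep one candidate sample per occupied voxel.
    std::unordered_map<int64_t, Vec3> voxelSamples(const std::vector<Tri>& tris, float voxelSize)
    {
        std::unordered_map<int64_t, Vec3> samples;
        for (const Tri& t : tris)
        {
            Vec3 lo = { std::min({ t.a.x, t.b.x, t.c.x }),
                        std::min({ t.a.y, t.b.y, t.c.y }),
                        std::min({ t.a.z, t.b.z, t.c.z }) };
            Vec3 hi = { std::max({ t.a.x, t.b.x, t.c.x }),
                        std::max({ t.a.y, t.b.y, t.c.y }),
                        std::max({ t.a.z, t.b.z, t.c.z }) };
            Vec3 centroid = { (t.a.x + t.b.x + t.c.x) / 3.0f,
                              (t.a.y + t.b.y + t.c.y) / 3.0f,
                              (t.a.z + t.b.z + t.c.z) / 3.0f };
            int x0 = (int)std::floor(lo.x / voxelSize), x1 = (int)std::floor(hi.x / voxelSize);
            int y0 = (int)std::floor(lo.y / voxelSize), y1 = (int)std::floor(hi.y / voxelSize);
            int z0 = (int)std::floor(lo.z / voxelSize), z1 = (int)std::floor(hi.z / voxelSize);
            for (int z = z0; z <= z1; ++z)
                for (int y = y0; y <= y1; ++y)
                    for (int x = x0; x <= x1; ++x)
                        samples.emplace(cellKey(x, y, z), centroid);  // first writer wins
        }
        return samples;
    }

    // Step 3: brute-force dart throwing -- drop any sample that lands too close
    // to one we've already accepted.
    std::vector<Vec3> removeClose(const std::unordered_map<int64_t, Vec3>& samples, float minDist)
    {
        std::vector<Vec3> kept;
        for (const auto& kv : samples)
        {
            bool tooClose = false;
            for (const Vec3& p : kept)
            {
                float dx = kv.second.x - p.x, dy = kv.second.y - p.y, dz = kv.second.z - p.z;
                if (dx * dx + dy * dy + dz * dz < minDist * minDist) { tooClose = true; break; }
            }
            if (!tooClose) kept.push_back(kv.second);
        }
        return kept;
    }

The last function is the naive O(n^2) version of the removal step; the paper is smarter about it, but this is enough to see the shape of the pipeline.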

This in itself was a big enough hurdle. I implemented an octree and spent weeks (off and on) trying to debug a simple triangle-cube intersection bug to get it working. That finally allowed me to voxelize the mesh, but I still lack a decent mesh/sample-point representation, which has me stuck on step 2).
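
For reference, the usual way to do that triangle-cube test is the separating axis theorem: project both shapes onto 13 candidate axes (the 3 box face normals, the triangle normal, and the 9 cross products of triangle edges with box axes) and look for any axis along which the projections don't overlap. Below is a plain, unoptimized version of that test with my own naming; production versions (like Akenine-Möller's) unroll the axis loop, but this form is much easier to debug, which is what I wish I'd had.

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3  cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                                  a.z * b.x - a.x * b.z,
                                                  a.x * b.y - a.y * b.x }; }
    static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // boxCenter/boxHalf describe the cube (voxel); v0..v2 are the triangle vertices.
    bool triBoxOverlap(Vec3 boxCenter, Vec3 boxHalf, Vec3 v0, Vec3 v1, Vec3 v2)
    {
        // Work in the box's local frame so the box is centered at the origin.
        v0 = sub(v0, boxCenter);
        v1 = sub(v1, boxCenter);
        v2 = sub(v2, boxCenter);

        Vec3 e0 = sub(v1, v0), e1 = sub(v2, v1), e2 = sub(v0, v2);
        Vec3 boxAxes[3] = { { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };
        Vec3 edges[3]   = { e0, e1, e2 };

        // The 13 candidate separating axes.
        Vec3 axes[13];
        int n = 0;
        for (int i = 0; i < 3; ++i) axes[n++] = boxAxes[i];
        axes[n++] = cross(e0, e1);  // triangle normal
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                axes[n++] = cross(edges[i], boxAxes[j]);

        for (int i = 0; i < 13; ++i)
        {
            Vec3 a = axes[i];
            if (dot(a, a) < 1e-12f) continue;  // degenerate axis (edge parallel to box axis), skip

            // Project the triangle onto the axis.
            float p0 = dot(v0, a), p1 = dot(v1, a), p2 = dot(v2, a);
            float triMin = std::min(p0, std::min(p1, p2));
            float triMax = std::max(p0, std::max(p1, p2));

            // Project the box onto the axis: a symmetric radius about the origin.
            float r = boxHalf.x * std::fabs(a.x) +
                      boxHalf.y * std::fabs(a.y) +
                      boxHalf.z * std::fabs(a.z);

            if (triMin > r || triMax < -r)
                return false;  // found a separating axis
        }
        return true;  // no separating axis, so the triangle and box overlap
    }

In the voxelization pass this would get called once per triangle/voxel candidate pair that the octree traversal turns up.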

Even with the proper sampling scheme in place, the implementation has only just begun. Given the sample points on the mesh, you convert them to screen space, calculate a 2D similarity transform using a least-squares solution, and then offset the uv-coordinates of the screen-aligned pattern when rendering. And that's not even considering that you can separate the mesh into patches, each with its own similarity transform, so that the pattern slides less; the blending scheme between those patches is quite complicated. The main difficulty seems to be that the original mesh has to be modified with patch information, such as a blend weight per patch at each vertex, and that the patches are each rendered separately, which would likely hurt performance on a large enough scene with objects that have highly granular patches.
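
The similarity-transform fit itself at least has a tidy closed form once you subtract the centroids from both point sets, so that part isn't scary. Here is a sketch of that fit with my own placeholder names (Vec2, Similarity, fitSimilarity), mapping one frame's screen-space sample positions to the next frame's; the per-patch weighting is left out.

    #include <cstddef>
    #include <vector>

    struct Vec2 { float x, y; };

    // q ~= [ a -b ] p + [ tx ]      (uniform scale + rotation + translation)
    //      [ b  a ]     [ ty ]
    struct Similarity { float a, b, tx, ty; };

    // Least-squares fit of the similarity transform taking points p[i] to q[i].
    // Assumes p.size() == q.size() and at least two distinct points in p.
    Similarity fitSimilarity(const std::vector<Vec2>& p, const std::vector<Vec2>& q)
    {
        const size_t n = p.size();

        // Centroids of both point sets.
        Vec2 pc = { 0, 0 }, qc = { 0, 0 };
        for (size_t i = 0; i < n; ++i)
        {
            pc.x += p[i].x; pc.y += p[i].y;
            qc.x += q[i].x; qc.y += q[i].y;
        }
        pc.x /= n; pc.y /= n;
        qc.x /= n; qc.y /= n;

        // Normal-equation sums over the centered points.
        float sumDot = 0, sumCross = 0, sumLen2 = 0;
        for (size_t i = 0; i < n; ++i)
        {
            float px = p[i].x - pc.x, py = p[i].y - pc.y;
            float qx = q[i].x - qc.x, qy = q[i].y - qc.y;
            sumDot   += px * qx + py * qy;   // p' . q'
            sumCross += px * qy - py * qx;   // p' x q' (z component)
            sumLen2  += px * px + py * py;   // |p'|^2
        }

        Similarity s;
        s.a = sumDot   / sumLen2;   // scale * cos(theta)
        s.b = sumCross / sumLen2;   // scale * sin(theta)

        // Translation carries the source centroid onto the target centroid.
        s.tx = qc.x - (s.a * pc.x - s.b * pc.y);
        s.ty = qc.y - (s.b * pc.x + s.a * pc.y);
        return s;
    }

The fitted a and b encode scale and rotation together (a = s*cos(theta), b = s*sin(theta)), and that transform is what gets used to offset the pattern's uv-coordinates as described above.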

My thoughts on this resemble another paper, in which particles are distributed across the surface of an object. Perhaps the 2D similarity transform approach can be combined with screen-aligned surface particles for a similar effect that is more efficient to render.

This technique is the cornerstone of the thesis, as it produces the best results for 2D pattern coherence on 3D objects. It also leaves some room for improvement, but it is a hefty technique to implement. In future posts, I'll discuss other techniques I am trying to implement, as well as any progress I make on this one.
