It’s been about four years since I’ve drawn or painted anything. I think I couldn’t justify the act of drawing or painting – I mean, I wasn’t going to become an artist. My profession is clearly in computer science, and everything else seemed like such a waste of time. Recently, though, I met someone who convinced me to pick up the brush again and get back to painting. I watched Casino Royale with him, and the image of Vesper Lynd stayed with me. It sort of shocked me that a Bond Girl can be witty, brainy, and vulnerable (as well as beautiful – I think you have to be if you are a Bond Girl). Anyway, here are the paintings:
Thanks to VERVE, a robot simulation platform that runs on Eclipse, my knowledge of Java is increasing daily. Currently I am going through the VERVE code that imports and parses KML overlays (the Google Earth file format), and in the process I encountered the concept of a scene graph, which is how Ardor3D (the graphics library we use for VERVE) is structured.
A scene graph is a directed acyclic graph (DAG) arranged in a tree structure and composed of nodes. The objects are stored in the leaf nodes of the tree, and the non-leaf (group) nodes exist to group the leaf nodes together.
The nodes in the VERVE scene graph are referred to using weak references.
Weak references are references to objects that don’t interfere with garbage collection. Normally, objects are referenced using strong references:
StringBuffer buffer = new StringBuffer();
But in some cases, strong references cause memory leaks, since a strong reference forces an object to remain in memory as long as it is reachable via a chain of strong references.
Weak references don’t force the object to remain in memory. They are created like this:
WeakReference<Widget> weakWidget = new WeakReference<Widget>(widget);
Sometimes the garbage collector might throw out a weakly referenced object before you ask for it, and when you do ask, you might get back null. If you want this bookkeeping handled for you, there is WeakHashMap, whose keys are held with weak references; it automatically removes an entry once its key is no longer reachable.
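Here’s a short sketch of that behavior (the class name WeakRefDemo is mine, and note that the outcome of the last line is nondeterministic – the garbage collector is never obligated to run):

```java
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) {
        StringBuffer buffer = new StringBuffer("strongly held");
        WeakReference<StringBuffer> weak = new WeakReference<>(buffer);

        // While the strong reference "buffer" exists, get() returns the object.
        System.out.println(weak.get());   // prints "strongly held"

        buffer = null;   // drop the strong reference
        System.gc();     // a request, not a guarantee

        // The referent may now have been collected, so get() can return null.
        System.out.println(weak.get() == null ? "collected" : "still alive");
    }
}
```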
Here’s an awesome blog post where I read about weak references: weblogs.java.net/blog/2006/05/04/understanding-weak-references
While talking about a method inside VERVE (the 3D simulator for robotic control), my teammate said that it was similar to a “Factory”. At first, I thought he was making an analogy between an actual factory and the method. But it turned out that he was referring to a pattern called “Factory Method”, which is of course outlined in the seminal book “Design Patterns” by Gamma et al. that I conveniently forgot to read after impulsively buying it from Amazon.
Factory Method is a design pattern in object-oriented programming that defers the instantiation of an object to a subclass instead of the superclass. The superclass declares the method that returns the object; it is the job of a subclass implementing that method to actually instantiate the object.
Here’s an example. Say you have a Shape superclass. Inside Shape’s constructor, it calls a “makeShape” method, which is supposed to instantiate and return a Shape. But the superclass won’t know which shape to make (in other words, “instantiate”), because the type of shape is defined in the subclasses. So the job of instantiation is passed to the subclasses of the Shape superclass, such as Circle. The Circle subclass implements the “makeShape” method, which instantiates the shape and returns it.
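A minimal sketch of that example in Java (the names Shape, Circle, and makeShape follow the description above; I’ve left out the constructor call, since invoking an overridable method from a Java constructor is risky):

```java
// Factory Method sketch: the superclass declares makeShape();
// each subclass decides which concrete Shape gets instantiated.
abstract class Shape {
    abstract Shape makeShape();   // the factory method
}

class Circle extends Shape {
    @Override
    Shape makeShape() {
        return new Circle();      // Circle chooses to instantiate a Circle
    }
}

public class FactoryDemo {
    public static void main(String[] args) {
        Shape shape = new Circle().makeShape();
        System.out.println(shape.getClass().getSimpleName());  // prints "Circle"
    }
}
```

Code written against Shape never has to name the concrete class it receives.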
There is another pattern that is similar to Factory Method but does not instantiate a new object each time; instead, it passes back the same object that was instantiated once for that class. It’s called a “Singleton”.
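For contrast, a minimal Singleton sketch (the class name Settings is just an illustration, not from VERVE):

```java
// Singleton sketch: the class creates its single instance once
// and always hands back that same object.
class Settings {
    private static final Settings INSTANCE = new Settings();

    private Settings() { }        // private constructor: no outside instantiation

    static Settings getInstance() {
        return INSTANCE;          // same object on every call
    }
}

public class SingletonDemo {
    public static void main(String[] args) {
        System.out.println(Settings.getInstance() == Settings.getInstance());  // prints "true"
    }
}
```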
I am currently reading a paper called “Geometry Clipmaps: Terrain Rendering Using Nested Regular Grids” by Frank Losasso and Hugues Hoppe for my next terrain-rendering project at IRG.
What is a Geometry Clipmap:
It caches the terrain in a set of nested regular grids (filtered versions of the terrain at power-of-two resolutions) centered about the viewer, and it is incrementally updated with new data as the viewer moves.
What is the difference between Mipmaps / Texture Clipmaps / Geometry Clipmaps?
A mipmap is a pyramid of images of the same scene. The set of images goes from fine to coarse, where the coarsest mipmap level consists of just one pixel. The mipmap level rendered at a pixel is a function of screen-space parametric derivatives, which depend on view parameters and not on the content of the image.
A texture clipmap caches a view-dependent subset of a mipmap pyramid. The visible subset of the lower mip levels is limited by the resolution of the display, so it’s not necessary to keep the entire texture in memory: you “clip” the mipmap to only the region needed to render the scene. Texture clipmaps compute LOD (level of detail) per pixel based on screen-space projected geometry.
With terrain, the screen-space geometry does not exist until a level of detail is selected. But texture clipmaps compute LOD per pixel based on existing geometry. See the problem?
So a geometry clipmap selects the LOD in world space based on viewer distance. It does this by using a set of rectangular regions about the viewpoint and transition regions to blend between LOD levels.
A geometry clipmap’s refinement hierarchy is based on viewer-centric grids and ignores local surface geometry.
Overview of Geometry Clipmap
It consists of m levels of a terrain pyramid. Each level contains an n x n array of vertices, stored as a vertex buffer in video memory. Each vertex contains (x, y, z, z_c) coordinates, where z_c is the height value at (x, y) in the next-coarser level (used for transition morphing).
Each clipmap level has associated texture image(s), stored as an 8-bit-per-channel normal map for surface shading (more efficient than storing per-vertex normals). The normal map is recomputed from the geometry whenever the clipmap is updated.
– determine the desired active regions (the extent we wish to render)
– update the geometry clipmap
– crop the active regions to the clip regions (the world extent of the n x n grid of data stored at that level), and render
Computing Desired Active Regions
The approximate screen-space triangle size s in pixels is given by
s ≈ g W / (2 d tan(φ/2))
where g is the grid spacing of the level, d is the distance from the viewer, W is the window size, and φ is the field of view.
If W = 640 pixels and φ = 90 degrees, we obtain good results with clipmap size n = 255.
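This sizing can be sanity-checked with basic perspective geometry: a world-space extent g at distance d from the viewer, under a symmetric frustum with field of view φ and a W-pixel-wide window, projects to roughly g (W/2) / (d tan(φ/2)) pixels near the screen center. A little calculation (all names are mine, not from the paper):

```java
public class TriangleSize {
    // Projected size in pixels of a world-space extent g at distance d,
    // for window width wPixels and field of view phiDeg (degrees).
    static double screenSpaceSize(double g, double d, double wPixels, double phiDeg) {
        double halfFov = Math.toRadians(phiDeg) / 2.0;
        return g * (wPixels / 2.0) / (d * Math.tan(halfFov));
    }

    public static void main(String[] args) {
        // With W = 640 and phi = 90 degrees, an extent equal to its own
        // distance from the viewer spans about 320 pixels.
        System.out.println(screenSpaceSize(1.0, 1.0, 640, 90));
    }
}
```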
Normal maps are stored at twice the resolution, which gives 1.5 pixels per texture sample.
Geometry Clipmap Update
Instead of copying over the old data when shifting a level, we fill only the newly exposed L-shaped region (the texture lookup wraps around using mod operations on x and y). The new data comes either from decompressing the terrain (for coarser levels) or from synthesizing it (for finer levels). A finer-level texture is synthesized from the coarser one using an interpolatory subdivision scheme.
Constraints on the clipmap regions
– clip regions are nested for coarse-to-fine geometry prediction. Prediction requires maintaining one grid unit on all sides.
– rendered data (active region) is subset of data present in clipmap (clip region).
– perimeter of active region must lie on even vertices for watertight boundary with coarser level.
– the render region (active region) must be at least two grid units wide to allow a continuous transition between levels.
Rendering the Geometry Clipmap
//crop the active regions
for each level l = 1:m in coarse-to-fine order:
    crop active_region(l) to clip_region(l)
    crop active_region(l) to active_region(l-1)

//render all levels
for each level l = 1:m in fine-to-coarse order:
    render_region(l) = active_region(l) - active_region(l+1)
Transition Regions for Visual Continuity
In order to render regions at different levels of detail, the geometry near the boundary of each render region is morphed such that it transitions to the geometry of the coarser level.
Morphed elevation:
z' = (1 - α) z + α z_c
where z_c is the height value at the next-coarser level, and the blend parameter α is computed as α = max(α_x, α_y). Each of α_x and α_y is 0 in the interior of the active region and ramps up to 1 within a narrow transition band at its boundary. v'_x denotes the continuous x coordinate of the viewpoint in the grid of the clip region, and x_min and x_max are the integer extents of active_region(l).
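The morph can be sketched like this (the blend z' = (1 - α) z + α z_c and α = max(α_x, α_y) come from the paper; the simple linear ramp over a band of width w grid units is my simplified stand-in for the paper’s exact α_x / α_y formulas):

```java
public class TransitionMorph {
    static double clamp01(double t) { return Math.max(0.0, Math.min(1.0, t)); }

    // Per-axis blend: 0 in the interior, ramping to 1 over the outer
    // w grid units of the region [xMin, xMax].
    static double alphaAxis(double x, double xMin, double xMax, double w) {
        double distToEdge = Math.min(x - xMin, xMax - x);
        return clamp01(1.0 - distToEdge / w);
    }

    // Morphed elevation: blends this level's height z toward the
    // coarser level's height zCoarse as alpha goes from 0 to 1.
    static double morph(double z, double zCoarse, double ax, double ay) {
        double alpha = Math.max(ax, ay);   // alpha = max(alpha_x, alpha_y)
        return (1.0 - alpha) * z + alpha * zCoarse;
    }
}
```

In the interior (α = 0) the vertex keeps its own height; at the boundary (α = 1) it matches the coarser level exactly, so adjacent levels meet without cracks.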
Each clipmap level stores texture images for use in rasterization (i.e. a normal map). Mipmapping is disabled but LOD is performed on the texture using the same spatial transition regions applied to the geometry. So texture LOD is based on viewer distance rather than on screen-space derivatives (which is how hardware mipmapping works).
For each clipmap level, maintain z_min and z_max bounds for the local terrain. Each render region is partitioned into four rectangular regions (see Figure 6). Each rectangular region is extruded by z_min and z_max (the terrain bounds) to form an axis-aligned bounding box. The bounding boxes are intersected with the four-sided viewing frustum, and the resulting convex set is projected onto the XY plane. The axis-aligned rectangle bounding this set is used to crop the given rectangular region.
Create a terrain pyramid by downsampling from fine terrain to coarse terrain using a linear filter. Then reconstruct the levels in coarse-to-fine order using interpolatory subdivision from the next-coarser level plus a residual.
Fractal noise displacement is done by adding uncorrelated Gaussian noise to the upsampled coarser terrain. The Gaussian noise values are precomputed and stored in a lookup table for efficiency.
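A sketch of such a precomputed noise table (the table size, seed, and hash constants are my own choices; the paper doesn’t specify them):

```java
import java.util.Random;

public class NoiseTable {
    static final int SIZE = 256;
    static final double[] TABLE = new double[SIZE];
    static {
        Random rng = new Random(42);   // fixed seed: the table is deterministic
        for (int i = 0; i < SIZE; i++) {
            TABLE[i] = rng.nextGaussian();   // precomputed Gaussian samples
        }
    }

    // Hash grid coordinates into the table so the same (x, y) always maps
    // to the same noise value (needed for consistent re-synthesis).
    static double noise(int x, int y) {
        int h = (x * 73856093) ^ (y * 19349663);   // common spatial-hash constants
        return TABLE[Math.floorMod(h, SIZE)];
    }
}
```

During synthesis, this value would be added to the upsampled coarser height at (x, y).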
Here are some good references:
A good follow up paper: http://research.microsoft.com/en-us/um/people/hoppe/gpugcm.pdf
Awesome website on terrain rendering: http://www.vterrain.org
SIGGRAPH is the annual conference on computer graphics (CG) convened by the ACM SIGGRAPH organization. Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research.
Our paper, Updated Sparse Cholesky Factors for Corotational Elastodynamics, was presented at SIGGRAPH. Here’s a picture of me and Florian (the first author) doing a silly 40-second advertisement of our session during the Technical Papers Fast Forward.
SIGGRAPH ends up being a big reunion of friends, since the graphics industry and academia are so tightly knit. The left half of us met at Pixar during the summer internship. A lot of us ended up at studios including Pixar, Weta, and Bungie, and we started our new jobs within a few months of each other. The right half are my friends from the Berkeley Graphics Lab. Everyone loves technology, art, and being a nerd. I loved spending time with these like-minded people.
Curiosity landed on Mars successfully on the first day of SIGGRAPH. There was a viewing session set up at the Geek Bar in the Los Angeles Convention Center. A fair number of people came, including my family, who snuck into the Geek Bar to watch the landing with me. I believe my mom cried during the landing. It was so emotional, and I felt so proud to be an engineer.
And I got some quality time with my family and my parents’ not-so-small-anymore pug. It looks just like Frank the Pug from Men in Black. This pug changed our family dynamic from a grouchy family to a loving family. It loves everyone it meets. Here’s me pinching its velvet-pin-cushion cheeks.
One of the best things about SIGGRAPH is the screening of selections from the SIGGRAPH Animation Festival. I got to watch Disney’s new short Paperman, which screens in front of Wreck-It Ralph. Its style was an interesting blend of 2D and 3D framed in photographic black-and-white. I watched it three times, and during the third I teared up a bit. It moved me in the best way. Definitely a dreamer and romantic’s pick of the evening.
I also loved the style of the Twinings commercial called “Gets You Back To You”. My favorite part is when her feet plunge into the shallow ocean. It makes me want to walk along the shores of Seal Beach before I head back to Silicon Valley.
During downtime, I wandered into the SIGGRAPH Bookstore. It’s awesome because someone has already done the work of selecting books in your interest range for you. I ended up buying a bunch of them:
The DSLR Filmmaker’s Handbook: Real-World Production Techniques
Here’s a cool video illustrating different human-robot interaction projects currently in development at the Intelligent Robotics Group (where I work 😀).