The images above are examples of point-based rendering.
Point-based rendering, as the name implies, visualizes objects using point primitives instead of polygonal primitives such as triangles. The idea of using points as a rendering primitive was first suggested by Marc Levoy and Turner Whitted in 1985 (you can find the abstract and the full paper here). Later, in 2000, Szymon Rusinkiewicz and Marc Levoy introduced a novel technique named QSplat, which ignited widespread interest in this subject.
At this point, we should ask ourselves: why do we need alternate primitives other than triangles? We can find a good answer to this question in the abstract of the QSplat paper:
Advances in 3D scanning technologies have enabled the practical creation of meshes with hundreds of millions of polygons. Traditional algorithms for display, simplification, and progressive transmission of meshes are impractical for data sets of this size.
I think the most important motivation for point primitives is the enormous quantity of data now acquired in practice, thanks to the increased volume and accuracy of 3D scanners. Consider the simplest case: hundreds of millions of primitives in a single frame are projected onto a screen of limited resolution, such as 800 by 600. In such a scene, most primitives contribute to a single pixel (or even less) in the final displayed image, which means triangles are not a good choice. To avoid overhead such as the triangle-setup stage required for rendering polygon meshes, we can choose points as our rendering primitives instead.
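The arithmetic behind this argument can be checked in a few lines. This is a back-of-the-envelope sketch with hypothetical numbers, not code from any of the cited papers:

```python
# If a scene has more primitives than the screen has pixels, the
# average primitive covers less than one pixel, so triangle setup
# (edge equations, interpolants) is wasted work per primitive.

def avg_pixels_per_primitive(num_primitives, width, height):
    """Average screen area (in pixels) available per primitive."""
    return (width * height) / num_primitives

# 100 million primitives projected onto an 800x600 screen:
coverage = avg_pixels_per_primitive(100_000_000, 800, 600)
print(coverage)  # 0.0048 pixels per primitive, far below one pixel
```

With each primitive covering a small fraction of a pixel on average, a one-point-per-sample representation is the natural fit.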
After the advent of QSplat, many researchers rushed into this field (I was one of them), and more sophisticated and improved techniques were introduced. Surface splatting using an EWA (elliptical weighted average) filter produced very high-quality anti-aliased images [web site]. Others focused on exploiting the programmable graphics pipeline; the sequential point trees technique is one of them. Its authors proposed an innovative method of serializing the whole point hierarchy into graphics memory, so that the GPU by itself can apply a level of detail to the object on the fly [full paper].
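To give a flavor of what a splat filter does, here is a minimal sketch of the core idea (a simple Gaussian falloff with per-pixel normalization); this is an illustration only, not the EWA algorithm from the paper, which additionally warps the kernel into an ellipse in screen space and band-limits it:

```python
import math

def gaussian_weight(dx, dy, radius):
    """Weight of a splat whose center is (dx, dy) pixels away.
    'radius' controls how quickly the contribution falls off."""
    r2 = (dx * dx + dy * dy) / (radius * radius)
    return math.exp(-0.5 * r2)

def splat_pixel(contributions):
    """Blend accumulated (weight, color) pairs into one pixel color
    by normalized weighted averaging, as splatting renderers do."""
    total_w = sum(w for w, _ in contributions)
    if total_w == 0.0:
        return 0.0  # no splat touches this pixel
    return sum(w * c for w, c in contributions) / total_w

# Two overlapping splats with gray levels 0.2 and 0.8:
w1 = gaussian_weight(0.0, 0.0, 1.5)  # splat centered on the pixel
w2 = gaussian_weight(1.0, 1.0, 1.5)  # splat one pixel away diagonally
print(splat_pixel([(w1, 0.2), (w2, 0.8)]))
```

The smooth overlap of weighted kernels is what removes the hard splat boundaries and gives the anti-aliased look the EWA papers demonstrate.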
In 2005, Mario Botsch et al. proposed a more advanced GPU technique in their paper, and later improved their own algorithm using deferred shading. Their algorithm applied Phong shading and deferred shading (techniques commonly used in polygonal rendering) to point splatting, and by exploiting the power of the GPU they achieved both high image quality and real-time frame rates at the same time.