Some form of deferred rendering is becoming the standard rendering technique in games. I've long been a fan of deferred shading, and was quite pleased with the results after converting our forward renderer to deferred on a 360 project a little over a year ago. More recently, moving to a different project and engine, we went through the forward->deferred transition again, but our lead programmer tried a variation of the idea called deferred lighting. From the beginning, I wasn't a fan of the technique, for a variety of reasons. For a longer summary of the idea, and a more in-depth comparison of deferred shading vs deferred lighting, check out this extensive post on gameangst. From here on I'm going to assume you're familiar with both techniques. I mostly agree with Adrian's points, but there are a few issues I think he left out.
Deferred lighting is usually marketed as a more flexible alternative to 'traditional' deferred shading, with these proposed advantages:
- similar performance, perhaps better in terms of light/pixel overdraw cost
- no compromise in terms of material/shader flexibility
In short, I think the first claim is dubious at best, and the 2nd claim actually turns out to be false. Adrian has an exposition on why the 2nd material pass in deferred lighting gives you much less flexibility than you would think. The simple answer is that material flexibility (i.e., fancy shaders) comes in two forms: modifying the BRDF inputs, or altering the BRDF itself. Flexibility in modifying the BRDF inputs (multi-layer textures, procedural and animated textures, etc.) is easily accommodated in traditional deferred shading, so there is no advantage there. Deferred lighting is quite limited in how it can modify the BRDF itself, because it must use a common function for the incoming light (irradiance) at each surface point, for all materials. It only has flexibility in the 2nd half of the BRDF, the exiting light (radiance) along the eye path. Materials with fancy specular (like skin) are difficult to even fake with control only over exit radiance.
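To make that split concrete, here is a minimal C++ sketch of the per-pixel math. All names are illustrative, not from any particular engine; it assumes a typical Blinn-Phong light buffer with a specular exponent stored in the G-buffer:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3  operator+(const Vec3& b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec3  operator*(const Vec3& b) const { return {x * b.x, y * b.y, z * b.z}; }
    Vec3  operator*(float s)       const { return {x * s, y * s, z * s}; }
    Vec3& operator+=(const Vec3& b)      { *this = *this + b; return *this; }
};
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  normalize(const Vec3& v) { return v * (1.0f / std::sqrt(dot(v, v))); }

// Light accumulation: runs once per light, per covered pixel, and is the SAME
// code path for every material. The only per-material input is whatever fits
// in the G-buffer -- here, just a specular exponent.
static void AccumulateLight(const Vec3& N, const Vec3& L, const Vec3& V,
                            float specPower, const Vec3& lightColor, float atten,
                            Vec3& diffuseAccum, Vec3& specAccum)
{
    float NdotL = std::max(dot(N, L), 0.0f);
    Vec3  H     = normalize(L + V);
    float spec  = std::pow(std::max(dot(N, H), 0.0f), specPower);
    diffuseAccum += lightColor * (NdotL * atten);
    specAccum    += lightColor * (spec * NdotL * atten);
}

// The 2nd material pass can only recombine the accumulated terms: tint them,
// mask them, add emissive. It cannot change how the incoming light was
// integrated -- that decision was already baked in above.
static Vec3 MaterialPass(const Vec3& albedo, const Vec3& specColor,
                         const Vec3& diffuseAccum, const Vec3& specAccum)
{
    return albedo * diffuseAccum + specColor * specAccum;
}
```

Any material that needs a different integration of the incoming light (subsurface falloff, anisotropic highlights) has nowhere to hook into AccumulateLight.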
Now, there is a solution using stencil techniques that allows multiple shader paths during light accumulation, but traditional deferred shading can use this too, to get full BRDF flexibility. So deferred lighting has no advantage in BRDF flexibility either. (More on the stencil techniques in another post, but the sketch below shows the gist.)
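The gist: tag each pixel's stencil with a BRDF ID during the G-buffer pass, then run light accumulation once per (light, BRDF) pair with a stencil-equal test. The Device methods below are hypothetical stand-ins for whatever your platform's stencil/draw API provides, not a real API:

```cpp
#include <cstdio>
#include <utility>
#include <vector>

// Illustrative stand-ins, not a real graphics API.
struct Light  { int id; };
struct Shader { const char* name; };
struct Device {
    void SetStencilTestEqual(unsigned ref) { std::printf("stencil == %u\n", ref); }
    void BindShader(const Shader& s)       { std::printf("bind %s\n", s.name); }
    void DrawLightGeometry(const Light& l) { std::printf("draw light %d\n", l.id); }
};

enum BrdfId : unsigned { BRDF_STANDARD = 1, BRDF_SKIN = 2 };

// The G-buffer pass writes each material's BrdfId into stencil. Light
// accumulation then loops over (light, BRDF) pairs; the stencil test cheaply
// rejects pixels tagged with a different BRDF, so every pixel is still shaded
// exactly once per light -- just by the right lighting shader.
void AccumulateLights(Device& dev,
                      const std::vector<Light>& lights,
                      const std::vector<std::pair<BrdfId, Shader>>& brdfs)
{
    for (const Light& light : lights)
        for (const auto& [id, shader] : brdfs) {
            dev.SetStencilTestEqual(id);
            dev.BindShader(shader);
            dev.DrawLightGeometry(light);
        }
}
```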
But the real problem with deferred lighting is performance - it's not similar to deferred shading at all. The 1st problem is that, all else being equal, two full render passes are always going to be slower. The extra CPU draw call cost and geometry processing can be significant, especially if you are trying to push the geometry detail limits of the hardware (and shouldn't you be?). The geometry processing could only be 'free' if there were significant pixel shader work to load balance against, and the load balancing were efficient. On PS3, the load balancing is not efficient, and more importantly, there isn't much significant pixel shader work: most of it is in the light accumulation, which is moved out of any geometry pass in both techniques - so they easily end up geometry limited. This is the prime disadvantage of any deferred technique right now vs traditional forward shading. With forward shading, it's much easier to really push the geometry limits of the hardware, since all pixel shading is done in one heavy pass.
Furthermore, the overdraw performance of the two systems is not comparable, and for high-overdraw objects, such as foliage, deferred shading has a large advantage. Foliage objects are typically rendered with alpha test, and because of this they receive only a partial benefit from the hardware's Hi-Z occlusion. In our engine, the 1st pass of the two techniques for simple foliage is similar: both sample a single texture for albedo/alpha. The only difference is that in DS the 1st pass outputs albedo and normal, versus just the normal for DL. The 2nd pass, unique to DL, must read that same diffuse/albedo texture again, as well as the lighting information, which is often in one or two 64-bit textures. So it's easily 3 times the work per pixel touched.
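Counting texture fetches per foliage pixel touched makes that concrete (this is back-of-the-envelope arithmetic using the layout just described; exact costs vary by platform and format):
- DS, single pass: 1 fetch (albedo/alpha), writing albedo + normal. Done.
- DL, 1st pass: 1 fetch (albedo/alpha, needed for the alpha test), writing the normal.
- DL, 2nd pass: 1 fetch (albedo/alpha again) + 1-2 fetches of the 64-bit light buffer(s), so roughly 3x the per-pixel cost of the 1st pass - on top of paying the geometry and overdraw cost a second time.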
As a side note: the problems with Hi-Z and alpha test are manifold. With 2-pass rendering, you would think the fully populated z-buffer and Hi-Z from the 1st pass would limit overdraw in the 2nd pass to a little over 1.0. This is largely true for reasonable polygon scenes without alpha test. The problem with alpha test is that it creates a large number of depth edges and wide z-variation within each Hi-Z tile. This wouldn't be such a problem if the Hi-Z tiles stored a min/max z range, because then you could do fast rejection on the 2nd pass with z-equal compares. But they store a single z-value, either the min or the max, useful only for a greater-equal or less-equal compare test. Thus, when rendering triangles with alpha test in the 2nd pass, you get a lot of false overdraw from pixels with zero alpha that still pass the Hi-Z test.
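A simplified model of why the range would help (this is my own illustration, not any specific GPU's actual Hi-Z scheme):

```cpp
// Simplified model of coarse Hi-Z rejection. Real hardware stores a single
// conservative depth per tile:
struct HiZTile { float zConservative; };   // e.g. the tile's max depth

// One value can only answer a one-sided question, which suits an ordinary
// less-equal depth test:
bool TileMightPassLessEqual(const HiZTile& t, float z) {
    return z <= t.zConservative;
}

// A hypothetical min/max tile could also serve a z-equal test, trivially
// rejecting 2nd-pass fragments whose depth falls outside the tile's range --
// exactly the stale fragments alpha test leaves behind:
struct HiZTileMinMax { float zMin, zMax; };
bool TileMightPassEqual(const HiZTileMinMax& t, float z) {
    return t.zMin <= z && z <= t.zMax;
}
```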