ACM Sandbox Symposium: Rendering Volumes with Per-Pixel Displacement Mapping
OK, that sounds incredibly geeky. And it is. Here’s what Eric Risser just showed:
250,000 asteroids, each a unique mesh, each reduced to a single box that is cheap to render, with the actual shape defined entirely in a shader.
A sky full of tens of thousands of animated birds, each at a different point in its animation, flying over the Everglades in full 3d, so you can peek at them up close or fly through the flock, all done as one single mesh, a single giant cube.
A dog, standing still, with a high-detail texture and legs and everything, defined entirely in a couple of textures and a single cube.
Basically, what Risser just demoed is an incredibly low-poly way of doing incredibly high-poly scenes.
Very cool… but currently gated behind needing high-end shader support and knowing how to write shaders that can do something like this… And of course, any of these objects would collide as boxes, not as a high-poly dog. 🙂 Still, very cool. Usually displacement mapping is only used for making grates or bumpy walls…
Some of the stuff he showed is available here.
8 Responses to “ACM Sandbox Symposium: Rendering Volumes with Per-Pixel Displacement Mapping”
Yes, but it still has to be polygonized at some point; in this case, in the 3d accelerator. This is still advantageous, but it’s not totally free.
Part of the win comes from quick-and-easy level-of-detail automagically done by the accelerator card.
Sounds like an excellent way to develop high-quality background world elements with a relatively cheap cost in processor time.
The only problem is, it means I’d need to upgrade my graphics card. Again. And it was doing so well, too… 🙂
Two nitpicks, Raph: you seem to say that an entire flock of birds you can fly through would be rendered as a single cube. I think that’s incorrect; this technique would work by rendering each bird as a single quad.
In addition, the animation is performed by storing each keyframe as a separate image (all grouped in one texture), i.e. in the style of old-school sprites. Close-ups of an animated bird will look choppy unless there is a huge number of keyframes.
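In shader terms, that sprite-style keyframe lookup might boil down to something like this hypothetical GLSL helper (the side-by-side frame layout and the numFrames/animPhase uniforms are my assumptions for illustration, not anything from the talk):

```glsl
// Hypothetical GLSL helper (not from the talk): numFrames keyframe
// images are assumed to be packed side by side along the x axis of a
// single atlas texture, old-school sprite style.
uniform int   numFrames;   // how many keyframes the atlas holds
uniform float animPhase;   // 0..1; a per-bird offset desyncs the flock

vec2 keyframeUV(vec2 uv)   // uv is within one frame, in [0,1]^2
{
    int frame = int(floor(animPhase * float(numFrames))) % numFrames;
    return vec2((uv.x + float(frame)) / float(numFrames), uv.y);
}
```

Snapping to whole frames like this is exactly why close-ups look choppy: there is no in-between data to interpolate.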
I’m not exactly sure where this formally falls between image-based rendering and real-time raytracing, but yes, as shaders increase in power, we will be seeing these techniques get more detail and features: animation by space warping comes to mind.
Games like Total War or some flight sims use a completely different approach: the idea is that, if you are rendering several thousands of small objects, then you can get away with many of them having the same orientation and animation frame, so you render a bunch of them once and then “paste” them at different positions and distances.
Mike, it’s not free, but each asteroid is just 12 polys (a cube: six faces, two triangles each). That’s way, way cheaper than it would be normally.
Jare, the version on the website was quads. The version he showed at the symposium was cubes. And yes, you are correct that the animations are effectively texture swaps.
Basically, this is an advanced form of “impostors,” which is the technique you reference in your last paragraph. Impostors are generally camera-facing sprites that are rendered on the fly. The difference in this technique is that the source textures are actually multiple “height fields,” so to speak, used to achieve full 3d per-pixel displacement.
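To make the “multiple height fields” idea concrete, here is a minimal GLSL sketch assuming a two-channel layout; this exact encoding is my assumption, not necessarily what the papers use:

```glsl
// Assumed two-channel encoding (my illustration, not necessarily the
// paper's exact layout): red = depth of the front surface, green = depth
// of the back surface, so a flat texture describes a shape with thickness.
bool insideVolume(sampler2D layers, vec3 p)  // p in texture space [0,1]^3
{
    vec2 frontBack = texture(layers, p.xy).rg;
    return p.z >= frontBack.r && p.z <= frontBack.g;
}
```

That front/back pairing is what separates this from an ordinary heightfield: the object has actual depth you can look around, not just a bumpy front face.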
Ok, forgive this amateur wannabe geek, but couldn’t this technology be used in combination with other tech to give each pixel its own collision? I mean, the pixel doesn’t care as long as a server tells it when to stop, right?
12 polys per asteroid – Yes and no.
The game sends 12 polys per asteroid to the 3d accelerator, which means the IO to the card is reduced, as well as calculations on the CPU. HOWEVER, the accelerator card then converts those 12 polys to hundreds or thousands, and renders them… which still takes up GPU processing.
Hi guys, glad to see there’s interest in this technique. Just to clarify a few things: impostors aren’t rendered on a single cube/quad; rather, each object, such as a bird or asteroid, is rendered on its own cube/quad. All the geometry is, however, stored in one single mesh file (I believe that’s where the confusion occurred). This simplifies things from a scene-graph perspective, since a flock of birds no longer requires the kind of CPU-side management it would if each bird were an independent element. In any case, that’s auxiliary.
Mike’s a little off. No extra geometry is created on the GPU; in fact, this technique does not require shader model 4.0 (so no geometry shaders). I have given some thought to doing something like that in a geometry shader; it might be a bit faster, but all I’ve got is a 6800, so any research in that direction will have to wait until I upgrade. Rather than creating geometry on the GPU, I basically use the geometry the GPU is given as a virtual screen onto which I render the final image, by turning the pixel shader into a lightweight ray tracer. Since the pixel shader retrieves memory in the form of a texture (rather than geometry), I store my model as multiple height fields in a texture rather than as a vertex buffer. Otherwise, Mike pretty much hit the nail on the head.
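As a rough illustration of the “lightweight ray tracer” idea, a minimal sketch might look like the fragment shader below. This is not Eric’s shader: it marches a single height field with a fixed-step linear search, and the entryPoint/viewDir inputs and red-channel layout are assumptions; the actual technique uses multiple height-field layers.

```glsl
#version 330 core

// Minimal sketch, assuming: height stored in the red channel, and the
// vertex shader passing the ray's entry point and direction in texture
// space. Not the published implementation.
uniform sampler2D heightField;
uniform int       numSteps;    // e.g. 32; more steps, fewer stair artifacts

in  vec3 entryPoint;  // where the view ray enters the proxy box, [0,1]^3
in  vec3 viewDir;     // normalized view ray in the same space
out vec4 fragColor;

void main()
{
    vec3 p     = entryPoint;
    vec3 delta = viewDir / float(numSteps);
    bool hit   = false;

    // Fixed-step linear search: advance until the ray drops below the surface.
    for (int i = 0; i < numSteps; ++i) {
        float h = texture(heightField, p.xy).r;
        if (p.z <= h) { hit = true; break; }
        p += delta;
    }

    // Rays that never hit anything are the "dead pixels" discussed below.
    if (!hit) discard;

    // Placeholder shading; a real shader would refine the hit with a
    // binary search and fetch color/normals from companion textures.
    fragColor = vec4(vec3(texture(heightField, p.xy).r), 1.0);
}
```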
“Part of the win comes from quick-and-easy level-of-detail automagically done by the accelerator card.”
Level of detail is essentially per pixel now, and if overdraw is dealt with, then performance is determined by how many pixels you have on your screen rather than by the number of objects you’d like to draw, making it, as some have pointed out, an excellent technique for adding lots of cheap, no-hassle environmental detail to a scene.
Jare is correct about animation. While I didn’t mention animation in my sandbox talk, it was included in the paper printed in the proceedings (not sure who all was there; seems like at least a couple of y’all were). I advocate basically taking key-frames and storing them as textures; unfortunately, interpolation is pretty much impossible, so I don’t suggest using this technique on anything that needs very long or very sophisticated animations.

As for the flying birds in the demo I showed at sandbox, that animation actually was done using space warping. I had to write a program to export my models into the multiple-height-field texture format for rendering, and having it step through animation keyframes of the model and automatically render frames on a grid… well, that would have been a bit more work than just throwing some sin functions into the shader itself and warping the texture in real time (it’s also free on memory and gives a more fluid-looking animation). After an hour of trial and error I managed to produce a fairly nasty-looking equation that I felt could pass for flying.

I think this approach could be viable if it were made artist-friendly; as of now, it required writing a big mathy equation and placing it at several key points in the shader. With some of the drag-and-drop shade-tree stuff I’ve seen, it wouldn’t be too difficult to give a user basic math building blocks and let them visually build the equation that way. Anyway, at the moment nothing like that exists, so unless you’re a programmer with a good intuition for animation or an artist with a strong math background, I wouldn’t suggest attempting to hard-code animation directly into the shader.
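Purely for flavor, since the actual equation isn’t reproduced here, a sin-based space warp of this sort might be a small function applied to each ray sample before the height-field fetch, which is why it has to be wired into several points of the shader. All names and constants below are invented:

```glsl
// Illustrative only; all constants are invented. A time-driven warp
// applied to each ray sample before the height-field lookup, in the
// spirit of the sin-based space warping described above.
uniform float time;

vec3 flapWarp(vec3 p)   // p in texture space [0,1]^3
{
    float wing = abs(p.x - 0.5) * 2.0;  // 0 at the body, 1 at a wingtip
    // Beat the wings: displace height more toward the tips, with a phase
    // lag along the wing so the motion ripples outward.
    p.z += 0.15 * wing * sin(8.0 * time + 3.0 * wing);
    return p;
}
```

Because the warp bends the sample positions rather than the stored texture, it costs no extra memory and animates smoothly, exactly the trade-off described above.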
In case anyone was wondering why I suggested boxes at sandbox and quads on the website, here’s the deal. I originally developed the method using quads because my inclination was to reduce the amount of geometry as much as possible. Later, while doing research towards volumetric cloud rendering, I realized that quads result in a lot of overdraw, and that a box, while it’s a bit more geometry, is really a better balance because it drastically reduces the number of dead pixels being drawn, especially for elongated shapes.

The quad method is featured as chapter 21, “True Impostors,” in GPU Gems 3, which was released a day or two after sandbox. The box method was published as “Rendering 3D Volumes Using Per Pixel Displacement Mapping” (academia always has geeky titles 🙂 ) at sandbox. So officially they are different papers; in fact, the latter is written as an extension of the former. Nvidia’s original call for papers for GPU Gems 3 was almost a year ago, while sandbox was only four or so months ago, so from my research standpoint one paper clearly precedes the other, but from the public’s perspective they were both released at the same time, which I believe is causing a bit of confusion.
Anyway, didn’t mean to ramble on so much, happy coding!
Eric Risser