Tessellation and render passes... how?

7 comments, last by ja0335 10 years, 9 months ago

Hi...

I want to know the technique for rendering in multiple passes with tessellation. I mean, say I have all the logic to tessellate the geometry in my first pass, then I add a second pass for illumination, and that pass runs once per light.

When I render the result I get horrible overlap between the tessellated geometry (first pass) and the non-tessellated geometry (second pass).

If I put the tessellation logic in the lighting pass I will tessellate the object once per light, and I don't think that is good.

What is the correct way to do render passes and still have tessellation?

Thanks.

Juan Camilo Acosta Arango

Bogotá, Colombia


You can tessellate the geometry on a pre-pass and use the stream output functionality to capture the high resolution geometry for that frame. Then this generated geometry can be used for all subsequent rendering passes.

It is possible to both rasterize a model and stream it out at the same time, so you can combine this step with the building of your G-buffer.
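To make that concrete, here is a rough C++/D3D11 sketch of the frame flow, assuming the shaders, buffers, and input layouts already exist. All the names (streamOutGS, soBuffer, passThroughVS, and so on) are placeholders rather than code from any particular engine; the point is only to show where the stream output target gets bound and where DrawAuto takes over for the per-light passes:

```cpp
#include <d3d11.h>
#include <vector>

// Sketch of the two-stage approach: tessellate once into a stream-output
// buffer, then reuse the captured geometry for every lighting pass.
// Error handling and constant-buffer updates are omitted.
void RenderFrame(ID3D11DeviceContext* context,
                 ID3D11Buffer* patchVB, UINT patchStride, UINT patchVertexCount,
                 ID3D11InputLayout* patchLayout,
                 ID3D11VertexShader* tessVS, ID3D11HullShader* hs,
                 ID3D11DomainShader* ds, ID3D11GeometryShader* streamOutGS,
                 ID3D11Buffer* soBuffer, UINT streamedStride,
                 ID3D11InputLayout* streamedLayout,
                 ID3D11VertexShader* passThroughVS,
                 const std::vector<ID3D11PixelShader*>& perLightPS)
{
    UINT zero = 0;

    // Pre-pass: tessellate once and capture the expanded geometry via stream output.
    context->IASetInputLayout(patchLayout);
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);
    context->IASetVertexBuffers(0, 1, &patchVB, &patchStride, &zero);
    context->VSSetShader(tessVS, nullptr, 0);
    context->HSSetShader(hs, nullptr, 0);
    context->DSSetShader(ds, nullptr, 0);
    context->GSSetShader(streamOutGS, nullptr, 0);  // created with CreateGeometryShaderWithStreamOutput
    context->SOSetTargets(1, &soBuffer, &zero);     // offset 0 = write from the start of the buffer
    // If the GS keeps a rasterized stream, this same draw can also fill the G-buffer.
    context->Draw(patchVertexCount, 0);

    ID3D11Buffer* nullSO = nullptr;
    context->SOSetTargets(1, &nullSO, &zero);       // stop capturing

    // Lighting passes: reuse the captured geometry with no tessellation stages bound.
    context->HSSetShader(nullptr, nullptr, 0);
    context->DSSetShader(nullptr, nullptr, 0);
    context->GSSetShader(nullptr, nullptr, 0);
    context->VSSetShader(passThroughVS, nullptr, 0);
    context->IASetInputLayout(streamedLayout);      // matches the streamed-out vertex format
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    context->IASetVertexBuffers(0, 1, &soBuffer, &streamedStride, &zero);

    for (ID3D11PixelShader* ps : perLightPS)        // one pass per light
    {
        context->PSSetShader(ps, nullptr, 0);
        context->DrawAuto();                        // draws exactly what was streamed out
    }
}
```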

Jason, do you know off hand roughly what the performance hit of doing that is, versus re-tessellating for n passes?
I remember when I tried that for animated meshes to avoid having to re-skin multiple times (for the current and last frame for motion blur, plus each scene pass), it turned out to be quite a bit slower than just re-skinning. Of course, that was years ago (DX10), and tessellation is a lot more expensive than skinning, so I am curious to hear of more up-to-date tests on this :).

I haven't tried it out personally, but there were a few presentations from GDC a couple of years ago that mentioned this could be a win. Certainly I would expect it to be better than running full tessellation in all passes, but even that depends on how much tessellation work you are doing.

My usual advice in this type of situation is to design your architecture so that it is easy to swap one method for the other. If you find that streaming out the mesh is faster, then go with it. If not, just use the fully tessellated version. That way you also future-proof yourself for the next round of new features.

You can tessellate the geometry on a pre-pass and use the stream output functionality to capture the high resolution geometry for that frame. Then this generated geometry can be used for all subsequent rendering passes.

It is possible to both rasterize a model and stream it out at the same time, so you can combine this step with the building of your G-buffer.

What is this "stream output functionality"? Could you point me to an example? Thanks.

Juan Camilo Acosta Arango

Bogotá, Colombia

The stream output stage is conceptually attached to the output of the geometry shader. It allows you to bind vertex buffers that will receive the stream of geometry that would normally head to the rasterizer. You have to create a special geometry shader to make this work, and specify the layout of the data that you want to stream out. You can read more about it here.
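For example, a minimal creation sketch might look like the following; the semantic names, component counts, and buffer size here are assumptions and have to match whatever your own domain/geometry shader actually outputs:

```cpp
#include <d3d11.h>

// Creates a stream-output geometry shader and the buffer that receives the
// tessellated geometry. The layout (position + normal) and the worst-case
// vertex count are placeholders.
HRESULT CreateStreamOutResources(ID3D11Device* device,
                                 const void* gsBytecode, SIZE_T gsBytecodeLength,
                                 ID3D11GeometryShader** outGS,
                                 ID3D11Buffer** outSOBuffer)
{
    // Describe the layout of the data that will be streamed out,
    // one entry per output element.
    D3D11_SO_DECLARATION_ENTRY soDecl[] =
    {
        // Stream, SemanticName, SemanticIndex, StartComponent, ComponentCount, OutputSlot
        { 0, "SV_POSITION", 0, 0, 4, 0 },
        { 0, "NORMAL",      0, 0, 3, 0 },
    };
    UINT numEntries = sizeof(soDecl) / sizeof(soDecl[0]);
    UINT stride = (4 + 3) * sizeof(float);   // bytes per streamed-out vertex

    // Create the geometry shader with stream output attached. RasterizedStream = 0
    // lets stream 0 also continue to the rasterizer, so the same pass can write
    // the G-buffer; use D3D11_SO_NO_RASTERIZED_STREAM to only capture.
    HRESULT hr = device->CreateGeometryShaderWithStreamOutput(
        gsBytecode, gsBytecodeLength,
        soDecl, numEntries,
        &stride, 1,
        0,                       // RasterizedStream
        nullptr, outGS);
    if (FAILED(hr))
        return hr;

    // The buffer that receives the expanded geometry. It is bound as a
    // stream-output target in the pre-pass and as a vertex buffer afterwards.
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = stride * 1000000;       // assumed worst case for streamed-out vertices
    desc.Usage     = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_VERTEX_BUFFER | D3D11_BIND_STREAM_OUTPUT;
    return device->CreateBuffer(&desc, nullptr, outSOBuffer);
}
```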

Here is an overview of the pipeline (taken from our D3D11 book), where you can see the location of the stream output stage:

[attachment=14964: Figure_2_9.png — pipeline overview showing the stream output stage]

Hey JasonZ,

Is there any sample of this in the Hieroglyph 3 engine? I understand best by looking at code.

Juan Camilo Acosta Arango

Bogotá, Colombia

Hey JasonZ,

Is there any sample of this in the Hieroglyph 3 engine? I understand best by looking at code.

Unfortunately not - although I suppose this discussion gives me a reason to add one :)

I have been searching the internet for a sample that tessellates a base mesh and streams out the new geometry, but unfortunately I haven't found anything. Can someone point me to a tutorial? Everything I have found is about particle systems, but I need to start from a base mesh.

Thanks

Juan Camilo Acosta Arango

Bogotá, Colombia

This topic is closed to new replies.
