On-the-fly geometry creation

Ok the topic's a bit vague, I'll try to explain the idea before asking the question.

I'm reworking my user interface code for a new project. To that end, the rendering portion of the UI focuses on 'glyphs', which are simple 2D polygonal graphics. They could be a box, an underscore, an arrow, or the letter 'Q', etc. Glyphs can range in size from a few polys to a few hundred at most. Each control in the UI would consist of a few glyphs. For example, a button might be a box glyph along with a few glyphs for the letters on the button. The glyphs are stored as triangle strips in a 2D texture. The idea is that each control would output a single vertex to the vertex stream/buffer for each glyph, and then that would be 'expanded' on the fly on the video card. So 1 vertex input could expand to anywhere from 1 to a few hundred tris which are then rendered to the screen.

My first thought was to use the geometry shader. The output vertex format to the pixel shader would need at least 16 scalar entries, which (given D3D11's 1024-scalar limit on geometry shader output) caps each invocation at 64 vertices. I could cut large glyphs up into multiple parts, but this does significantly complicate things.
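Roughly, what I have in mind for this variant looks like the sketch below (simplified and untested; the texture layout, struct names and the missing projection are just illustrative):

```hlsl
// Rough sketch of the geometry shader version (illustrative names, untested).
// Glyph vertex data is packed one glyph per row of a float4 texture: texel 0 of the
// row holds the vertex count, the remaining texels hold xy = position, zw = uv.

struct GlyphInstance { float2 screenPos : POSITION; uint glyphId : GLYPHID; };
struct PSInput       { float4 pos : SV_Position; float2 uv : TEXCOORD0; };

Texture2D<float4> GlyphData : register(t0);

// D3D11 caps GS output at 1024 scalars, so a 16-scalar output vertex allows at most
// 1024 / 16 = 64 vertices per invocation (the real PSInput is bigger than this one).
[maxvertexcount(64)]
void GS(point GlyphInstance input[1], inout TriangleStream<PSInput> stream)
{
    uint count = (uint)GlyphData.Load(int3(0, input[0].glyphId, 0)).x;

    for (uint i = 0; i < min(count, 64u); ++i)
    {
        float4 v = GlyphData.Load(int3(i + 1, input[0].glyphId, 0));
        PSInput o;
        o.pos = float4(input[0].screenPos + v.xy, 0, 1);   // projection omitted
        o.uv  = v.zw;
        stream.Append(o);                                  // emits one triangle strip
    }
}
```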

My next thought was to use instancing in the geometry shader. This works well for large glyphs, but smaller glyphs (which will be common) would lead to many 'null' instances, i.e. instances that produce no output. For example, if each input vertex was executed over 8 instances, then for most glyphs instances 1 through 7 would produce no output. I'm worried that this might be a performance concern.
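A hedged sketch of that instanced version, assuming the same texture layout as above (the chunk size and instance count are made-up numbers):

```hlsl
// Sketch of the GS-instancing version (illustrative). Each instance handles a
// 62-vertex chunk of the glyph's strip: an even stride with a 2-vertex overlap keeps
// strip continuity and winding intact. Small glyphs make instances 1..7 return
// immediately -- the "null instance" concern.

struct GlyphInstance { float2 screenPos : POSITION; uint glyphId : GLYPHID; };
struct PSInput       { float4 pos : SV_Position; float2 uv : TEXCOORD0; };

Texture2D<float4> GlyphData : register(t0);   // same layout as the previous sketch

[instance(8)]          // covers up to 8 * 62 + 2 = 498 strip vertices per glyph
[maxvertexcount(64)]
void GS(point GlyphInstance input[1],
        uint instanceId : SV_GSInstanceID,
        inout TriangleStream<PSInput> stream)
{
    uint count = (uint)GlyphData.Load(int3(0, input[0].glyphId, 0)).x;
    uint start = instanceId * 62;
    if (start >= count)
        return;                     // null instance: produces no output at all

    for (uint i = start; i < min(start + 64u, count); ++i)
    {
        float4 v = GlyphData.Load(int3(i + 1, input[0].glyphId, 0));
        PSInput o;
        o.pos = float4(input[0].screenPos + v.xy, 0, 1);   // projection omitted
        o.uv  = v.zw;
        stream.Append(o);
    }
}
```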

Another thought was to use the input assembler/vertex shader to perform the geometry expansion. The issue here is that this would lead to many null/degenerate triangles which would have to be discarded by the video card. Again this could be a performance issue.
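That version might look roughly like this (again just a sketch; the fixed vertex budget and buffer names are illustrative), drawn with DrawInstanced using one instance per glyph:

```hlsl
// Sketch of the vertex-shader-only version (illustrative). Drawn with
// DrawInstanced(MAX_GLYPH_VERTS, glyphCount, 0, 0) and strip topology: every glyph
// gets a fixed vertex budget and the unused tail collapses into degenerate triangles.

#define MAX_GLYPH_VERTS 256

struct GlyphInstance { float2 screenPos; uint glyphId; };
struct PSInput       { float4 pos : SV_Position; float2 uv : TEXCOORD0; };

Texture2D<float4>               GlyphData : register(t0);   // same layout as above
StructuredBuffer<GlyphInstance> Instances : register(t1);   // per-glyph instance data

PSInput VS(uint vertexId : SV_VertexID, uint instanceId : SV_InstanceID)
{
    GlyphInstance inst = Instances[instanceId];
    uint count = (uint)GlyphData.Load(int3(0, inst.glyphId, 0)).x;

    // Vertices past the glyph's real count are clamped onto its last vertex, producing
    // zero-area triangles the rasterizer has to cull -- the performance concern above.
    uint i = min(vertexId, count - 1);

    float4 v = GlyphData.Load(int3(i + 1, inst.glyphId, 0));
    PSInput o;
    o.pos = float4(inst.screenPos + v.xy, 0, 1);   // projection omitted
    o.uv  = v.zw;
    return o;
}
```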

I was also pondering the use of the hull/domain shaders and the tessellator to perform the geometry expansion. But I'm not all that familiar with them (having never used or worked with them), and as far as I can tell the resulting tessellated meshes are quite restricted and wouldn't be easily transformed into arbitrary shapes.

Obviously I wish I had the time to code, test, and profile all 4, but alas that is not quite feasible. So, any ideas or thoughts? Is there any easy way to construct arbitrary geometry 'on the fly' on a GPU?

Do you really have to expand a 2D texture into a 3D model? If you just need text, render a small part of a texture with blending turned on.

Maybe I didn't explain it properly. The 2D texture doesn't store image data; it would be read by the geometry shader (or the vertex shader in the fourth proposal) to construct the glyphs, so it contains vertex data. The 'vertex data' that's fed in as input is actually 'instance' data. So the glyph 'id' in the instance data would correspond to a row (y-axis) of the 2D texture, and the glyph's vertex data (in a compact form) would be stored in that row.

The idea is that each control would output a single vertex to the vertex stream/buffer for each glyph, and then that would be 'expanded' on the fly on the video card. So 1 vertex input could expand to anywhere from 1 to a few hundred tris which are then rendered to the screen.

Is this part a hard requirement? I like breaking hard problems down into easier ones as early as possible, so my first thought is to just output more than one vertex to the stream/buffer for complex glyphs. Basically, just cap the size of a glyph and split complex ones into more than one. This should ensure good performance, and provide better flexibility in the future. As a random example, you might find performance drops off on a shader if you exceed a certain number of outputs and this approach lets you very easily mess with the number of outputs that will get generated.

Sorry for the delayed reply, holiday festivities and family can tie up a lot of time...

Thanks for the ideas, in the end after playing around with it all I went with a different approach. I'll describe it here briefly in case anyone's still reading this thread.

The general idea is that I wanted to send as little information to the video card as possible, and the items to be drawn are variable-sized glyphs. The ideas above were attempts to expand the data on-the-fly. Eventually I decided to go another route. I render enough vertices to cover all the glyphs that need to be drawn, but the vertices themselves contain no actual data. In the vertex shader, with the help of a few constant buffers, I map the vertex id to its instance id and its glyph vertex number. So say the first glyph requires 12 vertices to render and the second glyph 8: then vertex #2 maps to the second vertex of the first glyph, and vertex #14 maps to the second vertex of the second glyph. Once I have the instance data and glyph vertex number I just combine the two to get the final vertex data, which is sent on down the pipeline.
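Something along these lines (heavily simplified; the buffer layouts, sizes and names here are illustrative rather than my exact code):

```hlsl
// Rough sketch of the final approach. Draw(totalVertexCount, 0) is issued with no
// vertex buffer bound, so SV_VertexID is the only per-vertex input. Topology details
// (strip stitching between glyphs) are ignored in this sketch.

struct GlyphInstance { float2 screenPos; uint glyphId; };
struct PSInput       { float4 pos : SV_Position; float2 uv : TEXCOORD0; };

Texture2D<float4> GlyphData : register(t0);   // one glyph per row: xy = position, zw = uv

cbuffer VertexToInstance : register(b0)       // one 32-bit entry per instance:
{                                             // the index of that instance's first vertex
    uint instanceCount;
    uint firstVertex[1023];                   // note: cbuffer packing wastes 12 bytes per
};                                            // entry; packing into uint4s would be tighter

cbuffer InstanceData : register(b1)
{
    GlyphInstance instances[1024];            // hypothetical cap on glyphs per draw
};

PSInput VS(uint vertexId : SV_VertexID)
{
    // Binary search for the instance whose vertex range contains vertexId.
    uint lo = 0, hi = instanceCount - 1;
    while (lo < hi)
    {
        uint mid = (lo + hi + 1) / 2;
        if (firstVertex[mid] <= vertexId) lo = mid; else hi = mid - 1;
    }
    uint glyphVertex = vertexId - firstVertex[lo];   // vertex number within the glyph
    GlyphInstance inst = instances[lo];

    float4 v = GlyphData.Load(int3(glyphVertex, inst.glyphId, 0));

    PSInput o;
    o.pos = float4(inst.screenPos + v.xy, 0, 1);     // projection to clip space omitted
    o.uv  = v.zw;
    return o;
}
```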

This way I only need to upload the vertex -> instance mapping (a simple table with one 32-bit entry per instance) and the per-instance data. All of this is a lot less than doing it the normal way of just uploading the vertices directly.

