[SharpDX] Draw UI elements to screen as quads

2 comments, last by Stefan Fischlschweiger 9 years, 1 month ago

I'm currently trying to get a user interface and HUD working in my game. Text-based stuff works via Direct2D/DirectWrite.

However, when it comes to graphical elements I'm confused about how to do it. I tried to do it with Direct2D's DrawBitmap function, but somehow that only works when there is no 3D content being drawn as well.

So I'm considering three options for doing this via Direct3D:

1. Make textured squares and rectangles in 3dsMax (or any modeller) and load these through my model loader, then draw them to screen.

2. Instead of making a model, write a Quad class with a vertex and index buffer, fill those buffers when instantiating a Quad, then draw it to screen with a separate draw function (see the sketch after this list).

3. Leave out buffers altogether and have a shader do the heavy lifting: feed it only the position and size of the quad and calculate the vertex positions and UVs in the shader, like the fullscreen-triangle code you can find all over the web.
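For illustration, option 2 might look roughly like this in SharpDX. This is an untested sketch; UiVertex and its layout are assumptions and have to match your input layout and vertex shader:

using SharpDX;
using SharpDX.Direct3D;
using SharpDX.Direct3D11;
using SharpDX.DXGI;
using Buffer = SharpDX.Direct3D11.Buffer;
using Device = SharpDX.Direct3D11.Device;

// Assumed vertex layout: 2D position in pixels plus UV.
public struct UiVertex
{
    public Vector2 Position;
    public Vector2 TexCoord;

    public UiVertex(float x, float y, float u, float v)
    {
        Position = new Vector2(x, y);
        TexCoord = new Vector2(u, v);
    }
}

public class Quad
{
    private readonly Buffer vertexBuffer;
    private readonly Buffer indexBuffer;

    // Builds a quad in pixel coordinates. Alternatively, build a unit
    // quad once and move/scale it with a per-element world matrix.
    public Quad(Device device, float x, float y, float width, float height)
    {
        var vertices = new[]
        {
            new UiVertex(x,         y,          0, 0), // top-left
            new UiVertex(x + width, y,          1, 0), // top-right
            new UiVertex(x + width, y + height, 1, 1), // bottom-right
            new UiVertex(x,         y + height, 0, 1), // bottom-left
        };
        var indices = new ushort[] { 0, 1, 2, 0, 2, 3 }; // two triangles

        vertexBuffer = Buffer.Create(device, BindFlags.VertexBuffer, vertices);
        indexBuffer = Buffer.Create(device, BindFlags.IndexBuffer, indices);
    }

    public void Draw(DeviceContext context)
    {
        context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
        context.InputAssembler.SetVertexBuffers(0,
            new VertexBufferBinding(vertexBuffer, Utilities.SizeOf<UiVertex>(), 0));
        context.InputAssembler.SetIndexBuffer(indexBuffer, Format.R16_UInt, 0);
        context.DrawIndexed(6, 0, 0);
    }
}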

I tried #3 and #2, with little to no success. Maybe I implemented it wrong, but maybe the whole approach is wrong.

So how do you guys usually do UIs/HUDs, and what should I keep an eye on when making this kind of thing?

I need to be able to move and/or resize some of the UI stuff on the fly.


I wrote a really quick and dirty UI module that does sort of #2 above. I keep a dictionary of "gumps" which are just a quad with a texture, scale, position, and color. I check a dirty flag every update and rebuild a dynamic vertex buffer with the info from the gump dictionary.

I draw each gump one at a time, with an ortho projection matrix and just identity for the view. The shader takes a 2D position and 2D UV, and just multiplies the texture by the color.

It works for really basic stuff, but there's no concept of draw order, so complicated layouts and alpha blending probably won't work. I made it just to display some static images really quickly.

I can paste some of the code if you'd like, but there's a bit of cruft in there from my material library.
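A minimal sketch of the dynamic-buffer approach described above might look like this; the actual code isn't shown in the thread, so Gump, GumpVertex, and the field names here are made up for illustration:

using System.Collections.Generic;
using SharpDX;
using SharpDX.Direct3D11;
using Buffer = SharpDX.Direct3D11.Buffer;

// Illustrative vertex and "gump" records: a textured quad with
// position, size, and tint color.
public struct GumpVertex
{
    public Vector2 Position;
    public Vector2 TexCoord;
    public Color4 Color;
}

public class Gump
{
    public Vector2 Position;
    public Vector2 Size;
    public Color4 Color;
}

public class GumpRenderer
{
    public Dictionary<string, Gump> Gumps = new Dictionary<string, Gump>();
    public bool Dirty = true;

    private Buffer vertexBuffer;

    public void CreateBuffer(Device device, int maxGumps)
    {
        // Dynamic buffer so the CPU can rewrite it each time it's dirty.
        vertexBuffer = new Buffer(device, new BufferDescription
        {
            SizeInBytes = Utilities.SizeOf<GumpVertex>() * 6 * maxGumps,
            Usage = ResourceUsage.Dynamic,
            BindFlags = BindFlags.VertexBuffer,
            CpuAccessFlags = CpuAccessFlags.Write,
        });
    }

    // Rebuild the whole buffer whenever anything changed.
    public void RebuildIfDirty(DeviceContext context)
    {
        if (!Dirty) return;
        DataStream stream;
        context.MapSubresource(vertexBuffer, MapMode.WriteDiscard, MapFlags.None, out stream);
        foreach (var g in Gumps.Values)
        {
            // Expand position/size into two triangles (6 vertices).
            var p = g.Position; var s = g.Size;
            Write(stream, p.X,       p.Y,       0, 0, g.Color);
            Write(stream, p.X + s.X, p.Y,       1, 0, g.Color);
            Write(stream, p.X + s.X, p.Y + s.Y, 1, 1, g.Color);
            Write(stream, p.X,       p.Y,       0, 0, g.Color);
            Write(stream, p.X + s.X, p.Y + s.Y, 1, 1, g.Color);
            Write(stream, p.X,       p.Y + s.Y, 0, 1, g.Color);
        }
        context.UnmapSubresource(vertexBuffer, 0);
        Dirty = false;
    }

    private static void Write(DataStream s, float x, float y, float u, float v, Color4 c)
    {
        s.Write(new GumpVertex { Position = new Vector2(x, y), TexCoord = new Vector2(u, v), Color = c });
    }
}

To keep the "one at a time" drawing, you could then bind each gump's texture and issue context.Draw(6, i * 6) per gump.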

What I did for a GUI was just define a quad in screen size (vertices going from (0, 0, 1) to whatever size you want your element to be, keeping z = 1) and use a matrix to decide where the quad should go on screen. I send an orthographic matrix to the GPU as the view-projection (Matrix.OrthoOffCenterLH(0, screenwidth, screenheight, 0, 1f, 2f)) and I also send a translation matrix (the model/world matrix, if you want) to the GPU.

Then a very simple GUI shader:


// Assumed declarations; the cbuffer layout and register slots are illustrative:
cbuffer Transforms : register(b0)
{
	float4x4 world;
	float4x4 viewProj;
};

Texture2D image : register(t0);
SamplerState imageSampler : register(s0);

struct VS_IN { float4 pos : POSITION; float2 tex : TEXCOORD0; };
struct PS_IN { float4 pos : SV_Position; float2 tex : TEXCOORD0; };

PS_IN VS(VS_IN input)
{
	PS_IN output = (PS_IN)0;

	output.pos = mul(mul(input.pos, world), viewProj);
	output.tex = input.tex;

	return output;
}

float4 PS(PS_IN input) : SV_Target
{
	return image.Sample(imageSampler, input.tex);
}
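On the C# side, the matrix setup described above might look roughly like this (a sketch; UiTransforms and the parameter names are made up for illustration):

using SharpDX;

static class UiTransforms
{
    // Maps pixel coordinates straight to clip space: (0,0) is the
    // top-left corner, (screenWidth, screenHeight) the bottom-right.
    public static Matrix ViewProjection(float screenWidth, float screenHeight)
    {
        return Matrix.OrthoOffCenterLH(0, screenWidth, screenHeight, 0, 1f, 2f);
    }

    // World matrix for one element: scale a unit quad to its pixel
    // size, then move it into place. Z is left alone, so the quad's
    // baked-in z = 1 stays inside the ortho matrix's 1..2 depth range.
    public static Matrix World(float x, float y, float width, float height)
    {
        return Matrix.Scaling(width, height, 1f) * Matrix.Translation(x, y, 0f);
    }
}

One thing to watch: SharpDX matrices are row-major while HLSL cbuffers default to column-major packing, so transpose the matrices before uploading them (or declare them row_major in the shader).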

I thought as much; maybe I'll come back to this method. But why OrthoOffCenter?

