Rendering multiple textures and drawing text in DirectX11


Hi

I am new to DirectX and still learning. I am using SharpDx and DirectX11 + DirectX9.

I have come across two hurdles/issues, probably because I am still learning:

1. Rendering two or more textures and then updating each of them at different times with new image data. I can update a single texture by mapping it. I have attached a screenshot which depicts my issue. How could this problem be resolved?

2. I tried to render/draw text onto my render target. The text would only draw if it wasn't within the dimensions of the texture rectangle. Why would this happen? What could I do to overcome this issue?

[attachment=30935:SampleWindow.jpg]

Any ideas or help would be much appreciated.

If you require any more information then do let me know.


What problem are you having updating two textures? First, note that manually updating textures per frame isn't optimal, but if you have to (dynamic UI from something like Awesomium comes to mind), there's no restriction on updating multiple textures. Create a separate texture for each one you want to render, then update them separately as needed.
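Something along these lines is what I mean, as a minimal SharpDX/C# sketch (just a sketch, assuming the texture was created with ResourceUsage.Dynamic and CpuAccessFlags.Write; UpdateDynamicTexture, pixelData and rowPitch are placeholder names, not anything from your code):

```csharp
using SharpDX;
using SharpDX.Direct3D11;

static class TextureUpdate
{
    // Each texture you want to update gets its own Texture2D/ShaderResourceView;
    // map/unmap whichever one has new data this frame.
    // Assumes the texture was created with ResourceUsage.Dynamic and CpuAccessFlags.Write.
    public static void UpdateDynamicTexture(DeviceContext context, Texture2D texture,
                                            byte[] pixelData, int rowPitch, int height)
    {
        // WriteDiscard throws away the old contents and returns a CPU-writable pointer.
        DataBox box = context.MapSubresource(texture, 0, MapMode.WriteDiscard, MapFlags.None);

        // Copy row by row, because the GPU row pitch may be larger than our packed rowPitch.
        for (int y = 0; y < height; y++)
        {
            Utilities.Write(box.DataPointer + y * box.RowPitch, pixelData, y * rowPitch, rowPitch);
        }

        context.UnmapSubresource(texture, 0);
    }
}
```

You'd call this once per texture that actually has new data that frame; textures that haven't changed can be left alone.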

I'm not sure about your second question though. Do you mean you're rendering text onto a texture, and you don't see the text if it's outside of the texture? Or are you rendering to the screen and don't see the text if it's outside the quad that you are drawing your texture with? If it's the former (rendering text to a texture), then you're limited by that texture's dimensions (you can only render onto that texture, then render that texture onto the screen). If it's the latter (rendering onto the screen over a texture quad), are you setting a viewport or scissor rect when rendering your texture? If you are, make sure you clear them before rendering the text; if you're limiting the texture by a rectangle, then all rendering after that will still be limited.

Thanks for the reply @xycsoscyx

The trouble I was having was understanding how multiple textures can be rendered onto a single render target.

From what I understood from your reply, in order to render multiple textures I would need to:

  1. set the viewport rectangle
  2. set the input layout and vertex buffer
  3. update the pixel shader's ShaderResourceView
  4. call Draw on the device's immediate context
  5. then flush it.

I believe these are the steps to follow. Correct me if I'm wrong.

As for the second part of the question, your reply makes sense to me as it describes the exact behaviour I'm seeing. Thanks for clarifying this for me.

Ah, so you just mean drawing two textures onto the screen? You can just draw quads directly with each texture instead of setting the viewport per call:

1. set the viewport (probably the whole screen)

2. set the input layout and vertex buffer

3. use an orthographic projection matrix for your transform

4. for each texture

a. set the texture

b. draw a quad where you want it on the screen

The quad that you draw will be placed wherever you want on the screen (easy to do with orthographic projection since you can use screen coordinates directly if you want), and the texture coordinates will just be 0-1 across and down the quad. This will draw the entire texture on the quad, placing the quad wherever (and whatever size) you want.
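For illustration, here's a rough SharpDX/C# sketch of that loop (a sketch only, assuming the shaders, input layout and orthographic projection constant buffer are set up elsewhere; DrawTexturedQuads and the parallel arrays are names I've made up for the example):

```csharp
using SharpDX.Direct3D11;
using Buffer = SharpDX.Direct3D11.Buffer;

static class QuadRenderer
{
    // Assumes shaders, input layout and the orthographic projection constant buffer are
    // already bound; textures[i] and quadVertexBuffers[i] belong together, and each quad's
    // vertices are already positioned in screen coordinates (4 vertices as a triangle strip).
    public static void DrawTexturedQuads(DeviceContext context, int screenWidth, int screenHeight,
                                         ShaderResourceView[] textures, Buffer[] quadVertexBuffers,
                                         int vertexStride)
    {
        // 1. One viewport covering the whole screen (set once, not per texture).
        context.Rasterizer.SetViewport(0, 0, screenWidth, screenHeight);

        for (int i = 0; i < textures.Length; i++)
        {
            // 4a. Bind this quad's texture to the pixel shader.
            context.PixelShader.SetShaderResource(0, textures[i]);

            // 4b. Bind the quad's vertices and draw it wherever its coordinates place it.
            context.InputAssembler.SetVertexBuffers(0,
                new VertexBufferBinding(quadVertexBuffers[i], vertexStride, 0));
            context.Draw(4, 0);
        }
    }
}
```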

Thanks for the reply @xycsoscyx

Yes. Initially I would start off with two textures, but eventually this could be more than two depending on the requirements.

As per your suggestion, if my understanding is correct, I would do steps 1, 2 and 3 at the start and then perform steps 4a and 4b for each of the textures?

Also, over the last couple of days I have been trying to understand and use a perspective projection matrix, as I will be performing zoom in/out.
Hence my question in relation to the above: would I be able to achieve the desired results as depicted in the screenshots below?

Image A

[attachment=31124:SampleWindow_WithaspectRatioA.jpg]

ImageB

[attachment=31125:SampleWindow_WithaspectRatioB.jpg]

Any suggestion would be helpful.

Thanks.

You can do that with projection, but the coordinates for the quads/text/etc. become a bit more complicated since you're working in a different space. With orthographic you have fixed left/right/top/bottom coordinates that you can use when setting up the matrix, so creating the UI elements becomes a lot simpler. With perspective projection you need to work inside the frustum. Ultimately it's not that difficult, but just having absolute screen-space coordinates is a lot simpler.

You can also still do a zoom in/out with orthographic. You'd set the view rectangle (left/right/top/bottom) when setting up the matrix, and you can shrink that rectangle while keeping your quad coordinates the same, which will result in zooming in.
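Roughly like this in SharpDX/C# (a sketch, not your exact setup; CreateOrthoZoom and zoomFactor are made-up names, and I'm assuming Matrix.OrthoOffCenterLH with y pointing down the screen):

```csharp
using SharpDX;

static class OrthoCamera
{
    // Zooming by shrinking the visible rectangle: at zoomFactor = 1 this maps 1:1 to
    // screen pixels, larger values zoom in while the quad coordinates stay the same.
    // Zooming around the screen centre is an assumption for this example.
    public static Matrix CreateOrthoZoom(float screenWidth, float screenHeight, float zoomFactor)
    {
        float halfW = (screenWidth / zoomFactor) * 0.5f;
        float halfH = (screenHeight / zoomFactor) * 0.5f;
        float cx = screenWidth * 0.5f;
        float cy = screenHeight * 0.5f;

        // Arguments are left, right, bottom, top, near, far; bottom > top so that
        // y increases downward like window coordinates.
        return Matrix.OrthoOffCenterLH(cx - halfW, cx + halfW, cy + halfH, cy - halfH, 0.0f, 1.0f);
    }
}
```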

Thanks for the reply @xycsoscyx.

Thanks for clarifying the difference between the two matrices.

Also, I forgot to mention/ask previously: is it still possible to perform panning alongside zoom in/out with an orthographic projection? The reason for asking is that I would like to zoom in/out within a certain part of the texture.

As mentioned in your previous reply:

You'd set the view rectangle (left/right/top/bottom) when setting up the matrix, and you can shrink that rectangle while keeping your quad coordinates the same, which will result in zooming in.

How would I achieve this? Any sample code would be really helpful, or could you guide me towards a sample/example?

In contrast to the above, would the view rectangle apply to each texture that I wish to render, OR is it possible that each texture could have its own orthographic projection?

Thanks.

Matrices just convert between spaces: your projection matrix typically converts from view space to screen space. Orthographic does a parallel projection, so things don't shrink the farther away they are (which is typically what you want for UI).

https://en.wikipedia.org/wiki/Orthographic_projection

Orthographic projection also has a lot of examples of setting it up using left/top/right/bottom coordinates, which means you can explicitly set the coordinates to (0,screenWidth)x(0,screenHeight) so that you get a 1:1 mapping between coordinates and screen pixels. Note that when projected, coordinates between (left,right) get mapped to (-1,1), and everything outside that gets clipped. This makes it easy to increase the left/right coordinates when creating the matrix, which will result in whatever's being projected scrolling by.
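As a sketch of that scrolling idea in SharpDX/C# (again assuming Matrix.OrthoOffCenterLH and y-down screen coordinates; panX/panY are hypothetical scroll offsets in pixels, not anything from your code):

```csharp
using SharpDX;

static class OrthoPan
{
    // Scrolling by offsetting the same off-centre rectangle: panX/panY are scroll
    // offsets in pixels, and with a zero offset you get the 1:1 pixel mapping again.
    public static Matrix CreateOrthoPan(float screenWidth, float screenHeight, float panX, float panY)
    {
        return Matrix.OrthoOffCenterLH(
            panX, panX + screenWidth,      // left, right
            panY + screenHeight, panY,     // bottom, top (y grows downward)
            0.0f, 1.0f);                   // near, far
    }
}
```

You can combine this with the zoom version above by applying the pan offsets to the shrunken rectangle, which gives you zooming into a particular part of the texture.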

Note also that there are a lot of other ways to do the same thing. You can set up a view matrix that changes its translation instead; then you'd use (vertex * view * projection), which will also scroll the screen (basically moving the vertex to view space to scroll it, then to screen space).

You can also do the same things with a perspective projection matrix, then increase/decrease the FOV to zoom in/out, and still use the view matrix to actually scroll things. Using a perspective projection becomes more complicated though (the math on the projection matrix itself and setting up your scene), but it can offer you more effects (you get parallax as part of perspective projection).

Thanks for the reply @xycsoscyx

With your explanation and suggestions I have managed to create an orthographic camera.

However, I still have an issue with centring the texture as shown in post #5, Image A and Image B.
I believe I will have to use the viewport to align them to the centre. If not, how could it be achieved?

Also, in addition to the above: performance-wise, is it better to use a camera projection compared to updating the texture coordinates, such as when performing scrolling or zoom in/out?

Thanks.

What is the actual problem you're seeing in image A and B? It looks like you're rendering a background quad, a foreground quad, and some text over that. Is something not working right there? What is it that you're actually trying to accomplish?

For scrolling/zooming, it really depends on your setup. Performance-wise, it's always best to update the least amount of data per frame. If you have a lot of vertex data that you're rendering, then you should always get better performance by updating the camera data per frame and leaving the vertex data alone, since the amount of data for the camera will be less than the amount of data in the vertex buffers. There are a lot of factors though: is there a lot of static data that stays the same per frame, are you updating a lot of things dynamically, etc. If it's a fully dynamic UI that needs a lot of vertex data calculated per frame, then it may be faster to scroll/zoom there, but even then you'd need to run performance tests to really find out (the GPU may still be faster at doing the math). You can even do a separate step for scrolling/zooming in your shaders, then update the scroll/zoom data per frame, leaving the camera/vertex data alone.
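If you go the shader route, here's a rough SharpDX/C# sketch of what "update the scroll/zoom data per frame" could look like (the ScrollZoom struct, the constant buffer slot and the field names are all assumptions for illustration; the buffer itself and the shader that reads it would be set up elsewhere):

```csharp
using System.Runtime.InteropServices;
using SharpDX;
using SharpDX.Direct3D11;
using Buffer = SharpDX.Direct3D11.Buffer;

// Hypothetical per-frame data: a scroll offset and a zoom factor that the shader applies,
// so the (possibly large) vertex buffers never have to be rewritten.
[StructLayout(LayoutKind.Sequential, Size = 16)]   // constant buffers want 16-byte multiples
struct ScrollZoom
{
    public Vector2 Scroll;
    public float Zoom;
    public float Padding;
}

static class ScrollZoomUpdater
{
    // Assumes the constant buffer was created with ResourceUsage.Default and that the
    // vertex shader reads it from register b0.
    public static void UpdateScrollZoom(DeviceContext context, Buffer constantBuffer,
                                        Vector2 scroll, float zoom)
    {
        var data = new ScrollZoom { Scroll = scroll, Zoom = zoom };
        context.UpdateSubresource(ref data, constantBuffer);        // only 16 bytes per frame
        context.VertexShader.SetConstantBuffer(0, constantBuffer);
    }
}
```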

