Cadde

Member Since 11 Mar 2012
Offline Last Active Apr 19 2012 04:08 AM

Topics I've Started

Best practices for rendering LOTS of text?

09 April 2012 - 05:31 PM

I have come to a point where I want to render LOTS of text. The idea is to render the text onto a "screen" in my 3D space.
I have a few ideas on how to do it, but I am not sure any of them is a good solution.

I have fiddled with text rendering in the past with GDI, GDI+ and even XNA, but every time I have run into a performance limit.

My goal here is to make a computer-screen kind of deal with a console on it. I want it to be as fast as possible, so text can flash by as fast as it would in your average command prompt. (Even that has not worked for me in the past; it has always been too slow.)

My first idea is to render the text to a surface and then send that surface to a single quad representing the screen. The downside is that sending such a large texture (I am thinking 1920x1080, which would round up to a 2048x2048 texture) every frame, or even every other frame, would still be very slow.

So my second idea was to make a single bitmap-font texture, send it to the GPU once, and have X quads by Y quads for the columns and rows. Then I would update an array (or similar) with the tu and tv values for each quad every frame.
So if my screen has 80 columns and 50 rows, that is 4,000 quads * 2 floats * 4 bytes = 32,000 bytes per frame. At least that would be an improvement over sending a new 2048x2048 texture, which would be ~16 megabytes per frame.
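For the second option I imagine the vertex shader side would be trivial, roughly like this (untested sketch; the names and semantics are made up by me, and the streamed per-quad tu/tv would either be duplicated onto the quad's four vertices or put in a per-instance stream):

cbuffer AtlasParams : register(b0)
{
    float2 GlyphUVSize; // size of one glyph in UV space, e.g. (1/16, 1/16) for a 16x16 atlas
    float2 Padding;
};

struct VSIn
{
    float2 pos     : POSITION;  // cell corner position, already placed on the virtual screen
    float2 corner  : TEXCOORD0; // (0,0), (1,0), (0,1) or (1,1) within the cell
    float2 glyphUV : TEXCOORD1; // the per-quad tu/tv streamed every frame
};

struct VSOut
{
    float4 pos : SV_Position;
    float2 uv  : TEXCOORD0;
};

VSOut VS(VSIn input)
{
    VSOut output;
    output.pos = float4(input.pos, 0, 1);
    output.uv  = input.glyphUV + input.corner * GlyphUVSize;
    return output;
}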

Then my third idea was to handle the texture coordinates on the GPU and simply send bytes describing what goes in each column and row. So I would have the font texture on the GPU and 8,000 triangles (two per cell). They would have no texture coordinates assigned initially, and I would send a stream of 4,000 bytes per frame (one byte per cell) that the shader uses to pick the appropriate texture coordinates. (Obviously I am going for a monospace font here too.)
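Something like this is roughly what I have in mind for the shader side of the third option, though it is a completely untested sketch and the names (Cells, FontTex, GlyphsPerRow) are made up by me. The idea would be to issue Draw(6 * columns * rows, 0) with no vertex buffer bound, build everything from SV_VertexID, and bind the per-cell bytes as an R8_UINT buffer SRV:

cbuffer ScreenParams : register(b0)
{
    uint Columns;      // e.g. 80
    uint Rows;         // e.g. 50
    uint GlyphsPerRow; // glyphs per row/column in the (square) font atlas, e.g. 16
    uint Padding;
};

Buffer<uint> Cells   : register(t0); // one byte per cell, the 4,000 bytes streamed per frame
Texture2D    FontTex : register(t1); // the monospace bitmap-font atlas
SamplerState FontSmp : register(s0);

struct VSOut
{
    float4 pos : SV_Position;
    float2 uv  : TEXCOORD0;
};

// Every 6 vertices form the two triangles of one character cell.
VSOut VS(uint id : SV_VertexID)
{
    uint cell   = id / 6;
    uint corner = id % 6;
    uint col    = cell % Columns;
    uint row    = cell / Columns;

    // Corner offsets for the two triangles of a quad.
    static const float2 corners[6] =
    {
        float2(0, 0), float2(1, 0), float2(0, 1),
        float2(1, 0), float2(1, 1), float2(0, 1)
    };
    float2 c = corners[corner];

    // Map the grid onto clip space, row 0 at the top.
    // (For the screen-in-3D case this would instead be transformed by the screen's WVP.)
    float2 cellSize = float2(2.0 / Columns, 2.0 / Rows);
    float2 p = float2(-1, 1) + (float2(col, row) + c) * cellSize * float2(1, -1);

    // Look up the character code and derive its rectangle in the atlas.
    uint ch = Cells[cell];
    float glyphSize = 1.0 / GlyphsPerRow;
    float2 glyphOrigin = float2(ch % GlyphsPerRow, ch / GlyphsPerRow) * glyphSize;

    VSOut o;
    o.pos = float4(p, 0, 1);
    o.uv  = glyphOrigin + c * glyphSize;
    return o;
}

float4 PS(VSOut i) : SV_Target
{
    return FontTex.Sample(FontSmp, i.uv);
}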

Of course the screen wouldn't need to be updated every frame, but I am looking for at least 60 updates per second per virtual screen. And I would love to have more than 80x50 columns and rows too!

Which option would be the most viable, and could anyone provide an example if you have one in mind?
For instance, I only have a vague idea of how the third option should actually be implemented in shader code (hence the rough sketch above), and even the second option is untested territory for me.

Thanks in advance for any help! I will now attempt to make a testbed using at least the 2nd option.
//Cadde


EDIT:

I am also looking for any other alternatives that would be easier to implement and still fast. Those three options were just off the top of my head.

[SOLVED] I am supposed to use Transpose but when I do...

11 March 2012 - 10:25 PM

I have been trying many different languages for working with DirectX, and even had a stab at OpenGL, but I finally settled on C# + SlimDX because it has been the most comfortable solution for me.

So, I am making a custom engine for a game I intend to make. It's not going to be a fully fledged engine, more like a support framework.

----

Anyway, the question/problem I am facing right now is that I am (supposedly?) running DirectX 11 and made a simple triangle to play with so I could continue building a camera class. I set up my world, view and projection matrices and a basic color shader.
However, as I went along I started getting very strange issues. First I got the right-handed vs. left-handed convention wrong, but that got sorted (I think), and then things got really wonky.

If I Matrix.Transpose my world, view and projection matrices before sending them to the shader, my screen turns completely <color of triangle>!
That is, whatever color I assign the triangle, my whole view gets filled with it. Having spent about 4 hours straight trying to "solve" whatever matrix issues I might have had in my camera class, I rebuilt a lot of the cbuffer updating procedures, thinking something had gone wrong there.
On a fluke I commented out the Matrix.Transpose lines and VOILA, everything worked as it should!

So... I know I am supposed to use Matrix.Transpose with DX11, but when I do, it breaks.
When I don't, it works as intended.
How can this be?

Thanks in advance for any clues or information on the subject.
//Cadde
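
EDIT: For anyone finding this later, the likely explanation seems to be that HLSL packs cbuffer matrices column-major by default, while SlimDX stores Matrix row-major (like D3DX), so the raw bytes arrive in the shader already transposed unless you Transpose them yourself. Whether that is right or wrong then depends on the mul() argument order in the shader. A stripped-down sketch of the two conventions (not my actual shader, names made up):

cbuffer PerObject : register(b0)
{
    float4x4 World;
    float4x4 View;
    float4x4 Projection;
};

float4 VS(float4 pos : POSITION) : SV_Position
{
    // Row-vector convention: mul(vector, matrix).
    // This one NEEDS Matrix.Transpose on the CPU side to undo the column-major packing.
    // float4 p = mul(mul(mul(pos, World), View), Projection);

    // Column-vector convention: mul(matrix, vector).
    // This one works with the raw (untransposed) SlimDX matrices, because packing a
    // row-major matrix column-major already hands the shader the transpose.
    float4 p = mul(Projection, mul(View, mul(World, pos)));
    return p;
}

Declaring the matrices as row_major float4x4 in the cbuffer would be a third way to make the untransposed upload line up with the mul(vector, matrix) style.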
