
Streaming image to a texture?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

5 replies to this topic

#1 Waterlimon   Crossbones+   -  Reputation: 2456


Posted 15 May 2012 - 05:23 AM

Hey, I'm messing around in C++, currently doing rendering-related stuff (= a badly designed OpenGL wrapper).

I wanted to try making some kind of deformable terrain which I then raycast on the CPU, upload the image to a texture on the GPU, and put on a fullscreen quad. I know it will probably run at like 2 FPS at 400*400 resolution if I even get it working, but I need some interesting goal to even bother programming.

Currently I do something like this (not wrapped in a class or anything yet; I was just testing that it works):

while true
{
[add 1 to each component of each pixel of CPU-side texture]
[Upload texture to GPU using glTexSubImage2D or whatever it was]
[Draw my 2 triangles that cover the screen]
[Swap buffers]
}
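The [add 1 to each component] step above can be sketched as a plain loop over the CPU-side buffer (the function name is illustrative, not from the post; unsigned bytes wrap around on overflow, which is what makes the colors cycle):

```cpp
#include <cstdint>
#include <vector>

// Advance the CPU-side RGBA8 texture by adding 1 to every component.
// uint8_t arithmetic wraps, so each channel cycles through 0..255 forever.
void advancePixels(std::vector<std::uint8_t>& rgba) {
    for (std::uint8_t& component : rgba) {
        component = static_cast<std::uint8_t>(component + 1);
    }
}
```

After this step the buffer would be handed to glTexSubImage2D each frame.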


And I was wondering what I can do to increase performance. It's OK since I obviously won't be raycasting an HD image, but I might later want to draw polygons on top of it too, so I'd also need a depth texture for the raycasted result, and possibly other buffers to apply some effects.

I thought of these:
-Update the texture in multiple parts, at rates depending on how fast the "pixels" are changing in that region (initially one refresh rate for the whole thing, later something like a quadtree if I want multiple separate raycasted things moving in different directions?)

-Use some compression mechanism so fewer bits per component are needed

-Use a PBO (I believe this will only help if it's CPU-limited?)


Are there any articles or anything I can read to figure out how to implement, for example, the compression, and would it be of any use?

I won't be doing anything complex, since I'm not going to try adding polygons or anything else to be rendered any time soon, but I want to try something simple before moving on to coding the other stuff, and I want information about texture compression and related topics.

Thanks.

o3o



#2 bobbias   Members   -  Reputation: 120


Posted 15 May 2012 - 06:22 AM

You might want to look into using the GPU for the raycasting. You can accelerate that sort of thing by a massive amount using the GPU for your heavy lifting. On something like this it's not like raycasting on the GPU is going to be cutting into other GPU accelerated stuff.

I just did a quick search and saw this: http://www.daimi.au.dk/~trier/?page_id=98

Not entirely the same thing, but probably helpful. Some googling provides these links:
http://www.cg.tuwien.ac.at/hostings/cescg/CESCG-2005/papers/VRVis-Scharsach-Henning.pdf
http://www.virtualglobebook.com/3DEngineDesignForVirtualGlobesSection43.pdf
http://www.it.ubi.pt/17epcg/Actas/artigos/17epcg_submission_8.pdf

No idea how useful these would be, but there might be some techniques in those that help.

#3 mhagain   Crossbones+   -  Reputation: 7802


Posted 15 May 2012 - 06:33 AM

A PBO is only going to be useful if you have some kind of delay between when you call glTexSubImage and when you use the texture; typically in the order of a frame or so. If you need to use the image in the same frame as you call glTexSubImage it's just going to add extra overhead.

A single glTexSubImage of a large rectangle per frame is also going to perform much better than many many small rectangles.

The key trick to fast glTexSubImage performance is to get your texture format parameters matching what the GPU is using internally. If they match, the upload is a straight copy; if they don't, the driver has to take intermediate conversion steps, which may happen in software and will slow things down.

The internalFormat parameter you supply to your original glTexImage call is not necessarily that which is actually used by the GPU - your implementation is free to substitute that with whatever it wants. Look on it as being a sort of "this is what I'd like the GPU to give me a reasonable match for" rather than "this is exactly what I want and I expect it to be exactly what I get".

So, with all that out of the way, the optimal parameters may require some testing on your part, but I've found these to work fastest across a broad range of hardware:

internalFormat: GL_RGBA8
format: GL_BGRA
type: GL_UNSIGNED_INT_8_8_8_8_REV

This can seem counter-intuitive; using 32-bit data instead of 24-bit surely sends more data and therefore should be slower? Not quite - it's a match for the GPU's internal storage, so like I said, it goes up directly without any intermediate steps needed. Those intermediate steps are always going to be much much slower than the overhead of the extra data.

It can also seem as though it's "wasting memory", but if you look at it from a certain perspective, it's actually not. Most GPUs don't actually have native internal support for 24-bit textures, and even if yours does, you're accepting some extra memory overhead in exchange for a much faster data transfer. So the memory isn't being "wasted" - it's being "used". Used to get higher performance.
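As a sanity check on what those parameters mean for your CPU-side buffer: with format GL_BGRA and type GL_UNSIGNED_INT_8_8_8_8_REV, each pixel is a single 32-bit word laid out as 0xAARRGGBB, which is the same layout D3D and Win32 DIBs use. A small sketch of just the packing (not GL code; the helper name is made up for illustration):

```cpp
#include <cstdint>

// Pack one pixel the way GL_BGRA + GL_UNSIGNED_INT_8_8_8_8_REV expects:
// the "REV" type puts the first listed component (B) in the low byte,
// so the 32-bit word comes out as 0xAARRGGBB.
std::uint32_t packBGRA(std::uint8_t r, std::uint8_t g,
                       std::uint8_t b, std::uint8_t a) {
    return (std::uint32_t(a) << 24) | (std::uint32_t(r) << 16) |
           (std::uint32_t(g) << 8)  |  std::uint32_t(b);
}
```

On a little-endian machine the bytes of that word land in memory as B, G, R, A, which is why this combination tends to match the GPU's native storage and avoid the conversion step described above.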

It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#4 Waterlimon   Crossbones+   -  Reputation: 2456


Posted 15 May 2012 - 07:05 AM

Yeah, I tried BGRA and I'm currently using RGBA (as I didn't notice any difference).


Any estimates of how many FPS I can get with a 1920*1080 texture being uploaded each frame before it gets CPU-limited? It was using 25% processor time (which is a full core, as I have a quad core, I believe), but I'm not sure if that's just because it's a debug build or something.

Also, would I get any significant performance increase by using 2 textures instead of 1, with a loop like
while true
{
[draw frame to send]
[update texture1]
[swap textures 1 and 2]
[draw full screen quad using texture 1 (=texture 2 at start of loop)]
}

EDIT: Oh, if I do it like the above, would I gain a benefit from using a PBO, if I understood you right?

Edited by Waterlimon, 15 May 2012 - 07:14 AM.

o3o


#5 Waterlimon   Crossbones+   -  Reputation: 2456


Posted 15 May 2012 - 07:09 AM

You might want to look into using the GPU for the raycasting. You can accelerate that sort of thing by a massive amount using the GPU for your heavy lifting. On something like this it's not like raycasting on the GPU is going to be cutting into other GPU accelerated stuff.

I just did a quick search and saw this: http://www.daimi.au....ier/?page_id=98

Not entirely the same thing, but probably helpful. Some googling provides these links:
http://www.cg.tuwien...ach-Henning.pdf
http://www.virtualgl...esSection43.pdf
http://www.it.ubi.pt...ubmission_8.pdf

No idea how useful these would be, but there might be some techniques in those that help.


I want to use an octree instead of a 3D grid, so it might be too complex for me to implement properly on the GPU. If I for some reason NEED top performance, I could think about it, but for now I think it will be more fun doing it on the CPU.

o3o


#6 mhagain   Crossbones+   -  Reputation: 7802


Posted 15 May 2012 - 02:48 PM

EDIT: Oh, if i do it like above, would i gain benefit from using a PBO if i understood you right?


Yeah, that should be PBO-friendly alright; so long as you're replacing the entire texture rect you can do this.

25% CPU usage on a quad-core machine does suggest that you're maxing out one core, but I doubt if it's coming from the texture upload. The primary bottlenecks there, once you get the parameters right, are going to be pipeline stalls and bandwidth. Instead I'd guess that you've just got a classic busy-wait loop, in which case maxing out a core is quite normal behaviour and not really indicative of performance.
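The bandwidth side of that can be bounded with some quick arithmetic (my numbers, not from the thread): a 1920*1080 RGBA8 frame is about 7.9 MB, so for an assumed host-to-GPU transfer rate you can estimate the best case the bus alone would allow (real throughput varies a lot by bus, driver, and pipeline stalls):

```cpp
#include <cstddef>

// Bytes for one W x H frame at bytesPerPixel (4 for RGBA8/BGRA8).
constexpr std::size_t frameBytes(std::size_t w, std::size_t h,
                                 std::size_t bytesPerPixel) {
    return w * h * bytesPerPixel;
}

// Frames per second if uploads were limited only by bus throughput.
constexpr double bandwidthLimitedFps(std::size_t frame, double busBytesPerSec) {
    return busBytesPerSec / static_cast<double>(frame);
}
```

For example, at an assumed 4 GB/s of effective transfer bandwidth, 8,294,400 bytes per frame works out to roughly 480 FPS as an upper bound, which suggests the raycasting itself, not the upload, would be the practical limit.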

It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.




