Archived

This topic is now archived and is closed to further replies.

shurcool

how to render to memory?


shurcool    439
hi, me and my friend are working on a game. it's called war worms (a working name), and here's a screenshot of what we've got so far. you can download the executable from the dev site (www.warwormsdev.f2s.com) if you want.

here's my question: i need to render a 2d scene and save the colour of each pixel to memory, so that i can draw the same scene again at a different position on the screen without re-rendering a very high number of triangles. so far i guess the only way is to render the scene and save it to memory using glReadPixels(). but the problem is that the scene will not fit on the screen... so what do i do? do i render the scene many times until i get all parts of it in memory? but won't that overwrite previous data?

please help me out, any help is greatly appreciated.

thanks,
shurcool
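the only concrete thing i can think of so far is something like this rough, untested sketch (WIDTH/HEIGHT are just example values):

/* rough sketch: read the current window contents back into system memory.
   WIDTH/HEIGHT are example values; use your real window size. */
#include <GL/gl.h>
#include <stdlib.h>

#define WIDTH  800
#define HEIGHT 600

unsigned char *save_scene(void)
{
    unsigned char *pixels = malloc(WIDTH * HEIGHT * 3);
    if (!pixels)
        return NULL;
    glPixelStorei(GL_PACK_ALIGNMENT, 1);  /* rows tightly packed in memory */
    glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    return pixels;  /* pixels are stored bottom-up, RGB */
}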

jenova    122
if you are trying to do a "blurring" technique then you want to look into glAccum. if you are not, please specify what you are trying to do.
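a minimal glAccum blur looks something like this (untested sketch; render_scene is a placeholder for your own drawing code, and your pixel format needs an accumulation buffer):

/* average n (jittered) passes of the scene in the accumulation buffer */
#include <GL/gl.h>

extern void render_scene(void);  /* placeholder: your drawing routine */

void blurred_frame(int n)
{
    int i;
    for (i = 0; i < n; ++i) {
        render_scene();
        if (i == 0)
            glAccum(GL_LOAD, 1.0f / n);   /* first pass loads the buffer */
        else
            glAccum(GL_ACCUM, 1.0f / n);  /* later passes accumulate */
    }
    glAccum(GL_RETURN, 1.0f);  /* write the average back to the color buffer */
}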

To the vast majority of mankind, nothing is more agreeable than to escape the need for mental exertion... To most people, nothing is more troublesome than the effort of thinking.

vincoof    514
> so far i guess the only way is to render the scene and save
> it to memory using glReadPixels(). but the problem is that
> the scene will not fit on the screen...

Will not fit on the screen?
If you have a 1024x768 desktop and you need to render a 2000x2000 picture, then yes, it will not be "on screen".
But you can still ask OpenGL to render the 2000x2000 picture into a "back" buffer that is never sent to the monitor, and then query the pixels using glReadPixels.

In OpenGL, the screen resolution does not limit the size of the OpenGL viewport.
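Something like this (just a sketch; render_scene is a placeholder, and it assumes a double-buffered pixel format):

/* draw into the back buffer, read it back, and never swap */
#include <GL/gl.h>

extern void render_scene(void);  /* placeholder: your drawing routine */

void render_to_memory(unsigned char *pixels, int w, int h)
{
    glDrawBuffer(GL_BACK);   /* draw into the non-visible buffer */
    render_scene();
    glReadBuffer(GL_BACK);   /* read from the same buffer */
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    /* no SwapBuffers() call, so nothing is sent to the monitor */
}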

Guest Anonymous Poster
it sounds like you're not rendering in realtime, so why use opengl? use software algorithms to get nicer and more consistent results on all hardware, and it will be easier to save that way.

shurcool    439
yes, i am rendering in real-time. you probably missed it, but i did mention that this was a game.

i will explain what i want to do here in more detail.

i have a 2d game with deformable terrain. the terrain is represented with tristrips, which means rendering a very high number of triangles each frame, which i think is wasteful. i thought it would speed things up if i rendered the scene once and saved the colour of each pixel to memory. then i would just colour each pixel on the screen using the pixel colours from memory, repositioned as needed, because the scene will scroll. and when i have an explosion, part of the land would change. i would, using scissor testing, render just that specific region and save it to memory. it is faster to render a small box with scissor testing than the whole screen, right?
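something like this is what i had in mind for the explosion part (rough, untested sketch; render_terrain is a placeholder for our terrain drawing):

/* redraw only the damaged region; (x, y, w, h) is the dirty rectangle
   in window coordinates */
#include <GL/gl.h>

extern void render_terrain(void);  /* placeholder: our terrain drawing */

void redraw_region(int x, int y, int w, int h)
{
    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, w, h);         /* clip all drawing to this box */
    glClear(GL_COLOR_BUFFER_BIT);  /* clears only inside the scissor box */
    render_terrain();
    glDisable(GL_SCISSOR_TEST);
    /* then glReadPixels() the same box to update the cached copy */
}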

i hope i gave you a better understanding of what i am trying to do here.

quote:
If you have a 1024x768 desktop and you need to render a 2000x2000 picture, then yes, it will not be "on screen".
But you can still ask OpenGL to render the 2000x2000 picture into a "back" buffer that is never sent to the monitor, and then query the pixels using glReadPixels.


so all i would have to do is call glReadPixels() at, say, (-15, -50) or (1500, -100)? or is there anything i would have to do in order to activate that "back" buffer?

thanks a lot for your suggestions.

thanks,
shurcool

jenova    122
i foresee the memory requirements becoming ridiculous, unless you are using a small map.


Michalson    1657
I assume the reason you want to save individual pixels is so the terrain can be damaged (by erasing chunks of the terrain when hit). Doing this in 3D is not only hard, but the memory and CPU speed needed are just insane. The solution: use DirectDraw instead of Direct3D or OpenGL. All the other games like this (Worms, Tanks, Gorillas, etc.) use 2D rendering. Store the terrain as chunk "sprites", say 128x128, then modify them whenever terrain damage needs to be applied.
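A sketch of what I mean (names are made up; each chunk keeps a solidity mask you can test and redraw from):

/* chunk-based terrain damage: each chunk is a 128x128 mask
   (1 = solid ground, 0 = empty) */
#define CHUNK 128

typedef struct {
    unsigned char mask[CHUNK][CHUNK];
    int dirty;  /* set when the chunk's sprite needs rebuilding */
} TerrainChunk;

/* carve a circular crater of radius r centered at (cx, cy), in chunk space */
void carve_crater(TerrainChunk *c, int cx, int cy, int r)
{
    int x, y;
    for (y = 0; y < CHUNK; ++y)
        for (x = 0; x < CHUNK; ++x)
            if ((x - cx) * (x - cx) + (y - cy) * (y - cy) <= r * r)
                c->mask[y][x] = 0;
    c->dirty = 1;  /* rebuild this chunk's sprite before the next frame */
}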

vincoof    514
shurcool: I've just tried, and remembered that you can not read pixels from outside the GL window.
You can define a viewport which is bigger than the current screen, but in fact you can't read anything outside the window's area.

Well, what you can do is split your viewing frustum into small regions, render each region, and finally merge all of them into a big texture or a big RGB array.
Once this big texture is computed, there is no problem displaying it.
But be careful of the texture size limit. It is somewhere between 1024x1024 and 4096x4096 on current cards.

shurcool    439
whoever said anything about 3d?!?! :-\ it is 2d. look at the screenshot above.

quote:
Original post by vincoof
Well, what you can do is split your viewing frustum into small regions, render each region, and finally merge all of them into a big texture or a big RGB array.
Once this big texture is computed, there is no problem displaying it.
But be careful of the texture size limit. It is somewhere between 1024x1024 and 4096x4096 on current cards.


that's exactly the same idea i suggested earlier. :D i guess that's what i will have to do.

thanks for the help.

thanks,
shurcool

vincoof    514
When you split the frustum, take care of the split method.
If the game is in full 2D (I mean, orthographic projection) then the split is obvious.
Otherwise (e.g. with perspective projection) it is a real pain in the a$$.

a person    118
what benefit are you getting from the 3d hardware that would require you to use hacky methods to display the world? 2d games like this should be done in directdraw (the correct tool for the job), or you should optimize how you are drawing things. dont use individual polys; try consolidating by using textures (and i dont mean rendering your map to one huge texture, thus preventing your game from running on lots of hardware). you really should look into some curve tessellation algorithms that will give you what you want. you wont get per-pixel collision detection using 3d apis; it is too time consuming reading from vram (cards are meant to receive data, not supply it). it really is sad how people use opengl for the wrong types of apps just because they want to use opengl and hate dx.

having a 2048x2048x16 texture (2000x2000 wont work, so we move up to the next available size) is monstrous. it comes out to about 8mb. with double buffering (and a single zbuffer) you use another 4.5mb of video memory (1024x768x16). so you require a minimum of a 32meg card to handle your game (since a 16meg card will most likely not handle the texture size, but also because i still have not counted misc textures for fonts, explosions, models, etc). some 2d games cant be done well using 3d apis. now i am not saying that you can do the same thing in directdraw at 1024x768 with no trouble, but at least you dont have to worry about keeping your prerendered scene in system memory.

though i see you rendering only 968 vertices, which is not that many at all. you should look into optimizing your drawing methods (ie rendering to a texture is stupid). you dont need per-pixel curves; if you try for that, rendering at higher resolutions will drastically reduce performance. you may consider using ALL pretransformed vertices which only need to be moved in the 2d plane. this will greatly reduce transformation overhead.

what hardware are you running that on? it would help to better judge the performance you are getting.

_the_phantom_    11250
firstly, wasn't DirectDraw removed in DX8/8.1 in favor of using the 3D hardware to do 2D ops (as it's generally faster)?

Secondly, why render the whole scene at once?
why not show a closer view most of the time and only render what you can see, and then, say for rocket tracking, drop out a few zoom levels and use a lower LOD to speed up rendering (much like Worms2 does)?

If you want to re-render the whole screen at once, I personally don't see OGL or D3D being the solution; you need to be in 2D, with 2d sprites etc, as this will make it much easier to do in the long term imho.

vincoof    514
Not that I want to stick to OpenGL at any cost, but OpenGL draw 2D scenes and is designed for. That''s not just 3D world where you cut Z coordinate to zero. There really exists optimizations for 2D rendering in OpenGL.
Moreover, it offers all the powerful OpenGL features like blending, stencil buffer, etc which is a mess to do in DirectDraw.

I''m not saying that OpenGL is faster than DirectDraw, obviously it would be difficult for "simple" 2D rendering.
But using OpenGL allows to add features to your scene easily (like zooming the world for the rocket thingy described) where DirectDraw is not designed for.

Also, nowadays I don''t know any game which still uses the video memory to query pixel information, even in 2D. The pixel value simply represent nothing. You *always* need to store your data elsewhere than video memory.
Using a tile engine, it can be very efficient to use textures.

And btw, it''s not a good idea to store a 2048x2048 texture if it has to be in video memory during all the game

Blueshift    122
Shurcool:

I wouldn't worry about that. Just re-render it every frame; the number of vertices and triangles doesn't seem too high to me. What is your target hardware for the game?

Using ReadPixels et al. will only slow you down and make things more complex. Imagine someone playing your game at a 2048x2048 image resolution; you'll have to cache lots of pixels there. And in the time it takes to transfer a 2048x2048 image over the AGP bus, modern 3D hardware could already have rendered your whole game screen twice! It's really not worth it.

Just be sure to use decent (2D) frustum culling: only render the vertices/faces visible on the current part of the game screen. If you use tristrips, this might be a bit harder than with individual triangles, though. Are you using GPC to generate the contours and tristrips?
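The culling itself is trivial in 2D; something like this per strip (sketch, names are made up; each strip would store its bounding box):

/* skip any strip whose bounding box misses the visible part of the world */
typedef struct { float minx, miny, maxx, maxy; } Box;

int box_visible(const Box *strip, const Box *view)
{
    return strip->maxx >= view->minx && strip->minx <= view->maxx &&
           strip->maxy >= view->miny && strip->miny <= view->maxy;
}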
A.H aka Blueshift

shurcool    439
yes, i am using gpc. but there are quite a few problems with just rendering everything:
1. i'd like the min specs to be as low as possible.
2. i need the pixel data for collision detection (it is a lot faster than looping thru all the lines and checking for intersection).
3. it says there's 900-something vertices, but that's not quite true. it's true for the contours (the outline), there are 900-something vertices on the edges, but not for the tristrips. i will load that map right now and see how many triangles it has to render (the whole map fits on the screen), and i'm sure the number will be around 10 000, which isn't that great.
4. my clipping operations (explosions) also slow it down, so i thought it'd be faster to use pixel data to fill in the screen, rather than render so many polys.

thanks for your support guys!

thanks,
shurcool

Edited by - shurcool on February 8, 2002 8:19:20 AM

_the_phantom_    11250
tbh I think you should rethink your design a bit, because currently you are suffering either from too much vertex data to render or from slow AGP transfers, due to the amount of texture data you'd have to transfer to get the effect you need. neither of these will lend itself to a low min spec, unless you restrict the res the player can play at.

the amount of vertex data does seem a bit of an overkill as well imho. surely you could change your generation routines to kill some of them? reduce the smoothness a bit etc?

as for the collision detection, there is probably a quicker way to do that as well. instead of checking per pixel, try to break it down into smaller amounts of data to check (i couldn't give you an example without knowing the weapons in use etc, however).

JohnPC    122
I'm the other programmer of war worms. One of the problems we're having is with collision detection. Basically we need to test if something going from (x1,y1) to (x2,y2) passes through any line.

The first idea was to loop through the vertices and check them all. But as more and more objects move about, a lot of unnecessary checks are being done, and looping through 1000 or more vertices for each object wouldn't be very good.

There were two other ideas we were thinking of:

Do a pixel-based check first, and then if there's a hit, find out between which vertices it happened and so on. This raises the question of how to render to memory so we can do the pixel-based checks.

The other idea involved using a quadtree or some other method to reduce the number of vertices we need to check. But rebuilding this quadtree any time the terrain changes could be slow.
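For reference, the per-line test we loop with is the usual cross-product one (sketch; it ignores exactly-collinear touches):

/* does segment (x1,y1)-(x2,y2) properly cross segment (x3,y3)-(x4,y4)? */
static float cross2(float ax, float ay, float bx, float by)
{
    return ax * by - ay * bx;
}

int segments_intersect(float x1, float y1, float x2, float y2,
                       float x3, float y3, float x4, float y4)
{
    /* signed areas tell which side of one segment each endpoint
       of the other lies on */
    float d1 = cross2(x4 - x3, y4 - y3, x1 - x3, y1 - y3);
    float d2 = cross2(x4 - x3, y4 - y3, x2 - x3, y2 - y3);
    float d3 = cross2(x2 - x1, y2 - y1, x3 - x1, y3 - y1);
    float d4 = cross2(x2 - x1, y2 - y1, x4 - x1, y4 - y1);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}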

vincoof    514
You can rotate your scene so that the (x1,y1)-(x2,y2) line becomes horizontal (or vertical) (very easy), and then use the bounding rectangle of this line with the selection buffer.
It's very fast (uses HW) and fully OpenGL compatible.

Edit: you can also use 4 clipping planes (defining the bounding rectangle) if you don't want to rotate your scene.
Note that a clipping plane represents a clipping line in 2D.
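A sketch of the selection pass (untested; render_terrain is a placeholder, and you would first restrict the projection to the segment's bounding rectangle, e.g. with the clipping planes):

/* ask OpenGL whether anything would be drawn inside the current view */
#include <GL/gl.h>

#define SEL_BUF_SIZE 512

extern void render_terrain(void);  /* placeholder: draw terrain as usual */

int anything_hit(void)
{
    GLuint buf[SEL_BUF_SIZE];
    GLint hits;

    glSelectBuffer(SEL_BUF_SIZE, buf);
    glRenderMode(GL_SELECT);   /* nothing reaches the framebuffer here */
    glInitNames();
    glPushName(1);             /* one name is enough for a yes/no answer */

    render_terrain();

    hits = glRenderMode(GL_RENDER);  /* number of hit records */
    return hits > 0;
}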

Edited by - vincoof on February 8, 2002 3:04:55 PM
