Archived

This topic is now archived and is closed to further replies.

shurcool

how to render to memory?


Recommended Posts

hi, me and my friend are working on a game called war worms (a working name), and here's a screenshot of what we've got so far. you can download the executable from the dev site (www.warwormsdev.f2s.com) if you want.

here's my question: i need to render a 2d scene and save the colour of each pixel to memory, so that i can draw the same scene at a different position on the screen without re-rendering a very high number of triangles. so far i guess the only way is to render the scene and save it to memory using glReadPixels(). but the problem is that the scene will not fit on the screen... so what do i do? do i render the scene many times, until i get all parts of it in memory? but won't that overwrite previous data?

please help me out... any help is greatly appreciated.

thanks,
shurcool

if you are trying to do a "blurring" technique then you want to look into glAccum. if you are not, please specify what you are trying to do.

To the vast majority of mankind, nothing is more agreeable than to escape the need for mental exertion... To most people, nothing is more troublesome than the effort of thinking.

> so far i guess the only way is to render the scene, and
> using glReadPixels() save them to memory. but the problem
> is that the scene will not fit on the screen...

will not fit on the screen?
If you have a 1024x768 desktop and you need to render a 2000x2000 picture, then yes, it will not be "on screen".
But you can still ask OpenGL to render the 2000x2000 picture into a "back" buffer that is never sent to the monitor, and query the pixels using glReadPixels.

In OpenGL, the screen resolution does not limit the size of the OpenGL viewport.
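Reading the back buffer out with glReadPixels looks roughly like the sketch below. One gotcha worth knowing: by default OpenGL pads each returned row to a multiple of GL_PACK_ALIGNMENT (4 bytes), so the buffer must be sized with the padded row stride. The helper name here is ours, not anything standard, and the GL calls themselves (shown as comments) need a live context:

```c
#include <stddef.h>

/* Bytes per row that glReadPixels will write for a tightly-packed
   format, given the current GL_PACK_ALIGNMENT (default is 4).
   Hypothetical helper name -- not from the thread. */
static size_t padded_row_bytes(int width, int bytes_per_pixel, int alignment)
{
    size_t row = (size_t)width * (size_t)bytes_per_pixel;
    return (row + alignment - 1) / alignment * alignment;
}

/* The read-back itself (requires a live GL context):
 *
 *   unsigned char *buf = malloc(padded_row_bytes(w, 3, 4) * h);
 *   glReadBuffer(GL_BACK);
 *   glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, buf);
 */
```

Alternatively, calling glPixelStorei(GL_PACK_ALIGNMENT, 1) before the read removes the padding entirely.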

Guest Anonymous Poster
it sounds like you're not rendering in realtime, so why use opengl? use software algorithms to get nicer and more consistent results on all hardware... and it will be easier to save that way

yes, i am rendering in real-time. you probably missed it, but i did mention that this was a game.

i will explain what i want to do here in more detail.

i have a 2d game with deformable terrain. the terrain is represented with tristrips, which means rendering a very high number of triangles each frame, which i think is wasteful. i thought it would speed things up if i rendered the scene once and saved the colour of each pixel to memory. then i would just colour each pixel on the screen using the pixel colours from memory, repositioned as the scene scrolls. and when i have an explosion, part of the land would change; using scissor testing i would re-render just that specific region and save it to memory. it is faster to render a small box with scissor testing than the whole screen, right?

i hope i gave you a better understanding of what i am trying to do here.
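The scissor idea described above could be sketched like this. The explosion's dirty rectangle has to be clamped to the framebuffer first (an explosion near an edge overhangs the screen); the names and the Rect type are illustrative, and the GL calls in the comment need a live context:

```c
/* Clamp an explosion's dirty rectangle to the framebuffer before
   re-rendering just that region with scissor testing.
   Hypothetical helper -- names are ours, not from the thread. */
typedef struct { int x, y, w, h; } Rect;

static Rect clamp_rect(Rect r, int fb_w, int fb_h)
{
    if (r.x < 0) { r.w += r.x; r.x = 0; }   /* cut off the part left of the screen */
    if (r.y < 0) { r.h += r.y; r.y = 0; }   /* ... and below it */
    if (r.x + r.w > fb_w) r.w = fb_w - r.x; /* ... and right of it */
    if (r.y + r.h > fb_h) r.h = fb_h - r.y; /* ... and above it */
    if (r.w < 0) r.w = 0;
    if (r.h < 0) r.h = 0;
    return r;
}

/* With a live context:
 *   Rect d = clamp_rect(dirty, 800, 600);
 *   glEnable(GL_SCISSOR_TEST);
 *   glScissor(d.x, d.y, d.w, d.h);
 *   ... redraw the terrain, then glReadPixels just that region ...
 *   glDisable(GL_SCISSOR_TEST);
 */
```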

quote:
if you have a 1024x768 desktop and you need to render a 2000x2000 picture, then yes it will not be "on screen".
But you still can ask OpenGL to render the 2000x2000 picture in a "back" buffer that you will not send to the monitor, but you can query the pixels using glReadPixels.


so all i would have to do is glReadPixels at (-15, -50) or (1500, -100)? or is there anything i would have to do in order to activate that "back" buffer?

thanks a lot for your suggestions.

thanks,
shurcool

i foresee the memory requirements becoming ridiculous, unless you are using a small map.


I assume the reason you want to save individual pixels is so the terrain can be damaged (by erasing chunks of the terrain when hit). Doing this in 3D is not only hard, but the memory and CPU speed needed to do it are just insane. The solution: use DirectDraw instead of Direct3D or OpenGL. All other games like this (Worms, Tanks, Gorillas, etc.) use 2D rendering. Store the terrain as chunk "sprites", say 128x128, then modify them whenever terrain damage needs to be generated.
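The chunk-sprite approach above boils down to editing a small bitmap whenever something explodes. A minimal sketch, with made-up names and a 1-byte-per-pixel mask (1 = solid ground, 0 = air) standing in for the real sprite format:

```c
/* Carve a circular crater out of one 128x128 terrain chunk.
   Hypothetical names and sizes -- illustrative, not from any engine. */
#define CHUNK 128

static void carve_crater(unsigned char chunk[CHUNK][CHUNK],
                         int cx, int cy, int radius)
{
    int r2 = radius * radius;
    for (int y = 0; y < CHUNK; y++)
        for (int x = 0; x < CHUNK; x++) {
            int dx = x - cx, dy = y - cy;
            if (dx * dx + dy * dy <= r2)
                chunk[y][x] = 0;   /* blast this pixel away */
        }
}
```

After carving, only the touched chunk's sprite needs to be redrawn or re-uploaded, which is the whole point of splitting the terrain into chunks.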

shurcool: I've just tried, and remembered that you can not query outside of the GL context.
You can define a viewport which is bigger than the current screen, but in fact you can't read anything in that area.

Well, what you can do is split your viewing frustum into small regions, render each region, and finally merge all of them into a big texture or a big RGB array.
Once this big texture is computed, there is no problem displaying it.
But be careful of the texture size limit: it is somewhere between 1024x1024 and 4096x4096 on current cards.
