Archived

This topic is now archived and is closed to further replies.

Perferati

Rendering a scene to .MPEG

Recommended Posts

Hi all, I was wondering if anyone could direct me to tutorials/documents on how to render an OpenGL scene to an .mpeg format. What I would like to do is save my animation, when the user clicks play, to an .mpeg file which can then be played back independently of the program. Is this just a pipe dream, or is it possible to implement? I would need a fairly in-depth tutorial, as I really have no knowledge of the MPEG format. Thanks for all your help in advance.

That sounds like a toughy. It'd probably involve going frame-by-frame (like taking screenshots), combining them all into an AVI, converting that to MPEG, and then cleaning up by deleting all the BMPs and the AVI.

You could possibly look at source code for taking a screenshot, then at the source for a program that converts a series of BMPs to an AVI, and then do the same to convert that AVI to an MPEG. That's just my thought; I don't have a clue about the technical implementation.
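The per-frame "screenshot" step mentioned above is usually done with glReadPixels, which returns rows bottom-up; most encoders and image writers want them top-down, so the buffer has to be flipped first. Here is a minimal sketch of that flip step; the function name is made up for illustration, and no actual OpenGL calls are included:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Flip a tightly packed pixel buffer vertically, in place. After
// glReadPixels fills `pixels` bottom-up, this reorders the rows so an
// encoder that expects top-down data can consume them.
void flipRows(std::vector<unsigned char>& pixels,
              std::size_t width, std::size_t height,
              std::size_t bytesPerPixel)
{
    const std::size_t stride = width * bytesPerPixel;
    for (std::size_t top = 0, bottom = height - 1; top < bottom; ++top, --bottom)
        std::swap_ranges(pixels.begin() + top * stride,
                         pixels.begin() + (top + 1) * stride,
                         pixels.begin() + bottom * stride);
}
```

In the capture loop you would read the back buffer into a vector, call flipRows, and then hand the frame to whatever BMP/AVI writer you end up with.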

There is an AVI C++ library with headers which ships with MSVC6 and Windows 9x/2k/XP (assuming you're not using Linux).

Check MSDN. There is also a tutorial on rendering FROM an .avi TO OpenGL which might help a bit; it's tutorial 30-something at NeHe.

later days
/silvermace

The MPEG thing is possible but enormously (:-)) hard to implement. You'd need a very good knowledge of C++, the AVI stuff, and OpenGL to complete it. There is an AVI demo that comes with VC++ 6.0: it creates bitmaps of a moving clock and then builds an AVI file from the combined images.

Jonny Bravo: Take care, Minsky

Bye

The pain is coming... this summer!!!

You have two choices: one involves GPL code and the other involves Windows code.
First off: THERE IS NO TUTORIAL ON THIS, nor is there example code you can use. You will have to learn about MPEG and how to read from the back buffer. If you think it's too hard, then don't implement the feature. It's entirely up to you.

Be forewarned: either method will most likely not be realtime unless you run at quite a low resolution (i.e. 320x240), and even then it may not be terribly fast due to how slow it is to read from VRAM. Don't expect that resizing a fullscreen image down to a smaller size will speed things up; while it would reduce the work done by the MPEG encoder, it still strains the AGP bus. You should also allow the engine to run at a fixed interval, regardless of how long frames take, without messing up the timing of the physics and game logic. You may wish to consider a demo format which records the actual gameplay input (i.e. key presses), or motion vectors/positions x times per second. This can then be played back at slower-than-realtime rates (or possibly faster, depending on the hardware and resolution) while it's encoded. MPEG encoding is a CPU-intensive process; you may be better off saving to JPEG and then converting the JPEGs into an MPEG (see the MJPEG format and the Intel JPEG library).
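The fixed-interval, record-the-inputs idea above can be sketched roughly like this; all the type and class names here are hypothetical, invented for illustration:

```cpp
#include <cstdint>
#include <vector>

// One recorded input state, tagged with the fixed-step tick it applies to.
struct InputSnapshot {
    std::uint32_t tick;     // which simulation tick this input belongs to
    std::uint8_t  keyBits;  // packed key states for that tick
};

// Runs the simulation in fixed ticks no matter how long rendering takes,
// logging inputs per tick so the demo can be replayed (and encoded) at
// any speed later.
class DemoRecorder {
public:
    explicit DemoRecorder(double tickSeconds) : tickSeconds_(tickSeconds) {}

    // Feed the wall-clock time the last frame took plus current key state;
    // zero or more fixed ticks are executed. Returns how many ticks ran.
    int advance(double frameSeconds, std::uint8_t keyBits) {
        accumulator_ += frameSeconds;
        int ticks = 0;
        while (accumulator_ >= tickSeconds_) {
            accumulator_ -= tickSeconds_;
            log_.push_back({nextTick_++, keyBits});
            ++ticks;  // physics/game logic for one fixed step would run here
        }
        return ticks;
    }

    const std::vector<InputSnapshot>& log() const { return log_; }

private:
    double tickSeconds_;
    double accumulator_ = 0.0;
    std::uint32_t nextTick_ = 0;
    std::vector<InputSnapshot> log_;
};
```

Replaying is then just running the same fixed-step logic over log(), rendering each tick at whatever rate the encoder can keep up with.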

Assuming you now understand some of the limitations and know how to read frames from the back buffer or p-buffer, you need to encode the data. Two ways are explained below.

Almost before I forget: it's a good idea to have your own audio library so you can properly sync the audio to video frames that may be rendered slower than realtime. Either actually mix the WAV data yourself, or figure out how to get the WAV data you need from the library you're using.
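When you mix the audio yourself for offline encoding, the syncing mostly reduces to bookkeeping: each video frame must be paired with the exact slice of WAV samples it covers, independent of render time. A sketch of that mapping, with illustrative names:

```cpp
#include <cstdint>

// Half-open range [first, last) of audio samples covered by one frame.
struct SampleRange {
    std::int64_t first;  // inclusive
    std::int64_t last;   // exclusive
};

// Map a video frame index to its audio sample range. Pure integer math
// avoids the cumulative drift a floating-point samples-per-frame value
// would introduce over a long recording.
SampleRange samplesForFrame(std::int64_t frameIndex,
                            std::int64_t framesPerSecond,
                            std::int64_t samplesPerSecond)
{
    SampleRange r;
    r.first = frameIndex * samplesPerSecond / framesPerSecond;
    r.last  = (frameIndex + 1) * samplesPerSecond / framesPerSecond;
    return r;
}
```

At 30 fps and 44100 Hz, each frame covers 1470 samples, and frame ranges tile the stream with no gaps or overlaps.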

The GPL method: this requires releasing your game as open source, so it may not be a viable option for you. Simply do a search on SourceForge for an MPEG encoder. Pretty straightforward, and it works on most platforms.

The Windows method: use DirectShow and build the correct filter graph. You will, however, need to create your own source filter and understand how they work, since you will be creating the timestamps so the encoder knows how the frames relate as you pass them through. Unfortunately I don't have much knowledge in this area (i.e. creating source filters), so you will have to go through the SDK and look at whatever examples are present.

An alternative is to use the Video for Windows API and create an AVI that way. However, I have had trouble getting compressed audio to work with it, though that is probably bad code on my part. You can use most video compressors, including DivX if it's installed.

You could also store a sequence of JPEGs and turn that into an MPEG. DON'T even think about storing BMPs; it's WAY too hard-disk intensive. Not only would it require ridiculous amounts of disk space, but due to the sheer amount of data you would actually slow the process down compared to compressing to JPEG beforehand. Furthermore, most people don't have the over 260 MB per minute required to store the BMPs at 320x240x16 and a framerate of 30 frames per second.
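The 260 MB figure checks out with simple arithmetic, which this helper spells out (the function name is illustrative):

```cpp
#include <cstdint>

// Raw disk cost of one minute of uncompressed frames:
// bytes per frame = width * height * (bits per pixel / 8),
// times frames per second, times 60 seconds.
std::int64_t rawBytesPerMinute(std::int64_t width, std::int64_t height,
                               std::int64_t bitsPerPixel, std::int64_t fps)
{
    const std::int64_t bytesPerFrame = width * height * bitsPerPixel / 8;
    return bytesPerFrame * fps * 60;
}
// For 320x240 at 16 bpp and 30 fps: 320*240*2 = 153,600 bytes per frame,
// so 153,600 * 30 * 60 = 276,480,000 bytes, roughly 264 MB per minute.
```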

Looking at the NeHe tutorial as well as the MSDN samples will surely help with using the VFW AVI API that is part of Windows. While outdated, it surely beats DirectShow for ease of use, and beats the GPL method, which requires opening your source code (really only a problem if you don't want your game to be open source).

Yes, it's definitely possible. Realtime is pushing it a bit, depending on the hardware and resolution.

Thanks for all your input guys... I think I'll pass on .mpeg recording for this project. My boss only mentioned it as a "nice to have" feature, that and stereoscopic viewing. I've researched stereo viewing and I believe I'll be able to implement that no problem. One out of two ain't bad.

Thanks again.

I assume that your boss doesn't know a lot about programming if he gives you such a task as a "nice to have"...

The only other explanation is that he thinks you're a genius

You have several choices. The first is that you render a scene and afterwards read the pixels from the graphics board with glReadPixels(...). The one bad thing about this is that your video will then depend on the size of the window where your image is rendered. A better solution would be to use a p-buffer: you can render an OpenGL scene size-independently into a p-buffer (pixel buffer). Size-independent means you can specify what dimensions your image should have, regardless of your window size. If you're still interested, I can send you my AviRecorder class, which creates an MP4-compressed AVI file. It's quite easy to use (even for me).
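One practical detail when going the glReadPixels route: with OpenGL's default GL_PACK_ALIGNMENT of 4, each row written to your buffer is padded to a multiple of 4 bytes, so the buffer must be sized with the padded row length, not just width times bytes per pixel. A small helper (name invented here) makes the arithmetic explicit:

```cpp
#include <cstddef>

// Row size in bytes after padding to the given pack alignment (OpenGL's
// default GL_PACK_ALIGNMENT is 4). Allocate height * paddedRowBytes(...)
// for the glReadPixels destination buffer.
std::size_t paddedRowBytes(std::size_t width, std::size_t bytesPerPixel,
                           std::size_t alignment)
{
    const std::size_t raw = width * bytesPerPixel;
    return (raw + alignment - 1) / alignment * alignment;
}
```

For example, a 250-pixel-wide RGB image has raw rows of 750 bytes, padded to 752; alternatively, call glPixelStorei(GL_PACK_ALIGNMENT, 1) to get tightly packed rows.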

quote:
Original post by Peaceman
I assume that your boss doesn't know a lot about programming if he gives you such a task as a "nice to have"...

The only other explanation is that he thinks you're a genius


LOL... here's his exact wording:

"MPEG (or similar format) Output

A useful function to have would be to be able to generate an MPEG file output. It would use the same “view” as the user controlled screen display, but would be able to define its own image size, resolution, frame-rate, etc., as appropriate."

He doesn't think I'm a genius, but he knows I'm quite capable. He does do some programming, but he's a mechanical engineer, not a programmer. ;-)

Check the latest news on NeHe. There's a link to some pretty capable-sounding source code for generating AVIs from your own renderings. Haven't checked it myself yet, but it could be useful.

quote:
Original post by Peaceman
I assume that your boss doesn't know a lot about programming if he gives you such a task as a "nice to have"...

The only other explanation is that he thinks you're a genius


Funny, I have a boss just like that too!


www.thermoteknix.com
