Deliverance

Pre-rendering-based graphics


I'm becoming quite interested in this technique because I've never used it before and it seems quite useful for machines that don't have fancy hardware. I'm looking for information suitable for machines that don't support pixel shaders or other fancy options. Searching around, I've found very little info on the technology used in (again) Syberia, Resident Evil, and Myst. I'm quite a beginner at pre-rendering AND at integrating it with 3D objects (I've only used pre-rendered images in a few 2D games I worked on), so here's where I've arrived and what questions I have.

As far as I can figure, the game level would be composed by stitching together a reasonable (memory-friendly) number of pre-rendered images. Does this sound okay?

The questions:
- Should the depth buffer be pre-rendered? Images tend to need more disk space than analytic models, so I figure I could use 3D models instead of pre-rendered depth buffers: whenever I need the depth buffer, I'd just construct it from a simplified version of the level's geometry.
- What about rotating the camera 360 degrees? How can this be achieved? Scenes rendered into cubemaps?
- How many images are enough for a level? Would streaming data from disk be a good option for supporting big levels?
- In Syberia, for example, I see convincing shadows being cast by the animated parts of the game level: how is this achieved?

I imagine there's a fair deal of theory behind all this, so what sources of information would you recommend consulting further? Of course there will come a time for trial & error, but for now I want a moderately clear picture of the processes involved in a game engine that uses pre-rendered graphics along with 3D objects. :D

I liked those Resident Evil / Alone in the Dark / Monkey Island games! And although it's not fancy 3D, it pushes you more toward artistic results. Instead of creating one large bulk 3D world, each scene gets special care.

I think the stuff you see on the screen is divided into 4 things:
- A flat, high-detail picture
- 3D objects in the foreground (character, objects, pick-up items)
- An actual (low-res) 3D mesh
- Optionally, a foreground mesh/transparent picture (the player can walk behind this)

The 3D mesh is not rendered, but can be used for physics / collision detection, and generating shadowMaps. The difficult part is to let the image have the same depth/angle as the 3D mesh. In other words, you must be able to perfectly map/project that image on the 3D mesh.

Most games probably created a high-res mesh first in their level editor. They take a snapshot from a fixed camera angle/position/FOV, and then optionally enhance the image with photographs or hand drawings. Then they create a low-res (box-shaped) model of that area, based on the high-res mesh.
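The tricky bit here — mapping the snapshot back onto the low-res mesh — boils down to running world-space points through exactly the camera that produced the render. A minimal sketch with a deliberately simplified camera (at (0, 0, cam_z) looking down -Z; all names are made up for illustration):

```python
import math

def snapshot_projector(fov_deg, aspect, cam_z):
    """Build a function mapping a world-space point to the UV coordinate
    it lands on in the pre-rendered picture, assuming the picture was
    rendered with this FOV/aspect from a camera at (0, 0, cam_z)
    looking down the negative Z axis."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)

    def project(x, y, z):
        vz = z - cam_z                    # view-space depth (negative in front)
        ndc_x = (f / aspect) * x / -vz    # perspective divide into -1..1
        ndc_y = f * y / -vz
        return (ndc_x * 0.5 + 0.5, ndc_y * 0.5 + 0.5)   # NDC -> 0..1 UV

    return project

project = snapshot_projector(fov_deg=60.0, aspect=16 / 9, cam_z=5.0)
u, v = project(0.0, 0.0, 0.0)   # a point straight ahead maps to the image centre
```

As long as the runtime uses the same numbers the artist used for the snapshot, any vertex of the low-res mesh can be textured from the picture this way.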

The low-res model can be used to determine where your player can walk, how physical objects fall on the ground, where bullet decals go, etc. AND the low-res model can also be used to receive the shadows that are cast by dynamic objects in your scene (player, objects, etc.). You can generate shadowMaps in this "background scene". This can be done with shaders, but there are also several tutorials that did it without; or you can use stencil shadows. Either way, the shadowMaps/stencil shadows are created in the background, and later on pasted on top of your picture. For example, *project* your shadowMap onto the (invisible) low-res model, and draw it on top of your picture. You could do additive rendering, or multiply the background with the shadowMap. Or do both (to prevent pitch-black shadows)...
result = (background * 0.5) + (shadowMap * background * 0.5);
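In plain per-pixel arithmetic (intensities in 0..1, where shadowMap is 1 for lit and 0 for shadowed), that blend looks like:

```python
def shade_pixel(background, shadow_map):
    """Half additive, half multiplicative blend from the formula above:
    fully shadowed pixels only darken to 50% instead of pitch black."""
    return (background * 0.5) + (shadow_map * background * 0.5)
```

A fully lit pixel (shadow_map = 1.0) comes out unchanged; a fully shadowed one comes out at half brightness.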

I think the hardest part is getting the same lighting for your dynamic (true 3D) objects as for your background picture. As much as possible, try to use the same lighting (techniques) you used to render the high-res image.

Your rendering pipeline could look like this:
1.- Update shadowMaps (with the CPU (stencil), FBOs (modern hardware required), or snapshots (render, copy to texture, erase screen)). Do this for the dynamic/animated objects, for each lightsource.
2.- Draw background and shadowMaps together. Mix them the way you like. Shaders provide many ways to do it. If you don't have shaders, use transparency and multiple screen filling layers.
3.- Draw the dynamic objects on top (disable depth test).
4.- Optionally apply the shadowMaps on the dynamic objects as well ('self-shadowing')
5.- Optionally, render the foreground picture on top
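The five steps above, as a call-order sketch (every helper here is a made-up placeholder, not a real renderer API):

```python
calls = []  # records what the 'renderer' does, in order

def render_frame(lights, dynamic_objects):
    """Sketch of the pipeline above; each append stands in for a real draw."""
    for light in lights:
        calls.append(f"shadowmap:{light}")      # 1. shadow maps per light source
    calls.append("background+shadows")          # 2. picture mixed with shadow maps
    for obj in dynamic_objects:
        calls.append(f"draw:{obj}")             # 3. dynamic objects, no depth test
        calls.append(f"self-shadow:{obj}")      # 4. shadows on the objects too
    calls.append("foreground")                  # 5. optional foreground layer

render_frame(["lamp"], ["player"])
```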


>> Complex depth
If you have a complex world where a simple background and foreground layer wouldn't be sufficient, you could think of actually rendering the low-res mesh and projecting the picture onto it (look up projective texturing; compare it to a flashlight with a cardboard figure held in front of it). Now you can enable depth testing when rendering the objects, which solves the problem of the player being behind/in front of the static scene.
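The payoff of rendering the low-res mesh for depth only is an ordinary per-pixel depth test. A toy sketch (smaller depth = closer; the names are invented):

```python
def depth_composite(scene_color, scene_depth, object_color, object_depth):
    """Per-pixel depth test against a depth buffer filled by the invisible
    low-res mesh: the dynamic object only shows where it is closer."""
    if object_depth < scene_depth:
        return object_color
    return scene_color

# A pillar baked into the picture at depth 2.0 hides a player at depth 5.0:
pixel = depth_composite("pillar", 2.0, "player", 5.0)
```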

However, I'm not sure if very complex pictures can simply be projected onto a low-res mesh; some of the edges/corners could come out wrong. Another option might be rendering the (high-res) foreground to the stencil buffer or something. It has been a long time, so I might be talking rubbish right now, but I believe it's possible to determine for each pixel whether it's 'enabled' or 'disabled' this way. If an object is (partially) drawn on disabled pixels, nothing happens, so the previous rendering at that point (the background picture, drawn with stencil culling disabled) remains intact for those pixels.


>> 360
A cubeMap can do the job indeed, although it probably gets harder to synchronize it with that low-res 3D model I was talking about. Never tried it :)
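For what it's worth, picking which of the six pre-rendered faces a view direction hits is just the standard major-axis rule, so keeping six faces in sync is mostly bookkeeping:

```python
def cube_face(x, y, z):
    """Return which cube-map face a view direction falls on,
    chosen by the dominant axis of the direction vector."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= ax and ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"

face = cube_face(0.1, 0.2, -0.9)   # mostly looking down -Z, so face "-z"
```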


>> Streaming
I think you don't have to worry that much, unless you really target old/simple hardware. A plain RGB 1024x512 image (modern hardware doesn't even need these power-of-two resolutions) could fill a screen and takes 1.5 MB. If that is too much, you could look into compression techniques (those require shaders, though). Modern games such as GTA IV stream much more data than this.
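The 1.5 MB figure checks out as plain uncompressed RGB:

```python
def image_bytes(width, height, bytes_per_pixel=3):
    """Uncompressed size of one background picture (RGB = 3 bytes/pixel)."""
    return width * height * bytes_per_pixel

mib = image_bytes(1024, 512) / (1024 * 1024)   # 1.5 MB, as stated above
```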

I would load the neighbouring pictures of each scene in the background. Unless you have a very fast-paced game, you won't switch screens that often. And if it does happen that the current scene's picture hasn't been loaded yet, well, then just be patient :) Resident Evil most probably used the infamous door cutscenes to cover its loading times (each chamber would have ~3-5 pictures, I guess). Although I guess the GameCube version didn't need such a long loading time (5 seconds).

So if you only load a few scenes at a time, you can easily create a world made of thousands of pictures (I don't know what your storage medium is). And if you can do it quickly enough in the background, you won't have any loading times at all.
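The neighbour-prefetching idea above can be sketched as a tiny scene cache — `load` here is a stand-in for actually reading a picture from disk, and all the names are invented:

```python
class SceneCache:
    """Toy sketch of keeping only the current scene and its neighbours
    resident in memory; everything else is evicted."""

    def __init__(self, neighbours):
        self.neighbours = neighbours   # scene id -> list of adjacent scene ids
        self.loaded = {}               # scene id -> image data

    def load(self, scene_id):
        return f"pixels({scene_id})"   # placeholder for real disk I/O

    def enter(self, scene_id):
        wanted = {scene_id, *self.neighbours.get(scene_id, ())}
        for sid in wanted - self.loaded.keys():
            self.loaded[sid] = self.load(sid)   # prefetch (in the background)
        for sid in set(self.loaded) - wanted:
            del self.loaded[sid]                # evict far-away scenes

cache = SceneCache({"hall": ["door", "stairs"], "door": ["hall"]})
cache.enter("hall")   # hall + its neighbours are now resident
```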

Greetings,
Rick

There are many, many techniques, each having its advantages and disadvantages.

As Spek pointed out, it depends what your target technology is.

One technique which allows awesome results is to breakdown the pre-rendered scene into multiple bitmap layers:

- Color bitmap: Simply represents the flat color info of the scene.
- Lighting bitmap: Represents the info used to calculate how much to light up every pixel in the color bitmap. This allows you to use real-time lighting to affect the pre-rendered content as well as your models, which are batched and drawn in real time.
- Normal bitmap: Can be used to supply extra info for possible lighting techniques and/or extra detail.
- Displacement bitmap: Used for occlusion, shadow mapping and/or physics collisions. This is almost required if you want to create some kind of perspective illusion.

Possible other bitmaps can be used to do transparency and/or reflection stuff if desired.

All these bitmaps are then used in specific shaders which allow you to do all the required calculations.
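How the layers combine is up to the shader. One guess at a workable per-pixel scheme (not the exact math from any particular game) is to scale the color layer by the baked lighting layer modulated by a real-time light:

```python
def composite_pixel(color_rgb, lighting, light_intensity):
    """Hypothetical combination of the layers for a single pixel: the
    lighting bitmap value (0..1) says how strongly a real-time light of
    `light_intensity` (0..1) brightens this pixel of the color bitmap."""
    lit = min(1.0, lighting * light_intensity)
    return tuple(channel * lit for channel in color_rgb)

pixel = composite_pixel((0.8, 0.6, 0.4), lighting=1.0, light_intensity=0.5)
```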

One of the many roads leading to Rome ;).

Regards,

Xeile

