1st person pre-rendered

Recommended Posts

Hi, I'm currently working on the design of a 1st-person pre-rendered game engine (like Myst III). I know how to make a basic environment: render six 90-degree shots and put them on the inside of a cube; been there, done that. However, I want to make it a little more dynamic, with animations and effects. One of these effects is water transformation; I want the water to slowly move.

I thought of the following solution: create a sphere out of triangles, with each vertex assigned a float telling it how much it can move and following what pattern; rocks get zero movement, water gets more, etc. Each triangle is subdivided when the 'node' is loaded (depending on how much data the player's computer can handle). At render time, I need a way to move the vertices according to their parent triangle, to create the effect of the water moving. After that, a vertex shader (I'm using DirectX) moves each vertex so that its distance to the origin stays the same (creating a perfect sphere, slightly deformed by the water effect).

Now, two questions:
- Is this a good way? I can save the data on the location of each triangle in a file, etc., so that's not a problem. But computing-wise, will this be fast?
- Moving the vertices: lock the buffer and change it that way? The calculation would be so complex I'm not sure a vertex shader could handle it.

Any comment on this would be greatly appreciated; thanks for your time reading!
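The idea above can be sketched on the CPU for clarity (all names here are illustrative, not engine code; the real version would live in a vertex shader): each vertex carries a movement weight, gets offset by an animated wave, and is then pushed back out so its distance to the origin equals the sphere radius.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical per-vertex animation: offset the vertex by a sine wave
// scaled by its movement weight (0 for rock, larger for water), then
// rescale it so its distance to the origin equals the sphere radius.
// The ripple survives as a shift of the vertex's position on the sphere.
Vec3 animateVertex(Vec3 v, float weight, float time, float radius)
{
    // Illustrative wave pattern: vertical offset driven by position and time.
    v.y += weight * 0.05f * std::sin(v.x * 10.0f + time);

    // Renormalise back onto the sphere of the given radius.
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    float s = radius / len;
    return { v.x * s, v.y * s, v.z * s };
}
```

With weight 0 (rock), the vertex simply stays on the sphere; with a nonzero weight the renormalised position wobbles over time.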

Sounds viable enough. You'll want to use a vertex shader for moving the vertices around - perhaps a displacement map is the best solution?

There's also some stuff you could do directly (i.e. render the water in realtime) if you pack depth data alongside your prerendered backdrop. It'd look something like this:

1) Render backdrop, writing z=1.0f (saving you a clear)
2) Render depth pass (you'll need pixel shader 2.0 for this, to write to oDepth)
3) Render realtime objects

As I say, it needs pixel shader 2.0, but it would allow a lot more dynamism. Trying to move something like a bird across the scene using your vertex-displacement method would be very difficult, if not impossible.
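The effect of those three passes can be shown with a tiny software sketch (hypothetical names, not D3D code): the prerendered backdrop carries a depth value per pixel, and a realtime object pixel only overwrites the backdrop where it is nearer to the camera. On hardware this is just the z-test running against the depth written in pass 2.

```cpp
#include <vector>

// One framebuffer pixel: backdrop colour plus the depth written for it
// in the depth pass (smaller = nearer the camera).
struct Pixel { unsigned colour; float depth; };

// Standard z-test, done by hand: the realtime pixel only lands where it
// is in front of whatever the prerendered backdrop put there.
void drawRealtimePixel(std::vector<Pixel>& frame, int index,
                       unsigned colour, float depth)
{
    if (depth < frame[index].depth) {
        frame[index].colour = colour;
        frame[index].depth = depth;
    }
}
```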

Thanks for your reply,

I was thinking about using pixel shaders, but since there is little support for them on older machines, I'd like to limit myself to v1, so I don't exclude people.
About the moving bits: parts that are moved by the water shader will be 'locked' (non-movable). Since everything is built out of rectangles, I can place overlays as rectangles over the scene and make them move along.
Other things, like a bird flying around the sky, will be a plane that's not deformed by the water (since birds are in the sky, it won't need to be changed in that way).
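The overlay idea reduces to a rectangle blit each frame, sketched here with illustrative names: an animated element is a small rectangle copied over the static backdrop at its current position.

```cpp
// Copy an overlay rectangle into the backdrop at (destX, destY).
// Buffers are row-major arrays of pixels; no clipping for brevity,
// so the overlay is assumed to fit inside the backdrop.
void blitOverlay(unsigned* backdrop, int backdropW,
                 const unsigned* overlay, int overlayW, int overlayH,
                 int destX, int destY)
{
    for (int y = 0; y < overlayH; ++y)
        for (int x = 0; x < overlayW; ++x)
            backdrop[(destY + y) * backdropW + (destX + x)] =
                overlay[y * overlayW + x];
}
```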

Changing the water to video would be way too much data: high resolution with a lot of nodes with moving things would increase render time insanely, as well as eat disc space and memory - not doable.

About the z-depth: I plan DOF and fog, so the depth is used, only not for realtime 3D objects.

I'll look into displacement maps! Cheers.

Quote:
Original post by The Parrot
I was thinking about using pixel shaders, but since there is little support for them on older machines, I'd like to limit myself to v1, so I don't exclude people.
Be aware that pixel shaders will only run in hardware (unlike vertex shaders, which can be run in software). Use of ps1.1 will limit you to a GF3 or above. But then, I was suggesting ps2.0, which is GFFX or later, so never mind [grin]

Quote:

About the moving bits: parts that are moved by the water shader will be 'locked' (non-movable). Since everything is built out of rectangles, I can place overlays as rectangles over the scene and make them move along.
Other things, like a bird flying around the sky, will be a plane that's not deformed by the water (since birds are in the sky, it won't need to be changed in that way).
Ahh, ok, if you're decomposing the scene into a static backdrop with animated overlays then you should be able to handle most things without a problem (e.g. people).

Quote:

Changing the water to video would be way too much data: high resolution with a lot of nodes with moving things would increase rendertime insanely, as well as absorb disc space and memory - not doable.
I suspect - given that the water only needs to be a short video that gets played on loop - that it wouldn't be that much compared to the requirements for other animated features. A small stream you can loop a two-second capture of; a person walking up and talking to you, you can't. (I realise that I'm just presenting more problems, instead of solutions [smile])

Quote:
About the z-depth; I plan DOF and fog; so the depth is used, only not for realtime 3d objects.
Fair enough. I guess you can apply those depth effects to animated features offline without too much trouble.

I'm wondering what the Myst games used. I think the first one used QuickTime; perhaps the later ones used something like Bink? I've not used it myself but I know it has a good reputation for control of both data size and bandwidth.

Riven came on six CDs. I don't think it was the static image backdrops that were taking up so much space...

Ahem, of course v1 excludes people as well - just not as many as v2 ;)

Quote:
Original post by superpig
I suspect - given that the water only needs to be a short video that gets played on loop - that it wouldn't be that much compared to the requirements for other animated features. A small stream you can loop a two-second capture of; a person walking up and talking to you, you can't. (I realise that I'm just presenting more problems, instead of solutions [smile])

I've handled video in my previous game Divided (see my sig), so it won't be too much of a problem to have video characters and cutscenes. However, rendered water loops at high res (3x 1024x1024 at most, I'd guess) are too much data. For stand-still shots I could use it if displaced water doesn't fit, but panoramas are just too large, I think.

Quote:
Original post by superpig
I'm wondering what the Myst games used. I think the first one used QuickTime; perhaps the later ones used something like Bink? I've not used it myself but I know it has a good reputation for control of both data size and bandwidth.

Riven came on six CDs. I don't think it was the static image backdrops that were taking up so much space...

The first one used QuickTime, yes - later ones used Bink. I believe Bink is the standard these days; every game with lots of animations seems to use it.

Riven had a lot of video, indeed - though most of its water was done through pixel displacement rather than as video. The latest Myst used displacement as well, rather than animation. Too much data for video, I'd guess.

By the way - in no way am I making a Myst clone, I just said Myst because it's most familiar to people.

Quote:
Original post by The Parrot
Riven had a lot of video, indeed - though most of its water was done through pixel displacement rather than as video. The latest Myst used displacement as well, rather than animation. Too much data for video, I'd guess.
Are you sure about that? The thing about this vertex displacement effect is that it would prevent you from having sharp edges on your water - if you're looking at a pool of water at the bottom of a cliff (a very Myst scenario) then moving the vertices around would move the water but it ought to warp the bottom of the cliff as well.

It's just occurred to me that you can still render the water in realtime - without depth data - if you create a mask which sets up where water pixels should and should not be drawn. That'd allow you to have objects sticking out of the water, occluding it. You could render that mask into the stencil buffer, and then render water in realtime - using your skysphere displacement if you want, but if you've got it you might as well go for something nicer like realtime refraction with EMBM or something.
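That mask idea can be illustrated in software (hypothetical names; on hardware this would be the stencil test): a precomputed mask marks which backdrop pixels are open water, and realtime water is composited only where the mask is set, so foreground objects baked into the backdrop correctly occlude it.

```cpp
#include <vector>
#include <cstddef>

// Composite a realtime water frame over the prerendered backdrop,
// but only at pixels the mask marks as water (1 = water, 0 = land
// or an object sticking out of the water, which must stay on top).
void compositeWater(std::vector<unsigned>& frame,
                    const std::vector<unsigned char>& waterMask,
                    const std::vector<unsigned>& realtimeWater)
{
    for (std::size_t i = 0; i < frame.size(); ++i)
        if (waterMask[i])
            frame[i] = realtimeWater[i];
}
```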

In any case, I'm still not convinced that water is such a large amount of data in full video. It may well be that you need 3 1024x1024 frames of video on loop but a 1024x1024 image really isn't that expensive, especially when you bring things like DXT compression into play.
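The size claim is easy to check with back-of-the-envelope arithmetic (helper names are illustrative): uncompressed 32-bit RGBA costs 4 bytes per texel, while DXT1 stores each 4x4 texel block in 8 bytes, an 8:1 ratio.

```cpp
// Uncompressed 32-bit RGBA: 4 bytes per texel.
long rgbaSize(long w, long h) { return w * h * 4; }

// DXT1: 8 bytes per 4x4 texel block (8:1 versus 32-bit RGBA).
long dxt1Size(long w, long h) { return (w / 4) * (h / 4) * 8; }

// rgbaSize(1024, 1024) -> 4194304 bytes (4 MiB)
// dxt1Size(1024, 1024) ->  524288 bytes (0.5 MiB)
```

So even three looping 1024x1024 frames in DXT1 stay around 1.5 MiB.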

I finally managed to get Myst III working again, and looking at the water makes me think it's vertex displacement with a mask, kind of like you said. Wikipedia tells me it's pixel displacement again, though.

On a side note, the latest Myst combines everything - animation, vertex displacement, 3D meshes - which is beyond me, really, as I haven't got the time to render so many passes. That's another reason why I can't do pre-rendered water, apart from scenes where it fits very well.

I'm now a bit wary of this, however, as I'll have to find a way to render the scene so that the water doesn't show any of the wrong edges where water meets land, the kind you see oh-so-often in realtime 3D water applications.

Any more thoughts would be very welcome - even now, I'm sure this effect could get complicated ;)
