

How was this achieved?



#1 s.howie   Members   -  Reputation: 138


Posted 10 April 2013 - 03:53 PM

I watched this video that demonstrates some graphics tech Obsidian are developing for Project Eternity.
 
 
I'm really curious how they achieved 3D dynamic water levels and lighting on a 2D image.
 
If anyone could explain it to me like I am 5, I would greatly appreciate it.

Edited by s.howie, 10 April 2013 - 03:55 PM.



#2 L. Spiro   Crossbones+   -  Reputation: 14447


Posted 10 April 2013 - 04:10 PM

Not too difficult to do the lighting, shadows, or water depth.

Along with the 2D RGB image there is a depth buffer.

 

In standard 3D programming the geometry will be projected onto the screen and each pixel will have a depth value as well.  Lighting and shadowing are done from there.

 

This is exactly the same thing except that the projection step has been pre-processed.  Since you can’t change the angle of the camera, the projection of the geometry and the depth values for each pixel will always be the same, so it can be pre-processed into the 2D image you saw there plus a secret depth buffer for each pixel that you didn’t see.

 

Since they have depth for each pixel they can also raise and lower the water level as they showed.  This could be done by either reversing the actual height of each pixel knowing the incident angle of the camera or by projecting the water into the same space as the level and performing depth-testing there instead.
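To make that concrete, here is a minimal CPU-side sketch of the second option (not Obsidian's actual code; every name here is invented for illustration): composite a water layer over the pre-rendered image by depth-testing a per-pixel water depth against the baked scene depth.

#include <cstdint>
#include <vector>

// Sketch only: a scene pre-rendered into a color buffer plus a baked
// per-pixel depth buffer (smaller value = closer to the camera).
struct Framebuffer {
    int width = 0, height = 0;
    std::vector<std::uint32_t> color;  // packed RGBA, one per pixel
    std::vector<float>         depth;  // baked depth, one per pixel
};

// 'waterDepth' holds the water plane's depth at every pixel for the current
// water level; with a fixed camera it can be recomputed or offset per frame.
void compositeWater(Framebuffer& scene,
                    const std::vector<float>& waterDepth,
                    std::uint32_t waterColor) {
    for (std::size_t i = 0; i < scene.depth.size(); ++i) {
        // Ordinary depth test: water shows only where it is closer to the
        // camera than the pre-rendered geometry at that pixel.
        if (waterDepth[i] < scene.depth[i]) {
            scene.color[i] = waterColor;
            scene.depth[i] = waterDepth[i];
        }
    }
}

Raising or lowering the water level then just means recomputing (or offsetting) the waterDepth values before compositing.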

 

 

L. Spiro


It is amazing how often people try to be unique, and yet they are always trying to make others be like them. - L. Spiro 2011
I spent most of my life learning the courage it takes to go out and get what I want. Now that I have it, I am not sure exactly what it is that I want. - L. Spiro 2013
I went to my local Subway once to find some guy yelling at the staff. When someone finally came to take my order and asked, “May I help you?”, I replied, “Yeah, I’ll have one asshole to go.”
L. Spiro Engine: http://lspiroengine.com
L. Spiro Engine Forums: http://lspiroengine.com/forums

#3 Khatharr   Crossbones+   -  Reputation: 3084


Posted 10 April 2013 - 08:18 PM

That reminded me of late PSX and early PS2 era stuff. For a moment I thought they were making a dev kit and was somewhat interested, but then I saw they're just making yet another 'classic' cRPG. I'm somewhat puzzled why the guy is excited enough to talk about it for half the video. It's well made, but there's nothing really new there.


void hurrrrrrrr() {__asm sub [ebp+4],5;}

There are ten kinds of people in this world: those who understand binary and those who don't.

#4 s.howie   Members   -  Reputation: 138


Posted 11 April 2013 - 04:59 AM

@ L. Spiro

 

Thanks for your reply. My 5-year-old brain is struggling to understand all of what you said, but I think I have a tentative grasp of the basic idea.

 

What I took away from your explanation was as follows:

 

1) When the 2D image of the environment is generated, a depth buffer is returned along with the RGB value of each pixel.

2) This depth information per pixel can then be used to somehow mask the water effect.

 

 

I got lost where you mention "reversing the actual height of each pixel".

 

 

I have a few assumptions about how step 2 is achieved:

 

1) I assume that the depth buffer for each pixel is the distance from the camera, and that the z height of the pixel could then be found from that depth value and the angle and position of the camera (see the small sketch after this list).

2) Once the z height of a pixel is known, I assume that when the water texture is rendered, each pixel is checked against the corresponding z height of the environment image, and the water is only rendered where that height is at or below the water height.

3) Knowing next to nothing about graphics programming, I'm assuming these operations, being per pixel, are pushed to the GPU by a shader.
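For what it's worth, here is a tiny sketch of assumption (1), under the further assumption that the stored depth is the distance from the camera along each pixel's view ray (whether the engine actually stores that, or a projected z, is an implementation detail we don't know; names are invented):

struct Vec3 { float x, y, z; };

// 'viewDir' is the unit-length ray through a given pixel; for a pre-rendered
// scene it is fixed per pixel because the camera never moves.
Vec3 worldPositionOfPixel(Vec3 cameraPos, Vec3 viewDir, float depth) {
    return { cameraPos.x + viewDir.x * depth,
             cameraPos.y + viewDir.y * depth,
             cameraPos.z + viewDir.z * depth };  // one component is the height
}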

 

 

Thanks again for shedding some light on what is going on under the hood. Please let me know if I have horribly misunderstood your explanation.



#5 Khatharr   Crossbones+   -  Reputation: 3084


Posted 11 April 2013 - 02:24 PM

When they rendered the scene from their 3D model they kept the depth buffer from the render. When they draw the 2D scene they restore that buffer and then do additional drawing. They can get that water effect by fiddling with the depth of the water. It's not necessary to use a shader for the water effect; normal depth testing can do that. The lighting effect (day/night shift) could be done with a pixel shader.

The depth buffer operates at the per-pixel level, so the positioning of the camera in this case is somewhat moot, since the scene is pre-rendered. Basically each pixel has a value indicating its depth. In order for a new color to be drawn into that location it must have a lesser depth value than the existing pixel. So to lower the water like that they can just increase the depth at which they render the water texture and the rock pixels will end up on top, hiding the water.

The lighting effect with the orb looks kind of like bump mapping or displacement mapping. Links below:

 

http://en.wikipedia.org/wiki/Z-buffering

http://en.wikipedia.org/wiki/Bump_mapping

http://en.wikipedia.org/wiki/Displacement_mapping
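To illustrate the "increase the depth of the water and the rocks end up on top" point, here is a tiny, self-contained toy (illustrative numbers only; it also treats the water's depth as uniform across the strip, which a tilted camera wouldn't quite give you):

#include <array>
#include <cstdio>

int main() {
    // Baked scene depths for a strip of 8 pixels (rocks at various distances).
    const std::array<float, 8> sceneDepth = {0.30f, 0.55f, 0.40f, 0.70f,
                                             0.45f, 0.65f, 0.35f, 0.60f};
    // Lowering the water level is modeled as pushing the water's depth value
    // further from the camera, so more rock pixels win the depth test.
    for (float waterDepth : {0.32f, 0.50f, 0.75f}) {
        std::printf("water depth %.2f: ", waterDepth);
        for (float d : sceneDepth)
            std::putchar(waterDepth < d ? '~' : '#');  // ~ water visible, # rock on top
        std::putchar('\n');
    }
}

Each successive line prints more '#' (rock) and less '~' (water) as the water's depth value grows, which is the draining effect seen in the video.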


Edited by Khatharr, 11 April 2013 - 02:27 PM.


#6 s.howie   Members   -  Reputation: 138


Posted 12 April 2013 - 07:30 PM

@Khatharr

 

Thanks for your explanation and the links.

 

My understanding of what I am reading on z-buffering is that the depth value is the distance from the camera. If the depth value alone were used to determine where the water is rendered, wouldn't it render in a plane that faces the camera rather than a plane facing up the y axis?

 

Perhaps I am coming at this from the wrong angle. I am thinking of the water as a flat texture rather than a plane that has also been rendered and has its own z-buffer. If that were the case, then this plane's depth buffer could be shifted to shift the water height, and the two depth buffers could be checked against each other, rendering the pixel closest to the camera.

 

Am I getting close?



#7 Khatharr   Crossbones+   -  Reputation: 3084


Posted 12 April 2013 - 10:03 PM

The z buffer belongs to the destination buffer, not the texture. In this case they're blitting the pre-rendered depths into the depth buffer. They rendered the scene in 3D and saved both the color buffer and the depth buffer. When they start the scene they draw the texture (the color) and then blit the depth values from the pre-render directly into the depth buffer. Once that's done they can place 2D sprites or whatever into the scene and get semi-realistic clipping. The water is just a dynamic texture on a plane, yes. If you think of the water as being a transparent plane and then imagine lowering that into a pile of pebbles you'll get a similar visual effect. Mind that the plane is not necessarily parallel to the screen.

 

Remember that when rendering you give each vertex a 3-dimensional coordinate. When the hardware renders the water it creates the plane mathematically. Each triangle is just 3 points (in a specific order). After those points are run through the rendering matrix to get their correct positions relative to the viewer, the rasterizer starts. The rasterizer checks each pixel in the viewport (probably the full screen in this case) to see if a ray from the front of the frustum to the back collides with the triangle represented by those 3 points. If it does then the depth value at the location of the collision is checked against the value in the depth buffer. If the collision is 'underneath' the existing depth buffer value then the pixel is skipped. If it's 'on top' of the depth value then the depth value is replaced, the color of the primitive at the collision location is calculated, any lighting effects are applied, and the pixel is colored accordingly.
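Below is a compact software version of that per-pixel step, just to make it concrete. Real hardware uses edge functions and tiles rather than literally casting a ray per pixel, perspective-correct interpolation is omitted here, and every name is made up for illustration.

#include <vector>

struct Vtx { float x, y, z; };  // screen-space position plus depth, post-transform

// Signed "edge function": which side of edge a->b the point (px, py) lies on.
static float edge(const Vtx& a, const Vtx& b, float px, float py) {
    return (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
}

void rasterizeTriangle(const Vtx& v0, const Vtx& v1, const Vtx& v2,
                       int width, int height,
                       std::vector<float>& depthBuf,       // width*height, smaller = closer
                       std::vector<unsigned>& colorBuf,    // width*height
                       unsigned flatColor) {
    const float area = edge(v0, v1, v2.x, v2.y);
    if (area == 0.0f) return;  // degenerate triangle, nothing to draw
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const float px = x + 0.5f, py = y + 0.5f;
            // Coverage test: is this pixel inside the triangle?
            float w0 = edge(v1, v2, px, py);
            float w1 = edge(v2, v0, px, py);
            float w2 = edge(v0, v1, px, py);
            if ((area > 0 && w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                (area < 0 && w0 <= 0 && w1 <= 0 && w2 <= 0)) {
                // Interpolate the depth at this pixel from the three vertices.
                w0 /= area; w1 /= area; w2 /= area;
                const float z = w0 * v0.z + w1 * v1.z + w2 * v2.z;
                const int i = y * width + x;
                // Depth test: skip the pixel if something closer is already there.
                if (z < depthBuf[i]) {
                    depthBuf[i] = z;
                    colorBuf[i] = flatColor;  // lighting/texturing would happen here
                }
            }
        }
    }
}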

 

So when they draw the pre-render and blit the depth values, after that point in the frame the scene is identical to what it would be if they had redone the full 3D render, but without all the work.

 

The graphics pipeline is complex as a whole, but it's actually just a combination of many fairly simple parts. Most of the parts are one means or another of reducing work that doesn't make sense. If you study how the three transforms work and understand that each primitive is rendered independently then you can easily see where all the other stuff fits into place.

 

http://msdn.microsoft.com/en-us/library/windows/desktop/bb206269%28v=vs.85%29.aspx



#8 s.howie   Members   -  Reputation: 138


Posted 13 April 2013 - 03:12 AM

@Khatharr

 

Thank you for that succinct explanation. I feel I am starting to understand, at a very high level of abstraction, what is going on.

 

I'm inspired to play around with the concept to see if I can get a similar effect. I may cheat, though, and render an image: I'll render both the environment model's and the water plane's depth buffers as images, then do a pixel-by-pixel comparison to see which pixel I should choose to draw into the combined image.

 

Thanks again.



#9 Khatharr   Crossbones+   -  Reputation: 3084


Posted 13 April 2013 - 04:07 AM

Depth testing is performed by the graphics hardware. It would be horridly inefficient to do it on the CPU. I think you're still not understanding how the depth buffer works.

 

It may be easier to understand it in 2D terms. Say you have a 10x10 grid of pixels that represents your render target. In order to simulate depth you'll need a matching 10x10 grid that holds the depth value for each pixel in the render target. Now, when you draw a sprite to the grid, you also specify a 'depth' for that sprite. For each pixel being drawn to, if the current value in the depth buffer is 'under' the sprite depth then the depth value for that pixel is updated and the color value is set to the corresponding texel value. If the current value in the depth buffer is 'over' the sprite depth then processing for that pixel stops - it failed the depth test.

 

Graphics hardware is ridiculously efficient at this kind of task. All you have to do is tell the API to render something and all of this gets done 'under the hood'. Typically, all you have to do with the depth buffer is remember to clear it at the start of each frame, just like the color buffer. The only thing they're doing differently with the water in this case is that they're not clearing the depth buffer before drawing; instead they're copying in a set of pre-computed values, so that when the water is rendered it has depth values to compare against and can simulate a 3D interaction.
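A toy version of that 10x10 setup, with the clear-versus-seed difference spelled out (characters stand in for colors; all names are invented for illustration):

#include <algorithm>
#include <array>

constexpr int W = 10, H = 10;

struct Target {
    std::array<char,  W * H> color;  // one character per pixel, standing in for a color
    std::array<float, W * H> depth;  // one depth per pixel, bigger = further away
};

// A normal frame starts by clearing the depth grid to "as far as possible"...
void clearDepth(Target& t, float farthest = 1.0f) { t.depth.fill(farthest); }

// ...whereas the technique described above seeds it with pre-computed values.
void seedDepth(Target& t, const std::array<float, W * H>& baked) { t.depth = baked; }

// Draw a w-by-h sprite at (x0, y0), with a single depth for the whole sprite.
void drawSprite(Target& t, int x0, int y0, int w, int h, char texel, float spriteDepth) {
    for (int y = std::max(0, y0); y < std::min(H, y0 + h); ++y)
        for (int x = std::max(0, x0); x < std::min(W, x0 + w); ++x) {
            const int i = y * W + x;
            if (spriteDepth < t.depth[i]) {  // 'over' the stored value: passes the test
                t.depth[i] = spriteDepth;    // update the depth...
                t.color[i] = texel;          // ...and the color
            }                                // otherwise the pixel is skipped
        }
}

A normal frame would call clearDepth first; the trick described above calls seedDepth with the pre-rendered scene's depths instead, and then draws the water sprite as usual.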



#10 s.howie   Members   -  Reputation: 138


Posted 13 April 2013 - 05:30 AM

@Khatharr

 

Thanks for the further explanation.

 

I assumed that depth testing would usually be performed on graphics hardware. However, the skill set and tools I have immediately at my disposal won't allow me to program for it (I planned to play around with the idea in JavaScript, and my graphics card isn't supported by WebGL). Hence the inefficient CPU rendering of the image. Essentially my plan was to overlay the water image onto the environment by comparing the two depth buffers (rendered as images). I understand this is not what really happens when a scene is rendered to the screen.

 

However, your comment has helped me further understand the concept, and for that I am grateful. Your description of how they don't clear the depth buffer before drawing the water, but instead copy in the depth buffer of the environment for comparison against that of the water plane, drove it home for me.

 

Your insights into the rendering process have given me a great appreciation of how a scene is rendered. I find it hard to believe how many calculations must be going on under the hood to raycast each pixel on the screen against each triangle in the scene, each and every frame. It does all this and runs at 60fps!

 

Mind = Blown

 

Obviously there must be optimisation strategies to only do tests that make sense (like spatial partitioning?), and maybe not recalculating depths for an object that has not moved if the camera is static (though this wouldn't help keep a solid framerate whilst the camera is moving). And many other things I wouldn't even think of.

 

Thank you for all your time in trying to make sure I understand the concept. It has been very valuable. But I feel as if I am only becoming more and more curious ;P


Edited by s.howie, 13 April 2013 - 05:32 AM.


#11 Khatharr   Crossbones+   -  Reputation: 3084


Posted 13 April 2013 - 03:31 PM

One of the big optimizations the GPU has access to is the fact that rasterizing a single pixel is "embarrassingly parallel" (an actual technical term). None of the results depend on any of the other results, so all the operations can be done in parallel. This is why you'll often hear about 'streams' on a GPU: the GPU will rasterize an enormous number of pixels at the same time. The hardware is also optimized for floating-point and matrix math, and the pipeline itself is riddled with optimizations like face culling and frustum clipping that can exclude large numbers of polygons from most of the required processing. It's a pretty interesting system and well worth the time to investigate and fiddle with.
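As a rough CPU analogy of that independence (only a sketch with invented names; a GPU does the same thing with vastly more simultaneous streams and dedicated hardware):

#include <algorithm>
#include <cstddef>
#include <execution>
#include <numeric>
#include <vector>

// Stand-in for whatever per-pixel work is being done; each pixel's result
// depends only on its own index and inputs, never on another pixel's result.
unsigned shadePixel(int index) { return static_cast<unsigned>(index) * 2654435761u; }

std::vector<unsigned> renderAllPixels(int width, int height) {
    std::vector<int> indices(static_cast<std::size_t>(width) * height);
    std::iota(indices.begin(), indices.end(), 0);             // 0, 1, 2, ...
    std::vector<unsigned> out(indices.size());
    std::transform(std::execution::par, indices.begin(), indices.end(),
                   out.begin(), shadePixel);                  // any order, in parallel
    return out;
}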





