
Very basic AO concept in HLSL


So in my world of cubes something seemed to be missing, and I might want to add a little AO to it. I have read about SSAO and other types, but since it's a world of cubes and I don't have really complex features, objects, or geometry, I was looking for a very, very basic application of AO to apply to my game in HLSL.

So, can anyone help me out here?

If it is too simple it will look awful. Even if you don't understand the shader you can add a decent AO technique. Besides, they all need the same inputs: the depth buffer and some parameters.

I included a couple of SSAO techniques in my engine (http://xnafinalengine.codeplex.com/) that you can see in my source code. I think they are "easy" to adapt to your engine.
One more thing: I apply the AO result to the ambient light, but you can compose it in a final pass instead.
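For illustration, a minimal sketch of that kind of composition in HLSL, assuming the AO result sits in a single-channel texture (the sampler and variable names are placeholders, not the engine's actual ones):

// Hypothetical ambient pass that modulates the ambient term by the AO result.
sampler aoSampler : register(s0); // single-channel AO buffer
float3 AmbientColor;

float4 AmbientPS(float2 uv : TEXCOORD0) : COLOR0
{
    float ao = tex2D(aoSampler, uv).r; // occlusion factor, 0 = fully occluded
    return float4(AmbientColor * ao, 1);
}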

Interesting. I gave it a look and can't really understand what it does, much less how to get it to work.

Every SSAO technique needs a depth buffer as input. In XNA you can't access the GPU depth buffer, so you need to build one yourself. Why not start there?

You need to render all your opaque geometry into a render target with a Single surface format and a depth surface. Use my shader, but ignore the normal information. Then show me a screenshot of this buffer and I will tell you if it looks correct.

Be aware that the Single surface format does not work with linear or anisotropic filtering, so when you render the render target to the screen, set the filter parameter to point.
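For reference, a minimal sketch of such a depth pass in HLSL, assuming a standard perspective projection and normalizing by the far plane (the names are hypothetical, not the actual engine shader):

// Hypothetical linear-depth pass writing into a Single (32-bit float) render target.
float4x4 WorldViewProjection;
float FarPlane; // distance to the far clip plane

struct VSOutput
{
    float4 Position : POSITION0;
    float  Depth    : TEXCOORD0;
};

VSOutput DepthVS(float4 position : POSITION0)
{
    VSOutput output;
    output.Position = mul(position, WorldViewProjection);
    // For a standard projection, clip-space w equals view-space depth;
    // dividing by the far plane stores it in [0,1].
    output.Depth = output.Position.w / FarPlane;
    return output;
}

float4 DepthPS(VSOutput input) : COLOR0
{
    // Only the red channel matters in a Single-format target.
    return float4(input.Depth, 0, 0, 1);
}

technique RenderDepth
{
    pass P0
    {
        VertexShader = compile vs_2_0 DepthVS();
        PixelShader  = compile ps_2_0 DepthPS();
    }
}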

Any questions?

Will work on that and let you know... I do need to generate the normals into the pixel buffer, since I'm working normal-less.

One step at a time. Let's do the depth buffer first.

I'm following the thread, so when you have something to show me, make a post and I'll check it.

Good luck!!

OK, I'm back now; I had to make a long trip to Asia, which is not over yet.

I have been looking into Catalin Zima's deferred rendering tutorial here http://www.catalinzima.com/tutorials/deferred-rendering-in-xna/ since it works its way through building the buffers (I'm also using the XNA 4.0 version someone else did based on it as a reference). I'm kind of stuck trying to get the directional light to work, as it always shows a blank screen, but I can see my other three views with color, normals, and depth in a buffer, so I guess step one is done. I'll post a screen when I get back home.

Here is the screen of the buffer I have so far... for some reason the lower right corner should be the light map, but it's still not showing...

[attachment=9311:depth test.png]

OK well, I managed to get it to work, but it's still kind of odd, and I don't fully understand how I made it work... :huh:

I have some fx files. One does the normal rendering to three RenderTarget2Ds, which I draw on the screen above with a SpriteBatch call on the render target, and another fx file does the lighting, taking as input the render targets from the first file. But the thing is that I had to render a quad to make it work??? And I cannot seem to print the result with my SpriteBatch, even though the result of the lights fx file is a RenderTarget2D, because it comes out blank; yet if I render what looks like a totally empty quad, the result image pops up in full screen.

:blink: Very confused here. I thought that just filling out the parameters and executing the effect.technique.apply method would trigger the code in the HLSL file, so why does rendering an empty quad with no reference whatsoever make any difference when the pixel shader has an output structure? Edited by winsrp

OK, I'm trying to understand it bit by bit. I need to send my depth buffer and my normals buffer, then send a random-normals texture that he provides and fill out a couple of variables, and that should be it. At the very end of the code, I need to apply the AO value to my current light calculations and draw a quad calling this shader.

I think that should do the trick; I'll give it a try and see if it works.

WOW... that was a royal pain in the... you know where... but I finally got it. I still need to tweak values on the final render. So from top to bottom, left to right, you have:

color buffer, depth buffer, normal buffer, light buffer, SSAO buffer, and final render (still need to tweak some values to get this looking nice).

[attachment=9357:SSAO test.png]

That's cool, thanks... here's a little bit more info for you to read.

I've tried 2 different SSAO formulas I've found around, but with no luck; this is the latest.

[attachment=9411:SSAO test2.png]

I also cannot seem to add ambient light to the scene without screwing it up.

Thanks a lot in advance for the help. This stuff is really driving me crazy, and I really, really want it to work.

Not sure if you really need screen-space AO. What I did was simply store, for each quad, on which sides there are other cubes and put gradients on them accordingly.

So you had a base color and just changed the gradient of that color? But you need to check if they are kind of occluded, right? So you need to take some samples around the world to test for occlusion, right?

Still, I want to give it a nice lighting effect.

I'm having trouble incorporating 4 effects:

ambient light
diffuse light
diffuse shading
and SSAO

All 4 together = a mess for me, and it doesn't work, but I'm not giving up...

Also, I'm working on my item editor (almost done), where I can also test lighting for the items, and on the landscape, which I haven't totally figured out how to turn into 3D terrain without those pesky floating rocks here and there... but I'm almost there. I also have my masteries designed on paper, plus my skill set; I have a guy running around the terrain; and I'm working on procedural trees at the moment.

Lots to be done... but getting there.

And as Shakespeare once said...

Have more than you show, speak less than you know. Edited by winsrp


ok well, I managed to get it to work, but it's still kind of odd, and I don't fully understand how I made it work...

I have some fx files. One does the normal rendering to three RenderTarget2Ds, which I draw on the screen above with a SpriteBatch call on the render target, and another fx file does the lighting, taking as input the render targets from the first file. But the thing is that I had to render a quad to make it work???



You always have to render some geometry on the GPU. If you render text or textures on screen, SpriteBatch internally renders a quad for you with a texture on top of it. And when you want to, for example, post-process an image, you want to apply some code (a shader) to all your pixels; therefore you render a quad that covers the entire screen, and in each pixel the pixel shader performs the calculations that post-process the image (the image you want to post-process has to be a different texture, because you can't read the destination texture in a shader). The quad is a medium to tell the pixel shader where to perform its calculations.

With deferred renderers, you have the scene preprocessed (the G-buffer): the depth of each pixel, the normal at each pixel, and some other information. What you do in later stages is use this information and compose it with other information. For example, in the stage you want, I guess, to calculate an ambient or directional light, lights that potentially affect all your pixels; therefore you render a quad that covers the whole screen, and in each pixel you read some scene information (from the G-buffer) and compose it with the light information to produce the lighting result.

Light intensity is additive: if you want to apply another light, just render another quad (this time with the other light's information) and add the new result to the previous one (you set the blending states so this happens automatically).
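A minimal sketch of that idea as an .fx technique, assuming the quad's vertices are already in clip space so no transform is needed (the names and the empty light computation are placeholders):

// Hypothetical full-screen-quad pass with additive blending for light accumulation.
struct VSInput  { float4 Position : POSITION0; float2 TexCoord : TEXCOORD0; };
struct VSOutput { float4 Position : POSITION0; float2 TexCoord : TEXCOORD0; };

VSOutput QuadVS(VSInput input)
{
    VSOutput output;
    output.Position = input.Position; // the quad already covers the screen in clip space
    output.TexCoord = input.TexCoord;
    return output;
}

float4 LightPS(VSOutput input) : COLOR0
{
    // Here you would sample the G-buffer at input.TexCoord and compute this
    // one light's contribution; a placeholder value is returned instead.
    return float4(0, 0, 0, 1);
}

technique AdditiveLight
{
    pass P0
    {
        // One/One blending adds each light's result to what is already in the
        // light buffer, giving the additive behavior described above.
        AlphaBlendEnable = true;
        SrcBlend  = One;
        DestBlend = One;
        VertexShader = compile vs_2_0 QuadVS();
        PixelShader  = compile ps_2_0 LightPS();
    }
}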


Lunch!! When I'm back I'll continue. Edited by jischneider


So you had a base color and just changed the gradient of that color? But you need to check if they are kind of occluded, right? So you need to take some samples around the world to test for occlusion, right?

I stored that info in the vertex data itself... for each quad I checked the eight neighbor cubes that can create a concave edge and put that into the vertices (or in my case quads, since I used a geometry shader). That way I know where to put gradients without any screen-space pass. Or at least you could; I think what I actually did was render the texture coordinates to a buffer and apply the gradients in screen space for performance reasons.

result:
[attachment=9437:vertexao.png]

top down view of a hole that shows the gradients a little better:
[attachment=9438:vertexao2.png]

Hmpf, no idea why the mouse pointer ended up in the screenshots when it's deactivated on screen... Edited by japro

Uhh... lots of information, entering sponge mode now!!

jischneider

OK, so what you are telling me is that of all those images I compose, some are helpers and some are actual additive filters on top of an image? So the depth buffer is a helper for some calculations, and then all the others (color, light, AO, etc.) just fall one on top of the other to modify the previous composition result? If that is the case, then point taken, and I kind of understand now why I have to render a quad.

japro

So how do you know where the concave portion might be? The 8 top cubes? (excluding the center, which would be the 9th) I still need to know who to apply a gradient in HLSL on my pixel shader, since this looks kind of cool. Now I would like to explore both avenues, your solution and the AO one, to see which is better in both looks and performance, do a combined pass to see what comes out, and see which one I can integrate long term with all my lights and shadows.

And if you have a single-cube hole, how would that look from up top? Will it have a little light in the center due to the gradients? Edited by winsrp


So how do you know where the concave portion might be? The 8 top cubes?

Exactly. There are basically 8 flags (which you can conveniently store in a single byte) that show which of the neighbor cubes are occupied.

I still need to know who to apply a gradient in HLSL on my pixel shader.
You mean how?
I have this function in my GLSL shader (translating it to HLSL should be trivial; you probably only need to replace vec2 with float2):

float occlusion(vec2 tx, uint occ) {
    float res = 1;
    // Each bit of occ marks an occupied neighbor; each factor is a gradient that
    // is 0 at the occluded corner/edge and rises to 1 away from it. Corner bits
    // (1, 4, 16, 64) use a radial falloff, edge bits (2, 8, 32, 128) a parabolic one.
    if ((occ & 1u) > 0u)   res *= 1 - pow(clamp(1 - sqrt(tx.x*tx.x + tx.y*tx.y), 0, 1), 2);
    if ((occ & 2u) > 0u)   res *= 1 - (1 - tx.y)*(1 - tx.y);
    if ((occ & 4u) > 0u)   res *= 1 - pow(clamp(1 - sqrt((1 - tx.x)*(1 - tx.x) + tx.y*tx.y), 0, 1), 2);
    if ((occ & 8u) > 0u)   res *= 1 - tx.x*tx.x;
    if ((occ & 16u) > 0u)  res *= 1 - pow(clamp(1 - sqrt((1 - tx.x)*(1 - tx.x) + (1 - tx.y)*(1 - tx.y)), 0, 1), 2);
    if ((occ & 32u) > 0u)  res *= 1 - tx.y*tx.y;
    if ((occ & 64u) > 0u)  res *= 1 - pow(clamp(1 - sqrt(tx.x*tx.x + (1 - tx.y)*(1 - tx.y)), 0, 1), 2);
    if ((occ & 128u) > 0u) res *= 1 - (1 - tx.x)*(1 - tx.x);
    return res;
}


"occ" contains the occlusion flags and tx are essentially texture coordinates of the quad (so [0,1] on both axes). The result are gradients that are 0 on concave edges and 1 on the others. So you then can use that in your lighting calculation however you like. In the example images I just do "color *= 0.7+0.3*occlusion(...);" Edited by japro

Hmm... so besides the HLSL conversion, which as you said is trivial, I will need to modify it a little to change the input texture coordinate (since you are using a geometry shader, you are building your color for each quad into a texture, if I'm not wrong) so it can be used directly with my color coordinate. And since I'm not using quads but cubes, I can do a little trick here and use the normals buffer to figure out which quad I'm reading... does that make any sense?

The geometry shader and cubes shouldn't really figure into this... what you need to provide is a quad/face-relative coordinate system, as you would when texturing the quads: texture coordinates, essentially. When I speak of quads I mean the faces of the cube, which in my case (and I guess in yours) are made up of two triangles. You then use this function in the fragment shader the same way you would use a texture lookup when texturing individual quads, except that you also have to provide the occlusion information about the neighboring cubes.
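As a hypothetical usage sketch (again SM 4.0+, and assuming the face coordinates and the packed flags are passed down from the vertex data; all names are placeholders):

// Per-face shading using the occlusion gradients.
struct PSInput
{
    float4 Color               : COLOR0;
    float2 FaceUV              : TEXCOORD0; // [0,1] across the face, like texture coords
    nointerpolation uint Flags : TEXCOORD1; // the 8 neighbor flags, constant per face
};

float4 FacePS(PSInput input) : SV_Target
{
    // Same composition as in the screenshots: 0.7 base plus 0.3 scaled by the gradient.
    return input.Color * (0.7 + 0.3 * occlusion(input.FaceUV, input.Flags));
}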

Yeah, I got that about the quad; it's the same thing I was thinking. But I have some vertex optimization in place that makes it a little hard for me: if a cube is next to another of the same "type", the vertex is shared with its neighbor cube and the index is re-used... funky code I came up with after I figured out that I cannot fully optimize vertices to be unique in space, due to some coloring problems.

Besides, I'm not texturing anything, just coloring, and I was really planning on leaving textures out of the picture here, which presents me with a problem.

But on the other hand, trying to see the big picture here... I could create a texture with each color I want, use the full optimization code I have for unique vertices, and just sample the texture at the corresponding vertex coordinates... hmm... that way colors won't mix, even when they are based on color textures... tricky.

Just a preview image I took the other day.

[attachment=9449:shapes.png]

This is another pic of the full extent of my map. FPS is low since I'm looking at the whole map at once; normally the camera would be on the player, who is in the middle of the map, so the view frustum will take care of the FPS on screen.

On topic: I'm starting from scratch here. Never give up, never surrender. I've already written the terrain generation 11 times using different approaches to make it look better, run faster, etc., and the same goes for the lighting and everything else; every time I learn something new. This is what it looks like with the normal diffuse light + ambient light that I have:

[attachment=9460:shapes2.png]

This is what it looks like with the current diffuse + normals + kind-of ambient + SSAO. No water here yet...

[attachment=9461:shapes3.png] Edited by winsrp
