Very basic AO concept in HLSL

Ok, I'm trying to understand it bit by bit. I need to send my depth buffer and my normals buffer, then send the random normals texture that he provides and fill out a couple of variables, and that should be it. Also, at the very end of the code, I need to apply the AO value to my current light calculations, and draw a quad calling this shader.

I think that should do the trick, I'll give it a try and see if it works.
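Roughly, the pixel shader idea (this is only a sketch, not the article's actual code; the texture names, sample count and radius/bias values are all placeholders) would be something like:

// Minimal SSAO sketch (SM 3.0 style). Assumed inputs: a linear depth buffer,
// a view-space normal buffer, and a small random-normal texture for rotation.
texture DepthBuffer;   sampler DepthSampler  = sampler_state { Texture = <DepthBuffer>; };
texture NormalBuffer;  sampler NormalSampler = sampler_state { Texture = <NormalBuffer>; };
texture RandomNormals; sampler RandomSampler = sampler_state { Texture = <RandomNormals>; };

float  Radius    = 0.5f;   // sampling radius (tweak to your scene scale)
float  Bias      = 0.02f;  // depth bias to avoid self-occlusion
float  Intensity = 1.0f;   // strength of the darkening
float2 HalfPixel;          // XNA/DX9 half-pixel offset for the full-screen quad

float4 SSAOPS(float2 texCoord : TEXCOORD0) : COLOR0
{
    texCoord += HalfPixel;

    float  depth  = tex2D(DepthSampler, texCoord).r;
    // Random vector used to rotate the sampling pattern per pixel (hides banding).
    float3 rand   = normalize(tex2D(RandomSampler, texCoord * 64.0f).xyz * 2.0f - 1.0f);

    // A tiny fixed kernel of offsets; real implementations use 8-16 samples.
    float2 kernel[4] = { float2( 1,  0), float2(-1,  0),
                         float2( 0,  1), float2( 0, -1) };

    float occlusion = 0.0f;
    for (int i = 0; i < 4; i++)
    {
        // Reflect the offset by the random vector so neighbouring pixels sample differently.
        float2 offset = reflect(float3(kernel[i], 0.0f), rand).xy * Radius / max(depth, 0.001f);
        float  sampleDepth = tex2D(DepthSampler, texCoord + offset).r;

        // If the sampled depth is closer to the camera than ours, it occludes us.
        float diff = depth - sampleDepth;
        occlusion += (diff > Bias) ? saturate(diff * Intensity) : 0.0f;
    }

    float ao = 1.0f - (occlusion / 4.0f);
    return float4(ao, ao, ao, 1.0f);
}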
WOW... that was a royal pain in the... you know where... but I finally got it. I still need to tweak values on the final render. So from top to bottom, left to right you have:

Color buffer, depth buffer, normal buffer, light buffer, SSAO buffer, and final render (still need to tweak some values to get this nice).

[attachment=9357:SSAO test.png]
Tomorrow I will read your posts. Right now I don't have time.

Project page: < XNA FINAL Engine >
That's cool, thanks... here's a little bit more info for you to read.

I've tried 2 different SSAO formulas I've found around, but with no luck. This is the latest:

[attachment=9411:SSAO test2.png]

I also cannot seem to add ambient light to the scene without screwing it up.

Thanks a lot in advance for the help, really driving me crazy with this stuff, and I really really want it to work.
Not sure if you really need screen space ao. What I did was to simply store for each quad on which sides there are other cubes and put gradients on them accordingly.
So you had a base color and just changed the gradient of that color? But you need to check if they are kind of occluded, right? So you need to take some samples around the world to test for occlusion, right?
still, I want to give it a nice lighting effect.

I'm having trouble incorporating 4 effects

ambient light
diffuse light
diffuse shading
and SSAO

All 4 together = a mess for me, and it doesn't work, but I'm not giving up...
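In case it helps anyone following along: the usual way to combine them is that ambient and diffuse add up to form the light, the SSAO term multiplies the ambient part (it is never added as a light itself), and the albedo multiplies the whole thing. A hedged sketch of that composition pass, where every name is an assumption:

// Sketch of the final composition; all sampler and parameter names are made up.
texture ColorBuffer;  sampler ColorSampler  = sampler_state { Texture = <ColorBuffer>; };
texture NormalBuffer; sampler NormalSampler = sampler_state { Texture = <NormalBuffer>; };
texture SSAOBuffer;   sampler SSAOSampler   = sampler_state { Texture = <SSAOBuffer>; };

float3 AmbientColor;
float3 LightColor;
float3 LightDirection; // normalized, pointing from the light into the scene

float4 ComposePS(float2 texCoord : TEXCOORD0) : COLOR0
{
    float3 albedo = tex2D(ColorSampler, texCoord).rgb;
    float3 normal = normalize(tex2D(NormalSampler, texCoord).xyz * 2.0f - 1.0f);
    float  ao     = tex2D(SSAOSampler, texCoord).r;

    // Diffuse shading: the classic N dot L term for the directional light.
    float  nDotL   = saturate(dot(normal, -LightDirection));
    float3 diffuse = LightColor * nDotL;

    // AO is a 0..1 multiplier that darkens the ambient term, not an added light.
    float3 lighting = AmbientColor * ao + diffuse;
    return float4(albedo * lighting, 1.0f);
}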

Also, I'm working on my item editor (almost done), where I can also test lighting for the items, and I'm working on the landscape too, where I haven't totally figured out how to make 3D terrain without those pesky floating rocks here and there... but almost there. I also have my masteries designed on paper, plus my skill set, I have a guy running around the terrain, and I'm working on procedural trees atm.

Lots to be done.. but getting there.

And as Shakespeare once said..

Have more than you show, speak less than you know.

Ok well, I managed to get it to work, but it's still kind of odd, and I don't fully understand how I made it work...

I have some fx files. One does the normal rendering to 3 RenderTarget2Ds, which I draw on the screen above with a SpriteBatch call on the render targets, and then another fx file does the lighting, taking as input the render targets from the first file. But the thing is that I had to render a quad to make it work???
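(For reference, the first pass writes the three render targets from a single pixel shader by returning a struct with several COLOR semantics; roughly something like this, with made-up names, not my exact code:)

// Sketch of a G-buffer pass writing to 3 render targets at once (MRT).
texture DiffuseTexture; sampler DiffuseSampler = sampler_state { Texture = <DiffuseTexture>; };

struct GBufferOutput
{
    float4 Color  : COLOR0; // albedo render target
    float4 Depth  : COLOR1; // linear depth render target
    float4 Normal : COLOR2; // normal render target
};

GBufferOutput GBufferPS(float2 uv     : TEXCOORD0,
                        float3 normal : TEXCOORD1,
                        float  depth  : TEXCOORD2)
{
    GBufferOutput output;
    output.Color  = tex2D(DiffuseSampler, uv);
    output.Depth  = float4(depth, 0.0f, 0.0f, 1.0f);    // depth computed in the vertex shader
    output.Normal = float4(normal * 0.5f + 0.5f, 1.0f); // pack [-1,1] into [0,1]
    return output;
}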



You always have to render some geometry on the GPU. If you render text or textures on screen, SpriteBatch internally renders a quad for you with a texture on top of it. And when you want to, for example, post-process an image, you want to apply some code (a shader) to all your pixels, so you render a quad that covers the entire screen, and in each pixel the pixel shader performs the calculations that post-process the image (the image you want to post-process should be a different texture, because you can't read the destination texture in a shader). The quad is just a medium to tell the pixel shader where to perform its calculations.
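A full-screen quad vertex shader is tiny; assuming the quad's vertices are already defined in clip space from (-1,-1) to (1,1), a sketch looks like this:

// Minimal full-screen quad vertex shader (quad assumed to be in clip space already).
float2 HalfPixel; // DX9/XNA half-pixel offset so texels line up with pixels

struct VSOutput
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

VSOutput FullScreenQuadVS(float3 position : POSITION0, float2 texCoord : TEXCOORD0)
{
    VSOutput output;
    output.Position = float4(position, 1.0f); // no world/view/projection transform needed
    output.TexCoord = texCoord + HalfPixel;
    return output;
}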
With deferred renderers, you have the scene preprocessed (the G-Buffer). You have the depth of each pixel, the normal at each pixel, and some other information. What you do in later stages is use this information and compose it with other information. For example, in the stage you mean, I guess, you calculate an ambient light or a directional light: lights that potentially affect all your pixels, so you render a quad that covers the whole screen, and in each pixel you read some scene information (from the G-Buffer) and combine it with some other information (the light information in this case) to produce the lighting result.
Light intensity is additive: if you want to apply another light, just render another quad (this time with the other light's information) and add the new result to the previous result (you set the blending states so this happens automatically).
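As a sketch of one of those light passes (a directional light reading the normal G-buffer; the names here are assumptions, not engine code): each light is one quad draw, and with additive blending set on the device the results pile up in the light buffer.

// One directional-light pass; additive blending accumulates it with previous lights.
texture NormalBuffer; sampler NormalSampler = sampler_state { Texture = <NormalBuffer>; };

float3 LightDirection; // normalized direction the light shines in
float3 LightColor;

float4 DirectionalLightPS(float2 texCoord : TEXCOORD0) : COLOR0
{
    float3 normal = normalize(tex2D(NormalSampler, texCoord).xyz * 2.0f - 1.0f);
    float  nDotL  = saturate(dot(normal, -LightDirection));
    return float4(LightColor * nDotL, 1.0f); // the blend state adds this to the light buffer
}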


Lunch!! When I’m back I continue.

Project page: < XNA FINAL Engine >

So you had a base color and just changed the gradient of that color? But you need to check if they are kind of occluded, right? So you need to take some samples around the world to test for occlusion, right?

I stored that info in the vertex data itself... for each quad I checked the eight neighbouring cubes that can create a concave edge and put that into the vertices (or in my case quads, since I used a geometry shader). That way I know where to put gradients without any screen-space pass. Or at least you could do it that way; I think what I actually did was render the texture coordinates to a buffer and apply the gradients in screen space for performance reasons.
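In shader terms (just a sketch of the idea, not my actual code; the names are placeholders), the per-vertex occlusion value is computed on the CPU or geometry shader from the neighbouring cubes, interpolated across the quad by the rasterizer, and the pixel shader simply darkens with it:

// Sketch of baked per-vertex AO applied in the pixel shader.
texture DiffuseTexture; sampler DiffuseSampler = sampler_state { Texture = <DiffuseTexture>; };

struct PSInput
{
    float2 TexCoord  : TEXCOORD0;
    float  Occlusion : TEXCOORD1; // 0 = fully occluded corner, 1 = open
};

float4 VertexAOPS(PSInput input) : COLOR0
{
    float4 albedo = tex2D(DiffuseSampler, input.TexCoord);
    // The interpolated occlusion gives the smooth gradient toward concave
    // edges and corners, no screen-space pass needed.
    float ao = lerp(0.4f, 1.0f, input.Occlusion); // 0.4 is an arbitrary darkest value
    return float4(albedo.rgb * ao, albedo.a);
}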

result:
[attachment=9437:vertexao.png]

top down view of a hole that shows the gradients a little better:
[attachment=9438:vertexao2.png]

hmpf, no idea why the mouse pointer ended up in the screenshots when it's deactivated on screen...
uhh... lots of information, entering sponge mode now!!

jischneider

Ok, so what you are telling me is that of all those images I compose, some are helpers and some are actual additive filters on top of an image? So the depth buffer is a helper to make some calculations, and then all the others (color, light, AO, etc.) just fall one on top of the other to modify the previous composition result? If that is the case, then point taken, and I kind of understand now why I have to render a quad.

japro

So how do you know where the concave portion might be? The 8 surrounding cubes on top? (excluding the center, which would be the 9th) I still need to know how to apply a gradient in HLSL in my pixel shader, since this looks kind of cool. Now I would like to explore both avenues, your solution and the SSAO one, to see which is better in both looks and performance, and do a combined pass to see what comes out, and which one I can integrate long term with all my lights and shadows.

And if you have a single-cube hole, how would that look from up top? Will it have a little light in the center due to the gradients?

