
bearsomg

Members
  • Content count

    10
  • Joined

  • Last visited

Community Reputation

117 Neutral

About bearsomg

  • Rank
    Member
  1. Yep, I changed it to a 32-bit float and it worked perfectly. I'm going to go through and do some optimization now; with my computer as it is, I can run the algorithm with 255 samples and still retain 30 fps, but I think some optimization would still help. I'll probably start by going through MJP's post. Thanks for the help!
  2. Thanks for the help! I haven't explicitly done anything to disable blending, but I do not have a blend state set for the technique that generates the G-buffer. Does this suffice, or is there something else I should do?

     EDIT: I changed the technique to do this, and the clear color doesn't affect the output anymore:

     BlendState DisableBlending
     {
         BlendEnable[0] = FALSE;
         RenderTargetWriteMask[0] = 1 | 2 | 4 | 8; // write all four channels
     };

     technique10 SSAO_T
     {
         pass P0
         {
             SetBlendState(DisableBlending, float4(0.0f, 0.0f, 0.0f, 0.0f), 0xFFFFFFFF);
             SetVertexShader( CompileShader( vs_4_0, VS_Dif() ) );
             SetPixelShader( CompileShader( ps_4_0, PS_Dif() ) );
             SetGeometryShader( NULL );
         }
     }

     About the normals, I was a bit confused about what to do there. I saw some code samples just writing the normals out and some encoding them, so I changed mine to do the encoding just now.

     I've also changed the sampler to use point sampling. Does this automatically correct for the 0.5 offset, or do I need to do that myself?

     I also noticed and fixed a problem that seemed to have the biggest effect on the output: the textures I was rendering to were created with DXGI_FORMAT_R8G8B8A8_UNORM, and I changed that to DXGI_FORMAT_R16G16B16A16_FLOAT, which is what is needed to hold the G-buffer.

     This is my output right now; any idea where that height-map-like effect is coming from?
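The normal encode/decode round trip mentioned above can be sanity-checked outside HLSL. A minimal Python sketch (function names are mine, not from the shader):

```python
# Sketch of the usual normal encoding for unsigned render-target formats:
# scale/bias each component from [-1, 1] into [0, 1] on write, and invert
# on read (the SSAO shader side reads it back as n * 2 - 1).

def encode_normal(n):
    """Map a unit normal's components from [-1, 1] to [0, 1] for storage."""
    return tuple(c * 0.5 + 0.5 for c in n)

def decode_normal(stored):
    """Invert the encoding: map stored [0, 1] values back to [-1, 1]."""
    return tuple(c * 2.0 - 1.0 for c in stored)

n = (0.0, 0.6, -0.8)
round_tripped = decode_normal(encode_normal(n))
```

Note that with a DXGI_FORMAT_R16G16B16A16_FLOAT target the bias isn't strictly required, since a float format can store negative components directly; the encode mainly keeps the G-buffer layout compatible with UNORM formats.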
  3. Alright, so I've got my UV coordinates for the full-screen quad (dividing the SV_Position variable by the screen size), and am able to get the algorithm pretty much working. I switched my map-generation shader over to render to multiple render targets: one each for position, normals, and depth. My problem now is that when I calculate the AO factor, the screen seems to be split into 4 quadrants, and the render fades almost completely to black the further you go down the screen. I'll post screenshots once I get back to my computer.

     EDIT: I just did some more testing. It appears that the resulting SSAO map changes in strange ways depending on what color I use to clear the position/normal maps before rendering to them (green was the clear color I was using when I saw the problem described above). It looks like clearing with black is the right way to go. Now I'm noticing a lot of linear artifacting. Is there really any way to fix this short of fully UV-unwrapping the model? I attached screenshots below.
Here is my shader to generate the maps:

matrix MatrixPalette[255];
matrix worldMatrix;
matrix viewMatrix;
matrix projMatrix; // added: used in vs_SSAO below but missing from the original excerpt

//Vertex Input
struct VS_INPUT_SKIN
{
    float4 position  : POSITION;
    float3 normal    : NORMAL;
    float2 tex0      : TEXCOORD;
    float3 boneLinks : BONELINKS;
};

struct SSAO_PS_INPUT
{
    float4 pos    : SV_POSITION;
    float3 actPos : TEXCOORD0;
    float3 normal : TEXCOORD1;
    float  depth  : TEXCOORD2;
};

struct SSAO_PS_OUTPUT
{
    float4 posMap    : SV_Target0;
    float4 normalMap : SV_Target1;
    float4 depthMap  : SV_Target2;
};

float4 skinVert(float4 vert, float fact, matrix bone1, matrix bone2)
{
    float4 p = float4(0.0f, 0.0f, 0.0f, 1.0f);
    //vertex skinning
    float bone1Weight = fact;
    float bone2Weight = 1.0f - bone1Weight;
    p += bone1Weight * mul(vert, bone1);
    p += bone2Weight * mul(vert, bone2);
    p.w = 1.0f;
    return p;
}

float3 skinNorm(float3 vert, float fact, matrix bone1, matrix bone2)
{
    float3 norm = float3(0.0f, 0.0f, 0.0f); // fixed: was initialized from a float4
    float bone1Weight = fact;
    float bone2Weight = 1.0f - bone1Weight;
    norm += bone1Weight * mul(vert, bone1);
    norm += bone2Weight * mul(vert, bone2);
    norm = normalize(norm);
    return norm;
}

SSAO_PS_INPUT vs_SSAO(VS_INPUT_SKIN IN)
{
    SSAO_PS_INPUT OUT;
    float4 skinnedPos = skinVert(IN.position, IN.boneLinks[2], MatrixPalette[IN.boneLinks[0]], MatrixPalette[IN.boneLinks[1]]);
    float3 skinnedNormal = skinNorm(IN.normal, IN.boneLinks[2], MatrixPalette[IN.boneLinks[0]], MatrixPalette[IN.boneLinks[1]]);
    float4 worldPos = mul(skinnedPos, worldMatrix);
    OUT.pos = mul(worldPos, viewMatrix);
    OUT.pos = mul(OUT.pos, projMatrix);
    OUT.actPos = mul(worldPos, viewMatrix);
    OUT.normal = mul(skinnedNormal, worldMatrix);
    OUT.normal = mul(OUT.normal, viewMatrix);
    OUT.depth = normalize(mul(worldPos, viewMatrix).z);
    return OUT;
}

SSAO_PS_OUTPUT ps_SSAO(SSAO_PS_INPUT IN) // struct members already carry SV_Target0..2
{
    SSAO_PS_OUTPUT OUT;
    OUT.posMap = float4(IN.actPos, 1);
    OUT.normalMap = float4(IN.normal, 1);
    OUT.depthMap = float4(IN.depth, IN.depth, IN.depth, 1);
    return OUT;
}

technique10 SSAO_T
{
    pass P0
    {
        SetVertexShader( CompileShader( vs_4_0, vs_SSAO() ) );
        SetPixelShader( CompileShader( ps_4_0, ps_SSAO() ) );
        SetGeometryShader( NULL );
    }
}

And here is my shader to generate the AO map:

matrix MatrixPalette[255];
matrix worldMatrix;
matrix viewMatrix;
matrix projMatrix;
matrix lightViewMatrix;
float2 texelSize;

Texture2D posMap;
Texture2D normalMap;
Texture2D depthMap;
Texture2D randomTexture;

SamplerState DifferredSampler
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Clamp;
    AddressV = Clamp;
};

struct VS_OUTPUT
{
    float4 position : SV_POSITION;
    float3 normal   : NORMAL;
};

float3 getPosition(in float2 uv)
{
    return posMap.Sample(DifferredSampler, uv).xyz;
}

float3 getNormal(in float2 uv)
{
    return normalize(normalMap.Sample(DifferredSampler, uv).xyz * 2.0f - 1.0f);
}

float getDepth(in float2 uv)
{
    return depthMap.Sample(DifferredSampler, uv).w; // (ps_SSAO above writes depth to .rgb and 1 to .a)
}

float3 getRandom(in float2 uv)
{
    return randomTexture.Sample(DifferredSampler, texelSize*uv/float2(64,64)).xyz * 2.0f - 1.0f;
}

float g_sample_rad = 3.0f;
float g_intensity = 3.0f;
float g_scale = 1.0f;
float g_bias = 0.001f;

float doAmbientOcclusion(in float2 tcoord, in float2 uv, in float3 p, in float3 cnorm)
{
    float3 diff = getPosition(tcoord + uv) - p;
    const float3 v = normalize(diff);
    const float d = length(diff)*g_scale;
    return max(0.0, dot(cnorm, v) - g_bias)*(1.0/(1.0+d))*g_intensity;
}

float4 getOcclusion(float2 uv)
{
    float4 o;
    o.rgb = 1.0f;
    o.a = 1.0f;
    const float2 vec[4] = { float2(1,0), float2(-1,0),
                            float2(0,1), float2(0,-1) };
    float3 p = getPosition(uv);
    float3 n = getNormal(uv);
    float2 rand = getRandom(uv).xy; // getRandom returns a float3; take xy
    float ao = 0.0f;
    float rad = g_sample_rad/p.z;

    int iterations = 4;
    for (int j = 0; j < iterations; ++j)
    {
        float2 coord1 = reflect(vec[j], rand)*rad;
        float2 coord2 = float2(coord1.x*0.707 - coord1.y*0.707,
                               coord1.x*0.707 + coord1.y*0.707);
        ao += doAmbientOcclusion(uv, coord1*0.25, p, n);
        ao += doAmbientOcclusion(uv, coord2*0.5, p, n);
        ao += doAmbientOcclusion(uv, coord1*0.75, p, n);
        ao += doAmbientOcclusion(uv, coord2, p, n);
    }
    ao /= (float)iterations*4.0;
    o.rgb = ao;
    return o;
}

float4 ps_lighting(VS_OUTPUT IN) : SV_Target
{
    float4 ao = float4(1.0f, 1.0f, 1.0f, 1.0f);
    float2 uv = IN.position.xy;
    uv.x /= texelSize[0];
    uv.y /= texelSize[1];
    ao = getOcclusion(uv);
    return ao;
}

VS_OUTPUT vs_Skinning(VS_INPUT_SKIN IN)
{
    VS_OUTPUT OUT = (VS_OUTPUT)0;
    float4 p = float4(0.0f, 0.0f, 0.0f, 1.0f);
    float3 norm = float3(0.0f, 0.0f, 0.0f);
    //vertex skinning
    float bone1Weight = IN.boneLinks[2];
    float bone2Weight = 1.0f - bone1Weight;
    p += bone1Weight * mul(IN.position, MatrixPalette[IN.boneLinks[0]]);
    p += bone2Weight * mul(IN.position, MatrixPalette[IN.boneLinks[1]]);
    p.w = 1.0f;
    norm += bone1Weight * mul(IN.normal, MatrixPalette[IN.boneLinks[0]]);
    norm += bone2Weight * mul(IN.normal, MatrixPalette[IN.boneLinks[1]]);
    norm = normalize(norm);
    norm = mul(norm, worldMatrix);
    OUT.normal = normalize(mul(norm, lightViewMatrix));
    //move pos to worldviewproj space
    float4 worldPos = mul(p, worldMatrix);
    OUT.position = mul(worldPos, viewMatrix);
    OUT.position = mul(OUT.position, projMatrix);
    return OUT;
}

Position map with buffer cleared to black:
Normal map with buffer cleared to black:
Depth map with buffer cleared to black:
Resulting SSAO map with all buffers cleared to green:
Resulting SSAO map with all buffers cleared to white:
Resulting SSAO map with all buffers cleared to black:
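The per-sample falloff in doAmbientOcclusion above is easy to sanity-check outside the shader. A plain-Python transcription (defaults copied from the g_scale/g_bias/g_intensity constants; the positions used below are hypothetical view-space points, not data from the screenshots):

```python
import math

def occlusion_term(p, n, q, scale=1.0, bias=0.001, intensity=3.0):
    """Contribution of one sample q to the occlusion at center point p
    with surface normal n -- the same math as doAmbientOcclusion."""
    diff = [q[i] - p[i] for i in range(3)]       # sample pos - center pos
    length = math.sqrt(sum(c * c for c in diff))
    v = [c / length for c in diff]               # normalize(diff)
    d = length * scale
    ndotv = sum(n[i] * v[i] for i in range(3))
    # Samples behind the tangent plane (ndotv <= bias) contribute nothing;
    # nearby samples above it contribute most, with a 1/(1+d) falloff.
    return max(0.0, ndotv - bias) * (1.0 / (1.0 + d)) * intensity
```

This makes the behavior of the kernel explicit: a sample one unit "above" the surface along the normal occludes strongly, while a sample behind the surface contributes exactly zero.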
  4. I've been working on adding SSAO support to my Direct3D10 program, but I'm a bit confused when it comes to using the normal and depth maps to build the occlusion buffer, which is then blended with the scene. From my understanding, this is the process:

     (Pass 1): Generate the normal and depth maps (I use one pass and put the normal in RGB and the depth in A)
     (Pass 2): Generate the AO map using the view-space normal/depth map
     (Pass 3): Render the actual scene using the occlusion factor from the AO map generated in pass 2

     I'm confused when it comes to pass 2. I'm attempting to follow the tutorial here: http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/a-simple-and-practical-approach-to-ssao-r2753 but I got confused with how to implement the shader. Namely, where does the float2 uv parameter in the pixel shader come from, and how is it calculated?

     This is my shader for creating the normal/depth map:

     //Vertex Input
     struct VS_INPUT_SKIN
     {
         float4 position  : POSITION;
         float3 normal    : NORMAL;
         float2 tex0      : TEXCOORD;
         float3 boneLinks : BONELINKS;
     };

     struct SSAO_PS_INPUT
     {
         float4 pos    : SV_POSITION;
         float3 normal : TEXCOORD0;
         float  depth  : TEXCOORD1;
     };

     SSAO_PS_INPUT vs_SSAO(VS_INPUT_SKIN IN)
     {
         SSAO_PS_INPUT OUT;
         float4 skinnedPos = skinVert(IN.position, IN.boneLinks[2], MatrixPalette[IN.boneLinks[0]], MatrixPalette[IN.boneLinks[1]]);
         float3 skinnedNormal = skinNorm(IN.normal, IN.boneLinks[2], MatrixPalette[IN.boneLinks[0]], MatrixPalette[IN.boneLinks[1]]);
         float4 worldPos = mul(skinnedPos, worldMatrix);
         OUT.pos = mul(worldPos, viewMatrix);
         OUT.pos = mul(OUT.pos, projMatrix);
         OUT.normal = mul(skinnedNormal, worldMatrix);
         OUT.normal = mul(OUT.normal, viewMatrix);
         OUT.normal = normalize(OUT.normal);
         OUT.depth = mul(worldPos, viewMatrix).z;
         return OUT;
     }

     float4 ps_SSAO(SSAO_PS_INPUT IN) : SV_Target
     {
         return float4(IN.normal, IN.depth);
     }

     technique10 SSAO_T
     {
         pass P0
         {
             SetVertexShader( CompileShader( vs_4_0, vs_SSAO() ) );
             SetPixelShader( CompileShader( ps_4_0, ps_SSAO() ) );
             SetGeometryShader( NULL );
         }
     }

     Basically, I'm confused as to how to execute pass 2 using my generated map.
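On the uv question: in D3D10, the SV_POSITION input to a pixel shader arrives in pixel coordinates sampled at pixel centers, so the texture coordinates for the full-screen pass fall out of a simple division by the render-target size. A sketch of the arithmetic (the 800x600 size is illustrative):

```python
def uv_from_sv_position(sv_pos_x, sv_pos_y, screen_w, screen_h):
    # D3D10 hands the pixel shader SV_POSITION at pixel centers: pixel
    # (i, j) arrives as (i + 0.5, j + 0.5). Dividing by the target
    # dimensions therefore lands exactly on texel centers of a same-sized
    # G-buffer, so no extra half-texel correction is needed (unlike the
    # D3D9 VPOS situation, which did require a 0.5 offset).
    return (sv_pos_x / screen_w, sv_pos_y / screen_h)

# Top-left pixel of an 800x600 target:
uv = uv_from_sv_position(0.5, 0.5, 800, 600)
```

This is exactly what the ps_lighting shader in post 3 does with its `uv.x /= texelSize[0]` lines, with texelSize holding the screen dimensions.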
  5. [quote name='eppo' timestamp='1353494615' post='5002887'] Simplest way is to not do the shadow check when an object is facing away from the light source. e.g.: shadowing = (dot(faceNormal, lightTangent) > 0.0)?1.0:shadowCalc(); [/quote] How is lightTangent calculated? I have the light direction and position.
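Since eppo's snippet leaves lightTangent undefined, one plausible reading is that it is the normalized direction from the surface point toward the light (the light's direction vector for a directional light, or normalize(lightPos - p) for a point light). Under that assumption, the snippet amounts to gating the shadow-map lookup on a facing test. An illustrative Python transcription (names and the 0.0-means-shadowed convention are mine):

```python
def shadow_factor(face_normal, to_light, shadow_calc):
    # Assumption: "to_light" stands in for eppo's "lightTangent" -- the
    # normalized direction from the surface toward the light. A face
    # pointing away from the light receives no direct light regardless
    # of occluders, so it can be marked fully shadowed without paying
    # for the shadow-map depth comparison at all.
    n_dot_l = sum(a * b for a, b in zip(face_normal, to_light))
    if n_dot_l <= 0.0:
        return 0.0           # back-facing: fully shadowed, skip lookup
    return shadow_calc()     # front-facing: do the real depth compare
```

Whether 1.0 in eppo's version encodes "lit" or "shadowed" depends on how the result is applied in the lighting equation; the structure of the test is the same either way.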
  6. Hello, I have recently started working on implementing shadows in my rendering engine using shadow mapping with Direct3D10 and HLSL. So far the algorithm is working nicely, except for one small problem: when I move the camera behind an object that is lit on one side by a light source, the shadowed areas bleed through onto the other side of the model. For example, a character model has its arm crossed in front of its chest, and when you move behind the model you can see the arm's shadow on the model's back. I'm assuming this is because the shader is just testing depth values to see if the pixel is behind an occluder, and the back of the model is determined to be behind an occluder. So, my question is: what would be the best and easiest way to fix this? Thanks in advance!
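The depth-only test described above can be sketched in a few lines to show why the back of the model fails it. Illustrative Python with made-up depths measured from the light (the bias value is a typical anti-acne constant, not from the original post):

```python
def depth_test_lit(frag_depth_from_light, occluder_depth, bias=0.005):
    # A fragment passes (is lit) only if it is no farther from the light
    # than the nearest occluder recorded in the shadow map, within a
    # small bias that suppresses self-shadowing acne.
    return frag_depth_from_light - bias <= occluder_depth

# Hypothetical depths along one light ray through the crossed arm:
arm_depth = 0.40     # nearest occluder stored in the shadow map
chest_depth = 0.55   # behind the arm: correctly fails, correctly shadowed
back_depth = 0.70    # also behind the arm: fails too, even though the
                     # light never reaches it -- the bleed-through symptom
```

The back of the model is "behind an occluder" by this metric just as the chest is, which is why the shadow shows up there; ruling out faces that point away from the light before the depth test (as in the reply quoted in post 5) removes the artifact.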
  7. Hello, Could someone explain to me how I could implement SSAO in my Direct3D 10 program? Would it be possible to do all the calculations in one pass? I somewhat understand how the SSAO technique works, but I'm completely confused when it comes to implementing it. Thanks
  8. This probably seems like a generic question, but could somebody tell me why this code will not display these points? I'm sure there's a simple solution that I've missed. I attached a zip containing the program's source file, header file, and a csv that contains all the vertex data the program is trying to load. The columns go x, y, and z respectively. The MdlLoader class is a class that I wrote to load the data from the model file. I know for a fact that it is loading correctly because as you can see in the code, the program is correctly outputting another csv with the same vertex data in it. This is the one I included in the ZIP. Thank you in advance! EDIT: Sorry, apparently the ZIP was corrupted when uploading to here. Please use this one: [url="http://www.justinman.net/dx.zip"]www.justinman.net/dx.zip[/url]
  9. Here's my issue: my program has just a back wall and a floor. The user is able to move the virtual camera around and zoom in and out. I need the floor to appear completely level, with the camera only able to see the front of it, no matter what the Y translation coordinate or FOV is. What would be the correct way to calculate the camera's Y rotation from the Y translation coordinate and the FOV? Through experimentation I was able to form an equation that calculates the rotation from the Y translation coordinate, but obviously it does not work when the user zooms in or out (changes the FOV). I am using gluPerspective, and I would prefer not to change this.
  10. Hello. I am experimenting with a library that returns a 4x3 matrix representing the position of the camera. How would I get the values for pos x, pos y, pos z, rot x, rot y, and rot z from this matrix? Thanks in advance! EDIT: I figured out that I cannot directly convert this type of matrix because it is not a standard 4x3 matrix; the library has its own stuff in there. The library can also create an OpenGL projection matrix (it looks like an array of 16 values). How could I get the pos xyz and rot xyz from that matrix? The library is ARToolkit, to help clarify.
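For the 16-value case, assuming the array is actually a rigid modelview-style transform in OpenGL's column-major layout (a true projection matrix does not carry a recoverable camera pose), position and one possible set of Euler angles can be extracted as below. Illustrative Python; the function name is mine, and the XYZ (roll-pitch-yaw) convention shown is only one of several:

```python
import math

def decompose(m):
    """m: 16 floats in OpenGL column-major order, assumed to be a pure
    rotation + translation (no scale or shear)."""
    # Translation sits in the last column of a column-major matrix.
    pos = (m[12], m[13], m[14])
    # Rebuild the upper-left 3x3 rotation as r[row][col].
    r = [[m[col * 4 + row] for col in range(3)] for row in range(3)]
    # Euler extraction for the R = Rz * Ry * Rx convention; the asin
    # argument is clamped to guard against floating-point drift.
    rot_y = math.asin(max(-1.0, min(1.0, -r[2][0])))
    rot_x = math.atan2(r[2][1], r[2][2])
    rot_z = math.atan2(r[1][0], r[0][0])
    return pos, (rot_x, rot_y, rot_z)
```

Near rot_y = ±90° (when |r[2][0]| approaches 1) the x and z angles become degenerate (gimbal lock), so consuming code needs a special case there; and if the matrix can carry scale, each rotation column must be normalized first.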