
simotix

Members
  • Content count: 506
  • Joined
  • Last visited

Community Reputation: 142 Neutral

About simotix

  • Rank: Advanced Member
  1. Can anyone provide any suggestions?
  2. Both of your image links are broken
  3. [quote name='Daniel E' timestamp='1300657779' post='4788361'] You said something about 'the view ray should be multiplied by the world matrix', so I thought that would be the most appropriate answer. Didn't really know what you were getting at, to be honest. [/quote] The way you and MJP calculate your eyeRay/eyeToPixel is considerably different; I was wondering why you do yours the way that you do (see the view-ray sketch after this list). [quote name='Daniel E' timestamp='1300582346' post='4788112'] here's my pointlight shader using MJPs method for reconstruction [code]
VS_OUTPUT_LIGHTPASS_INSTANCE vs_lightPass(VS_INPUT_INSTANCE In)
{
    In.Pos.xyz *= In.insPos.w;
    In.Pos.xyz += In.insPos.xyz;
    Out.vEyeRay = In.Pos.xyz - camPos.xyz;
}

PS_output ps_pointLight(in VS_OUTPUT_LIGHTPASS_INSTANCE In)
{
    const float3 eyeToPixel = normalize(In.vEyeRay.xyz);
}
[/code] [/quote] MJP's: [code]
VS_OUTPUT VS(VS_INPUT Input)
{
    float4 positionScaled = float4(Input.PositionOS * Scale, 1.0f);
    Output.PositionWS = mul(positionScaled, World).xyz;
    Output.ViewRay = Output.PositionWS - CameraPosWS;
    return Output;
}

PS_OUTPUT_RECONSTRUCT PSReconstructLinear(PS_INPUT Input)
{
    float3 viewRay = normalize(Input.ViewRay);
    float3 positionWS = CameraPosWS + viewRay * depth;
    return output;
}
[/code] [quote name='Daniel E' timestamp='1300657779' post='4788361'] So have you found your error? [/quote] Not yet, unfortunately.
  4. [quote name='Daniel E' timestamp='1300589357' post='4788132'] my pointlight positions are in world space [/quote] Aren't they typically in world space to begin with? [quote name='Daniel E' timestamp='1300589357' post='4788132'] i'm using cubes as light geometry btw [/quote] Why use cubes over spheres?
  5. [quote name='Daniel E' timestamp='1300582346' post='4788112'] here's my pointlight shader using MJPs method for reconstruction [/quote] What is "In.insPos.w"? The view ray should be multiplied by the world matrix.
  6. [quote name='MJP' timestamp='1300562986' post='4788026'] Here you go: [url="http://cid-538e432ea49f5bde.office.live.com/self.aspx/Public/PRTest2.zip"]http://cid-538e432ea...lic/PRTest2.zip[/url] [/quote] I took a look at this and our position reconstruction does look exactly the same, so I am thinking it has to do with the lighting. I noticed our lighting was considerably different, since I use a point light. Could someone check that I am calculating my lighting correctly (see the point-light sketch after this list)? I would be surprised if this were the issue, but without seeing an example of a point light being calculated with MJP's method I am not ruling anything out. [code]
float4 PixelShaderFunction(VertexShaderOutput input) : SV_Target
{
    float2 texCoord = input.PositionCS.xy * InvTextureSize;
    float3 viewRay = normalize(input.ViewRay);
    float depth = depthMap.Sample( sampPointClamp, texCoord ).x;
    float3 positionWS = CameraPosition + viewRay * depth;

    float4 normalData = normalMap.Sample( sampPointClamp, texCoord );
    float3 normal = 2.0f * normalData.xyz - 1.0f;

    float3 lightDir = normalize(positionWS - PointLightPosition);
    float d = length( PointLightPosition - positionWS );
    float nDl = dot(normal, lightDir);
    float atten = 1.0f - d / PointLightRadius;

    float4 color = (atten * float4(PointLightColor.rgb, 1.0f)) + (nDl * atten);
    return color;
}
[/code]
  7. [quote name='MJP' timestamp='1300562986' post='4788026'] So then shouldn't those values be different for your two methods? [/quote] I do the same operation to store the value both ways: [code] output.Depth.x = length(input.PositionVS); [/code] This is how I saw it explained in http://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/ I am taking a look at the source code you provided now; thank you for uploading it.
  8. [quote name='MJP' timestamp='1300522731' post='4787857'] I just quickly modified my old position reconstruction test demo to do the first method from my blog, and it all works. So I don't think there's anything wrong with the code I posted. [/quote] Any chance this is available online anywhere? [quote name='MJP' timestamp='1300522731' post='4787857'] What is "Depth Val" here? [/quote] Depth Val is just the value I read from the depth map. Traditional: float depthVal = depthMap.Sample( sampPointClamp, texCoord ).r; Yours: float viewDistance = depthMap.Sample( sampPointClamp, texCoord ).x;
  9. I am hoping someone can review my shader code, as it appears I may be doing something wrong. I noticed that the position I end up with in my point light shader is different between my working (traditional) version and MJP's version. I debugged the same pixel in PIX with both versions, and it appears that I may be reconstructing the value wrong, even though I followed MJP's documentation down to every detail provided. To confirm: when I use the traditional way of deferred rendering, my lighting is correct; when I use MJP's version, lighting is incorrect (the position is wrong in the point light shader). This is how I debugged the issue (see the reconstruction sketch after this list). In PIX I took a look at the "output.Depth.x" value for the traditional version and for MJP's. They both share the exact same VS, so the values will be the same. Here are the important variables (pixel 300, 400, screen tex coord 0.376, 0.668): [code]
input.VertexShaderOutput::TexCoord             ( 0.417, 0.585 )                 float2
input.VertexShaderOutput::PositionVS           ( -0.495, -0.500, 1.493, 1.000 ) float4
PixelShaderFunction.PixelShaderOutput::Depth   ( 1.929, 0.000, 0.000, 0.000 )   float4
[/code] Now, in my point light pixel shader for the traditional way (WORKING VERSION; by working, I mean a light map is generated and the light is in the correct position): [code]
texCoord  = 0.376, 0.667
Depth Val = 0.934
position  ( -0.495, 1.000, -0.507, 0.670 ) float4
[/code] In the point light pixel shader for MJP's version (BROKEN VERSION), these are the values I am getting: [code]
texCoord       = 0.376, 0.668 (SAME)
Depth Val      = 0.934 (SAME)
positionWS     ( -0.302, 2.240, -2.483 ) float3 (WAY DIFFERENT)
viewRay        ( -0.324, 0.793, -0.517 ) float3
CameraPosition ( 0.000, 1.500, -2.000 )  float3
[/code] The problem, as you can see, is that positionWS is different from the working, traditional version. If you look at my code, you will see an exact 1:1 port of what is shown on MJP's blog. However, unless I am somehow seeing something wrong, there must be a problem with what he is presenting on his blog. Can someone please take a look at my point light shader code and let me know if they see anything wrong at all? (I compile with warnings as errors, so there will be no warnings in this shader code.)
TRADITIONAL (WORKING): [code]
Texture2D colorMap : register( t0 );
Texture2D normalMap : register( t1 );
Texture2D depthMap : register( t2 );

SamplerState sampLinearClamp : register( s0 );
SamplerState sampPointClamp : register( s1 );

cbuffer ModelViewInfo : register( b0 )
{
    matrix World;
    matrix View;
    matrix Projection;
}

cbuffer InvertViewCB : register( b1 )
{
    matrix InvertViewProjection;
}

cbuffer CameraPosition : register( b2 )
{
    float3 CameraPosition;
}

cbuffer PointLightInfo : register( b3 )
{
    float3 PointLightPosition;
    float PointLightRadius;
    float3 PointLightColor;
    float PointLightIntensity;
};

struct VertexShaderInput
{
    float3 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : SV_POSITION;
    float4 ScreenPosition : TEXCOORD0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(float4(input.Position, 1), World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.ScreenPosition = output.Position;
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : SV_Target
{
    input.ScreenPosition.xy /= input.ScreenPosition.w;
    float2 texCoord = 0.5f * (float2(input.ScreenPosition.x, -input.ScreenPosition.y) + 1);

    float depthVal = depthMap.Sample( sampPointClamp, texCoord ).r;

    float4 position;
    position.xy = input.ScreenPosition.xy;
    position.z = depthVal;
    position.w = 1.0f;
    position = mul(position, InvertViewProjection);
    position /= position.w;

    float4 normalData = normalMap.Sample( sampPointClamp, texCoord );
    float3 normal = 2.0f * normalData.xyz - 1.0f;

    float3 lightDir = normalize(position.xyz - PointLightPosition);
    float d = length( PointLightPosition - position.xyz );
    float nDl = dot(normal, lightDir);
    float atten = 1.0f - d / PointLightRadius;

    float4 color = (atten * float4(PointLightColor.rgb, 1.0f)) + (nDl * atten);
    return color;
}
[/code] MJP's (BROKEN): [code]
Texture2D colorMap : register( t0 );
Texture2D normalMap : register( t1 );
Texture2D depthMap : register( t2 );

SamplerState sampLinearClamp : register( s0 );
SamplerState sampPointClamp : register( s1 );

cbuffer ModelViewInfo : register( b0 )
{
    matrix World;
    matrix View;
    matrix Projection;
}

cbuffer InvertViewCB : register( b1 )
{
    matrix InvertViewProjection;
}

cbuffer CameraPosition : register( b2 )
{
    float3 CameraPosition;
}

cbuffer PointLightInfo : register( b3 )
{
    float3 PointLightPosition;
    float PointLightRadius;
    float3 PointLightColor;
    float PointLightIntensity;
};

cbuffer PSPerInstance : register( b4 )
{
    float NearClipDistance;
    float FarClipDistance;
    float2 InvTextureSize;
};

cbuffer SomeRandomCrap : register( b5 )
{
    int someRandomCrap;
};

struct VertexShaderInput
{
    float3 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 PositionCS : SV_POSITION;
    float3 ViewRay : TEXCOORD0;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 inputPosition = float4(input.Position, 1.0f);
    output.PositionCS = mul( inputPosition, World );
    float3 positionWS = output.PositionCS.xyz;
    output.ViewRay = positionWS - CameraPosition;
    output.PositionCS = mul( output.PositionCS, View );
    output.PositionCS = mul( output.PositionCS, Projection );
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : SV_Target
{
    float2 texCoord = input.PositionCS.xy * InvTextureSize;
    float viewDistance = depthMap.Sample( sampPointClamp, texCoord ).x;

    float3 viewRay = normalize(input.ViewRay);
    float3 positionWS = CameraPosition + viewRay * viewDistance;

    float4 normalData = normalMap.Sample( sampPointClamp, texCoord );
    float3 normal = 2.0f * normalData.xyz - 1.0f;

    float3 lightDir = normalize(positionWS - PointLightPosition);
    float d = length( PointLightPosition - positionWS );
    float nDl = dot(normal, lightDir);
    float atten = 1.0f - d / PointLightRadius;

    float4 color = (atten * float4(PointLightColor.rgb, 1.0f)) + (nDl * atten);
    return color;
}
[/code] If it matters, here is my gbuffer code: [code]
Texture2D txDiffuse : register( t0 );
SamplerState samLinear : register( s0 );

cbuffer ModelViewInfo : register( b0 )
{
    matrix World;
    matrix View;
    matrix Projection;
}

cbuffer SpecularInfo : register( b1 )
{
    float SpecularIntensity;
    float SpecularPower;
}

struct VertexShaderInput
{
    float4 Position : POSITION;
    float3 Normal : NORMAL;
    float2 TexCoord : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 PositionCS : SV_POSITION;
    float4 PositionVS : Position;
    float2 TexCoord : TEXCOORD0;
    float3 Normal : TEXCOORD1;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    input.Position.w = 1.0f;

    VertexShaderOutput output = (VertexShaderOutput)0;
    output.PositionCS = mul( input.Position, World );
    output.PositionCS = mul( output.PositionCS, View );
    output.PositionVS = output.PositionCS;
    output.PositionCS = mul( output.PositionCS, Projection );
    output.TexCoord = input.TexCoord;
    output.Normal = mul( input.Normal, (float3x3)World );
    output.Normal = normalize(output.Normal);
    return output;
}

struct PixelShaderOutput
{
    float4 Color : SV_Target0;
    float4 Normal : SV_Target1;
    float4 Depth : SV_Target2;
};

PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
{
    PixelShaderOutput output = (PixelShaderOutput)0;
    output.Color = txDiffuse.Sample( samLinear, input.TexCoord );
    output.Color.a = SpecularIntensity;
    output.Normal.rgb = 0.5f * (normalize(input.Normal) + 1.0f);
    output.Normal.a = SpecularPower;
    //output.Depth = input.PositionVS.z / 100.0f;
    output.Depth.x = length(input.PositionVS);
    return output;
};
[/code]
  10. Does anyone have any suggestions at all?
  11. Is my question or problem worded poorly?
  12. Is my question or problem worded poorly?
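A note on the view-ray question in items 3 and 5: Daniel E's instanced path (scale the unit volume by In.insPos.w, translate by In.insPos.xyz) and MJP's path (multiply the scaled position by a world matrix) are two ways of producing the same world-space vertex, and in both cases the ray is that vertex minus the camera position. The view-ray sketch below checks that on a made-up light volume in C++; the Vec3/matrix helpers, light radius, and light position are assumptions for illustration, and only the camera position comes from the PIX capture in item 9. [code]
// Minimal sketch: both view-ray constructions reduce to
// (world-space vertex) - (camera position). Radius and light position are made up.
#include <cstdio>

struct Vec3 { float x, y, z; };

// Row vector times row-major 4x4 with the translation in the last row,
// the usual D3D row-vector convention for mul(float4(p, 1), World).
static Vec3 transformPoint(Vec3 p, const float m[4][4])
{
    return {
        p.x * m[0][0] + p.y * m[1][0] + p.z * m[2][0] + m[3][0],
        p.x * m[0][1] + p.y * m[1][1] + p.z * m[2][1] + m[3][1],
        p.x * m[0][2] + p.y * m[1][2] + p.z * m[2][2] + m[3][2],
    };
}

int main()
{
    const Vec3  cameraPos   = { 0.0f, 1.5f, -2.0f };  // camera position from the PIX capture
    const Vec3  localVertex = { 1.0f, 1.0f,  1.0f };  // corner of a unit light volume
    const float radius      = 3.0f;                   // assumed light radius
    const Vec3  lightPos    = { 4.0f, 0.0f,  2.0f };  // assumed light position

    // Instanced path: In.Pos.xyz *= In.insPos.w; In.Pos.xyz += In.insPos.xyz;
    Vec3 worldA = { localVertex.x * radius + lightPos.x,
                    localVertex.y * radius + lightPos.y,
                    localVertex.z * radius + lightPos.z };

    // World-matrix path: the same uniform scale + translation packed into a matrix.
    const float world[4][4] = {
        { radius, 0.0f, 0.0f, 0.0f },
        { 0.0f, radius, 0.0f, 0.0f },
        { 0.0f, 0.0f, radius, 0.0f },
        { lightPos.x, lightPos.y, lightPos.z, 1.0f },
    };
    Vec3 worldB = transformPoint(localVertex, world);

    // Either way, the view ray is the world-space vertex minus the camera position.
    printf("rayA = (%.2f, %.2f, %.2f)\n", worldA.x - cameraPos.x, worldA.y - cameraPos.y, worldA.z - cameraPos.z);
    printf("rayB = (%.2f, %.2f, %.2f)\n", worldB.x - cameraPos.x, worldB.y - cameraPos.y, worldB.z - cameraPos.z);
    return 0;
}
[/code]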
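On the lighting question in item 6: nothing in the thread confirms that the shading term is the problem, but for comparison, a Lambert point light is conventionally written with the light vector pointing from the surface toward the light, the N·L term clamped to zero, and the falloff multiplied into that clamped term. The point-light sketch below is only a reference formulation, not the shader from the thread; the C++ vector helpers and all input values are made up. [code]
// A conventional point-light diffuse term for comparison with item 6.
// Not the code from the thread; all values below are illustrative only.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(Vec3 a)      { return std::sqrt(dot(a, a)); }
static Vec3  normalize(Vec3 a)   { float l = length(a); return { a.x / l, a.y / l, a.z / l }; }

int main()
{
    const Vec3  positionWS = { -0.5f, 0.0f, -0.5f };  // made-up surface point
    const Vec3  normal     = {  0.0f, 1.0f,  0.0f };  // made-up surface normal
    const Vec3  lightPos   = {  0.0f, 2.0f,  0.0f };  // made-up light position
    const Vec3  lightColor = {  1.0f, 1.0f,  1.0f };
    const float radius     = 5.0f;

    const Vec3  toLight = sub(lightPos, positionWS);          // surface -> light
    const float d       = length(toLight);
    const Vec3  L       = normalize(toLight);
    const float nDotL   = std::max(dot(normal, L), 0.0f);     // clamp back-facing surfaces
    const float atten   = std::max(1.0f - d / radius, 0.0f);  // linear falloff

    const Vec3 diffuse = { lightColor.x * nDotL * atten,
                           lightColor.y * nDotL * atten,
                           lightColor.z * nDotL * atten };
    printf("diffuse = (%.3f, %.3f, %.3f)\n", diffuse.x, diffuse.y, diffuse.z);
    return 0;
}
[/code] The shader in item 6 builds lightDir from the light toward the surface and adds the attenuation and N·L terms rather than multiplying them, which gives a different response curve than the reference above; whether that difference matters here is left open, since the same formula appears in the working traditional version.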
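On item 9, the two code paths should agree in principle: the traditional route unprojects the stored depth through the inverse view-projection, while MJP's route walks from the camera along the normalized view ray by the stored view-space distance, and both should land on the same world-space surface point. The reconstruction sketch below demonstrates the identity the second route relies on, namely that a rigid view transform preserves distance, so the length of the view-space position equals the world-space distance to the camera. The camera position is the one from the PIX capture; the surface point and the helpers are assumptions made for the sketch. [code]
// Sketch of the identity item 9 relies on: if the G-buffer stores the distance
// from the camera, then cameraPos + normalize(viewRay) * storedDistance must
// land back on the original world-space point. The test point is made up.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  add(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(Vec3 a)       { return std::sqrt(dot(a, a)); }
static Vec3  normalize(Vec3 a)    { return mul(a, 1.0f / length(a)); }

int main()
{
    const Vec3 cameraPos = { 0.0f, 1.5f, -2.0f };   // camera position from the PIX capture
    const Vec3 surfaceWS = { -0.5f, 1.0f, -0.5f };  // made-up G-buffer surface point

    // G-buffer pass: the distance to the camera is the same in view space and
    // world space (the view transform is a rotation + translation), so it can
    // be computed directly in world space here.
    const float storedDistance = length(sub(surfaceWS, cameraPos));

    // Light pass: the interpolated view ray for this pixel points from the
    // camera through the surface point (the light-volume fragment covering the
    // pixel lies on the same line), so normalizing it and scaling by the stored
    // distance reconstructs the surface position.
    const Vec3 viewRay       = sub(surfaceWS, cameraPos);
    const Vec3 reconstructed = add(cameraPos, mul(normalize(viewRay), storedDistance));

    printf("original      = (%.3f, %.3f, %.3f)\n", surfaceWS.x, surfaceWS.y, surfaceWS.z);
    printf("reconstructed = (%.3f, %.3f, %.3f)\n", reconstructed.x, reconstructed.y, reconstructed.z);
    return 0;
}
[/code] One detail that may be worth double-checking against the G-buffer shader in item 9: HLSL's length() of a float4 includes the w component. With PositionVS = (-0.495, -0.500, 1.493, 1.000), the four-component length is about 1.93, which matches the captured Depth of 1.929, while the length of the xyz part alone is about 1.65; if the intent was to store the view-space distance, length(input.PositionVS.xyz) would express that directly.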