
Member Since 06 Mar 2010
Offline Last Active Sep 12 2014 02:49 AM

Topics I've Started

Sorting/structuring renderables and cache locality

06 September 2014 - 09:52 AM

Hi, so I'm building a graphics engine for fun, and I've been thinking about how to approach renderable sorting for the different passes (I'm doing deferred rendering). I'd heard that you can make huge gains by sorting everything so that memory access is linear for each pass. The problem for me comes when I want to re-use the same renderables for several different passes during the same frame.

First of all I want to start off by saying that my knowledge of how the modern CPU cache actually works is very rudimentary, so I'm mostly going off assumptions here, please do correct me if I am wrong at any point. Also don't hesitate to ask for clarifications if I'm making no sense.

My current idea would be to keep a large, preallocated buffer where I store all the renderables (transforms, meshes, bundled with material and texture handles, flyweight pattern style) that got through culling each frame update.

Then I would keep different index/handle "lists" (not necessarily actual lists) -- one list per render pass -- with handles or direct indices into the renderable array.

This way I can access the same renderable from several different passes without having to copy or move the renderables around. I'd just pass in a pointer to the renderables array and, for each pass, access all the relevant renderables through the index lists. This essentially means I never sort the actual renderables array; I only sort the index lists, for things like depth or translucency (depending on the pass).

Now comes my question: would this be inefficient, since I'd essentially be accessing random indices in the big renderable array? The cache would have no good way to predict where I'd be accessing next, so I'd probably get tons of cache misses.

I just feel that despite this, it's a flexible and hopefully workable approach.

How do real, good engines deal with this sort of thing? Should I just not bother thinking about how the cache handles it?

Deferred rendering - Point light attenuation problem

08 December 2012 - 05:57 PM

*Update* I fixed it. It was not related to anything in my shader at all; my attenuation is perfectly fine. I had my depth stencil set up all wrong! I found the right solution here: https://developer.nvidia.com/sites/default/files/akamai/gamedev/docs/6800_Leagues_Deferred_Shading.pdf on page 15, if anyone is having similar problems! Sorry for necroing. Thanks for all the help, you people!

Hi, sorry for the vague title. I'm going by Catalin Zima's deferred renderer pretty heavily, so my setup is very similar. My problem is that my point light lighting doesn't fall off based on the attenuation like it should. It's a little hard to explain exactly what's wrong, so I frapsed it: http://youtu.be/1AY2xpmImgc

Upper left is color map, right of that is normal map and furthest to the right is depth map. The light map is the bottom left one, so look at that one.

Basically, the lights color things that are outside of the light radius. I strongly suspect something is wrong with the projected texture coordinates. I've double-checked that all the values I send into the shaders actually get assigned, and I've looked through everything in PIX and it seems to be fine. When I draw the sphere model that represents the point light, I multiply its world matrix by a (LightRadius, LightRadius, LightRadius) scale matrix.

I use additive blending for my lighting phase, and change the rasterizer state depending on whether the camera is inside the light volume or not. I use a separate render target for my depth; I haven't bothered trying to read the depth stencil as a texture, as I've seen some people do.

Here's how the shader looks:

Vertex shader:

cbuffer MatrixVertexBuffer
{
    float4x4 World;
    float4x4 View;
    float4x4 Projection;
};

struct VertexShaderInput
{
    float3 Position : POSITION0;
};

struct VertexShaderOutput
{
    float4 Position : SV_Position;
    float4 LightPosition : TEXCOORD0;
};

VertexShaderOutput LightVertexShader(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.Position = mul(float4(input.Position, 1.0f), World);
    output.Position = mul(output.Position, View);
    output.Position = mul(output.Position, Projection);
    output.LightPosition = output.Position;
    return output;
}

Pixel shader:

cbuffer LightBufferType
{
    float3 LightColor;
    float3 LightPosition;
    float LightRadius;
    float LightPower;
    float4 CameraPosition;
    float4 Padding;
};

cbuffer PixelMatrixBufferType
{
    float4x4 InvViewProjection;
};

struct VertexShaderOutput
{
    float4 Position : SV_Position;
    float4 LightPosition : TEXCOORD0;
};

Texture2D textures[3]; // Color, normal, depth
SamplerState pointSampler;

float2 postProjToScreen(float4 position)
{
    float2 screenPos = position.xy / position.w;
    return 0.5f * (float2(screenPos.x, -screenPos.y) + 1);
}

half4 LightPixelShader(VertexShaderOutput input) : SV_TARGET0
{
    float2 texCoord = postProjToScreen(input.LightPosition);
    float4 baseColor = textures[0].Sample(pointSampler, texCoord);

    // Cull early if the pixel is completely black - there's nothing to light here.
    // (Note: the test has to be <= 0, a sum of colors is never < 0.)
    if (baseColor.r + baseColor.g + baseColor.b <= 0.0f)
        return half4(0.0f, 0.0f, 0.0f, 0.0f);

    // Get normal data from the normal map and transform it back into [-1,1] range.
    float4 normalData = textures[1].Sample(pointSampler, texCoord);
    float3 normal = 2.0f * normalData.xyz - 1.0f;

    // Read depth.
    float depth = textures[2].Sample(pointSampler, texCoord).r;

    // Reconstruct the clip-space position. texCoord is in [0,1], clip space
    // is [-1,1] with y flipped, so remap both components.
    float4 position;
    position.x = texCoord.x * 2.0f - 1.0f;
    position.y = -(texCoord.y * 2.0f - 1.0f);
    position.z = depth;
    position.w = 1.0f;

    // Transform to world space.
    position = mul(position, InvViewProjection);
    position /= position.w;

    // Surface-to-light vector. This must use the cbuffer's world-space
    // LightPosition, not input.LightPosition (which is the clip-space position).
    float3 lightVector = LightPosition - position.xyz;
    float lightDistance = length(lightVector);

    // Linear attenuation based on distance (lightVector is a float3, so the
    // falloff has to use its length, not the vector itself).
    float attenuation = saturate(1.0f - lightDistance / (LightRadius / 2.0f));

    // Normalize the light vector.
    lightVector /= lightDistance;

    // Compute diffuse light.
    float NdL = max(0, dot(normal, lightVector));
    float3 diffuseLight = NdL * LightColor.rgb;

    // Reflection vector and camera-to-surface vector for the specular term.
    float3 reflectionVector = normalize(reflect(-lightVector, normal));
    float3 directionToCamera = normalize(CameraPosition.xyz - position.xyz);
    float specularLight = pow(saturate(dot(reflectionVector, directionToCamera)), 128.0f);

    // Take attenuation into account.
    return attenuation * half4(diffuseLight.rgb, specularLight);
}

Sorry if it's messy, I'm pretty loose about standards and commenting while experimenting.

Thank you for your time, it's really appreciated.

error C4430: missing type specifier

13 March 2010 - 01:00 AM

Hi, I'm working on compiling this HUGE project that came with the 3D Game Programming Book by Stefan Zerbst. I am honestly in over my head because I don't understand 1/10th of it, but I just want to get it compiled to see how it looks, and the only error I'm getting is this:

1>c:\documents and settings\<name>\desktop\zfx3d\chap_15\include\cgameentity.h(29) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int

Yeah, so it's just a header file. I'm going to paste the whole thing. I've marked row 29.

// FILE: CGameEntity.h
#ifndef CGameEntity_H
#define CGameEntity_H

#include <windows.h>
#include <stdio.h>
#include "zfx.h"
#include "CGamePortal.h"
#include "CGameLevel.h"

class CGameLevel;
class CGamePortal;

class CGameEntity {
public:
    CGameEntity(void);
    virtual ~CGameEntity(void);

    virtual HRESULT Render(ZFXRenderDevice*)=0;
    virtual void    Update(float)=0;
    virtual bool    TouchAndUse(const ZFXVector&)=0;
    virtual bool    TestCollision(const ZFXAabb&, ZFXPlane*)=0;
    virtual bool    TestCollision(const ZFXRay&, float, float*)=0;
    virtual bool    Load(FILE*);
    virtual ZFXAabb GetAabb(void) { return m_Aabb; }
    virtual IsOfType(ZFXENTITY e) { return (e==m_Type); }   // <---- row 29

protected:
    ZFXENTITY m_Type;
    ZFXAabb   m_Aabb;
    VERTEX   *m_pVerts;
    WORD     *m_pIndis;
    UINT      m_NumVerts;
    UINT      m_NumIndis;
    UINT      m_nSkin;
}; // class
typedef class CGameEntity *LPGAMEENTITY;
/*----------------------------------------------------------------*/

class CGameDoor : public CGameEntity {
public:
    CGameDoor(void);
    virtual ~CGameDoor(void);

    virtual HRESULT Render(ZFXRenderDevice*);
    virtual void    Update(float);
    virtual bool    Load(FILE*);
    virtual bool    TouchAndUse(const ZFXVector&);
    virtual bool    TestCollision(const ZFXAabb&, ZFXPlane*);
    virtual bool    TestCollision(const ZFXRay&, float, float*);
    virtual bool    IsActive(void) { return m_bActive; }
    virtual bool    ConnectToPortals(CGameLevel*);

private:
    ZFXVector    m_vcT;
    ZFXAXIS      m_Axis;
    float        m_fSign;
    float        m_fTime;
    float        m_fDist;
    float        m_fPause;
    bool         m_bActive;
    bool         m_bOpening;
    bool         m_bPausing;
    UINT         m_Portal[2];
    CGamePortal* m_pPortal_A;
    CGamePortal* m_pPortal_B;

    bool LoadMesh(FILE *pFile);
}; // class
typedef class CGameDoor *LPGAMEDOOR;
/*----------------------------------------------------------------*/
#endif

Thanks in advance.

Visual Studio 2003?

06 March 2010 - 04:17 AM

Hi. I'm trying to learn 3D programming with this book that was released circa 2002, and the code doesn't compile in a modern compiler (say, Visual Studio 2005 or 2008). And I can't find VS2003 anywhere? Microsoft doesn't seem to offer it anymore, and I don't want to turn to pirate sites, out of principle.

Also, I'm working with .dsw files -- does anyone know if there are plugins for any compilers other than the Microsoft ones that can handle those?

Also, I'm self-taught, so please don't bash me for not knowing the terminology. :P Thanks in advance!

P.S. If it helps, it's the 3D Game Engine Programming book by Stefan Zerbst, if anyone has worked with it before.

Edit: OKAY, thanks a lot! I solved the problem. As usual, it was just me being a total noob! :) Turns out I was using the wrong version of VC++ 2k8; trying to find the MFC package now.

[Edited by - Lemmi on March 6, 2010 12:32:56 PM]