
Member Since 19 Oct 2009

Topics I've Started

Deferred Point Lights position error [SOLVED]

26 July 2014 - 04:21 PM



Another topic from me :)


I've just discovered that the position reconstruction is not completely correct in my deferred renderer. Here are the steps I'm taking:

- render gbuffer -> albedo, normals, viewspace depth (32 bit float texture)

- for each point light

    - render a sphere geometry (which is a little bit larger than the radius of the light)

    - the cull mode is always CW, so I'm drawing only the back faces

    - position reconstruction is done in the fragment shader by: eyePosition + eyeRay * depth

    - eyeRay is computed in the vertex shader using an eye correction (described here)


The problem comes when the camera intersects the sphere geometry. The reconstructed position inside the sphere is okay, but at the edge I get something weird. This means the attenuation calculation is wrong, and objects will be lit outside of the sphere.


Here is the shader:

// in the vertex shader

varying     vec3    EyeRay;
varying     vec4    ScreenPos;  // clip-space position, used in the fragment shader

uniform     vec3    eyePosition;
uniform     float   farPlane;
uniform     vec3    cameraFront;

// worldPos is the world-space position of the sphere vertex
vec3 eyeRay = normalize(worldPos.xyz - eyePosition);

// scale the ray so that multiplying by the stored depth (ViewPos.z / farPlane)
// lands on the surface
float eyeCorrection = farPlane / dot(eyeRay, cameraFront);
EyeRay = eyeRay * eyeCorrection;

// in the fragment shader

// perspective divide, then map from [-1, 1] to [0, 1]
vec2 screenPos = ScreenPos.xy / ScreenPos.w;
vec2 Texcoord = 0.5 * (screenPos + 1.0);

// linear view-space depth, stored as ViewPos.z / farPlane
float depthVal = texture2D(textDepth, Texcoord).r;

// get world position
vec3 wPosition = eyePosition + EyeRay * depthVal;

gl_FragData[0] = vec4(wPosition, 1.0);

The result is attached; you can see the green area at the edge of the sphere. I don't know whether the stencil optimization would resolve this problem, but it should work without it.
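In case it helps others hitting the same artifact: a likely cause is that the eye correction is non-linear, so interpolating the corrected ray across the sphere's large triangles (and across the new vertices created by near-plane clipping when the camera is inside the sphere) distorts it at grazing edges. A possible fix, as an untested sketch: pass the raw world position to the fragment shader and apply the correction per pixel. WorldPos is an assumed new varying; the other names follow the snippet above.

```glsl
// Sketch of a possible fix (untested): interpolate only the raw world
// position and evaluate the non-linear eye correction per fragment.

varying vec3 WorldPos;      // sphere vertex position, set in the vertex shader
varying vec4 ScreenPos;

uniform sampler2D textDepth;
uniform vec3  eyePosition;
uniform float farPlane;
uniform vec3  cameraFront;

void main()
{
    vec2 screenPos = ScreenPos.xy / ScreenPos.w;
    vec2 Texcoord  = 0.5 * (screenPos + 1.0);
    float depthVal = texture2D(textDepth, Texcoord).r;

    // same correction as before, but per pixel, so interpolating a
    // non-linear quantity can no longer distort the ray
    vec3  eyeRay        = normalize(WorldPos - eyePosition);
    float eyeCorrection = farPlane / dot(eyeRay, cameraFront);

    vec3 wPosition = eyePosition + eyeRay * eyeCorrection * depthVal;
    gl_FragData[0] = vec4(wPosition, 1.0);
}
```

The cost is a normalize and a divide per fragment instead of per vertex, which is usually acceptable for light volumes.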

[CSM] Cascaded Shadow Maps split selection

25 July 2014 - 11:14 AM



I'm working on CSM and I don't know which approach to choose. I'm using a geometry prepass which gives me a depth map from the camera's view, so I'm computing the shadows in a separate fullscreen pass (which means I can't use clip planes as a solution).



Option 1:

- create [numSplits] render targets, render each shadow map into its own buffer

- switch to shadow calculation pass

- bind every texture to the shader

- in the shader, use dynamic branching, like

if (dist < split1) { texture2D(shadowmap1, texcoord); ... }



Option 2:

- create only one render target and draw the shadow maps as a texture atlas (top-left is the first split, top-right is the second, etc...)

- switch to shadow calculation pass

- bind that single texture

- in the shader, use dynamic branching to compute the texcoords where the shadow map should be sampled
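A rough sketch of the atlas variant, assuming 4 splits packed into a 2x2 atlas. The split distances, uniform names, and the per-split shadow coordinates are all illustrative, and the quadrant layout depends on your UV origin; the point is that the split choice becomes a texcoord remap instead of a sampler choice.

```glsl
// Sketch: pick an atlas quadrant from the view-space distance and remap
// the texcoords, avoiding per-sampler branching.

uniform sampler2D shadowAtlas;
uniform vec4 splitDistances;      // (split1, split2, split3, split4)

float sampleShadow(float dist, vec4 shadowCoord[4])
{
    // split index = number of split boundaries closer than this fragment;
    // comparisons instead of an if/else chain keep the flow flat on SM2
    float index = dot(vec4(1.0),
                      vec4(greaterThan(vec4(dist), splitDistances)));
    index = min(index, 3.0);

    // quadrant offsets for splits 0..3: (0,0), (0.5,0), (0,0.5), (0.5,0.5)
    vec2 offset = vec2(mod(index, 2.0), floor(index * 0.5)) * 0.5;

    // NOTE: dynamic indexing of a varying array may need to be unrolled
    // on SM2-class hardware
    int i = int(index);
    vec2 uv = shadowCoord[i].xy / shadowCoord[i].w;   // projective coords
    vec2 atlasUV = uv * 0.5 + offset;

    float storedDepth = texture2D(shadowAtlas, atlasUV).r;
    return (storedDepth < shadowCoord[i].z / shadowCoord[i].w) ? 0.0 : 1.0;
}
```

One extra caveat with atlases: PCF taps near a split's border can bleed into the neighbouring quadrant, so the UVs (or the atlas cells) need a small clamp/border margin.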


And here come my problems with both approaches. The target platform is OpenGL 2.0 (think SM2).



If I understand correctly, dynamic branching in a shader below SM3 is a "fake" solution: it computes every branch and selects the result afterwards. It won't be fast to compute shadows for every split and make the decision later. Especially since I'm using PCF, and in SM2 the instruction count is limited. :)



With 4 splits and 1024x1024 shadow maps, the atlas would be 2048x2048. And that's probably the best case... imagine 2048x2048 shadow maps, which would need a 4096x4096 texture.


However, the second solution still looks more viable. But I'm not sure about texture arrays in OpenGL 2.0: are they available?




Skill/Spell effects

16 June 2014 - 04:46 PM



Well, I've tried to find some similar topics, but I couldn't. Probably I just can't search :D This topic was my best hit.


So I've just finished my basic particle system: I have an emitter which has a direction, start/end values (like color, speed, size, ...) and a delay between emitting particles. However, I'm wondering how games create their own effects for different skills. I mean, I can't use a particle system for every spell.


I'm playing LoL and it has a lot of different skills. Take Katarina's abilities, for example. These are not particles. Are they using simple meshes (some quads) with animated textures and alpha blending? How should I handle these kinds of effects?


And another question: if I have to use meshes and/or particles, how should I manage the different abilities? Right now I have a base class (called Ability) and the different skills inherit from it. In these classes I can handle the different particle emitters and actions. Is this a "normal" design, or is there a better one?
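For what it's worth, the inheritance design described above might look like the following minimal C++ sketch. All names are illustrative, not from any real engine; each concrete ability owns whatever emitters or effect meshes it needs and drives them itself.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Placeholder for the particle system described above:
// direction, start/end color/speed/size, emit delay, ...
struct ParticleEmitter {
    float delay = 0.05f;
};

class Ability {
public:
    virtual ~Ability() = default;
    virtual void cast() = 0;            // spawn emitters / effect meshes
    virtual void update(float dt) = 0;  // advance the running effect
    std::size_t emitterCount() const { return emitters.size(); }
protected:
    std::vector<std::unique_ptr<ParticleEmitter>> emitters;
};

// A hypothetical skill that is not particles at all:
// a textured quad whose UVs scroll over time.
class BladeSpin : public Ability {
public:
    void cast() override {
        emitters.push_back(std::make_unique<ParticleEmitter>());
    }
    void update(float dt) override {
        uvOffset += dt;                 // scroll the animated texture
    }
    float uvOffset = 0.0f;
};
```

This keeps a mesh-based effect and a particle-based effect behind the same cast()/update() interface; a common alternative is a data-driven approach where one Ability class reads its emitters and meshes from a description file instead of subclassing.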


Sorry if I missed something or this is a duplicate topic.

Could you please give me some advice?

Deferred shading position reconstruction change

09 April 2014 - 01:27 AM



The position reconstruction is currently working but I want to change it.


I'm using linear depth, so I can easily get an "eye ray" and use it to reconstruct the position.

It works for directional lights, and for point lights drawn with a fullscreen quad + scissor test. However, I want to change the scissor-test-based point light pass to draw sphere geometry instead.


The main problem is that the directional light pass is still drawn with a fullscreen quad, and the two approaches need two different depth values.


Right now I'm storing the depth value as (ViewPos.z / FarPlane). It works perfectly when drawing a fullscreen quad in the following way:

- compute view rays to the 4 corners of the far plane

- in the lighting pass, fetch the appropriate ray and compute the world position:

vec3 wPosition = eyePosition + EyeRay * depthVal;


However, if I want to draw point lights with spheres, I cannot use the view rays to the far plane. I can only use the world position of each sphere vertex and compute a view ray from it. But then the depth required to reconstruct the world position is not ViewPos.z; it needs to be length(ViewPos).


I want to keep linear depth, so I don't want to reconstruct the position using the inverse view-projection matrix.
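One way to let both passes share a single depth format, sketched below under the assumption that an extra normalize per fragment is acceptable: store length(ViewPos) / farPlane in the G-buffer, and in every lighting pass normalize the interpolated ray before scaling by the stored distance. The names follow the earlier snippets; this is an untested sketch, not the current code.

```glsl
// --- G-buffer fragment shader: write distance-based depth
uniform float farPlane;
varying vec3 ViewPos;          // view-space position of the fragment

// distance from the eye instead of view-space z
float depthOut = length(ViewPos) / farPlane;

// --- lighting fragment shader: works for the quad AND the sphere pass
uniform vec3 eyePosition;
varying vec3 EyeRay;           // unnormalized ray toward the pixel

vec3 reconstruct(float depthVal)
{
    // normalizing per fragment makes the ray's interpolation error
    // irrelevant; depthVal * farPlane is the true eye-to-surface distance
    return eyePosition + normalize(EyeRay) * (depthVal * farPlane);
}
```

The trade-off is that the vertex-shader eye correction (and the cameraFront uniform) are no longer needed, at the cost of a normalize in the fragment shader.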


Why do I need geometry at all? For spot lights too, plus performance.

Get Swing control HWND (window handle) in c++

20 October 2013 - 12:33 PM



I've read a lot about this problem, but I haven't found the proper solution.


I'm working on a c++ game (and engine) which uses OpenGL as graphics API.

I've decided to create the editor in Java with Swing. I'm using JDK 1.7 x86 (but I'm on Win7 x64).


The problem is that I need the handle (HWND) of the "preview control" (a simple JPanel) in the C++ code (to initialize OpenGL).

I know how the native wrapper works; the problem is getting the HWND.


I've read about the AWT solution: get the AWT, get the graphics component, and get the window handle. It would be great for me, but it crashes and I don't know why.
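For reference, the standard JAWT sequence for this looks like the sketch below (the exported function name is illustrative; link against jawt.lib from the JDK). Two common causes of the crash described above: JAWT needs a heavyweight component, so calling it on a lightweight JPanel can fail, and the component must already be displayable. Embedding a heavyweight java.awt.Canvas inside the panel and passing that to the native side is the usual workaround.

```cpp
#include <jni.h>
#include <jawt.h>
#include <jawt_md.h>

// Hypothetical export: returns the HWND of a displayable heavyweight
// AWT component, or 0 on failure.
extern "C" JNIEXPORT jlong JNICALL
Java_PreviewPanel_getHwnd(JNIEnv* env, jobject component)
{
    JAWT awt;
    awt.version = JAWT_VERSION_1_4;
    if (JAWT_GetAWT(env, &awt) == JNI_FALSE)
        return 0;

    JAWT_DrawingSurface* ds = awt.GetDrawingSurface(env, component);
    if (ds == NULL)
        return 0;

    jlong hwnd = 0;
    if ((ds->Lock(ds) & JAWT_LOCK_ERROR) == 0)
    {
        JAWT_DrawingSurfaceInfo* dsi = ds->GetDrawingSurfaceInfo(ds);
        if (dsi != NULL)
        {
            JAWT_Win32DrawingSurfaceInfo* win =
                (JAWT_Win32DrawingSurfaceInfo*)dsi->platformInfo;
            hwnd = (jlong)win->hwnd;        // the handle we are after
            ds->FreeDrawingSurfaceInfo(dsi);
        }
        ds->Unlock(ds);
    }
    awt.FreeDrawingSurface(ds);
    return hwnd;
}
```

Every Get needs its matching Free/Unlock, as above, or the AWT lock state gets corrupted.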


Of course, it's important that when I send the editor to the artists, they can run it without any tricks (just install a JRE 1.7).


Does anyone have any ideas?