
csisy

Member Since 19 Oct 2009
Offline Last Active Nov 20 2014 06:08 AM

Topics I've Started

Scene Graph + Visibility Culling + Rendering

11 November 2014 - 04:49 AM

Hi,
 
I wanted to improve my rendering with visibility culling, so I've redesigned the rendering system.
Sorry for the long post; the real question will be short. :)
 
A Mesh contains information about its parent container (called Drawable, which holds the other components, like the animation controller), its buffer (VBO) and its material, so everything that is necessary for rendering.
In the previous system, when I added a new Drawable to the scene, I sorted the meshes by shader and material and used that sorted list for rendering.
 
However, the problem comes with the visibility culling. I wanted a simple uniform grid (I'm working on a top-down game) which makes the culling faster and easier: if a grid node is visible, add its meshes to the render queue. This means that I can't sort the meshes when I add them to the scene; I have to do it per frame.
 
So here's what I'm doing right now:
- clear render queue
- get visible nodes
- for all nodes
    - for all drawables in the node:
        - check if the meshes are already in the render queue
        - if not, add them to the proper part of the render queue
 
For the render queue, I'm using an array of std::set. The array is indexed by the shader (more precisely, by the shading type: diffuse, normal, etc.) and each set contains the meshes sorted by texture. It might be faster to use a std::vector and sort it afterwards, I don't know.
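
For comparison, here is a minimal sketch of the vector-and-sort variant (RenderItem, the 64-bit key layout and the shadingType/textureId parameters are just assumptions for illustration, not my actual classes):

#include <algorithm>
#include <cstdint>
#include <vector>

struct Mesh;                     // engine-side mesh (VBO + material), only used by pointer here

struct RenderItem
{
    std::uint64_t key;           // e.g. (shadingType << 32) | textureId
    Mesh*         mesh;
};

class RenderQueue
{
public:
    // clearing keeps the vector's capacity, so per-frame rebuilds stay cheap
    void clear() { items.clear(); }

    void add(Mesh* mesh, std::uint32_t shadingType, std::uint32_t textureId)
    {
        const std::uint64_t key = (std::uint64_t(shadingType) << 32) | textureId;
        items.push_back({ key, mesh });
    }

    // sort once per frame, after all visible meshes were added
    void sort()
    {
        std::sort(items.begin(), items.end(),
                  [](const RenderItem& a, const RenderItem& b) { return a.key < b.key; });
    }

    const std::vector<RenderItem>& sorted() const { return items; }

private:
    std::vector<RenderItem> items;
};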
 
Is there a better solution for the storage, or is the overhead of the clearing and inserting not too bad? How are you doing it?

Deferred Point Lights position error [SOLVED]

26 July 2014 - 04:21 PM

Hi,

 

Another topic from me. :)

 

I've just discovered that the position reconstruction is not completely correct in my deferred renderer. Here are the steps I'm doing:

- render gbuffer -> albedo, normals, viewspace depth (32 bit float texture)

- for each point light

    - render a sphere geometry (which is a little bit larger than the radius of the light)

    - the cull mode is always CW, so I'm drawing the backfaces only

    - position reconstruction is done in the fragment shader by: eyePosition + eyeRay * depth

    - eyeRay is computed in the vertex shader using an eye correction (described here)

 

The problem comes when the camera intersects the sphere geometry. The reconstructed position inside the sphere is okay, but at the edge I get something weird. This means the attenuation calculation is wrong, and objects will be lit outside of the sphere.

 

Here is the shader:

// in the vertex shader

varying     vec3    EyeRay;         // scaled eye ray, interpolated to the fragment shader

uniform     vec3    eyePosition;    // camera position in world space
uniform     float   farPlane;       // far plane distance
uniform     vec3    cameraFront;    // normalized camera forward vector

[...]

// ray from the eye through the sphere vertex, scaled so that its projection
// onto the camera forward vector equals farPlane
vec3 eyeRay = normalize(worldPos.xyz - eyePosition);
float eyeCorrection = farPlane / dot(eyeRay, cameraFront);
EyeRay = eyeRay * eyeCorrection;

// in the fragment shader

// project to NDC, then convert to [0, 1] texture coordinates
vec2 screenPos = ScreenPos.xy / ScreenPos.w;
vec2 Texcoord = 0.5 * (screenPos + 1.0);

// linear view-space depth (normalized by the far plane)
float depthVal = texture2D(textDepth, Texcoord).r;

// reconstruct world position from the interpolated, corrected eye ray
vec3 wPosition = eyePosition + EyeRay * depthVal;

gl_FragData[0] = vec4(wPosition, 1.0);

The result is attached. You can see the green area at the edge of the sphere. I don't know if the stencil optimization would resolve this problem, but it should work even without it.


[CSM] Cascaded Shadow Maps split selection

25 July 2014 - 11:14 AM

Hi,

 

I'm working on CSM and I don't know which way I should choose. I'm using a geometry prepass which gives me a depth map from the camera's view, so I'm computing the shadows in a separate fullscreen pass (which means I can't use the clip planes as a solution).

 

1)

- create [numSplits] render targets, render each shadow map into its own buffer

- switch to shadow calculation pass

- bind every texture to the shader

- in the shader, use dynamic branching, like

if (dist < split1) { texture2D(shadowmap1, texcoord); ... }

 

2)

- create only one render target and draw the shadow maps into it as a texture atlas (top-left is the first split, top-right is the second, etc...)

- switch to shadow calculation pass

- bind the only one texture

- in the shader, use dynamic branching which calculates the texcoords where the atlas should be sampled (see the sketch below).
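
Just to make the 2nd option concrete, here is a rough CPU-side sketch of the split selection and the atlas texcoord remapping (the 2x2 layout, the splitDistances array and the function names are only assumptions for illustration):

#include <cstddef>

struct Vec2 { float x, y; };

// pick the cascade index from the view-space distance of the pixel
std::size_t selectSplit(float viewDist, const float* splitDistances, std::size_t numSplits)
{
    for (std::size_t i = 0; i + 1 < numSplits; ++i)
        if (viewDist < splitDistances[i])
            return i;
    return numSplits - 1;
}

// offset/scale that remap a [0, 1] shadow-map texcoord into the 2x2 atlas
// (upload these per split, or compute the same thing in the shader):
// atlasTexcoord = shadowTexcoord * scale + offset
void atlasTransform(std::size_t split, Vec2& offset, Vec2& scale)
{
    scale.x = 0.5f;
    scale.y = 0.5f;
    offset.x = float(split % 2) * 0.5f;
    offset.y = float(split / 2) * 0.5f;
}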

 

And here come my problems with both ways. The target platform is OpenGL 2.0 (think SM2).

 

1)

If I understand correctly, dynamic branching in a shader below SM3 is a "fake" solution: it computes every branch and selects the result afterwards. It wouldn't be very fast to compute the shadows for every split and then make the decision. Especially since I'm using PCF, and under SM2 the instruction count is not unlimited. :)

 

2)

With 4 splits and 1024x1024 shadow maps, the atlas size would be 2048x2048, and that's probably the best case... imagine 2048x2048 shadow maps, which would need a 4096x4096 texture.

 

However, the 2nd solution still looks more viable. But I'm not sure about texture arrays in OpenGL 2: are they available?

 

Thanks,

csisy


Skill/Spell effects

16 June 2014 - 04:46 PM

Hi,

 

Well, I've tried to find some similar topics, but I couldn't. Probably I just can't search. :D This topic was my best hit.

 

So I've just finished my basic particle system: I have an emitter which has a direction, start/end values (like color, speed, size, ...) and a delay time between emitting particles. However, I'm wondering how computer games make their "own effects" for the different skills. I mean, I can't use a particle system for every spell.
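
Here is roughly what the emitter looks like as data (the field names are just placeholders, not exactly what I have):

struct ParticleEmitter
{
    // spawn direction and timing
    float direction[3];
    float emitDelay;                      // delay between two emitted particles
    float timeSinceLastEmit;

    // start/end values interpolated over a particle's lifetime
    float startColor[4], endColor[4];
    float startSize,     endSize;
    float startSpeed,    endSpeed;
    float particleLifetime;
};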

 

I'm playing LoL and it has a lot of different skills. For example, here are Katarina's abilities. These are not particles. Are they using simple meshes (some quads) with animated textures and alpha blending? How should I handle these kinds of effects?

 

And another question: if I have to use meshes and/or particles, how should I manage the different abilities? Right now I have a base class (called Ability) and the different skills inherit from it. In these subclasses I can handle the different particle emitters and actions. Is this a "normal" system, or is there a better one?
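
For example, the inheritance approach I described looks roughly like this (the class and method names are only illustrations, not my real code):

#include <vector>

struct ParticleEmitter { /* direction, start/end values, emit delay, ... */ };

class Ability
{
public:
    virtual ~Ability() {}

    virtual void cast() = 0;              // spawn emitters/meshes, start timers
    virtual void update(float dt) = 0;    // advance the effect
    virtual bool isFinished() const = 0;

protected:
    std::vector<ParticleEmitter> emitters; // effects owned by this ability
};

// one concrete skill; the overrides decide which emitters/meshes to use
class SpinAttackAbility : public Ability
{
public:
    void cast() { emitters.push_back(ParticleEmitter()); }
    void update(float /*dt*/) { /* would advance the emitters and remove the finished ones */ }
    bool isFinished() const { return emitters.empty(); }
};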

 

Sorry if I missed something or this is a duplicate topic.

Could you please give me some advice?


Deferred shading position reconstruction change

09 April 2014 - 01:27 AM

Hi,

 

The position reconstruction is currently working but I want to change it.

 

I'm using linear depth, so I can easily get an "eye ray" and use it to reconstruct the position.

It works for directional lights, and for point lights drawn as a fullscreen quad + scissor test. However, I want to change the scissor-test-based point lights to drawing sphere geometry.

 

The main problem is that the directional light pass is still drawn as a fullscreen quad, and the two different approaches need two different depth values.

 

Now I'm storing the depth value as (ViewPos.z / FarPlane). It works perfectly if I'm drawing a fullscreen quad in the following way:

- compute the view rays from the 4 corners of the far plane

- in the lighting pass simply get the necessary ray and compute world position:

vec3 wPosition = eyePosition + EyeRay * depthVal;

 

However, if I'd like to draw the point lights with spheres, I cannot use the view rays to the far plane. I can only use the world position of the sphere's vertices and compute a view ray from them. But then it's clear that the depth required to reconstruct the world position is not ViewPos.z; it needs to be length(ViewPos).
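
To make the difference explicit, here is a small CPU-side illustration of the two variants (GLM is used only for the vector math; this is just the equations, not my shader code):

#include <glm/glm.hpp>

// fullscreen-quad variant: the interpolated ray points to the far plane
// (its view-space z component equals FarPlane), and the stored depth is ViewPos.z / FarPlane
glm::vec3 reconstructFromQuad(const glm::vec3& eyePosition,
                              const glm::vec3& farPlaneRay,
                              float storedDepth)
{
    return eyePosition + farPlaneRay * storedDepth;
}

// sphere-geometry variant: only the direction from the eye through the vertex is known,
// so the stored depth has to be the view-space distance, length(ViewPos)
glm::vec3 reconstructFromSphere(const glm::vec3& eyePosition,
                                const glm::vec3& vertexWorldPos,
                                float storedDistance)
{
    glm::vec3 dir = glm::normalize(vertexWorldPos - eyePosition);
    return eyePosition + dir * storedDistance;
}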

 

I want to keep the linear depth, so I don't want to reconstruct the position using the inverse view-projection matrix.

 

Why do I need geometry? For spot lights too, and for performance.

