csisy

Member Since 19 Oct 2009
Offline Last Active Nov 20 2014 06:08 AM

Posts I've Made

In Topic: Scene Graph + Visibility Culling + Rendering

11 November 2014 - 05:09 PM

First of all, thanks for the replies. :)

 

@haegarr:

AFAIK std::vector won't release its allocation when I call clear() — the capacity is kept — so per-frame memory management shouldn't be a problem (std::set is node-based, though, so it does free its nodes on clear()). That said, implementing a simple "pool" class wouldn't be a hard task either.
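
Just to illustrate the point about clear() keeping the allocation, a minimal C++ sketch (the standard doesn't strictly promise this, but every mainstream implementation keeps the capacity, so per-frame reuse avoids reallocation):

#include <cassert>
#include <vector>

int main()
{
    std::vector<int> visibleObjects;
    visibleObjects.reserve(1024); // allocate once up front

    // ... fill the list during one frame ...
    for (int i = 0; i < 1000; ++i)
        visibleObjects.push_back(i);

    const std::size_t capacityBefore = visibleObjects.capacity();
    visibleObjects.clear(); // destroys the elements only

    // the buffer is kept, so next frame's push_backs reuse it
    assert(visibleObjects.capacity() == capacityBefore);
}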

 

I've already seen your solution (this message-based approach) somewhere; it looks pretty cool. :)

 

@mark:

1. Agreed, it could be an optimization.

 

2. There are already multiple structures, but they're split by shading type rather than per entity type. I'm not sure I'd like to bring hard-coded entity types into the engine, although for specific tasks it could be okay. Multithreading can be a pain in the ***, but with correct management it can speed things up. :)


In Topic: Deferred shading position reconstruction change

23 September 2014 - 04:45 AM

 

Thanks, I'll try it. :)

What does the rcp function actually do? Rounding, like floor / ceil in C?

 

rcp is the reciprocal (1 / x).

Note that in shaders this is often a fast approximation (faster than actually computing 1.0 / x, but less accurate).
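
(Where rcp isn't available, an exact stand-in is just the division itself; a one-line C++ sketch, losing only the hardware approximation's speed:)

// Exact stand-in for the rcp intrinsic: same result, without the
// fast-but-approximate hardware path.
inline float rcp(float x) { return 1.0f / x; }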

 

I've found that rcp is not part of standard GLSL, so I'm not using it. In this case, the equation is:

float eyeCorrection = farPlane / dot(eyeRay, -cameraFront);

Shouldn't we guard against division by zero? If eyeRay is perpendicular to cameraFront, the dot product is zero.

 

However, this probably isn't a problem in practice: the eye ray through any shaded fragment lies inside the view frustum, and as long as the field of view is below 180° such a ray can never be perpendicular to cameraFront. (I'm also culling front faces, so only back faces of the light volume are rendered at all.)
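
If you want a guard anyway, clamping the denominator's magnitude is cheap. A minimal C++ sketch of the same math (vec3 and dot are stand-ins for the shader types; the epsilon is an arbitrary choice):

#include <cmath>

struct vec3 { float x, y, z; };

float dot(const vec3& a, const vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// farPlane / dot(eyeRay, -cameraFront), with the denominator's magnitude
// clamped away from zero so a degenerate ray can't divide by zero.
float eyeCorrection(const vec3& eyeRay, const vec3& cameraFront, float farPlane)
{
    const vec3 negFront = { -cameraFront.x, -cameraFront.y, -cameraFront.z };
    const float d = dot(eyeRay, negFront);
    const float safe = (std::abs(d) < 1e-6f) ? (d < 0.0f ? -1e-6f : 1e-6f) : d;
    return farPlane / safe;
}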


In Topic: Reconstructing Position From Depth Buffer

05 August 2014 - 08:39 AM

 


Since you have to do this for each pixel, this method can be slower than the previous one.

 

There's no need for an entire matrix mul per pixel if you do it correctly.

 

I've explained reconstructing linear z from the depth buffer a few times before. You can look at this thread from a month ago that explains the impetus behind the math required to reconstruct linear z from a depth buffer. I've used this method in my deferred renderers in the past and it's always worked well.

 

 

Nice one :) I was talking about the simpler solution (which requires doing that calculation per pixel), but of course you can "extract" the "behind the scenes" math as you did :) Btw, thanks for the information, I'll store it on my HDD :D
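
For reference, for a standard OpenGL perspective projection that "behind the scenes" math collapses to a single expression per pixel. A C++ sketch (nearPlane/farPlane are the clip-plane distances; d is the value sampled from the depth buffer):

// Linear eye-space depth recovered from a non-linear OpenGL depth value.
float linearEyeDepth(float d, float nearPlane, float farPlane)
{
    const float zNdc = d * 2.0f - 1.0f; // [0, 1] -> [-1, 1]
    return (2.0f * nearPlane * farPlane)
         / (farPlane + nearPlane - zNdc * (farPlane - nearPlane));
}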

 

@BlueSpud:

Sorry, I wasn't around for a few days, but I'm glad it's working :)

 

You should also check Samith's post about reconstructing linear depth from the depth buffer. With linear depth, the world-space position reconstruction is faster, since you just need the eye ray and the eye position (and of course the linear depth).


In Topic: Reconstructing Position From Depth Buffer

30 July 2014 - 07:47 AM

I'm using linear depth rendered into a texture (a 32-bit floating-point texture, so there's a little extra overhead). From linear depth it's easy to calculate the world-space position with the following steps:

- calculate an eye ray, which points from the eye position to the corresponding point on the far plane

- when you have this ray, you can simply compute the world-space position as eyePosition + eyeRay * depth, if the depth is in the [0, 1] range (see the sketch below).
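
A minimal C++ sketch of those two steps (vec3 and the helper functions are stand-ins; eyeRay is assumed to reach all the way to the far plane, so a depth in [0, 1] scales it directly):

struct vec3 { float x, y, z; };

vec3 add(vec3 a, vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
vec3 scale(vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }

// eyeRay points from the eye to the matching point on the far plane
// (typically interpolated from the frustum's far-plane corner rays);
// linearDepth is the stored depth in [0, 1].
vec3 reconstructWorldPos(vec3 eyePosition, vec3 eyeRay, float linearDepth)
{
    return add(eyePosition, scale(eyeRay, linearDepth));
}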

 

This method is the same as the one Styves mentioned in the post above. There are some variations of this technique, where the linear depth is stored in the range [0, far - near] or [near, far] or something like that, but the "algorithm" is the same.

 

However, the "basic" stored depth is non-linear, so if you'd like to use that, there's a really simple (but not so cost-effective) method:

- if your texcoord is in the [0, 1] range, convert it to [-1, 1] with "texcoord.xy * 2 - 1"

- set the z coordinate from the depth value (in OpenGL the stored depth is in [0, 1] too, so it needs the same remapping)

- then apply a homogeneous transformation with the inverse of view * projection and divide by w

 

Something like this (GLSL - note: I didn't test it, just wrote it here):

// read the stored depth at the coordinate
float depth = getDepthValue(texcoord);

// build the NDC-space position
vec4 pos;
pos.xy = texcoord * 2.0 - 1.0; // [0, 1] -> [-1, 1]
pos.z = depth * 2.0 - 1.0;     // the stored depth is in [0, 1] as well
pos.w = 1.0;

// transform back to world space and undo the perspective divide
pos = invViewProj * pos; // or pos * mat, depending on the matrix representation
pos /= pos.w;

vec3 worldPos = pos.xyz;

Since you have to do this for each pixel, this method can be slower than the previous one.


In Topic: Deferred Point Lights position error [SOLVED]

27 July 2014 - 04:42 AM

Your eye ray is interpolated linearly; you need to move the calculation into the fragment shader, i.e. send worldposition.xyz over to the fragment shader.

 

Henning
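
(One classic way a per-vertex ray goes wrong: anything non-linear, such as normalization, doesn't survive the rasterizer's linear interpolation. A tiny C++ check with made-up rays:)

#include <cmath>
#include <cstdio>

struct vec3 { float x, y, z; };

vec3 lerp(vec3 a, vec3 b, float t)
{
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

vec3 normalize(vec3 v)
{
    const float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

int main()
{
    const vec3 a = { 1.0f, 0.0f, 0.0f };
    const vec3 b = { 0.0f, 1.0f, 0.0f };

    // what the rasterizer hands you when you normalize per vertex:
    const vec3 perVertex = lerp(normalize(a), normalize(b), 0.5f);
    // versus normalizing per fragment:
    const vec3 perFragment = normalize(lerp(a, b, 0.5f));

    // perVertex is (0.5, 0.5, 0), length ~0.707 -- no longer unit length
    std::printf("per vertex:   %f %f %f\n", perVertex.x, perVertex.y, perVertex.z);
    std::printf("per fragment: %f %f %f\n", perFragment.x, perFragment.y, perFragment.z);
}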

 

Wow, and really... :) I don't know why I put it into the vertex shader at all. It was in the fragment shader at first; I probably wanted to optimize a little bit, and I didn't test it. I know, these are just excuses. :D

 

Anyway, thank you! :)

