• ### Similar Content

• By lxjk
Hi guys,
There are many ways to do light culling in tile-based shading. I've been playing with this idea for a while, and just want to throw it out there.
Because tile frustums are generally small compared to the light radius, I tried using a cone test to reduce the false positives introduced by the commonly used sphere-frustum test.
On top of that, I use distance to the camera rather than depth for the near/far test (i.e. slicing by spheres).
This method can be naturally extended to clustered light culling as well.
The following image shows the general idea:

Performance-wise I get around a 15% improvement over the sphere-frustum test. You can also see how a single light performs below, from left to right: (1) standard rendering of a point light; then the tiles that passed (2) the sphere-frustum test; (3) the cone test; (4) the spherical-sliced cone test.

I put the details in my blog post (https://lxjk.github.io/2018/03/25/Improve-Tile-based-Light-Culling-with-Spherical-sliced-Cone.html), GLSL source code included!

Eric
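For anyone skimming before reading the blog post, the two ideas can be sketched roughly like this. This is a minimal CPU-side sketch under my own naming (the real implementation is GLSL in the linked post): treat the tile as a cone from the camera and compare angles against the light's angular radius, then cull near/far slices by distance to the camera.

```cpp
#include <cmath>

// Minimal sketch of the idea (names and structure are my own, not taken
// from the blog post): approximate the tile by a cone from the camera
// (at the origin) and test it against the light's bounding sphere.
struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(Vec3 v) { return std::sqrt(dot(v, v)); }

// Cone vs. sphere: the sphere subtends an angular radius of asin(r / d),
// so it intersects the cone when the angle from the cone axis to the
// sphere center is within halfAngle + that angular radius.
// coneAxis is assumed to be normalized.
bool coneLightTest(Vec3 coneAxis, float halfAngle,
                   Vec3 lightCenter, float lightRadius)
{
    float d = length(lightCenter);      // distance from the camera
    if (d <= lightRadius) return true;  // camera is inside the light
    float cosA = dot(coneAxis, lightCenter) / d;
    cosA = std::fmax(-1.0f, std::fmin(1.0f, cosA));
    return std::acos(cosA) <= halfAngle + std::asin(lightRadius / d);
}

// "Sliced by spheres": near/far culling by distance to the camera rather
// than by view-space depth, so the slice boundaries are spheres.
bool sphericalSliceTest(float sliceNear, float sliceFar,
                        Vec3 lightCenter, float lightRadius)
{
    float d = length(lightCenter);
    return d + lightRadius >= sliceNear && d - lightRadius <= sliceFar;
}
```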

• Good evening everyone!

I was wondering if there is an equivalent of GL_NV_blend_equation_advanced for AMD?
Basically, I'm trying to find a more widely compatible version of it.

Thank you!

• Hello guys,

Why does my wavefront model not show for me? I have already checked, and I get no errors.
My download (mega.nz) should be the original, but I have tried it with no success...
I have attached the blend source and a PNG file here.

PS: Why is our community not active? I have been waiting a very long time!
Thanks!

I wasn't sure if this would be the right place for a topic like this, so sorry if it isn't.
I'm currently working on a project for Uni using FreeGLUT to make a simple solar system simulation. I've got to the point where I've implemented all the planets and have used a scene graph to link them all together. The issue I'm having now, though, is getting the planets and moons to orbit correctly at their own orbital speeds.
I'm not really experienced with using matrices for stuff like this, so that's likely why I can't figure out how exactly to get it working. This is where I'm applying the transformation matrices, as well as pushing and popping them. This is within the Render function that every planet, including the sun and moons, will have and run.

```cpp
if (tag != "Sun")
{
    glRotatef(orbitAngle, orbitRotation.X, orbitRotation.Y, orbitRotation.Z);
}
glPushMatrix();
glTranslatef(position.X, position.Y, position.Z);
glRotatef(rotationAngle, rotation.X, rotation.Y, rotation.Z);
glScalef(scale.X, scale.Y, scale.Z);
glDrawElements(GL_TRIANGLES, mesh->indiceCount, GL_UNSIGNED_SHORT, mesh->indices);
if (tag != "Sun")
{
    glPopMatrix();
}
```

The `if (tag != "Sun")` parts are my attempts at getting the planets to orbit correctly, though it likely isn't the way I'm meant to be doing it. So I was wondering if someone would be able to help me, as I really don't have an idea of what to do to get it working. Using the if statement is truthfully the closest I've got to it working, but there are still weird effects, like the planets orbiting faster than they should depending on the number of planets actually being updated/rendered.
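One common shape for this kind of scene-graph rendering (a hedged sketch, not the poster's actual code: the `MatrixStack` type and the 2D orbits are just illustration) is to bracket *each* body's transforms in its own push/pop pair, so a planet's orbit rotation applies to its moons but never leaks into sibling planets:

```cpp
#include <stack>
#include <cmath>

// Tiny stand-in for OpenGL's matrix stack, using 2D poses for brevity.
struct Pose { float angle = 0.0f; float x = 0.0f; float y = 0.0f; };

struct MatrixStack {
    Pose current;
    std::stack<Pose> saved;
    void push() { saved.push(current); }
    void pop() { current = saved.top(); saved.pop(); }
    void rotate(float a) { current.angle += a; }
    void translate(float dx, float dy) {
        float c = std::cos(current.angle), s = std::sin(current.angle);
        current.x += c * dx - s * dy;
        current.y += s * dx + c * dy;
    }
};

// Render one body: push, apply orbit + offset, "draw", recurse, pop.
void renderBody(MatrixStack& ms, float orbitAngle, float orbitRadius,
                float* outX, float* outY)
{
    ms.push();                        // isolate this body's transforms
    ms.rotate(orbitAngle);            // orbit around the parent
    ms.translate(orbitRadius, 0.0f);  // move out to the orbit radius
    *outX = ms.current.x;             // "draw" (record the world position)
    *outY = ms.current.y;
    // ... moons would be rendered here, inheriting this transform ...
    ms.pop();                         // siblings start from the parent's matrix
}
```

Because each body pops everything it pushed, the result no longer depends on how many planets were rendered before it, which is what tends to cause the "orbiting faster with more planets" effect.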

• Hello everyone,
I have a problem with a texture

# OpenGL World Position From Depth

## Recommended Posts

Hi,

I am having a lot of trouble trying to recover world-space position from depth.

I swear I have managed to get this to work before in another project, but I have been stuck on this for ages.

I am using OpenGL and a deferred pipeline.

I am not modifying the depth in any special way, just whatever OpenGL does,

and I have been trying to recover the world-space position with this (I don't care about performance at this time, I just want it to work):

```glsl
vec4 getWorldSpacePositionFromDepth(
    sampler2D depthSampler,
    mat4 proj,
    mat4 view,
    vec2 screenUVs)
{
    mat4 inverseProjectionView = inverse(proj * view);

    // Remap the stored depth from [0, 1] back to NDC [-1, 1]
    float pixelDepth = texture(depthSampler, screenUVs).r * 2.0 - 1.0;

    // Rebuild the clip-space position from the UVs and depth
    vec4 clipSpacePosition = vec4(screenUVs * 2.0 - 1.0, pixelDepth, 1.0);

    // Unproject and apply the perspective divide
    vec4 worldPosition = inverseProjectionView * clipSpacePosition;
    worldPosition = vec4(worldPosition.xyz / worldPosition.w, 1.0);
    return worldPosition;
}
```


Which I am sure is how many other sources do it...
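For what it's worth, the NDC-depth mapping this kind of code inverts can be sanity-checked in isolation on the CPU. This is a minimal 1D sketch with illustrative names, using the standard GL perspective z terms (near plane n, far plane f):

```cpp
#include <cmath>

// Forward: view-space z (negative, in front of the camera) -> NDC depth,
// using the standard GL perspective mapping with near n and far f.
float viewZToNdc(float z, float n, float f) {
    float A = -(f + n) / (f - n);
    float B = -2.0f * f * n / (f - n);
    return (A * z + B) / (-z);  // clip-space z divided by clip-space w (= -z)
}

// Inverse, as a position-from-depth shader effectively performs it
// through the inverse projection matrix and the perspective divide.
float ndcToViewZ(float zNdc, float n, float f) {
    float A = -(f + n) / (f - n);
    float B = -2.0f * f * n / (f - n);
    return -B / (zNdc + A);
}
```

If this round-trips but the full reconstruction doesn't, the problem usually lies in the 0-1 to -1..1 remapping step or in a mismatched matrix, not in the idea itself.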

But the positions seem distorted, and they get worse as I move the camera away from the origin, which of course then breaks all of my lighting...

Please see attached image to see the difference between depth reconstructed world space position and the actual world space position

Any help would be much appreciated!

K

##### Share on other sites
That's not what you want; since you're in a shader, you could precompute the z coordinate or use gl_FragCoord's z value.

Anyway:
(model * view) * projection = mat1
vertex * mat1 = a
a.xyz = a.xyz / a.w
a = a * 0.5 + 0.5

##### Share on other sites
Your code looks slow but sane. How are the screenUVs computed?
Are you passing the exact same view matrix that the scene was rendered with? Maybe try computing the inverse view-projection matrix on the CPU and passing it in.

> That's not what you want; since you're in a shader, you could precompute the z coordinate or use gl_FragCoord's z value.

Reconstructing position from the depth buffer is common in post-processing / deferred shading systems, as the original geometry's gl_FragCoord is not available there and there is no other way to determine the per-pixel world-space coordinates.

##### Share on other sites

The screenUVs are just 0-1 UVs for a screen-space quad, like the usual deferred setup.

When I use an explicit world-space position buffer everything renders perfectly, so I am sure it is reading from the correct UVs.

I tried the CPU-side inverse view-projection and it didn't make any difference.

It is so weird, because it looks almost okay until the camera is moved far away!

##### Share on other sites

I can't believe this - it was something so simple:

my glDepthRangef was set to 0.1f and 1000.0f; setting it to 0.0f and 1000.0f fixed the position reconstruction

argh!

Thanks for the help anyway! :)

##### Share on other sites
glDepthRange specifies the NDC-to-depth-texture transform. Generally you want to leave it at (0, 1) and never touch it.
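To make the failure concrete (a hedged sketch with made-up function names): with glDepthRange(n, f), the value stored in the depth buffer is d = n + (f - n) * (z_ndc * 0.5 + 0.5), so the shader's `* 2.0 - 1.0` remap only recovers NDC z when n = 0 and f = 1; any other range needs the full inverse:

```cpp
#include <cmath>

// Window-space depth actually stored for a given NDC z, per the
// glDepthRange(n, f) mapping (n and f here are depth-range values in
// [0, 1], not the projection's near/far planes).
float ndcToWindowDepth(float zNdc, float n, float f) {
    return n + (f - n) * (zNdc * 0.5f + 0.5f);
}

// The inverse a reconstruction shader would need for a non-default range;
// with the default (0, 1) it reduces to the usual d * 2.0 - 1.0.
float windowDepthToNdc(float d, float n, float f) {
    return (d - n) / (f - n) * 2.0f - 1.0f;
}
```

(glDepthRange clamps its arguments to [0, 1], which is why passing 1000.0f above behaved like 1.0f.)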

##### Share on other sites

Oh, so that explains it then.

Thank you!

##### Share on other sites

> The setting of (0,1) maps the near plane to 0 and the far plane to 1. With this mapping, the depth buffer range is fully utilized.

Which is to say: you have no need to touch it.