
OpenGL Dynamic Ambient Occlusion

This topic is 1398 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

Hey,

So I'm trying to deepen my GPU knowledge and am embarking on rendering a Minecraft-style mesh with Dynamic Ambient Occlusion, à la this GPU Gems article:

http://http.download.nvidia.com/developer/SDK/Individual_Samples/DEMOS/OpenGL/src/dynamic_amb_occ/docs/214_gems2_ch14.pdf

I guess the principal question I have is this:

When you create the AO shadow value per vertex (element) in your mesh, do you then just follow that with a normal render, relying on the usual linear interpolation between adjacent vertices' shadow values in the fragment shader to actually shade the fragments?

i.e. does the process look something like this:

2 passes of the AO algorithm

1 pass of a normal render, using a texture map output by the AO algorithm to get shadow values per vertex?

Or am I missing something in the article, and you can actually do the screen render within the 2 AO passes?

Thanks!


I can help you with this, but... please don't. I've implemented various SSAO variations myself, despite being fully aware of SSAO's limitations.

They all had one thing in common: you can't find edges that aren't there, and you can't know whether one mesh really is casting AO onto another. It's going to be awful.

On models you usually bake in some fake AO to make them look more detailed. On dynamic stuff, including just basic movement, it won't work well.

 

You have a wonderful opportunity to use your vertex attributes for something useful. Set aside 6-8 bytes for lighting.

You can use 4 channels for emissive light, 1 channel for shadows, 1 for AO (keeping it separate lets you ramp it), and even 1 for brightness (from torches).

If you don't want anything so fancy, just set aside 2 channels: shadows and AO.
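A minimal sketch of what that per-vertex packing might look like (the struct name, layout, and ramp are my own invention for illustration, not from any particular engine):

```cpp
#include <cstdint>

// Hypothetical 8-byte per-vertex lighting block: 4 channels of emissive
// light, plus separate shadow, AO, and torch-brightness channels.
// Keeping AO in its own channel lets you remap ("ramp") it at shading
// time without touching the shadow term.
struct VertexLight {
    uint8_t emissive[4]; // RGBA emissive, 0-255
    uint8_t shadow;      // 0 = fully shadowed, 255 = fully lit
    uint8_t ao;          // 0 = fully occluded, 255 = open
    uint8_t brightness;  // light from torches etc.
    uint8_t pad;         // spare / alignment
};

// Example ramp applied to the AO channel (shown on the CPU here; the
// same math would live in the fragment shader): square the normalized
// value to darken corners more aggressively.
float rampAO(uint8_t ao) {
    float a = ao / 255.0f;
    return a * a;
}
```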

 

Using the classic blanket approach to shadows will leave your caves in half-shade, just like the rest of your shadows. Not to mention that shadows are affected by atmospheric light as well. In voxel worlds that's easily accounted for by finding the "ground level" in a post-process stage over your terrain.
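One possible reading of that "ground level" post-process, sketched per voxel column (this assumes a bottom-to-top column layout with 0 meaning air; the names are mine):

```cpp
#include <cstdint>
#include <vector>

// Find the highest solid voxel in a column. Everything above it can see
// the sky and gets full atmospheric light; everything below (caves) does
// not, so caves aren't stuck in the same half-shade as surface shadows.
int groundLevel(const std::vector<uint8_t>& column) {
    for (int y = static_cast<int>(column.size()) - 1; y >= 0; --y) {
        if (column[y] != 0) return y; // highest solid voxel
    }
    return -1; // column is all air
}

bool seesSky(const std::vector<uint8_t>& column, int y) {
    return y > groundLevel(column);
}
```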

 

See:

http://0fps.net/2013/07/03/ambient-occlusion-for-minecraft-like-worlds/
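The per-vertex rule from that article boils down to the two edge-adjacent neighbor voxels and the diagonal corner voxel at each face vertex; a sketch:

```cpp
// The vertex-AO rule from the 0fps article: each face vertex looks at its
// two edge-adjacent neighbor voxels (side1, side2) and the diagonal corner
// voxel. If both sides are solid, the corner is fully enclosed regardless
// of the corner voxel; otherwise occlusion just counts solid neighbors.
// Returns 0 (darkest) .. 3 (fully open).
int vertexAO(bool side1, bool side2, bool corner) {
    if (side1 && side2) return 0;
    return 3 - (static_cast<int>(side1) +
                static_cast<int>(side2) +
                static_cast<int>(corner));
}
```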

 

And why not SSAO:

http://backslashn.com/post/37712343299/this-is-not-how-ambient-occlusion-works

Edited by Kaptein


So is that article I posted actually SSAO, though, as opposed to an approximation of true AO? I thought it was the latter, not the former. My understanding is that a lot of SSAO implementations use the depth-buffer hack for faster rendering, but then you get janky outcomes. The GPU Gems article, I think, is approximating global AO. Ya? Or am I just wrong on the terminology?

Anyway, the AO for Minecraft-like worlds article is pretty great.

Just from a theoretical learning perspective, though: if you have the answer to my original question, can you provide it?

 

Thanks :)


You are correct. It is an interesting technique. I don't think it will work too well in a minecraftian-style world without some serious work, though. Too much stuff.

Definitely interesting, though. Some things to consider: you need access to the atmosphere, as well as any nearby lights. You need to be able to efficiently represent vertices as disk shapes, and I think that means you can't have quads with only 4 vertices.

No idea how well this will work out, or how scalable it is. But it looks fancy.

Edited by Kaptein


Ok, so rambling onwards. My central hang-ups are still just about implementation. I'm somewhat worried about performance, but it's way easier to deal with that after implementing than to obsess over it now. Mostly I just want to learn this stuff; secondarily, it's a bonus if it looks amazing and runs at framerate...

So here are my thoughts on implementation. Is anything in here either insane, or just stupid because I don't know about some awesome GPU trick?

Basically, I think I can generate better results by generating AO data from the actual voxel cube faces instead of their verts, since they are cubes and not a smoother mesh. Maybe this is wrong... Feedback appreciated :)

GPU 1 (generate the disks with which we will perform the AO algorithm described in the Gems 2 chapter)
  vertex shader (pass-through)
  geometry shader
    at the center of each face of each block that is facing air:
      generate one point
      generate the appropriate normal
  transform feedback -> retrieve the list of points/normals
  fragment shader: discard (unless there is a way to just get the pipeline to stop after transform feedback?)

CPU 1 (build the disk lookup data structure)
  extracted points/normals go into textures
  build out the hierarchical representation data structure within the textures

GPU 2 (first AO pass; basically, render a single quad with UVs [0..1] to cover the whole data texture)
  vertex shader (pass-through)
  fragment shader
    run AO pass #1 and store the accumulated shadow data into a new texture

CPU 2 (here just to pass the texture back to the GPU for the second AO pass)
  extract the shadow info texture from GPU 2
  Is there a way to run the second GPU pass without coming back out to the CPU? If the texture is already bound from the previous render, I guess it's still in the same place?

GPU 3 (second AO pass)
  render a single quad with UVs [0..1] to cover the whole data texture
  vertex shader (pass-through)
  fragment shader
    run AO pass #2 and store the accumulated shadow data into a shadow info texture

GPU 4 (render the actual scene)
  vertex shader
  geometry shader
    for each cell center, generate the 2 triangles to render
  fragment shader
    in addition to the normal logic, use the AO data to shade the mesh
    because we generated AO data for faces, not verts, blend between up to the nearest 4 face values as calculated in GPU 2+3
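For that last blending step, here's a CPU-side sketch of the math the fragment shader would do, assuming you've fetched the four nearest face AO values and the fragment's fractional position within that neighborhood (the function and parameter names are my own):

```cpp
// Bilinearly blend the accumulated shadow values of the up-to-4 nearest
// faces, given the fragment's fractional position (u, v) in [0, 1] within
// that 2x2 neighborhood. This is the same math GLSL's mix() would do.
float blendFaceAO(float a00, float a10, float a01, float a11,
                  float u, float v) {
    float bottom = a00 + (a10 - a00) * u; // lerp along u at v = 0
    float top    = a01 + (a11 - a01) * u; // lerp along u at v = 1
    return bottom + (top - bottom) * v;   // lerp the two results along v
}
```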

