
OpenGL glXMakeCurrent slowly leaks memory


Recommended Posts

This is a very strange problem I've recently stumbled upon, and I have neither an explanation nor a remedy for it. The situation is the following:

The main thread runs the UI and the logic (the toolkit has its own Display connection).

The render thread is purely OpenGL (its own context, its own Display connection).

If I have one window rendering, everything is fine and no memory leak happens.

If I have two windows, then during each render call each window is made current using glXMakeCurrent, rendered, and later swapped. So basically this happens each frame:

 

glXMakeCurrent(display, window1Context...)
render window 1
glXMakeCurrent(display, window2Context...)
render window 2
wait for data for the next frame

 

The interesting thing is that, while everything runs fine, the application slowly leaks memory. If I stop calling glXMakeCurrent, the leaking goes away; with glXMakeCurrent enabled, the leaking starts again. How fast it leaks differs between computers.
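For reference, a stripped-down, self-contained version of that per-frame pattern looks roughly like the following. This is only a simplified sketch of the setup described above, not the actual engine code: the real application creates its windows through the UI toolkit and runs this on a dedicated render thread, and error handling is omitted here.

/* repro.c — build with: cc repro.c -o repro -lGL -lX11
   Simplified sketch of the per-frame pattern above, not the engine code. */
#include <GL/glx.h>
#include <X11/Xlib.h>
#include <unistd.h>

typedef struct { Window win; GLXContext ctx; } GlxWindow;

static GlxWindow create_window(Display *dpy, XVisualInfo *vi, int x)
{
    XSetWindowAttributes swa = {0};
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                   vi->visual, AllocNone);
    swa.border_pixel = 0;

    GlxWindow w;
    w.win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), x, 0, 320, 240, 0,
                          vi->depth, InputOutput, vi->visual,
                          CWColormap | CWBorderPixel, &swa);
    XMapWindow(dpy, w.win);
    w.ctx = glXCreateContext(dpy, vi, NULL, True); /* one context per window */
    return w;
}

int main(void)
{
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 24, None };
    Display *dpy = XOpenDisplay(NULL);   /* render thread's own connection */
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

    GlxWindow w1 = create_window(dpy, vi, 0);
    GlxWindow w2 = create_window(dpy, vi, 340);

    for (;;) {                           /* per-frame loop as described above */
        glXMakeCurrent(dpy, w1.win, w1.ctx);
        /* render window 1 (omitted) */
        glXMakeCurrent(dpy, w2.win, w2.ctx);
        /* render window 2 (omitted) */

        glXSwapBuffers(dpy, w1.win);
        glXSwapBuffers(dpy, w2.win);

        usleep(16000);                   /* stand-in for "wait for data" */
    }
}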

 

Any idea what could be wrong there? OpenGL runs entirely in the render thread. The main thread has no connection to OpenGL at all; it only runs the UI toolkit, so to speak. Also, the render thread has its own Display connection, which is thread-safe.

 

Ideas are welcome, since I'm out of ideas right now.

 

EDIT: Note that I tried calling only glXMakeCurrent, without rendering between the calls, and the leaking is the same. So rendering is not the culprit.

Edited by RPTD


The first step is to take a memory dump and investigate what's actually going on.

Secondly, others have reported leaks using this method, so I'd like to know how you are testing for the memory leak. I've looked around a bit and can't find a straight answer.

Are there any resources that you aren't freeing up?


One way I test is using the process monitor to see whether the overall application memory stays stable or steadily increases. That is what showed the little runaway of memory due to glXMakeCurrent. Beyond that, engine-internal objects are ref-counted and stored in a global list when leak checking is enabled; upon exiting, the engine checks that no such list still contains live objects. So I can say precisely that no leaking goes on inside the internal workings. It really only happens when glXMakeCurrent is enabled, even if nothing else is going on.
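For what it's worth, on Linux the same kind of check can also be done from inside the application by reading VmRSS from /proc/self/status. A small helper along these lines (my own sketch, not part of the engine) can be logged once per frame to see exactly which call makes the resident size grow:

#include <stdio.h>
#include <string.h>

/* Returns the process's resident set size in kB, or -1 on failure.
   Linux-specific: parses the VmRSS line of /proc/self/status. */
static long resident_kb(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    long kb = -1;
    if (!f) return -1;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            sscanf(line + 6, "%ld", &kb);
            break;
        }
    }
    fclose(f);
    return kb;
}

/* e.g. once per frame: printf("RSS: %ld kB\n", resident_kb()); */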


You can also try:

* calling glXMakeCurrent with the same context in a single-window version (i.e. call it each frame for the same window) and see if the leak is still there (a small sketch of this follows below);
* changing the X.Org version (both older and newer);
* changing the driver version (mainly if you are using NVIDIA drivers; in that case, use a stable driver, not a beta one);

and see if things change. If you can find any difference and you're using the latest GLX version available, report it accordingly.
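The single-window test from the first point could look roughly like this (a sketch only; dpy, win and ctx are assumed to be created the same way as in your two-window application):

#include <GL/glx.h>

/* Re-issue glXMakeCurrent with the SAME drawable and context every frame
   and watch whether the memory still grows. */
static void single_window_loop(Display *dpy, Window win, GLXContext ctx)
{
    for (;;) {
        glXMakeCurrent(dpy, win, ctx);   /* same context, every frame */
        /* render, or leave empty as in the other tests */
        glXSwapBuffers(dpy, win);
    }
}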


glXMakeCurrent with the same window has no effect, so that is not the problem. I went ahead and did a couple of tests.

 

Valgrind on the editor: no leaks detected, but memory shoots up in the process monitor. And by shooting up I mean this:

Test System 1: around 10 KB per second increase.

Test System 2: around 100 MB(!) per second increase.

So it can't be me leaking memory in my program; Valgrind would spot that. But why does the process memory consumption shoot up like this?

 

I also tested these situations:

for each frame
    for each window
        glXMakeCurrent()
        // no render
    for each window
        glXSwapBuffers()

This leaks as mentioned above.

for each frame
    for each window
        glXMakeCurrent()
        // no render
    // no swapping

No leaking in this case.

for each frame
    // no glXMakeCurrent()
    // no render
    for each window
        glXSwapBuffers()

No leaking in this case either.

 

So the leaking happens as soon as glXMakeCurrent is used together with glXSwapBuffers(). And, interestingly, Valgrind does not pick up this nearly 1 GB of lost memory.

 

EDIT: I did some more testing, and it seems I had been a bit quick about Test System 1 with its slowly rising memory consumption. Letting the editor sit fully loaded, with all rendering going on, kept the memory consumption in the process monitor at the same level over a couple of seconds. System 2, though, does shoot up by hundreds of MB.

Edited by RPTD

Hey, just out of interest: are you working on a PC with an NVIDIA GPU? I remember having similar problems with an SFML application I wrote. When it was running in dual-window mode it showed pretty much the same behaviour you are describing, but only on NVIDIA cards. On my HD 6850 it ran fine, as well as when using the integrated graphics.

No, one system is a Radeon HD 7970 with the Crimson driver, while the other is some Radeon 5xxx (not sure right now) with the AMDGPU driver. The runaway happens on the AMDGPU one.


It does not seem to be a threading issue; it happens even with synchronous rendering. Something seems to be wrong in Mesa:

 

==00:00:22:15.628 4423== 544,154,936 bytes in 13,576,238 blocks are still reachable in loss record 68,955 of 68,955
==00:00:22:15.628 4423==    at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==00:00:22:15.628 4423==    by 0x8A2714B: ??? (in /usr/lib/x86_64-linux-gnu/libxcb.so.1.1.0)
==00:00:22:15.628 4423==    by 0x8A24ED0: ??? (in /usr/lib/x86_64-linux-gnu/libxcb.so.1.1.0)
==00:00:22:15.628 4423==    by 0x8A26616: ??? (in /usr/lib/x86_64-linux-gnu/libxcb.so.1.1.0)
==00:00:22:15.628 4423==    by 0x8A26720: xcb_wait_for_reply (in /usr/lib/x86_64-linux-gnu/libxcb.so.1.1.0)
==00:00:22:15.628 4423==    by 0x84EE8C2: ??? (in /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1.2.0)
==00:00:22:15.628 4423==    by 0x84E994E: ??? (in /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1.2.0)
==00:00:22:15.628 4423==    by 0x84E3917: ??? (in /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1.2.0)
==00:00:22:15.628 4423==    by 0x84E9E4B: ??? (in /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1.2.0)
==00:00:22:15.628 4423==    by 0x84BD3B4: glXMakeContextCurrent (in /usr/lib/x86_64-linux-gnu/mesa/libGL.so.1.2.0)
 

xcb_wait_for_reply shows up in all of the large-scale loss records. Mesa bug?
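One more thing I want to rule out (just a guess on my part, not a confirmed cause): the still-reachable blocks are allocated inside libxcb, so they might simply be events or replies that the render thread's Display connection never reads, for example swap-completion events. Draining the event queue on that connection once per frame would tell:

#include <X11/Xlib.h>

/* Read and discard any events already queued on this Display connection.
   XPending() is non-blocking, so this never stalls the render thread. */
static void drain_x_events(Display *dpy)
{
    XEvent ev;
    while (XPending(dpy) > 0)
        XNextEvent(dpy, &ev);
}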


Could be a Mesa bug.  Could be an XCB bug.  You can probably install the debug symbols for both libraries to get a better idea.
