
OpenGL Capture what is Rendered in OpenGL


Recommended Posts


It is somewhat vague what exactly you want. Assuming that you want to read back the content of output rendered by yourself, you can use glReadPixels to get the pixels. The details depend on many circumstances, like reading from the default framebuffer's front or back buffer or from an FBO attachment, the pixel formats involved, the need for continuous readback over time, and so on.
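For example, a minimal readback sketch (assuming an OpenGL ES 2 context with an RGBA color buffer; the helper name is made up) could look like this:

// Minimal sketch: read the currently bound framebuffer back to CPU memory.
// Assumes an OpenGL ES 2 context and a viewport of width x height.
#include <GLES2/gl2.h>
#include <stdlib.h>

unsigned char *read_back_pixels(int width, int height)
{
    unsigned char *pixels = malloc((size_t)width * height * 4);
    if (!pixels)
        return NULL;

    // Tightly packed rows; GL_RGBA / GL_UNSIGNED_BYTE is always a valid
    // combination for glReadPixels in OpenGL ES 2.
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    // Note: rows come back bottom-up; flip them if the consumer expects top-down.
    return pixels;
}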


When you say "capture", there is still the question of where you want to capture it to. Do you want to capture it to GPU memory for further GPU processing, or do you want to capture it back to CPU memory, e.g. for saving to disk or for other CPU processing?

 

If you want to capture the data to GPU memory for further GPU operations, the best way is to create a texture or a renderbuffer, attach it to an FBO, and render the content directly into that FBO. After that you have the contents in a texture and can do whatever GPU-side operations on it you want.
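A minimal setup sketch for that (assuming an OpenGL ES 2 context; the function name is illustrative):

// Minimal sketch: create a texture and an FBO that renders into it.
#include <GLES2/gl2.h>

GLuint create_render_target(int width, int height, GLuint *out_texture)
{
    GLuint texture, fbo;

    // Color texture that will receive the rendered image.
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    // Framebuffer object with the texture as its color attachment.
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, texture, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return 0; // incomplete: check the texture format and size

    // While this FBO is bound, draw calls render into 'texture';
    // bind framebuffer 0 again to return to the default framebuffer.
    *out_texture = texture;
    return fbo;
}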

 

If you want to capture the data to CPU memory, then glReadPixels is your only choice if you are looking to use only core OpenGL ES 2 without assuming any extensions. Reading the default back buffer with glReadPixels can be very heavy, so it might be useful to use an FBO here as well, and glReadPixels the contents of the FBO instead of the default back buffer.
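For instance (a sketch reusing the read_back_pixels() helper and an FBO created as in the sketches above; all names are illustrative):

#include <GLES2/gl2.h>

unsigned char *read_back_pixels(int width, int height); // from the sketch above

// Read the color attachment of an FBO instead of the default back buffer:
// while the FBO is bound, glReadPixels reads from its attachment.
unsigned char *read_back_fbo(GLuint fbo, int width, int height)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    unsigned char *pixels = read_back_pixels(width, height);
    glBindFramebuffer(GL_FRAMEBUFFER, 0); // restore the default framebuffer
    return pixels;
}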

 

If you are looking to use OpenGL ES 3, or have the OpenGL ES 2 https://www.khronos.org/registry/gles/extensions/NV/NV_pixel_buffer_object.txt extension, then you can use PBOs (pixel buffer objects) as an optimized alternative to glReadPixels (see the section "Asynchronous glReadPixels" in the description of that extension).




PBOs are not an alternative to glReadPixels, though; you still need to issue a glReadPixels call. The difference is that glReadPixels can write the pixels into a PBO, which allows the readback to run asynchronously.
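Concretely, the asynchronous pattern could look roughly like this (a sketch assuming an OpenGL ES 3 context; the function names are made up):

// Minimal sketch: glReadPixels into a PBO, then map the buffer a frame or
// two later, once the transfer has had time to finish.
#include <GLES3/gl3.h>
#include <string.h>

static GLuint pbo;

void create_readback_pbo(int width, int height)
{
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

void start_async_readback(int width, int height)
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    // With a PIXEL_PACK buffer bound, the last argument is an offset into
    // the buffer rather than a client pointer, so the call returns quickly.
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

void finish_async_readback(int width, int height, unsigned char *dst)
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    void *src = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0,
                                 width * height * 4, GL_MAP_READ_BIT);
    if (src) {
        memcpy(dst, src, (size_t)width * height * 4);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}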



Let me clarify a bit: I want this capture to be reused later. I mean, I want to capture the rendered OpenGL output and then create a new rectangle with this capture as a texture on it. Which of these two ways you mentioned should I use? I would also be thankful if you could give me some links to follow, since I am new to OpenGL.

 

I should mention that the game is on Android and I am coding in Eclipse.

Edited by alireza.pir


Let me clarify a bit: I want this capture to be reused later. I mean, I want to capture the rendered OpenGL output and then create a new rectangle with this capture as a texture on it. Which of these two ways you mentioned should I use?

This is usually not called capturing but "render to texture". The best way to do this is to not render to the default framebuffer but to an FBO (to a texture attached to an FBO, to be precise); look e.g. here to see an example. AFAIK OpenGL ES 2 already supports FBOs. Use "opengl fbo tutorial" in your favorite search engine to get … well, more tutorials than ever needed ;)

Edited by haegarr

Thank you for the answers, and sorry for asking again; I just want to make it completely clear for myself.

I don't want the texture to be rendered on the screen itself; I want to set this texture on a new rectangle. For example, if I have this in my game (it was at first a rectangle filled with blue color and was turned into this by some functionality; the blue and yellow dots are the vertices of the mesh):

[image: 4QYjq.png]

I want to make a new rectangle polygon with this previously rendered output textured onto it, meaning (the white dots are the new mesh and the blue part is the texture on it):

[image: Capture3.png]

Is your suggestion still suitable for me?
Edited by alireza.pir



Is your suggestion still suitable for me?

Yes. You do a first rendering pass with the folded mesh into a texture attachment of an FBO. Then you do a second render pass with mesh #2 (the orange quadrangle in the second picture) with texture mapping, and the bound texture is the one that was the render target in the first pass. So the same texture is written in the first pass and read in the second pass. That is the typical application of render to texture.
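In code, the two passes could look roughly like this (a sketch only, assuming the render target was created with the illustrative create_render_target() helper from the earlier FBO sketch; the draw_* calls are placeholders for your own mesh rendering):

#include <GLES2/gl2.h>

void render_frame(GLuint capture_fbo, GLuint capture_tex,
                  int tex_width, int tex_height,
                  int screen_width, int screen_height)
{
    // Pass 1: render the folded mesh into the FBO's texture attachment.
    glBindFramebuffer(GL_FRAMEBUFFER, capture_fbo);
    glViewport(0, 0, tex_width, tex_height);
    glClear(GL_COLOR_BUFFER_BIT);
    /* draw_folded_mesh(); */           // placeholder: draw mesh #1 here

    // Pass 2: back to the default framebuffer; draw the new rectangle with
    // the captured image bound as its texture (written in pass 1, read here).
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, screen_width, screen_height);
    glClear(GL_COLOR_BUFFER_BIT);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, capture_tex);
    /* draw_textured_rectangle(); */    // placeholder: draw mesh #2 here
}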
