
Moving to OpenGL 3.1


Recommended Posts

Hi, it's me again :)

 

So I've decided to drop the 100-year-old compatibility and finally move on - RIP OpenGL 2.0.

 

As a short to-do list:

- change context creation

- change GLSL shaders and the bindings to the client

- drop the old uniform handling and use UBO

- introduce VAOs

 

And my biggest problem is with VAOs. I think I know how they work: in short, a VAO stores the bindings required for drawing (vertex attrib pointers, enabled vertex attrib arrays). This can be a good thing; however, this way I lose the connection with the currently bound shader.

 

How my current (2.0) rendering works:

bind a program
check the used attributes and enable/disable vertex attrib arrays
for each geometry
    bind vbo
    call vertex attrib pointers
    bind ibo (element array buffer)
    draw

And this way a vertex attrib pointer is only called for an active attribute. So if the VBO contains e.g. positions, normals and uvs but the shader only uses the positions, a single vertex attrib pointer is called (and note that the other attributes are disabled).

 

But if I use a VAO, the enabled state and the attrib pointers are associated with the VAO and decoupled from the shader.

// initialization
bind vao
bind vbo
call vertex attrib pointer for every attribute
call enable vertex attrib array for every attribute
bind ibo
//unbind vao to avoid mistakes

// drawing
bind program
for each geometry
    bind vao
    draw
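
In actual GL calls that would look something like this - a minimal sketch, assuming an interleaved position + normal layout (the 24-byte stride and all the variable names are placeholders, not my real data):

// initialization (once per geometry)
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glBindBuffer(GL_ARRAY_BUFFER, vbo);
// interleaved vec3 position + vec3 normal = 24-byte stride
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 24, (void*)0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 24, (void*)12);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

// the element array buffer binding is stored in the VAO as well
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);

glBindVertexArray(0); // unbind vao to avoid mistakes

// drawing
glUseProgram(program);
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);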

So the question: logically it looks better to call only two functions to draw a geometry instead of many, but do the bindings and enables affect performance? Of course I could call the enable/disable and attribPointer functions every frame, but I think that would have a bigger performance penalty.

 

Edit:

I was thinking the same thing this guy wrote here.

Edited by csisy


I handle my VAOs in a way that gives me only 2 calls per model, by enabling as many attributes as needed once at setup time:

Graphics::BufferData(BufferTarget::ArrayBuffer, sizeof(vertices) * sizeof(float), vertices, BufferUsage::StaticDraw);
// interleaved layout: vec3 position + vec2 texcoord = 20-byte stride
Graphics::VertexAttribPointer(0, 3, VertexAttribPointerType::Float, false, 20, 0);         // position at offset 0
Graphics::VertexAttribPointer(1, 2, VertexAttribPointerType::Float, false, 20, (void*)12); // texcoord at offset 12

Graphics::EnableVertexAttribArray(0);
Graphics::EnableVertexAttribArray(1);
Graphics::BindVertexArray(0); // unbind so later calls can't touch this VAO

And when rendering, I call just

Graphics::BindVertexArray(model);
Graphics::DrawArrays(PrimitiveType::TriangleStrip, 0, 5);

For my shaders I always bind the attributes statically to certain locations with

//vertex
layout (location = 0) in vec4 Vertex;
layout (location = 1) in vec2 TexCoord;

Everything else around binding textures/shaders is handled by my render pipeline, which sorts models by shader and then by texture to keep shader/texture swaps as low as possible (and saves performance this way). It would save even more performance if you also skipped the call to glBindVertexArray and did it once per cluster of identical models.
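
The sorting itself is nothing special. A sketch of the idea - DrawItem and its members are simplified placeholders, not my actual pipeline types:

#include <algorithm>
#include <vector>

struct DrawItem {
    unsigned shader;  // shader program handle
    unsigned texture; // texture handle
    unsigned vao;     // vertex array handle
};

// sort by shader first, then by texture, so a state change is only
// needed when the key changes while walking the list in order
void SortDrawList(std::vector<DrawItem>& items)
{
    std::sort(items.begin(), items.end(),
        [](const DrawItem& a, const DrawItem& b) {
            if (a.shader != b.shader) return a.shader < b.shader;
            return a.texture < b.texture;
        });
}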

Edited by Shaarigan


In practice, what you're going to find is that you only have a handful of vertex format combinations (i.e. enabled arrays, strides, offsets), but the buffers they're sourced from change with greater frequency.

 

What you need is GL_ARB_vertex_attrib_binding - it's getting into GL 4.x territory, but support for it should be ubiquitous nowadays, and it's designed to solve exactly this problem. Using this model your vertex format specification moves from runtime to load time, and at runtime you just bind a VAO and issue some glBindVertexBuffer calls.
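
Something like this - a sketch assuming a single interleaved position + texcoord format (vao, meshVbo, meshIbo and indexCount are placeholders; adjust the offsets and stride to your own layouts):

// load time: describe the vertex format once, stored in the VAO
glBindVertexArray(vao);
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);   // position, relative offset 0
glVertexAttribFormat(1, 2, GL_FLOAT, GL_FALSE, 12);  // texcoord, relative offset 12
glVertexAttribBinding(0, 0); // both attributes source from binding point 0
glVertexAttribBinding(1, 0);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

// runtime: the format stays put, only the buffer bindings change
glBindVertexArray(vao);
glBindVertexBuffer(0, meshVbo, 0, 20); // binding point 0, offset 0, 20-byte stride
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, meshIbo);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);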


Thanks for your replies!

 

I'm not sure I'd like to use any GL higher than 3.1. A decent number of Intel cards only support up to 3.1.

 

Well, I'm done with porting to 3.1. I've lost about 10 FPS (from 462 to 452), but I can accept this because many more functions are available now (e.g. geometry shaders). For now I followed the approach UE4 chose: create a single VAO for the whole rendering context. This way, the rest of the code is untouched. :)
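
In code the trick is tiny - a minimal sketch (the point is just that core profiles require some VAO to be bound):

// once, right after context creation
GLuint globalVao;
glGenVertexArrays(1, &globalVao);
glBindVertexArray(globalVao);
// from here on, the old 2.0-style glVertexAttribPointer /
// glEnableVertexAttribArray code keeps working unchanged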

 

Edit:

Never mind - it runs at the same speed as before, and the UBOs aren't even used yet. :)

Edited by csisy


Okay, so let's talk about the UBOs.

 

I think I have two different options and can't decide which one I should choose.

 

1) layout(std140)

This is probably the easier option because the alignments and sizes are predefined by the spec. This way I can create a UBO anywhere (e.g. before any shader program is linked) and share it across programs. I just have to follow the standard and align the data manually.
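
A minimal sketch of what I mean, assuming a loaded GL context - the block members are just the camera block I describe below:

// GLSL side - std140 offsets are fixed by the spec, no queries needed:
//
//   layout(std140) uniform Camera {
//       mat4 view;        // offset   0
//       mat4 projection;  // offset  64
//       vec3 cameraPos;   // offset 128 (a vec3 is padded to 16 bytes)
//       float nearPlane;  // offset 140 (fits into the vec3's padding)
//   };

// C++ mirror - this is the manual alignment mentioned above
struct CameraBlock {
    float view[16];
    float projection[16];
    float cameraPos[3];
    float nearPlane; // occupies the vec3's padding slot
};
static_assert(sizeof(CameraBlock) == 144, "must match the std140 layout");

// the UBO can be created before any program is linked
GLuint CreateCameraUbo()
{
    GLuint ubo;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(CameraBlock), nullptr, GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo); // bind to binding point 0
    return ubo;
}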

 

2) layout(shared)

This is trickier, because I first need a program in order to query the uniform block's parameters. However, when two blocks are declared identically in the shader code, their layouts match as well. So at engine startup I can compile and link a dummy shader in which every available uniform block is defined. Actually I can only imagine three: one for the camera and other global properties (view, projection, camera parameters, and so on), one for common object properties (world matrix and bone matrix array), and a last one for common lighting parameters (color, intensity).
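
For comparison, the query side of option 2 would look something like this - a sketch using the standard introspection calls ("Camera" and the member names match the dummy-shader idea above):

// query the layout once from the dummy program linked at startup
void QueryCameraBlock(GLuint dummyProgram)
{
    GLuint blockIndex = glGetUniformBlockIndex(dummyProgram, "Camera");

    GLint blockSize = 0;
    glGetActiveUniformBlockiv(dummyProgram, blockIndex, GL_UNIFORM_BLOCK_DATA_SIZE, &blockSize);

    // with layout(shared) the offsets are driver-chosen, so ask for them
    const char* names[] = { "view", "projection" };
    GLuint indices[2];
    GLint offsets[2];
    glGetUniformIndices(dummyProgram, 2, names, indices);
    glGetActiveUniformsiv(dummyProgram, 2, indices, GL_UNIFORM_OFFSET, offsets);

    // cache blockSize and offsets; every program that declares an
    // identical "Camera" block shares this layout
}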

 

What do you think?


Just std140 all the things. It's basically what D3D does, and look how popular it is.

 

Well, I'm done with porting to 3.1. I've lost about 10 FPS (from 462 to 452)

You're looking at this from the wrong perspective. Don't measure FPS; measure milliseconds per frame.

 

462 fps is a grand total of 2.164 milliseconds per frame.

452 fps is a grand total of 2.212 milliseconds per frame.

 

That's a difference of less than a twentieth of a millisecond (0.048 ms). That's nothing; you can probably chalk those margins up to measurement error.

 

Now, say that you're at 30 fps and you lose 10 fps.

 

30 fps is 33.333 ms.

20 fps is 50.000 ms.

 

The difference is a whopping 16.667 milliseconds per frame. That's a lot, and it's the difference between a playable 30 fps and a horribly unplayable 20 fps. Whereas from 460 fps to 450 fps, you probably can't physically notice a difference.


Just std140 all the things. It's basically what D3D does, and look how popular it is.

I thought the same, but it consumes more memory. That's probably not a big deal, though.

 

The FPS "drop" (which actually does not happen, just forgot to comment an extra "test" clear :)) was just a sidenote.

 

I guess I'll go the std140 way.
