
OpenGL 2D GUI system question


Hi all,

I am trying to build an OpenGL 2D GUI system (yeah yeah, I know I should not be reinventing the wheel, but this is for educational and some other purposes only).
I have built a GUI system before using 2D systems such as the HTML/JS canvas, but in a 2D system I can directly match mouse coordinates to the actual graphic coordinates, with additional computation for screen size/ratio/scale of course.

Now I want to port it to OpenGL. I know that to render a 2D object in OpenGL we specify coordinates in clip space or use an orthographic projection. Here's what I need help with:

1. What is the right way of rendering the GUI? Is it through drawing in clip space, or switching to an orthographic projection?

2. From screen coordinates (top left is 0,0 and bottom right is width,height), how can I map the mouse coordinates to OpenGL 2D so that mouse events such as button clicks work? Taking into account, of course, the current screen size/dimensions.

3. When the screen size/dimensions are different, how do I handle this? In my previous JavaScript 2D engine using canvas, I just had my working coordinates, performed a bitblt to copy my working canvas to the screen canvas, and scaled the mouse coordinates from there (roughly like the sketch below). In OpenGL, how do I work with multiple screen sizes? (This is more of an OpenGL ES question.)
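For reference, here is roughly the idea from my canvas engine, sketched in C++ (untested; the 800x600 virtual resolution and all names are just placeholders):

    // Work in a fixed "virtual" resolution and scale to whatever the real window is.
    const float VIRTUAL_W = 800.0f; // placeholder working resolution
    const float VIRTUAL_H = 600.0f;

    // Map a real mouse position (top-left origin) into virtual GUI coordinates.
    void mouseToVirtual(float mouseX, float mouseY,
                        float windowW, float windowH,
                        float& guiX, float& guiY)
    {
        guiX = mouseX * (VIRTUAL_W / windowW);
        guiY = mouseY * (VIRTUAL_H / windowH);
    }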

Lastly, if you guys know any books, resources, links, or tutorials that cover or discuss this, please let me know. I found one on the marekknows OpenGL game engine website, but it's not free.

I did not have any luck finding resources on Google for writing your own OpenGL GUI framework.

If there aren't any available online, just let me know what things I need to look into for OpenGL, and I will study them one by one to make it work.

Thank you, and looking forward to your replies.


No no no, you can pass pixel coordinates to the vertex shader and recalculate the actual clip-space vertex position there. Note that the Y axis is flipped: in OpenGL clip space, y starts at the bottom-left corner, while window/mouse coordinates start at the top-left.

So how do you do that in a shader? First you pass the vertex coordinates in pixels, then you pass the screen size in pixels. Then you compute something like nv.x = v.x / screen_width to get the position as a fraction of the screen; this gives you a number in 0..1. You need to end up with a number between -1 and 1, so you multiply by 2.0, subtract 1.0, and negate the Y result to account for the flipped axis.
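Something like this (written from memory and untested; the attribute/uniform names are just examples):

    #version 330 core
    layout (location = 0) in vec2 aPixelPos; // vertex position in pixels, top-left origin
    uniform vec2 uScreenSize;                // screen size in pixels

    void main()
    {
        vec2 ndc = aPixelPos / uScreenSize;  // 0..1 across the screen
        ndc = ndc * 2.0 - 1.0;               // remap to -1..1 (clip space)
        ndc.y = -ndc.y;                      // flip Y: pixel Y grows downward
        gl_Position = vec4(ndc, 0.0, 1.0);
    }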


Store the positions and sizes of your GUI elements in screen coordinates, and use those to check against the mouse position.

Set up a VAO and a VBO with a rectangle that is just 1.0f by 1.0f, with 0.0f as the depth. You can also disable depth testing before rendering and enable it again after.

Translate the model matrix (initialized to the identity matrix) by (xPos, yPos, 0.0f).

Scale the model matrix by (width, height, 1.0f).

Multiply the projection matrix (use an orthographic projection for 2D that never needs to simulate depth) by the model matrix, then send that one resulting matrix to the shader.

These are the steps I use. The first one ensures that the mouse checks work; the others ensure that the sprite renders at the same position.
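In code, the steps would look roughly like this (an untested GLM sketch; the function signature, uniform name, and quad setup are my own placeholders):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>
    #include <glm/gtc/type_ptr.hpp>
    // assumes an OpenGL loader header (e.g. glad or GLEW) is already included

    // Draw a widget stored as screen-space (x, y, w, h), top-left origin.
    void drawWidget(GLuint shader, GLuint quadVAO,
                    float x, float y, float w, float h,
                    float screenW, float screenH)
    {
        // Ortho with Y growing downward, so screen coordinates map 1:1.
        glm::mat4 projection = glm::ortho(0.0f, screenW, screenH, 0.0f, -1.0f, 1.0f);
        glm::mat4 model(1.0f);                                // identity
        model = glm::translate(model, glm::vec3(x, y, 0.0f)); // position
        model = glm::scale(model, glm::vec3(w, h, 1.0f));     // size
        glm::mat4 mvp = projection * model;                   // one matrix for the shader
        glUseProgram(shader);
        glUniformMatrix4fv(glGetUniformLocation(shader, "mvp"), 1, GL_FALSE, glm::value_ptr(mvp));
        glBindVertexArray(quadVAO);                           // the 1x1 quad (two triangles) at depth 0
        glDrawArrays(GL_TRIANGLES, 0, 6);
    }

    // The mouse check happens in the same screen coordinates, no unprojection needed:
    bool hitTest(float mx, float my, float x, float y, float w, float h)
    {
        return mx >= x && mx <= x + w && my >= y && my <= y + h;
    }

Because the orthographic projection here uses the same top-left pixel convention as the mouse, the stored screen coordinates serve both the hit test and the rendering.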




  • Similar Content

    • By LifeArtist
      Good Evening,
      I want to make a 2D game which involves displaying some debug information, especially for collision, enemy sights and so on...
      First off, I was thinking about all the shapes I will need for debugging purposes: circles, rectangles, lines, polygons.
      I am really stuck right now because of a fundamental question:
      Where do I store the vertex positions for each line (object)? Currently I am not using a model matrix, because I am using an orthographic projection and set the final position within the VBO. That means that if I add a new line, I would have to expand the "points" array and re-upload it (recall glBufferData) every time. The other method would be to use a model matrix and a fixed VBO for a line, but it would also be messy to create a line from exactly (0,0) to (100,20), calculating the rotation and scale to make it fit.
      If I proceed with option 1, "updating the array each frame", I was thinking of having 4 draw calls every frame: one each for the lines VAO, the polygons VAO, and so on (a rough sketch of this is at the end of this post).
      In addition to that, I am planning to use some sort of ECS-based architecture. So the other question would be:
      Should I treat those debug objects as entities/components?
      For me it would make sense to treat them as entities, but that creates a new issue with the previous array approach, because each one would have, for example, a transform and a render component, and a special render component for debug objects (no texture etc.)... For me the transform component is also just a matrix, but how would I then define a line?
      Treating them as components wouldn't be a good idea in my eyes, because then I would always need an entity. Well, an entity is just an ID!? So maybe it's a component?
      Regards,
      LifeArtist
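      For reference, option 1 as I picture it would look roughly like this (untested sketch; all names are placeholders):

          // One dynamic VBO holding all debug lines, re-uploaded when the set changes.
          // Assumes an OpenGL loader header is included and lineVAO already has the
          // position attribute configured as 2 floats per vertex.
          #include <vector>

          std::vector<float> linePoints; // x0,y0, x1,y1, ... in screen space

          void addLine(float x0, float y0, float x1, float y1)
          {
              linePoints.insert(linePoints.end(), { x0, y0, x1, y1 });
          }

          void uploadAndDrawLines(GLuint lineVAO, GLuint lineVBO)
          {
              glBindVertexArray(lineVAO);
              glBindBuffer(GL_ARRAY_BUFFER, lineVBO);
              // GL_DYNAMIC_DRAW, since the contents change often
              glBufferData(GL_ARRAY_BUFFER, linePoints.size() * sizeof(float),
                           linePoints.data(), GL_DYNAMIC_DRAW);
              glDrawArrays(GL_LINES, 0, (GLsizei)(linePoints.size() / 2));
          }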
    • By EddieK
      Hi, I am having this problem where I am drawing 4000 squares on screen, using VBOs and IBOs, but the framerate on my Huawei P9 is only 24 FPS. Considering it has an 8-core CPU and a pretty powerful GPU, I don't believe it is incapable of drawing 4000 textured squares at 60 FPS.
      I checked with DDMS and found out that most of the time was spent in the put() method of the FloatBuffer, but the strange thing is that if I draw these squares outside of the view frustum, the FPS increases, even though I'm not using frustum culling.
      If you have any ideas what could be causing this, please share them with me. Thank you in advance.
    • By EddieK
      Hi, so I am trying to implement packed VBOs with indexing in OpenGL, but I have run into problems. It worked fine when I had separate buffers for vertex positions, colors and texture coordinates, but when I tried to put everything into a single packed buffer, it completely glitched out. Here's the code I am using:
      this.vertexData.position(0);
      this.indexData.position(0);
      int stride = (3 + 4 + 2) * 4;
      GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);
      GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertexData.capacity()*4, vertexData, GLES20.GL_STATIC_DRAW);
      ShaderAttributes attributes = graphicsSystem.getShader().getAttributes();
      GLES20.glEnableVertexAttribArray(positionAttrID);
      GLES20.glVertexAttribPointer(positionAttrID, dimensions, GLES20.GL_FLOAT, false, stride, 0);
      GLES20.glEnableVertexAttribArray(colorAttrID);
      GLES20.glVertexAttribPointer(colorAttrID, 4, GLES20.GL_FLOAT, false, stride, dimensions * 4);
      GLES20.glEnableVertexAttribArray(texCoordAttrID);
      GLES20.glVertexAttribPointer(texCoordAttrID, 2, GLES20.GL_FLOAT, false, stride, (dimensions + 4) * 4);
      GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, buffers[3]);
      GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, indexData.capacity()*2, indexData, GLES20.GL_STATIC_DRAW);
      GLES20.glDrawElements(mode, count, GLES20.GL_UNSIGNED_SHORT, 0);
      The data in the vertex buffer is ordered like this:
      Vertex x, vertex y, vertex z, color r, color g, color b, color a, tex coord x, tex coord y, and so on... (and I am pretty certain that the buffer I'm using is in this order)
      This is the version of the code which worked fine:
      this.vertexData.position(0);
      this.vertexColorData.position(0);
      this.vertexTexCoordData.position(0);
      this.indexData.position(0);
      GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[0]);
      GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertexPositionData.capacity()*4, vertexPositionData, GLES20.GL_STATIC_DRAW);
      ShaderAttributes attributes = graphicsSystem.getShader().getAttributes();
      GLES20.glEnableVertexAttribArray(positionAttrID);
      GLES20.glVertexAttribPointer(positionAttrID, 4, GLES20.GL_FLOAT, false, 0, 0);
      GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[1]);
      GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertexColorData.capacity()*4, vertexColorData, GLES20.GL_STATIC_DRAW);
      GLES20.glEnableVertexAttribArray(colorAttrID);
      GLES20.glVertexAttribPointer(colorAttrID, 4, GLES20.GL_FLOAT, false, 0, 0);
      GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, buffers[2]);
      GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertexTexCoordData.capacity()*4, vertexTexCoordData, GLES20.GL_STATIC_DRAW);
      GLES20.glEnableVertexAttribArray(textCoordAttrID);
      GLES20.glVertexAttribPointer(textCoordAttrID, 4, GLES20.GL_FLOAT, false, 0, 0);
      GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, buffers[3]);
      GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, indexData.capacity()*2, indexData, GLES20.GL_STATIC_DRAW);
      GLES20.glDrawElements(mode, count, GLES20.GL_UNSIGNED_SHORT, 0);
      This is the output of the non-working code:

      From this picture I can see that some of the vertex positions are good, but for some reason every renderable object from the game has at least one vertex position with a value of 0.
      Thanks in advance,
      Ed
    • By QQemka
      Hello. I am coding a small thingy in my spare time. All I want to achieve is to load a heightmap (as the lowest possible walking terrain), some static meshes (elements of the environment) and a dynamic character (meaning I can move, collide with the heightmap/static meshes, and hold a varying item in a hand). I've got a bunch of questions, or rather problems I can't find solutions to myself. Nearly all deal with graphics/GPU, not the coding part. My C++ is at a high enough level.
      Let's go:
      Heightmap - I obviously want it to be textured, and the size is hardcoded to 256x256 squares. I can't have one huge texture stretched over the entire terrain, because every pixel would be enormous. That's why I decided to use 2 specialized textures: the first is a tileset consisting of 16 square tiles (u/v ranging from 0 to 0.25 for the first tile, and so on), and the second is a 256x256 buffer with values 0-15 giving the index into the tileset for every heightmap square. The problem is, how do I blend the edges nicely and make some computationally cheap changes so it's not obvious there are only 16 tiles? Is it possible to generate such terrain with some existing program?
      Collisions - I want to use a bounding sphere and an AABB. But should I store them per model or per entity instance? Meaning: I have 20 identical trees spawned from the same tree model, but every entity has its own transformation (position, scale, etc.). Storing a collision component per instance gives faster access and is precalculated and transformed (it takes additional memory, but who cares?), so I should stick with this, right? What should I do if an object is dynamically rotated? The AABB is no longer aligned, and recalculating the per-vertex min/max every time an object rotates/scales is pretty expensive, right? (See the transform sketch at the end of this post.)
      Drawing the AABB - a problem similar to the above (storing AABB data per instance or per model). This time, in my opinion, per model is enough, since every instance also does not have its own vertex buffer but uses the shared one (so 20 trees share a reference to one tree model). So rendering the AABB is a matter of taking the model's AABB, transforming it with the instance matrix, and voila. What about the AABB vertex buffer (this is more of a cosmetic question, just curious, I bumped into it while writing this)? Is it better to make it 8 points and an index buffer (12 lines), or only 2 vertices with the min/max x/y/z and have the shaders dynamically generate the 6 other vertices and draw the box? Or maybe there should be just ONE 1x1x1 cube template, moved/scaled per entity?
      What if one model has a diffuse texture and a normal map, and another has only a diffuse texture? Should I pass some bool flag to the shader with that info, or just assume that my game supports only diffuse maps without fancy stuff?
      There were several more questions, but I forgot or solved them while writing this.
      Thanks in advance
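      For clarity, the per-instance transform I have in mind would be something like this (untested GLM sketch; it transforms the 8 corners of the local AABB and takes the min/max, which is much cheaper than re-scanning every vertex):

          #include <glm/glm.hpp>
          #include <limits>

          struct AABB { glm::vec3 min, max; };

          // Recompute a world-space AABB from a model-space AABB and an instance matrix.
          AABB transformAABB(const AABB& local, const glm::mat4& m)
          {
              AABB out;
              out.min = glm::vec3( std::numeric_limits<float>::max());
              out.max = glm::vec3(-std::numeric_limits<float>::max());
              for (int i = 0; i < 8; ++i) // visit all 8 corners of the box
              {
                  glm::vec3 corner((i & 1) ? local.max.x : local.min.x,
                                   (i & 2) ? local.max.y : local.min.y,
                                   (i & 4) ? local.max.z : local.min.z);
                  glm::vec3 p = glm::vec3(m * glm::vec4(corner, 1.0f));
                  out.min = glm::min(out.min, p);
                  out.max = glm::max(out.max, p);
              }
              return out;
          }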
    • By RenanRR
      Hi All,
      I'm reading the tutorials from the learnopengl site (nice site) and I have a question about the camera (https://learnopengl.com/Getting-started/Camera).
      I have always seen the camera manipulated with lookAt, but in the tutorial I saw the camera being changed through the MVP matrices, which do not seem to move the camera, but rather the scene:
      Vertex Shader:
      #version 330 core
      layout (location = 0) in vec3 aPos;
      layout (location = 1) in vec2 aTexCoord;

      out vec2 TexCoord;

      uniform mat4 model;
      uniform mat4 view;
      uniform mat4 projection;

      void main()
      {
          gl_Position = projection * view * model * vec4(aPos, 1.0f);
          TexCoord = vec2(aTexCoord.x, aTexCoord.y);
      }

      Then, the matrices are manipulated:
      .....
      glm::mat4 projection = glm::perspective(glm::radians(fov), (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 100.0f);
      ourShader.setMat4("projection", projection);
      ....
      glm::mat4 view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
      ourShader.setMat4("view", view);
      ....
      model = glm::rotate(model, glm::radians(angle), glm::vec3(1.0f, 0.3f, 0.5f));
      ourShader.setMat4("model", model);
      So, some doubts:
      - Why use it like that?
      - Is it okay to manipulate the camera that way?
      - In this way, isn't it the vertex positions that change, rather than the camera?
      - Do I need to pass the MVP to all the shaders of the objects in my scenes?
       
      What it seems is that the camera stands still and it's the scenery that changes...
      Is that right?
       
       
      Thank you
       