Getting started with OpenGL


Greetings, OpenGL forum!

I've decided to use OpenGL for the graphical back end in a multipurpose editor I need for my various projects. I'm using OpenGL here because I think it's a great way to get into it and create something of use to myself without having to worry about performance being perfect.

I'm using C# and OpenTK to access OpenGL. I've got a few questions I'd like to throw at you. I won't lie; they are newbie questions, because honestly I'm sort of overwhelmed by the options available to me in OpenGL compared to Direct3D, which I've been using for years. I'm also targeting GL 2.0/2.1, not the more recent versions, as I want the editor to run on DX9-class hardware.

First things first. There are a dozen ways to render geometry in OpenGL compared to Direct3D: the original deprecated glVertex3f(...) immediate mode, display lists, vertex arrays, VBOs, etc. I'm not really sure if there's a preferred solution for general rendering (static and dynamic). I'm leaning towards VBOs, since that seems most in line with what I'm familiar with from Direct3D. I do like what I've read about display lists, though, and they seem like an interesting and downright neat way to render geometry. But since display lists seem to use the generally deprecated glVertex3f-style commands, are they too considered deprecated? I realize they're not exactly deprecated in the version of GL I'm targeting, but I would like to be somewhat current.

The second question I have is rendering quads vs. triangles. Is there actually any detriment or benefit to rendering quads compared to triangles? I know the driver has to convert quads into triangles, since that's what the hardware expects. There are a number of situations where I will want to render quads, and I'm not sure if I should just render them as quads or bite the bullet and create two triangles instead. I've always loved the idea that you have access to more primitives in GL than just triangles...
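(For what it's worth, if you do end up biting the bullet, the conversion is mechanical. A loader-side helper along these lines, with names of my own invention, not from any library, can expand quad indices into triangle indices once at load time:)

```c
#include <stddef.h>

/* Expand quad indices (4 per quad) into triangle indices (6 per quad)
 * so the same vertex data can be drawn with GL_TRIANGLES instead of
 * GL_QUADS. Each quad (a, b, c, d) becomes triangles (a, b, c) and
 * (a, c, d). Assumes the quads are convex and consistently wound. */
static void quads_to_triangles(const unsigned short *quads, size_t quad_count,
                               unsigned short *tris)
{
    for (size_t i = 0; i < quad_count; ++i) {
        const unsigned short a = quads[i * 4 + 0];
        const unsigned short b = quads[i * 4 + 1];
        const unsigned short c = quads[i * 4 + 2];
        const unsigned short d = quads[i * 4 + 3];
        tris[i * 6 + 0] = a; tris[i * 6 + 1] = b; tris[i * 6 + 2] = c;
        tris[i * 6 + 3] = a; tris[i * 6 + 4] = c; tris[i * 6 + 5] = d;
    }
}
```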

The last question I have is regarding shaders. Does anyone have any good tutorials they have found helpful that might help put me in the right direction regarding shaders in GL?

Thanks in advance for any and all help I receive regarding these questions! I've been using Direct3D for so long that I thought it was time to take a look at the other side of the fence for a change.


[quote]
Display Lists ... are they too considered deprecated?
[/quote]

Yes. Prefer VBOs over display lists.


[quote]
Is there actually any detriment or benefit to rendering quads compared to triangles?
[/quote]

Not really. Rendering with quads is deprecated now as well, so you might as well go ahead and do it with triangles. If it's easier for you to do it with quads, though, it's not going to hurt you.


[quote]
Does anyone have any good tutorials they have found helpful that might help put me in the right direction regarding shaders in GL?
[/quote]

I like this site for GLSL: http://www.lighthouse3d.com/opengl/glsl/

It's a little bit out of date compared to the latest and greatest, but it does a good job showing you how to set up shaders in your project (and things haven't changed that much). You also have the option to use Cg, since you're probably already familiar with HLSL (they are very similar), though that requires an external library and isn't part of the OpenGL specification like GLSL is. I've never used it, but it's an option if you're interested.

I actually think shaders might be tricky to learn from tutorials, because many don't bother to mention which GLSL version they are based on, and you can end up mixing old and new syntax in ways that will cause funny error messages. For example, new syntax uses in, out and inout, while older versions use varying. Some features require explicitly declaring a version to use, but declaring a new version "invalidates" old syntax (I ended up cursing a lot while finding a way to use certain extensions AND still use syntax that worked on both my desktop NVIDIA and my notebook ATI). Since you target 2.x that shouldn't be an issue, but you might run into sources that don't mention they are using 3+. So if you run into unfamiliar syntax and lots of errors in your shaders, that's the most likely reason.
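(To illustrate the split, here is a minimal sketch, with arbitrary attribute names of my own choosing, of the same pass-through vertex shader in GLSL 1.20 style, which matches a 2.x target, and in GLSL 3.30 style, which many newer tutorials assume without saying so:)

```glsl
// GLSL 1.20 (OpenGL 2.1): inputs are "attribute", outputs are "varying"
#version 120
attribute vec3 position;
attribute vec2 texcoord;
varying vec2 vTexcoord;
uniform mat4 mvp;
void main() {
    vTexcoord = texcoord;
    gl_Position = mvp * vec4(position, 1.0);
}
```

```glsl
// GLSL 3.30 (OpenGL 3.3): "in"/"out" replace "attribute"/"varying"
#version 330
in vec3 position;
in vec2 texcoord;
out vec2 vTexcoord;
uniform mat4 mvp;
void main() {
    vTexcoord = texcoord;
    gl_Position = mvp * vec4(position, 1.0);
}
```

(Mixing the two, say #version 330 together with varying, is exactly the kind of thing that produces those confusing compiler errors.)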

In terms of quads: the main benefit would be laziness, and if you're not using indexed primitives they save you 33% of your vertex data (4 vertices per quad instead of 6). With indexing you still save 33% of your indices, but that usually doesn't matter as much. You should also make sure that all four points lie on a plane, or that you really don't care in which direction your quad is split into triangles.
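(The 33% figure is quick arithmetic; this sketch assumes a hypothetical 32-byte vertex just for concreteness:)

```c
/* Bytes of non-indexed vertex data sent per quad, assuming a
 * hypothetical 32-byte vertex (e.g. position + normal + texcoord).
 * GL_QUADS sends 4 vertices per quad; GL_TRIANGLES sends 6. */
enum { VERTEX_SIZE = 32 };

static unsigned quad_bytes(unsigned quad_count)     { return quad_count * 4 * VERTEX_SIZE; }
static unsigned triangle_bytes(unsigned quad_count) { return quad_count * 6 * VERTEX_SIZE; }
```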

I think the problem with quads is that they are not triangles... I mean, if you import a model and use quads, you'd have to be careful that your imported mesh only contains quads, or you'd have to handle both quads and triangles. Then it's simpler to use only triangles and split quads/polygons when loading the mesh (split like a triangle fan, for example, though it may depend on the editor/exporter and the file format).
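(A fan split at load time is only a few lines; this hypothetical helper, assuming a convex polygon with indices already in winding order, shows the idea:)

```c
#include <stddef.h>

/* Split a convex polygon of n vertices (indices poly[0..n-1]) into
 * n - 2 triangles, fan-style, anchored at the first vertex. Writes
 * 3 * (n - 2) indices into out and returns the triangle count. */
static size_t fan_triangulate(const unsigned *poly, size_t n, unsigned *out)
{
    if (n < 3) return 0;
    for (size_t i = 0; i < n - 2; ++i) {
        out[i * 3 + 0] = poly[0];
        out[i * 3 + 1] = poly[i + 1];
        out[i * 3 + 2] = poly[i + 2];
    }
    return n - 2;
}
```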

Anyway, for other stuff I use quads, like GUI (well, for an editor I prefer native windows GUI), or HUD like things. Okay, in this case I usually just use immediate mode too...


[quote]
I think the problem with quads is that they are not triangles... I mean, if you import a model and use quads, you'd have to be careful that your imported mesh only contains quads, or you'd have to handle both quads and triangles. Then it's simpler to use only triangles and split quads/polygons when loading the mesh (split like a triangle fan, for example, though it may depend on the editor/exporter and the file format).

Anyway, for other stuff I use quads, like GUI (well, for an editor I prefer native windows GUI), or HUD like things. Okay, in this case I usually just use immediate mode too...
[/quote]

First of all, thank you everyone for your replies. I really appreciate it! This thread has really helped me get started in GL.

Regarding my intended use of quads, I didn't intend on using them for things like models. I was considering their use for screen space quads (text, hud, etc), 3d/2d debug overlays, etc. All my model work has always been in indexed triangles. Basically their use is extremely limited, but I like the concept of them for things like that. =)

Gaming GPUs don't support quads. They support triangles. OpenGL is from 1992, and it implemented technology that may or may not have been available on SGI workstations.
Whatever D3D offers is exactly what today's GPUs can do, so I recommend limiting yourself to what D3D offers, since it's a good indication of the "best path".

Render your stuff with glDrawRangeElements or glDrawElements. Use shaders to render everything.
Use IBO/VBO just as you would in D3D.
Use your own math lib to create matrices and upload them to your shader.
Do not use built in stuff in your shader like ftransform() or gl_ModelViewProjectionMatrix.
Use generic vertex attributes and not the old glVertexPointer.

PS : choosing GL 2.1 is a good idea if you want to aim for DX9 hardware.

I prefer using triangles but it's just a personal thing; there's nothing wrong with using quads and they work just fine for the kind of use case you have.

Regarding the rendering functions to use, I'd say that you'll find the transition easier if you stick with what's roughly equivalent to what you know from D3D. Hardware tends to be optimized to favour this kind of drawing anyway, so longer term it's what you'll want to be doing. DrawIndexedPrimitive kinda translates to glDraw(Range)Elements and DrawPrimitive kinda translates to glDrawArrays; the -UP versions would be client-side vertex arrays, the standard versions are VBOs. There's one subtle difference in that GL takes the actual vertex/index count as params, whereas D3D takes the primitive count; another difference is that D3D separates vertex layout from data, whereas GL doesn't.
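(That count convention is easy to trip over when porting, so a couple of trivial helpers, with hypothetical names just illustrating the relation, can capture it:)

```c
/* D3D's DrawIndexedPrimitive takes a primitive count; glDrawElements
 * and glDrawRangeElements take an element (index) count. */

/* Triangle list: 3 elements per primitive. */
static unsigned gl_elements_from_d3d_tri_list(unsigned primitives) { return primitives * 3; }
static unsigned d3d_primitives_from_gl_tri_list(unsigned elements) { return elements / 3; }

/* Triangle strip: elements = primitives + 2. */
static unsigned gl_elements_from_d3d_tri_strip(unsigned primitives) { return primitives + 2; }
```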

For the GUI/HUD stuff glBegin/glEnd may be perfectly adequate; it's not going to be a bottleneck so long as you don't overdo it, and it's quite powerful to be able to specify any kind of vertex data without having to worry about your VBO strategy for it.

Congratulations on the decision to learn both, by the way. I went in the opposite direction, and knowledge of both is something I've definitely found useful.

Quite a while ago I wrote a number of articles about learning OpenGL.

Based on what you said in your thread, the two tutorials dealing with rendering OpenGL primitives (Intro to OpenGL Primitives and Drawing OpenGL Primitives) are probably of most interest to you.

Introduction to 3D
OpenGL Compiler Setup
Creating an OpenGL Window
Intro to OpenGL Primitives
Drawing OpenGL Primitives

OpenGL Color
OpenGL 3D Transformations
Creating 3D Models in OpenGL
OpenGL Light: how light works in OpenGL, etc.
Create an OpenGL Window from Scratch: OpenGL base code
Create a Win32 Window in C++



Hope this helps


gregsidelnikov, that is old. The OP wants to stay modern even though he will be using GL 2.1. You might want to write about GL 3.x.


[quote]
Gaming GPUs don't support quads. They support triangles.
[/quote]


They, or their drivers, support them well enough that non-indexed quads outperform non-indexed triangles by reducing the amount of data being moved around; they handle the primitive task of duplicating vertices to triangulate a quad quite well. I guess if someone wants to be completely paranoid about quads being dropped entirely by the vendors, one can still add a few wrapper functions that handle it somewhere convenient. For all the simple GUI stuff (not to mention the current trend of Minecraft clones), quads are just fine for now.


[quote]
gregsidelnikov, that is old. The OP wants to stay modern even though he will be using GL 2.1. You might want to write about GL 3.x.
[/quote]




Not old enough to be referenced in a recently published book on Android development.

