OpenGL Max performance and support

This topic is 1154 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.


Recommended Posts

After working on my engine for a while, I finally got to that point where I felt like it was time for a major refactoring!

Cleaning up everything, pointing out the things that work, the things that don't or that should change, and what not.

 

Well, this got me thinking. One major thing I don't have and really don't know how to do is:

How do you check if a user's hardware can actually handle the OpenGL calls you're using?

 

Do you call them all on start-up and check if you get an OpenGL error back? (I hope it's not something like this.)

Or is it automatically done when you make your OpenGL context and target a specific version? Basically, you get a guarantee that all the calls for that version are usable, e.g. I make a context for OpenGL 3.3, giving me guaranteed support for all OpenGL 3.3 and below calls.

 

On a side note, one of my main things is a sprite batcher, which means lots of dynamic data is being sent to the GPU per frame.

So I was wondering what would be a solid performance test? How do I know I'm getting close to that 'magic number' that says "Using what you got now, you really can't do any better"?


I'm using a two-level mechanism.

 

At application start-up, a context is created with the highest version supported by the application. If that fails, the next lower version is tried. If a context was made successfully, then the corresponding dynamic library (with the rendering implementation) is loaded. The library was linked statically against all OpenGL functions that can be expected to exist for that version; hence, if it fails to load, the context is destroyed and the next lower version is tried. This continues until the lowest supported version has failed, which, of course, means that the game cannot be run.

 

After successfully loading a library, its initialization code creates its own context and looks for extensions that provide better implementations of some features. If an extension isn't found, the particular basic implementation is used. Graphics rendering is controlled by enqueued rendering jobs, so it is already a kind of data-driven routine invocation; the library's initialization code hence points some of its job-executing function pointers at the extension-based code.

Edited by haegarr


In theory, checking the GL_VERSION supported by your driver and cross-checking that with the appropriate GL specification should tell you what you need.

 

In practice OpenGL itself makes no guarantee that any given feature is going to be hardware accelerated.  It's perfectly legal for a driver to advertise a feature and have full support for it, but to drop you back to a software emulated path if you actually try to use it.

 

OpenGL gives you absolutely no way of knowing if this is going to happen, and if you drop back to software emulation per-vertex you may not even notice it - it will be slower but it may not be sufficiently slow for you to clearly determine if it's a software fallback or if it's just a more generic performance issue in your own code.

 

If you drop back to software emulation at a deeper part of the pipeline - per fragment or in the blend stage - you'll almost definitely notice it because you'll be getting about 1 fps.

 

The only way to satisfactorily know this is to know which features do or don't play well with current hardware and any previous generations you want to support.  For example, some of the earliest GL 2.0 hardware (from 10 years ago now, so you almost definitely don't want to support it, but it's useful to cite as an example) advertised support for non-power-of-two textures but only supported them in a software emulated path, so it's hello to 1 fps.

 

A rule of thumb might be to only rely on a feature being hardware accelerated if it's from a GL_VERSION or two below what the driver advertises, but that's obviously also a (fairly heavy-handed) constraint - most features are actually perfectly OK, but it's those handful that cause trouble that you need to watch out for.

I think both of you are saying yes, there is an overall check for supported features for that version: if I can successfully create an OpenGL context using my desired target version, then any function documented to be a part of that version can be used.

Things I'm still kind of shady on:
Does successfully creating an OpenGL context (e.g. say an OpenGL 4.0 context gets created successfully) give me the guarantee that lower versions of OpenGL functions (e.g. OpenGL 3.3) can be used too? Not functionality in terms of the hardware, like supporting textures that are non-power-of-two (or is this considered OpenGL functionality?), but in the sense of being able to use glMapBufferRange or glBufferSubData without having separate contexts.

Or is this interchangeable, because a higher version of OpenGL automatically has support for lower-level OpenGL functions?
 

The library was linked statically against all OpenGL functions that could be expected to exist for that version


Are you implying that this is automatically done when the context is created?
As in, if I do nothing special (no library, no from-scratch implementation) and just create an OpenGL context targeting my application's highest supported version, will I automatically get support for that version's functions?

Or are you saying that the library that you are using does some additional work to check?
 

After successfully loading a library, its initialization code creates an own context, and looks up for extensions that will provide better implementations for some features.


Are you talking about the ARB extensions here, i.e. functions from extensions such as GL_ARB_map_buffer_range?
 

In practice OpenGL itself makes no guarantee that any given feature is going to be hardware accelerated

Can you force OpenGL to only use hardware-accelerated features? If you can, how is this done?

Edited by noodleBowl


In practice OpenGL itself makes no guarantee that any given feature is going to be hardware accelerated

Can you force OpenGL to only use hardware-accelerated features? If you can, how is this done?

No, there is no such notion of hardware acceleration. OpenGL was created in a time when graphics commands were sent from a client computer to a central server running SGI hardware over Ethernet.
The main goal was to have the graphics rendered at all costs.

However, mhagain may be exaggerating the situation with respect to current OpenGL implementations; software fallbacks were very common and annoying until 5 or 6 years ago, but I haven't seen a GL implementation that falls back to software rendering in quite a long time.



In practice OpenGL itself makes no guarantee that any given feature is going to be hardware accelerated.  It's perfectly legal for a driver to advertise a feature and have full support for it, but to drop you back to a software emulated path if you actually try to use it.
Do you know of any concrete example of an implementation that provides an OpenGL 3.2+ core context but emulates some features in software? I keep hearing this, yet, while it is true per the spec, the claim is never followed by examples of situations in which it actually happened.


In practice OpenGL itself makes no guarantee that any given feature is going to be hardware accelerated.  It's perfectly legal for a driver to advertise a feature and have full support for it, but to drop you back to a software emulated path if you actually try to use it.

Do you know of any concrete example of an implementation that provides an OpenGL 3.2+ core context but emulates some features in software? I keep hearing this, yet, while it is true per the spec, the claim is never followed by examples of situations in which it actually happened.

Not 3.2, but on 2.1 I used dynamic indexing of an array of uniform variables inside a fragment shader, and my FPS dropped from 60 to 1 -- a sure sign that the driver had reverted to software emulation.


but I haven't seen a GL implementation that fallbacks to software rendering in quite a long time.

iOS partially does in certain circumstances, such as when attributes in a vertex buffer are misaligned.
I say “partially” because it isn't emulating the full rendering pipeline; it just adds a huge CPU cost because it manually makes an aligned copy of the vertex data on the CPU every frame. This causes a drastic change in performance without giving you a clue that it is happening, making it basically the same problem as going into emulation mode (nothing on iOS is fully emulated; it either is hardware accelerated or it fails).

 

 

In my own engine for desktop OpenGL 3.3/4.5, I suspect I’ve hit a slow path unknowingly too.

I’ve been very careful with its development and putting in a lot of effort to make sure the OpenGL ports run as close to the performance of Direct3D 11 as they can, and until last week I was at roughly 80-90%.

Suddenly, after getting some new models for play-testing, Direct3D 9 and Direct3D 11 are around 11,000 and 14,000 FPS respectively, whereas OpenGL dropped to 400 FPS.

 

I intend to allocate some time to investigate this in detail this weekend, but basically while full-on emulation is rare these days, there are still a million cases that cause it to do unnecessary CPU work.

 

 

L. Spiro


Not 3.2, but on 2.1 I used dynamic indexing of an array of uniform variables inside a fragment shader, and my FPS dropped from 60 to 1 -- a sure sign that the driver has reverted to software emulation 

Of course, but I asked for OpenGL 3 hardware examples for a reason: to raise the point that this complaint keeps coming up. It seems as if OpenGL users are like the ARB, holding onto the past too much.

 

In any case, knowing if feature X is emulated in Y cards is good knowledge to have, which is also why I asked.

 

 

 

iOS partially does in certain circumstances, such as when attributes in a vertex buffer are misaligned.

i.e., not aligned to 16 bytes? ES 2? I've seen it thrown around that explicit 16-byte alignment is good for some desktop hardware too, AMD cards apparently. I'm assuming if you're using another API (Mantle? ES 3?) you'd have to do proper alignment in any case.

Edited by TheChubu


ie, not aligned to 16 bytes?

Not aligned according to the guidelines here.

At least that page mentions the extra work that needs to be done. For every other OpenGL implementation you are just guessing.

 

 

L. Spiro
