About Lifepower
  1. I have separate code paths for ARB_direct_state_access (either under OpenGL 4.5 or when exposed as an extension), EXT_direct_state_access and the "non-DSA" way, which simulates DSA by preserving the state. However, I mostly focus on maintaining the DSA versions, as I haven't found any OpenGL 3.3 hardware (the minimum for my framework) that doesn't support it. The issue I'm having on the Intel driver occurs both with EXT_direct_state_access (exposed by Intel's drivers on Windows) and with the non-DSA approach.

     Regarding the "glGetIntegerv performance impact": I have macros to disable that, along with "glGetError". However, in my own benchmarks, performed on 7 different AMD and Nvidia graphics cards each, and on Intel graphics cards ranging from OpenGL 3.3 to 4.5 support, on Windows and Linux, I've found a negligible performance difference when massively updating buffers and issuing draw calls: less than 1/10th of 1%. If you know a specific combination of graphics card, OS and driver version where this does make a significant difference, I would definitely be interested in testing it. I've read that in OpenGL ES, especially on older devices with OpenGL ES 2, "glGet" calls may indeed hurt performance, but at least on desktop this doesn't seem to be an issue. Even if it were, the design of my framework requires not modifying OpenGL state unless that is part of the method's purpose (e.g. activate shader, bind texture, etc.), which is why I'm focusing almost exclusively on the DSA way.
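     The save-and-restore pattern described above can be wrapped in a small RAII guard. The sketch below is hypothetical and deliberately GL-free (the framework's actual helpers aren't shown): the Get/Set callables stand in for glGetIntegerv(GL_UNIFORM_BUFFER_BINDING, ...) and glBindBuffer(...), so the idea can be illustrated without a GL context.

```cpp
#include <functional>

// ScopedBinding saves the current binding on construction, binds a
// temporary object for the duration of the scope, and restores the
// previous binding on destruction. In real GL code, Get would wrap
// glGetIntegerv on the target's binding query and Set would wrap
// glBindBuffer, so caller-visible state is left untouched.
class ScopedBinding {
public:
    using Get = std::function<unsigned()>;
    using Set = std::function<void(unsigned)>;

    ScopedBinding(Get get, Set set, unsigned temporary)
        : set_(std::move(set)), previous_(get()) {
        set_(temporary); // bind our object while the guard is alive
    }
    ~ScopedBinding() { set_(previous_); } // restore the caller's binding

    ScopedBinding(const ScopedBinding&) = delete;
    ScopedBinding& operator=(const ScopedBinding&) = delete;

private:
    Set set_;
    unsigned previous_;
};
```

     With a guard like this, a buffer-update method can bind its own handle, map and write, and fall out of scope without leaving the binding changed, which is exactly the behavior the non-DSA path needs to emulate.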
  2. Thank you for the replies. I've tried installing an updated driver from Intel on a machine with Intel HD Graphics 4600, but the result was the same. I'll try to contact Intel dev support with a simple example reproducing the issue.
  3. As the title says, it seems that updating the contents of a UBO while it is currently bound to a slot doesn't work on Windows 10 + Intel HD Graphics. I can reproduce this issue on several machines with a clean, fully updated Windows 10 install (including the 1607 update) and the graphics driver that the OS installed by itself, each having an Intel HD Graphics card, either Intel HD Graphics 4600 or Intel HD Graphics 5300. My rendering schedule is basically the following:

     1. Activate the appropriate shader program.
     2. Attach UBOs to the appropriate slots of the shader program.
     3. Activate VAO for model 1.
     4. Update UBOs with the appropriate parameters (matrices, light parameters).
     5. Draw call for model 1.
     6. Activate VAO for model 2.
     7. Update UBOs with new parameters.
     8. Draw call for model 2.
     9. Repeat steps #6-8 for models 3, 4, ..., N-1, N (it is the same mesh, just using different data in the UBO).

     The above scheme appears to work just fine on a set of Nvidia and AMD graphics cards that I've tried, both on Windows and Linux; it also works on Intel HD Graphics cards under Linux. However, it doesn't seem to work on Intel HD Graphics under Windows 10 with the driver installed by the OS. The contents of the UBO don't seem to update in step #7 and keep the old data uploaded in step #4. I have different code paths for "EXT_direct_state_access", "ARB_direct_state_access" and a non-DSA approach, but the issue is exactly the same (on Intel HD Graphics cards with Windows 10, "ARB_direct_state_access" is actually not exposed, so I'm not using it there). Basically, the non-DSA code that exhibits the issue on that configuration is:

     // VAO is activated before this.

     // (a) Create UBO (note: this chunk of code is called at startup, it is not part of the rendering loop).
     glGenBuffers(1, &bufferHandle);
     glGetIntegerv(bufferTargetToBinding(bufferTarget), reinterpret_cast<GLint*>(&previousBinding)); // simulate DSA way
     glBindBuffer(bufferTarget, bufferHandle); // bufferTarget is GL_UNIFORM_BUFFER
     glBufferStorage(bufferTarget, bufferSize, nullptr, GL_MAP_WRITE_BIT);
     glBindBuffer(bufferTarget, previousBinding);

     // (b) Bind UBO.
     glBindBufferRange(bufferTarget, bufferChannel, bufferHandle, bufferOffset, bufferSize); // bufferOffset is 0

     // (c) Update UBO.
     glGetIntegerv(bufferTargetToBinding(bufferTarget), reinterpret_cast<GLint*>(&previousBinding)); // simulate DSA way
     glBindBuffer(bufferTarget, bufferHandle);
     mappedBits = glMapBufferRange(bufferTarget, mapOffset, mapSize, GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT); // mapOffset == 0, mapSize == bufferSize
     std::memcpy(mappedBits, data, mapSize);
     glUnmapBuffer(bufferTarget);
     glBindBuffer(bufferTarget, previousBinding);

     // (d) Draw call.
     glDrawArrays(topology, baseVertex, vertexCount);

     // (e) Repeat (c) and (d) for other models.

     If instead of "glMapBufferRange" I use "glBufferSubData" to update the contents, the issue is less pronounced, but still exists (some models seem to jump back and forth between old and new positions as specified in the UBO). Note that the issue occurs only on Windows 10 / Intel HD Graphics cards, not anywhere else. I have found two workarounds: one is to call "glFinish" right after "glDrawArrays", which seems to fix the problem; another is to call "glBindBufferBase" to unbind the UBO before updating its contents, then bind it again:

     glBindBufferBase(bufferTarget, bufferChannel, 0); // unbind UBO
     // Update UBO contents here as in the code above, step (c).
     glBindBufferRange(bufferTarget, bufferChannel, bufferHandle, bufferOffset, bufferSize); // bind buffer back

     However, both of these workarounds seem to impact the performance.

     I couldn't find anything in the GL spec mentioning that buffer objects need to be unbound first, or "glFinish" called, before updating their contents. So my question is: is the issue I'm experiencing just a driver bug, or should buffer objects really be unbound before updating their contents?

     P.S. I'm using very similar code to update VBOs as well, and they also exhibit the same issue on Intel HD Graphics cards and Windows 10, albeit to a lesser degree, simply because I don't update them often.
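     The snippets in the post rely on a bufferTargetToBinding helper that is never shown. A plausible minimal sketch of such a mapping is below; the enum values are the standard ones from glcorearb.h, and only three common targets are handled here (the real helper presumably covers more).

```cpp
#include <cstdint>

// Enum values as defined in the OpenGL core headers (glcorearb.h).
constexpr uint32_t GL_ARRAY_BUFFER                 = 0x8892;
constexpr uint32_t GL_ELEMENT_ARRAY_BUFFER         = 0x8893;
constexpr uint32_t GL_ARRAY_BUFFER_BINDING         = 0x8894;
constexpr uint32_t GL_ELEMENT_ARRAY_BUFFER_BINDING = 0x8895;
constexpr uint32_t GL_UNIFORM_BUFFER               = 0x8A11;
constexpr uint32_t GL_UNIFORM_BUFFER_BINDING       = 0x8A28;

// Maps a buffer target to the pname accepted by glGetIntegerv for
// querying the buffer currently bound to that target; returns 0 for
// targets this sketch doesn't handle.
constexpr uint32_t bufferTargetToBinding(uint32_t target) {
    switch (target) {
    case GL_ARRAY_BUFFER:         return GL_ARRAY_BUFFER_BINDING;
    case GL_ELEMENT_ARRAY_BUFFER: return GL_ELEMENT_ARRAY_BUFFER_BINDING;
    case GL_UNIFORM_BUFFER:       return GL_UNIFORM_BUFFER_BINDING;
    default:                      return 0;
    }
}
```

     Note that for GL_UNIFORM_BUFFER this queries only the generic binding point touched by glBindBuffer; the indexed slots set by glBindBufferRange/glBindBufferBase are separate state and are queried with glGetIntegeri_v instead.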
  4. Wicked Defense 2 re-released as freeware

     Similarly, we have released an earlier version of Wicked Defense 1 and re-released Aztlan Dreams. The latter is a turn-based puzzle strategy game with role-playing elements, which first appeared back in 2009. Below are some screenshots of Aztlan Dreams:
  5. Previously, Wicked Defense 2 was released under commercial terms back in 2009. It was a sequel to Wicked Defense 1, which first appeared in 2007. Since it has been around 7 years since the release, and especially now that the Ixchel Studios web site has gone offline, we have decided to re-release the game as freeware.

     This is basically a tower-defense, real-time 3D strategy game, where you have to build and upgrade towers, and cast instant spells to prevent large hordes of monsters from reaching their destination. It uses abstract conceptual graphics and gameplay.

     The game's engine uses shaders, the CPU's vector instructions and geometry instancing to produce procedural 3D models in real time. It mainly does so using DX9/SM2 and a certain number of instances per batch, although we did have an experimental back-end running on DX8 with v1.1 shaders. In some scenarios (e.g. "Rebellion") there could at times be over a million triangles per frame, e.g. when the whole group of "Weaver" monsters gets cloned and then sub-divides. The engine also features volumetric visual effects such as lightning, rays, beams and particle showers.

     Some screenshots are shown below. For more screenshots and the actual game installer, you can visit the game's official web site at: http://asphyre.net/games/wd2
Important Information

By using GameDev.net, you agree to our community Guidelines, Terms of Use, and Privacy Policy.