Vincent_M

OpenGL
Pros and Cons for Batching Sprites

9 posts in this topic

I thought sprite batches were the way to go for the last 3 years, but the more I use my sprite batch system, the more I wonder whether it's actually helping. I understand the benefits: fewer texture swaps, fewer shader swaps, and fewer draw calls overall.

 

The way I use my sprite batch class is to accomplish what I described above. I have a Sprite class which I create instances of and add to my SpriteBatch instance, which also holds a reference to a texture atlas. I can set up each sprite's blitting data, transform it, etc., and the SpriteBatch keeps track of every sprite's vertices in a single array.
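The pattern I describe can be sketched roughly like this (a minimal, hypothetical version with illustrative names, not code from my actual engine):

```cpp
#include <cstddef>
#include <vector>

// One vertex of a 2D sprite: position plus texture coordinates.
struct Vertex { float x, y, u, v; };

// Hypothetical minimal batch: each sprite owns 4 vertices inside one
// shared array that would be uploaded with a single glBufferData call.
class SpriteBatch {
public:
    // Reserves 4 vertices for a new sprite and returns its base index.
    std::size_t addSprite() {
        std::size_t base = vertices.size();
        vertices.resize(base + 4);
        return base;
    }

    // Software-transforms one sprite's quad (translate + scale only).
    void setQuad(std::size_t base, float px, float py, float w, float h) {
        const float xs[4] = {0, 1, 0, 1};
        const float ys[4] = {0, 0, 1, 1};
        for (int i = 0; i < 4; ++i) {
            vertices[base + i].x = px + xs[i] * w;
            vertices[base + i].y = py + ys[i] * h;
            vertices[base + i].u = xs[i];
            vertices[base + i].v = ys[i];
        }
    }

    std::size_t vertexCount() const { return vertices.size(); }

    std::vector<Vertex> vertices; // fed to the GPU in one upload per frame
};
```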

 

Life is good, isn't it? Then, I started analyzing the cons I've found using them:

-Most sprites aren't using the same texture all that frequently unless you're using a tile map, but tile maps can be handled as a special-case anyway. Outside of that, it's very rare to have the same sprite show up multiple times onscreen.

-Batching requires all sprites to software-transform the vertices they own in the vertex array in OpenGL ES 2.0 --very common right now for mobile games

-If you wanted to make a sprite invisible because you didn't want to render it at that time, you couldn't just simply not render it. You'd have to give that sprite's vertices an alpha value of zero so they're completely transparent. This is a blending performance killer on mobile devices

-Frustum culling is inefficient. If you use a sprite batch for a large area that has many of the same sprite, say, the same enemy, you'll be able to draw them all at once easily, but usually only a couple of them, if any, are onscreen at a time

-Color data is needed per vertex to apply a color overlay to a sprite instead of a single uniform because its vertices are batched with the rest

-Any time you want to draw a sprite, you need a sprite batch. This is a pain if you're making an RPG with 4 main characters where they each have their own batches. Each character needs its own sprite batch for their sprite even though they're the only instance that'll ever need it at a time

-Storing a sprite in multiple lists is a pain as well. I'm writing a 2D engine where each Sprite is a scene object with its own transform matrix. Not only does it have to be included in the scene's object dictionary (STL map), but its corresponding sprite batch also needs to keep a pointer to it in its own list. This is problematic because if the Sprite is released from memory, I now need to make sure both the batch and the scene know it.

 

Just my thoughts


I'm currently having similar thoughts about implementing a sprite batcher, and your post even got me thinking about some problems I hadn't considered before. As such, keep in mind that everything that follows is off the top of my head and may or may not work, as I have yet to implement this myself.

 


-Batching requires all sprites to software-transform the vertices they own in the vertex array in OpenGL ES 2.0 --very common right now for mobile games

If done at load time this shouldn't be a problem, no?

I'm assuming you only geometry-batch static sprites.

 


-If you wanted to make a sprite invisible because you didn't want to render it at that time, you couldn't just simply not render it. You'd have to give that sprite's vertices an alpha value of zero so they're completely transparent. This is a blending performance killer on mobile devices

Maybe increasing the number of draw calls so you only draw the visible indices would help. Once an object has been invisible for more than a certain number of frames, or more than X objects are hidden, rebuild the indices to contain only the visible instances. Keeping some pools of indices would help limit the work done when rebuilding them.
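That index-rebuild idea might look roughly like this (just a sketch off the top of my head; it assumes 4 vertices and 6 indices per sprite quad):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Rebuilds an index buffer containing only the visible sprites, so a
// single glDrawElements call skips hidden quads entirely.
std::vector<std::uint16_t> buildVisibleIndices(const std::vector<bool>& visible)
{
    // Index pattern of one quad, relative to its first vertex.
    static const std::uint16_t quad[6] = {0, 1, 2, 2, 1, 3};

    std::vector<std::uint16_t> indices;
    indices.reserve(visible.size() * 6);
    for (std::size_t s = 0; s < visible.size(); ++s) {
        if (!visible[s]) continue; // hidden sprite: emit nothing
        for (int i = 0; i < 6; ++i)
            indices.push_back(static_cast<std::uint16_t>(s * 4 + quad[i]));
    }
    return indices;
}
```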

 


-Frustum culling is inefficient. If you use a sprite batch for a large area that has many of the same sprite, say, the same enemy, you'll be able to draw them all at once easily, but usually only a couple of them, if any, are onscreen at a time

Try force-splitting the batches into chunks of a certain area. Say, a chunk only covers a 512x512 pixel area at most (only count the sprites' centers, not their dimensions). This will make even fewer sprites fit in a batch, but maybe it's still worth it... It may, however, add to the difficulty of properly sorting the faces to render front-to-back (if you need that).

 


-Any time you want to draw a sprite, you need a sprite batch. This is a pain if you're making an RPG with 4 main characters where they each have their own batches. Each character needs its own sprite batch for their sprite even though they're the only instance that'll ever need it at a time

Assuming your engine is aware of the difference between static sprites and dynamic sprites, the performance loss of putting your character sprites through the system should be negligible.

The interface you use to feed the sprites to your engine should also be simple enough that the user will barely notice that the sprites will get batched before rendering.

 


-Storing a sprite in multiple lists is a pain as well. I'm writing a 2D engine where each Sprite is a scene object with its own transform matrix. Not only does it have to be included in the scene's object dictionary (STL map), but its corresponding sprite batch also needs to keep a pointer to it in its own list. This is problematic because if the Sprite is released from memory, I now need to make sure both the batch and the scene know it.

There are probably many solutions to this problem. Maybe your Sprite class should be the one notifying the batcher of its existence and destruction, in its ctor and dtor respectively? Or you could use something like boost::signals to expose some events on the sprite, e.g. sprite->getSignal_onDestroy().connect(...). Or implement a sprite::addListener( ISpriteListener* ).
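A rough sketch of the ctor/dtor idea (hypothetical interface, untested; the point is that registration and unregistration can't be forgotten):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

class Sprite; // forward declaration

// Hypothetical batch that sprites register with automatically.
class SpriteBatch {
public:
    void add(Sprite* s)    { sprites.push_back(s); }
    void remove(Sprite* s) {
        sprites.erase(std::remove(sprites.begin(), sprites.end(), s),
                      sprites.end());
    }
    std::size_t count() const { return sprites.size(); }
private:
    std::vector<Sprite*> sprites;
};

// The sprite notifies its batch on construction and destruction, so the
// batch's pointer list can never outlive the sprite it points to.
class Sprite {
public:
    explicit Sprite(SpriteBatch& b) : batch(b) { batch.add(this); }
    ~Sprite() { batch.remove(this); }
private:
    SpriteBatch& batch;
};
```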

 

 

The solutions also depend on the type of scene: how many static sprites there are, how many dynamic ones, whether instancing is used, etc.


-Most sprites aren't using the same texture all that frequently unless you're using a tile map, but tile maps can be handled as a special-case anyway. Outside of that, it's very rare to have the same sprite show up multiple times onscreen.

If performance is critical and your sprite usage is very dynamic (i.e. you can't create a texture atlas offline), you need to create a system that takes the textures being used and batches them into a single texture atlas at runtime.
It's not easy, but it isn't that hard either.

If you're developing for DX11/GL3, then you can use texture arrays instead, which are much simpler and solve the issue.
 

-Batching requires all sprites to software-transform the vertices they own in the vertex array in OpenGL ES 2.0 --very common right now for mobile games

And this is a problem because...?
 

-If you wanted to make a sprite invisible because you didn't want to render it at that time, you couldn't just simply not render it. You'd have to give that sprite's vertices an alpha value of zero so they're completely transparent. This is a blending performance killer on mobile devices

If you're manipulating the sprite vertices to change the alpha value, then:
  • You can just not include the vertices when regenerating the vertex buffer.
  • If you're overwriting just a subregion, a cleverer approach than setting alpha to 0 is to move the vertices out of the viewport region. The triangles will be clipped, so no pixel processing power is wasted on them; you only pay for the vertex processing.

-Frustum culling is inefficient. If you use a sprite batch for a large area that has many of the same sprite, say, the same enemy, you'll be able to draw them all at once easily, but usually only a couple of them, if any, are onscreen at a time

Yes. This is a trade-off. Partition your batch into smaller batches, but not so small that you end up with one batch per sprite.
Additionally, if your API supports instancing (or you're passing each sprite position through a constant register) then culling is still possible. If you can draw up to 80 sprites but only 2 are visible, then divide the number of vertices by 40.
The fact that your vertex buffer can hold 80 * 4 vertices (assuming 4 vertices per sprite) doesn't mean you can't pass less to the draw call.
If you're using instancing, just pass a lower instance count.
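As a sketch, counting the visible sprites for such a partial draw might look like this (a simple AABB overlap test; the GL call itself appears only as a comment since it needs a live context):

```cpp
#include <vector>

// Axis-aligned bounding box in screen/world units.
struct AABB { float minX, minY, maxX, maxY; };

// Returns how many sprites intersect the view rectangle. With
// instancing you'd pass this as the instance count; without it, and
// with the visible sprites packed at the front of the buffer, you'd
// draw count * 6 indices, e.g.:
//   glDrawElements(GL_TRIANGLES, count * 6, GL_UNSIGNED_SHORT, 0);
int countVisible(const std::vector<AABB>& sprites, const AABB& view)
{
    int n = 0;
    for (const AABB& s : sprites) {
        bool outside = s.maxX < view.minX || s.minX > view.maxX ||
                       s.maxY < view.minY || s.minY > view.maxY;
        if (!outside) ++n;
    }
    return n;
}
```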

-Any time you want to draw a sprite, you need a sprite batch. This is a pain if you're making an RPG with 4 main characters where they each have their own batches. Each character needs its own sprite batch for their sprite even though they're the only instance that'll ever need it at a time

I prefer a more generic batching system that accepts "any" kind of sprite and batches it together (generating the atlas on the fly) and then reuse. However this only works well if the sprites can be grouped by shared properties that last long enough.

Indeed, batching isn't a silver bullet; it does come with trade-offs and problems. But generally speaking it does its job of improving performance (with some exceptions, when all the content is too dynamic), and some of the problems you're mentioning are easily solvable.

I only transform the vertices if something has changed since the last frame, which is nice if the sprite is static. A downside to consider, however, is that each sprite requires a second set of position, rotation and scale data, which can be bloating. If even just one thing changes, you have to re-multiply the translation, rotation and scale matrices together to get the new transform matrix, then pre-multiply that by its parent matrix if it's attached to another scene element, and finally multiply each vertex by the result. It can be memory-intensive too: my old sprite module's size was around 1500 bytes per sprite instance! It had all kinds of features, but imagine using thousands of those instances as static sprites in a tile map. That's megabytes of just metadata (not counting the actual vertex/index data in the batch).

 

I've thought about index pools, as alpha blending combined with limited fillrate on mobile devices is a huge restriction. Having to hide a 256x256 sprite with alpha blending would hurt from all the texture look-ups alone.

 

Btw, as far as sorting goes, I use the depth buffer along with 3D coordinates in my vertex format. I thought about trimming the vertex struct down to 2D positions with a "depth" value per sprite, but that won't work with batching since OpenGL ES 2.0 doesn't support UBOs like desktop OpenGL 3.3 does. Because of this, color and depth information per sprite is copied into each of the sprite's vertices until we start seeing OpenGL ES 3.0 hardware.

 


Vincent_M, on 25 Aug 2013 - 5:30 PM, said:

-Any time you want to draw a sprite, you need a sprite batch. This is a pain if you're making an RPG with 4 main characters where they each have their own batches. Each character needs its own sprite batch for their sprite even though they're the only instance that'll ever need it at a time
Assuming your engine is aware of the difference between static sprites and dynamic sprites, the performance loss of putting your character sprites through the system should be negligible.
The interface you use to feed the sprites to your engine should also be simple enough that the user will barely notice that the sprites will get batched before rendering.

 

By static, do you mean moving/non-moving? My sprite batch just contains a single STL vector of vertices for all sprites attached to it. Having only a single sprite in a sprite batch wouldn't be slower than drawing sprites individually, but it's more setup code as you'd have to load the texture, allocate the sprite batch, link the texture to the sprite batch, allocate a sprite, link it to the sprite batch, setup sprite blitting params, then transform/update animations as necessary.

 

Then again, my SpriteBatch class does contain a list of metadata for sprite sheet animations too. You'd define the dimensions of the animation frame, how many frames, etc in a SpriteAnimation object, then link that to your SpriteBatch instance that's keeping reference to the sprite sheet texture. Then, the sprite instance just needs to use SetAnimation(), frame trimming, playback state, playback framerate (all optional), etc.


@Matias: I tried to add onto my reply above since your post came in after it, but the forum doesn't seem to let me, so I apologize in advance for the double post. Here are my replies to your points:

 

-I've seen some good things in OpenGL 3.3, but my main target is mobile (yep, I'm on the mobile bandwagon...). I'm excited for OpenGL ES 3.0, since the specification does appear to support batching and MRTs for deferred rendering, but it looks like it could be a while before we start seeing devices that completely support the spec.

 

-Software transformation is probably a benefit for static sprites, as you transform them once and update them only when something changes, instead of every frame in a vertex shader. However, it's kind of intense when something actually does change: there are 3, possibly 4, matrix4x4 multiplications and 4 matrix4x4 * vector3 operations. That's 48 to 64 dot products for the matrix multiplications and 12 more dot products to transform the vertices.
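To put numbers on that: for pure 2D, a 3x2 affine matrix avoids most of that cost. A sketch (this is not my actual Matrix class, just an illustration of the cheaper path):

```cpp
#include <cmath>

// 2D affine transform: [a c tx; b d ty], column-vector convention.
struct Affine2D { float a, b, c, d, tx, ty; };

// Builds translate * rotate * scale in one go: 4 multiplies plus one
// sin/cos pair, versus the 48-64 dot products of chained 4x4 matrices.
Affine2D makeTRS(float tx, float ty, float angle, float sx, float sy)
{
    float c = std::cos(angle), s = std::sin(angle);
    return { c * sx, s * sx, -s * sy, c * sy, tx, ty };
}

// Transforms one vertex: 4 multiplies and 4 adds.
void transformPoint(const Affine2D& m, float x, float y,
                    float& ox, float& oy)
{
    ox = m.a * x + m.c * y + m.tx;
    oy = m.b * x + m.d * y + m.ty;
}
```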

 

-As far as frustum culling is concerned, I may stop using my current Sprite class for tile maps and create a simpler sprite class that treats the sprites as static and generates sprite batches based on a quadtree. If I want anything dynamic, I'll treat them as actual sprites with full functionality that character sprites would have.

 

-So, if I move the vertices out of the viewport, the GPU won't attempt to draw them? I'm still sending the data to the GPU, but if it's outside the viewport upon final transformation in the vertex shader, the pixel shader won't be processed? If I understood that correctly, then I feel better about my fill rate concerns. I'm still sending unnecessary data to the GPU, but then again 3D games are sending way more vertex data to GPU with frustum culling than you'd encounter in an entire 2D scene, generally. I would think anyway.

 

-I've thought about writing an atlas generator, but it was meant to be an offline tool that produces metadata telling my SpriteBatch class the coordinates of the sprites and animation frames held in the atlas. On-the-fly generation seems like it could produce long load times, as you'd be loading many separate image files instead of one larger one, and possibly generating sprite definition data on the spot.

Most of these worries involve negligible amounts of data and processing: for example, adding a colour to every one of hundreds and hundreds of sprites amounts to between 1 byte (a palette index) and 8 bytes (16-bit RGBA) per sprite, i.e. hundreds and hundreds of bytes per frame. Are there technical reasons to use many batches instead of putting all compatible sprites in the same batch? Singleton or almost-singleton entities like the "4 main characters" can have reserved places in a shared vertex buffer rather than their separate batches.

For batching 3d meshes I have a system which allows me to:

 

- batch "batches" (store only the list pointer)

- batch single meshes (store the data in a local list)

 

I use the mesh batches for blocks which don't change so often. So in this case efficient culling comes at the price of drawing out-of-frustum objects.

Instead of copying data (except in the single-mesh case), I only add a batch list to a list of batches. Before drawing, I upload the data to a buffer object (could be a constant buffer or vertex buffer). This way I can have one draw call per mesh type regardless of how the meshes are batched. This could be implemented with sprites too. Of course it involves copying data around, which may be less efficient on mobile devices.

 

Cheers!


Most of these worries involve negligible amounts of data and processing: for example, adding a colour to every one of hundreds and hundreds of sprites amounts to between 1 byte (a palette index) and 8 bytes (16-bit RGBA) per sprite, i.e. hundreds and hundreds of bytes per frame. Are there technical reasons to use many batches instead of putting all compatible sprites in the same batch? Singleton or almost-singleton entities like the "4 main characters" can have reserved places in a shared vertex buffer rather than their separate batches.

The way I currently process my sprites is by texture: if I have a bunch of sprites using a texture, or a group of similar textures for animated sprite sheets, etc., then all of those sprites are listed in that sprite batch. Since the OpenGL specs I'm confined to at this time don't allow manipulating my batches on a per-object basis, I have to simulate that by copying the data into my vertices.

 

Right now, the idea is that I have a texture atlas with a bunch of images within it that my sprites can use. I link my sprite to the appropriate batch, and it usually never changes since it makes sense to stick with that batch. Now, the only way a sprite can be rendered is if it's attached to a sprite batch because it is what has access to the textures, not the sprite. I could very well have multiple batches referencing the same texture, which would simulate a quad tree as far as scene rendering goes...


possibly 4 matrix4x4 multiplications and 4 matrix4x4 * vector3 operations. That's 48 to 64 dot products for the matrix multiplication and 12 more dot products to transform the vertices.

If your sprites don't rotate (or you classify between rotating & not rotating), you can get away with no matrix multiplication at all.
After all, the transformation is just:
outPos.xy = inVertex.xy * scale.xy + position.xy;
outPos.zw = float2( -1, 1 );
That's just one multiply-add per vertex. Just make sure outPos.xy ends up in the [-1; 1] range.
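The CPU-side equivalent of that shader snippet would be something like this (a sketch; type and function names are illustrative):

```cpp
// Minimal 2D vector for the sketch.
struct Vec2 { float x, y; };

// Same math as the shader line above: one multiply-add per component,
// no matrices involved. Output is expected in [-1, 1] clip space.
Vec2 transformVertex(Vec2 v, Vec2 scale, Vec2 position)
{
    return { v.x * scale.x + position.x,
             v.y * scale.y + position.y };
}
```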

-So, if I move the vertices out of the viewport, the GPU won't attempt to draw them? I'm still sending the data to the GPU, but if it's outside the viewport upon final transformation in the vertex shader, the pixel shader won't be processed? If I understood that correctly, then I feel better about my fill rate concerns. I'm still sending unnecessary data to the GPU, but then again 3D games are sending way more vertex data to GPU with frustum culling than you'd encounter in an entire 2D scene, generally. I would think anyway.

Yes, that's exactly what happens. Make sure all 3 vertices of the triangle lie outside the viewport. But don't make the number too big (i.e. don't place it at 8192.0 in screen coordinates; just bigger than 1.0 or less than -1.0, but not too far away; and W must not be 0).

I like your thinking, and I do have a combiner method in my Matrix class that puts a position and scale vector into a matrix, as their elements are mutually exclusive. Rotation isn't all that common for many things, so that could be a nice workaround. The only problem is that many different types of objects 'can' rotate. What I could do is check whether the rotation angle has changed between frames; if it hasn't, but position or scale has, then I can just use that combiner method.

 

The code posted above does appear to be lightning fast. I'll keep that in mind for moving my triangles out of the way. When you say bigger than 1.0, is that how many pixels you suggest moving them out by? My ortho matrix is based on pixels.


  • Similar Content

    • By Solid_Spy
      Hello, I have been working on SH Irradiance map rendering, and I have been using a GLSL pixel shader to render SH irradiance to 2D irradiance maps for my static objects. I already have it working with 9 3D textures so far for the first 9 SH functions.
      In my GLSL shader, I have to send in 9 SH Coefficient 3D Texures that use RGBA8 as a pixel format. RGB being used for the coefficients for red, green, and blue, and the A for checking if the voxel is in use (for the 3D texture solidification shader to prevent bleeding).
      My problem is, I want to knock this number of textures down to something like 4 or 5. Getting even lower would be a godsend. This is because I eventually plan on adding more SH Coefficient 3D Textures for other parts of the game map (such as inside rooms, as opposed to the outside), to circumvent irradiance probe bleeding between rooms separated by walls. I don't want to reach the 32 texture limit too soon. Also, I figure that it would be a LOT faster.
      Is there a way I could, say, store 2 sets of SH Coefficients for 2 SH functions inside a texture with RGBA16 pixels? If so, how would I extract them from inside GLSL? Let me know if you have any suggestions ^^.
    • By DaniDesu
      #include "MyEngine.h" int main() { MyEngine myEngine; myEngine.run(); return 0; } MyEngine.h
      #pragma once #include "MyWindow.h" #include "MyShaders.h" #include "MyShapes.h" class MyEngine { private: GLFWwindow * myWindowHandle; MyWindow * myWindow; public: MyEngine(); ~MyEngine(); void run(); }; MyEngine.cpp
      #include "MyEngine.h" MyEngine::MyEngine() { MyWindow myWindow(800, 600, "My Game Engine"); this->myWindow = &myWindow; myWindow.createWindow(); this->myWindowHandle = myWindow.getWindowHandle(); // Load all OpenGL function pointers for use gladLoadGLLoader((GLADloadproc)glfwGetProcAddress); } MyEngine::~MyEngine() { this->myWindow->destroyWindow(); } void MyEngine::run() { MyShaders myShaders("VertexShader.glsl", "FragmentShader.glsl"); MyShapes myShapes; GLuint vertexArrayObjectHandle; float coordinates[] = { 0.5f, 0.5f, 0.0f, 0.5f, -0.5f, 0.0f, -0.5f, 0.5f, 0.0f }; vertexArrayObjectHandle = myShapes.drawTriangle(coordinates); while (!glfwWindowShouldClose(this->myWindowHandle)) { glClearColor(0.5f, 0.5f, 0.5f, 1.0f); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Draw something glUseProgram(myShaders.getShaderProgram()); glBindVertexArray(vertexArrayObjectHandle); glDrawArrays(GL_TRIANGLES, 0, 3); glfwSwapBuffers(this->myWindowHandle); glfwPollEvents(); } } MyShaders.h
      #pragma once #include <glad\glad.h> #include <GLFW\glfw3.h> #include "MyFileHandler.h" class MyShaders { private: const char * vertexShaderFileName; const char * fragmentShaderFileName; const char * vertexShaderCode; const char * fragmentShaderCode; GLuint vertexShaderHandle; GLuint fragmentShaderHandle; GLuint shaderProgram; void compileShaders(); public: MyShaders(const char * vertexShaderFileName, const char * fragmentShaderFileName); ~MyShaders(); GLuint getShaderProgram(); const char * getVertexShaderCode(); const char * getFragmentShaderCode(); }; MyShaders.cpp
      #include "MyShaders.h" MyShaders::MyShaders(const char * vertexShaderFileName, const char * fragmentShaderFileName) { this->vertexShaderFileName = vertexShaderFileName; this->fragmentShaderFileName = fragmentShaderFileName; // Load shaders from files MyFileHandler myVertexShaderFileHandler(this->vertexShaderFileName); this->vertexShaderCode = myVertexShaderFileHandler.readFile(); MyFileHandler myFragmentShaderFileHandler(this->fragmentShaderFileName); this->fragmentShaderCode = myFragmentShaderFileHandler.readFile(); // Compile shaders this->compileShaders(); } MyShaders::~MyShaders() { } void MyShaders::compileShaders() { this->vertexShaderHandle = glCreateShader(GL_VERTEX_SHADER); this->fragmentShaderHandle = glCreateShader(GL_FRAGMENT_SHADER); glShaderSource(this->vertexShaderHandle, 1, &(this->vertexShaderCode), NULL); glShaderSource(this->fragmentShaderHandle, 1, &(this->fragmentShaderCode), NULL); glCompileShader(this->vertexShaderHandle); glCompileShader(this->fragmentShaderHandle); this->shaderProgram = glCreateProgram(); glAttachShader(this->shaderProgram, this->vertexShaderHandle); glAttachShader(this->shaderProgram, this->fragmentShaderHandle); glLinkProgram(this->shaderProgram); return; } GLuint MyShaders::getShaderProgram() { return this->shaderProgram; } const char * MyShaders::getVertexShaderCode() { return this->vertexShaderCode; } const char * MyShaders::getFragmentShaderCode() { return this->fragmentShaderCode; } MyWindow.h
      #pragma once #include <glad\glad.h> #include <GLFW\glfw3.h> class MyWindow { private: GLFWwindow * windowHandle; int windowWidth; int windowHeight; const char * windowTitle; public: MyWindow(int windowWidth, int windowHeight, const char * windowTitle); ~MyWindow(); GLFWwindow * getWindowHandle(); void createWindow(); void MyWindow::destroyWindow(); }; MyWindow.cpp
      #include "MyWindow.h" MyWindow::MyWindow(int windowWidth, int windowHeight, const char * windowTitle) { this->windowHandle = NULL; this->windowWidth = windowWidth; this->windowWidth = windowWidth; this->windowHeight = windowHeight; this->windowTitle = windowTitle; glfwInit(); } MyWindow::~MyWindow() { } GLFWwindow * MyWindow::getWindowHandle() { return this->windowHandle; } void MyWindow::createWindow() { // Use OpenGL 3.3 and GLSL 3.3 glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); // Limit backwards compatibility glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE); // Prevent resizing window glfwWindowHint(GLFW_RESIZABLE, GL_FALSE); // Create window this->windowHandle = glfwCreateWindow(this->windowWidth, this->windowHeight, this->windowTitle, NULL, NULL); glfwMakeContextCurrent(this->windowHandle); } void MyWindow::destroyWindow() { glfwTerminate(); } MyShapes.h
      #pragma once #include <glad\glad.h> #include <GLFW\glfw3.h> class MyShapes { public: MyShapes(); ~MyShapes(); GLuint & drawTriangle(float coordinates[]); }; MyShapes.cpp
      #include "MyShapes.h" MyShapes::MyShapes() { } MyShapes::~MyShapes() { } GLuint & MyShapes::drawTriangle(float coordinates[]) { GLuint vertexBufferObject{}; GLuint vertexArrayObject{}; // Create a VAO glGenVertexArrays(1, &vertexArrayObject); glBindVertexArray(vertexArrayObject); // Send vertices to the GPU glGenBuffers(1, &vertexBufferObject); glBindBuffer(GL_ARRAY_BUFFER, vertexBufferObject); glBufferData(GL_ARRAY_BUFFER, sizeof(coordinates), coordinates, GL_STATIC_DRAW); // Dertermine the interpretation of the array buffer glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3*sizeof(float), (void *)0); glEnableVertexAttribArray(0); // Unbind the buffers glBindBuffer(GL_ARRAY_BUFFER, 0); glBindVertexArray(0); return vertexArrayObject; } MyFileHandler.h
      #pragma once #include <cstdio> #include <cstdlib> class MyFileHandler { private: const char * fileName; unsigned long fileSize; void setFileSize(); public: MyFileHandler(const char * fileName); ~MyFileHandler(); unsigned long getFileSize(); const char * readFile(); }; MyFileHandler.cpp
      #include "MyFileHandler.h" MyFileHandler::MyFileHandler(const char * fileName) { this->fileName = fileName; this->setFileSize(); } MyFileHandler::~MyFileHandler() { } void MyFileHandler::setFileSize() { FILE * fileHandle = NULL; fopen_s(&fileHandle, this->fileName, "rb"); fseek(fileHandle, 0L, SEEK_END); this->fileSize = ftell(fileHandle); rewind(fileHandle); fclose(fileHandle); return; } unsigned long MyFileHandler::getFileSize() { return (this->fileSize); } const char * MyFileHandler::readFile() { char * buffer = (char *)malloc((this->fileSize)+1); FILE * fileHandle = NULL; fopen_s(&fileHandle, this->fileName, "rb"); fread(buffer, this->fileSize, sizeof(char), fileHandle); fclose(fileHandle); buffer[this->fileSize] = '\0'; return buffer; } VertexShader.glsl
      #version 330 core layout (location = 0) vec3 VertexPositions; void main() { gl_Position = vec4(VertexPositions, 1.0f); } FragmentShader.glsl
      #version 330 core out vec4 FragmentColor; void main() { FragmentColor = vec4(1.0f, 0.0f, 0.0f, 1.0f); } I am attempting to create a simple engine/graphics utility using some object-oriented paradigms. My first goal is to get some output from my engine, namely, a simple red triangle.
      For this goal, the MyShapes class will be responsible for defining shapes such as triangles, polygons etc. Currently, there is only a drawTriangle() method implemented, because I first wanted to see whether it works or not before attempting to code other shape drawing methods.
      The constructor of the MyEngine class creates a GLFW window (GLAD is also initialized here to load all OpenGL functionality), and the myEngine.run() method in Main.cpp is responsible for firing up the engine. In this run() method, the shaders get loaded from files with the help of my FileHandler class. The vertices for the triangle are processed by the myShapes.drawTriangle() method, where a vertex array object, a vertex buffer object, and vertex attributes are set up for this purpose.
      The while loop in the run() method should output the desired red triangle, but all I get is a grey window area. Why?
      Note: The shaders are compiling and linking without any errors.
      (Note: I am aware that this code does not use good software engineering practices (e.g. exceptions, error handling). I plan to add them later, once I get the hang of OpenGL.)
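      One detail worth flagging for readers of the code above: in drawTriangle(float coordinates[]), the array parameter decays to a pointer, so sizeof(coordinates) is the size of a float * (typically 4 or 8 bytes), not the size of the caller's array. A minimal sketch of the pitfall (the triangle data here is illustrative, not taken from the post):

      ```cpp
      #include <cassert>
      #include <cstddef>

      // Mirrors the signature of drawTriangle: the array parameter decays to float*.
      std::size_t sizeSeenByCallee(float coordinates[])
      {
          return sizeof(coordinates); // size of a pointer, NOT of the array
      }

      int main()
      {
          float triangle[9] = { -0.5f, -0.5f, 0.0f,
                                 0.5f, -0.5f, 0.0f,
                                 0.0f,  0.5f, 0.0f };

          assert(sizeof(triangle) == 9 * sizeof(float));        // caller sees the full array
          assert(sizeSeenByCallee(triangle) == sizeof(float *)); // callee sees only a pointer
          return 0;
      }
      ```

      This means the glBufferData call in drawTriangle uploads only a pointer's worth of bytes rather than the whole vertex array; passing the byte count (or element count) as an explicit extra parameter is one way around it. Returning a GLuint & to the local vertexArrayObject is a second thing worth double-checking, since that reference dangles once the function returns.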

       
    • By KarimIO
      EDIT: I thought this was restricted to Attribute-Created GL contexts, but it isn't, so I rewrote the post.
      Hey guys, whenever I call SwapBuffers(hDC), I get a crash, with Windows reporting "Too many posts were made to a semaphore." What could be the cause of this?
      Update: No crash occurs if I don't draw, just clear and swap.
      static PIXELFORMATDESCRIPTOR pfd =      // pfd Tells Windows How We Want Things To Be
      {
          sizeof(PIXELFORMATDESCRIPTOR),      // Size Of This Pixel Format Descriptor
          1,                                  // Version Number
          PFD_DRAW_TO_WINDOW |                // Format Must Support Window
          PFD_SUPPORT_OPENGL |                // Format Must Support OpenGL
          PFD_DOUBLEBUFFER,                   // Must Support Double Buffering
          PFD_TYPE_RGBA,                      // Request An RGBA Format
          32,                                 // Select Our Color Depth
          0, 0, 0, 0, 0, 0,                   // Color Bits Ignored
          0,                                  // No Alpha Buffer
          0,                                  // Shift Bit Ignored
          0,                                  // No Accumulation Buffer
          0, 0, 0, 0,                         // Accumulation Bits Ignored
          24,                                 // 24Bit Z-Buffer (Depth Buffer)
          0,                                  // No Stencil Buffer
          0,                                  // No Auxiliary Buffer
          PFD_MAIN_PLANE,                     // Main Drawing Layer
          0,                                  // Reserved
          0, 0, 0                             // Layer Masks Ignored
      };

      if (!(hDC = GetDC(windowHandle)))
          return false;

      unsigned int PixelFormat;
      if (!(PixelFormat = ChoosePixelFormat(hDC, &pfd)))
          return false;

      if (!SetPixelFormat(hDC, PixelFormat, &pfd))
          return false;

      hRC = wglCreateContext(hDC);
      if (!hRC) {
          std::cout << "wglCreateContext Failed!\n";
          return false;
      }

      if (wglMakeCurrent(hDC, hRC) == NULL) {
          std::cout << "Make Context Current Second Failed!\n";
          return false;
      }

      ...

      // OGL Buffer Initialization
      glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
      glBindVertexArray(vao);
      glUseProgram(myprogram);
      glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, (void *)indexStart);
      SwapBuffers(GetDC(window_handle));
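      When a draw call crashes like this, a common first step is to check glGetError() after each GL call to see whether the draw itself raises an error (e.g. GL_INVALID_OPERATION from a missing index buffer binding on the VAO) before SwapBuffers is ever reached. Below is a small sketch of an error-code decoder; the constant values come from the OpenGL specification, and the _V-suffixed names are mine so the snippet compiles without GL headers:

      ```cpp
      #include <cassert>
      #include <string>

      // Standard OpenGL error codes (values per the OpenGL specification).
      constexpr unsigned int GL_NO_ERROR_V          = 0x0000;
      constexpr unsigned int GL_INVALID_ENUM_V      = 0x0500;
      constexpr unsigned int GL_INVALID_VALUE_V     = 0x0501;
      constexpr unsigned int GL_INVALID_OPERATION_V = 0x0502;
      constexpr unsigned int GL_OUT_OF_MEMORY_V     = 0x0505;

      // Translate a glGetError() result into a readable string.
      std::string glErrorToString(unsigned int err)
      {
          switch (err) {
              case GL_NO_ERROR_V:          return "GL_NO_ERROR";
              case GL_INVALID_ENUM_V:      return "GL_INVALID_ENUM";
              case GL_INVALID_VALUE_V:     return "GL_INVALID_VALUE";
              case GL_INVALID_OPERATION_V: return "GL_INVALID_OPERATION";
              case GL_OUT_OF_MEMORY_V:     return "GL_OUT_OF_MEMORY";
              default:                     return "unknown GL error";
          }
      }

      int main()
      {
          assert(glErrorToString(0x0502) == "GL_INVALID_OPERATION");
          assert(glErrorToString(0x0000) == "GL_NO_ERROR");
          return 0;
      }
      ```

      Logging glErrorToString(glGetError()) immediately after glDrawElements would narrow down whether the GL state or the swap is at fault. Separately, the final SwapBuffers(GetDC(window_handle)) fetches a fresh DC every frame instead of reusing the hDC that the context was created on; DCs obtained via GetDC should normally be paired with ReleaseDC, so reusing hDC is worth trying.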
    • By Tchom
      Hey devs!
       
      I've been working on an OpenGL ES 2.0 Android engine and I have begun implementing some simple (point) lighting. I had something fairly simple working, so I tried to get fancy and added color-tinting lights. And it works great... with only one or two lights. Any more than that, and the application drops about 15 frames per light added (my ideal is at least 4 or 5 lights). I know implementing lighting is expensive, I just didn't think it was that expensive. I'm fairly new to the world of OpenGL and GLSL, so there is a good chance I've written some crappy shader code. If anyone has any feedback or tips on how I can optimize this code, please let me know.
       
      Vertex Shader
      uniform mat4 u_MVPMatrix;
      uniform mat4 u_MVMatrix;

      attribute vec4 a_Position;
      attribute vec3 a_Normal;
      attribute vec2 a_TexCoordinate;

      varying vec3 v_Position;
      varying vec3 v_Normal;
      varying vec2 v_TexCoordinate;

      void main()
      {
          v_Position = vec3(u_MVMatrix * a_Position);
          v_TexCoordinate = a_TexCoordinate;
          v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
          gl_Position = u_MVPMatrix * a_Position;
      }

      Fragment Shader
      precision mediump float;

      uniform vec4 u_LightPos["+numLights+"];
      uniform vec4 u_LightColours["+numLights+"];
      uniform float u_LightPower["+numLights+"];
      uniform sampler2D u_Texture;

      varying vec3 v_Position;
      varying vec3 v_Normal;
      varying vec2 v_TexCoordinate;

      void main()
      {
          gl_FragColor = (texture2D(u_Texture, v_TexCoordinate));
          float diffuse = 0.0;
          vec4 colourSum = vec4(1.0);

          for (int i = 0; i < "+numLights+"; i++) {
              vec3 toPointLight = vec3(u_LightPos[i]);
              float distance = length(toPointLight - v_Position);
              vec3 lightVector = normalize(toPointLight - v_Position);

              float diffuseDiff = 0.0; // The diffuse difference contributed from current light
              diffuseDiff = max(dot(v_Normal, lightVector), 0.0);
              diffuseDiff = diffuseDiff * (1.0 / (1.0 + ((1.0 - u_LightPower[i]) * distance * distance))); // Determine attenuation
              diffuse += diffuseDiff;

              gl_FragColor.rgb *= vec3(1.0) / ((vec3(1.0) + ((vec3(1.0) - vec3(u_LightColours[i])) * diffuseDiff))); // The expensive part
          }

          diffuse += 0.1; // Add ambient light
          gl_FragColor.rgb *= diffuse;
      }

      Am I making any rookie mistakes? Or am I just being unrealistic about what I can do? Thanks in advance.
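      For what it's worth, the per-light attenuation term in the loop above is easy to sanity-check on the CPU. A small sketch mirroring the shader's formula, diffuse * (1 / (1 + (1 - power) * d²)); the function and parameter names are mine, not from the engine:

      ```cpp
      #include <cassert>
      #include <cmath>

      // Mirrors the shader's attenuation: diffuse * (1 / (1 + (1 - power) * d^2)).
      float attenuatedDiffuse(float diffuse, float lightPower, float distance)
      {
          return diffuse * (1.0f / (1.0f + (1.0f - lightPower) * distance * distance));
      }

      int main()
      {
          // At distance 0 there is no attenuation.
          assert(std::fabs(attenuatedDiffuse(1.0f, 0.5f, 0.0f) - 1.0f) < 1e-6f);

          // At power 1.0 the (1 - power) factor cancels the falloff entirely.
          assert(std::fabs(attenuatedDiffuse(1.0f, 1.0f, 10.0f) - 1.0f) < 1e-6f);

          // At power 0.5 and distance 2: 1 / (1 + 0.5 * 4) = 1/3.
          assert(std::fabs(attenuatedDiffuse(1.0f, 0.5f, 2.0f) - (1.0f / 3.0f)) < 1e-6f);
          return 0;
      }
      ```

      One thing this makes visible: with u_LightPower[i] at or above 1.0 a light never attenuates, so every fragment pays full cost for every such light regardless of distance. On the shader side, accumulating into a local vec3 and writing gl_FragColor exactly once at the end, instead of reading and modifying gl_FragColor inside the loop, is also a commonly recommended ES 2.0 optimization.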
    • By yahiko00
      Hi,
      Not sure if I am posting in the right place; if not, please forgive me...
      For a game project I am working on, I would like to implement a 2D starfield as a background.
      I do not want to deal with static tiles, since I plan to slowly animate the starfield. So, I am trying to figure out how to generate a random starfield for the entire map.
      I feel that using a uniform distribution for the stars will not do the trick. Instead I would like something similar to the screenshot below, taken from the game Star Wars: Empire At War (all credits to Lucasfilm, Disney, and so on...).

      Does anyone have an idea of a distribution that could produce such a starfield?
      Any insight would be appreciated
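      One common approach to this kind of clumpy, non-uniform look (a sketch of a general technique, not what Empire At War actually does) is a two-level model: scatter a handful of cluster centers uniformly over the map, place most stars with Gaussian offsets around those centers, and add a sparse uniform background layer so empty regions are not completely dark. A minimal C++ sketch, with all parameter names and values purely illustrative:

      ```cpp
      #include <cassert>
      #include <random>
      #include <vector>

      struct Star { float x, y; };

      // Two-level starfield: Gaussian clusters over a sparse uniform background.
      std::vector<Star> generateStarfield(unsigned seed, float width, float height,
                                          int numClusters, int starsPerCluster,
                                          int backgroundStars, float clusterSpread)
      {
          std::mt19937 rng(seed);
          std::uniform_real_distribution<float> ux(0.0f, width);
          std::uniform_real_distribution<float> uy(0.0f, height);

          std::vector<Star> stars;

          for (int c = 0; c < numClusters; ++c) {
              float cx = ux(rng), cy = uy(rng); // cluster center, uniform over the map
              std::normal_distribution<float> nx(cx, clusterSpread);
              std::normal_distribution<float> ny(cy, clusterSpread);
              for (int s = 0; s < starsPerCluster; ++s)
                  stars.push_back({ nx(rng), ny(rng) });
          }

          // Sparse uniform background layer.
          for (int b = 0; b < backgroundStars; ++b)
              stars.push_back({ ux(rng), uy(rng) });

          return stars;
      }

      int main()
      {
          auto stars = generateStarfield(42, 1000.0f, 1000.0f, 8, 100, 200, 40.0f);
          assert(stars.size() == 8 * 100 + 200);
          return 0;
      }
      ```

      Varying starsPerCluster and clusterSpread per cluster, or drawing cluster sizes from their own distribution, gives a more organic result; since everything is seeded, the same field can be regenerated each frame and animated by offsetting star positions over time.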