madgallagher

OpenGL
Strange Video Memory Increase


Hi Guys

 

I have a weird issue in OpenGL. I am rendering a lot of tris (millions) using vertex arrays.

Here is the issue.

 

If I load up the tris and display them shaded with a constant color, video memory usage shows around 600MB.

 

If I then change the color array values so they vary per vertex, the memory usage increases to around 1GB.

If I then spin the model around, the memory usage slowly decreases to around 600MB (after a couple of minutes).

 

But, if I load up the model straight away with varied shading, the memory is only 600MB.

 

This is on Linux, so is it a driver issue? Or is there something more obvious I am neglecting?

 

I am using nvidia-smi to check the graphics memory usage.
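
Roughly like this, polling the used-memory counter once a second (the interval is arbitrary):

nvidia-smi --query-gpu=memory.used --format=csv -l 1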

 

Cheers in advance for any help!


My first question is "Are you using the Linux drivers from your graphics card manufacturer or Nouveau?" If you are on Nouveau, I recommend you get the official Linux drivers for your video card before moving on.

 

Once that's all set and done and it still doesn't work, then my next question is "Are you using any type of culling?" If you are, it's possible that when switching from the constant color to the varied shading the culling gets reset and isn't re-run until you move the camera (spin the model) or load the model right away with the varied shading. You'll need to find a way to make sure that the culler is "always active". If you are attempting to use deferred shading, there could also be some issues there, but I wouldn't be able to help you much, as I'm just starting to learn about deferred shading with OpenGL.

 

Also, have you tried testing it with a lower-poly model (maybe in the thousands of tris)?

 

Lastly, what version of OpenGL are you using and what is your graphics card?


Hi

 

I am using the latest Linux driver from Nvidia. The card is pretty old. It's an FX3800, but we see the same issue on an FX4000.

The same routines are used for both types of shading. I'm simply changing the values in the color array. Nothing else.

 

We get the same "doubling" of memory usage on smaller models too. It's like when the color array is updated, the memory is not re-allocated on the card properly.


I'm simply changing the values in the color array.

How are you doing this? Is it a VBO or a client-side vertex array? What hints was it created with?

 

the memory usage slowly decreases to around 600MB (after a couple of minutes).

Is this actually a problem? If the driver doesn't need that memory for other tasks right now, then it's OK for it to delay releasing it.

Edited by Hodgman

OK, the render code is basically as follows:

 

glShadeModel(GL_SMOOTH);
glEnable(GL_COLOR_MATERIAL);
/* note: each glColorMaterial call replaces the previous mode,
   so only the last one (GL_DIFFUSE) is actually in effect here */
glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT);
glColorMaterial(GL_FRONT_AND_BACK, GL_SPECULAR);
glColorMaterial(GL_FRONT_AND_BACK, GL_DIFFUSE);

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(....);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(...);
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(....);
glDrawElements(........);

 

So, I am just using arrays, not VBOs. All I do is update the values in the color array that glColorPointer points to before I come into this routine.

 

The problem is, on a 1GB graphics card, if I am already using 600MB and then change the colors, it swamps the card and the system starts lagging badly because it is trying to double the memory usage.
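
The update itself is nothing fancy; roughly the following, where colors, numVerts and the pick_* functions are stand-ins rather than my real names:

/* overwrite the client-side color array in place; glColorPointer()
   in the render routine above is pointed at this same array */
for (i = 0; i < numVerts; ++i) {
    colors[3*i+0] = pick_red(i);
    colors[3*i+1] = pick_green(i);
    colors[3*i+2] = pick_blue(i);
}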


By your use of glColorPointer() and the FX3800/4000, I'm guessing you're using OpenGL 2.0. I personally don't have much experience with anything earlier than OpenGL 3.1, so please take my help with a grain of salt.

 

Do you disable the client state after you are finished drawing, or only at program close? If the latter, try disabling it after you are finished drawing everything. I'm not sure whether or not it will help, but it's worth a shot in my opinion.

 

Is it possible for you to use VBOs rather than client-side arrays? My understanding is that even back in OpenGL 2, it was more efficient to use buffer objects than plain arrays.
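
If it helps, here is a rough sketch of what I mean for just the color data (GL 1.5-era buffer calls; colors and numVerts are placeholder names, and GL_DYNAMIC_DRAW is my guess at the right usage hint for your update pattern):

GLuint colorVBO;
glGenBuffers(1, &colorVBO);
glBindBuffer(GL_ARRAY_BUFFER, colorVBO);
/* allocate the storage once; GL_DYNAMIC_DRAW hints that the contents will change */
glBufferData(GL_ARRAY_BUFFER, numVerts * 3 * sizeof(GLfloat), colors, GL_DYNAMIC_DRAW);

/* when the colors change, re-upload into the SAME buffer instead of creating new storage */
glBindBuffer(GL_ARRAY_BUFFER, colorVBO);
glBufferSubData(GL_ARRAY_BUFFER, 0, numVerts * 3 * sizeof(GLfloat), colors);

/* at draw time the pointer argument becomes a byte offset into the buffer */
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(3, GL_FLOAT, 0, (void *)0);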


Yes, sorry, I forgot to add in the glDisableClientState commands. These are all called after the tris are drawn each time.

 

I added in VBOs and got the same effect, even when deleting the VBO. Even simply using glBegin/glEnd for the rendering (shock horror!!) causes the same effect.

 

It is kind of driving me crazy...


It's driving me crazy that I don't know how else I can help you...

 

Just one quick question... Are you using C, C++, or another language? Also, if you are using C or C++, are you using any libraries like GLEW or SDL?


It's all in C, baby. I am not using anything like GLEW or SDL. It's pretty basic stuff.

 

Do you think it is a driver issue which is out of my control?


It could possibly be a driver issue. Have you tried it on an AMD/ATI card? What Linux distro are you using?

I suspect that this may be normal behaviour rather than a driver issue. The fact that you're seeing this kind of video RAM usage at all with client arrays indicates that the geometry is being cached in video RAM. So you specify the arrays and draw, all good. Then you change the color array and now you've got two copies cached - the original (which the driver is presumably keeping around in case you need to go back to it) and the new. After a while, if you haven't gone back to the original, the driver decides to throw out its old copy.

You may be able to control some of this behaviour with some use of GL_EXT_compiled_vertex_array, or you may be better off not using a vertex array for your constant color case (just glColor4f it instead).
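
For the constant color case that would be roughly (an untested sketch, using your existing draw call):

/* constant color: no per-vertex color data at all */
glDisableClientState(GL_COLOR_ARRAY);
glColor4f(r, g, b, 1.0f);   /* one current color applied to every vertex */
glDrawElements(........);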

Thanks, I'll check out GL_EXT_compiled_vertex_array. I get the same problem when I have varied colours at the vertex and just change the values, say swapping RGB -> RBG. Basically, any change to the colour array causes the problem.

 

I appreciate all your comments gentlemen.


mhagain, on 28 Mar 2013 - 12:50, said:
I suspect that this may be normal behaviour rather than a driver issue. The fact that you're seeing this kind of video RAM usage at all with client arrays indicates that the geometry is being cached in video RAM. So you specify the arrays and draw, all good. Then you change the color array and now you've got two copies cached - the original (which the driver is presumably keeping around in case you need to go back to it) and the new. After a while, if you haven't gone back to the original, the driver decides to throw out its old copy.

You may be able to control some of this behaviour with some use of GL_EXT_compiled_vertex_array, or you may be better off not using a vertex array for your constant color case (just glColor4f it instead).

Pretty much this. For the most part, the driver knows best. If you're changing values in your submitted data, it's highly likely the driver is simply caching the data without releasing the old buffer immediately.

This is not uncommon to see on Windows either. In short, I'd recommend not worrying about what the driver is doing unless it's actually causing a problem.

Well, I think it is a problem for him, as he said that when he changes the values, his graphics card's memory usage shoots up from 600MB to 1GB and then it starts to lag. If the driver is caching the data, then he would need to find a way to remove the values he's editing from the cache.

The problem is in the "idea" of storing 600MB in a single vertex array and transferring it in each draw call. That is not the way to go. I guess the performance is terrible if the data is not cached. The driver is probably trying to save performance by caching the data, and that's the problem. It could be solved by making smaller buffers.
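
A minimal sketch of the chunking idea, assuming the data has been split into VBOs up front (NUM_CHUNKS and the chunk* arrays are placeholder names):

/* draw the model as several smaller buffers instead of one huge array,
   so the driver never has to hold two full copies of everything at once */
for (i = 0; i < NUM_CHUNKS; ++i) {
    glBindBuffer(GL_ARRAY_BUFFER, chunkVBO[i]);
    glVertexPointer(3, GL_FLOAT, 0, (void *)0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, chunkIBO[i]);
    glDrawElements(GL_TRIANGLES, chunkIndexCount[i], GL_UNSIGNED_INT, (void *)0);
}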
The only real way to handle this that I can think of is to use two different fragment shaders, one with the constant colour as a uniform. From the looks of the OP's sample code he's not using shaders at all, but running on prehistoric hardware shouldn't be a problem here; the kind of hardware that can handle this volume of data will always have shaders available anyway.
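
The constant-colour fragment shader would be close to trivial; something like this GLSL 1.20-era sketch, written as a C string since the OP is working in plain C (u_constColor is an arbitrary name):

static const char *constColorFrag =
    "#version 120\n"
    "uniform vec4 u_constColor;\n"
    "void main() {\n"
    "    gl_FragColor = u_constColor;\n"
    "}\n";

/* at draw time, after glUseProgram(program): */
glUniform4f(glGetUniformLocation(program, "u_constColor"), r, g, b, 1.0f);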

Well, I think it is a problem for him, as he said that when he changes the values, his graphics card's memory usage shoots up from 600MB to 1GB and then it starts to lag. If the driver is caching the data, then he would need to find a way to remove the values he's editing from the cache.

Ah, I missed where he said it was lagging, my bad.
