
Archived

This topic is now archived and is closed to further replies.

Dauntless

OpenGL 2.0?

25 posts in this topic

Has anyone heard any word of what will become of 3Dlabs' proposal for OpenGL 2.0? I'm brand spankin' new to OpenGL programming, and I kind of worry about the ARB's lack of progress in putting in new features. What 3Dlabs proposed seems pretty cool, but I've yet to hear any mention of whether it will be approved or not. Somehow, having proprietary extensions seems antithetical to the whole notion of the "Open" part of OpenGL. I hope the proposal goes through...
On the official OpenGL homepage you'll find some info about the 2.0 proposal in the news.

www.opengl.org

The ARB is actively working on OpenGL 2.0 at the moment; we'll have to wait and see if it's exactly like the 3Dlabs proposal.
quote:
Original post by Dauntless
Somehow, having proprietary extensions seems antithetical to the whole notion of the "Open" part of OpenGL. I hope the proposal goes through...

I disagree. It makes it more open by allowing vendors to improve the API independent of any central group. The central group (the ARB) will later be able to standardize the extensions. This whole process allows vendors to add and test their features from the beginning.

[Resist Windows XP's Invasive Production Activation Technology!]
In my (personal) opinion... as much as I love OpenGL compared to D3D... D3D will win unless the ARB becomes FAR more proactive with regard to the extension problem. It's very easy to promote such extensions, but unless every card manufacturer adheres to the basic structure, this thing is for nothing. It's about time that card manufacturers got together and decided upon a common policy which sidelined the ARB and produced a single interface to sell OpenGL. Otherwise, like I said, OpenGL will become defunct.

I never thought I'd say this, but I'm now looking at D3D as a viable alternative - and D3D has so many problems it isn't true - but it's still more viable at the moment.
OpenGL does have a "single interface"; that's why all the extensions have the same 'style'. However, their potential uniqueness is the whole reason behind them! You can get more out of an NVidia or ATI or whatever video card through extensions than you can through DirectX (by the word of both NVidia and ATI). If you're worried about programmable shaders, those should be standardized in the near future, since ATI and NVidia are finally agreeing with each other about them (read the ARB notes).

OpenGL 1.3 does have a lot of the extensions turned into standard functions, but Windows doesn't support OpenGL 1.3. Almost every other OS does. Microsoft is simply trying to suppress OpenGL.

[Resist Windows XP's Invasive Production Activation Technology!]
N and V...

Actually, you're kinda wrong and right at the same time - OpenGL (and extensions) have been around for a while now, and no one has agreed to agree! It's up to the ARB to force this kind of common policy, but I don't see it happening for at least 2 years, maybe more. I wish that wasn't the case... but I fear it is. The ARB is a VERY slow-acting organisation, whereas at least D3D is owned by one company and developed for its own purposes - I wish this wasn't the case, but I fear it's true.
quote:
Original post by Shag
N and V...

Actually, you're kinda wrong and right at the same time - OpenGL (and extensions) have been around for a while now, and no one has agreed to agree! It's up to the ARB to force this kind of common policy, but I don't see it happening for at least 2 years, maybe more. I wish that wasn't the case... but I fear it is. The ARB is a VERY slow-acting organisation, whereas at least D3D is owned by one company and developed for its own purposes - I wish this wasn't the case, but I fear it's true.


In that case, why don't you just use D3D instead of arguing about OGL's future?

I don't mean to be harsh or anything, but if you don't see a future for OGL, why in heck do you use it?

"And that's the bottom line 'cause I said so!"

Cyberdrek
Headhunter Soft
A division of DLC Multimedia

Resist Windows XP's Invasive Production Activation Technology!

"gitty up" -- Kramer
Why the heck do I use it?

Because it's wonderful to use, and I care about its future. I just don't want OpenGL to end up on the scrap pile because it couldn't compete. That would be the biggest shame ever! Can you imagine Microsoft seeing off another BETTER technology?

I rest my case...
Mostly the so-called shader technologies which NVidia and ATI use. The trouble is they both seem to be diverging, rather than converging!

This is where D3D wins... cards support a unified methodology in this field... whereas OpenGL does not. Don't get me wrong, OpenGL (in my opinion) is a much better API to use, but the ARB could, ironically, be the death of it!

It's about time the card manufacturers got together to at least use the same interfaces to do a given job - even though the drivers would be left to optimise a given task.
No. D3D never wins. The only thing that makes it good is that there are more books on it so even people who have no brains and truly never plan on programming with it can learn how.
Just a little side note...

Why are Microsoft still members of the ARB, when they actively discourage the use of OpenGL? Mmm... no need to answer - it's called politics!

And yes, I do read the meeting notes! When they finally appear! lol. I just believe that the more pressure that can be put on the ARB and the card manufacturers, the better things will be for everyone!

Edited by - Shag on November 18, 2001 8:24:50 PM
But back to an earlier question... what extensions would I like to see unified?

Look at the NV_VERTEX stuff... These should have been part of the official extension mechanism from the early days. Using AGP mem for such tasks is blatantly obvious in many ways - NVidia and ATI should be co-operating on such things! Let's face it, from a programmer's point of view, there's nothing worse than having to re-write code for different cards - that is definitely a backward step! Remember the days of writing for DOS? Having to cater for different (sound and graphics) cards... Not a good idea at all!
The reason to keep OpenGL around (IMO) is that it's great to use, and cross-platform. I write games on Linux, then port to Windows. If Microsoft makes D3D for Linux, send me an e-mail, but only if it is as easy to use and initialize as OpenGL. Even then, I would not really want to learn another 3D API.
Using the example of vertex programs, how does this differ between D3D and OpenGL?

--- d3d ---
if (caps_bit vertex program)
do ...
else
do software fallback

--- opengl ---
if (arb_vertex_program)
do ...
else if (nv_vertex_program)
do ...
else
do software fallback

// nv_vertex_program is probably very similar to arb_vertex_program, i.e. once you've understood one you should be able to pick up the other easily

Extensions are very easy to learn compared to other pieces of an engine, e.g. collision detection/response or AI. Granted, D3D's method is less work, i.e. 2 options vs 3, but you can argue OpenGL's is more powerful: the people who know the card best wrote the extension to fit the hardware, instead of the extension being chosen by a 3rd party and then trying to make the hardware fit the extension.

Name me one commercial game that uses vertex programs with D3D or OpenGL. Nothing yet? Why not? Because game publishers want the largest user base (we've only just started to see games requiring 3D hardware become commonplace). If you're maybe wanting to get a publisher for your game, you'd be better off sticking with OpenGL 1.1.

But you're probably only doing this for other reasons (e.g. hoping to get a job in the industry, experimenting, etc.). If so, write something that works on your card. Does it have to work on all cards? Even if you choose D3D, you will find there will be cards that won't run your program. See a recent article on Gamasutra, the 'Startopia Postmortem'.

>>Why are Microsoft still members of the ARB?<<

Part of the ARB charter says none of the founding members (which MS is one of) can be kicked out.

http://uk.geocities.com/sloppyturds/gotterdammerung.html
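For anyone new to this, the OpenGL side of the if/else chain above boils down to scanning the space-separated string returned by glGetString(GL_EXTENSIONS). A minimal sketch of that check follows; the helper name `list_has_extension` is hypothetical, and in a real program the list argument would come from glGetString with a current context:

```cpp
#include <cstring>

// Hypothetical helper: returns true if 'name' appears as a complete token in
// a space-separated extension list (e.g. the string from
// glGetString(GL_EXTENSIONS)). A plain strstr is not enough, because
// "GL_NV_vertex_program" is a substring of "GL_NV_vertex_program2".
bool list_has_extension(const char *ext_list, const char *name)
{
    if (ext_list == nullptr || name == nullptr)
        return false;
    const std::size_t len = std::strlen(name);
    if (len == 0)
        return false;
    const char *p = ext_list;
    while ((p = std::strstr(p, name)) != nullptr) {
        const bool starts_ok = (p == ext_list || p[-1] == ' ');
        const bool ends_ok   = (p[len] == ' ' || p[len] == '\0');
        if (starts_ok && ends_ok)
            return true;   // found as a whole token
        p += len;          // substring hit only; keep scanning
    }
    return false;
}
```

With a helper like this, the chain above is just "check the preferred extension first, fall back to the vendor one, then to software."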
I was thinking of vertex arrays rather than programs... sorry, I was confused!

There needn't be separate extensions for (nowadays) such a fundamental feature. The ARB should have covered this, and allowed separate mechanisms to operate with one command structure, which would free programmers from writing separate code for different cards!
Since I'm new to OpenGL, how do the extensions work exactly? From what I can understand, they are specific to the hardware being utilized. So doesn't that make it harder for the programmer?

Now the programmer will have to put in some kind of switch-case to test what kind of hardware is in the computer, and then initialize accordingly. Wouldn't it be easier to just have a standardized set of capabilities? From what I saw of 3Dlabs' proposal, they are heading the way of D3D by exposing the functionality of the video card through a high-level programming language (a la shaders). To me, this makes a lot more sense than having to come up with proprietary extensions all the time, which make cross-platform (and cross-hardware) development more difficult.

And btw, can anyone point out a RECENT book on OpenGL (one that covers OGL 1.3) that is not specific to Windows? I've seen a few, but they all seem to concentrate on M$ development. I'm starting to read Norman Lin's beginning 3D programming on Linux book, and for a newcomer to 3D programming, the first 3 chapters have been excellent. I think I'll have to get the advanced book to learn more about Mesa/OpenGL though... has anyone read either book all the way through?

quote:
Original post by Dauntless
Since I'm new to OpenGL, how do the extensions work exactly? From what I can understand, they are specific to the hardware being utilized. So doesn't that make it harder for the programmer?

Extensions aren't always specific to the hardware. They also allow backwards compatibility (in the case of Windows and its slow uptake of new versions, that's very important). The ARB 'standardizes' extensions and makes them an optional part of the OpenGL spec after a while (as in: you don't have to support this feature to be officially OpenGL compliant, but if you do, do it this way).

[Resist Windows XP's Invasive Production Activation Technology!]
Remember the good old days with DOS, when you had to write a separate interface to cater for sooo many different video/audio cards? Seriously... DirectX has made developers lazy. Writing two separate effects to cater for, say, NVidia and ATI OpenGL extensions is still infinitely less work than simple Super VGA 2D graphics and audio were under DOS.

Be thankful and stop complaining!

I like both OpenGL and Direct3D (the current API anyway); I actually find them quite similar. But Microsoft insists on changing their 3D API with every new DX version. This alone will prevent anyone who doesn't want to learn a new API every year from sticking exclusively to D3D. My opinion? Support both!
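One rough sketch of how "support both" usually looks in practice: hide each API behind a single renderer interface, so game code never calls GL or D3D directly and an API change only touches one backend class. All names here (Renderer, GLRenderer, D3DRenderer, makeRenderer) are hypothetical illustration, not from either SDK:

```cpp
#include <memory>
#include <string>

// Hypothetical abstraction layer: game code programs against Renderer, and
// only the two backend classes contain API-specific calls.
struct Renderer {
    virtual ~Renderer() = default;
    virtual std::string name() const = 0;
    virtual void drawFrame() = 0;   // would issue GL or D3D calls
};

struct GLRenderer : Renderer {
    std::string name() const override { return "OpenGL"; }
    void drawFrame() override { /* glClear, glDrawArrays, ... */ }
};

struct D3DRenderer : Renderer {
    std::string name() const override { return "Direct3D"; }
    void drawFrame() override { /* Clear, DrawPrimitive, ... */ }
};

// Chosen once at startup (config file, command line, capability check).
std::unique_ptr<Renderer> makeRenderer(bool preferGL)
{
    if (preferGL)
        return std::make_unique<GLRenderer>();
    return std::make_unique<D3DRenderer>();
}
```

The game loop then just calls renderer->drawFrame(), and a new DX version means rewriting D3DRenderer, not the whole engine.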
There's more news about OpenGL 2.0 at www.opengl.org out today. Definitely looks like it's moving forward pretty quickly.

>>I'd like to thank the 3Dlabs team for their hard work in producing the white papers on a very aggressive schedule, and reviewers from Alias|Wavefront, Apple, ATI, id Software, Nvidia, PTC, SGI, SoftImage, the Stanford Graphics Lab and others.<<

id Software hehe
Direct3D is garbage, PERIOD.
wannabe OpenGL.

peace.

To the vast majority of mankind, nothing is more agreeable than to escape the need for mental exertion... To most people, nothing is more troublesome than the effort of thinking.
quote:
Original post by jenova
Direct3D is garbage, PERIOD.
wannabe OpenGL.

I love OpenGL; I hope I've made that clear. But posting stuff like this is kind of inflammatory and useless. Sure, you can give your reasons and discuss objectively, but otherwise you're just going to get all the people that love DirectX in here yelling. DirectX obviously works; but does it work better? That's the thing we're discussing.

[Resist Windows XP's Invasive Production Activation Technology!]
