japro

OpenGL
OpenGL 4.3 - compute shaders and much more

27 posts in this topic

Interesting!

For us lazy programmers, please give a short summary of the benefits or disadvantages, as you see it!

OpenGL compute shaders... or 'oops, we got it wrong and MS got it right... quick back-track!'

I skimmed a few other things; it's basically bringing the features up to D3D11 level and continuing the OpenGL tradition of 'here are 101 ways to do things... good luck with that!'

In fact, could someone give me an update on the state of Direct State Access in OGL? 4.3 doesn't seem to list it as a feature, and last I checked it covered some things but not all...

[quote name='larspensjo' timestamp='1344280003' post='4966758']
For us lazy programmers, please give a short summary of the benefits or disadvantages, as you see it!
[/quote]
I think people who are more involved with all this can give more competent insights. g-truc has a nice review: http://www.g-truc.net/post-0494.html#menu

I literally just saw this a few hours ago and still have to go through all of it. I am mostly excited about compute shaders. The other additions I looked into seemed to be some obvious fixes ("layout(location = ...)" for uniforms, imageSize, etc.) as well as improvements to memory-related aspects that play nicely with compute shaders (ARB_shader_storage_buffer_object).
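To make that concrete, here is a rough, untested sketch of how those pieces fit together under GL 4.3 - the GLSL is the interesting part, the C++ side is just the usual plumbing (prog, posBuf, velBuf and particleCount are placeholder names, and program/buffer creation plus error checking are omitted):

[code]
// GLSL 4.30 compute shader using explicit uniform locations and
// shader storage buffers (ARB_shader_storage_buffer_object).
const char* cs_src = R"(
    #version 430
    layout(local_size_x = 128) in;

    layout(location = 0) uniform float dt;   // explicit location, no glGetUniformLocation needed

    layout(std430, binding = 0) buffer Positions  { vec4 pos[]; };
    layout(std430, binding = 1) buffer Velocities { vec4 vel[]; };

    void main()
    {
        uint i = gl_GlobalInvocationID.x;
        pos[i].xyz += vel[i].xyz * dt;
    }
)";

// ... compile cs_src as a GL_COMPUTE_SHADER and link it into 'prog' ...

glUseProgram(prog);
glUniform1f(0, dt);                                      // location 0 as declared in the shader
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, posBuf);   // matches 'binding = 0'
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, velBuf);   // matches 'binding = 1'
glDispatchCompute(particleCount / 128, 1, 1);            // launch the compute grid
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);          // make the writes visible before drawing
[/code]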

Not much to be excited about, it seems. I think most of it has been available as extensions for ages, as usual. The only things I find interesting are the texture parameter queries and shader storage buffers, although I guess the latter is just the old ext_texture_buffer in a new package. ES3 compatibility might be nice if we get some ES3-capable devices to play with.

It would be nice if moderators didn't try to start an API flame war though; I think we get enough of those.

Edit: To phantom, I never said you weren't correct. I've replied to you in a PM instead to avoid derailing this thread. Edited by SimonForsman

[quote name='SimonForsman' timestamp='1344283395' post='4966787']
It would be nice if moderators didn't try to start an API flame war though; I think we get enough of those.
[/quote]

Really?

Tell me what was wrong with my statements?

Compute Shaders - admission that OpenCL/GL interop has failed to work.
Features - brings it up to the D3D11 standard but, with at least one extension, introduces yet another way to do things.
DSA - genuine question about the state of it...

Note: nowhere did I say 'D3D11 is better!' - all I did was call them out on the areas where they are still lacking, which is the API interface in general and MAYBE the state of DSA, which I asked about...

So, yeah, if not fawning over a new release of an incremental update to an outdated API is 'starting a flame war' then fine, I started a flame war...

CL/GL interop didn't fail to work. It did work quite well. Despite this, I have to admit that it is way more complicated than the DX11 compute shaders, BUT it DID work. In fact I could port DX11 compute shaders to OpenCL and make it work together with OpenGL. See:
[url="http://www.gamedev.net/page/community/iotd/index.html/_/tile-based-deferred-shading-via-opencl-r233"]http://www.gamedev.n...via-opencl-r233[/url]
I'm looking forward to trying out OGL compute shaders though, as it seems more reasonable to use it for processing textures / lighting.
The debugging feature is quite an improvement, as such functionality was missing before. Edited by Yours3!f
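For comparison, the amount of ceremony the CL/GL sharing path needs looks roughly like this (just a sketch: it assumes ctx, queue and kernel were created with cl_khr_gl_sharing enabled, glBuf is an existing GL buffer object, particleCount is the work size, and error handling is omitted):

[code]
#include <CL/cl.h>
#include <CL/cl_gl.h>

// Wrap the existing GL buffer in a CL memory object (done once).
cl_int err;
cl_mem clBuf = clCreateFromGLBuffer(ctx, CL_MEM_READ_WRITE, glBuf, &err);

// Every time the kernel runs, ownership has to be handed back and forth.
glFinish();                                               // GL must be done with the buffer first
clEnqueueAcquireGLObjects(queue, 1, &clBuf, 0, NULL, NULL);
clSetKernelArg(kernel, 0, sizeof(cl_mem), &clBuf);
size_t globalSize = particleCount;
clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &globalSize, NULL, 0, NULL, NULL);
clEnqueueReleaseGLObjects(queue, 1, &clBuf, 0, NULL, NULL);
clFinish(queue);                                          // CL must be done before GL reads or draws from it
[/code]

With a GL compute shader the acquire/release handover and the cross-API synchronisation go away, which is presumably the main attraction.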

[quote name='phantom' timestamp='1344284153' post='4966790']Compute Shaders - admission that OpenCL/GL interop has failed to work.
Features - brings it up to the D3D11 standard but, with at least one extension, introduces yet another way to do things.
DSA - genuine question about the state of it...
Note: nowhere did I say 'D3D11 is better!'
[/quote]
[quote name='mhagain' timestamp='1344286880' post='4966809']
phantom is actually correct, although the wording chosen can certainly come across as "let's start a flame war" - but look beyond that at what the update actually does have to offer.[/quote]
None of that is surprising, though. And indeed D3D11 is "better", except for the little detail that it's proprietary and Windows-only (which, as it happens, is [i]the one [/i]important detail for me personally).

OpenGL is necessarily worse because it is designed by committee (ARB, Khronos, call it what you like). Besides design-by-committee always being somewhat troublesome, this particular committee has contained, and still contains, members with strongly opposed interests.

I won't cite Microsoft, who certainly have no interest in making OpenGL as good as or better than their own product, because Microsoft is no longer involved (...at least [i]officially[/i]). However, Intel is a good example of an entity that is still officially involved.
Intel, who already struggle to support OpenGL 3.x on their Sandy/Ivy Bridge CPUs, have a strong motivation not to add too many features too quickly. Promoting CPUs with integrated graphics is much harder if people have the impression that they don't support most modern features. Thus, advertising OpenGL and pushing its development forward lessens revenue.

Companies like AMD and nVidia, on the other hand, have a (rather obvious) strong interest in pushing new features onto the market, because it allows them to sell new cards. But then again, supporting [i]both [/i]D3D and OpenGL means roughly twice the driver development cost that is actually necessary. If 90-95% of the software in their target market already uses D3D anyway, that's a bad deal. So again, even though there is some motivation, it is not necessarily overwhelming for OpenGL as such. If people buy the new nVidia 780 GTX Ultra because it supports D3D 12.1, which is needed to play Warcraft Ultimate, then that's just as good.

[quote name='samoth' timestamp='1344334319' post='4966961']
And indeed D3D11 is "better", except for the little detail that it's proprietary and Windows-only (which, as it happens, is [i]the one [/i]important detail for me personally).
[/quote]

While I don't doubt the importance of this, once you get outside of Windows things get a bit ropey support-wise - OSX, the largest of the non-Windows home computer platforms, lags OpenGL versions by some way, with OSX 10.7 supporting GL 3.2: a standard released in 2009, on an OS only a year old.

I'm not sure about Linux support tbh; I tend to hear it swinging between 'good' and 'bad' with a healthy dose of 'no closed-source code in my Linux!' kicking around.

[quote name='Yours3!f' timestamp='1344330295' post='4966948']
CL/GL interop didn't fail to work. It did work quite well. Despite this, I have to admit that it is way more complicated than the DX11 compute shaders, BUT it DID work. In fact I could port DX11 compute shaders to OpenCL and make it work together with OpenGL. See:
[url="http://www.gamedev.net/page/community/iotd/index.html/_/tile-based-deferred-shading-via-opencl-r233"]http://www.gamedev.n...via-opencl-r233[/url]
I'm looking forward to trying out OGL compute shaders though, as it seems more reasonable to use it for processing textures / lighting.
The debugging feature is quite an improvement, as such functionality was missing before.
[/quote]

I'd consider OpenCL to be more of an alternative to nvidia's CUDA than to DirectCompute as well. OpenCL works without an OpenGL render context and can also be used with Direct3D (at least on nvidia hardware), which is quite an advantage if you want to use different renderers but still use the same GPGPU solution. OpenGL definitely needed built-in compute shaders as well, but OpenCL still has its place. One can debate whether the OpenCL interop should be in core rather than as an extension, though (it might have made more sense to keep the interop functions as an extension and push in compute shaders earlier).

[quote name='phantom' timestamp='1344336318' post='4966969']I'm not sure about Linux support tbh; I tend to hear it swinging between 'good' and 'bad' with a healthy dose of 'no closed-source code in my Linux!' kicking around.[/quote]
For me, it has always "kind of" worked, but never as well as it works under Windows.

This will (hopefully) drastically change in the near future, if Mr. Stallman doesn't prevent it. Linux becoming a Windows8-competing Steam platform would mean that some modern, advanced graphics API would [i]have to[/i] be available and well-supported. What else could it mean but serious OpenGL support from IHVs?

[quote name='samoth' timestamp='1344337679' post='4966978']
What else could it mean but serious OpenGL support from IHVs?
[/quote]

Maybe, or it could take the same route that OpenGL on Windows takes to some extent; make it work for Game X.

For some time, if you wanted performance you had to hit the same path as the iD games; maybe the same will happen when it comes to following Valve's lead onto Linux? Do it their way or fall off the 'fast' path.

We'll see how it works out - I remember when Linux games appeared in the shops for a short while around 1999, with the iD games going on sale; at the time that failed to set the world alight, as it seemed no one wanted to buy them. Maybe, 10+ years on, Linux users (note: users. I maintain Stallman is a crazy person) are a little more pragmatic about things, Steam will work out, and enough of a market share will be carved out for a good feedback loop to be generated with regards to market share, driver development and tool development going forward.

My only 'worry' about that is the continued involvement of the ARB who, historically, simply don't make good choices.
OpenGL 2.0 and 3.0 are proof of this, and nothing they have done since then has convinced me otherwise - if gaming on Linux makes it, then it'll be down to Valve, NV and AMD working together, and not the ARB.

I'm watching with interest to see how this pans out, not least of all because it might well affect my day job [img]http://public.gamedev.net//public/style_emoticons/default/smile.png[/img] Edited by phantom

I think it's fair to say that both APIs have a healthy (or unhealthy, delete as appropriate) dollop of "worse is better" in them, so it really does come down to target platforms and personal preferences.

Regarding Intel, have you been following the Valve blog about porting Steam and L4D to Linux? Intel are definitely playing quite an active role, working with Valve, and taking feedback on board. Currently it's only the Linux driver, of course, but it seems reasonable to guess that some of the quality improvements coming out of this can also be fed into their drivers for other platforms.

The overall feeling here is definitely one of OpenGL coming into the ascendant again, while D3D appears to be languishing with a somewhat unknown/uncertain future. Can it be sustained? No idea, but the next few years sure won't be boring.

[quote name='phantom' timestamp='1344338536' post='4966979']
[quote name='samoth' timestamp='1344337679' post='4966978']
What else could it mean but serious OpenGL support from IHVs?
[/quote]

Maybe, or it could take the same route that OpenGL on Windows takes to some extent; make it work for Game X.

For some time, if you wanted performance you had to hit the same path as the iD games; maybe the same will happen when it comes to following Valve's lead onto Linux? Do it their way or fall off the 'fast' path.

We'll see how it works out - I remember when Linux games appeared in the shops for a short while around 1999, with the iD games going on sale; at the time that failed to set the world alight, as it seemed no one wanted to buy them. Maybe, 10+ years on, Linux users (note: users. I maintain Stallman is a crazy person) are a little more pragmatic about things, Steam will work out, and enough of a market share will be carved out for a good feedback loop to be generated with regards to market share, driver development and tool development going forward.

My only 'worry' about that is the continued involvement of the ARB who, historically, simply don't make good choices.
OpenGL 2.0 and 3.0 are proof of this, and nothing they have done since then has convinced me otherwise - if gaming on Linux makes it, then it'll be down to Valve, NV and AMD working together, and not the ARB.

I'm watching with interest to see how this pans out, not least of all because it might well affect my day job [img]http://public.gamedev.net//public/style_emoticons/default/smile.png[/img]
[/quote]

The big problem with iD's Linux push was that there was essentially only one game you could find in stores (Quake3), and the number of stores carrying the Linux version was extremely limited; most Linux users ended up buying the Windows version and downloading the Linux binary anyway, since that was the easiest option (the pragmatic users went with the path of least resistance).

With Steam the big change is that everything is available to everyone at any time, and buying a game for Platform X usually lets you install and play the same game on Platforms Y and Z as well. So even users of dual-boot systems don't have to choose between buying for Windows and buying for Linux; they just buy it and play on whatever OS they happen to be logged into at the moment. (Personally I primarily buy Windows versions even though I use Linux as well, since it is far easier to get Windows games running in Linux than vice versa, and having to reboot or switch machines to play a different game is far too annoying. With work there isn't much choice; Linux is simply the better OS for what I'm doing, apart from some legacy ASP.Net systems I have to maintain that just won't work properly with mono.)

As for what Valve's Linux move will do for OpenGL, I'd expect pretty much the same as AAA OpenGL use on Windows and Mac has done: IHVs will optimize the paths used by big AAA titles. It doesn't really matter that much though; indie titles will not push the limits far enough for that to matter, and end users don't care if random indie title X runs at 3000 fps or 6000 fps. (They will however care if expensive AAA title Y runs at 25 fps or 60 fps, but as long as the IHVs optimize for the AAA titles it's all good.) The API is fine. You can complain about the ARB making bad or slow decisions, but the API itself is fine (it might not be perfect, but it doesn't have to be). It gives developers access to the features they need on the platforms they're targeting, and according to Valve, OpenGL still performs better than D3D9 on both Windows and Linux; they also managed to get better performance on Linux than on Windows, so overall things are looking decent.

The main problems I see with gaming on Linux (apart from the low number of available titles) are hardware support - there still isn't a really good solution for more advanced controllers - the sound system(s), which are a bit of a mess, and the desktop environments, which take far too much tweaking to get really good (which really scares users away; Unity might be easy to use, but it's a real pain if you want to do anything even remotely advanced, while Gnome/KDE have become quite a mess in the latest versions).

When it comes to OpenGL, the biggest problem is Apple; they add support for newer versions at an extremely slow pace, and 4.3 will not be relevant for another 2-3 years. (The main reason to use OpenGL is to support OS X, and that means using OpenGL 2.1 + extensions today, or possibly 3.2.)

[quote name='SimonForsman' timestamp='1344343410' post='4966992']
The API is fine. You can complain about the ARB making bad or slow decisions, but the API itself is fine (it might not be perfect, but it doesn't have to be). It gives developers access to the features they need on the platforms they're targeting, and according to Valve, OpenGL still performs better than D3D9 on both Windows and Linux; they also managed to get better performance on Linux than on Windows, so overall things are looking decent.
[/quote]

I disagree on the API - the bind-to-edit model is broken. The D3D model of 'operations on objects' is saner. The ARB put two years of work into a better API with these semantics (still C-style, not C++, I might add) and then dumped it. Until DSA is the norm, the API will remain broken.
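For anyone who hasn't run into the distinction, the difference looks roughly like this (a sketch only - it assumes tex is an existing texture object and that the EXT_direct_state_access entry points have been loaded by your extension loader):

[code]
// Bind-to-edit: the object must be bound to a selector (disturbing whatever
// was bound there before) just to change one of its parameters.
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// Direct State Access (EXT_direct_state_access): the object is named
// directly and no global binding point is touched.
glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
[/code]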

The OpenGL vs D3D9 thing is a non-event.
I've commented on this elsewhere; D3D9 is known to be slower on small batches than OpenGL - it's been public knowledge for some time.
The 'zomg fps difference!' reaction is also a non-event once you look closer at it;
Linux + OpenGL : 3.17ms/frame
Win7 + D3D9 (270fps): 3.7ms/frame
Win7 + D3D9 (304fps): 3.29ms/frame

Once you factor in differences in code - being able to write the OpenGL backend cleanly, taking advantage of 'lessons learnt' on the D3D9 version, and probably with a more modern slant on things (they used a DX11-class device; how many D3D11-class feature extensions or NV extensions did they use?) - plus the D3D9 overhead, 0.12ms/frame is... well, nothing. (The 0.6ms was worrying, but they basically said in their post 'oops, we got the batching wrong', although apparently no one noticed this 'performance loss'.)

I suspect that if they gave D3D11 the same treatment - a clean rewrite, properly structured for it - then on Windows the performance would be equal at worst, if not slightly better, as a well-structured D3D11 app will have less CPU overhead than a D3D9 application (or indeed a D3D11 application written in the D3D9 style).

In short: well done to Valve, but nothing we didn't already know about D3D9 has been found.

[quote name='phantom' timestamp='1344344804' post='4966997']
Linux + OpenGL : 3.17ms/frame
Win7 + D3D9 (270fps): 3.7ms/frame
Win7 + D3D9 (304fps): 3.29ms/frame
[/quote]

The 303.4 fps was Win7 + OpenGL, not D3D9. But yes, D3D9's high per-batch overhead is old news; what isn't old news is that (nvidia's) OpenGL drivers are still good enough to take advantage of this. (Most "recent" games with both OpenGL and D3D9 renderers have had far worse performance with OpenGL, so this does show that OpenGL performance is "good enough".) As for what OpenGL version they're using, it is quite likely 2.x, since they didn't do a clean rewrite (it's based on the Mac version after all).

I think people are over-excited about Valve's contribution to all of this. Desura has already been providing a viable portal for Linux games for some time now. I don't see Valve's entry as groundbreaking; rather, they do not want to be left behind. The endorsement of Valve however is good for OpenGL because Desura doesn't have the brand name recognition or pull that Valve has.

[quote name='japro' timestamp='1344350667' post='4967024']
You guys are no fun [img]http://public.gamedev.net//public/style_emoticons/default/sad.png[/img]. I was having so much fun with compute shaders (actually, still have [img]http://public.gamedev.net//public/style_emoticons/default/tongue.png[/img]).
gravitational n-body: [url="https://github.com/progschj/OpenGL-Examples/blob/master/experimental/XXcompute_shader_nbody.cpp"]https://github.com/p...hader_nbody.cpp[/url]
[/quote]
Well, good for you. I'm still waiting for the AMD drivers, as the only nvidia card I have is a gf8600gt, which doesn't support this. I can't wait to port my tile-based deferred renderer to OGL CS. Actually, thanks for the sample code :) I'll have to think less on my own :D Edited by Yours3!f

[quote name='SimonForsman' timestamp='1344351532' post='4967031']
The 303.4 fps was Win7 + OpenGL, not D3D9. But yes, D3D9's high per-batch overhead is old news; what isn't old news is that (nvidia's) OpenGL drivers are still good enough to take advantage of this. (Most "recent" games with both OpenGL and D3D9 renderers have had far worse performance with OpenGL, so this does show that OpenGL performance is "good enough".) As for what OpenGL version they're using, it is quite likely 2.x, since they didn't do a clean rewrite (it's based on the Mac version after all).
[/quote]

Ah, my bad - I was short of time when I originally skimmed the article and just recycled my numbers from my original comment on another forum :)

I suspect the GL vs DX9 speed gap was simply people not finding the GL 'fast path', either because the API didn't make it easy or because effort wasn't being placed in that area. I don't think there has ever been a question of OpenGL's performance vs D3D9 being anything but 'good enough' - the problem was that for ages D3D9 had features GL did not which were pretty fundamental (hello, FBO extension!) and driver quality wasn't that good (ATI, back in the day, I'm looking at you).

They don't mention what they based it on; reading it, it seems to imply it was based off the core D3D path just with abstraction layers built in, so it doesn't really answer how they did it.

If anything this result seems to imply that, despite 'batch batch batch' being shoved down everyone's throats for the past X years, Valve are Doing It Wrong ;)

The "problem" faced by OpenGL right now is that;
a) "Everyone" already has a D3D11 path in place and are phasing out D3D9 engines and features
b) D3D11-a-like is the expected API for the XBox
c) The biggest OpenGL market, outside of Windows, is OSX
d) OSX is massively lagging cutting edge GL development (OSX10.7 supporting 3.2; 3 year old spec on a 1 year old machine)
e) Not including Compute or requiring OpenCL interop in the core was also a mistake imo

Between feature lag in the API, an established API and tools path already in existence, and OSX's lag, most AAA studios have no interest in it right now; however, I suspect they will be watching Valve's experience closely, and when the first numbers come out with regards to market share this might change.

(On a personal note, while I once preferred GLSL's way of doing things to HLSL's when it came to declaring inputs and varyings, all that 'layout' stuff is just horrible - I'd love to know what they were smoking when they decided on that. HLSL really nails that. Same with providing things like batch ids etc. via semantic-tagged inputs into the main function vs globals. Really not a fan of 'magic' globals in my shaders these days; I prefer to be able to name and require them myself - it's the gl<Matrix name> thing all over again.)

[quote name='bvanevery' timestamp='1344372756' post='4967137']
I think people are over-excited about Valve's contribution to all of this. Desura has already been providing a viable portal for Linux games for some time now. I don't see Valve's entry as groundbreaking; rather, they do not want to be left behind. The endorsement of Valve however is good for OpenGL because Desura doesn't have the brand name recognition or pull that Valve has.
[/quote]

It's maybe not groundbreaking, but it's certainly [i]important[/i], as Valve are a major studio with no small measure of influence and Steam is utterly gigantic for content delivery; being something of a de-facto standard in the Windows world.

There's not much point over-analysing Valve's L4D profiling results; I wouldn't be distracted by them.
Win7+D9 = 270.6Hz = 3.7ms
Win7+GL = 303.4Hz = 3.3ms
Difference = 0.4ms.

Firstly, if a game is running at >60Hz, then performance has already been achieved. Secondly, 0.4ms is a trivial difference in your renderer's CPU usage. Thirdly, D9 is known to have a lot of overhead, so these numbers should be expected.
If two radically different approaches only differ by half a milli, then I'd go with the one that's simpler to write/maintain. If it's critical for you to shave half a milli off your frame times, then you've got bigger problems to deal with![hr]So, uh, Steam for Linux driving GL adoption and GL4.3 catching up to the D11 feature set, cool! Edited by Hodgman
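(For anyone who wants to double-check those conversions, frame time in milliseconds is just 1000 divided by the frame rate - a trivial sketch:)

[code]
#include <cstdio>

int main()
{
    const double fps[] = { 270.6, 303.4 };   // Valve's published Win7 numbers
    for (double f : fps)
        std::printf("%6.1f fps = %.2f ms/frame\n", f, 1000.0 / f);
    // Prints: 270.6 fps = 3.70 ms/frame, 303.4 fps = 3.30 ms/frame -> a 0.4 ms gap.
    return 0;
}
[/code]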

[quote name='phantom' timestamp='1344417475' post='4967311']
Between feature lag in the API, an established API and tools path already in existence, and OSX's lag, most AAA studios have no interest in it right now; however, I suspect they will be watching Valve's experience closely, and when the first numbers come out with regards to market share this might change.
[/quote]

I don't think Valve's results will matter much for other studios (Linux market share is small, so reaching 1-2% extra customers isn't all that important). Valve's move has far more to do with indie game distribution (Linux is a rather big platform for indie sales) than with pushing their own games; AAA titles can get noticed by 100% of the Windows gamer market quite easily, indie games can't (but can quite easily get noticed by 100% of the Linux gamers). Edited by SimonForsman
