Hodge

OpenGL to lose the battle against DirectX 10

Recommended Posts

I personally prefer OpenGL over Direct3D. One of the biggest benefits of OpenGL is that it is open source, whereas DirectX is commercial. Both are considered required knowledge for a serious graphics programmer, but it seems to me that OpenGL is losing favor with developers. In the past (the DirectX 5 era) OpenGL was considered more powerful than DirectX, but a lot of developers now think this is no longer the case. I have read a lot of posts stating that OpenGL will lose its current support from graphics and game developers when DirectX 10 arrives. I personally refuse to believe that OpenGL will lose to Direct3D; even if DirectX has more books and more people supporting it, OpenGL will always be open source.

OpenGL is a specification, I think.

AFAIK, OpenGL isn't "open source". However, I believe there are open source implementations of it, but the one I'm thinking of (Mesa?) is a software renderer.

That said, I generally found it easy to learn OpenGL, whereas the insane amount of setup and handles that need to be learned put me off DirectX.

The major benefit of OpenGL is that it is cross-platform, as far as I'm concerned.

The other issue is that DirectX is a graphics/network/input/other-stuff API, while OpenGL is a graphics API.

They aren't directly comparable.

But I find SDL/OpenGL easier on the head. My 2 cents.

I will surely not "abandon" OpenGL to use Direct3D 10, since OpenGL is the only alternative (more or less) for multi-platform programming. And I'm pretty sure most game developers will keep supporting OpenGL.

I think OpenGL still has a lot of life in it. For purely Windows-based games, you'll likely see a big shift to DirectX 10 simply due to the native support in Windows Vista and (at least from Microsoft) a lack of support for OpenGL.

Of course, MS not supporting OpenGL won't have a huge impact, because ATI/NVIDIA will (likely) continue to support it, so users will have the necessary support available. As well, developers could always distribute the necessary runtime with their game (the same way they do with DX right now).

And don't forget, the PlayStation 3 is based on OpenGL (or rather OpenGL ES, or what they call PSGL). So game developers will certainly not be leaving OpenGL anytime soon. If anything, you'll likely see a resurgence in OpenGL now that a very close cousin/stepbrother is on the PS3.

That being said, I personally like DirectX *overall* in comparison to OpenGL, primarily due to the incredible amount of support provided by Microsoft. Purely at the API level, though, OpenGL is often "quicker" to get going and experiment with. As a result, most throw-away temporary renderers I've seen are implemented in OpenGL.
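To illustrate the "quicker to get going" point: below is a complete throw-away renderer in legacy OpenGL, a window plus a triangle via GLUT and immediate mode. This is only a minimal sketch; GLUT and every identifier in it are my choices for illustration, not anything from this thread. A comparable D3D program needs device creation, present parameters and a vertex buffer before the first triangle appears.

#include <GL/glut.h>   // any GLUT implementation works here

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);          // immediate mode: no buffers, no shaders
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.0f,  0.5f);
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("throwaway GL renderer");
    glutDisplayFunc(display);
    glutMainLoop();                 // never returns
    return 0;
}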

In any well designed engine, toggling between D3D and OGL (at least right now) is a trivial proposition. Sure it might require some QA, but the basic port is quick.
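For readers wondering what makes the toggle cheap in code, here is a minimal sketch of the usual indirection: the engine programs against an abstract renderer interface, and each API lives behind its own implementation. The interface and all class names are purely illustrative, not taken from any real engine.

struct Mesh;   // whatever the engine's mesh type happens to be

// The rest of the engine talks to this interface only.
class Renderer
{
public:
    virtual ~Renderer() {}
    virtual bool Init()            = 0;
    virtual void Draw(const Mesh&) = 0;
    virtual void Present()         = 0;
};

class GLRenderer : public Renderer
{
public:
    bool Init()            { /* wgl/glX context setup */ return true; }
    void Draw(const Mesh&) { /* glDrawElements(...) */ }
    void Present()         { /* SwapBuffers(...) */ }
};

class D3DRenderer : public Renderer
{
public:
    bool Init()            { /* Direct3DCreate9(...) */ return true; }
    void Draw(const Mesh&) { /* DrawIndexedPrimitive(...) */ }
    void Present()         { /* IDirect3DDevice9::Present(...) */ }
};

// Switching APIs is one branch at startup; QA the new path and ship.
Renderer* CreateRenderer(bool useGL)
{
    if (useGL)
        return new GLRenderer;
    return new D3DRenderer;
}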

Now, with D3D 10, OpenGL/ARB will have to tack on some kludges...excuse me, approve some extensions quickly in order to come up to snuff, or they will have serious problems. OpenGL is no longer driving innovation, as it was for several years. They're playing a pure catch-up game now, and if they mess it up, they'll be relegated to the sidelines (i.e. alternate OSes).

All things considered, OpenGL already is "relegated to the sidelines." There's no real compelling reason to use OGL if all you're developing for is Windows, and since that's 99%-ish of the PC game market, very few developers are going to do ports anyway. For someone in it purely for commercial reasons, DirectX is definitely more attractive.

That said, like Promit mentioned: what API an engine runs on is pretty insignificant anymore. A well designed engine should be able to port from one to the other in a negligible amount of time. The days are gone when an engine would be pitched as "DirectX based" or "OpenGL based". It's all about the feature set and the tools now.

OpenGL is never going to "die": there are too many modeling apps and the like that use it, and too many older games that need it. The card manufacturers will continue to support it, regardless of how badly MS neglects it, and it will continue to be used for ports and various consoles. By and large, though, DirectX is the standard now, and that's the way it looks to stay for a long time.

(Note: I develop primarily using OpenGL, so don't take me to be a DirectX fanboy or anything. I'm just stating the facts.)

Ah, the "OpenGL is going to diieee!!!11" post of the week. It was already overdue.

*sigh*

Anyway. Will OpenGL lose the "battle" against DX10? Nope, it won't. Let's look at some facts, shall we? First, an API doesn't "drive innovation" anymore; those times are long over. The hardware vendors drive innovation; no, they push it at all costs. Both major APIs just try to keep up with it.

And MS has an impressive track record of completely failing at this "catch the new GPU feature" game. The DX9 instancing fiasco is a perfect example, where hardware vendors started rebelling against MS's API policies by even technically circumventing them. MS learned their lesson (well, partially at least), and made DX10 broader in feature scope than anything done before.

OpenGL doesn't work this way, because it uses a different release model. OpenGL evolves continuously, while D3D does it in jumps. It has always been the same: DX is a step ahead of OpenGL when a new version is released. Then, after some time, both are on par again. And finally, OpenGL gets the lead, because new GPU features get released as GL extensions. Repeat the cycle for the next version.

So yeah, DX10 has features current OpenGL implementations don't have. So what? OpenGL will get them as soon as the demand is there. And just wait until GPUs have surpassed DX10 features. Then OGL will lead again, until DX11. And so on. It's just the way the two update models work.
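To make the extension mechanism concrete: this is roughly what picking up a new GPU feature looks like in OpenGL before any spec revision, i.e. detect the extension string, then fetch the entry point at runtime. A minimal Windows-flavored sketch; GL_ARB_multitexture is chosen only as a familiar example, a GL context must already be current, and a real program would match extension names exactly rather than with a naive strstr.

#include <windows.h>   // wglGetProcAddress, APIENTRY
#include <GL/gl.h>
#include <string.h>

typedef void (APIENTRY *PFNGLACTIVETEXTUREARBPROC)(GLenum texture);
PFNGLACTIVETEXTUREARBPROC glActiveTextureARB = 0;

bool LoadMultitexture()
{
    // Naive substring check; fine for a sketch.
    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    if (!ext || !strstr(ext, "GL_ARB_multitexture"))
        return false;               // driver doesn't advertise the feature

    glActiveTextureARB = (PFNGLACTIVETEXTUREARBPROC)
        wglGetProcAddress("glActiveTextureARB");
    return glActiveTextureARB != 0;
}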

Quote:

There's no real compelling reason to use OGL if all you're developing for is Windows,

And there is no real compelling reason not to use it.

Quote:

and since that's 99%-ish of the PC game market, very few developers are going to do ports anyway. For someone in it purely for commercial reasons, DirectX is definitely more attractive.

How so?

In fact, it's more a question of which graphics framework game companies license from third parties. Most companies won't give a shit about whether the engine they buy is D3D or OGL, as long as it's cutting edge and not too expensive. The reason more 3D engines in the game sector currently use D3D is historical (dating back around two or three years). Theoretically, this could change again at any time.

Some people misunderstood my original post, and it's entirely my fault for just putting up another attention-getting, poorly written OpenGL post. Here is one post by the moderator Yann L that shows how I undermined my own argument about OpenGL losing popularity.

Quote:

Ah, the "OpenGL is going to diieee!!!11" post of the week. It was already overdue.

*sigh*



Well, I never did say that OpenGL was going to "diieeee!!11". What I was trying to get at is that a lot of developers are using Direct3D where OpenGL would work better. Take Carmack, for instance: he was a long-term OpenGL guy, and on the latest Doom installment he worked with DirectX 8, stating that DX8 at the time had the best pixel shader and vertex shader support.

Perhaps I am just overreacting to Microsoft's success with DirectX. But I wish more individuals would avoid DirectX just because Microsoft is a 10-ton corporation with a "mostly" bad rep.

On a side note, I also f**ked up by stating that OpenGL is open source. OpenGL by itself isn't; I should have said it is the current industry standard. I was actually thinking of Mesa, which is an open source implementation of OpenGL. I also forgot to state that OpenGL is cross-platform and that too many computer users are locked into using Windows when they have other options. BTW, sorry about my piss-poor writing style and all my English errors.

Quote:
Original post by Hodge
Some people misunderstood my original post, and it's entirely my fault for just putting up another attention-getting, poorly written OpenGL post. Here is one post by the moderator Yann L that shows how I undermined my own argument about OpenGL losing popularity.

I wasn't specifically referring to your post. This is yet another standard D3D versus OGL thread. Whatever the exact way of starting it might be, it will always end in the same way, trust me on that. People always use the same fallacious arguments, ignore facts, and twist reality to fit with whatever API they might prefer. Again, this is not targeted at you in particular, it's just a typical property of such threads: objectivity dies first.

Quote:

Well, I never did say that OpenGL was going to "diieeee!!11". What I was trying to get at is that a lot of developers are using Direct3D where OpenGL would work better. Take Carmack, for instance: he was a long-term OpenGL guy, and on the latest Doom installment he worked with DirectX 8, stating that DX8 at the time had the best pixel shader and vertex shader support.

Doom 3 uses OpenGL, except for the XBox version (obviously).

Quote:

Perhaps I am just overreacting to Microsoft's success with DirectX. But I wish more individuals would avoid DirectX just because Microsoft is a 10-ton corporation with a "mostly" bad rep.

And that is supposed to be a valid reason to avoid it? Select an API based on objective technical assessments. Select it based on your personal preference for its semantics. Select it based on driver, OS or platform support. But avoiding an API because it is made by "the big evil M$" is just stupid. No offense intended, but I smell zealotry from miles around.

Quote:

I also forgot to state that OpenGL is cross-platform and that too many computer users are locked into using Windows when they have other options.

Windows is the industry standard on consumer level platforms right now. Deal with it. As an indie developer, you have the choice: use Windows or Linux, use D3D or OGL. Use what you prefer, or what might be better suited to your particular needs. Choice is always good.

But if you are a large-scale game development studio, your priority is not to bring down Windows or D3D and usher in the next OS revolution. Your priority is to get the game done on schedule, to reduce costs, and to make shareholders and/or publishers happy. What API is used to do that is completely irrelevant. More often than not, the techies (who might be able to judge the pros and cons of an API) are not even involved in the selection process. The purchasing and finance departments will choose whatever engine seems most suitable in terms of minimizing investment and ensuring a fast ROI.

In the industry, there are no OGL versus D3D arguments. The question doesn't even exist. It's all about the bottom line. Create a groundbreaking new OpenGL-based 3D engine that sells cheaper than existing engines, and suddenly everybody will use OpenGL. Do the same with D3D, and the balance tips in the other direction again. Right now, it's the latter. Tomorrow, everything could change again.

This is what Yann L was talking about when he said people always use the same fallacious arguments, like "the big evil M$" is always bad.
Quote:
Original post by Anonymous Poster
OpenGL isn't going to die ... but Micro$oft is sure trying to kill it in the new Window$ (vista)!

Is there any chance that Micro$$$oft will be able to exclude OpenGL from Window$?
Something like: if you want your game to run on Linux, Unix, Free BSD, MacOS, HP calculators, alarm clocks, etc then you'll use OpenGL, but if you want to run it on Window$ then you WILL have to use DX (because a spinning cube will run at 10fps on OGL)?


I am actually running a build of the newest Windows (Longhorn), and prior to typing this post I found that it does indeed work quite well. The thing that gets me about Microsoft (and I am repeating myself) is that they do everything in their power to keep people "stuck" using their products. Often I find that people aren't even aware that they can use some of their favorite software and APIs on other platforms. I did, however, agree with you on this statement:

Quote:
Original post by Yann L
Select an API based on objective technical assessments. Select it based on your personal preference for its semantics. Select it based on driver, OS or platform support.

Well said, but any good developer should think about their customers' OS preference rather than their own. This is why I would discourage using DirectX: you simply can't use it on a Mac or Unix machine. BTW, id Software first built the newer Doom engine with support for DirectX, then added OpenGL. It took quite a while for them to successfully port the game to Unix and Mac.

Quote:
Original post by Hodge
This is what Yann L was talking about when he said people always use the same fallacious arguments, like "the big evil M$" is always bad.

Exactly.

Quote:
Original post by Hodge
The thing that gets me about Microsoft (and I am repeating myself) is that they do everything in their power to keep people "stuck" using their products.

Of course, and that's a valid point against them. That's also why they have been subject to numerous antitrust lawsuits over the years, and probably will be in the future. But this should not be relevant when judging the viability of an API for a commercial project.


Quote:
Well said, but any good developer should think about their customers' OS preference rather than their own. This is why I would discourage using DirectX: you simply can't use it on a Mac or Unix machine.

Irrelevant. Mac and Linux are not viable game platforms right now. If your customer base is on Mac or a Unix derivative, then the question doesn't even pose itself: OpenGL is the only alternative available. If you target Windows, select OGL or D3D on technical or financial grounds. There is no reason to discourage the use of an API just because it might not work on platforms you don't even target.

Quote:

BTW, id Software first built the newer Doom engine with support for DirectX, then added OpenGL. It took quite a while for them to successfully port the game to Unix and Mac.

Nope. Doom 3 has always used OpenGL, from the very beginning. It was recently ported to D3D for the Xbox version, but that's it. You're thinking of non-graphics DX features, such as input and sound.

God, I get sick of people stating that id used DX first and then went to GL. id has always used GL for all their game engines since Quake 2, if I remember right. I'm not certain, but I assume Unreal 3 is using DX, though the game is being ported to GL by Ryan Gordon to run on the Mac... I know their toolset will be usable on all platforms, so I am unsure what they are using to allow this.

id has always used OpenGL for rendering; before that it was all software. And id weren't the ones responsible for porting Doom 3 to Direct3D (for the Xbox).

(Now for my 2 cents)

OpenGL will never die; even if it fails in the games industry, it will always be alive in the everything-but-games graphics industries. Without the games industry, Direct3D wouldn't exist at all.

Microsoft has made a lot of controversial decisions in DirectX 10 and Vista... for example, everything pre-DX10 will be layered on top of it, as it has no native support, so DX7/8/9 games will suffer on Vista. However, OpenGL still has a chance: NVIDIA, ATI and the other IHVs have spent billions on OpenGL, and I don't think they will be too keen on Microsoft wasting that. Not to mention the software vendors that develop 3D graphics applications (which means OpenGL).

Even if OpenGL does suffer and has to take a backseat on Vista, it will still be there. And it will just push more people and businesses to Linux; not the majority, but some.

-Twixn-

Guest Anonymous Poster
The funny part is that Hodge doesn't even know some things about OpenGL, and then puts up the id example. id and DX? What is the point? Just because Carmack ported his game to the Xbox doesn't mean all of us have to use DX. He is not a god, please.

"I was thinking of Mesa", nice excuse for poor knowledge.
"BTW, id Software first built the newer Doom engine with support for DirectX, then added OpenGL", yeah, yeah.

Same words as Yann: from far, far away (I'm from Chile xD) I'm smelling some zealotry.

As I see it, this is just another D3D vs OGL thread xD

Vicente
my English sucks :P

OpenGL surely will never die, but the most compelling evidence pointing to its fading from mainstream markets is the fact that the next iteration of Windows is not going to natively support OpenGL (or so I've heard). This fact alone should definitely change the landscape, and will probably be a determining factor in many studios' API of choice. I think it's a terrible idea for Bill not to support it natively, as it completely destroys backwards compatibility with many applications and narrows the scope of applications that can run on a strictly Windows-based system (that is, unless the user installs an OpenGL runtime, which they should!). However, Microsoft is solely interested in market share and money. I assume this is a strategic position, albeit IMO a stupid one. I frickin' love me some OpenGL.

Someone had better tell Sony that OpenGL is dead; they don't seem to know about it.

Besides, Microsoft doesn't write graphics card drivers; ATI and NVIDIA do. Whether programs running on the new Windows can use OpenGL simply is not up to Microsoft.

The new Windows uses DirectX to render its GUI. How many people will notice that the new Windows GUI has been disabled when they're running a fullscreen game?

Guest Anonymous Poster
It's true that the GPU manufacturers are making the innovations; they are making the hardware, obviously. BUT I think MS also has a say in this, because they demand certain features from the GPUs for DX10.

Would we be getting geometry shaders in GPUs if MS had left them out of DX10? I am not saying MS invented geometry shaders or anything; they have been thought about for a long time. But if it were left solely to the IHVs, would they have come up with geometry shaders at this point? Would they have agreed on how they should be exposed? I don't really see this happening in the ARB. Just look at how long it took to get FBO, and it might still get minor changes. D3D, by contrast, specified the interface up front, and if it turns out to be wrong, they will update it in the version after DX10. The ARB tries to get things right before anything enters the core, because unlike D3D, OpenGL retains the same API across versions, and hence they are very cautious about modifying the core API (which is good). But for the end user, it only means a much longer wait before features are exposed in a single interface across all hardware.
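For reference, this is the feature whose standardization took so long: a minimal render-to-texture sketch with EXT_framebuffer_object. It assumes the EXT entry points have already been loaded and a GL context is current; the 256x256 RGBA8 target is an arbitrary choice for illustration.

GLuint fbo, tex;

// Create the texture that will receive the rendering.
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// Create the framebuffer and attach the texture as its color target.
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);

if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) == GL_FRAMEBUFFER_COMPLETE_EXT)
{
    // ... draw here; the output lands in tex ...
}
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);   // back to the window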

So now MS specifies how geometry shaders are to be used (after discussions with the IHVs, of course), and they say GPUs must absolutely have them for DX10. MS has that authority, whereas there is no such authority in the ARB. On top of that, MS slaps NDAs on the IHVs. Have geometry shaders been discussed in the ARB yet? Or texture arrays? I only see details of these after the DX10 info has been made public. MS should hold a more open discussion, but they will not.

I am only using geometry shaders as an example. Some might say it's not the right way to do it, or that it might not be very useful, but at least we are getting some big changes.

D3D also has the advantage that newbies just need to get the DX SDK and they are set up. They have lots of examples. They have D3DX, which handles a lot of tedious things that newbies have to write themselves with OpenGL, or else find third-party libraries for, which makes OpenGL less attractive. The problem with the ARB is that they don't do much to make OpenGL easier for newbies to pick up. There is no standard path for newbies: some find GLEW, some find some headers and learn that they have to fetch the function pointers at runtime, some write their own bitmap loaders, some use GLU, some use DevIL. All of this simply creates more problems for them, problems that D3D users don't have. We need a standard OpenGL SDK that tells newbies what to use and what not to.
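As an aside, this is what the GLEW route mentioned above amounts to in practice; a sketch assuming a GL context is already current, with the VBO extension used only as a familiar example.

#include <GL/glew.h>   // must be included before GL/gl.h

bool InitExtensions()
{
    if (glewInit() != GLEW_OK)      // fills in every known entry point
        return false;

    // Extension presence becomes a simple boolean test...
    if (GLEW_ARB_vertex_buffer_object)
    {
        // ...and the entry points are ordinary calls, no manual loading.
        GLuint vbo;
        glGenBuffersARB(1, &vbo);
        glDeleteBuffersARB(1, &vbo);
    }
    return true;
}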

I wish Sony would contribute a standard helper toolkit for OpenGL, since GL ES 2.0 is being used in the PS3. The Khronos Group seems to be getting things done. (And yes, I know that people from the ARB also participate in the Khronos Group.)

Quote:
Original post by rip-off
AFAIK, OpenGL isn't "open source". However, I believe there are open source implementations of it, but the one I'm thinking of (Mesa?) is a software renderer.

Mesa does hardware as well.

Quote:

We need a standard OpenGL SDK that tells newbies what to use and what not to.

I second that. OpenGL is great, but the user is often left to write helper functions themselves, unlike with Microsoft's Direct3D, which provides tons of utilities that do it for them. I really think they should make an OpenGL SDK.

That is a good idea, actually. Just bundle up some good open source stuff in a single downloadable package (preferably with a nice automatic installer for Windows people): for example GLFW, a decent matrix/vector library, DevIL, and some simple model loading code (lib3ds?). Then put up a webpage somewhere to direct people to.
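To give a feel for what such a package would buy a newcomer, here is a complete window-plus-render-loop using GLFW. This sketch uses the 2.x-era GLFW API (the one current at the time of this thread); the window size and buffer bit depths are arbitrary choices.

#include <GL/glfw.h>   // GLFW 2.x; the header pulls in GL itself

int main(void)
{
    if (!glfwInit())
        return 1;

    // width, height, R/G/B/A bits, depth bits, stencil bits, mode
    if (!glfwOpenWindow(640, 480, 8, 8, 8, 8, 24, 0, GLFW_WINDOW))
    {
        glfwTerminate();
        return 1;
    }

    while (glfwGetWindowParam(GLFW_OPENED))
    {
        glClear(GL_COLOR_BUFFER_BIT);
        // ... plain GL drawing goes here ...
        glfwSwapBuffers();          // also polls input in GLFW 2.x
    }

    glfwTerminate();
    return 0;
}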

Here is one quote, by an anonymous poster, that I enjoyed laughing at.


Quote:
Original post by Anonymous Poster
The funny part is that Hodge doesn't even know some things about OpenGL, and then puts up the id example. id and DX? What is the point? Just because Carmack ported his game to the Xbox doesn't mean all of us have to use DX. He is not a god, please.

"I was thinking of Mesa", nice excuse for poor knowledge.
"BTW, id Software first built the newer Doom engine with support for DirectX, then added OpenGL", yeah, yeah.

Same words as Yann: from far, far away (I'm from Chile xD) I'm smelling some zealotry.

As I see it, this is just another D3D vs OGL thread xD

Vicente
my English sucks :P



"He is not a god"? That's a smart thing to write in a post. Who here said that Carmack was a great developer? Not I (he is far from one). Carmack is just a famous figurehead who hooks nearly every nerd with a short attention span, simply because Doom and Quake were such successful game franchises (wow, id Software found out that killing pixels generates a large fan base), despite the latest Doom and Quake being sub-par games with high reviews claiming that graphics make a game's gameplay good, and because he is said to be a programming and graphics guru (he isn't as smart as developers suggest; he just works way too much). Besides, the graphics are already way out of date. Also, I think the engine isn't as good as people say it is (I am way off subject here).

Anyway, how come every single time someone posts something like "we could benefit from more game ports for Linux", everyone calls it zealotry, and every single time the word "Microsoft" comes up on the forum, the old monopoly topic comes along? The "extreme Linux zealotry" charge is usually a quick statement used against anyone even slightly in favor of open source. Not everyone who uses Linux is anti-Microsoft (I already said that I use Windows) or an open-source evangelist.

Linux and Mac do have a significant number of shortcomings. One could guess that one of their biggest problems has to do with computer users saying that they "will never leave Microsoft products because they do what I need". All I was saying is that everyone would benefit from more AAA game ports. It's true that developers might have to lower the graphics settings for the game, and they probably will not make as much money as they want, but I think it would benefit the game market in the long run.

I actually do know quite a lot about Direct3D and OpenGL. I am a little out of date on the latest developments, but I have read enough about them to post. I already said I have made a lot of uncorrected mistakes in my posts, so please don't make a post telling me that. Once again, I have tons of spelling and grammar mistakes in this post.

Guest Anonymous Poster
First off, what makes you say Carmack is a poor developer? You're going to have a tough time qualifying that, since he's the brains behind some of the greatest games ever made.

Secondly, why the attack on Doom 3? You think that just because you didn't like it, it was a bad game? Doom 3 is one of the best games I've ever played. Yes, it ultimately got boring (for me), but it provided me with some unforgettable gaming memories. On the graphics side of things, the technology is a year old now already, and it is targeted at graphics hardware four generations old. The only games I can think of that rival Doom 3 in terms of graphics are Far Cry and Riddick.

After those points, you seem to have gone on a bit of a bender, and I'm not sure what you're trying to say, so I'll leave it here.

I understand you might be pissed off by negative posts, and I don't condone people rating you down for that. But please, think about what you're saying before you insult one of the most talented (like it or not) and highly regarded developers in our industry.

