Ashkan

OpenGL wglShareLists and multiple render windows

This topic is 3690 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.


Recommended Posts

MSDN is the only credible source on wiggle (that I know of), and it's pretty much outdated, with plenty of missing functions, so there's usually some sort of dark magic involved if one is about to step outside the cozy shelter and enter the dark realm of lesser-used functions. wglShareLists() is one member of this family. MSDN states that this function tells the OpenGL server to share display-list space between several rendering contexts, but it never talks about other server-side resources. I'm assuming that other server-side resources such as textures and VBOs are also shared, but in my experience server-side states are not.

I've only been playing around with wglShareLists() for a few days, so my knowledge of the subject is limited, and my interpretation might be completely wrong, especially considering the scarce documentation that's available. Still, my experience is that server-side states (states set by glEnable/glDisable, for instance) are not shared between rendering contexts, so they must be set explicitly for every single rendering context, which is a pain in the rear end in applications with multiple render windows. These states tend to get out of sync and cause headaches in the long run. So my first question is this: Do you have any experience with wglShareLists()? What's your take on the aforementioned dilemma?

As far as I'm concerned, D3D's architecture is a whole lot cleaner in this regard. In D3D, pipeline states are stored inside the device object, which, as far as I know, is not directly related or bound to a window. These states are shared among all windows, which in turn are represented by swap chains. There is always at least one swap chain for each device, known as the implicit swap chain, but additional swap chains can be created for rendering multiple views from the same device.
In OpenGL, rendering contexts are bound to windows (to be exact, rendering contexts are bound to device contexts, which in turn are bound to windows), which makes sharing them non-trivial given the set of functions that are available. I even went as far as creating a dummy rendering context attached to a dummy window and using it as a hub to share other states, but wglShareLists doesn't allow sharing of states. So, do you know of a workaround? Did you find this issue problematic? Thanks

Share this post


Link to post
Share on other sites
Yes, MSDN is outdated. It will soon be updated *

wglShareLists is not such a great name for the function, but essentially it shares objects between 2 or more GL contexts:
- display lists (essentially replaced by VBO/IBO)
- VBO/IBO
- shaders
- textures
- FBO
- PBO

* just kidding

Quote:
I even went as far as creating a dummy rendering context that's attached to a dummy window and used that as a hub to share other states, but you see, wglShareLists doesn't allow sharing of states.


Try just making 1 GL context and use it for both windows.

Quote:
Original post by V-man
Yes, MSDN is outdated. It will soon be updated *

wglShareLists is not such a great name for the function, but essentially it shares objects between 2 or more GL contexts:
- display lists (essentially replaced by VBO/IBO)
- VBO/IBO
- shaders
- textures
- FBO
- PBO

* just kidding


You certainly got me for a moment [smile] I thought you worked for Microsoft or something [grin]

Quote:

Quote:
I even went as far as creating a dummy rendering context that's attached to a dummy window and used that as a hub to share other states, but you see, wglShareLists doesn't allow sharing of states.


Try just making 1 GL context and use it for both windows.


But how?! Each rendering context must be associated with a device context (wglCreateContext needs one), and each window has a device context of its own. So, I think the question boils down to this: how can several windows share a single rendering context when each has its own device context? Or am I missing something? A code snippet might help clarify these points.

Any help is greatly appreciated.

Yes, wglCreateContext takes an HDC as a parameter, but I think the driver doesn't care, as other people seem to have been doing this for a long time. I don't know exactly which constraints you must respect. Perhaps both windows must be created by the same process. It shouldn't matter whether the windows are created on the same thread or not. I think both windows must be on the same graphics card; if two graphics cards are used (like two NVIDIAs), the driver would have to do some management for it to work.
The pixel format you give to SetPixelFormat should be the same.

Quote:

....server-side states (states set by glEnable/glDisable, for instance) are not shared between rendering contexts, so they must be set explicitly for every single rendering context, which is a pain in the rear end in applications with multiple render windows. These states tend to get out of sync and cause headaches in the long run. So my first question is this: Do you have any experience with wglShareLists()? What's your take on the aforementioned dilemma?


First, the most important question I have is:
What are you using multiple windows for?

Then the others:
Can you give an example of "states get out of sync"?

My thinking is quite different (maybe I am just stuck thinking of making the best of what APIs have to offer to date?): I cannot see a real dilemma (until you answer the "states get out of sync" and other questions above, perhaps).

Quote:

...experience with wglShareLists()?

wglShareLists() can actually be well suited to MFC or Forms applications (once you get over its undocumented idiosyncrasies): Windows manages the, err.. windows, and each window's contents are managed (or rather rendered) by the graphics API of choice (obviously OpenGL, in this case). However, this really depends on your specific application too. The example application I have in mind is something like 3ds Max: having multiple views of the same scene, and even numerous other windows that may display other things such as materials/textures.

The only real disadvantage of not being able to share state changes with wglShareLists I can see is the one you have already mentioned: making unnecessary server-side state calls (e.g. having to re-glEnable()/re-glDisable() the same state after a context change). Ultimately, you could minimize the number of server-side state change calls, which is apparently always good (some would say critical) for maximum performance. And it's even better if you do not need to change any more states after that!

See any others?

From the point of view that states do change and each window has the option of rendering its contents differently: I would still want to manage/keep track of my state changes, whether they are 'global' to all (or more than one) contexts or only 'local' to each context. Interestingly, even if I could share state changes among contexts via wglShareLists(), I would still want to track state changes for what gets rendered in each window. I would hope for increased performance, but managing those state changes could become a little more complex, as I might then need to consider which states I am dealing with: the 'global' ones (currently shared by more than one window) or the 'local' ones (one window's current state). A minor price to pay if the performance increase is worth it. So, in a way, if you have a lot of state changes happening, your task of managing them becomes more complex, unless each state change applies to every window all the time (i.e. not only multiple windows, but duplicate windows? for what purpose?)

I imagine similar performance enhancements might be had by a programmer implementing such a scheme even in a one-window app, to avoid the notorious redundant state change. (By the by, non-pure DirectX devices can be more forgiving, filtering out redundant state changes.)

So, after all that, I would presume that you are really searching for a better way to manage your server-side state changes (your "states get out of sync" remark is indicative of this to me) rather than assuming server-side state sharing may be the solution to your pain?
Yes? No?

Terrible, aren't I, answering the question I wish you had asked, instead of answering the question you actually asked? Maybe I should become a politician (but it pays too much and I am not a very good liar).

Or maybe I am barking up the wrong tree? woof. It's easy to do.

Quote:

....So, do you know of a workaround? Did you find this issue problematic?

Since when is programming not problematic! ;)
I think I know what you mean: relatively speaking, no. You may just need to be more (boringly) systematic in your approach as the number of your state changes grows. It depends how many state changes per frame you have to manage.
Again, with your "states out of sync" problem, you either have hundreds/thousands, or you are finding it difficult to manage the few you have.

Tell us. Who knows, maybe "Try just making 1 GL context and use it for both windows" may just be more suitable. There's nothing wrong with throwing it out there as an option, but I see no evidence to presume it happens to be the most suitable one.

RE: wglShareLists() undocumented idiosyncrasies. E.g. I have found that in a Forms application the contexts of 4 views work properly only if wglShareLists() is called before any context is actually made current (i.e. call wglShareLists() before any calls to wglMakeCurrent())... go figure (the cause could be elsewhere? ...but this was a solution).

OK, maybe I went too far with my "states get out of sync" statement, or maybe you just based too much of your argument on that [smile]. Anyway, all I meant was that having to manage a different set of states for every single window is surely too much work. My argument was mainly the result of comparing OpenGL's unintuitive approach with D3D's more elegant way of handling things in this regard (that's just my opinion).

I haven't yet experimented with V-man's approach (i.e. having a single rendering context for multiple windows), but as quite a few people seem to suggest it (I asked the same question at the OpenGL.org forums and got the same answer; greedy me!), I'm going to assume that it works well, which as far as I'm concerned is miles better than using wglShareLists. But the downside to both approaches (1: having one rendering context for multiple windows, and 2: using wglShareLists to share server-side resources) is that all DCs must have the same pixel format, which is an even greater concern than the aforementioned state management issue. What if I want to enable anti-aliasing for one of my windows and disable it on all the others? What if I want several windows with different depth precisions? Should I load all server-side resources several times? You've gotta admit that this really sucks.

Quote:
Tell us. Who knows, maybe "Try just making 1 GL context and use it for both windows" may just be more suitable. There's nothing wrong with throwing it out there as an option, but I see no evidence to presume it happens to be the most suitable one.

So why do you prefer using wglShareLists()?

Quote:

RE: wglShareLists() undocumented idiosyncrasies. E.g. I have found that in a Forms application the contexts of 4 views work properly only if wglShareLists() is called before any context is actually made current (i.e. call wglShareLists() before any calls to wglMakeCurrent())... go figure (the cause could be elsewhere? ...but this was a solution).

You mean one should call wglShareLists() before ANY calls to wglMakeCurrent() or only before wglMakeCurrent() calls that take place on the rendering context that's to be shared?

Quote:
OK, maybe I went too far with my "states get out of sync" statement, or maybe you just based too much of your argument on that.


No problem. If I had to enter a plea it would be: guilty as charged.
I kept on thinking "how do you do that?", so I invented my little "server state mismanagement" agenda. Sorry.

So, are you now saying your states do not get out of sync? Did you try to go for a more dramatic effect or something? Well... it worked!

Quote:

My argument was mainly the result of comparing OpenGL's unintuitive approach with D3D's more elegant way of handling things in this regard (that's just my opinion).


That is easily evident from your original D3D comments. And I would slightly tend to agree (if I forced myself).

Quote:

..but the downside to both approaches (1: having one rendering context for multiple windows, and 2: using wglShareLists to share server-side resources) is that all DCs must have the same pixel format, which is an even greater concern than the aforementioned state management issue. What if I want to enable anti-aliasing for one of my windows and disable it on all the others? What if I want several windows with different depth precisions? Should I load all server-side resources several times? You've gotta admit that this really sucks.


Fair enough. Being in a pragmatic state of mind, I guess the majority of what's left of the OpenGL graphics programming community has bitten the bullet (waiting... and waiting... and waiting... for... what is it... that new thing... OpenGL 3.0?) and tolerated it to date, if they haven't already moved over exclusively to DirectX. If there was an actual specific problem you were trying to present, maybe I could respond in a better way (I'll assume you just needed to express your frustration).

Quote:

So why do you prefer using wglShareLists()?

I cannot really give you any compelling reasons why. It's the first thing I found that allowed me to share the resources listed above across as many windows as I want. Pretty simple really, and it's worked fine ever since. Now, I guess if someone gave me a compelling reason to simply change to one rendering context for multiple windows (e.g. increase my current 260fps in 4 views to, say... I dunno... 300fps), I might consider not using it any longer (and go with the flow? ...hoping that as little server state-changing/management code as possible would need to be reorganised to make it worth it... now that would be a dilemma!).

Quote:

I'm going to assume that it works well, which as far as I'm concerned is miles better than using wglShareLists

I'd like to hear how it goes. Give me a compelling reason to do likewise and I won't be able to resist joining you.

Quote:

You mean one should call wglShareLists() before ANY calls to wglMakeCurrent() or only before wglMakeCurrent() calls that take place on the rendering context that's to be shared?


Sorry about that. To clarify:
Call wglShareLists() on shared contexts before any of those shared contexts are made current with wglMakeCurrent(). That implies wglMakeCurrent() can be called before wglShareLists(), as long as it is called on a context you do not intend to be a sharer/sharee (hope that's clearer - it only occurs in C++ Forms applications; no such problem in MFC apps). (Incidentally, you should view this problem for what it is: an isolated incident... until it has been shown to be repeatable by others - as you said - since there's not much info around about wglShareLists().)

Quote:
What if I want to enable anti-aliasing for one of my windows and disable that on all others? What if I want to have several windows with different depth percisions? Should I load all server-side resource several times? You've gotta admit that this really sucks.


Yes, that is correct.
In general, like the previous person said, it's better to just have one window with multiple views, just like 3ds Max and many other CAD and content creation applications do.

Quote:

Yes, that is correct.
In general, like the previous person said, it's better to just have one window with multiple views, just like 3ds Max and many other CAD and content creation applications do.

Now you are confusing me.

Apparently, 3ds Max used to be a slightly different beast again:

"Since MAX is highly multi-threaded, it is absolutely imperative that the OpenGL driver be thread safe. In particular, MAX maintains one "draw thread" per viewport (four total), and these threads create and hold on to their own OpenGL rendering contexts (OGLRC) for the entire run of MAX. In more detail, the contexts are created and made current at the beginning of the MAX session (one context per each of the four drawing threads), and the four contexts remain current in their respective threads until MAX is terminated. "

If this ancient history from here is to be believed, some pertinent questions would be:

(1) Would said OpenGL contexts have been shared? (an almost rhetorical question?)

(2) Does it apply to the latest 3ds Max today? Another almost rhetorical question, but more unanswerable than rhetorical.

Thanks to both of you guys.

Quote:

Quote:

My argument was mainly the result of comparing OpenGL's unintuitive approach with D3D's more elegant way of handling things in this regard (that's just my opinion).


That is easily evident from your orginal D3D comments. And I would slightly tend to agree (if I forced myself).

I'd like to add that I'm not bashing OpenGL in any way... and not that you implied I'm influenced by such a mentality. Far from it. I just wanted to clarify that I like both APIs the same, and my only purpose is to get myself acquainted with their quirks. D3D has its own weaknesses too.

Quote:

Quote:

..but the downside to both approaches (1: having one rendering context for multiple windows, and 2: using wglShareLists to share server-side resources) is that all DCs must have the same pixel format, which is an even greater concern than the aforementioned state management issue. What if I want to enable anti-aliasing for one of my windows and disable it on all the others? What if I want several windows with different depth precisions? Should I load all server-side resources several times? You've gotta admit that this really sucks.


Fair enough. Being in a pragmatic state of mind, I guess the majority of what's left of the OpenGL graphics programming community has bitten the bullet (waiting... and waiting... and waiting... for... what is it... that new thing... OpenGL 3.0?) and tolerated it to date, if they haven't already moved over exclusively to DirectX. If there was an actual specific problem you were trying to present, maybe I could respond in a better way (I'll assume you just needed to express your frustration).


It seems that the community has been waiting for OpenGL 3.0 forever...

Back to our discussion: to reiterate my earlier question, do you know of an approach that allows several windows to have different pixel formats while server-side resources are still shared?

Quote:

Quote:

So why do you prefer using wglShareLists()?

I cannot really give you any compelling reasons why. It's the first thing I found that allowed me to share the resources listed above across as many windows as I want. Pretty simple really, and it's worked fine ever since. Now, I guess if someone gave me a compelling reason to simply change to one rendering context for multiple windows (e.g. increase my current 260fps in 4 views to, say... I dunno... 300fps), I might consider not using it any longer (and go with the flow? ...hoping that as little server state-changing/management code as possible would need to be reorganised to make it worth it... now that would be a dilemma!).

Quote:

I'm going to assume that it works well, which as far as I'm concerned is miles better than using wglShareLists

I'd like to hear how it goes. Give me a compelling reason to do likewise and I won't be able to resist joining you.

It addresses the first issue: state management. You won't need to keep track of states for different rendering contexts, since all states reside in a single context. It simplifies the design to some extent. Of course, you'd still need some form of state management to cull redundant state changes.

Quote:

Quote:

You mean one should call wglShareLists() before ANY calls to wglMakeCurrent() or only before wglMakeCurrent() calls that take place on the rendering context that's to be shared?


Sorry about that. To clarify:
Call wglShareLists() on shared contexts before any of those shared contexts are made current with wglMakeCurrent(). That implies wglMakeCurrent() can be called before wglShareLists(), as long as it is called on a context you do not intend to be a sharer/sharee (hope that's clearer - it only occurs in C++ Forms applications; no such problem in MFC apps). (Incidentally, you should view this problem for what it is: an isolated incident... until it has been shown to be repeatable by others - as you said - since there's not much info around about wglShareLists().)


Thanks for the tip.

Sign in to follow this  

  • Advertisement
  • Advertisement
  • Popular Tags

  • Similar Content

    • By Lewa
      So, i'm still on my quest to unterstanding the intricacies of HDR and implementing this into my engine. Currently i'm at the step to implementing tonemapping. I stumbled upon this blogposts:
      http://filmicworlds.com/blog/filmic-tonemapping-operators/
      http://frictionalgames.blogspot.com/2012/09/tech-feature-hdr-lightning.html
      and tried to implement some of those mentioned tonemapping methods into my postprocessing shader.
      The issue is that none of them creates the same results as shown in the blogpost which definitely has to do with the initial range in which the values are stored in the HDR buffer. For simplicity sake i store the values between 0 and 1 in the HDR buffer (ambient light is 0.3, directional light is 0.7)
      This is the tonemapping code:
      vec3 Uncharted2Tonemap(vec3 x) { float A = 0.15; float B = 0.50; float C = 0.10; float D = 0.20; float E = 0.02; float F = 0.30; return ((x*(A*x+C*B)+D*E)/(x*(A*x+B)+D*F))-E/F; } This is without the uncharted tonemapping:
      This is with the uncharted tonemapping:
      Which makes the image a lot darker.
      The shader code looks like this:
      void main() { vec3 color = texture2D(texture_diffuse, vTexcoord).rgb; color = Uncharted2Tonemap(color); //gamma correction (use only if not done in tonemapping code) color = gammaCorrection(color); outputF = vec4(color,1.0f); } Now, from my understanding is that tonemapping should bring the range down from HDR to 0-1.
      But the output of the tonemapping function heavily depends on the initial range of the values in the HDR buffer. (You can't expect to set the sun intensity the first time to 10 and the second time to 1000 and excpect the same result if you feed that into the tonemapper.) So i suppose that this also depends on the exposure which i have to implement?
      To check this i plotted the tonemapping curve:
      You can see that the curve goes only up to around to a value of 0.21 (while being fed a value of 1) and then basically flattens out. (which would explain why the image got darker.)
       
      My guestion is: In what range should the values in the HDR buffer be which then get tonemapped? Do i have to bring them down to a range of 0-1 by multiplying with the exposure?
      For example, if i increase the values of the light by 10 (directional light would be 7 and ambient light 3) then i would need to divide HDR values by 10 in order to get a value range of 0-1 which then could be fed into the tonemapping curve. Is that correct?
    • By nOoNEE
      i am reading this book : link
      in the OpenGL Rendering Pipeline section there is a picture like this: link
      but the question is this i dont really understand why it is necessary to turn pixel data in to fragment and then fragment into pixel could please give me a source or a clear Explanation that why it is necessary ? thank you so mu
       
       
    • By Inbar_xz
      I'm using the OPENGL with eclipse+JOGL.
      My goal is to create movement of the camera and the player.
      I create main class, which create some box in 3D and hold 
      an object of PlayerAxis.
      I create PlayerAxis class which hold the axis of the player.
      If we want to move the camera, then in the main class I call to 
      the func "cameraMove"(from PlayerAxis) and it update the player axis.
      That's work good.
      The problem start if I move the camera on 2 axis, 
      for example if I move with the camera right(that's on the y axis)
      and then down(on the x axis) -
      in some point the move front is not to the front anymore..
      In order to move to the front, I do
      player.playerMoving(0, 0, 1);
      And I learn that in order to keep the front move, 
      I need to convert (0, 0, 1) to the player axis, and then add this.
      I think I dont do the convert right.. 
      I will be glad for help!

      Here is part of my PlayerAxis class:
       
      //player coordinate float x[] = new float[3]; float y[] = new float[3]; float z[] = new float[3]; public PlayerAxis(float move_step, float angle_move) { x[0] = 1; y[1] = 1; z[2] = -1; step = move_step; angle = angle_move; setTransMatrix(); } public void cameraMoving(float angle_step, String axis) { float[] new_x = x; float[] new_y = y; float[] new_z = z; float alfa = angle_step * angle; switch(axis) { case "x": new_z = addVectors(multScalar(z, COS(alfa)), multScalar(y, SIN(alfa))); new_y = subVectors(multScalar(y, COS(alfa)), multScalar(z, SIN(alfa))); break; case "y": new_x = addVectors(multScalar(x, COS(alfa)), multScalar(z, SIN(alfa))); new_z = subVectors(multScalar(z, COS(alfa)), multScalar(x, SIN(alfa))); break; case "z": new_x = addVectors(multScalar(x, COS(alfa)), multScalar(y, SIN(alfa))); new_y = subVectors(multScalar(y, COS(alfa)), multScalar(x, SIN(alfa))); } x = new_x; y = new_y; z = new_z; normalization(); } public void playerMoving(float x_move, float y_move, float z_move) { float[] move = new float[3]; move[0] = x_move; move[1] = y_move; move[2] = z_move; setTransMatrix(); float[] trans_move = transVector(move); position[0] = position[0] + step*trans_move[0]; position[1] = position[1] + step*trans_move[1]; position[2] = position[2] + step*trans_move[2]; } public void setTransMatrix() { for (int i = 0; i < 3; i++) { coordiTrans[0][i] = x[i]; coordiTrans[1][i] = y[i]; coordiTrans[2][i] = z[i]; } } public float[] transVector(float[] v) { return multiplyMatrixInVector(coordiTrans, v); }  
      and in the main class i have this:
       
      public void keyPressed(KeyEvent e) { if (e.getKeyCode()== KeyEvent.VK_ESCAPE) { System.exit(0); //player move } else if (e.getKeyCode()== KeyEvent.VK_W) { //front //moveAmount[2] += -0.1f; player.playerMoving(0, 0, 1); } else if (e.getKeyCode()== KeyEvent.VK_S) { //back //moveAmount[2] += 0.1f; player.playerMoving(0, 0, -1); } else if (e.getKeyCode()== KeyEvent.VK_A) { //left //moveAmount[0] += -0.1f; player.playerMoving(-1, 0, 0); } else if (e.getKeyCode()== KeyEvent.VK_D) { //right //moveAmount[0] += 0.1f; player.playerMoving(1, 0, 0); } else if (e.getKeyCode()== KeyEvent.VK_E) { //moveAmount[0] += 0.1f; player.playerMoving(0, 1, 0); } else if (e.getKeyCode()== KeyEvent.VK_Q) { //moveAmount[0] += 0.1f; player.playerMoving(0, -1, 0); //camera move } else if (e.getKeyCode()== KeyEvent.VK_I) { //up player.cameraMoving(1, "x"); } else if (e.getKeyCode()== KeyEvent.VK_K) { //down player.cameraMoving(-1, "x"); } else if (e.getKeyCode()== KeyEvent.VK_L) { //right player.cameraMoving(-1, "y"); } else if (e.getKeyCode()== KeyEvent.VK_J) { //left player.cameraMoving(1, "y"); } else if (e.getKeyCode()== KeyEvent.VK_O) { //right round player.cameraMoving(-1, "z"); } else if (e.getKeyCode()== KeyEvent.VK_U) { //left round player.cameraMoving(1, "z"); } }  
      finallt found it.... i confused with the transformation matrix row and col. thanks anyway!
    • By Lewa
      So, i'm currently trying to implement an SSAO shader from THIS tutorial and i'm running into a few issues here.
      Now, this SSAO method requires view space positions and normals. I'm storing the normals in my deferred renderer in world-space so i had to do a conversion and reconstruct the position from the depth buffer.
      And something there goes horribly wrong (which has probably to do with worldspace to viewspace transformations).
      (here is the full shader source code if someone wants to take a look at it)
      Now, i suspect that the normals are the culprit.
      vec3 normal = ((uNormalViewMatrix*vec4(normalize(texture2D(sNormals, vTexcoord).rgb),1.0)).xyz); "sNormals" is a 2D texture which stores the normals in world space in a RGB FP16 buffer.
      Now i can't use the camera viewspace matrix to transform the normals into viewspace as the cameras position isn't set at (0,0,0), thus skewing the result.
      So what i did is to create a new viewmatrix specifically for this normal without the position at vec3(0,0,0);
      //"camera" is the camera which was used for rendering the normal buffer renderer.setUniform4m(ressources->shaderSSAO->getUniform("uNormalViewMatrix"), glmExt::createViewMatrix(glm::vec3(0,0,0),camera.getForward(),camera.getUp())//parameters are (position,forwardVector,upVector) ); Though i have the feeling this is the wrong approach. Is this right or is there a better/correct way of transforming a world space normal into viewspace?
    • By HawkDeath
      Hi,
      I'm trying mix two textures using own shader system, but I have a problem (I think) with uniforms.
      Code: https://github.com/HawkDeath/shader/tree/test
      To debug I use RenderDocs, but I did not receive good results. In the first attachment is my result, in the second attachment is what should be.
      PS. I base on this tutorial https://learnopengl.com/Getting-started/Textures.


    • By norman784
      I'm having issues loading textures, as I'm clueless on how to handle / load images maybe I missing something, but the past few days I just google a lot to try to find a solution. Well theres two issues I think, one I'm using Kotlin Native (EAP) and OpenGL wrapper / STB image, so I'm not quite sure wheres the issue, if someone with more experience could give me some hints on how to solve this issue?
      The code is here, if I'm not mistaken the workflow is pretty straight forward, stbi_load returns the pixels of the image (as char array or byte array) and you need to pass those pixels directly to glTexImage2D, so a I'm missing something here it seems.
      Regards
    • By Hashbrown
      I've noticed in most post processing tutorials several shaders are used one after another: one for bloom, another for contrast, and so on. For example: 
      postprocessing.quad.bind() // Effect 1 effect1.shader.bind(); postprocessing.texture.bind(); postprocessing.quad.draw(); postprocessing.texture.unbind(); effect1.shader.unbind(); // Effect 2 effect2.shader.bind(); // ...and so on postprocessing.quad.unbind() Is this good practice, how many shaders can I bind and unbind before I hit performance issues? I'm afraid I don't know what the good practices are in open/webGL regarding binding and unbinding resources. 
      I'm guessing binding many shaders at post processing is okay since the scene has already been updated and I'm just working on a quad and texture at that moment. Or is it more optimal to put shader code in chunks and bind less frequently? I'd love to use several shaders at post though. 
      Another example of what I'm doing at the moment:
      1) Loop through GameObjects: bind each one's Phong shader (send color, shadow, specular, and normal samplers), then unbind all.
      2) At post: bind the post-processor quad, loop through the different shader effects, binding each, and so on...
      Thanks all! 
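      For reference, the usual structure for a chain of full-screen effects is a "ping-pong" between two framebuffers: each pass samples the texture the previous pass wrote and renders into the other one. This sketch (my own, hypothetical names, only the index bookkeeping shown) illustrates the pattern; fbo[src]/fbo[dst] would be real FBO/texture pairs.

      ```cpp
      #include <cassert>
      #include <utility>

      // For pass number `pass`, return {source texture index, destination FBO index}.
      // The two buffers simply swap roles every pass.
      std::pair<int, int> pingPong(int pass) {
          int src = pass % 2;  // texture produced by the previous pass
          int dst = 1 - src;   // framebuffer the current effect writes into
          return {src, dst};
      }

      int main() {
          // Pass 0 reads the scene from buffer 0 and writes effect 1 into buffer 1;
          // pass 1 reads buffer 1 and writes effect 2 into buffer 0; and so on.
          assert(pingPong(0) == std::make_pair(0, 1));
          assert(pingPong(1) == std::make_pair(1, 0));
          assert(pingPong(2) == std::make_pair(0, 1));
          return 0;
      }
      ```

      A handful of shader binds per frame is normally cheap compared to the fill cost of the full-screen passes themselves.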
    • By phil67rpg
      void collision(int v)
      {
          collision_bug_one(0.0f, 10.0f);
          glutPostRedisplay();
          glutTimerFunc(1000, collision, 0);
      }

      void coll_sprite()
      {
          if (board[0][0] == 1)
          {
              collision(0);
              flag[0][0] = 1;
          }
      }

      void erase_sprite()
      {
          if (flag[0][0] == 1)
          {
              glColor3f(0.0f, 0.0f, 0.0f);
              glBegin(GL_POLYGON);
              glVertex3f(0.0f, 10.0f, 0.0f);
              glVertex3f(0.0f, 9.0f, 0.0f);
              glVertex3f(1.0f, 9.0f, 0.0f);
              glVertex3f(1.0f, 10.0f, 0.0f);
              glEnd();
          }
      }
      I am using glutTimerFunc to wait a small amount of time to display a collision sprite before I black out the sprite. Unfortunately, my code only blacks out said sprite without drawing the collision sprite, even though I have done a great deal of research on glutTimerFunc and animation.
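      One common way to structure this (a hypothetical sketch of my own, not the poster's code) is a small state machine: the hit test raises a HIT event, the glutTimerFunc callback raises TIMER_FIRED after the delay, and the display callback draws whatever the current state says. Since flag[0][0] is set in the same frame as the collision above, the black quad is presumably drawn immediately and the collision sprite never stays visible; the erase should only happen after the timer callback has advanced the state.

      ```cpp
      #include <cassert>

      // Sprite lifecycle: show the collision sprite on impact, black it out
      // only after the glutTimerFunc delay has elapsed.
      enum State { IDLE, SHOWING_COLLISION, BLACKED_OUT };
      enum Event { HIT, TIMER_FIRED };

      State next(State s, Event e) {
          if (s == IDLE && e == HIT) return SHOWING_COLLISION;               // draw collision sprite
          if (s == SHOWING_COLLISION && e == TIMER_FIRED) return BLACKED_OUT; // erase after the delay
          return s; // ignore events that don't apply in the current state
      }

      int main() {
          State s = IDLE;
          s = next(s, HIT);          // collision detected this frame
          assert(s == SHOWING_COLLISION);
          s = next(s, TIMER_FIRED);  // glutTimerFunc callback ran ~1000 ms later
          assert(s == BLACKED_OUT);  // only now draw the black quad
          return 0;
      }
      ```

      In the display function, draw the collision sprite while the state is SHOWING_COLLISION and the black quad only once it is BLACKED_OUT.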
    • By Lewa
      So, i stumbled upon the topic of gamma correction.
      https://learnopengl.com/Advanced-Lighting/Gamma-Correction
      So from what i've been able to gather: (Please correct me if i'm wrong)
      • Old CRT monitors couldn't display color linearly; that's why gamma correction was necessary.
      • Modern LCD/LED monitors don't have this issue anymore but apply gamma correction anyway. (For compatibility reasons? Can this be disabled?)
      • All games have to apply gamma correction? (unsure about that)
      • All textures stored in file formats (.png, for example) are essentially stored in sRGB color space (as what we see on the monitor is skewed due to gamma correction, so the pixel information is the same, the perceived colors are just wrong).
      • This makes textures loaded into the GL_RGB format non-linear, thus all lighting calculations are wrong.
      • You always have to use the GL_SRGB format to gamma-correct/linearise textures which are in sRGB format.
      Now, i'm kinda confused how to proceed with applying gamma correction in OpenGL.
      First off, how can I check if my monitor is applying gamma correction? I noticed in my monitor settings that my color format is set to "RGB" (I can't modify it, though). I'm connected to my PC via an HDMI cable, and I'm using the full RGB range (0-255, not the 16 to ~240 range).
       
      What I tried is to apply the gamma correction shader shown in the tutorial above, which looks essentially like this (it's a postprocess shader applied at the end of the render pipeline):
      vec3 gammaCorrection(vec3 color)
      {
          // gamma correction
          color = pow(color, vec3(1.0 / 2.2));
          return color;
      }

      void main()
      {
          vec3 tex = texture2D(texture_diffuse, vTexcoord).rgb;
          vec3 color = gammaCorrection(tex);
          outputF = vec4(color, 1.0f);
      }
      The results look like this:
      No gamma correction:
      With gamma correction:
       
      The colors in the gamma-corrected image look really washed out. (To the point that it's damn ugly, as if someone overlaid a white half-transparent texture. I want the colors to pop.)
      Do I have to change the textures from GL_RGB to GL_SRGB in order to gamma correct them, in addition to applying the post-process gamma correction shader? Do I have to do the same thing with all FBOs? Or is this washed-out look the intended behaviour?
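      For intuition, here is a small numeric sketch (my addition, not from the post) of what happens when the 1/2.2 curve is applied to a value that is already sRGB-encoded: dark values are brightened a second time, which is consistent with the washed-out look described above.

      ```cpp
      #include <cassert>
      #include <cmath>

      // Encode a linear intensity with the simple 1/2.2 gamma curve used in the
      // tutorial's shader (the exact sRGB transfer function differs slightly).
      double gammaEncode(double linear) { return std::pow(linear, 1.0 / 2.2); }

      int main() {
          double linear = 0.2;                 // a dark linear intensity
          double once   = gammaEncode(linear); // encoded once, roughly 0.48
          double twice  = gammaEncode(once);   // encoded a second time, roughly 0.72
          assert(once > linear);               // one encode brightens dark values
          assert(twice > once);                // a second encode brightens them further
          return 0;
      }
      ```

      So if the source textures are already sRGB, they must be linearised on load (e.g. by using an sRGB internal format) before a single gamma encode is applied at the very end; otherwise the curve is applied twice.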
    • By OneKaidou
      Hi
       
      I am trying to program shadow volumes, and I stumbled upon an artifact whose cause I cannot find.
      I generate the shadow volumes using a geometry shader with reversed extrusion (projecting the light-facing triangles to infinity) and write the stencil buffer according to z-fail. The base of my code is the "lighting" chapter from learnopengl.com, where I extended the shader class to include a geometry shader. I also modified the "lightingshader" to draw the ambient pass when "pass" is set to true and the diffuse/specular pass when set to false. For easier testing I added a few controls to switch the shadow volumes' color rendering on/off or to change the cubes' positions, made the light number controllable, and changed the diffuse pass to render green for easier visualization of my problem.
       
      The first picture shows the rendered scene for one point light and all cubes; the front cube's shadow volume is the only one created (intentionally). Here, everything is rendered as it should be, with all lit areas green and all areas inside the shadow volume black (with the volume's sides blended over).

      If I now turn on the shadow volumes for all the other cubes, we get a bit of a mess, but it's also obvious that some areas that were in shadow before are now erroneously lit (for example, the first cube to the right of the originally shadow-volumed cube). From my testing, the areas erroneously lit are the ones where more than one shadow volume marks the area as shadowed.

      To check if a wrong stencil buffer value caused this problem, I decided to change the stencil function for the diffuse pass to only render if the stencil is equal to 2. As I repeated this approach with different values for the stencil function, I found that if I set the value to 1 or any other odd value, the lit and shadowed areas are inverted, and if I set it to 0 or any other even value, I get the results shown above.
      This led me to believe that the stencil buffer values may be clamped to [0,1], which would also explain the artifact, because twice in shadow would equal no shadow at all; but from what I found on the internet and from what I tested with
      GLint stencilSize = 0;
      glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_STENCIL, GL_FRAMEBUFFER_ATTACHMENT_STENCIL_SIZE, &stencilSize);
      my stencil size is 8 bits, which should allow values within [0,255].
      Does anyone know what might be the cause for this artifact or the confusing results with other stencil functions?
       
      // [the following code includes all used gl* functions; other parts are partially excluded for readability]

      // glfw: initialize and configure
      // ------------------------------
      glfwInit();
      glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
      glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 4);
      glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

      // glfw window creation
      // --------------------
      GLFWwindow* window = glfwCreateWindow(SCR_WIDTH, SCR_HEIGHT, "LearnOpenGL", NULL, NULL);
      if (window == NULL)
      {
          cout << "Failed to create GLFW window" << endl;
          glfwTerminate();
          return -1;
      }
      glfwMakeContextCurrent(window);
      glfwSetFramebufferSizeCallback(window, framebuffer_size_callback);
      glfwSetCursorPosCallback(window, mouse_callback);
      glfwSetScrollCallback(window, scroll_callback);

      // tell GLFW to capture our mouse
      glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);

      // glad: load all OpenGL function pointers
      // ---------------------------------------
      if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
      {
          cout << "Failed to initialize GLAD" << endl;
          return -1;
      }

      // ====================================================================================================
      // window and functions are set up
      // ====================================================================================================

      // configure global opengl state
      // -----------------------------
      glEnable(GL_DEPTH_TEST);
      glEnable(GL_CULL_FACE);

      // build and compile our shader program [...]
      // set up vertex data (and buffer(s)) and configure vertex attributes [...]
      // shader configuration [...]

      // render loop
      // ===========
      while (!glfwWindowShouldClose(window))
      {
          // input processing and fps calculation [...]

          // render
          // ------
          glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
          glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
          glDepthMask(GL_TRUE);   //enable depth writing
          glDepthFunc(GL_LEQUAL); //avoid z-fighting

          //draw ambient component into color and depth buffer
          view = camera.GetViewMatrix();
          projection = glm::perspective(glm::radians(camera.Zoom), (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 100.0f);

          // setting up lighting shader for ambient pass [...]

          // render the cubes
          glBindVertexArray(cubeVAO);
          for (unsigned int i = 0; i < 10; i++)
          {
              //position cube [...]
              glDrawArrays(GL_TRIANGLES, 0, 36);
          }

          //------------------------------------------------------------------------------------------------------------------------
          glDepthMask(GL_FALSE); //disable depth writing
          glEnable(GL_BLEND);
          glBlendFunc(GL_ONE, GL_ONE); //additive blending
          glEnable(GL_STENCIL_TEST);

          //setting up shadowShader and lightingShader [...]

          for (int light = 0; light < lightsused; light++)
          {
              glDepthFunc(GL_LESS);
              glClear(GL_STENCIL_BUFFER_BIT);

              //configure stencil ops for front- and backface to write according to z-fail
              glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP); //-1 for front-facing
              glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR_WRAP, GL_KEEP);  //+1 for back-facing
              glStencilFunc(GL_ALWAYS, 0, GL_TRUE); //stencil test always passes
              if (hidevolumes)
                  glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); //disable writing to the color buffer
              glDisable(GL_CULL_FACE);
              glEnable(GL_DEPTH_CLAMP); //necessary to render SVs into infinity

              //draw SV-------------------
              shadowShader.use();
              shadowShader.setInt("lightnr", light);
              int nr;
              if (onecaster)
                  nr = 1;
              else
                  nr = 10;
              for (int i = 0; i < nr; i++)
              {
                  //position cube [...]
                  glDrawArrays(GL_TRIANGLES, 0, 36);
              }
              //--------------------------

              glDisable(GL_DEPTH_CLAMP);
              glEnable(GL_CULL_FACE);

              glStencilFunc(GL_EQUAL, 0, GL_TRUE); //stencil test passes for ==0, so only for non-shadowed areas
              glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP); //keep stencil values for illumination
              glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE); //enable writing to the color buffer
              glDepthFunc(GL_LEQUAL); //avoid z-fighting

              //draw diffuse and specular pass
              lightingShader.use();
              lightingShader.setInt("lightnr", light);

              // render the cubes
              for (unsigned int i = 0; i < 10; i++)
              {
                  //position cube [...]
                  glDrawArrays(GL_TRIANGLES, 0, 36);
              }
          }

          glDisable(GL_BLEND);
          glDepthMask(GL_TRUE); //enable depth writing
          glDisable(GL_STENCIL_TEST);
          //------------------------------------------------------------------------------------------------------------------------

          // also draw the lamp object(s) [...]

          // glfw: swap buffers and poll IO events (keys pressed/released, mouse moved etc.)
          // -------------------------------------------------------------------------------
          glfwSwapBuffers(window);
          glfwPollEvents();
      }

      // optional: de-allocate all resources once they've outlived their purpose:
      // ------------------------------------------------------------------------
      glDeleteVertexArrays(1, &cubeVAO);
      glDeleteVertexArrays(1, &lightVAO);
      glDeleteBuffers(1, &VBO);

      // glfw: terminate, clearing all previously allocated GLFW resources.
      // ------------------------------------------------------------------
      glfwTerminate();
      return 0;
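      As a sanity check on the clamping hypothesis, the z-fail counting can be simulated on plain 8-bit values (my own sketch, independent of the code above): GL_INCR_WRAP/GL_DECR_WRAP arithmetic wraps modulo 256 and never clamps to [0,1]. One thing worth checking instead is the third argument of glStencilFunc, which is a comparison mask ANDed with both the reference and the stored value; a mask of GL_TRUE (i.e. 1) compares only the lowest stencil bit, which would make every even stencil value behave like 0 and every odd value like 1.

      ```cpp
      #include <cassert>
      #include <cstdint>

      // 8-bit stencil arithmetic as GL_INCR_WRAP / GL_DECR_WRAP perform it.
      uint8_t incrWrap(uint8_t v) { return static_cast<uint8_t>(v + 1); }
      uint8_t decrWrap(uint8_t v) { return static_cast<uint8_t>(v - 1); }

      // glStencilFunc(GL_EQUAL, ref, mask): both sides are masked before comparing.
      bool stencilEqualPasses(uint8_t stored, uint8_t ref, uint8_t mask) {
          return (stored & mask) == (ref & mask);
      }

      int main() {
          // A fragment behind two shadow volumes: two back faces increment.
          uint8_t s = 0;
          s = incrWrap(incrWrap(s));
          assert(s == 2);                          // not clamped to 1

          // With mask 0xFF, value 2 correctly fails the ==0 test (in shadow).
          assert(!stencilEqualPasses(s, 0, 0xFF));

          // With mask GL_TRUE (1), only bit 0 is compared, so the even value 2
          // passes as if it were 0 and the fragment is erroneously lit.
          assert(stencilEqualPasses(s, 0, 0x01));
          return 0;
      }
      ```

      The masked comparison reproduces exactly the even/odd behaviour described above, so the stencil values themselves need not be clamped for the artifact to appear.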