
Intel crash


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

5 replies to this topic

#1 Marc477   Members   -  Reputation: 118


Posted 22 August 2014 - 12:24 PM

Hi, I have a GLSL shader (for deferred lighting), and I use cubemaps for omnidirectional lights. Everything works fine on my NVIDIA GPU, but when I try to compile it on an Intel GPU, it crashes during the link.

If I remove this line:

visibility += texture( LightCubeMap[lightID], vec4(dirOffset, depth) );

then there is no crash.

Is there any known problem with samplerCubeShadow in GLSL version 330 compiled on an Intel GPU? I can't find anything on Google. Or maybe my problem is something else?

 

Thank you.
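One possibility worth checking (an observation from the GLSL 3.30 specification, not something confirmed in this thread): in GLSL 3.30, arrays of samplers may only be indexed with a constant integral expression, so `LightCubeMap[lightID]` with a dynamic `lightID` is invalid; dynamic (dynamically uniform) indexing only became legal in GLSL 4.00. NVIDIA's compiler often accepts it anyway, while stricter drivers reject or mishandle it. A hedged sketch of a constant-index workaround (array size and function name are hypothetical):

```glsl
#version 330 core

uniform samplerCubeShadow LightCubeMap[4]; // hypothetical array size

float sampleShadow(int lightID, vec3 dirOffset, float depth)
{
    // Unrolling to constant indices keeps the shader valid in
    // GLSL 3.30, where sampler arrays require constant indexing.
    switch (lightID) {
        case 0:  return texture(LightCubeMap[0], vec4(dirOffset, depth));
        case 1:  return texture(LightCubeMap[1], vec4(dirOffset, depth));
        case 2:  return texture(LightCubeMap[2], vec4(dirOffset, depth));
        default: return texture(LightCubeMap[3], vec4(dirOffset, depth));
    }
}
```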




#2 Marc477   Members   -  Reputation: 118


Posted 22 August 2014 - 01:34 PM

OK, forget this: I updated my Intel GPU driver and now there is no crash. But things are displayed differently. Do you know of any reference website that explains the differences between Intel's and NVIDIA's GLSL interpretations, so I can find what causes the differences? Thank you.



#3 cgrant   Members   -  Reputation: 657


Posted 22 August 2014 - 02:36 PM


> But things are displayed differently.


That's the nature of the OpenGL beast. Each IHV has its own interpretation of the specification, some more lenient than others. With that said, code/shaders that work on one implementation will sometimes not run or compile on another. There is really no document that outlines the differences between implementations, except for which extensions/features they do or do not support.

What exactly is being displayed differently? If you have pinpointed a particular section of the shader that contributes to the difference, then it would be possible to give a few suggestions.

I recently tried to implement deferred lighting with GLES and used an encoded framebuffer to maintain some dynamic range. Tested on Adreno, Mali and PowerVR GPUs, they all gave slightly different results, some worse than others. It took a while to track down, but in the end the issue was caused by insufficient precision on certain implementations. What exact GPUs are you using for Intel and NVIDIA?
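The precision point above can be illustrated with a small GLES fragment-shader preamble (an illustrative sketch, not code from this thread): in GLES 2.0, `float` has no default precision in fragment shaders and `highp` support there is optional, so leaving precision implicit or too low is a common source of per-GPU differences.

```glsl
#version 100
// GL_FRAGMENT_PRECISION_HIGH is a standard GLES 2.0 macro, defined
// when the fragment stage supports highp. Requesting the highest
// available precision explicitly narrows per-GPU differences.
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif
```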

#4 Marc477   Members   -  Reputation: 118


Posted 22 August 2014 - 02:59 PM

NVIDIA GeForce GT 630M (laptop):

[attached screenshot: img1.png]

Intel HD Graphics 4000 (laptop):

[attached screenshot: img2.png]



#5 mark ds   Members   -  Reputation: 1268


Posted 22 August 2014 - 06:17 PM

Are you by any chance uploading a 3x3 matrix when you should be uploading a 4x4 matrix?

 

The reason I ask is that the triangular-ish lit part of the shadow in the second image (slightly right of top-left) matches the first image's bottom-left section below the table. If you rotate the shadow volume by roughly 45 degrees in the second image, it 'kinda' fits.



#6 Hodgman   Moderators   -  Reputation: 30388


Posted 22 August 2014 - 08:27 PM

You can try passing your shader code through the reference compiler; it's supposed to be the gold standard for whether your GLSL code is valid or not.
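The Khronos reference compiler mentioned above is distributed as glslang; a typical invocation looks like this (the shader file name is hypothetical):

```sh
# Validate a fragment shader against the GLSL specification using the
# Khronos reference front end; it prints errors if the shader is invalid.
glslangValidator shadow.frag
```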






