
Aks9

Member Since 10 Jul 2009
Offline Last Active Today, 11:10 AM

Posts I've Made

In Topic: why the gl_ClipDistance[] doesn't work?

22 June 2014 - 04:35 AM

 

I set "gl_ClipDistance[0] = -1.0;" as you said, but what confuses me is that the result did not change; my model didn't disappear.

 

I hope you have coupled it with glEnable(GL_CLIP_DISTANCE0); in the host application, and that the shader is actually executing.
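For reference, a minimal host-side sketch; names like program, vao and vertexCount are placeholders, not taken from your code. The distance written in the shader has no effect until the matching enable bit is set on the context:

    glUseProgram(program);               // program's vertex shader writes gl_ClipDistance[0]
    glEnable(GL_CLIP_DISTANCE0);         // enable user clip distance 0
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    glDisable(GL_CLIP_DISTANCE0);        // optional: restore default state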

 

Maybe you should read a little bit about the topic. There is a lot of material in all OpenGL-related books:

  • OpenGL SuperBible, 5th Ed. – p. 528
  • OpenGL SuperBible, 6th Ed. – pp. 276-281
  • OpenGL Programming Guide, 8th Ed. – p. 238
  • OpenGL Shading Language, 3rd Ed. – pp. 112, 286

In Topic: why the gl_ClipDistance[] doesn't work?

21 June 2014 - 09:17 AM

Then your code is wrong, as I presumed.

There is no need for multiple clip distances; just set gl_ClipDistance[0].

And, in order to prove that it works, set gl_ClipDistance[0] = -1.0;

If your model disappears, clipping works. :)
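A hypothetical test shader, shown here as a C++ raw string; the mvp uniform and the attribute layout are just assumptions about your setup. Since every vertex gets a negative distance, the whole model should be clipped away:

    const char* testVS = R"GLSL(
    #version 330 core
    layout(location = 0) in vec4 position;
    uniform mat4 mvp;                    // assumed model-view-projection matrix
    void main()
    {
        gl_Position = mvp * position;
        gl_ClipDistance[0] = -1.0;       // always negative -> every vertex is clipped
    }
    )GLSL";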


In Topic: why the gl_ClipDistance[] doesn't work?

21 June 2014 - 06:41 AM

Clipping works perfectly, but you have to enable it. ;)

 

By calling glEnable(GL_CLIP_DISTANCE0 + 2), you have enabled gl_ClipDistance[2], not [0] and [1].

Also, I'm not sure whether your math is correct or not. It depends on your algorithm.

 

Just so you know: if gl_ClipDistance[x] >= 0.0, the vertex is inside the clip volume; if it is < 0.0, it is clipped.
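A hypothetical sketch of the index mapping and the usual sign convention; clipPlane0 and worldPos are assumed names for a plane equation and a vertex position expressed in the same space:

    glEnable(GL_CLIP_DISTANCE0);         // controls gl_ClipDistance[0]
    glEnable(GL_CLIP_DISTANCE0 + 1);     // controls gl_ClipDistance[1]
    // GL_CLIP_DISTANCE0 + 2 would control gl_ClipDistance[2] only

    // In the vertex shader (GLSL), the usual pattern is a signed plane distance:
    //   uniform vec4 clipPlane0;                         // plane (a, b, c, d)
    //   gl_ClipDistance[0] = dot(clipPlane0, worldPos);  // >= 0 kept, < 0 clipped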

 


In Topic: Are two OpenGL contexts still necessary for concurrent copy and render?

21 May 2014 - 03:32 AM

 

OK! My apologies!

But you are wrong about my ego. As you can see, it is not so large. :)

I didn't notice the time of the post in this forum. I regularly check the OpenGL forums, and I noticed this thread only after several of my answers in another one, which made me think you were not satisfied with them. That was my mistake.

Btw, you didn't acknowledge my posts, which also made me think you were not satisfied and were searching for other opinions. That's why I overreacted. Mea culpa! (I also attended a gymnasium and had Latin, so I hope we can understand each other perfectly. ;) )


In Topic: Are two OpenGL contexts still necessary for concurrent copy and render?

17 May 2014 - 06:01 AM

I can't quote a source right now and might be wrong on the exact hardware generation (though I believe it was in Cozzi and Riccio's book?). Basically, the thing is that pre-Kepler (or was it Fermi? I think it was Kepler) hardware has one dedicated DMA unit that runs independently of the stream processors, so it can do one DMA operation while it is rendering, without you doing anything special. However, only Quadro drivers actually use this feature; consumer-level drivers stall rendering while doing DMA. Kepler and later have two DMA units and could do DMA uploads and downloads in parallel while rendering, but again, only Quadro drivers use the hardware's full potential.

This drives me crazy because I've already answered Prune's question in another forum.

Pre-Fermi GPUs do not allow overlapping rendering with data downloading/uploading; Fermi was the first NV architecture where it is supported. High-end Quadro cards have two separate DMA channels that can overlap, while GeForce cards have (or at least have enabled) just one. It is not clear whether two channels can transfer data in the same direction simultaneously (I guess not, but that seems a reasonable assumption). This is known as the (Dual) Copy Engine. Kepler has the same capability as Fermi as far as the way the copy engine works is concerned. Activating the copy engine is not free, so by default it is turned off. There is no special command to turn it on; NV drivers use heuristics to activate it, and the trigger they look for is a separate context that does only data transfer. That's why the second context is necessary.
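A rough sketch of the pattern I mean, assuming uploadCtx is a second context that shares objects with the rendering context (e.g. via wglShareLists or the share argument of glXCreateContext) and is made current on a worker thread; pbo, pixels, size, tex, w and h are placeholders:

    // Worker thread, uploadCtx current: fill a pixel-unpack PBO and copy it into the texture
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    void* dst = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, size,
                                 GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
    memcpy(dst, pixels, size);           // new texture data
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, 0);
    GLsync done = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    glFlush();                           // submit the transfer commands and the fence

    // Render thread, main context current: wait on 'done' before sampling 'tex', e.g.
    // glWaitSync(done, 0, GL_TIMEOUT_IGNORED);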

 

Please correct me if I'm wrong.

 

This is probably my last post about the (Dual) Copy Engine, since I'm really tired of repeating the same thing.

