
Member Since 27 Feb 2012
Offline Last Active Oct 23 2016 04:08 AM

Posts I've Made

In Topic: OpenGL ES analysis

24 September 2016 - 01:05 AM

Thanks @cgrant and @mhagain for your kind replies.

I know it may make little sense to measure a shader this naively, and that it is difficult to isolate the different stages. But since the actual per-stage timings are closely guarded secrets of the chip vendors, this is the only approach I could come up with. I took the idea from "In-Depth Performance Analyses of DirectX 9 Shading Hardware" in ShaderX3.

I know this kind of measurement is old-fashioned, but are there better ways to give artists a rough figure for how detailed the geometry can be and how many characters a mobile device can support at most? There are several third-party measurement tools, such as GPU ShaderAnalyzer from AMD or ShaderPerf2 from NVIDIA, but they are not friendly to mobile GPUs.

In Topic: OpenGL ES analysis

24 September 2016 - 12:38 AM

Thanks @Columbo for your reply.

I run my fragment shader test with alpha blending enabled, although that may add an unnecessary blending operation after the fragment shader.

In Topic: OpenGL ES analysis

23 September 2016 - 01:47 AM

Thanks @Columbo for your reply.


For my fragment shader test, I run with depth write off and depth test off. I think this is a similar way to generate a massive amount of overdraw, as in transparent rendering.
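Concretely, the state I mean is just the standard GLES calls (a fragment of the test setup, nothing exotic):

```c
/* Overdraw test state: depth test and depth writes both off,
 * so no fragment is ever rejected before shading. */
glDisable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);
```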


For your vertex shader approach, can I do it by projecting the vertices to a single fixed point so that they are not rasterized?
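Something like this is what I have in mind (a sketch, not tested on real drivers): keep the real vertex work, then force the result outside the clip volume so every triangle is clipped away before rasterization. The tiny scale factor keeps gl_Position dependent on the computed position, so the compiler cannot eliminate the work being measured.

```glsl
attribute vec3 aPosition;
uniform mat4 uMvp;

void main()
{
    vec4 p = uMvp * vec4(aPosition, 1.0);  // the work being measured
    // x ~= 2 with w = 1 lies outside the clip volume, so all
    // triangles are culled before rasterization; the 1e-6 term
    // preserves the dependency on p.
    gl_Position = vec4(p.xyz * 1e-6 + vec3(2.0), 1.0);
}
```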


I also need to measure the cost of the varyings passed between the vertex and fragment shaders, and whether bandwidth is the bottleneck. Any ideas?

In Topic: depth layer number

22 November 2013 - 10:42 AM


In Topic: How to bind a 3d noise volume

22 November 2013 - 02:30 AM

I found the answer: "nv_dds.cpp" and "nv_dds.h" from GPU Gems 2 can do the job.