
kalle_h

Member Since 05 May 2012
Offline Last Active Today, 02:11 PM

Posts I've Made

In Topic: Alpha Blend for monocolor (or min/max depth buffer)

14 September 2016 - 06:29 AM

 

You should test rendering twice, once with a less and once with a greater depth test.

In my case I have a decent amount of vertices with a non-trivial vertex shader; I guess having those vertices go through the VS twice will probably be slower than the method I mentioned? But you are right, I should benchmark it.

 

Thanks 

 

How much overdraw do you expect?
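The two-pass idea can be sketched CPU-side (a hedged illustration only; on the GPU these would be two draws with depth func LESS and then GREATER): pass 1 keeps the minimum depth per pixel, pass 2 keeps the maximum, and the monocolor result can then be shaded from the thickness max minus min.

```python
# CPU-side sketch of a min/max depth buffer built in two "passes".
# Illustrative only -- the fragment list stands in for rasterizer output.

def min_max_depth(fragments, width, height, far=1.0, near=0.0):
    """fragments: list of (x, y, depth) tuples for covered pixels."""
    min_depth = [[far] * width for _ in range(height)]   # cleared to far plane
    max_depth = [[near] * width for _ in range(height)]  # cleared to near plane
    for x, y, d in fragments:
        if d < min_depth[y][x]:   # pass 1: depth test LESS
            min_depth[y][x] = d
        if d > max_depth[y][x]:   # pass 2: depth test GREATER
            max_depth[y][x] = d
    return min_depth, max_depth

# Two fragments land on pixel (0, 0), one on pixel (1, 0).
frags = [(0, 0, 0.2), (0, 0, 0.7), (1, 0, 0.5)]
mn, mx = min_max_depth(frags, width=2, height=1)
thickness = mx[0][0] - mn[0][0]  # drives the monocolor blend
```

Note the downside discussed above: every vertex still goes through the vertex shader once per pass, which is why benchmarking against the single-pass alternative matters.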


In Topic: Alpha Blend for monocolor (or min/max depth buffer)

14 September 2016 - 02:26 AM

You should test rendering twice, once with a less and once with a greater depth test.


In Topic: How are 2D bone animations handled in game development software?

14 September 2016 - 02:23 AM

http://esotericsoftware.com/


In Topic: Tangent Space computation for dummies?...

23 July 2016 - 03:13 AM

 

Technically I can misuse any data structure. A vertex is a point with edges to other points. I was just reacting to the comprehension part. Premature optimization. I remember engines which could only do flat-shaded polygons. I do not know why current APIs make the hack of vertex normals easier to use than the mathematically sound way of using the normal of the polygon. And I do not think that this is the case.

The GPU's concept of a vertex is "a tuple of attributes", such as position/normal/texture-coordinate -- the mathematical definition doesn't really apply :(

A GPU vertex doesn't even need to include position! When drawing curved shapes, "GPU vertices" are actually "control points" and not vertices at all.

 

There's also no native way to supply per-primitive data to the GPU -- such as supplying positions per-vertex and normals per-face. Implementing per-face attributes is actually harder (and requires more computation time) than supplying attributes per vertex, because the native "input assembler" only has the concept of per-vertex and per-instance attributes.
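One common workaround (a sketch, assuming indexed triangle input) is to duplicate vertices so each face carries its own copy of the per-face attribute, turning per-face data into ordinary per-vertex data that the input assembler understands:

```python
def expand_per_face(positions, indices, face_normals):
    """Turn per-face normals into per-vertex data by duplicating
    vertices, so the input assembler only ever sees per-vertex
    attributes (at the cost of losing vertex sharing)."""
    out_positions, out_normals = [], []
    for face, (i0, i1, i2) in enumerate(indices):
        n = face_normals[face]
        for i in (i0, i1, i2):
            out_positions.append(positions[i])
            out_normals.append(n)  # same flat normal at all three corners
    return out_positions, out_normals

# Two triangles sharing an edge, each with its own flat normal.
pos = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
idx = [(0, 1, 2), (1, 3, 2)]
nrm = [(0, 0, 1), (0, 0, -1)]
p, n = expand_per_face(pos, idx, nrm)
# Shared-edge vertices are duplicated, once per face normal.
```

This trades memory and vertex-cache efficiency for compatibility with the per-vertex/per-instance model described above.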

 

You could treat each vertex as a triangle and expand it in a geometry shader. It's just a bit cumbersome, and performance would be awful.


In Topic: Computing Matrices on the GPU

09 June 2016 - 01:47 PM

Just calculate it in the vertex shader to start, then profile to see what your bottleneck is. If the vertex shader calculations turn out to be the problem, then start thinking about how to optimize them.
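As a CPU-side sketch of what "calculate it in the vertex shader" can mean (names and the 2D setup are illustrative, not from the thread): instead of uploading a full matrix per instance, upload compact data such as an angle and a translation, and rebuild the matrix per vertex:

```python
import math

def rebuild_matrix(angle, tx, ty):
    """What a vertex shader would do per vertex: rebuild a 2D
    rotation + translation matrix from compact per-instance data."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, tx],
            [s,  c, ty],
            [0,  0, 1.0]]

def transform(m, x, y):
    """Apply the homogeneous 2D transform to a point."""
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# Rotate (1, 0) by 90 degrees, then translate by (5, 0).
m = rebuild_matrix(math.pi / 2, 5.0, 0.0)
px, py = transform(m, 1.0, 0.0)
```

Whether this beats precomputing matrices on the CPU depends entirely on where the bottleneck is, which is why the advice above is to profile first.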

