bwhiting

Member Since 22 Jul 2010
-----

Topics I've Started

interactive ui on the gpu

06 August 2013 - 03:35 PM

Hey all,

 

Just after some ideas really, regarding UI rendering on the GPU. Specifically, methods/ideas to keep draw calls batched while being interactive.

 

The problem:

A batch of, say, 20 items in a list, all grouped in one draw call. What are my options for interactivity, say rollover effects?

Let us say, for example, that one texture is shown in the normal state and, when the item is selected or interacted with, another texture is displayed.

 

Here are my current thoughts:

Just render the offending item again on top of itself, ignoring depth:

   This would add only one draw call and would allow for alpha-based transitions.

   It may lead to issues if the over state were smaller than the original, though.

Somehow stencil/mask out the old item then redraw the new one:

    I guess this would add 2 draw calls but the original item would be completely hidden.

    No way to transition though?

Lerp between the 2 textures (1 texture atlas) for every item and update constants to drive the distinction between the 2 textures:

    All still done in 1 draw call, and I could animate the lerp float on the CPU no probs (a rough sketch of this follows below).

    This would waste a texture read most of the time, but that is probably not a performance issue at all; 128 constants are allowed, so it would limit the list size.

 

...
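
For what it's worth, here is a rough sketch of how option 3 could look on Stage3D. Names like numItems, hoveredIndex, the register index and the easing speed are just placeholders, not anything final: keep one hover/lerp factor per item on the CPU, ease it each frame, and upload the lot as vertex constants so the whole list still renders in one draw call. The vertex shader would pick out its item's factor (e.g. via an item index stored in the vertex data) and pass it down as a varying for the fragment shader to lerp the two atlas samples with.

import flash.display3D.Context3D;
import flash.display3D.Context3DProgramType;

var numItems:int = 20;        // placeholder list size
var hoveredIndex:int = -1;    // set from the mouse / hit-testing code

// one lerp factor per item; push explicit zeros rather than relying on default slot values
var hoverAmounts:Vector.<Number> = new Vector.<Number>();
for (var i:int = 0; i < numItems; i++) hoverAmounts.push(0.0);

function uploadHoverConstants(context:Context3D, dt:Number):void
{
    for (var j:int = 0; j < numItems; j++)
    {
        // ease towards 1 while hovered, back towards 0 otherwise (animated on the CPU)
        var target:Number = (j == hoveredIndex) ? 1.0 : 0.0;
        hoverAmounts[j] += (target - hoverAmounts[j]) * Math.min(1.0, dt * 10.0);
    }

    // constants are float4 registers, so pad to a multiple of 4 before uploading;
    // the starting register (1 here) just has to match whatever the shader expects
    var padded:Vector.<Number> = hoverAmounts.slice();
    while (padded.length % 4 != 0) padded.push(0.0);
    context.setProgramConstantsFromVector(Context3DProgramType.VERTEX, 1, padded);
}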

Right, so that was my thinking off the top of my head, now smash me with better ideas of brilliance!!! (or just critique the ideas I had above)

I am looking for the right balance of flexibility and performance. I have never really done UI stuff on the GPU, so don't be angry if I am approaching this like a drunk 6 year old!

 

:)

 

In summary: how do you draw interactive UI efficiently on the GPU, assuming a shader model 2 environment?

Thanks for your time!


Collision detection with scaled spheres :(

12 July 2013 - 09:46 AM

A quick question for you collision detection gurus:

 

How do you handle sphere vs triangle mesh collision detection when the mesh is scaled disproportionately?

 

i.e. a sphere vs a mesh with a scale of (1,0.5,1.25).

 

When scales are uniform it is easy to just transform the center of the sphere by the inverse world matrix of the mesh and then scale the radius by the inverse scale. But when the scales are non-uniform, I am not sure what to do.

 

What is the usual response here: ban scaling unless it is proportional? Or is the solution really simple but something I haven't thought of yet?
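
One idea I have been toying with, sketched below purely as an illustration (sphereIntersectsTriangle is a hypothetical helper, i.e. whatever ordinary world-space sphere/triangle routine is already lying around): transform each candidate triangle by the full world matrix and do the test in world space, so the non-uniform scale only ever deforms the triangles and the sphere stays a sphere.

import flash.geom.Matrix3D;
import flash.geom.Vector3D;

function sphereVsScaledMesh(center:Vector3D, radius:Number,
                            vertices:Vector.<Number>, ids:Vector.<uint>,
                            worldMatrix:Matrix3D):Boolean
{
    for (var i:int = 0; i < ids.length; i += 3)
    {
        // pull the triangle out in local space...
        var p1:Vector3D = vertexAt(vertices, ids[i]);
        var p2:Vector3D = vertexAt(vertices, ids[i + 1]);
        var p3:Vector3D = vertexAt(vertices, ids[i + 2]);

        // ...and push it through the full (non-uniformly scaled) world matrix
        var w1:Vector3D = worldMatrix.transformVector(p1);
        var w2:Vector3D = worldMatrix.transformVector(p2);
        var w3:Vector3D = worldMatrix.transformVector(p3);

        // ordinary world-space sphere vs triangle test (hypothetical helper)
        if (sphereIntersectsTriangle(center, radius, w1, w2, w3))
            return true;
    }
    return false;
}

function vertexAt(vertices:Vector.<Number>, id:uint):Vector3D
{
    return new Vector3D(vertices[int(id * 3)],
                        vertices[int(id * 3 + 1)],
                        vertices[int(id * 3 + 2)]);
}

The obvious cost is transforming vertices per query, so presumably you would only do this for triangles that pass a broad phase, or bake the scale into the collision mesh's vertices once and go back to the uniform-scale path.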

 

Cheers, and apologies if this is a really idiotic question.


Simple collision detection when 2 objects push a body into each other

27 May 2013 - 06:57 AM

Probably a really easy question for you seasoned pros, but I can't seem to think of an obvious solution to this problem.

 

Here it is in a simple form:

Imagine a rectangle travelling horizontally; it collides with another rectangle that is above it, overlapping it only slightly from above:

 

            _____
 ___       |     |
|   |      |_____|
|   |
|   |      ----->
|___|_______________

 

This will result in the object being pushed down into the floor (the penetration in the downward direction may be the smallest), which will then lead to it being pushed back up by the floor.

 

How do you avoid this problem and ensure that the object stops dead on collision rather than getting pushed down?

 

I thought about separating the collision into stages, testing only one axis at a time, but that feels clunky.

 

Hold the phones: whilst writing this, I think maybe I should never check the bottom side of the top body, because its normal (if dotted with the direction of the other body) would show that they should never be tested against each other.

 

Maybe that is more important than I thought: never check a face if its normal dotted with the direction is greater than 0? Am I on the right lines? I am only really dabbling here, so it's not the end of the world if I can't work it out. Thanks for your time.
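
If I am on the right lines, the idea boils down to something like the tiny check below (names made up; relativeVelocity is the moving body's velocity relative to the body that owns the face):

import flash.geom.Vector3D;

// Skip a candidate contact face when the body is not actually moving into it.
// In the picture above, the top box's bottom face has an outward normal pointing
// straight down while the relative motion is horizontal, so the dot product is
// >= 0 and the face is skipped, which kills the bogus push down into the floor.
function shouldTestFace(faceNormal:Vector3D, relativeVelocity:Vector3D):Boolean
{
    return faceNormal.dotProduct(relativeVelocity) < 0;
}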

 

:)


Banding Woes of DOOOM - ssao and depth

08 July 2012 - 05:17 AM

Ok, this is bugging me massively and I am sure it's an easy fix!!

1st some screenies:

SSAO 1: [attached: banding1.jpg]
SSAO 2 (problem highlighted): [attached: banding2.jpg]
SSAO 3: [attached: banding3.jpg]
Banding in the depth buffer: [attached: banding_depth.jpg]

Right, I am pretty sure all my issues stem from the banding seen in the depth buffer. I understand that when rendering it out to the screen there will be some banding, but not like this: it seems as if the depth, as it gets further away, has blips! i.e. if I sample the colours in Photoshop the value decreases, but at the edge of the bands it jumps up!?!?!

WHYYYYYYYYYYYYYYYYY?!!!!

I am encoding a very short depth range (1 to 100ish) instead of 1000 for the far plane, and am storing it across all four channels for precision.

The depth buffer has been linear-ified, and I have checked the maths on paper and in pure ActionScript
and the values are correct, i.e. a depth of 50 is 0.5 in the depth buffer and, once it has gone through encoding and decoding, comes back exactly the same... this applies to every value that I tried, but something is clearly going wrong on the GPU side of things.
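
For reference, the common 255-based packing looks like this in plain ActionScript (this is the textbook version, which may well differ from what my shader actually does; missing the subtraction step, or rounding differences between channels, is a classic source of small jumps right at the band edges):

// pack a depth in [0,1) into four channel values in [0,1)
function encodeDepth(depth:Number):Vector.<Number>
{
    var r:Number = depth;
    var g:Number = (depth * 255.0) % 1.0;                 // frac(depth * 255)
    var b:Number = (depth * 255.0 * 255.0) % 1.0;         // frac(depth * 255^2)
    var a:Number = (depth * 255.0 * 255.0 * 255.0) % 1.0; // frac(depth * 255^3)

    // remove from each channel the part already carried by the next one
    r -= g / 255.0;
    g -= b / 255.0;
    b -= a / 255.0;

    return Vector.<Number>([r, g, b, a]);
}

// reverse of the above
function decodeDepth(rgba:Vector.<Number>):Number
{
    return rgba[0]
         + rgba[1] / 255.0
         + rgba[2] / (255.0 * 255.0)
         + rgba[3] / (255.0 * 255.0 * 255.0);
}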

So I was wondering if someone has run into this before, or can recognize the issue from the pictures. I don't think it is a precision problem, as the depth range is so short and is encoded across 4 channels, and in my tests that gives pretty high accuracy.

Any ideas?? If anyone wants I can post the code of the encoding and decoding process too.

Thanks

B

Bitangent and Tangent seams

22 March 2012 - 05:47 AM

Hi folks, I generated some revolution meshes and noticed that, while my UVs seem to be fine, there are seams where they join up:

bitangent: [image attachment]

tangent: [image attachment]


The seam along one axis is much more apparent than the other.
There must be a simple way to get rid of these, and I thought I would put it to you guys.

While I could probably calculate these more accurately procedurally, I would like the method to work for loaded geometry too.

here's the code:

var ids:Vector.<uint> = mesh.ids;
var vertices:Vector.<Number> = mesh.vertices;
var uvs:Vector.<Number> = mesh.uvs;
var tangents:Vector.<Number> = new Vector.<Number>(vertices.length);
var bitangents:Vector.<Number> = new Vector.<Number>(vertices.length);
// ids.length is the number of indices (three per triangle), not the triangle count
var numIndices:int = ids.length;
for(var i:int = 0; i < numIndices; i+=3)
{
var id1:int = ids[i];
var id2:int = ids[i+1];
var id3:int = ids[i+2];
var p1:Vector3D = new Vector3D(vertices[int(id1 * 3)], vertices[int(id1 * 3 + 1)], vertices[int(id1 * 3 + 2)]);
var p2:Vector3D = new Vector3D(vertices[int(id2 * 3)], vertices[int(id2 * 3 + 1)], vertices[int(id2 * 3 + 2)]);
var p3:Vector3D = new Vector3D(vertices[int(id3 * 3)], vertices[int(id3 * 3 + 1)], vertices[int(id3 * 3 + 2)]);

var uv1:Point = new Point(uvs[int(id1 * 2)], uvs[int(id1 * 2 + 1)]);
var uv2:Point = new Point(uvs[int(id2 * 2)], uvs[int(id2 * 2 + 1)]);
var uv3:Point = new Point(uvs[int(id3 * 2)], uvs[int(id3 * 2 + 1)]);
var edge1:Vector3D = p2.subtract(p1);
var edge2:Vector3D = p3.subtract(p1);
var edge1uv:Point = uv2.subtract(uv1);
var edge2uv:Point = uv3.subtract(uv1);

// 2D cross product (determinant) of the UV edge vectors; skip triangles with degenerate UVs
var cp:Number = edge1uv.y * edge2uv.x - edge1uv.x * edge2uv.y;
if ( cp != 0 )
{
  var mul:Number = 1 / cp;
  var tangent:Vector3D = new Vector3D();
  var bitangent:Vector3D = new Vector3D();
  tangent.x = (edge2uv.y * edge1.x - edge1uv.y * edge2.x) * mul;
  tangent.y = (edge2uv.y * edge1.y - edge1uv.y * edge2.y) * mul;
  tangent.z = (edge2uv.y * edge1.z - edge1uv.y * edge2.z) * mul;

  bitangent.x = (-edge2uv.x * edge1.x + edge1uv.x * edge2.x) * mul;
  bitangent.y = (-edge2uv.x * edge1.y + edge1uv.x * edge2.y) * mul;
  bitangent.z = (-edge2uv.x * edge1.z + edge1uv.x * edge2.z) * mul;

  tangent.normalize();
  bitangent.normalize();

  // NOTE: these assignments overwrite whatever a previous triangle wrote for a
  // shared vertex, so shared vertices keep only the last triangle's tangent
  tangents[int(id1 * 3)] = tangent.x;
  tangents[int(id1 * 3 + 1)] = tangent.y;
  tangents[int(id1 * 3 + 2)] = tangent.z;

  tangents[int(id2 * 3)] = tangent.x;
  tangents[int(id2 * 3 + 1)] = tangent.y;
  tangents[int(id2 * 3 + 2)] = tangent.z;

  tangents[int(id3 * 3)] = tangent.x;
  tangents[int(id3 * 3 + 1)] = tangent.y;
  tangents[int(id3 * 3 + 2)] = tangent.z;

  bitangents[int(id1 * 3)] = bitangent.x;
  bitangents[int(id1 * 3 + 1)] = bitangent.y;
  bitangents[int(id1 * 3 + 2)] = bitangent.z;

  bitangents[int(id2 * 3)] = bitangent.x;
  bitangents[int(id2 * 3 + 1)] = bitangent.y;
  bitangents[int(id2 * 3 + 2)] = bitangent.z;

  bitangents[int(id3 * 3)] = bitangent.x;
  bitangents[int(id3 * 3 + 1)] = bitangent.y;
  bitangents[int(id3 * 3 + 2)] = bitangent.z;
}
}
mesh.tangents = tangents;
mesh.bitangents = bitangents;

Hopefully you will know right off the bat what the issue is. Thanks for your time :)

edit:
ah, looks like all I need to do is average them as I would the normals? If that's the case then boy am I a melon.
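
Something like this, against the same layout as the code above (only a sketch): zero the output vectors first, accumulate each triangle's tangent/bitangent into all three of its vertices instead of overwriting, then normalize per vertex once the loop is done.

// 1) before the triangle loop, fill the outputs with explicit zeros
for (var k:int = 0; k < vertices.length; k++)
{
    tangents[k] = 0.0;
    bitangents[k] = 0.0;
}

// 2) inside the loop, accumulate with += instead of assigning, e.g.
//    tangents[int(id1 * 3)]     += tangent.x;
//    tangents[int(id1 * 3 + 1)] += tangent.y;
//    tangents[int(id1 * 3 + 2)] += tangent.z;
//    (and likewise for id2, id3 and the bitangents)

// 3) after the loop, normalize the accumulated vectors per vertex
var numVertices:int = vertices.length / 3;
for (var v:int = 0; v < numVertices; v++)
{
    var t:Vector3D = new Vector3D(tangents[int(v * 3)], tangents[int(v * 3 + 1)], tangents[int(v * 3 + 2)]);
    var b:Vector3D = new Vector3D(bitangents[int(v * 3)], bitangents[int(v * 3 + 1)], bitangents[int(v * 3 + 2)]);
    t.normalize();
    b.normalize();
    tangents[int(v * 3)]       = t.x;
    tangents[int(v * 3 + 1)]   = t.y;
    tangents[int(v * 3 + 2)]   = t.z;
    bitangents[int(v * 3)]     = b.x;
    bitangents[int(v * 3 + 1)] = b.y;
    bitangents[int(v * 3 + 2)] = b.z;
}

Note that vertices duplicated along the UV seam still won't share contributions, so if the seam persists it may also be necessary to merge/average across those duplicates by position.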
