
Member Since 02 Nov 2012

#5254255 binary alpha

Posted by C0lumbo on 27 September 2015 - 12:13 PM

For bonus points, use this technique to avoid jaggies on your foliage (not invented by Wolfire, but they have a relatively short clear explanation): http://blog.wolfire.com/2009/02/rendering-plants-with-smooth-edges/


TLDR version: Render all foliage twice. First pass use clip/discard/alpha test on your foliage (as Hodgman describes). Second pass, render with alpha blending.


A more correct solution might be to look into alpha-to-coverage with multi-sample antialiasing.
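A minimal sketch of the two-pass idea, assuming a fragment shader with a u_Texture sampler and a hardcoded 0.5 alpha threshold (both are illustrative choices, not anything Wolfire or Hodgman prescribe):

```glsl
// Pass 1: depth-write ON, blending OFF. The alpha test punches out the
// solid core of the foliage and writes correct depth.
mediump vec4 texel = texture2D(u_Texture, v_TexCoord);
if (texel.a < 0.5)
    discard;
gl_FragColor = texel;

// Pass 2 (same geometry, separate draw call): depth-write OFF, depth-test ON,
// standard alpha blending enabled. This pass smooths over the jagged edges
// that the clip/discard pass left behind.
gl_FragColor = texel;
```

The two passes would normally be two draw calls with different render state; the comments above mark which state belongs to which pass.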

#5249285 Alpha blend mystery

Posted by C0lumbo on 28 August 2015 - 12:03 AM

I think Jihodg is on the right track. Your code that's hardcoding a test value for texture two's alpha is only modifying the top mip.


If the lower mips are already generated, then they will still be opaque. Try regenerating the lower mips after hardcoding that transparency, or modify your code so that you iterate through all the mips and hardcode an alpha value.

#5248363 OpenGL-ES Light-Shader: distance-issue

Posted by C0lumbo on 23 August 2015 - 09:34 AM

Sounds like a precision issue to me. Your math is being done at mediump, which is only guaranteed a range of -16384 to 16384. That's quite a lot more than your 250-300, but bear in mind that the distance function almost certainly has to square its inputs, and 300 squared is 90000, well outside that guaranteed range. In fact, maybe the compiler is smart enough to remove the square root from the 'distance' function and compare against your radius squared (which can be calculated per draw call) instead of the radius.


First up, if you have a device that supports highp in the pixel shader, try just changing 'precision mediump float;' to 'precision highp float;'. If that fixes it then we're on the right track.


But I wouldn't stop there, because lots of Android devices don't support that. Perhaps try doing your lighting math in normalized screen coordinates instead of pixel coordinates (i.e. divide both the pixel position and the light position by the screen height or the screen width)?
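As a sketch of the squared-distance idea at normalized scale (u_LightPos, u_RadiusSq and the varying name are assumptions, not the poster's actual code):

```glsl
// Positions pre-divided by the screen height, so values stay near 0..1
// and their squares stay comfortably inside mediump's guaranteed range.
mediump vec2 toLight = v_NormalizedPos - u_LightPos;
mediump float distSq = dot(toLight, toLight);  // no sqrt needed
if (distSq < u_RadiusSq)                       // radius squared, computed per draw call
{
    // apply lighting
}
```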

#5247748 Controls for Mobile Platformer

Posted by C0lumbo on 19 August 2015 - 03:12 PM

I would check out the Mikey games (Mikey Shorts, Mikey Boots) for examples of how to do it right. Basically, you want left and right buttons for your left thumb, and a jump button plus at most one other button for your right thumb. I wouldn't recommend any more than that.


Extra points if you make the button size/positioning customizable.

#5246654 DirectX Camera Jitter

Posted by C0lumbo on 15 August 2015 - 03:02 AM


You shouldn't be scaling mouse movement by the time delta. Mouse movement should have a direct mapping to rotation. These lines:


XMMATRIX xRotation = XMMatrixRotationY(((float)-deltaX  * (float)timer.Delta()));
XMMATRIX yRotation = XMMatrixRotationX(((float)-deltaY * (float)timer.Delta()));

This is wrong. Your camera movement should be tied to the frame time just like all movement. Otherwise, a 30 fps game would result in more sluggish camera rotation compared to when it was running at 60 fps.

In any case, it doesn't explain the issue.


Edit : Unless of course, you're not taking input snapshots per frame, but rather updating input at a frame-independent rate, such as by directly pushing windows msgs into your engine.



We're talking about mouse cursor movement.


If we were talking about control pad analog sticks rotating the camera, then there should be a time delta applied, but we're talking about mouse movement, which should definitely not have a time delta applied (as proved by the fact that it fixed the bug!)

#5246404 DirectX Camera Jitter

Posted by C0lumbo on 14 August 2015 - 01:00 AM

You shouldn't be scaling mouse movement by the time delta. Mouse movement should have a direct mapping to rotation. These lines:


XMMATRIX xRotation = XMMatrixRotationY(((float)-deltaX  * (float)timer.Delta()));
XMMATRIX yRotation = XMMatrixRotationX(((float)-deltaY * (float)timer.Delta()));

#5243226 Why is this 1?

Posted by C0lumbo on 28 July 2015 - 12:38 PM

I think it's kind of confusing showing the normalised device coordinates on the same diagram as the view frustum.


It seems like a somewhat separate concept to me, and there's no reason that projection window line couldn't sit to the right of the near plane, or even to the right of the far plane!

#5242567 OpenGL ES God Ray Precision error

Posted by C0lumbo on 25 July 2015 - 01:06 AM

The best way to start would be to force everything to use highp and see if the problem goes away. If it does, then you can start the process of figuring out what needs to be highp and what can be mediump or lowp.


IMO, using a blanket precision declaration at the top (the line "precision mediump float;") is bad practice; in mobile fragment shaders you ought to think about the required precision of every operation. But in this case it makes life easier, because all you need to do to confirm whether it's a precision problem is change that line to "precision highp float;". You should also go into your vertex shader and make sure the texture_coord varying is being output as highp too.


If that fixes it, then that's great, but bear in mind that not all Android GPUs support high-precision floats in their fragment shaders. However, you might be able to get it all working at mediump. The most likely place where precision is being lost is the iterative adjustment of textCoo. You might see an improvement by calculating textCoo afresh on each iteration rather than accumulating a delta: change the line "textCoo -= deltaTextCoord;" to something like "textCoo = texture_coord.st - (deltaTextCoord * (i + 1));", or, if you tweak things correctly before you enter the loop, you could use the far more optimal "textCoo = texture_coord.st + (deltaTextCoord * i);".


As an aside, attempting 128 samples might be too many unless you're targeting only the extreme high end devices.

#5240989 DXT3 nowadays

Posted by C0lumbo on 16 July 2015 - 10:55 PM

Can't see why anyone would need to use DXT3/BC2 in normal circumstances.


The only exception I can think of is some scenario where the alpha value is used as an index into a lookup table, and having exactly 16 distinct values is a positive.


This is an excellent article on compression formats: http://www.reedbeta.com/blog/2012/02/12/understanding-bcn-texture-compression-formats/



"BC2 is a bit of an odd duck, and frankly is never used nowadays. It stores RGBA data, using BC1 for the RGB part, and a straight 4 bits per pixel for the alpha channel. The alpha part doesn’t use any endpoints-and-indices scheme, just stores explicit pixel values. But since each alpha value is just 4 bits, there are only 16 distinct levels of alpha, which causes extreme banding and makes it impossible to represent a smooth gradient or edge even approximately. Like BC3, it totals 16 bytes per block. As far as I can think of, there’s no reason ever to use this format, since BC3 can do a better job in the same amount of memory. I include it here just for historical reasons."

#5240191 Compute cube edges in screenspace

Posted by C0lumbo on 13 July 2015 - 11:06 PM

"What I'd do is transform all the 8 corner points of the cube to 2D screen space, and then use a 2D convex hull algorithm to recompute which edges are the outside edges of the projected silhouette. That solution has the appeal of using only generic "stock" algorithms (3D->2D projection, and 2D convex hull), so it will be very easy to reason about and maintain in the long run."


Good call!


Way easier than calculating silhouette edges, I think.
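For the 2D convex hull step, a standard Andrew's monotone chain implementation would do; this is a generic sketch (you'd feed it the eight projected cube corners), not anyone's actual engine code:

```cpp
#include <algorithm>
#include <vector>

struct P2 { double x, y; };

// Signed cross product of (a - o) and (b - o): positive for a left turn.
static double cross(const P2& o, const P2& a, const P2& b)
{
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// Andrew's monotone chain: returns the convex hull of 'pts' in
// counter-clockwise order, without repeating the first point.
std::vector<P2> convexHull(std::vector<P2> pts)
{
    std::sort(pts.begin(), pts.end(), [](const P2& a, const P2& b) {
        return a.x < b.x || (a.x == b.x && a.y < b.y);
    });
    pts.erase(std::unique(pts.begin(), pts.end(), [](const P2& a, const P2& b) {
        return a.x == b.x && a.y == b.y;
    }), pts.end());

    std::vector<P2> hull;
    for (int pass = 0; pass < 2; ++pass) {      // lower hull, then upper hull
        size_t start = hull.size();
        for (const P2& p : pts) {
            while (hull.size() >= start + 2 &&
                   cross(hull[hull.size() - 2], hull.back(), p) <= 0.0)
                hull.pop_back();                // drop points that turn the wrong way
            hull.push_back(p);
        }
        hull.pop_back();                        // last point repeats as next pass's first
        std::reverse(pts.begin(), pts.end());   // upper hull walks right-to-left
    }
    return hull;
}
```

Consecutive hull points give you the outside edges of the projected silhouette directly.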

#5239691 Google play services?

Posted by C0lumbo on 11 July 2015 - 02:00 AM

I've been implementing google play sign-in recently, and my understanding got a lot clearer after watching this video:


TLDW? GameHelper is deprecated.


Probably doesn't explain the problem, but might help you simplify your code.

#5233407 Compute cube edges in screenspace

Posted by C0lumbo on 07 June 2015 - 01:06 PM

Sounds like silhouette calculation. For each face of the cube you can calculate whether it is facing toward the camera or away from the camera. Then for each edge of the cube you can calculate if it connects a face that's pointing away and a face that's pointing toward the camera. If so, then it forms part of the silhouette.


Tricky to implement though, because you need to deal with floating point precision issues, and once you've figured out the silhouette edges, working out the correct ordering you want might be fiddly.


"Also, if I subdivide the cube into 8, would the same relative points for the smaller cubes also form an outline for each smaller cubes?" - With an orthographic view, I think yes. With a projection view, I think no.
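The facing test and the silhouette-edge test described above can be sketched like this (names and types are illustrative, not from the original question):

```cpp
struct V3 { float x, y, z; };

float dot(const V3& a, const V3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// A face points toward the camera when the vector from the camera to a
// point on the face makes a negative dot product with the face normal.
bool facesCamera(const V3& faceNormal, const V3& facePoint, const V3& camPos)
{
    V3 toFace = { facePoint.x - camPos.x,
                  facePoint.y - camPos.y,
                  facePoint.z - camPos.z };
    return dot(faceNormal, toFace) < 0.0f;
}

// An edge is on the silhouette when exactly one of its two adjacent
// faces points toward the camera.
bool isSilhouetteEdge(bool faceAFacesCamera, bool faceBFacesCamera)
{
    return faceAFacesCamera != faceBFacesCamera;
}
```

In practice you'd add an epsilon to the dot-product comparison to cope with the floating point precision issues mentioned above.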

#5231805 Help with opengl diffuse shader

Posted by C0lumbo on 29 May 2015 - 10:38 PM

The reason it comes out transparent is because of the last line:


gl_FragColor = (diffuse * texture2D(u_Texture, v_TexCoordinate));


You're multiplying the RGBA you get from texture2D by the diffuse scalar you've calculated. You only want to multiply the RGB. Try:


gl_FragColor = (vec4(diffuse, diffuse, diffuse, 1.0) * texture2D(u_Texture, v_TexCoordinate));

#5230567 Imposter Transitioning to Mesh

Posted by C0lumbo on 23 May 2015 - 08:10 AM

Have you tried just manipulating the world matrix of the 3D mesh version of the tree?


You should be able to quantise the rotation of the tree down to match the orientation that the imposter was using, and manipulate the scale of the matrix to flatten it as well.


The other alternative might be to do some sort of fragment discard based transition effect which is often more subtle than simple 'popping'. It's mentioned (but not detailed) in this paper: http://www.cs.ucsb.edu/~holl/pubs/Candussi-2005-EG.pdf


"To avoid the popping effect while going from one LOD to the other, we implemented a smooth fade-in/fade-out transition for billboards. The fade effect is done with alpha test; when fading out, the number of rendered pixels is smoothly reduced, when fading in, it is smoothly increased. This is done by assigning random alpha values for the billboard texture pixels and then varying the alpha comparison value for the alpha test."
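The quoted fade could be sketched roughly like this in a fragment shader (u_FadeThreshold and the noise-in-alpha setup are assumptions based on the paper's description):

```glsl
// The billboard texture's alpha channel stores per-pixel random values.
// Sweeping u_FadeThreshold from 0 to 1 dissolves the billboard out;
// sweeping it back from 1 to 0 dissolves it in.
mediump vec4 texel = texture2D(u_Texture, v_TexCoord);
if (texel.a < u_FadeThreshold)
    discard;
gl_FragColor = texel;
```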

#5230447 degenerate triangles

Posted by C0lumbo on 22 May 2015 - 12:22 PM

Use indices. Obviously the proper answer is to profile, but the performance difference is probably marginal enough that it's hard to measure, so just use indices because it makes sense.


By using indices you are sending less data through (an index is smaller than a vertex), but more importantly by using an indexed draw you are allowing the GPU to use an optimisation called a post-transform cache. Basically, the quad (012, 213) involves 6 vertices, but the GPU can realise that it can reuse the previous vertex shader output for the second instance of vertices 1 and 2, skipping some work. GPUs don't use a post-transform cache for non-indexed rendering because it'd be too much work to check which vertices are duplicates of earlier ones.


There's a trickier question of whether or not you should use indices as a triangle strip with degenerates (0, 1, 2, 3, 3, 4, 4, 5, 6, 7, 7, etc) or a triangle list (0, 1, 2, 2, 1, 3, 4, 5, 6, 6, 5, 7). It probably makes little difference but I'd choose the latter because there is at least some archaic console hardware that performed particularly badly with degenerates.


Note that rendering a big load of quads is quite common; sometimes it's handy just to make one giant array of indices exactly for that purpose and share it across your 2D system, your particle system, etc.