
Member Since 18 May 2011
Offline Last Active Today, 10:27 AM

Posts I've Made

In Topic: How do triangle strips improve cache coherency?

04 November 2014 - 09:41 AM


They improve cache coherency by reading straight through the vertex buffer. Cache misses are minimized because the GPU walks through vertex buffer memory in a straight line. Every triangle after the first one only needs to fetch one additional vertex (the next vertex in the vertex buffer) to form a full triangle. When you index vertices you are potentially jumping between random vertex locations anywhere inside the vertex buffer memory, depending on how the mesh verts are connected.
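As a rough illustration (plain Python, hypothetical function name), here's how a strip turns N+2 sequential vertices into N triangles: each triangle after the first reuses the previous two vertices and adds exactly one new one, which is why the reads stay sequential:

```python
def strip_to_triangles(vertex_count):
    """Expand a triangle strip over sequential vertices 0..vertex_count-1
    into individual triangles. Each triangle after the first reuses the
    previous two vertices and adds one new sequential vertex."""
    tris = []
    for i in range(vertex_count - 2):
        if i % 2 == 0:
            tris.append((i, i + 1, i + 2))
        else:
            tris.append((i + 1, i, i + 2))  # swap to keep consistent winding
    return tris

print(strip_to_triangles(5))
# 5 sequential vertices -> 3 triangles, each new one adding a single vertex
```

An indexed triangle list for the same mesh could reference those vertices in any order, which is what causes the scattered reads.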

In Topic: What is a lobby server?

30 October 2014 - 01:20 PM

Don't forget about RakNet either! It was just open-sourced under a BSD license after Oculus bought it. It takes care of a lot of networking stuff, including packet priority/reliability, data replication across client/server, and events, and it all runs over a nice "TCP-over-UDP"-style reliability layer that keeps things fast. It can also communicate between 32-bit and 64-bit clients/servers with no issue. It's a pretty mature library.


It is only free on desktop platforms, however; it will cost you money if you want to branch into consoles.

In Topic: "Expanding" a bezier curve to a "pipe"

29 October 2014 - 03:16 PM

Off the top of my head, I would probably try to generate circles that can be connected/triangulated together to define the cylinders of the pipe.


To generate one of the circles I would sample two positions on the curve: one for the point the circle is centered on, and another a small increment further along the curve. Subtract the first from the second and normalize to get a forward vector that orients the circle. Then use a couple of cross products to derive curve-oriented up/right vectors from the forward vector. With those right/up vectors you can generate a circle of points in 2D (which you would probably just reuse for every ring), and multiply the X/Y of those points by the right/up vectors to get an oriented circle.
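A minimal sketch of those steps (plain Python, all names hypothetical; assumes a cubic Bezier and that the curve never runs parallel to the chosen world-up vector, which would make the cross product degenerate):

```python
import math

def bezier(p0, p1, p2, p3, t):
    """Cubic Bezier point; each control point is an (x, y, z) tuple."""
    u = 1.0 - t
    return tuple(u**3 * a + 3*u*u*t * b + 3*u*t*t * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def ring(curve_pts, t, radius, segments, dt=1e-3, world_up=(0.0, 0.0, 1.0)):
    """Generate one circle of points centered on the curve at parameter t,
    oriented perpendicular to the curve's local forward direction."""
    center = bezier(*curve_pts, t)
    ahead = bezier(*curve_pts, t + dt)          # second sample slightly forward
    forward = normalize(tuple(b - a for a, b in zip(center, ahead)))
    right = normalize(cross(world_up, forward))  # curve-oriented right vector
    up = cross(forward, right)                   # curve-oriented up vector
    pts = []
    for i in range(segments):
        ang = 2.0 * math.pi * i / segments
        x, y = radius * math.cos(ang), radius * math.sin(ang)
        # place the 2D circle point using the oriented right/up axes
        pts.append(tuple(c + x * r + y * u
                         for c, r, u in zip(center, right, up)))
    return pts
```

Call `ring` at several increasing values of t to get the stack of oriented circles to triangulate into the pipe.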


After you generate several of the oriented-circles to outline the shape of the pipe, you can figure out a triangulation to connect together two of the oriented circles into a pipe.

The geometry will probably act weird if the pipe takes sharp turns. Also, you will have to triangulate the two endpoint circles so the pipe isn't open-ended.


If you want to generate a half-pipe, you can design a 2D outline of the half-pipe shape and then do the same steps as with the circle. Multiplying those 2D shape-outline coords by the oriented Bezier axes will orient them along the curve. This is pretty much an extrusion.


You might just search for a shape-extrusion algorithm, actually.

In Topic: Quick Multitexturing Question - Why is it necessary here to divide by the num...

21 August 2014 - 08:01 PM

Also, don't let color averaging stop you there with texture blending!

If you use a linear interpolation you can blend any amount of one texture into another. With a lerp you can blend in more or less of a texture to taste, as long as the interpolation value is between 0 and 1.

vec3 red = vec3(1,0,0);
vec3 black = vec3(0,0,0);
vec3 mixedColor = mix(red, black, 0.25);

// This gives you 75% Red and 25% Black.

Another cool application is smooth texture blending. You can use color lerping on outdoor terrain to seamlessly blend different textures together, like grass and dirt, in irregular ways that break up the texture so the mesh isn't one solid tile. Give different vertices on the mesh different lerp parameters, and vertex interpolation will produce all the in-between values, so one blend percentage fades smoothly into the other. Check out the screenshot of the day and notice the texture blending on the ground in the back. http://www.gamedev.net/page/showdown/view.html/_/slush-games-r46850
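The same idea in plain Python (hypothetical colors), mimicking GLSL's mix. The loop stands in for the rasterizer interpolating the per-vertex blend weight across an edge:

```python
def lerp(a, b, t):
    """Blend two RGB colors: t=0 gives a, t=1 gives b (same as GLSL mix)."""
    return tuple((1.0 - t) * x + t * y for x, y in zip(a, b))

grass = (0.2, 0.6, 0.1)  # hypothetical texture colors
dirt = (0.4, 0.3, 0.2)

# Two vertices with blend weights 0.0 and 1.0; the hardware interpolates
# the weight between them, fading grass into dirt across the triangle.
for i in range(5):
    w = i / 4.0
    print(w, lerp(grass, dirt, w))
```

In the real shader the two colors would be texture samples and w would come in as an interpolated vertex attribute.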


Texture blending is pretty handy.

In Topic: Cascaded Shadow Maps Optimization Concept?

30 May 2014 - 04:43 PM


Outputting depth from the pixel shader disables early-Z, so that can come at a pretty significant performance cost.


On DX11+ hardware you can do texture reads in the vertex shader. You could tessellate the quad up to some level, read the depths, and interpolate them across the verts. That would let you keep early-Z! Then you would have to worry about the tessellation/quality tradeoff, and how many polys the tessellated quad has versus the model. This would also reduce the number of texture reads.


Also, the different zoom scales of the cascades in CSM would actually play well with tessellation, I think: more detail up close, and less further away.


Edit: 'Doh, you would have to sample the depth texture in the domain shader (or later), because tessellation happens after the vertex shader.


Also, it may be better to pre-bake some tessellated quads instead of letting the tessellator do a bunch of redundant work, since it would tessellate all of the object quads the same way. Then you could just use the vertex shader for the texture sampling. Managing those vertex buffers might be a pain in the ass though, so I don't know if it's worth it.
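A sketch of the pre-baked idea (plain Python, hypothetical function name): build one n-by-n subdivided unit quad's vertices and index list once, and reuse that buffer for every quad so the vertex shader can do the depth sampling:

```python
def subdivided_quad(n):
    """Build an n x n subdivided unit quad in [0,1]^2:
    (n+1)^2 vertices and 2*n*n triangles as an index list."""
    verts = [(x / n, y / n) for y in range(n + 1) for x in range(n + 1)]
    indices = []
    stride = n + 1  # vertices per row
    for y in range(n):
        for x in range(n):
            i = y * stride + x
            # two triangles per grid cell
            indices += [(i, i + 1, i + stride),
                        (i + 1, i + stride + 1, i + stride)]
    return verts, indices

verts, tris = subdivided_quad(4)
print(len(verts), len(tris))  # 25 vertices, 32 triangles
```

At draw time you would scale/orient this unit quad per object and sample the shadow map per vertex instead of tessellating on the fly.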