

Member Since 15 Oct 2002
Last Active Sep 21 2016 07:26 PM

Posts I've Made

In Topic: Snow

09 March 2009 - 04:39 AM

You could also use 16- or 8-bit vertex position formats to reduce the amount of data you need to store. I don't know what your units are, but unless you really want centimetre accuracy you might be able to get away with quantised positions. If you have some sort of grid or quadtree system for your world, you could store those quantised positions as offsets from the current cell or node or whatever, and pass the origin into your shader along with the verts.
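As a rough sketch of the quantised-offset idea (the cell size, function names, and 16-bit range here are all made up for illustration, not from any particular engine):

```cpp
#include <cassert>
#include <cstdint>
#include <cmath>

// World units covered by one grid cell -- an assumed value.
constexpr float kCellSize = 64.0f;

// Quantise one position component to a 16-bit offset within its cell.
uint16_t quantise(float worldPos, float cellOrigin) {
    float t = (worldPos - cellOrigin) / kCellSize;      // 0..1 inside the cell
    return static_cast<uint16_t>(t * 65535.0f + 0.5f);  // round to 16 bits
}

// Reconstruct the world position -- this is what the vertex shader would
// do, with the cell origin passed in as a uniform.
float dequantise(uint16_t q, float cellOrigin) {
    return cellOrigin + (q / 65535.0f) * kCellSize;
}
```

With a 64-unit cell the worst-case quantisation error is about 0.001 units, which is plenty for footprints.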

Also consider generating footprints less often, and only allowing a fixed number. Your footprints buffer would act like a FIFO, and you could fade out the old ones before re-using them.
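Something like this, say (the capacity, the Footprint fields, and the per-frame fade are all illustrative):

```cpp
#include <array>
#include <cassert>
#include <cstddef>

struct Footprint {
    float x = 0.0f, y = 0.0f;
    float alpha = 0.0f;  // footprint is invisible once this hits zero
};

// Fixed-size FIFO of footprints: the oldest slot gets overwritten.
template <std::size_t N>
class FootprintBuffer {
public:
    void add(float x, float y) {
        prints_[next_] = {x, y, 1.0f};
        next_ = (next_ + 1) % N;      // wrap around to the oldest slot
        if (count_ < N) ++count_;
    }
    // Call once per frame so old prints fade before being re-used.
    void update(float fadePerFrame) {
        for (std::size_t i = 0; i < count_; ++i)
            if (prints_[i].alpha > 0.0f)
                prints_[i].alpha -= fadePerFrame;
    }
    std::size_t count() const { return count_; }
private:
    std::array<Footprint, N> prints_{};
    std::size_t next_ = 0, count_ = 0;
};
```

If the fade rate is tuned so a print is fully transparent before its slot comes around again, the re-use is invisible to the player.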

Another idea: when a chunk of your world goes out of the frustum, you could throw away, say, every second footprint. When you come back to that area, you aren't likely to notice that a bunch of footprints have disappeared if they are quite dense.
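The thinning step itself is trivial; a sketch, assuming footprints are just positions in a vector:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Pos { float x, y; };

// Keep every second footprint, dropping the rest -- run this when the
// chunk that owns these prints leaves the frustum.
void thin(std::vector<Pos>& prints) {
    std::vector<Pos> kept;
    kept.reserve(prints.size() / 2 + 1);
    for (std::size_t i = 0; i < prints.size(); i += 2)
        kept.push_back(prints[i]);
    prints.swap(kept);
}
```

Running it repeatedly halves the count each time, so a chunk that stays off-screen for a while decays towards almost no stored data.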

It really depends on how persistent you want it to be. You can probably fake it a lot to give the impression of persistence without actually storing all that much data.


In Topic: Setting photoshop's alpha channel bit depth

06 August 2008 - 07:43 PM

There was a problem with Photoshop 7 and alpha in TGA files. Adobe changed it so that transparency information was saved as the TGA alpha channel instead of the contents of the actual alpha channel. They fixed it in 7.0.1, but you can download the fixed TGA plugin from here:



In Topic: multi core multi threading

10 May 2008 - 05:15 PM

Rendering, physics, AI, sound, asset streaming... there are at least five things you could put in separate threads.

Also, they will have had the same issues as you, but they will have come up with solutions. For example, when they unload assets, they may flag the asset as deinitialising, and then wait a frame or two to ensure that it is not being rendered before unloading it.
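That deferred-unload pattern might look something like this (the two-frame delay and the state names are assumptions for the sketch, not from any particular engine):

```cpp
#include <cassert>

enum class AssetState { Loaded, Deinitialising, Unloaded };

struct Asset {
    AssetState state = AssetState::Loaded;
    int framesUntilUnload = 0;
};

// Don't free the data immediately -- just flag it and start a countdown.
void requestUnload(Asset& a) {
    a.state = AssetState::Deinitialising;
    a.framesUntilUnload = 2;  // wait two frames before actually freeing
}

// Called once per frame by the asset manager. Once the countdown expires,
// the renderer can no longer be holding a reference from an in-flight
// frame, so it's safe to free the data.
void tick(Asset& a) {
    if (a.state == AssetState::Deinitialising && --a.framesUntilUnload <= 0)
        a.state = AssetState::Unloaded;
}
```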


In Topic: dynamic allocation

28 December 2007 - 01:53 PM

Maybe it's because I have a console development background, but I agree with the suggestion of using a pool of objects allocated once at program initialisation. If you use some sort of list, spawning a new object becomes a simple matter of popping it off the start (or end) of your list, and destroying it means pushing it back on to your list. This way you have one big allocation at the start of your program and no mid-game allocations that cause memory fragmentation. You will also improve your cache coherency if you allocate a big array of objects.
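A minimal sketch of the pool, assuming a free list of indices into one big array (the pool size and the Particle type are just for illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Particle {
    float x = 0, y = 0;
    bool alive = false;
};

class ParticlePool {
public:
    explicit ParticlePool(std::size_t n) : objects_(n) {
        free_.reserve(n);
        for (std::size_t i = 0; i < n; ++i)
            free_.push_back(n - 1 - i);  // all slots start free
    }
    // "Spawning" pops an index off the free list -- no heap allocation.
    Particle* spawn() {
        if (free_.empty()) return nullptr;  // pool exhausted
        Particle* p = &objects_[free_.back()];
        free_.pop_back();
        p->alive = true;
        return p;
    }
    // "Destroying" pushes the index back on -- no deallocation either.
    void destroy(Particle* p) {
        p->alive = false;
        free_.push_back(static_cast<std::size_t>(p - objects_.data()));
    }
    std::size_t available() const { return free_.size(); }
private:
    std::vector<Particle> objects_;  // the one big up-front allocation
    std::vector<std::size_t> free_;
};
```

Because all the objects live in one contiguous array, iterating over the live ones each frame is also friendly to the cache, which is the coherency point above.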

These techniques probably don't often occur to PC-centric programmers, where there is plenty of memory and CPU to spare, but in the console space, where these are at a premium, they help a lot. That's not to say they won't help PC software too, though.


In Topic: How to achieve correct transparencies?

13 September 2007 - 04:38 PM

Also, when are you rendering your transparent objects? It looks like they might be blending against your sky or your water or something..