

Member Since 15 Nov 2010
Offline · Last Active Jul 20 2016 06:43 PM

Posts I've Made

In Topic: Computing an optimized mesh from a number of spheres?

16 April 2016 - 09:02 AM

Thanks for all the awesome responses! I'm not entirely sure I can implement this myself, though... I'll try to check out some Java libraries that can handle all this for me, hopefully. And here I thought I had a new idea... >___>


The only thing I had really heard about before was meta-balls.
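For what it's worth, the metaball idea really is close to "mesh from spheres": you sum a falloff from each sphere into a scalar field and run an isosurface extractor (marching cubes, for example) over it to get one merged mesh. A minimal sketch of the field itself, with an illustrative falloff and threshold (not any particular library's formulation):

```python
# Sketch: classic "metaball" scalar field built from spheres.
# Running marching cubes over this field at threshold 1.0 would
# produce a single blended mesh; falloff choice is illustrative.

def field_value(point, spheres):
    """Sum of r^2 / d^2 falloffs; >= 1.0 means 'inside' the blob."""
    x, y, z = point
    total = 0.0
    for (cx, cy, cz, r) in spheres:
        d2 = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2
        if d2 == 0.0:
            return float("inf")  # exactly at a sphere centre
        total += (r * r) / d2
    return total

spheres = [(0.0, 0.0, 0.0, 1.0), (1.5, 0.0, 0.0, 1.0)]

inside = field_value((0.0, 0.0, 0.0), spheres)    # at a centre
between = field_value((0.75, 0.0, 0.0), spheres)  # both spheres contribute
far = field_value((10.0, 0.0, 0.0), spheres)      # well outside
```

The point between the two spheres ends up "inside" even though it is outside both individual spheres, which is exactly the blending behaviour that merges them into one surface.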

In Topic: A phenomenological scattering model

03 April 2016 - 09:59 AM

I gave it a whack, just to see. The method is so simple, it's easy to integrate.

The "DepthWeighted" image uses the maths from "A phenomenological scattering model". I'm just using the same weighting algorithm that was presented in the paper, and only reflections here -- no transmission/refraction.
There are some more screenshots here (with some trees and grass and things).
It seems to work best when there are few layers. For certain types of geometry, it might be ok. But some cases can turn to mush.

Assuming a lighting method similar to the paper's, I didn't notice the weighting hurting precision too much... If you have a lot of layers, you're going to get mush anyway.

The worst case for weighting issues might be distant geometry with few layers (given that this will be multiplied by a small number, and divided again by that number). Running the full lighting algorithm will be expensive, anyway -- so maybe it would be best to run a simplified lighting model.
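The structure behind that "multiplied by a small number, divided again" concern can be sketched as the weighted-average resolve these techniques share. The weight function below is a made-up stand-in, not the exact maths from the paper, and colors are single floats for brevity:

```python
# Sketch of a weighted-average transparency resolve (the general shape
# used by weighted blended OIT and similar methods). depth_weight is
# an illustrative falloff, NOT the paper's actual weight function.

def depth_weight(z):
    # Larger weight for nearer fragments; arbitrary falloff for the sketch.
    return 1.0 / (1e-5 + z ** 2)

def resolve(fragments):
    """fragments: list of (color, alpha, depth). Order-independent:
    every fragment is folded into plain sums, so sort order never matters."""
    num = 0.0
    den = 0.0
    coverage = 1.0
    for color, alpha, z in fragments:
        w = depth_weight(z) * alpha  # small for distant fragments...
        num += color * w
        den += w
        coverage *= (1.0 - alpha)
    avg = num / den if den > 0.0 else 0.0  # ...then divided back out here
    return avg * (1.0 - coverage)  # composite over background separately

frags = [(1.0, 0.5, 0.2), (0.3, 0.5, 0.8)]
a = resolve(frags)
b = resolve(list(reversed(frags)))  # same fragments, opposite order
```

In float32 this round trip is harmless, but in a half-float accumulation target the distant-geometry case (tiny `w` in, divide by tiny `den` out) is where precision would go first.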

It may work best for materials like the glass demo scene from the paper. That one is dominated by refractions (which aren't affected by the weighting or sorting artifacts).

What about a more interesting example with different colors overlapping? Also, can you provide shader code? I have a test program just begging me to implement this algorithm in it. xd


Oh, I missed the link. Well... that's really not very impressive, sadly... It seems like an improvement over WBOIT, but not by much...

In Topic: Fragmentation in my own memory allocator for GPU memory

30 March 2016 - 10:04 PM

Sparse textures would be great, but they're sadly not something I can rely on for compatibility reasons. Not even all implementations of Vulkan support sparse textures.

In Topic: A phenomenological scattering model

30 March 2016 - 10:02 PM

What's stopping us from implementing it now? Doesn't the paper contain all the information we'd need?

In Topic: Fragmentation in my own memory allocator for GPU memory

30 March 2016 - 02:53 PM

Sorry if I'm getting some of the terminology wrong here. I'm a bit outside my comfort zone. >___>



Yeah, I will try to keep memory with different lifetimes in different "pools", which in practice will be different hardware-allocated blocks, I guess. In most cases I know exactly how much memory I will need before I start allocating it (model data, some static textures, etc.), so I can just calculate how much I need and allocate everything at once. Texture streaming is the only case where I'm dynamically doing things that require advanced suballocation out of one big "real" allocation of memory. I would rather avoid unloading all textures from VRAM as a "panic operation", as it could take almost a minute to reload 1GB of texture data from disk, and there would be a risk of thrashing when the heap is almost full, causing it to repeatedly panic.
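The pools-by-lifetime plan above can be sketched like this. `Pool`, the request names, and the 256-byte alignment are all illustrative; in real code each pool would be backed by one device memory block (one vkAllocateMemory call) and the cursor would hand out bind offsets:

```python
# Sketch: size each lifetime pool up front from known requests, then
# hand out aligned offsets with a bump (linear) suballocator.
# All names and sizes are illustrative.

class Pool:
    def __init__(self, size):
        self.size = size
        self.cursor = 0

    def alloc(self, size, alignment):
        # Align the cursor up, then advance it past this allocation.
        offset = (self.cursor + alignment - 1) // alignment * alignment
        if offset + size > self.size:
            raise MemoryError("pool exhausted")
        self.cursor = offset + size
        return offset

# Sizes known before allocation starts (model data, static textures...).
requests = {"static": [4096, 65536], "per_level": [16384, 16384]}

# One block per lifetime, with a little slack for alignment padding.
pools = {name: Pool(sum(sizes) + 256 * len(sizes))
         for name, sizes in requests.items()}

offsets = {name: [pools[name].alloc(s, 256) for s in sizes]
           for name, sizes in requests.items()}
```

A bump allocator like this never fragments within a pool, which is exactly why it only works when the whole pool's contents share one lifetime: you free the entire block at once instead of individual ranges.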



I do not have direct access to the memory in question. When I allocate GPU memory, I receive an "object" back from the Vulkan driver that represents the requested memory. If the computer has a discrete GPU with its own VRAM, the only way I can touch that memory is to map a CPU-side buffer and call a Vulkan command to copy the buffer's contents into the texture. The heap structure therefore has to be stored on the CPU.

1. Again, I cannot access the memory directly. In Vulkan you first allocate device memory (VRAM) and then "bind" textures to a range inside the allocated device memory. This binding is permanent, so to move a texture I would need to create a new texture, copy the old texture's data to the new one (using Vulkan commands; the copy is entirely in device memory), wait for the GPU to finish the copy, wait for the GPU to finish using the old texture for rendering, THEN delete the original. Also, the source and destination ranges cannot overlap, so if a texture has to be shifted just a few bytes I would need to copy it to somewhere else, delete the original, then copy the copy to the correct place and delete the first copy. 
2. No, I would have a few hundred, maybe a thousand at most. All mipmaps of a texture are stored together.
3. This is hard to avoid, but at least pretty much all textures will be 512x512 to 2048x2048. Note that four 512x512 textures do not require exactly the same amount of memory as one 1024x1024 due to mipmaps.
4. I've confirmed that my region merging is working 100% correctly in all special cases. It took some time to get right, yeah.
5. Wouldn't that just result in more wasted memory as I would start with smaller blocks instead of one big block? I have a fixed memory budget to maintain anyway.
EDIT: Hmm, it might be a good idea to try to force the memory allocations to be multiples of each other, so that (for example) a 2048x2048 texture is exactly 4x as big as a 1024x1024 texture. Currently a 2048x2048 is 4.000011x as big as a 1024x1024 texture. Padding textures to keep the relationship an integer might avoid cases where a texture barely fails to fit because of this, although according to my tests, increasing block sizes just made the problems caused by fragmentation worse...
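The "slightly more than 4x" effect in the EDIT falls out of plain mip arithmetic: each level adds a quarter of the previous one, and the longer chain of the larger texture adds a little extra. A minimal model, assuming 4 bytes per pixel, square textures, and no driver alignment (the real 4.000011 figure presumably also includes alignment padding):

```python
# Sketch: byte size of a full mip chain, down to 1x1.
# 4 bpp, square, no alignment -- illustrative numbers only.

def mip_chain_bytes(edge, bytes_per_pixel=4):
    total = 0
    while edge >= 1:
        total += edge * edge * bytes_per_pixel
        edge //= 2  # each mip level halves the edge length
    return total

ratio = mip_chain_bytes(2048) / mip_chain_bytes(1024)  # just above 4.0

# The proposed padding: round the larger chain to an exact multiple.
padded_2048 = 4 * mip_chain_bytes(1024)
```

So even in the ideal case the ratio can't be exactly 4 without padding, because the 2048 chain has one more mip level than four independent 1024 chains would collectively waste.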
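The non-overlapping-copy workaround from point 1 can be sketched with a bytearray standing in for device memory. In Vulkan each step would be a copy command plus synchronization, and the scratch offset here is illustrative:

```python
# Sketch: moving a region when source and destination ranges may not
# overlap -- the "shifted just a few bytes" case goes through scratch.

def move_region(mem, src, dst, size, scratch):
    """Move mem[src:src+size] to dst without any overlapping copy."""
    if abs(dst - src) >= size:
        # Ranges are disjoint: one direct copy is legal.
        mem[dst:dst + size] = mem[src:src + size]
    else:
        # Ranges overlap: 1) copy to a scratch region elsewhere,
        # 2) copy from scratch to the final place.
        mem[scratch:scratch + size] = mem[src:src + size]
        mem[dst:dst + size] = mem[scratch:scratch + size]

mem = bytearray(64)
mem[8:16] = b"TEXTURE!"
move_region(mem, src=8, dst=4, size=8, scratch=32)  # shift left by 4 bytes
```

On real device memory you'd also need the waits described above (GPU done with the copy, GPU done rendering from the original) between the steps, which is what makes this so much more expensive than a CPU-side `memmove`.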