vuce

xna hlsl spatial hashing



I have a problem. I need to create a texture for spatial hashing (SH), where indices (texture coordinates) of different texture(s) would be stored. I thought this would be straightforward with a pixel shader, using the input texture coordinates and writing them to the SH texture, but from what I could find there's no way of doing that in HLSL (I imagine concurrent access might be a problem). Is there a way to work around this limitation (other than running through every index for every SH texture pixel and inserting the appropriate ones)? Hope the question is not too stupid; I'm a total newb at writing shaders, but I really need this. Thanks in advance.
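Just to make the idea concrete, here's roughly the mapping I have in mind (only a sketch; the names and grid sizes are placeholders, not code I actually have):

[code]
// Each particle position maps to a cell of the spatial-hash (SH) texture; that
// cell's texel should end up holding the particle's texture coordinates.
float2 CellSize;      // world-space size of one grid cell (placeholder)
float  GridTexSize;   // SH texture is GridTexSize x GridTexSize texels (placeholder)

float2 PositionToCell(float2 worldPos)
{
    return floor(worldPos / CellSize);
}

float2 CellToTexCoord(float2 cell)
{
    return (cell + 0.5) / GridTexSize;   // centre of that cell's texel, in [0,1]
}
[/code]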

Not from the pixel shader, indeed. If I understand correctly, what you're trying to do is a so-called scattering operation - scattering not in the sense of lighting but of memory access pattern, i.e. writing to an arbitrary location in your render target. One approach is the one you mentioned, i.e. naively reading back every possible position, which might pose a performance problem.

With XNA/DX9, scattering is only possible by changing the vertex position in the vertex shader, and you will probably use point primitives for that. If you need to read from a texture in the vertex shader, that is called vertex texture fetch, e.g. using tex2Dlod. This can also be a problem: you need hardware capable of doing it (shader model 3, i.e. IIRC the HiDef profile in XNA 4). Also, it seems XNA 4 stripped point primitives :(
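To illustrate (a rough, untested sketch only; the names and grid parameters are made up, and I'm ignoring D3D9's half-pixel offset), such a scattering vertex shader could look roughly like this:

[code]
// One point primitive per particle. Each vertex carries the texture coordinate
// of "its" particle; the position is fetched in the vertex shader (vertex
// texture fetch, SM 3.0) and the vertex is moved to the grid cell it hashes to.
texture ParticlePositions;                 // assumed: particle positions live in a texture
sampler PosSampler = sampler_state { Texture = <ParticlePositions>; };

float2 CellSize;                           // world-space size of one grid cell
float2 GridTexSize;                        // spatial-hash render target size, in texels

struct VSIn  { float2 uv  : TEXCOORD0; };  // where this particle sits in the position texture
struct VSOut { float4 pos : POSITION0; float2 id : TEXCOORD0; };

VSOut ScatterVS(VSIn input)
{
    VSOut o;
    float4 p    = tex2Dlod(PosSampler, float4(input.uv, 0, 0));  // read the position
    float2 cell = floor(p.xy / CellSize);                        // which grid cell
    float2 ndc  = (cell + 0.5) / GridTexSize * 2.0 - 1.0;        // cell centre -> clip space
    o.pos = float4(ndc.x, -ndc.y, 0.0, 1.0);                     // flip y for D3D
    o.id  = input.uv;                                            // what the pixel shader writes
    return o;
}

float4 ScatterPS(float2 id : TEXCOORD0) : COLOR0
{
    return float4(id, 0, 1);   // store the particle's texture coordinate in its cell
}
[/code]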

Then it can get even more complicated: what if you want to write to a position that is already occupied? In that case you might even have to resort to tricky stenciling.

GPU Gems has an article which could help you: Real-Time Rigid Body Simulation on GPUs. They create a uniform spatial grid with point primitives and stenciling. But as said, it's rather complex.

Also: check if a CPU-generated solution is viable. If it's not done every frame, I'd definitely go with that.


The GPU Gems article looks like exactly what I need. Thank you so much.

Just another quick question: with no point primitives in XNA 4.0, what would be the best replacement? Triangles that project inside a single pixel?

I suggest using a line list, with each line one pixel wide (or high); then you need "only" two vertices per point. Also, to reduce redundancy (and memory), consider geometry instancing.
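Roughly like this (untested sketch, names made up): both endpoints carry the particle's uv as in the point-scattering sketch above, plus a flag marking the second endpoint, which gets pushed one pixel along x:

[code]
// Two vertices per particle, forming a one-pixel horizontal line in the
// spatial-hash render target instead of a point primitive.
texture ParticlePositions;
sampler PosSampler = sampler_state { Texture = <ParticlePositions>; };

float2 CellSize;
float2 GridTexSize;   // spatial-hash render target size, in texels

struct VSIn  { float2 uv : TEXCOORD0; float endFlag : TEXCOORD1; };  // endFlag: 0 or 1
struct VSOut { float4 pos : POSITION0; float2 id : TEXCOORD0; };

VSOut LinePointVS(VSIn input)
{
    VSOut o;
    float4 p    = tex2Dlod(PosSampler, float4(input.uv, 0, 0));
    float2 cell = floor(p.xy / CellSize);
    float2 ndc  = (cell + 0.5) / GridTexSize * 2.0 - 1.0;
    // Second endpoint moves one pixel to the right (2 / width in NDC units).
    o.pos = float4(ndc.x + input.endFlag * (2.0 / GridTexSize.x), -ndc.y, 0.0, 1.0);
    o.id  = input.uv;
    return o;
}
[/code]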

It's really a pity that points are missing. I wonder how much performance you lose for this (or any other) workaround.

Judging by this it's not that bad (4%). Moving forward, I'm at the point where I'd need to pack particle indices into the 4 colour channels of the pixel shader output, but as I understand it float4 is only 32 bits, making only indices 0...255 possible to write? Or is there a way to use a higher colour depth (it seems like Texture2D can use 128 bits of colour information)? I have no idea how the guys who wrote the article did this...

edit: I guess I could use 2 colour channels for each particle index (since those are TEXCOORD anyway), and use 2 textures instead of 1 to store 4 of them...
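i.e. something along these lines (just a sketch of the packing I mean, assuming 8-bit colour channels):

[code]
// Pack a 16-bit particle index into two 8-bit channels, and unpack it again
// when reading the hash texture back.
float2 PackIndex(float index)          // index in [0, 65535]
{
    float high = floor(index / 256.0);
    float low  = index - high * 256.0;
    return float2(high, low) / 255.0;  // each ends up in one 8-bit channel
}

float UnpackIndex(float2 packed)
{
    float2 bytes = round(packed * 255.0);
    return bytes.x * 256.0 + bytes.y;
}
[/code]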

Not sure if that article applies here; it's mainly about point sprites, not point primitives. Anyway, I wouldn't be too concerned about performance for now, sorry if my remark led you there. Do whatever works, optimize later.

Well, float4 in HLSL does not say much by itself - if that's what you mean - the format of the render target is what's relevant. Hmmm, I can't seem to find what exactly they used in the GPU Gems article. I expect D3DFMT_A32B32G32R32F (SurfaceFormat.Vector4 in XNA), i.e. a full 32-bit float per channel. Warning: make sure your identification works reliably, since you are now working in float arithmetic.

You could also use Rgba64, which gives you 64k ids (and integers instead of floats).


[quote]
Not sure if that article applies here; it's mainly about point sprites, not point primitives. Anyway, I wouldn't be too concerned about performance for now, sorry if my remark led you there. Do whatever works, optimize later.
[/quote]
I used triangles for now and I'll see how that works. After all this is done, the change shouldn't be that difficult, hopefully :)

[quote]
Well, float4 in HLSL does not say much by itself - if that's what you mean - the format of the render target is what's relevant. Hmmm, I can't seem to find what exactly they used in the GPU Gems article. I expect D3DFMT_A32B32G32R32F (SurfaceFormat.Vector4 in XNA), i.e. a full 32-bit float per channel. Warning: make sure your identification works reliably, since you are now working in float arithmetic.

You could also use Rgba64, which gives you 64k ids (and integers instead of floats).
[/quote]
Oh, that makes (a lot of) sense. It also explains why I haven't found anything about the bit size of float4. :)

Thanks for all the help. I'll try to do better research next time before asking (stupid) questions :)

So I think I've done the stenciling and depth testing correctly, and I'm at a point where I need to blend these 4 shaders together (each of them only writes to the appropriate colour channel, so simple additive blending would suffice), but BlendState won't work on HalfVector4, and transferring rendered textures back to the shader seems really wasteful (is it not?)... I've searched extensively for a solution and thought there might be a dedicated render target register that I could use, but that doesn't appear to be the case. All I could really find is that blending is (always) problematic :)
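To clarify what I mean by "the appropriate colour channel", roughly this (sketch only; whether the render target format allows blending at all is exactly my problem):

[code]
// The application sets ChannelMask to (1,0,0,0), (0,1,0,0), (0,0,1,0) or
// (0,0,0,1) for the four passes, so with additive blending each pass ends up
// in its own colour channel of the same render target.
float4 ChannelMask;   // set per pass from the application

float4 CellWritePS(float particleId : TEXCOORD0) : COLOR0
{
    return particleId * ChannelMask;   // only this pass's channel is non-zero
}
[/code]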
Would it be possible to call the vertex shader only once, since it's the same on all 4 passes?
