lomateron

reading bool array bit by bit in HLSL 4.0


On the CPU I read my keyboard input. I only need to know when certain keys are pressed (just 43 different keys), so I make an unsigned char array and assign each key to one bit of the array (for example, when I press key number one, the first bit of the array changes to 1).

 

I put these inputs into the unsigned char array because I have to send them to another computer, and because I also pass them to the shaders.

The problem is: how can I read this array in the shaders?

 

D3D10 has SetBoolArray, SetFloatArray, and SetIntArray, so I thought about using SetBoolArray:

 

unsigned char keyboard[6];                          // the array has 48 bits; I only use 43
g_pShadersKeyboard->SetBoolArray(keyboard, 0, 6);

 

But I think the bools in HLSL 4 are 32-bit, aren't they?

 

Anyway, can I just use the asuint() HLSL function to convert every bool of the array to an unsigned int, and then check whether a key is pressed using an "AND mask"?

 

Or is the array somehow transformed into a 32-bit bool array of only true/false values, so that the individual bit values are lost?

 


The asuint() documentation seems to say that only float or int types are admissible as parameters. I wouldn't rely on all the bits in your char array making it to the GPU with the same bit pattern; it's possible it will work, but it doesn't seem very safe. The obvious approach is to convert your char array to an integer array on the fly and send that to the shader instead, so why not do that?

Besides, packing your booleans into chars is going to be expensive for the shader to unpack, since you don't have that many values. For very large datasets packing makes sense, because you save a lot of memory and the cache is useless either way; but for only a few values, having each of them aligned at some comfortable memory boundary is going to be much faster, as the GPU cache will make access very fast and you'll have no ALU cost to pay for bit-shifting and masking. I mean, even aligned at a 4-byte boundary with four booleans per int4, the whole array would take 176 bytes of memory, which is around 0.25% of a constant buffer's maximum size.

 

Another possibility is to rethink your design. Instead of passing the raw keyboard state to the shader and letting it interpret what it gets, you might want to process it CPU-side into something the shader can more easily use, such as an "enable that light source" flag or some such. It depends on what you're doing, but I'm having trouble imagining a situation where a shader would need to know about all 43 keyboard events, unless it's for one of those WebGL applets to play around with.
