ferhua

Member
  1. OK, I am sorry: in one .fx it works, but in the other one, which needs to transform from world coordinates to voxel coordinates, it doesn't. The following code transforms pixels from world position to voxel position:

```hlsl
float3 world_to_svo(float3 posW, float voxel_size, float3 offset)
{
    float3 pos = posW;
    pos = (pos + offset) / voxel_size;
    return pos;
}
```

So in voxelization.fx I store the voxel position like this:

```hlsl
svoPos = world_to_svo(posW.xyz, gVoxelSize, gVoxelOffset);
gUAVColor[svoPos] = float4(litColor, 1.0f);
```

And in the other .fx I want to read back the color I saved into the texture:

```hlsl
float3 pos_svo = world_to_svo(posW, gVoxelSize, gVoxelOffset);
color = gVoxelList.SampleLevel(SVOFilter, pos_svo / gDim + 0.5f / 256.0f, 0);
color = gVoxelList[pos_svo];
```

Neither method works, and I get the wrong picture like I posted before. It may be a small offset in the sampling texture coordinate, because when I set gDim to 64 it works, but when I set it to 128 or 256 it is wrong. I use the same method in both cases, so I don't know how to fix this bug. Does anyone have an idea? Thank you very much! The first picture is at 128 resolution and the second at 256; you can see from the comparison that the higher the resolution, the more imprecise it gets.
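One thing that may bite here (a hedged CPU sketch, not the exact shaders; the 1-D grid span and test values below are assumptions): the write `gUAVColor[svoPos]` truncates the fractional voxel coordinate to an integer texel, while the read samples at the un-snapped fraction plus a constant `0.5f / 256.0f`, which is a half-texel offset only for a 256-texel dimension. Snapping the read to the written voxel's center using the actual `gDim`, i.e. `(floor(p) + 0.5) / gDim`, keeps write and read on the same texel at every resolution:

```python
import math

def world_to_svo(pos_w, voxel_size, offset):
    # mirrors the HLSL world_to_svo: world space -> fractional voxel coords
    return (pos_w + offset) / voxel_size

def write_index(svo_pos):
    # gUAVColor[svoPos] truncates the float coordinate to an integer texel
    return int(svo_pos)

def read_index(svo_pos, dim):
    # point-sample at the CENTER of the voxel that was written:
    # uv = (floor(p) + 0.5) / dim, instead of p / gDim + 0.5 / 256
    uv = (math.floor(svo_pos) + 0.5) / dim
    return min(int(uv * dim), dim - 1)  # texel a point sampler fetches

# hypothetical 1-D check across the resolutions from the post,
# assuming the voxel grid spans [0, 1) in world space
for dim in (64, 128, 256):
    voxel_size = 1.0 / dim
    for pos_w in (0.013, 0.51, 0.999):
        p = world_to_svo(pos_w, voxel_size, 0.0)
        assert read_index(p, dim) == write_index(p)
```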
  2. This one is fixed, but there is still a problem; please read the next post. Thank you guys. I have a new problem with sampling while doing voxelization. I only return the following color in the pixel shader. The pictures are visualizations of the voxelization. The first one looks fine; I get it using:

```hlsl
// pos covers every integer point in x: (0, 256), y: (0, 256), z: (0, 256)
uint VoxelDim = 256;
uint sliceNum = VoxelDim * VoxelDim; // index: 256^3
uint z = vin.index / sliceNum;
uint temp = vin.index % sliceNum;
uint y = temp / VoxelDim;
uint x = temp % VoxelDim;
uint3 pos = uint3(x, y, z);
float3 color = gVoxelList[pos].xyz;
```

Next, I changed operator[] to the sampling function:

```hlsl
float3 color = gVoxelList.SampleLevel(SVOFilter, pos / 256.0f, 1).xyz;
```

That gives the second picture; you can see it has very strange "triangle-like" pixels. I have no idea where the problem could be. Is it an accuracy problem in the voxelization? I have checked pos: every point I need to read from the 3D texture in the visualization is in the right order to correspond to the point I save into the RWTexture3D during voxelization.
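Two things worth checking, sketched below in Python with assumed values (not the actual texture data). First, the index decomposition itself round-trips correctly, so that part is sound. Second, the last argument of `SampleLevel` is the mip level, so `SampleLevel(SVOFilter, pos / 256.0f, 1)` reads the half-resolution mip 1 rather than the mip 0 data that was written, and `pos / 256.0f` also misses the half-texel center offset:

```python
def linear_to_3d(index, dim):
    """Mirrors the HLSL decomposition: z = i / dim^2, y = (i % dim^2) / dim,
    x = i % dim (x varies fastest)."""
    slice_num = dim * dim
    z = index // slice_num
    rem = index % slice_num
    return rem % dim, rem // dim, z

dim = 256
# round trip: pack (x, y, z) into a linear index and decompose it again
for xyz in ((0, 0, 0), (1, 2, 3), (dim - 1, dim - 1, dim - 1)):
    x, y, z = xyz
    i = x + y * dim + z * dim * dim
    assert linear_to_3d(i, dim) == xyz

# normalized coordinate of voxel v's center, for SampleLevel at mip 0
def center_uv(v, dim):
    return (v + 0.5) / dim

# with the center offset, a point filter lands exactly on voxel 100
assert int(center_uv(100, dim) * dim) == 100
```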
  3. I have tried the Load function:

```hlsl
// Texture3D.Load takes an int4: xyz texel coordinates plus the mip level
float4 output = gVoxelList.Load(int4(1, 0, 0, 0));
```

It also works well. So both Load and operator[] work; why does only Sample fail?
  4. I want to save the color of every point of one object into a RWTexture3D (UAV resource) and transfer it to another object in its shader. I made a test in the two shaders, both in the fragment shader. In the first shader:

```hlsl
RWTexture3D<float4> gUAVColor;
gUAVColor[uint3(0, 0, 0)] = float4(1.0f, 0.0f, 0.0f, 1.0f);
```

In the second shader:

```hlsl
Texture3D<float4> gVoxelList;
float4 output = gVoxelList.SampleLevel(Filter, uint3(0, 0, 0), 0);
```

The result is correct: I get red. But when I change the code, the texture can't be sampled correctly. In the first shader:

```hlsl
RWTexture3D<float4> gUAVColor;
gUAVColor[uint3(1, 0, 0)] = float4(1.0f, 0.0f, 0.0f, 1.0f);
```

In the second shader:

```hlsl
Texture3D<float4> gVoxelList;
float4 output = gVoxelList.SampleLevel(Filter, uint3(1, 0, 0), 0);
```

I only changed the position that stores the red color from uint3(0,0,0) to uint3(1,0,0), but the result changes to black, which is incorrect. If I use gVoxelList[uint3(1,0,0)].xyz, it works. Does anyone have an idea where the problem might be? Besides, what is the difference between gVoxelList[pos] and gVoxelList.SampleLevel(Filter, uint3(1,0,0), 0)? Both return a float4 color, right?
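The short answer to the last question, sketched in 1-D Python (the 4-texel `texture` list is a made-up example): `gVoxelList[pos]` and `Load` take integer texel indices, while `SampleLevel` takes normalized coordinates in [0, 1]. Passing the integer 1 as a coordinate means u = 1.0, which clamps to the last texel; that is why (0,0,0) appeared to work (u = 0 still hits texel 0) but (1,0,0) reads black:

```python
def point_sample(texels, u):
    """Point (nearest) sampling: a normalized coordinate in [0, 1]
    selects a texel, with clamp addressing at the edge."""
    n = len(texels)
    i = min(int(u * n), n - 1)
    return texels[i]

# a 4-texel 1-D "texture" with red stored at texel index 1
texture = ["black", "red", "black", "black"]

# operator[] / Load use integer texel indices:
assert texture[1] == "red"

# SampleLevel uses normalized coordinates; texel 1's center is (1 + 0.5) / 4
assert point_sample(texture, (1 + 0.5) / 4) == "red"

# passing the raw integer 1 as a coordinate means u = 1.0, which clamps
# to the LAST texel, not texel 1:
assert point_sample(texture, 1.0) == "black"
```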
  5. Hello guys, I am using DirectX 11 in VS2015, and I want to save data into a Texture3D. Everything worked well before I added the RWTexture3D part. I just added the following code to my .fx file:

```hlsl
RWTexture3D<float4> gUAV : register(u1);

PS_MAIN()
{
    //---
    //---
    gUAV[uint3(x, y, z)] = float4(1.0f, 1.0f, 1.0f, 1.0f);
}
```

Then the graphics analyzer can't work! I don't know whether it is a problem with the IDE or with my code. In my C++ code, I bind the Texture3D as an unordered access view, but not as a render target. I have searched the internet; some people had the same problem, but those threads are one or two years old. Can you debug a UAV or RWTexture in VS?
  6. Sorry, this topic was posted in the wrong board; I have reposted it to Math and Physics. Please go here if you are interested in this topic: http://www.gamedev.net/topic/683086-dot-product-problem-in-calculating-reflective-vector/
  7. [attachment=33625:1.PNG] The vectors l and n are given; n is a unit vector, and the output is vector r. My question is about the orientation of dot(n,l)*n. Since dot(n,l) is a signed length (negative in this case), why does "the vector below n" still need to be multiplied by n, and why do the vector dot(n,l)*n and n have opposite orientations? (Assuming dot(n,l) is a scalar, after multiplying by n it should have the same direction as n.) [attachment=33626:??.PNG] In my opinion it should be [attachment=33627:??1q.PNG]
  8. [attachment=33620:??1.PNG] The vectors l and n are given; n is a unit vector, and the output is vector r. My question is about the orientation of r. Since dot(n,l) is a signed length (negative in this case), why is the vector below n still multiplied by n, and why do the vector dot(n,l)*n and n have opposite orientations? [attachment=33619:??.PNG] In my opinion it should be [attachment=33621:??1q.PNG]
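The sign question in both posts can be checked numerically (a 2-D Python sketch with assumed vectors, not the figures' exact values): when l points into the surface, dot(n, l) is negative, so the scalar-times-vector dot(n, l)*n really does point opposite to n. Subtracting 2*dot(n, l)*n then pushes r back out along n, which is exactly what the reflection formula relies on:

```python
def reflect(l, n):
    """r = l - 2 * dot(n, l) * n, for a unit normal n (2-D here)."""
    d = l[0] * n[0] + l[1] * n[1]
    return (l[0] - 2 * d * n[0], l[1] - 2 * d * n[1])

# incident direction pointing down-right onto a surface with upward normal
l = (1.0, -1.0)
n = (0.0, 1.0)

d = l[0] * n[0] + l[1] * n[1]  # dot(n, l) = -1.0
assert d < 0                   # negative: l points against the normal

# dot(n, l) * n = (0, -1): the negative scalar flips the direction,
# so this vector points OPPOSITE to n, as the figures show
proj = (d * n[0], d * n[1])
assert proj == (0.0, -1.0)

# subtracting that projection twice flips l's component along n:
assert reflect(l, n) == (1.0, 1.0)
```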
  9. Thank you all; I made a mistake here:

```hlsl
diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
float diffuseFactor = dot(lightVec, normal);
diffuse = saturate(diffuseFactor * mat.Diffuse * L.Diffuse);
```

The correct version should be:

```hlsl
diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
float diffuseFactor = saturate(dot(lightVec, normal));
diffuse = diffuseFactor * mat.Diffuse * L.Diffuse;
```
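To see what the reordering changes (a Python sketch with made-up material and light values; tuples stand in for float4): saturating the whole product clamps every color channel to [0, 1], while saturating only the dot product clamps just the Lambert factor and leaves the color product alone:

```python
def saturate(x):
    # HLSL saturate: clamp a scalar to [0, 1]
    return max(0.0, min(1.0, x))

mat_diffuse = (2.0, 0.5, 0.5, 1.0)    # hypothetical HDR-ish material color
light_diffuse = (1.0, 1.0, 1.0, 1.0)  # hypothetical light color
diffuse_factor = 0.9                  # dot(lightVec, normal)

# original: saturate applied to the whole product clamps each channel
clamped = tuple(saturate(diffuse_factor * m + 0.0) * l if False else
                saturate(diffuse_factor * m * l)
                for m, l in zip(mat_diffuse, light_diffuse))

# fixed: saturate only the dot product, then scale the colors
unclamped = tuple(saturate(diffuse_factor) * m * l
                  for m, l in zip(mat_diffuse, light_diffuse))

assert clamped == (1.0, 0.45, 0.45, 0.9)    # red channel clipped to 1.0
assert unclamped == (1.8, 0.45, 0.45, 0.9)  # red channel preserved
```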
  10. I am following tutorials to implement a directional light in DirectX. The diffuse part is as below:

```hlsl
diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
float diffuseFactor = dot(lightVec, normal);
diffuse = saturate(diffuseFactor * mat.Diffuse * L.Diffuse);
```

What confuses me is the result of diffuse: mat.Diffuse * L.Diffuse should be a float4, so how can it be saved into "diffuse", a float-type variable? The same question for the ambient part:

```hlsl
ambient = float4(0.0f, 0.0f, 0.0f, 0.0f);
ambient = mat.Ambient * L.Ambient;
```

Both mat.Ambient and L.Ambient are float4, so after the * operation the result should be a float. I looked for materials on the internet and found there are only two multiplication operations for vectors, the dot product and the cross product, so why does the code above still use *, rather than the dot function?
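For the * question: HLSL's * on two float4s is component-wise, so it returns another float4, not a scalar, and `diffuse` (which is initialized with a float4) keeps all four channels. `dot()` is the operation that collapses to a scalar, which is not what per-channel color modulation wants. A Python sketch with tuples standing in for float4 and made-up values:

```python
def cmul(a, b):
    """HLSL's * on two float4s: component-wise, result is another float4."""
    return tuple(x * y for x, y in zip(a, b))

def dot4(a, b):
    """HLSL's dot(): sum of products, result is a single scalar."""
    return sum(x * y for x, y in zip(a, b))

mat_ambient = (0.2, 0.3, 0.4, 1.0)
light_ambient = (1.0, 0.5, 0.5, 1.0)

# '*' keeps the result a 4-component color: each channel modulated separately
assert cmul(mat_ambient, light_ambient) == (0.2, 0.15, 0.2, 1.0)

# dot() would collapse the color to one number, losing the channels
assert isinstance(dot4(mat_ambient, light_ambient), float)
```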