
  1. Hello everyone. Recently I have been learning GPU voxelization, and I am reading an implementation by others (GPU Voxelization). It uses an orthographic camera to voxelize the scene. While reading the implementation, the space transforms confuse me a lot. The code is:

```hlsl
RWTexture3D<uint> RG0;

struct v2g
{
    float4 pos : SV_POSITION;
    half4 uv : TEXCOORD0;
    float3 normal : TEXCOORD1;
    float angle : TEXCOORD2;
};

struct g2f
{
    float4 pos : SV_POSITION;
    half4 uv : TEXCOORD0;
    float3 normal : TEXCOORD1;
    float angle : TEXCOORD2;
};

v2g vert(appdata_full v)
{
    v2g o;
    float4 vertex = v.vertex;
    o.normal = UnityObjectToWorldNormal(v.normal);
    float3 absNormal = abs(o.normal);
    o.pos = vertex; // still in object (local) space
    o.uv = float4(TRANSFORM_TEX(v.texcoord.xy, _MainTex), 1.0, 1.0);
    return o;
}

[maxvertexcount(3)]
void geom(triangle v2g input[3], inout TriangleStream<g2f> triStream)
{
    v2g p[3];
    for (int i = 0; i < 3; i++)
    {
        p[i] = input[i];
        p[i].pos = mul(unity_ObjectToWorld, p[i].pos); // object -> world space
    }

    // Face normal via the cross product of two triangle edges
    float3 realNormal = float3(0.0, 0.0, 0.0);
    float3 V = p[1].pos.xyz - p[0].pos.xyz;
    float3 W = p[2].pos.xyz - p[0].pos.xyz;
    realNormal.x = (V.y * W.z) - (V.z * W.y);
    realNormal.y = (V.z * W.x) - (V.x * W.z);
    realNormal.z = (V.x * W.y) - (V.y * W.x);

    float3 absNormal = abs(realNormal);

    // Decide which axis to project along (we want the projection
    // with the largest area)
    int angle = 0;
    if (absNormal.z > absNormal.y && absNormal.z > absNormal.x)
        angle = 0;
    else if (absNormal.x > absNormal.y && absNormal.x > absNormal.z)
        angle = 1;
    else if (absNormal.y > absNormal.x && absNormal.y > absNormal.z)
        angle = 2;
    else
        angle = 0;

    for (int i = 0; i < 3; i++)
    {
        // SEGIVoxelViewFront, SEGIVoxelViewLeft and SEGIVoxelViewTop are
        // view matrices sent by the C# script. Because we may project from
        // the front, the left or the top, we need these transform matrices.
        if (angle == 0)
            p[i].pos = mul(SEGIVoxelViewFront, p[i].pos);
        else if (angle == 1)
            p[i].pos = mul(SEGIVoxelViewLeft, p[i].pos);
        else
            p[i].pos = mul(SEGIVoxelViewTop, p[i].pos);

        p[i].pos = mul(UNITY_MATRIX_P, p[i].pos);

        #if defined(UNITY_REVERSED_Z)
            p[i].pos.z = 1.0 - p[i].pos.z;
        #else
            p[i].pos.z *= -1.0;
        #endif

        p[i].angle = (float)angle;
    }

    triStream.Append(p[0]);
    triStream.Append(p[1]);
    triStream.Append(p[2]);
}

float4 frag(g2f input) : SV_TARGET
{
    // The voxel coordinate. VoxelResolution is an integer sent by the C#
    // script, indicating the resolution of the voxel grid.
    int3 coord = int3((int)(input.pos.x), (int)(input.pos.y), (int)(input.pos.z * VoxelResolution));
    // The author then writes the information into RWTexture3D<uint> RG0,
    // using coord as the index.
}
```

The output of the vertex shader is still in local space, right? I don't see any space transform in the code above. In the geometry shader, the vertices are first transformed to world space: `p[i].pos = mul(unity_ObjectToWorld, p[i].pos);`. Then they are multiplied by UNITY_MATRIX_P. Now the x, y, z of p.pos should be in the range (0, 1), because this is an orthographic camera (w is 1). Finally they are passed to the fragment shader. However, I can't understand this line:

```hlsl
int3 coord = int3((int)(input.pos.x), (int)(input.pos.y), (int)(input.pos.z * VoxelResolution));
```

It seems the x and y values are already mapped to (0, VoxelResolution), while the z value is in (0, 1)? I feel there are some internal transforms happening between the geometry shader and the fragment shader. What are these internal transforms? And how does the camera know my target resolution? There is no code in the script that controls the resolution of the camera's screen.
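The "internal transform" in question is the fixed-function viewport transform that the rasterizer applies between the geometry and fragment stages: clip-space positions are divided by w, and the resulting NDC x and y in [-1, 1] are remapped to window (pixel) coordinates in [0, width] × [0, height], while NDC z stays in [0, 1] under D3D conventions. That is why SV_POSITION arrives in the fragment shader already in pixel units. A minimal Python sketch of that mapping (ignoring the y-flip and half-pixel conventions of real rasterizers; the 256-pixel viewport matches a hypothetical VoxelResolution of 256):

```python
def viewport_transform(clip_pos, width, height):
    """Map a clip-space position to window coordinates, as the
    rasterizer does between the geometry and fragment stages."""
    x, y, z, w = clip_pos
    # Perspective divide (w == 1 for an orthographic projection).
    ndc = (x / w, y / w, z / w)
    # NDC x, y in [-1, 1] -> pixel coordinates in [0, width] x [0, height];
    # NDC z is already in [0, 1] under D3D conventions.
    win_x = (ndc[0] * 0.5 + 0.5) * width
    win_y = (ndc[1] * 0.5 + 0.5) * height
    win_z = ndc[2]
    return (win_x, win_y, win_z)

# A vertex at the right edge of the ortho frustum, halfway into the depth range:
print(viewport_transform((1.0, 0.0, 0.5, 1.0), 256, 256))  # -> (256.0, 128.0, 0.5)
```

This is why the fragment shader can use `input.pos.x` and `input.pos.y` directly as voxel indices but must scale `input.pos.z` by VoxelResolution itself: the hardware only remaps x and y to pixels, not z.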
     The camera's setup is:

```csharp
voxelCameraGO = new GameObject("SEGI_VOXEL_CAMERA");
voxelCameraGO.hideFlags = HideFlags.HideAndDontSave;

voxelCamera = voxelCameraGO.AddComponent<Camera>();
voxelCamera.enabled = false;
voxelCamera.orthographic = true;
voxelCamera.orthographicSize = voxelSpaceSize * 0.5f;
voxelCamera.nearClipPlane = 0.0f;
voxelCamera.farClipPlane = voxelSpaceSize;
voxelCamera.depth = -2;
voxelCamera.renderingPath = RenderingPath.Forward;
voxelCamera.clearFlags = CameraClearFlags.Color;
voxelCamera.backgroundColor = Color.black;
voxelCamera.useOcclusionCulling = false;
```

For example, my VoxelResolution is 256. How does the fragment shader know my screen is 256×256?
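On the resolution question: the camera itself does not carry a resolution. The viewport dimensions come from whatever render target is bound when the camera renders; a script typically renders the voxel camera into a RenderTexture of size VoxelResolution × VoxelResolution (via `Camera.targetTexture` or `Camera.RenderWithShader` after binding the target), so the rasterizer produces fragments at pixel coordinates 0..255. Putting the pieces together, a sketch of how one NDC position becomes a voxel index at resolution 256 (`ndc_to_voxel` is my name for illustration, not the author's):

```python
VOXEL_RESOLUTION = 256  # matches the render target size bound by the script

def ndc_to_voxel(ndc_x, ndc_y, ndc_z, resolution=VOXEL_RESOLUTION):
    """Map an NDC position to an integer voxel coordinate.
    x and y pass through the hardware viewport transform (so they arrive
    in the fragment shader already in pixels); z stays in [0, 1] and is
    scaled by the resolution inside the fragment shader."""
    px = (ndc_x * 0.5 + 0.5) * resolution   # done by the rasterizer
    py = (ndc_y * 0.5 + 0.5) * resolution   # done by the rasterizer
    return (int(px), int(py), int(ndc_z * resolution))  # the int3 coord in frag

# Centre of the voxel volume:
print(ndc_to_voxel(0.0, 0.0, 0.5))  # -> (128, 128, 128)
```

So the shader "knows" the screen is 256×256 only because the bound render target is 256×256; change the target and the pixel coordinates change with it.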
  2. TBWorkss

     Confused on GPU voxelization

     Never mind, I got it.
  3. Hello everyone, I am planning to implement GPU voxelization in Unity3D, and I feel I may be misunderstanding something. I read the article https://developer.nvidia.com/content/basics-gpu-voxelization , which says: "2. Then you rasterize the transformed primitive using a viewport of the same dimensions as one of the 2D projections of the voxel grid. Because the orthogonal viewport frustum can cover the voxel grid exactly, and a rasterized pixel position in the render target and its depth value correspond X, Y and Z components of the voxel grid." I am confused by this description. An orthographic camera must have a width and a height. What if my orthographic viewport frustum cannot cover my whole scene? What happens if a vertex outside the frustum is multiplied by a projection matrix (UNITY_MATRIX_P)? What if an object is behind my orthographic camera? How can such objects be voxelized? I really appreciate any help.
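For the record, the answer to the last question: geometry outside the orthographic frustum is clipped by the rasterizer and never reaches the fragment shader, so it is simply not voxelized. The voxel camera's box frustum must therefore be positioned and sized to cover exactly the region you want to voxelize. A sketch of the idea, assuming a textbook symmetric orthographic projection of half-size s with depth range [0, far] (Unity's reversed-Z and y-flip conventions are ignored here): a vertex lands outside NDC and is clipped whenever it lies outside the box.

```python
def ortho_ndc(p, half_size, far):
    """Project a view-space point with a simple symmetric orthographic
    projection: x, y scaled to [-1, 1], z in (0..far) scaled to [0, 1]."""
    x, y, z = p
    return (x / half_size, y / half_size, z / far)

def is_clipped(ndc):
    """A primitive entirely outside these NDC ranges never reaches the
    fragment shader, so it contributes no voxels."""
    x, y, z = ndc
    return not (-1.0 <= x <= 1.0 and -1.0 <= y <= 1.0 and 0.0 <= z <= 1.0)

half_size, far = 25.0, 50.0  # e.g. voxelSpaceSize = 50 in the script's terms
print(is_clipped(ortho_ndc((10.0, 10.0, 25.0), half_size, far)))  # inside  -> False
print(is_clipped(ortho_ndc((60.0, 0.0, 25.0), half_size, far)))   # outside -> True
print(is_clipped(ortho_ndc((0.0, 0.0, -5.0), half_size, far)))    # behind  -> True
```

An object behind the camera has negative view-space depth, maps outside the [0, 1] depth range, and is clipped exactly like an object off to the side; this is why SEGI recenters its voxel camera on the volume each frame.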