XBTC

Member

  • Content Count: 331
  • Joined
  • Last visited

Community Reputation: 122 Neutral

About XBTC

  • Rank: Member

  1. Sorry for resurrecting this age-old thread. Did you find an easy solution for calculating the volume per marching cube?
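     A hypothetical sketch, not from the thread: one cheap approximation is to subsample each marching-cubes cell and count how many subsamples fall below the iso-value. SampleDensity() and all parameter names here are assumptions for illustration:

         // Approximate the volume below the iso-surface inside one cell by
         // brute-force subsampling (n subsamples per axis). SampleDensity()
         // is an assumed helper that evaluates the scalar field at a point.
         float CellVolumeApprox(float3 cellMin, float cellSize, float isoValue, int n)
         {
             int inside = 0;
             for (int x = 0; x < n; ++x)
                 for (int y = 0; y < n; ++y)
                     for (int z = 0; z < n; ++z)
                     {
                         float3 p = cellMin + cellSize * (float3(x, y, z) + 0.5) / n;
                         inside += (SampleDensity(p) < isoValue) ? 1 : 0;
                     }
             // Fraction of subsamples inside, times the cell volume.
             return (inside / float(n * n * n)) * cellSize * cellSize * cellSize;
         }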
  2. @Hodgman: Thanks a lot for the detailed explanation. It gave me a new perspective on shader programming... @Adam: Unfortunately I am not on an AMD platform, but let's see what Nvidia has to offer in that department...
  3. Hey there! Is there any way to measure the execution time of individual lines or sections of HLSL code? So far I have tried Intel GPA and Nvidia PerfHUD, but I was only able to get timings for whole draw calls that way. I have to optimize a long and complicated shader, and it is not possible to comment out sections of it to test for possible performance gains, as the different sections' runtimes are heavily dependent on each other... Thanks in advance, XBTC
  4. Awesome, detailed post, man! Thank you very, very much! Unfortunately I need the intermediate mip maps, so I cannot make use of this approach right now, but I might soon run into a situation where I don't need the intermediate levels. Your post was a great read anyway, as I had never used compute shaders before. Now I can see their potential uses...
  5. Yeah! I need to build a mip map where a texel of the higher level contains the maximum of the 8 corresponding texels in the lower level. How would I build these mip maps in a high-performance way? Bind the mip levels of the 3D texture as render targets and fill them with the max values from the lower levels via vertex/geometry/pixel shaders? But how can I bind the different mip levels of a 3D texture as render targets?
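     A minimal sketch of the max reduction, assuming each destination mip level is bound through its own render target view (in D3D10/11 an RTV can address a single mip of a Texture3D via Texture3D.MipSlice) and the depth slices are selected with SV_RenderTargetArrayIndex written by a geometry shader; g_srcMip is an assumed name for the source mip, bound as an SRV restricted to that one level:

         // SRV restricted to the source mip level, so Load() uses mip index 0.
         Texture3D<float> g_srcMip : register(t0);

         float PSMaxDownsample(float4 pos : SV_Position,
                               uint slice : SV_RenderTargetArrayIndex) : SV_Target
         {
             // Each destination texel covers a 2x2x2 block in the source mip.
             int3 dst = int3(int2(pos.xy), slice);
             int3 src = dst * 2;

             float m = g_srcMip.Load(int4(src, 0));
             m = max(m, g_srcMip.Load(int4(src + int3(1, 0, 0), 0)));
             m = max(m, g_srcMip.Load(int4(src + int3(0, 1, 0), 0)));
             m = max(m, g_srcMip.Load(int4(src + int3(1, 1, 0), 0)));
             m = max(m, g_srcMip.Load(int4(src + int3(0, 0, 1), 0)));
             m = max(m, g_srcMip.Load(int4(src + int3(1, 0, 1), 0)));
             m = max(m, g_srcMip.Load(int4(src + int3(0, 1, 1), 0)));
             m = max(m, g_srcMip.Load(int4(src + int3(1, 1, 1), 0)));
             return m;
         }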
  6. Thanks for the link! It's one of the best descriptions of the frustum-corner method I have seen. I would still like to know WHY my approach does not work, though...
  7. Thanks for your reply. The reason for it is that I am storing the linear view-space z-value of the vertices in the shadow map...
  8. Hi guys, I am having a horrible time reconstructing the world position of pixels in a shadow map. I generate the shadow map by rendering from the light's point of view into a texture, storing the view-space z-value of the vertices as the depth values. Then it should be possible to reconstruct the world position in the pixel shader like this:

     // From texcoords to NDCs
     // tpos is the position of the pixel in the shadow map
     posViewSpace.x = 2 * tpos.x - 1.0;
     posViewSpace.y = -2 * tpos.y + 1.0;
     // Undo projection
     posViewSpace.x *= tan(FOVinRadians / 2) * AspectRatio;
     posViewSpace.y *= vol_pos.y * tan(FOVinRadians / 2);
     // Read depth value
     posViewSpace.z = g_txShadowMap.SampleLevel(samShadow, tpos.xy, 0);
     // Get world position
     posWorldSpace = mul(posViewSpace, InverseWorldViewMatrix);

     Unfortunately the resulting posWorldSpace is wrong. Please, please let me know where I am going wrong... P.S.: I know there are many approaches that can be found via the search on these boards, and I have looked at them, but I would like to get my own solution to work, as it fits very well with other things I am using the shadow map for...
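     A hedged guess, based only on the code above: with linear view-space z stored in the map, the unprojected x/y still have to be scaled by that z (x_view = ndc.x * tan(fov/2) * aspect * z), the y line probably should not reference vol_pos, and the final transform should be the inverse of the light's view matrix (view space to world space), not a world-view inverse. A sketch under those assumptions, where InverseViewMatrix is an assumed name:

         // Read the linear view-space depth first...
         float z = g_txShadowMap.SampleLevel(samShadow, tpos.xy, 0);

         // ...then unproject, scaling the NDC x/y by that depth.
         float2 ndc = float2(2.0 * tpos.x - 1.0, -2.0 * tpos.y + 1.0);
         float3 posViewSpace;
         posViewSpace.x = ndc.x * tan(FOVinRadians / 2) * AspectRatio * z;
         posViewSpace.y = ndc.y * tan(FOVinRadians / 2) * z;
         posViewSpace.z = z;

         // Light view space -> world space via the inverse *view* matrix.
         float3 posWorldSpace = mul(float4(posViewSpace, 1.0), InverseViewMatrix).xyz;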
  9. Thank you very much! I will try GenerateMips then... if I need max or min mip maps, I guess I will have to generate them myself?
  10. Hi guys, I have a 3D volume texture and I want to generate mip maps for it. I looked through the DX documentation and the web without finding anything useful. My questions: 1. Is there a way to let DX generate the mip maps automatically? It seems OpenGL is able to do this... I guess not... 2. What would be the best way to generate them myself? My idea so far: bind the mip levels of the 3D texture as render targets and fill them with averaged values from the higher-resolution levels via vertex/geometry/pixel shaders. But how can I bind the different mip levels of a 3D texture as render targets? Thanks in advance for any input/pointers, XBTC
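     For the plain averaging case, a single trilinear fetch can do the whole 2x2x2 reduction: sampling the finer mip exactly at the corner shared by the eight source texels weights each of them by 1/8. A minimal sketch, assuming the same render-target setup as in the post above, with assumed names g_srcMip (SRV restricted to the source mip), samLinear and g_dstDim:

         Texture3D<float4> g_srcMip  : register(t0);
         SamplerState      samLinear : register(s0);
         float3            g_dstDim; // dimensions of the destination mip level

         float4 PSAvgDownsample(float4 pos : SV_Position,
                                uint slice : SV_RenderTargetArrayIndex) : SV_Target
         {
             // pos.xy is already at the destination texel center (i + 0.5);
             // (i + 0.5) / dstDim lands on the corner shared by the matching
             // 2x2x2 source block, so one trilinear fetch averages all eight.
             float3 uvw = float3(pos.xy, slice + 0.5) / g_dstDim;
             return g_srcMip.SampleLevel(samLinear, uvw, 0);
         }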
  11. Thanks guys! Guess I'll have to do some reading...
  12. Hi there! In my GPU-based raytracer I need to generate rays with random directions from a pixel for gathering the diffuse lighting from the other pixels in the scene. Rays close to the direction of the pixel's normal are more important, as they contribute more to the pixel's lighting (as opposed to directions perpendicular to the normal, which should contribute almost nothing... simple Blinn-Phong stuff ;)). Can anyone point me to a paper/link/etc. for an easy algorithm to spawn a number of these rays randomly without much processing cost? Thanks in advance, XBTC! P.S.: I need to generate a random subset of the rays shown in the following picture (with a high probability for rays whose direction is similar to the point's normal):
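     A standard answer (not from this thread) is cosine-weighted hemisphere sampling: directions are drawn with probability proportional to dot(N, dir), which matches the diffuse weighting described above, so hardly any rays are wasted near the horizon. A sketch; u1 and u2 are uniform random numbers in [0, 1) that the caller has to supply (e.g. from a precomputed random texture):

         float3 CosineSampleHemisphere(float3 n, float u1, float u2)
         {
             // Sample a unit disk, then project up onto the hemisphere
             // (Malley's method): density is proportional to cos(theta).
             float r   = sqrt(u1);
             float phi = 6.2831853 * u2;
             float3 local = float3(r * cos(phi), r * sin(phi), sqrt(1.0 - u1));

             // Build an orthonormal basis around the normal.
             float3 up = abs(n.z) < 0.999 ? float3(0, 0, 1) : float3(1, 0, 0);
             float3 t  = normalize(cross(up, n));
             float3 b  = cross(n, t);

             return local.x * t + local.y * b + local.z * n;
         }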
  13. [quote]The solution is simple - instead of doing a point-in-box test to see if you should render back faces, you do a sphere-in-box test, where the sphere's radius is the distance from the near plane's lower right corner to the camera (0.1 wouldn't be the correct radius, because the corner of your near plane is farther than 0.1 away). If it intersects, then render back faces and still calculate two intersection points via ray/cube.[/quote] Good idea! Thanks! What I am doing now is simple and works great: I ALWAYS render the back faces of the cube - even if the eye point is outside the cube. In case the eye point is in the cube, I just start the rays at the eye point. That way it really doesn't matter whether the eye point is exactly in the cube or not. If 100% accuracy is needed, your idea with the sphere should work great, though... [quote]As you already figured out, the problem isn't really that big with a small near-plane value. Do you have any screenshots? How did you implement calculating the entering and exiting point in the end?[/quote] I cannot post screenshots at the moment, but I will do so later. I render the cube's front faces with x, y, z as color values (as described in my first post), storing the result in a texture. In the second pass, where the raytracing is done, I render the back faces (which gives me the exit points) and look up the entry points in the texture from the first pass. If you are interested in more detail, let me know...
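     For reference, a rough sketch of how that second pass could look, assuming the front-face pass cleared alpha to 0 and wrote the object-space entry position to g_txFront; g_ScreenSize, EyePosObj and the march loop are likewise assumptions:

         Texture2D<float4> g_txFront : register(t0);
         SamplerState      samPoint  : register(s0);
         float2            g_ScreenSize; // viewport size in pixels
         float3            EyePosObj;    // eye position in the cube's object space

         float4 PSRaycast(float4 pos : SV_Position,
                          float3 backPosObj : TEXCOORD0) : SV_Target
         {
             // Back-face rasterization gives the exit point directly; the
             // entry point comes from the front-face pass at the same pixel.
             float2 uv    = pos.xy / g_ScreenSize;
             float4 front = g_txFront.SampleLevel(samPoint, uv, 0);

             // Where no front face was written, the eye is inside the cube,
             // so the ray starts at the eye point instead.
             float3 entry = (front.a > 0.0) ? front.xyz : EyePosObj;
             float3 exit  = backPosObj;

             float3 dir = exit - entry;
             float  len = length(dir);
             dir /= max(len, 1e-6);

             // ... march from entry towards exit here, accumulating the volume ...
             return float4(entry, 1.0); // placeholder output for this sketch
         }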
  14. So far the approach seems to work perfectly, but I have the near clipping plane at 0.1, so any problems might be too small to notice... What kind of trouble would you expect? A visible "jump" in the image when entering the volume, due to the difference between the near clipping plane and the eye point?
  15. Cool! Thanks again! That means I could use the following approach: if the eye point is outside of the volume, render the front faces of the cube and use rays between the two intersection points with the cube; if the eye point is in the volume, render the back faces of the cube and spawn rays from the eye point to the (exit) intersection with the cube. I will post again when I know how it worked out...