
Community Reputation

122 Neutral

About edwinnie

  • Rank
    Advanced Member
  1. Update: with only four slices, this is what I get for inter-slice shadow intensity. Light leaks are not fixed yet and there are probably a few bugs.
  2. Let's ignore the selfish and arrogant people on board. Forums are supposed to be a shared resource, so I am sharing my thoughts. Taking a quote from the article:

     Quote: Décoret introduced the NBuffers to allow prefiltering with continuously placed kernels. We use them to compute the mean value of neighboring pixels. Each level l holds, for each texel, the normalized response of a box filter with a kernel size of 2^l x 2^l.

     This "mean" value that needs to be computed is confusing enough. Texture lookup offsets for NBuffers are always north, east, and northeast of the current texel, and the offsets are powers of two (e.g. 1/256, 1/128) depending on the NBuffer level being processed. So: would the "mean" simply be the sum of four texture lookups into the previous NBuffer level, divided by 4? And what about the size of the filter kernel: how does the filter size affect the normalization of the filter response?

     Apart from these questions, notice the way NBuffers do their texture lookups: always in the north, east, and northeast directions. So far I am trying to compute an intermediate result similar to the one shown in the video. The image below shows a reference for the "intermediate results" I am referring to. My current intermediate results look "skewed", so I will ask another question: how do I get the intermediate results shown above? regards Edwin
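     To pin down what I think the "mean" is, here is a CPU sketch in plain C++. The NBuffer struct, the clamp-to-edge addressing, and buildLevel are my own assumptions for illustration, not code from the paper:

     ```cpp
     #include <algorithm>
     #include <cassert>
     #include <cmath>
     #include <vector>

     // One NBuffer level stored as a flat float array, 'size' texels per side.
     struct NBuffer {
         int size;
         std::vector<float> data;
         // Clamp-to-edge addressing (an assumption; the paper's wrap mode may differ).
         float texel(int x, int y) const {
             x = std::min(std::max(x, 0), size - 1);
             y = std::min(std::max(y, 0), size - 1);
             return data[y * size + x];
         }
     };

     // Build level l+1 from level l: the "mean" is just the average of four
     // lookups into the previous level, offset by step = 2^l texels (current,
     // east, north, northeast), so each texel of level l+1 holds the normalized
     // box-filter response over a 2^(l+1) x 2^(l+1) neighborhood.
     NBuffer buildLevel(const NBuffer& prev, int level) {
         int step = 1 << level;  // 2^l texel offset at this level
         NBuffer next{prev.size, std::vector<float>(prev.size * prev.size)};
         for (int y = 0; y < prev.size; ++y)
             for (int x = 0; x < prev.size; ++x)
                 next.data[y * prev.size + x] = 0.25f *
                     (prev.texel(x, y) +               // current
                      prev.texel(x + step, y) +        // east
                      prev.texel(x, y + step) +        // north
                      prev.texel(x + step, y + step)); // northeast
         return next;
     }
     ```

     Because every level already holds a normalized mean, averaging four of them keeps the response normalized; the kernel size only determines the offset, not an extra division.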
  3. What are you talking about? Skimming through an article is NOT equivalent to actually understanding an algorithm.
  4. No one knows? Or can you not be bothered with this algorithm? Five rating stars to anyone who knows!! [Edited by - edwinnie on October 18, 2006 11:44:04 PM]
  5. Hi guys! Does anyone know how to do the filtering and linear interpolation described in the article Plausible Image Based Soft Shadows? For instance, how many pixel neighbours do we need to use? And how do we linearly interpolate between two NBuffer levels and between two slices? thx! Edwin [Edited by - edwinnie on October 18, 2006 8:59:55 PM]
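     My own guess at the level-selection part, as a C++ sketch (chooseLevels and the log2 scheme are my assumptions, not taken from the article): for a continuous kernel size k, sample the two levels whose power-of-two kernels bracket k and lerp on the fractional part of log2(k).

     ```cpp
     #include <cassert>
     #include <cmath>

     // Which two NBuffer levels bracket a continuous kernel size, and the
     // blend weight between them. Level l has kernel size 2^l, so the
     // fractional level is log2(kernelSize).
     struct LevelBlend { int lo, hi; float t; };

     LevelBlend chooseLevels(float kernelSize) {
         float l = std::log2(kernelSize);          // fractional level
         int lo = static_cast<int>(std::floor(l));
         return {lo, lo + 1, l - static_cast<float>(lo)};
     }
     // result = sample(level lo) * (1 - t) + sample(level hi) * t; applying
     // the same 1D lerp once more between the two bracketing slices would give
     // the bilinear-in-(level, slice) behaviour the article seems to imply.
     ```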
  6. Hi guys! OK, I have hit this annoying problem of bridging the local and world space camera coordinate systems with a yaw-pitch rotation matrix. I am using the standard code provided by the DXSDK. Everything works fine when the camera is looking in the positive z direction. Now what I am trying to do is load some variables from a script that also specifies the actual world lookat direction, among other things. How do I make use of the loaded values and modify the existing codebase so that it takes into account both the unmodified local coordinate system and the loaded values?

     // Make a rotation matrix based on the camera's yaw & pitch
     D3DXMATRIX mCameraRot;
     D3DXMatrixRotationYawPitchRoll(&mCameraRot, MATH_DEGTORAD(m_Yaw), MATH_DEGTORAD(m_Pitch), 0);

     // Transform the local basis vectors by the camera's rotation matrix
     D3DXVec3TransformCoord(&m_WorldUp, &m_LocalUp, &mCameraRot);
     D3DXVec3TransformCoord(&m_WorldLookAt, &m_LocalLookAt, &mCameraRot);
     D3DXVec3Normalize(&m_WorldUp, &m_WorldUp);
     D3DXVec3Normalize(&m_WorldLookAt, &m_WorldLookAt);
     D3DXVec3Cross(&m_WorldRight, &m_WorldUp, &m_WorldLookAt);

     thx! Edwin
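     One approach I can think of, as a sketch in plain C++ (yawFromLookAt and pitchFromLookAt are my own helpers, not D3DX calls): recover the base yaw and pitch that the scripted world lookat implies, then add them to the camera's own m_Yaw/m_Pitch before building the rotation matrix exactly as before.

     ```cpp
     #include <cassert>
     #include <cmath>

     // Left-handed convention assumed: local lookat is +Z, yaw rotates about
     // +Y, pitch about +X, angles in radians.
     float yawFromLookAt(float x, float /*y*/, float z) {
         return std::atan2(x, z);        // yaw that swings +Z toward (x, _, z)
     }

     float pitchFromLookAt(float x, float y, float z) {
         float len = std::sqrt(x * x + y * y + z * z);
         return -std::asin(y / len);     // positive pitch looks down in this convention
     }
     ```

     Then D3DXMatrixRotationYawPitchRoll(&mCameraRot, baseYaw + MATH_DEGTORAD(m_Yaw), basePitch + MATH_DEGTORAD(m_Pitch), 0) rotates the unmodified local basis into the scripted frame. Note that adding Euler angles like this only composes cleanly when the script supplies no roll.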
  7. edwinnie

    The Neverwinter Nights 2 toolkit rocks.

    Do we still get the toolkit if we didn't preorder now, but buy it later?
  8. Hi guys! With regards to the equation found in GPU Gems 2, Opacity(Z) = { Opacity(Zi)(Zi+1 - Z) + Opacity(Zi+1)(Z - Zi) } / (Zi+1 - Zi), I understand that additive blending is done to accumulate with the previous opacity results, but how do you relate Opacity(Zi) and Opacity(Zi+1) in the equation to the blending stages? thx! Edwin
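     For reference, the interpolation itself as a C++ sketch (lerpOpacity is my own illustrative helper, not the GPU Gems 2 code). The weight on each slice is the distance of Z from the *other* slice, so Opacity(Zi) is recovered exactly at Z = Zi and Opacity(Zi+1) at Z = Zi+1:

     ```cpp
     #include <cassert>

     // Linear interpolation of opacity between two adjacent slices at
     // depths zi and zi1 (zi < z < zi1).
     float lerpOpacity(float z, float zi, float zi1, float opI, float opI1) {
         float t = (z - zi) / (zi1 - zi);   // 0 at slice i, 1 at slice i+1
         return opI * (1.0f - t) + opI1 * t;
     }
     ```

     As for the blending stages, one way I could imagine mapping this (an assumption, depending on the hardware's blend support): draw each slice's contribution with DestBlend = One and a source factor equal to its interpolation weight, so additive blending accumulates the two weighted terms; the part to keep straight is that Opacity(Zi) pairs with the weight (Zi+1 - Z)/(Zi+1 - Zi).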
  9. Hi guys! I just want to ask: are there any programmable logic operations implemented on SM3 that I might not have realised existed? thx! Edwin
  10. Hi guys! I am trying to get some basic blending to work here. I have read the FAQ but I still couldn't get a simple blend operation to work so far. First, I output to an FP16F rendersurface a color of half4(0.5,1,0,0) from the pixel shader; no blend states are set yet. For the next render pass, I output a color of half4(0.2,0,0,0) to the same rendersurface. I am trying to add the results to see if I get half4(0.7,1,0,0). My renderstates for this second pass:

      CullMode = CCW;
      SrcBlend = One;
      DestBlend = One;
      BlendOp = Add;
      AlphaBlendEnable = true;
      ColorOp[0] = Add;
      ColorArg0[0] = Current;
      ColorArg1[0] = Texture;
      AlphaOp[0] = Add;
      AlphaArg0[0] = Current;
      AlphaArg1[0] = Texture;
      ZEnable = false;
      ZWriteEnable = false;
      ColorWriteEnable = RED|GREEN|BLUE|ALPHA;

      Am I misunderstanding something? In any case, I am not sure what to set for ColorArg0, ColorArg1, AlphaArg0, and AlphaArg1. Do I need to set the BlendOp? Does setting AlphaBlendEnable also enable color blending? Pardon me, my blending really sucks, so I hope I can get some insight. I have searched Google for a while but I can't seem to find anything useful other than MSDN. Edit: I tried doing it on an A8R8G8B8 surface and it works, but not on an FP16F rendersurface. I am using an X800. thx! Edwin [Edited by - edwinnie on October 15, 2006 6:25:40 AM]
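      What those blend states should compute, modeled on the CPU as a sketch: with SrcBlend = One, DestBlend = One, BlendOp = Add, the framebuffer result is simply src + dst per channel. The texture-stage states (ColorOp/ColorArg/AlphaOp/AlphaArg) only shape the color *entering* the blender, and they are bypassed when a pixel shader is bound, so only the blend states matter here.

      ```cpp
      #include <cassert>
      #include <cmath>

      struct Half4 { float r, g, b, a; };

      // One + One additive blend: out = src + dst, per channel.
      Half4 additiveBlend(Half4 src, Half4 dst) {
          return {src.r + dst.r, src.g + dst.g, src.b + dst.b, src.a + dst.a};
      }
      ```

      So half4(0.2,0,0,0) over half4(0.5,1,0,0) should indeed give (0.7,1,0,0). AlphaBlendEnable = true turns on the whole blender (color and alpha), and BlendOp defaults to Add, so setting it explicitly is harmless. Since the math checks out on A8R8G8B8 but not on FP16F, the remaining suspect is whether the hardware supports post-pixel-shader blending on that surface format at all; that can be queried with IDirect3D9::CheckDeviceFormat and D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING.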
  11. Hi guys! I was wondering if the near and far depth maps given in the Fast Scene Voxelization paper are produced by culling back and front faces respectively while rendering from the light source? This should be similar for bounding volume geometry too, I suppose. I wonder how they managed to keep the fragments that the hardware would discard by default. I also wonder how the cellmask texture is precomputed. Has anyone done something like this before? It would be greatly appreciated if you could share your insights. Thx! Edwin [Edited by - edwinnie on October 15, 2006 2:58:12 AM]
  12. Hi guys! Has anyone read this article yet? Plausible Image Based Soft Shadows Using Occlusion Textures. In any case, the authors mention that they use multiple depth layers (for use in forward projection), yet they do not mention depth peeling. Is there a different way of generating multiple depth layers without depth peeling? Or does their algorithm involve depth peeling without mentioning it? Then again, this is the best soft shadow algorithm I have seen so far. thx! Edwin
  13. I just recalled that someone did image-based AO over at GPGPU.org. You might want to check the forums over there for more information. Sorry, but I am no AO expert. Edwin
  14. edwinnie

    Fast Silhouettes Article

    OK, here's the image with the problem: the small red arrows are pointing to part of the green lines. That part is not a silhouette. regards Edwin
  15. edwinnie

    Fast Silhouettes Article

    OK, but first, can you update your demo to show different colors for silhouettes, ridges, and valleys? I need to be sure that what I saw in error is not a ridge or a valley. regards Edwin
