# filousnt

1. ## Self Shadowing of Terrain - Which method to use?

Quote: Original post by Geometrian
Hi, You could try calculating static shadows for self-shadowing for the terrain (which could be slow and precise the first time) and then just use small shadow maps for the various objects. I.e., if there is, say, a car on the landscape that needs to have a shadow cast on it, calculate a tight rectangle around it and then do shadowmapping just for that area (using frustum culling for the terrain of course). -G

This solution seems wise. You can precalculate self-shadows for terrains:

- if your sun is static in the sky:
  http://www.gamedev.net/reference/articles/article1817.asp
  http://research.cs.queensu.ca/home/jstewart/papers/tvcg97.html
- if not:
  http://ati.amd.com/developer/gdc/2007/Oat-Ambient_Aperture_Lighting-SI3D07.pdf
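The static-sun precomputation in the first two links boils down to an occlusion test per heightfield texel. A minimal CPU sketch of that idea (the function name, the per-texel step, and the shadow test are illustrative, not taken from the articles; it assumes a non-zero horizontal sun direction):

```cpp
#include <vector>

// CPU sketch of the static-sun bake: march from each texel toward the sun and
// mark it shadowed if the terrain ever rises above the ray.
bool texelInShadow(const std::vector<float>& height, int w, int h,
                   int x, int y,
                   float sunDirX, float sunDirY, // horizontal step toward the sun
                   float sunSlope)               // ray height gained per step
{
    float px = x + 0.5f, py = y + 0.5f;
    float rayH = height[y * w + x];
    for (;;) {
        px += sunDirX; py += sunDirY; rayH += sunSlope;
        int ix = (int)px, iy = (int)py;
        if (ix < 0 || iy < 0 || ix >= w || iy >= h)
            return false;                        // ray left the map: texel is lit
        if (height[iy * w + ix] > rayH)
            return true;                         // terrain blocks the sun
    }
}
```

This brute-force march only makes sense as an offline bake for a fixed light direction; the horizon-angle methods in the linked papers are what you want for a moving sun.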
2. ## Realtime Normal Map Generation

```hlsl
float s1 = tex2D(Elevation, float2(p_uv.x - OneOverTextureSize, p_uv.y)).r;
float s2 = tex2D(Elevation, float2(p_uv.x, p_uv.y - OneOverTextureSize)).r;
float s3 = tex2D(Elevation, float2(p_uv.x + OneOverTextureSize, p_uv.y)).r;
float s4 = tex2D(Elevation, float2(p_uv.x, p_uv.y + OneOverTextureSize)).r;

// change this value to soften / harden the normal map
float coeff = 2.0f;
float3 normalf = float3(s1 - s3, s2 - s4, coeff);
normalf = normalize(normalf);
normalf.xy = normalf.xy * 0.5f + 0.5f;
```
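For reference, the same construction on the CPU, useful for offline baking or for unit-testing the shader. `coeff` plays the same role as in the HLSL above; the clamp-to-edge lookup is an assumption (in the shader the sampler state decides that):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Central differences on a height field, stiffness coefficient on Z, normalize,
// then pack XY into [0,1] exactly like the shader does before writing the texture.
Vec3 normalFromHeights(const std::vector<float>& h, int w, int rows,
                       int x, int y, float coeff = 2.0f)
{
    auto at = [&](int px, int py) {               // clamp-to-edge lookup
        px = std::max(0, std::min(w - 1, px));
        py = std::max(0, std::min(rows - 1, py));
        return h[py * w + px];
    };
    float s1 = at(x - 1, y), s2 = at(x, y - 1);   // left, up
    float s3 = at(x + 1, y), s4 = at(x, y + 1);   // right, down
    Vec3 n { s1 - s3, s2 - s4, coeff };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    n.x /= len; n.y /= len; n.z /= len;
    n.x = n.x * 0.5f + 0.5f;                      // pack for texture storage
    n.y = n.y * 0.5f + 0.5f;
    return n;
}
```

On a flat height field this yields the packed "up" normal (0.5, 0.5, 1.0), which is a quick sanity check for both versions.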
3. ## Point Sprites

Quote: Original post by Laccolith
Second, is there any way to have the point sprites not scale with distance from the camera? At the moment, if the camera is close to the sprites they shrink, but this doesn't work for the effect I'm working on. Is there any way to tell DirectX that the D3DFVF_PSIZE is world space and not screen space, or something like that?

"Point size is determined by D3DRS_POINTSCALEENABLE. If this value is set to FALSE, the application-specified point size is used as the screen-space (post-transformed) size. Vertices that are passed to Direct3D in screen space do not have point sizes computed; the specified point size is interpreted as screen-space size. If D3DRS_POINTSCALEENABLE is TRUE, Direct3D computes the screen-space point size according to the following formula. The application-specified point size is expressed in camera space units."

You will find all the information about point sprites in the DX SDK documentation.
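The formula that quoted passage refers to is easy to play with on the CPU. A sketch (per the D3D9 docs, screen size is viewportHeight * pointSize * sqrt(1 / (A + B*d + C*d^2)), with A/B/C the D3DRS_POINTSCALE_A/B/C render states and d the eye-space distance):

```cpp
#include <cmath>

// Screen-space size Direct3D computes when D3DRS_POINTSCALEENABLE is TRUE.
float scaledPointSize(float viewportHeight, float pointSize, float dist,
                      float A, float B, float C)
{
    return viewportHeight * pointSize
         * std::sqrt(1.0f / (A + B * dist + C * dist * dist));
}
```

With A = 1 and B = C = 0 the computed size is distance-independent; with C = 1 (A = B = 0) it falls off as 1/d, the usual perspective behaviour.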
4. ## Geoclipmapping with mipmaps

Quote: Original post by chot
It all works pretty well, except for _some_ of these odd vertices in between (see picture). I've played around with it all day and I've found out that linear interpolation isn't really supported in vertex shader texture lookup (in this NVIDIA paper), which is a real bummer (I kinda liked my idea :p). However, the paper is four years old now, and it seems to work in some places for me, so I'm thinking maybe something else is at play here. I've been considering whether the interpolation is done in 8 bits and thus isn't high-res enough, but I've been told that it's done in 32 bits on modern GPUs. chot

Bi/trilinear filtering depends on the GPU and the texture format. As far as I know, DX10 GPUs only do "true" filtering with FP32 texture formats; with FP16 (and any other format except FP32) you will end up with inaccuracies on some odd vertices. Since you are using DX9 with a DX10 AMD GPU, and you can only bind FP32 textures to the vertex sampler, I "guess" your card can't do "true" trilinear filtering on FP32 texture formats, or your texture offset is incorrect. You may do manual interpolation in the shader instead of letting the hardware do it, or convert the outer degenerate triangle strip into some kind of skirt.

Edit: the AMD doc on the ATI Radeon HD 3470 says that your card supports "true" FP32 filtering. The DX9 doc says that you can't use bi/trilinear filtering on FP32 texture formats, but it also says "Texture filtering is allowed with vertex texturing but the available filter types depend on hardware (or reference rasterizer) support".

NB: your filter should be MIN LINEAR, MAG POINT, MIP LINEAR.

[Edited by - filousnt on May 1, 2009 11:24:35 AM]
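The "manual interpolation" suggestion amounts to four point fetches plus two lerps done in FP32 ALU, so the result no longer depends on what the sampler hardware can filter. A CPU sketch with an assumed wrap addressing mode (in the shader, `fetch` would be a point-sampled tex2Dlod):

```cpp
#include <cmath>
#include <vector>

// Manual bilinear filtering: four point fetches, fractional weights, two lerps.
float bilinear(const std::vector<float>& tex, int texSize, float u, float v)
{
    float x = u * texSize - 0.5f, y = v * texSize - 0.5f; // texel-space coords
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float fx = x - x0, fy = y - y0;                       // fractional weights
    auto fetch = [&](int px, int py) {                    // wrap addressing
        px = ((px % texSize) + texSize) % texSize;
        py = ((py % texSize) + texSize) % texSize;
        return tex[py * texSize + px];
    };
    float top = fetch(x0, y0)     + fx * (fetch(x0 + 1, y0)     - fetch(x0, y0));
    float bot = fetch(x0, y0 + 1) + fx * (fetch(x0 + 1, y0 + 1) - fetch(x0, y0 + 1));
    return top + fy * (bot - top);
}
```

Doing the same thing in the vertex shader costs four fetches instead of one, but it sidesteps the odd-vertex artifacts entirely.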
5. ## DX9 ID3DXSprite billboard rotation

hi, I am using the DX9 ID3DXSprite interface to render billboards in a 3D world. I would like to rotate them along the camera view vector, but my attempts were unsuccessful. With this piece of code, scaling works but not rotation:

```cpp
D3DXMATRIX _mWorld;
D3DXMatrixIdentity( &_mWorld );
V( _pD3D9Sprite->SetWorldViewLH( &_mWorld, Cam.getViewMatrix() ) );
V( _pD3D9Sprite->Begin( D3DXSPRITE_ALPHABLEND | D3DXSPRITE_BILLBOARD |
                        /*D3DXSPRITE_OBJECTSPACE |*/ D3DXSPRITE_SORT_DEPTH_FRONTTOBACK ) );
pVFXm->Draw( _pD3D9Sprite, _pTEX9Sprite, *Cam.getViewMatrix() );
V( _pD3D9Sprite->End() );
```

Draw function:

```cpp
HRESULT hr;
for( uint i = 0; i < _psCount; ++i )
{
    //_Ps[i]._pos;  world-space position
    //_Ps[i]._size; size
    //_Ps[i]._rot;  2D rotation angle
    D3DXMATRIX matRotate;
    D3DXMatrixRotationZ( &matRotate, _Ps[i]._rot * 100.0f );

    D3DXMATRIX Transform;
    // 2D texture size 128x128
    const D3DXVECTOR3 scaling( 1.0f / 128.0f, 1.0f / 128.0f, 1.0f / 128.0f );
    D3DXQUATERNION qRot;
    D3DXQuaternionRotationMatrix( &qRot, &matRotate ); // result goes in the first argument
    D3DXMatrixTransformation( &Transform,
                              &_Ps[i]._pos,
                              NULL,          // CONST D3DXQUATERNION *pScalingRotation
                              &scaling,
                              &_Ps[i]._pos,
                              &qRot,
                              NULL );
    V( pD3D9Sprite->SetTransform( &Transform ) );
    V( pD3D9Sprite->Draw( pTEX9Sprite,
                          NULL,
                          &D3DXVECTOR3( 64.0f, 64.0f, 0.0f ), // sprite center
                          &_Ps[i]._pos,                       // particle world-space pos
                          D3DCOLOR_ARGB( 255, 255, 255, 255 ) ) );
}
```
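If all the Draw loop needs is a rotation about the view (Z) axis, the quaternion can also be built directly from axis and angle, which is what D3DXQuaternionRotationAxis does; a plain-math sketch of that construction (the struct and function names are illustrative):

```cpp
#include <cmath>

struct Quat { float x, y, z, w; };

// Quaternion for a rotation of `angle` radians about a unit axis:
// q = (axis * sin(angle/2), cos(angle/2)).
Quat quatFromAxisAngle(float ax, float ay, float az, float angle)
{
    float s = std::sin(angle * 0.5f);
    return { ax * s, ay * s, az * s, std::cos(angle * 0.5f) };
}
```

Note that D3DXQuaternionRotationMatrix writes its result through its first parameter, so it cannot be called with NULL there.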
6. ## "Rendering Natural Waters" coefficients conversion

Nice :) post some code, if you don't mind.
7. ## "Rendering Natural Waters" coefficients conversion

Quote: Original post by Viik
The question is - is it possible to precalculate coefficients dependent on wavelength in such a way that they can be directly used in RGB space?

Yes, you can precompute a 1D or 2D texture. Both methods are based on Pre00/01.

- 1D way: http://artis.imag.fr/Publications/2006/BD06a/water06.pdf - "4.1. Underwater light absorption", p5
- 2D way: http://www.graphicon.ru/2004/Proceedings/Technical/2%5B2%5D.pdf

[Edited by - filousnt on March 8, 2009 11:22:09 AM]
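As a concrete picture of what such a precomputed texture holds, here is a sketch that bakes per-channel Beer-Lambert absorption, exp(-a * d), into a 256-entry RGB table indexed by depth. The absorption coefficients are placeholders, not values from the papers above:

```cpp
#include <array>
#include <cmath>

struct RGB { float r, g, b; };

// Bake exp(-a * d) per channel into a depth-indexed lookup table (a 1D texture
// in practice). Red is absorbed fastest in water, blue slowest, hence the
// illustrative per-channel coefficients (in 1/m).
std::array<RGB, 256> bakeAbsorptionLUT(float maxDepth,
                                       RGB a = {0.45f, 0.06f, 0.01f})
{
    std::array<RGB, 256> lut{};
    for (int i = 0; i < 256; ++i) {
        float d = maxDepth * i / 255.0f;
        lut[i] = { std::exp(-a.r * d), std::exp(-a.g * d), std::exp(-a.b * d) };
    }
    return lut;
}
```

The shader then fetches the table by depth and multiplies it with the water color, instead of evaluating wavelength-dependent exponentials per pixel.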
8. ## post-TnL cache size on recent GPUs ?

You may want to try the ATI Tootle library (2007) instead of NvTriStrip, which is older: http://developer.amd.com/gpu/tootle/Pages/default.aspx It can also optimize the mesh to reduce overdraw (which hurts the post-TnL cache hit rate a bit). On an 8800 GT, a cache size of 48 gives the best results, but since the DX9 cache-hit query doesn't work, I can't confirm it. Maybe it's more, less, or not fixed; run some tests to find the size that suits you best.
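When tuning for a guessed cache size like the 48 above, it helps to measure ACMR (cache misses per triangle) offline. A sketch of a simple FIFO post-TnL cache simulator (real GPU caches may be FIFO, LRU, or something else entirely, and their sizes vary by chip):

```cpp
#include <algorithm>
#include <deque>
#include <vector>

// Run an index buffer through an N-entry FIFO vertex cache and count misses.
// ACMR = misses / triangles; lower is better (3.0 means no reuse at all).
float simulateACMR(const std::vector<unsigned>& indices, size_t cacheSize)
{
    std::deque<unsigned> cache;
    size_t misses = 0;
    for (unsigned idx : indices) {
        if (std::find(cache.begin(), cache.end(), idx) == cache.end()) {
            ++misses;                              // vertex must be re-transformed
            cache.push_back(idx);
            if (cache.size() > cacheSize) cache.pop_front();
        }
    }
    return float(misses) / (indices.size() / 3);
}
```

Running your real index buffers through this for a few candidate sizes shows which assumed size your optimizer is actually helping.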

9. ## Geometry Clipmaps

Quote: Original post by calvinsfu
but I don't see how the L-shaped interior rim comes into play.

Look at figure 2-5 (p33) of GPU Gems 2: "Second, there is a gap of one quad on two sides of the interior ring perimeter, to accommodate the off-center finer level. This L-shaped strip (shown in blue) can lie at any of four possible locations (top-left, top-right, bottom-left, bottom-right), depending on the relative position of the fine level inside the coarse level."

Quote: Original post by calvinsfu
Another question: a different paper (by Nick Brettell, http://www.cosc.canterbury.ac.nz/research/reports/HonsReps/2005/hons_0502.pdf) described a derived technique that uses a simplified ring structure that has just 12 blocks. Does that mean it's not necessary to use Hoppe's ring structure?

Chapter 3 (p13), last paragraph: "We enforce the constraint that n = 4k + 1 for any k > 1, so that each level can be exactly centred on the next-finest level." GPU Gems 2, Chapter 2 (p31), last paragraph: "Because the outer perimeter of each level must lie on the grid of the next-coarser level [..] the grid size n must be odd. Hardware may be optimized for texture sizes that are powers of 2, so we choose n = 2^k - 1 (that is, 1 less than a power of 2)". The ring structure is linked to the constraint on n, which can differ from one paper to another.

Quote: Original post by calvinsfu
Hoppe mentioned that the relative motion of the viewer within the windows decreases at coarser levels. I'm really confused, because if the viewer moved 1.0 on the x-axis, all vertices on the coarsest level would have to resample the height from the heightmap because the window has shifted.

- Every time you render the ring, each vertex needs to sample the height from the heightmap.
- If the viewer was at (0,0) and moves by 1.0 on the x-axis, the level 0 window is shifted by 1, and that's all. Levels 1+ windows are NOT shifted, but the level 1 L-shape location changes. Remember that there are four locations for the L-shapes, which depend on the lower level ring's position.

[Edited by - filousnt on August 8, 2008 2:16:40 PM]
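To make the "four locations" remark concrete, here is one way the corner choice could be coded: the parity of the fine window's origin inside the coarse level picks the corner. The parity convention and names here are illustrative; Hoppe's implementation may map offsets to corners differently:

```cpp
// Which corner of the coarse ring the one-quad L-shaped fill strip occupies,
// chosen from the fine level's off-center position (in fine-level quads).
enum LShapeCorner { TopLeft, TopRight, BottomLeft, BottomRight };

LShapeCorner lShapeCorner(int fineOriginX, int fineOriginY)
{
    bool left = (fineOriginX & 1) != 0; // gap ends up on the left when odd
    bool top  = (fineOriginY & 1) != 0; // gap ends up on the top when odd
    if (top)  return left ? TopLeft : TopRight;
    return left ? BottomLeft : BottomRight;
}
```

The point is only that the choice is a cheap per-level parity test re-evaluated whenever a level's window moves, not something resampled per vertex.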
10. ## DX10 VS fp16 mipmap filtering

Quote: Original post by Zaph-0
You cannot take the coarser data from a mipmap but have to precalculate it and put it in the fraction part of the float (like Hoppe said in his paper), although there are also much better ways to handle this, though not with R16F.

I am using DX10, so I don't use the R32F compression/decompression process which Hoppe describes. I am trying some alternatives offered by DX10.

Quote: Original post by Zaph-0
The red dots are OK because they are exactly at the same height, but the blue dots are interpolated between them, so you have to calculate the right average value from the surrounding (red) heights and store them for doing the transition.

Right, but bilinear filtering of the mipmap should give me the right average of the surrounding samples. The problem is the small inaccuracy.

Quote: Original post by Zaph-0
The mipmap in itself is not really helping for transitions (it's good for storing the data, though) unless you use complex shaders that take much, much more performance and texture accesses and do the calculation in real time.

Less storage required, less bus traffic, free interpolation between mipmap levels 0 & 1, texture-cache friendly (not that much, but better than the original) and, finally, clipmap level L mipmap 1 can be updated by the GPU from clipmap level L+1 mipmap 0.
11. ## DX10 VS fp16 mipmap filtering

hi, I am currently working on GPU geometry clipmaps for DX10, and I store elevation data in R16F textures. For each texture, I also store the coarser (level+1) elevation in mipmap level 1, to morph between levels. Texture coordinates are correct, but the value fetched from mipmap level 1 (bilinear) is a bit different from the one expected for the "blue dots"; red dots are OK. Blue dots should be a bilerp with 0.5 weight, but there are sometimes inaccuracies.

```hlsl
// tex size 64
// alpha.x = 0 on inside clipmap <=> use mipmap level 0
// alpha.x = 1 on border clipmap <=> use mipmap level 1 to morph
// input.vertexToTexel.y = array index
// TexPos int [0,63]
half h = txClipArray.SampleLevel( samMinLinearMagMipPointWrap,
                                  float3( 1.0f/64.0f + float2(TexPos)/64.0f,
                                          input.vertexToTexel.y ),
                                  alpha.x ).r;
```

Red and blue dots are in mipmap level 0; only red dots are in mipmap level 1. From left to right: a red dot with the correct height from mipmap level 1, a blue dot with incorrect interpolation, and again a correct red dot.
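The kind of inaccuracy described above is easy to reproduce on the CPU: quantize values to fp16's 11 significant bits and look at the midpoint. A sketch (round-to-nearest on the mantissa, normal range only, so a simplification of real half-float rounding):

```cpp
#include <cmath>

// Quantize a float to fp16 precision: keep 11 significant bits (10 stored + the
// implicit leading bit), rounding the mantissa to nearest. Normal range only.
float quantizeHalf(float v)
{
    if (v == 0.0f) return 0.0f;
    int e;
    float m = std::frexp(v, &e);   // v = m * 2^e, with m in [0.5, 1)
    return std::ldexp(std::round(m * 2048.0f) / 2048.0f, e);
}
```

For example, 1.0 and 1.0 + 1/1024 are adjacent fp16-representable values near 1.0, so their exact midpoint 1.0 + 1/2048 cannot be stored in an R16F mipmap; the filtered result lands on a neighbouring value instead, which is exactly a wrong "blue dot".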
12. ## VTF to do terrain rendering

Quote: Original post by paic
by VTF, you mean vertex texture format?

VTF stands for Vertex Texture Fetching. VTF is fast on all DX10 cards; it is slower on DX9 cards but has acceptable performance.

[Edited by - filousnt on July 31, 2008 5:10:20 AM]
13. ## Procedural Skys - A Different Approach

http://www.phys.uu.nl/~0307467/docs/skydomes.htm http://es.geocities.com/kenchoweb/skydomes_en.pdf
14. ## Performance Test: Indexed Lists or not !

Quote: Original post by Zaph-0
Quote: Original post by filousnt
Quote: Original post by Zaph-0
I do now render roughly about twice as many triangles with an unindexed list, but that means I render about 6 times as many vertices.
I was curious about your performance gain. I tried with an R8G8 triangle vertex buffer, an R8G8 triangle list vertex buffer + index buffer, and an R8G8 triangle list optimized vertex buffer + optimized index buffer. The mesh used was a regular 128x128 grid, and the triangle vertex buffer was filled by row. Triangles: 48 FPS; indexed triangle list (no optimization): 130 FPS; indexed triangle list (optimization): 160 FPS. I got the results expected... Did you do something else to your triangle vertex buffer? PS: got an 8800 GT.
Wow, those numbers are horribly low. Did you accidentally use SW vertex processing? I mean, a 128x128 grid is about 32k triangles, right?

I rendered the same grid many times per frame using instancing to load the GPU: 76 times per frame if I remember well, which is similar to a high-detail terrain in my terrain manager. Vertices were displaced in the vertex shader. Give the DX optimize functions for indexed triangle lists a try.

[Edited by - filousnt on July 19, 2008 9:17:26 AM]
15. ## Performance Test: Indexed Lists or not !

Quote: Original post by Zaph-0
I do now render roughly about twice as many triangles with an unindexed list, but that means I render about 6 times as many vertices.

I was curious about your performance gain. I tried with an R8G8 triangle vertex buffer, an R8G8 triangle list vertex buffer + index buffer, and an R8G8 triangle list optimized vertex buffer + optimized index buffer. The mesh used was a regular 128x128 grid, and the triangle vertex buffer was filled by row.

- triangles (non-indexed): 48 FPS
- indexed triangle list (no optimization): 130 FPS
- indexed triangle list (optimization): 160 FPS

I got the results expected... Did you do something else to your triangle vertex buffer?

PS: I have an 8800 GT.
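The roughly 6x vertex ratio Zaph-0 quotes follows directly from how an indexed grid reuses vertices. A sketch that builds the index buffer for an n x n vertex grid (two triangles per quad; the diagonal and winding choices here are arbitrary):

```cpp
#include <vector>

// Indexed triangle list for an n x n vertex grid, row-major vertex order.
std::vector<unsigned> gridIndices(unsigned n) // n = vertices per side
{
    std::vector<unsigned> idx;
    idx.reserve((n - 1) * (n - 1) * 6);
    for (unsigned y = 0; y + 1 < n; ++y)
        for (unsigned x = 0; x + 1 < n; ++x) {
            unsigned i = y * n + x;
            // two triangles per quad
            idx.insert(idx.end(), { i, i + 1, i + n,
                                    i + 1, i + n + 1, i + n });
        }
    return idx;
}
```

For a 129x129-vertex grid (a "128x128" quad grid) this yields 32768 triangles and 98304 indices but only 16641 unique vertices, i.e. close to six indices per unique vertex, while a raw triangle list has to emit all 98304 vertices.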