nini

Member
  • Content Count

    150
  • Joined

  • Last visited

Community Reputation

151 Neutral

About nini

  • Rank
    Member

Personal Information

  • Interests
    Programming
  1. FBX SDK normals

     It depends on which options were set in your FBX exporter; by default the exporter will smooth your normals. There is an option to prevent that: split normals at edges. Hope this helps.
  2. So you want DP3 lighting with a constant view, and specular in particular. Maybe it's not what you want because it is really simple, but why not take your normal in object space and assume that your view vector always looks straight down at your torus, i.e. V = (0, -1, 0) in object space, hardcoded in the shader? Something like the sketch below.
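     A minimal HLSL sketch of that idea, purely for illustration (lightDirOS and shininess are placeholder uniforms I made up; the normal is assumed to arrive in object space):

         float3 lightDirOS;   // assumed uniform: light direction in object space
         float  shininess;    // assumed uniform: specular exponent

         float4 ps_main(float3 normalOS : TEXCOORD0) : COLOR
         {
             float3 N       = normalize(normalOS);
             float3 viewDir = float3(0, -1, 0);          // hardcoded: camera looks straight down
             float3 toEye   = -viewDir;                  // vector from the surface toward the eye
             float3 L       = normalize(lightDirOS);
             float3 H       = normalize(L + toEye);      // Blinn half vector
             float  diff    = saturate(dot(N, L));       // the dp3 diffuse term
             float  spec    = pow(saturate(dot(N, H)), shininess);
             return float4(diff.xxx + spec.xxx, 1);
         }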
  3. Unless I missed something, if your patch of terrain is always the same and only shifted by an offset, the tangents and binormals are always the same: (0, 0, 1) for the binormal and (1, 0, 0) for the tangent. Consider working in tangent space (you already sample your heightmap like that in the vertex shader); if you want to bring things back to world space or object space, just pass the matrices to the pixel shader. As for the number of slots required: for parallax mapping you generally encode the height value in the alpha channel of the computed normal map (see the sketch below). If your terrain is static there is no interest in computing normals this way (it would be a good candidate for destructible terrain or water); can you entirely precompute the normal maps associated with each patch?
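     A rough HLSL sketch of the height-in-alpha parallax lookup (normalHeightMap, heightScale and eyeTS are made-up names, not taken from your code):

         sampler2D normalHeightMap;   // rgb = tangent-space normal, a = height
         float heightScale;           // e.g. 0.04

         float3 ParallaxNormal(float2 uv, float3 eyeTS)   // eyeTS: normalized eye vector in tangent space
         {
             float  height = tex2D(normalHeightMap, uv).a;             // height stored in the alpha channel
             float2 offset = (height - 0.5) * heightScale * eyeTS.xy;  // offset-limited parallax shift
             float3 n      = tex2D(normalHeightMap, uv + offset).rgb;
             return normalize(n * 2.0 - 1.0);                          // unpack from [0..1] to [-1..1]
         }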
  4. Quote: Original post by Shanee — Thanks! Question though: How would you calculate the normal in the vertex buffer from the height map?
     That's not a difficult thing. All you have to do is sample your heightmap 5 times in a diamond pattern, with the texcoord being something like (vertexNumOfThisPatch / numVertsPerPatch):

              x2
         x0    x    x1
              x3

     To evaluate the centre pixel's normal, take the cross products of the vectors in each direction and average them, i.e. N0 = cross(x - x0, x2 - x), N1 = cross(x1 - x, x2 - x), etc. You end up with four normals, so divide the result by 4 to average. The border cases will be the problem: be sure to emulate or enable the Mirror address mode in the shader for them. A rough sketch follows.
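     Here is what that could look like in a SM 3.0 vertex shader, as a sketch only (heightMap, texelSize and heightScale are placeholder names, and the patch is assumed to span the same [0..1] range as the texture):

         sampler2D heightMap;
         float2 texelSize;    // 1 / heightmap resolution
         float  heightScale;

         float3 NormalFromHeightmap(float2 uv)
         {
             // Centre and four neighbours: left, right, up, down (the diamond)
             float h  = tex2Dlod(heightMap, float4(uv, 0, 0)).r;
             float h0 = tex2Dlod(heightMap, float4(uv - float2(texelSize.x, 0), 0, 0)).r;
             float h1 = tex2Dlod(heightMap, float4(uv + float2(texelSize.x, 0), 0, 0)).r;
             float h2 = tex2Dlod(heightMap, float4(uv - float2(0, texelSize.y), 0, 0)).r;
             float h3 = tex2Dlod(heightMap, float4(uv + float2(0, texelSize.y), 0, 0)).r;

             // Object-space positions (y up), using uv directly as the patch's xz plane
             float3 x  = float3(uv.x,               h  * heightScale, uv.y);
             float3 x0 = float3(uv.x - texelSize.x, h0 * heightScale, uv.y);
             float3 x1 = float3(uv.x + texelSize.x, h1 * heightScale, uv.y);
             float3 x2 = float3(uv.x,               h2 * heightScale, uv.y - texelSize.y);
             float3 x3 = float3(uv.x,               h3 * heightScale, uv.y + texelSize.y);

             // One cross product per quadrant, then average the four normals
             float3 N0 = cross(x2 - x, x0 - x);
             float3 N1 = cross(x1 - x, x2 - x);
             float3 N2 = cross(x3 - x, x1 - x);
             float3 N3 = cross(x0 - x, x3 - x);
             return normalize((N0 + N1 + N2 + N3) * 0.25);
         }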
  5. First, the GPU is a parallel architecture, so like any parallel architecture you need some locking mechanism if multiple threads access the same data; this applies to the GPU too.
     1) You can compute your normals from the heightmap directly in the vertex shader, so there is no need for an extra texture, but it costs more fetches. Alternatively, to avoid the extra fetch for normals, you could encode RED = H (the heightmap value), GREEN = dH/dx (derivative of the height along the x axis) and BLUE = dH/dy (same for the y axis); see the sketch below. But this makes the terrain static, because regenerating the displacement texture will cost you CPU time, locking against the VB, etc.
     2) From what I know, on DX9 hardware vertex texture fetch is slow (you can use PIX on PC to measure it and see whether that is still true, because this assumption was right on early DX9 hardware), except on the Xbox 360, which has hardware shortcuts for it, and perhaps on modern PC cards.
     3) For the VRAM: it depends on your needs. If you only want to render terrain, fine, but what about planet-sized terrain? ;-) And what if you have several thousands of other pieces of geometry data?
     Hope this helps.
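     For the encoding in 1), a small HLSL sketch of reconstructing the normal from the stored derivatives (terrainMap and heightScale are made-up names):

         sampler2D terrainMap;   // r = height H, g = dH/dx, b = dH/dy
         float heightScale;

         float3 NormalFromDerivatives(float2 uv)
         {
             float2 d = tex2D(terrainMap, uv).gb;   // dH/dx, dH/dy
             // A heightfield y = H(x, z) has a normal proportional to (-dH/dx, 1, -dH/dy)
             return normalize(float3(-d.x * heightScale, 1, -d.y * heightScale));
         }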
  6. Quote: Original post by InvalidPointer — They mean the same thing; that LHS/RHS stuff is outdated nonsense with programmable shading. The only thing that matters is what handedness the specific application you're writing (and not the underlying graphics API) uses.
     I do not agree with you: coordinate systems are still very important and can cause you days of headache when implementing shaders, but I admit that for this problem they have no influence. The green channel is generally the y component of your normal, but it is expressed in the tangent-space coordinate system (i.e. image/texture space), so the problem is not an LHS/RHS conversion. Rather, I think it is how the tool produced the normal map from the heightmap. The normal map is constructed as the cross product between the X-texture-axis-aligned vector and the Y-texture-axis-aligned vector for a particular pixel (the difference between that pixel and two of its neighbours). Depending on the order in which the pixels are fetched, the result can be negated. If the tool starts at the upper-left corner of the image, the vectors look like this:

         * -> *
         ^
         |  Y coord
         *

     otherwise, if the tool starts at the bottom right:

         * <- *
         |
         V  Y coord
         *

     Now, I have seen that some image libraries, such as DevIL, load certain input formats by treating the origin as being at the bottom (this is the case for TGA files, for example). So one explanation could be that one image source was fed to the tool as a TGA and the other as a BMP (I'm just making a supposition), or that they were produced with two different tools that do not compute the normal map in the same order, as explained above (I'd argue for the first hypothesis). If so, one workaround is to flip the green channel in the shader (sketch below). Sorry for the ugly ASCII art :-) Hope this helps.
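     A tiny HLSL sketch of that green-channel flip, just for illustration (normalMap is a placeholder sampler name):

         sampler2D normalMap;

         float3 FetchNormalFlippedG(float2 uv)
         {
             float3 n = tex2D(normalMap, uv).rgb;   // stored in [0..1]
             n.g = 1.0 - n.g;                       // flip the green (tangent-space Y) channel
             return normalize(n * 2.0 - 1.0);       // unpack to [-1..1]
         }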
  7. I will give you two common pitfalls:
     1) Stack-allocated objects. If you put pointers into your vector and the pointer is simply the address of a stack-allocated object, then when you leave the scope of your function the object is freed and the pointer refers to invalid memory (0xfeeefeee). In code it's something like this:

         void foo()
         {
             CEntity entity;                 // lives on the stack
             m_vector.push_back(&entity);    // stores the address of a local
         }   // on leaving foo, &entity points to freed memory

     2) No copy constructor in your classes/structs. If you push objects by value, the vector creates its own copy by calling the copy constructor. If you don't write one, the compiler generates one for you, but it is only a member-wise copy and will not deep-copy any pointer members of your class, so your data can end up invalid.
     Hope this helps.
  8. Be more precise with your question: are you talking about fetching a texture in a shader with bilinear filtering enabled, or do you want to implement it yourself? If it's the latter, I will try to give you a precise answer. First of all, FX Composer or RenderMonkey will tell you the instruction count of your shader. For bilinear interpolation the hardware takes 4 samples of your texture and blends them:

         Color0----------------------------------Color1
         |                                            |
         |                                            |
         Color2----------------------------------Color3

         result = lerp( lerp(Color0, Color1, dx),
                        lerp(Color2, Color3, dx), dy )

     where dx and dy are the fractional offsets of the sample point inside the texel, each in the range [0..1]. Done naively in ALU code this is three lerps, i.e. roughly 3 subtractions, 3 multiplications and 3 additions per channel; the HLSL compiler folds each lerp into MAD-style instructions, so it comes down to a handful of ALU instructions in case you do it in software yourself (see the sketch below). Otherwise, the HLSL tex2Dxxx intrinsics have this wired into the hardware, so it costs only one GPU texture instruction if the vendor wants to be DX8/9/10 certified. You pay at least one cache-miss memory cost when the pixels are fetched for the first time (further pixel-shader runs will find the colours prefetched in the texture cache), but since we are talking about transfers between VRAM and the GPU this is really, really fast. On today's GPUs you can consider bilinear filtering free (one cycle) for the intrinsic version.
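     A rough sketch of doing the 4-tap blend by hand in HLSL (srcTex and texSize are placeholder names; the sampler is assumed to use point filtering):

         sampler2D srcTex;
         float2 texSize;     // texture width / height in texels

         float4 ManualBilinear(float2 uv)
         {
             float2 texel     = uv * texSize - 0.5;             // position in texel space
             float2 f         = frac(texel);                    // dx, dy inside the texel
             float2 baseUV    = (floor(texel) + 0.5) / texSize; // snap to the top-left texel centre
             float2 texelStep = 1.0 / texSize;

             float4 c0 = tex2D(srcTex, baseUV);                             // top-left
             float4 c1 = tex2D(srcTex, baseUV + float2(texelStep.x, 0));    // top-right
             float4 c2 = tex2D(srcTex, baseUV + float2(0, texelStep.y));    // bottom-left
             float4 c3 = tex2D(srcTex, baseUV + texelStep);                 // bottom-right

             return lerp(lerp(c0, c1, f.x), lerp(c2, c3, f.x), f.y);
         }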
  9. Quote: If you enable mip-mapping, this should happen automatically as well, without any manual management or changes to the shader.
     Yep, you are right on this. So just generate the mip chain and apply linear filtering to it.
  10. Hi, you must maintain a technique like depth buffering, otherwise you can't determine which point is the nearest. If you don't have a depth buffer, why don't you track the intersection distances and points, and keep only the shortest non-negative distance from the set (see the sketch below)? Otherwise, I can't see why a light projection could fail to intersect a plane: a plane is infinite, and a point can't be parallel to a plane.
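      A sketch of that "keep the nearest hit" idea in HLSL-style code (IntersectPlane and the ray/plane names are all made up for illustration):

          // plane: xyz = unit normal, w = d, with dot(n, p) + d = 0 on the plane
          float IntersectPlane(float3 rayOrigin, float3 rayDir, float4 plane)
          {
              float denom = dot(plane.xyz, rayDir);
              if (abs(denom) < 1e-6) return -1;   // ray parallel to the plane: no hit
              return -(dot(plane.xyz, rayOrigin) + plane.w) / denom;
          }

          // usage: loop over the planes and keep the smallest non-negative distance
          //   float tBest = 1e30;
          //   float t = IntersectPlane(rayOrigin, rayDir, plane);
          //   if (t >= 0 && t < tBest) tBest = t;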
  11. Simple question before talking about aliasing: what is your texture resolution? The aliasing with a projected texture comes from the fact that you project a -x/x range onto a 0/1 range depending on the distance of the projector from the object receiving the texture. Since you do a perspective projection, the -x/x range scales with distance, producing a larger and larger mismatch between the -x/x range and the 0/1 range, and that gives you the aliasing. The same happens with shadow-mapping techniques, and the cleverer algorithms fight this aliasing either by averaging the surrounding pixels (which is fine for depth, since we don't need to be conservative, but won't produce a good result for colours), or by splitting the shadow map into multiple maps by distance (I suggest you do this if you want a neat effect). If you have constraints like memory, you will probably not want to do that, but here are my ideas on this:
      Complex solution: I would build the texture at different sizes (1024/512/256/128/64) and sample the right map depending on the distance of the receiver (see the sketch below). The difficulty will be fighting zooming and finding a way to downsample the hi-res texture so the projected motif keeps the same size, while handling smooth transitions with displacement (mipmapping with linear interpolation ;-)).
      Easy solution: increase the texture size.
      Other thoughts: switch to another algorithm, like decal generation?
      "Bon courage" (French for: good luck).
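      One way to express that distance-based selection in HLSL, as a sketch only (projTex, lodScale and maxLod are placeholder names, and projUV is assumed to already be the projective lookup coordinate):

          sampler2D projTex;   // projected texture with a full mip chain
          float lodScale;      // distance-to-LOD scale, tuned by hand
          float maxLod;        // e.g. 4 for a 1024/512/256/128/64 chain

          float4 SampleProjectedByDistance(float2 projUV, float receiverDistance)
          {
              float lod = clamp(receiverDistance * lodScale, 0, maxLod);   // fractional LOD blends two levels
              return tex2Dlod(projTex, float4(projUV, 0, lod));
          }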
  12. Yep, you spotted it: you cannot bind a render target and fetch from it at the same time. Maybe I'm wrong, but why not use D3DXFillTexture instead (it will be done on the CPU, I agree)? Just trying to help. Regards, and thanks for your water demo in x3.
  13. Sorry for the late response. No, I want to simulate a water pool and do the refraction by looking it up in a cube map while rendering only a quad. I have come up with a solution, which is to aim a projector downward and shift it by the dot product between the view direction and the downward vector (0, -1, 0), in order to shift the lookup and simulate the perspective of the parallelepiped, but it gives me weird results at grazing angles. Moreover, it produces a fisheye-like image (:-x). I tried interior mapping; it's amazing and exciting, but the object seems to be cut in two if the pivot of the object is at its middle (i.e. the cube in object space spans from -1 to 1). Cheers.
  14. Hi, has anyone simulated depth with a cube map? I tried to sample the cube map with the view direction, but when the camera moves this causes the view direction to rotate, the cube map becomes misaligned with the geometry, and the texture shows a displacement. For example, I render a quad and I want the cube map to be aligned with it, but instead of sampling the middle of the cube map I want to sample it from above. Actually I do the calculation in object space in order to preview things in RenderMonkey. Any help would be appreciated, thanks.
  15. Quote: Original post by nini — Quote: Original post by programmermattc — Is it possible you need to turn the alpha channel on somehow? I know when I was doing stuff in DirectX I could mess with the alpha channel all I wanted but nothing changed since it was active.
      I think this is a good answer! I have not actually solved my problem yet, but I will try some blend render states; I noticed that I cleared alpha to 0xff before any writes, in a dev->Clear call. Thanks to all for your ideas.
      Okay, it's solved for me: it was just a CommitChanges on the effect that was not called (excuse me). Nevertheless, thanks again to everyone.