Flicklizic

Members
  • Content count

    15
Community Reputation

1067 Excellent

About Flicklizic

  • Rank
    Member
  1. You can store the data inside your heightmap: use the R and G channels for the height level, B for the construction data, and the alpha channel for the speed factor. With this approach you only need a short lookup function. Just remember to write the texture info into the heightmap when you "paint" your terrain.
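     A minimal CPU-side lookup for that packing might look like this; it assumes 8-bit RGBA texels with R as the high byte of the height (all names here are illustrative, not from the original post):

        struct TerrainSample
        {
            float         height;       // R (high byte) and G (low byte)
            unsigned char construction; // B
            float         speedFactor;  // A, normalized to 0..1
        };

        TerrainSample LookupTerrain(const unsigned char* texels, int width, int x, int z)
        {
            const unsigned char* p = texels + 4 * (z * width + x); // RGBA texel
            TerrainSample s;
            s.height       = (float)((p[0] << 8) | p[1]);
            s.construction = p[2];
            s.speedFactor  = p[3] / 255.0f;
            return s;
        }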
  2. DX11 Storing textures (game engine)

    I will probably structure my engine that way. Thanks for the reply!
  3. Just a short question: I need to store and compress all my textures inside a single file. Is it better to save them as .dds, compress with zlib (for example), decompress at load time and create the resource with DirectX's D3DX11CreateShaderResourceViewFromFile, or should I store them as raw pixel data (RGBA channels), compress with zlib (again, zlib or any other), and load by creating an empty texture, updating the buffer and then generating the mip maps?

     Some notes:
     - Yes, I always need mip maps.
     - I use all 4 channels (RGBA) almost always.
     - Currently I'm using DirectX 11 and C++.

     Thanks :)
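     One possible route for the first option, sketched under the assumption that the archive stores each texture as a zlib-compressed .dds blob alongside its uncompressed size (the function and parameter names are hypothetical). Decompressing to memory also means the FromMemory variant can be used, so no temporary file is needed; if the .dds already contains mips, they are used as-is:

        #include <vector>
        #include <zlib.h>
        #include <d3dx11tex.h>

        HRESULT LoadCompressedDDS(ID3D11Device* device,
                                  const Bytef* compressed, uLong compressedSize,
                                  uLongf uncompressedSize,
                                  ID3D11ShaderResourceView** outSRV)
        {
            // Inflate the .dds blob back to its original size.
            std::vector<Bytef> dds(uncompressedSize);
            uLongf destLen = uncompressedSize;
            if (uncompress(&dds[0], &destLen, compressed, compressedSize) != Z_OK)
                return E_FAIL;

            // D3DX parses the .dds header and creates the SRV directly from memory.
            return D3DX11CreateShaderResourceViewFromMemory(
                device, &dds[0], destLen, NULL, NULL, outSRV, NULL);
        }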
  4. Terrain render process

      Sorry, by "other things" I meant the tangent and binormal.

      But since the terrain can change frequently, won't this generate a lot of data processing? I will need to calculate the normals, tangents and binormals if they aren't stored.

      I could use just 2 textures for the pre-baked data:
      - Height: the x and y channels of the first texture (so the height can go up to 65,536, or +-32,768 if signed).
      - Normal: the z and w channels of the first texture plus the x channel of the second.
      - Tangent: the y, z and w channels of the second texture.
      - Binormal: computed in the shader from the tangent and normal.

      Only 2 texture lookups.

      GPU: GeForce 8400 GS (yeah, I need a better one)
      CPU: i5 (I believe this is OK)
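     A small sketch of the unpacking described above, assuming 8-bit channels (two channels give a 16-bit height, interpreted here as +-32,768) and D3DX math types; the binormal comes from a single cross product, so it never needs to be stored:

        #include <d3dx10math.h>

        // Rebuild a signed height from two 8-bit channels.
        float UnpackHeight(unsigned char hi, unsigned char lo)
        {
            return (float)((hi << 8) | lo) - 32768.0f;
        }

        // Binormal = normal x tangent.
        D3DXVECTOR3 ComputeBinormal(const D3DXVECTOR3& normal, const D3DXVECTOR3& tangent)
        {
            D3DXVECTOR3 binormal;
            D3DXVec3Cross(&binormal, &normal, &tangent);
            return binormal;
        }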
  5. Hello, I'm in doubt about which method I should use to render my terrain.

     First, some information about my terrain style:
     - My game is in the Diablo/Torchlight style, so I don't need to worry about LOD or anything like it.
     - Currently I divide my terrain into 9 parts; each part is divided using a quadtree, rendering only what I can see.
     - I'm using a 128x128 heightmap for each terrain, which occupies an area of about 64 game units.
     - Each terrain has its own textures, up to 8, and 2 alpha maps (so I can "paint" the map).

     These are the methods I thought of:

     1) Store only the texture info, the heightmap and the 2 alpha maps; calculate everything else at runtime, only when that terrain is needed. Store this data and send it to the shaders when render time comes.

     2) Pre-calculate everything in the "building" phase and store ALL the data; when that terrain is needed, just load it and send it to the shaders.

     3) Store the texture info, the heightmap, a pre-baked normal map texture and the alpha maps; at render time send ONLY the textures and, in the shader, do something like this:

     - Calculate the position from the index. For a 128x128 heightmap it would look like this (sketch):

        // PRIMITIVE_INDEX is the primitive index provided by the shader
        // (the semantic is SV_PrimitiveID in D3D10/11)
        uint COUNTER = 0; // global counter (conceptual; this is a sketch)

        float xPos, yPos, zPos;
        uint currentX = PRIMITIVE_INDEX % 128;
        uint currentZ = PRIMITIVE_INDEX / 128;

        if (COUNTER == 0)
        {
            xPos = currentX / 2.0f;
            zPos = currentZ / 2.0f;
        }
        else if (COUNTER == 1)
        {
            xPos = currentX / 2.0f + 0.5f;
            zPos = currentZ / 2.0f;
        }
        else
        {
            xPos = currentX / 2.0f;
            zPos = currentZ / 2.0f + 0.5f;
        }
        yPos = TextureLookUp(currentX, currentZ);

        COUNTER++;
        if (COUNTER == 3)
        {
            COUNTER = 0;
        }

     - Compute the normals using the normal texture (same idea I used for the position; this is NOT the same normal texture used in the pixel shader, it is a pre-baked VERTEX normal texture).
     - Compute the texture coordinates from the positions.
     - Compute the tangent and binormal with more texture lookups.

     Currently I'm using the first idea, but my FPS is at 20~30 and I need to improve it (OK, my GPU is not that good, but I can play SC2 normally and I'm only rendering the terrain).
  6. What I mean by loading using virtual memory is that I don't use the normal functions like fopen; I use memory mapping to load the assets into memory, then create the "buffers" for each one and give the GPU access to them.

     But really, thanks for the replies, they showed me the right way.
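     For reference, a minimal Win32 memory-mapping sketch of that kind of loading ("assets.pak" and the function name are hypothetical):

        #include <windows.h>

        // Map a whole file read-only into the address space; the returned
        // pointer stays valid until UnmapViewOfFile is called on it.
        const unsigned char* MapAssetFile(const char* path, HANDLE* outFile, HANDLE* outMapping)
        {
            *outFile = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                                   OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
            if (*outFile == INVALID_HANDLE_VALUE)
                return NULL;

            *outMapping = CreateFileMappingA(*outFile, NULL, PAGE_READONLY, 0, 0, NULL);
            if (*outMapping == NULL)
                return NULL;

            return (const unsigned char*)MapViewOfFile(*outMapping, FILE_MAP_READ, 0, 0, 0);
        }

        // Usage: data = MapAssetFile("assets.pak", &file, &mapping); build GPU buffers
        // from it, then UnmapViewOfFile(data) and CloseHandle on both handles.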
  7. Well, I'm building a game engine using DirectX 10 and I need to discuss some ideas and conclusions.

     Some information:
     - My game is something like the Diablo/Torchlight style mixed with DotA, using a third-person camera looking from top to bottom.
     - Currently I'm using a quadtree to split the terrain and cut some unnecessary draw calls (frustum culling).
     - Each chunk of terrain has its own materials: 2 alpha textures, 8 diffuse textures and 8 normal textures; the alpha textures determine where each texture should be used (I use some logic to cut unnecessary work in the pixel shader).
     - For lighting I'm using the Light Pre-Pass system, so I render all the geometry 2 times (normals only first).
     - All the meshes are stored and indexed, so when I need to draw the scene I first look for all meshes of the same type, put their data into an instance buffer and then issue 1 draw call for all of them (see the sketch below).

     1) Just to confirm a conclusion: since I will always be facing almost the same number of triangles (it's a top-down camera with a fixed zoom; a little zoom may be allowed, but 99% of the time there will be none), I don't need to worry about LOD, correct?

     2) Right now I'm using heightmaps, stored as textures, for the vertex heights of each terrain chunk; this costs about 1 MB per chunk, but I need a better way. A plain heightmap doesn't let the terrain be discontinuous; for that I would need to store full x, y and z floats per vertex, which is expensive. Is there a better way to achieve the same result?

     3) Do games like these use a lot of meshes, or just bump/parallax occlusion mapping? (Reference screenshots of the ground were attached here.)

     4) Is a quadtree still a good idea for this, or is there a better way?

     5) Each time I need to load an asset I load it using virtual memory; is this correct? (All my asset data, textures, meshes, etc., are in custom edited file formats.)

     Tutorials, books and examples are welcome too!
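     A rough D3D10 sketch of that batching step, assuming the per-instance data lives in a dynamic vertex buffer bound to input slot 1 (InstanceData and every name here are hypothetical, not the engine's actual code):

        #include <cstring>
        #include <d3d10.h>
        #include <d3dx10math.h>

        struct InstanceData { D3DXMATRIX world; }; // hypothetical per-instance layout

        void DrawBatch(ID3D10Device* device, ID3D10Buffer* instanceVB,
                       const InstanceData* instances, UINT instanceCount,
                       UINT indexCountPerInstance)
        {
            // Refill the dynamic instance buffer for this batch.
            void* mapped = NULL;
            instanceVB->Map(D3D10_MAP_WRITE_DISCARD, 0, &mapped);
            memcpy(mapped, instances, sizeof(InstanceData) * instanceCount);
            instanceVB->Unmap();

            // Slot 0 (per-vertex data) and the index buffer are assumed bound already.
            UINT stride = sizeof(InstanceData);
            UINT offset = 0;
            device->IASetVertexBuffers(1, 1, &instanceVB, &stride, &offset);

            // One call draws every instance of this mesh type.
            device->DrawIndexedInstanced(indexCountPerInstance, instanceCount, 0, 0, 0);
        }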
  8. Hello, I'm currently working on a project using DirectX 10 and I'm wondering how I could achieve the same spell effects seen in games like Warcraft 3, Diablo, Torchlight, WoW, StarCraft 2, etc. I know that fire (and other similar things) can be done with billboards, and particles are easy to implement, but what about spells like these:

     - http://www.youtube.com/watch?v=hd0xuL8bdBA (0:42 to the end).

     Can someone point me to examples or articles I could read?
  9. Hello, I'm trying to put the Light Pre-Pass lighting method into my game engine, but as I use instanced hardware skinning I don't know if rendering the geometry 2 times would be good (because I would need to skin 2 times).

     Currently I'm doing forward lighting. My game uses a third-person camera (like Diablo and StarCraft), I really need a good way to use many, many lights at the same time, and almost every mesh in the scene is skinned (and instanced, if there is more than one of the same type).

     Does anyone know a good way to implement this, or a good alternative? I was thinking about using Stream-Out, but I don't know if it would work, because we are talking about a scene with around 200~300 skinned meshes, some of them instanced and all of them at different animation stages.

     Other solutions and tutorials are welcome too.
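     For what it's worth, the Stream-Out idea would look roughly like this in D3D10: skin once into a stream-out buffer, then let each geometry pass read the pre-skinned vertices. This is only a sketch; it assumes the skinning VS was created with a stream-output signature and is bound before the first draw, and SkinnedVertex is a hypothetical output layout:

        #include <d3d10.h>

        void SkinOnceDrawMany(ID3D10Device* device, ID3D10Buffer* skinnedVB,
                              UINT sourceVertexCount, UINT skinnedStride)
        {
            // Pass 0: run the skinning VS once, capturing its output.
            UINT soOffset = 0;
            device->SOSetTargets(1, &skinnedVB, &soOffset);
            device->Draw(sourceVertexCount, 0);

            // Unbind the stream-out target so it can be read as a vertex buffer.
            ID3D10Buffer* nullVB = NULL;
            device->SOSetTargets(1, &nullVB, &soOffset);

            // Each later pass (normals, then shading) reuses the skinned vertices;
            // DrawAuto takes the vertex count from the stream-out statistics.
            UINT offset = 0;
            device->IASetVertexBuffers(0, 1, &skinnedVB, &skinnedStride, &offset);
            device->DrawAuto();
        }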
  10. OK, I solved the problem. If anyone runs into this same issue, remember to check whether the previous buffer is working correctly first (a broken buffer can cause a ripple effect on all subsequent buffers).
  11. Hello, I'm having problems using constant buffers with arrays. Currently I'm sending an array of size 100 to my vertex shader, like this:

        /////////////
        // DEFINES //
        /////////////
        #define MAX_NUMBER_INSTANCES 100

        /////////////
        // STRUCTS //
        /////////////
        struct InstanceInfo
        {
            matrix InstanceWorldMatrix;
            uint   CurrentFrame;
            uint   TotalFrames;
            uint   AnimationType;
            float  DeltaTime;
        };

        /////////////
        // BUFFERS //
        /////////////
        cbuffer InstanceBuffer
        {
            InstanceInfo Instance[MAX_NUMBER_INSTANCES];
        };

     And I'm getting wrong results. Here is my C++ struct (almost the same):

        struct InstanceInfo
        {
            D3DXMATRIX   worldMatrix;
            unsigned int currentFrame;
            unsigned int totalFrames;
            unsigned int animationType;
            float        deltaTime;
        };

     The buffer is initialized correctly with the size sizeof(InstanceInfo)*MAX_NUMBER_INSTANCES (MAX_NUMBER_INSTANCES is 100 in my C++ code too), and the data is copied correctly too (I double-checked both). I know about the packing rules, but I can't find my error (it's probably right in front of me, but I can't see it). If someone can help me...
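     For this particular layout the packing should actually line up: the 4x4 matrix occupies 64 bytes, and the three uints plus the float fit into a single 16-byte register, giving 80 bytes per element, which already satisfies the 16-byte alignment cbuffer array elements require. A compile-time check (C++11) on the struct above can at least rule out a size mismatch:

        // 64 (matrix) + 16 (3 uints + 1 float in one register) = 80 bytes,
        // a multiple of 16, so the HLSL and C++ layouts can match.
        static_assert(sizeof(InstanceInfo) == 80,
                      "InstanceInfo does not match the expected HLSL cbuffer layout");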
  12. Hello, talking about the storage and loading of models and animations, which would be better for a game engine:

     1 - One mesh and one skeleton per model, both in the same file, each skeleton with 10~15 animations (so each model has its own animations).
     2 - A lot of meshes but a low number of skeletons, stored in separate files, so the same skeleton (and its animations) can be reused for more than one mesh; each skeleton set can have a lot of animations (note that in this case, sharing the same skeleton and animations causes a loss of uniqueness).

     And if I need to show 120~150 models per frame (animated and skinned on the GPU), 40 of them of the same type, is it better to:

     1 - Use an instancing system for all models in the game, even when I only need 1 model of a type.
     2 - Detect which models need instancing (when they appear more than once) and use a different render path (other shader programs) for them, and non-instanced rendering for the others.
     3 - Not use instancing at all, because the gain would be very low for this number of models.

     All the models discussed here are animated; currently I use the MD5 format with GPU skinning but without instancing, and I would like to know if there are better ways to do the whole animation process.

     If someone knows a good tutorial or can put me on the right path... I don't know how I could build an interpolated skeleton and still use instancing. Let me explain: I can pack all the bone transformations (matrices) for every frame of every animation into a texture and send it to the vertex shader, then read the respective animation/frame transformation for each vertex of each model. Instancing works here, because I always send the same data for the same model type. But when I need an interpolated skeleton, should I do the interpolation in the vertex shader too? (More texture loads could cost some performance.) I would need to compute the interpolated skeleton on the CPU anyway, because I need it for collision.

     Any solutions/ideas?

     - I'm using DirectX, but I think this applies to other systems.
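     Since the interpolated skeleton is needed on the CPU for collision anyway, one option is to do the keyframe blending there and upload only the final matrices to the animation texture. A minimal D3DX sketch; the BoneKey layout (quaternion plus translation per bone) is an assumption, not the MD5 format:

        #include <d3dx10math.h>

        // Hypothetical keyframe layout: one rotation and translation per bone.
        struct BoneKey
        {
            D3DXQUATERNION rotation;
            D3DXVECTOR3    translation;
        };

        // Blend two keyframes into a bone matrix (t in 0..1).
        D3DXMATRIX InterpolateBone(const BoneKey& a, const BoneKey& b, float t)
        {
            D3DXQUATERNION q;
            D3DXVECTOR3    p;
            D3DXQuaternionSlerp(&q, &a.rotation, &b.rotation, t);
            D3DXVec3Lerp(&p, &a.translation, &b.translation, t);

            D3DXMATRIX rotation, translation;
            D3DXMatrixRotationQuaternion(&rotation, &q);
            D3DXMatrixTranslation(&translation, p.x, p.y, p.z);
            return rotation * translation;
        }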
  13. Painting texture layers - pixel shader

     Here is the info:
     - All textures are 1024x1024.
     - The alpha textures are 256x256.
     - My terrain has 128x128 vertices.
     - The sampler state is configured like this:

        samplerDesc.Filter = D3D10_FILTER_MIN_MAG_MIP_LINEAR;
        samplerDesc.AddressU = D3D10_TEXTURE_ADDRESS_WRAP;
        samplerDesc.AddressV = D3D10_TEXTURE_ADDRESS_WRAP;
        samplerDesc.AddressW = D3D10_TEXTURE_ADDRESS_WRAP;
        samplerDesc.MipLODBias = 0.0f;
        samplerDesc.MaxAnisotropy = 1;
        samplerDesc.ComparisonFunc = D3D10_COMPARISON_ALWAYS;
        samplerDesc.BorderColor[0] = 0;
        samplerDesc.BorderColor[1] = 0;
        samplerDesc.BorderColor[2] = 0;
        samplerDesc.BorderColor[3] = 0;
        samplerDesc.MinLOD = 0;
        samplerDesc.MaxLOD = D3D10_FLOAT32_MAX;

     I only update the alpha textures (and only when I want to paint).

     And one more thing: I only use 1 mip level for all textures; maybe that is what is causing this?
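     If the single mip level turns out to be the cause, a D3D10 texture can be created with a full mip chain and regenerated after each paint stroke. A minimal sketch (256x256 matches the alpha maps above; everything else here is an assumption):

        #include <d3d10.h>

        ID3D10ShaderResourceView* CreatePaintableTexture(ID3D10Device* device,
                                                         ID3D10Texture2D** outTex)
        {
            D3D10_TEXTURE2D_DESC desc = {};
            desc.Width            = 256;
            desc.Height           = 256;
            desc.MipLevels        = 0;  // 0 = allocate the full mip chain
            desc.ArraySize        = 1;
            desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
            desc.SampleDesc.Count = 1;
            desc.Usage            = D3D10_USAGE_DEFAULT;
            desc.BindFlags        = D3D10_BIND_SHADER_RESOURCE | D3D10_BIND_RENDER_TARGET;
            desc.MiscFlags        = D3D10_RESOURCE_MISC_GENERATE_MIPS;

            ID3D10ShaderResourceView* srv = NULL;
            device->CreateTexture2D(&desc, NULL, outTex);
            device->CreateShaderResourceView(*outTex, NULL, &srv);
            return srv;
        }

        // After painting into mip 0 (e.g. with UpdateSubresource):
        //     device->GenerateMips(srv);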
  14. I'm creating a little editor for my terrain so I can "paint" it, but right now I'm getting around 20 FPS using 8 textures and 2 alpha maps. My HLSL pixel shader:

        ////////////////////////////////////////////////////////////////////////////////
        // Filename: terrain.ps
        ////////////////////////////////////////////////////////////////////////////////

        /////////////
        // GLOBALS //
        /////////////
        Texture2D textures[8];
        Texture2D alphaTextures[2];

        //////////////
        // SAMPLERS //
        //////////////
        SamplerState SampleType;

        //////////////
        // TYPEDEFS //
        //////////////
        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float2 tex      : TEXCOORD0;
            float2 alpha    : TEXCOORD1;
            float3 normal   : NORMAL;
        };

        ////////////////////////////////////////////////////////////////////////////////
        // Pixel Shader
        ////////////////////////////////////////////////////////////////////////////////
        float4 TerrainPixelShader(PixelInputType input) : SV_TARGET
        {
            int i;
            float4 color;
            float4 textureColor[8];
            float4 alphaMap[2];

            // Sample all diffuse textures.
            for (i = 0; i < 8; i++)
            {
                textureColor[i] = textures[i].Sample(SampleType, input.tex);
            }

            // Sample the alpha maps.
            for (i = 0; i < 2; i++)
            {
                alphaMap[i] = alphaTextures[i].Sample(SampleType, input.alpha);
            }

            // Start from an opaque black base color.
            color = float4(0.0f, 0.0f, 0.0f, 1.0f);

            // Blend in layers 2 through 9, one channel of the alpha maps per layer.
            color = lerp(color, textureColor[0], alphaMap[0].r);
            color = lerp(color, textureColor[1], alphaMap[0].g);
            color = lerp(color, textureColor[2], alphaMap[0].b);
            color = lerp(color, textureColor[3], alphaMap[0].a);
            color = lerp(color, textureColor[4], alphaMap[1].r);
            color = lerp(color, textureColor[5], alphaMap[1].g);
            color = lerp(color, textureColor[6], alphaMap[1].b);
            color = lerp(color, textureColor[7], alphaMap[1].a);

            // Return the final color.
            return color;
        }

     Yes, I know I could check whether I really need all 8 textures (and skip part of the code), but I want to handle the situation where I NEED all of them. The FPS starts to drop as I paint more layers: everything starts black, and when I paint the first layer (even a single pixel) my FPS goes from 55 to 45; the second layer takes it from 45 to 37, and so on. Is there a better way to achieve this, or is this the only way? I'm a little new to DirectX, so I believe there is something wrong with my code (or my theory). Thanks!

     Note that my first set of texture coordinates repeats over the terrain about 4x while the alpha coordinates cover it exactly once; this is slowing things down too, because if I use the alpha coordinates for the diffuse textures as well, my FPS goes up by about 20 (both sets of coordinates go from 0.0 to 1.0).