
Member Since 04 Mar 2013

Topics I've Started

Storing textures (game engine)

24 March 2014 - 03:43 PM

Just a short question... I need to store and compress all my textures inside a single file...


Is it better to save them as .dds, compress with zlib (for example), and load with DirectX's "D3DX11CreateShaderResourceViewFromFile" after decompressing? Or should I store them as raw pixel data (RGBA channels), compress with zlib (again, zlib or any other), and load by creating an empty texture, updating the buffer, and then generating the mipmaps?


Just some notes:


- Yes, I always need mipmaps.

- I almost always use all 4 channels (RGBA).

- Currently I'm using DirectX 11 and C++.
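For the "single file" part, here is a minimal sketch of a pack-file container; the layout and names are my own invention, not an existing format. The blobs could be zlib-compressed DDS files or raw RGBA data either way; a real packer would also keep a table of offsets at the front so each texture can be seeked to directly:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <sstream>
#include <string>

static void WriteU32(std::ostream& out, uint32_t v) {
    out.write(reinterpret_cast<const char*>(&v), sizeof(v));
}

static uint32_t ReadU32(std::istream& in) {
    uint32_t v = 0;
    in.read(reinterpret_cast<char*>(&v), sizeof(v));
    return v;
}

// Pack all blobs (e.g. compressed texture files) into one byte stream:
// [count] then, per texture, a length-prefixed name and a length-prefixed blob.
std::string WritePack(const std::map<std::string, std::string>& textures) {
    std::ostringstream out;
    WriteU32(out, static_cast<uint32_t>(textures.size()));
    for (const auto& kv : textures) {
        WriteU32(out, static_cast<uint32_t>(kv.first.size()));
        out.write(kv.first.data(), kv.first.size());
        WriteU32(out, static_cast<uint32_t>(kv.second.size()));
        out.write(kv.second.data(), kv.second.size());
    }
    return out.str();
}

// Read the package back into name -> blob.
std::map<std::string, std::string> ReadPack(const std::string& bytes) {
    std::istringstream in(bytes);
    std::map<std::string, std::string> textures;
    uint32_t count = ReadU32(in);
    for (uint32_t i = 0; i < count; ++i) {
        uint32_t nameLen = ReadU32(in);
        std::string name(nameLen, '\0');
        in.read(&name[0], nameLen);
        uint32_t blobLen = ReadU32(in);
        std::string blob(blobLen, '\0');
        in.read(&blob[0], blobLen);
        textures[name] = blob;
    }
    return textures;
}
```

Once a blob is decompressed into memory, it can be handed to the loader of choice (for the DDS route, an in-memory loading function rather than the FromFile variant).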


Thanks :)

Terrain render process

09 June 2013 - 06:29 PM

Hello, I'm unsure which method I should use to render my terrain.


First, some information about my terrain style:


 - My game is a Diablo/Torchlight-style game, so I don't need to worry about LOD or anything like that.

 - Currently I divide my terrain into 9 parts; each part is subdivided using a quadtree, and I render only what I can see.

 - I'm using a 128x128 heightmap for each terrain chunk, which covers an area of 64 game units.

 - Each terrain has its own textures, up to 8, and 2 alpha maps (so I can "paint" the map).


So, these are the methods that I thought:


1) Store only the texture info, the heightmap and the 2 alpha maps; calculate everything else at runtime, only when that terrain is needed. Store this data and send it to the shaders when render time comes.


2) Pre-calculate everything in the "building" phase and store ALL the data; when that terrain is needed, just load it and send it to the shaders.


3) Store the texture info, the heightmap, a pre-baked normal map texture and the alpha maps; when render time comes, send ONLY the textures and, in the shader, do something like this:


 - Calculate the position from the vertex index; for a 128x128 heightmap it would look like this:

// SV_VertexID is the per-vertex index semantic (the one I couldn't remember).
// 128 samples span 64 game units, so the grid spacing is 0.5.

float3 ComputePosition(uint vertexID : SV_VertexID)
{
    uint currentX = vertexID % 128;
    uint currentZ = vertexID / 128;

    float3 pos;
    pos.x = currentX * 0.5f;                   // float math, not integer division
    pos.z = currentZ * 0.5f;
    pos.y = TextureLookUp(currentX, currentZ); // height from the heightmap texture
    return pos;
}


 - Compute the normals using the normal texture (same idea as for the position; this is NOT the normal map used in the pixel shader, it's a pre-baked per-VERTEX normal texture).


 - Compute the texture coordinates using the positions.


 - Compute the tangent and binormal using more texture lookups.
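For the pre-baked vertex normal texture in option 3, one common way to bake it is central differences over the heightmap. A minimal sketch (function and parameter names are mine; `spacing` is the grid step in game units, 0.5 here since 128 samples span 64 units):

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

// Bake the normal of vertex (x, z) from a size*size heightmap
// using central differences of the neighbouring heights.
std::array<float, 3> BakeNormal(const std::vector<float>& height,
                                int size, int x, int z, float spacing) {
    auto h = [&](int ix, int iz) {
        // Clamp at the borders so edge vertices still get a normal.
        ix = std::max(0, std::min(size - 1, ix));
        iz = std::max(0, std::min(size - 1, iz));
        return height[iz * size + ix];
    };
    float dx = (h(x + 1, z) - h(x - 1, z)) / (2.0f * spacing);
    float dz = (h(x, z + 1) - h(x, z - 1)) / (2.0f * spacing);
    float nx = -dx, ny = 1.0f, nz = -dz;
    float len = std::sqrt(nx * nx + ny * ny + nz * nz);
    return {nx / len, ny / len, nz / len};
}
```

Running this once per vertex at build time and writing the results into a texture gives the shader a plain lookup instead of per-frame normal math.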


Currently I'm using the first idea, but my FPS is at 20~30 and I need to improve it (ok, my GPU is not that good, but I can play SC2 normally and here I'm only rendering the terrain).




Sorry for my bad English.

Diablo/Torchlight based game engine questions

01 April 2013 - 10:18 PM

Well, I'm building a game engine using DirectX 10 and I need to discuss some ideas and conclusions.


Some information:


- My game is something like Diablo/Torchlight mixed with DotA, using a third-person camera looking from top to bottom.

- Currently I'm using a quadtree to split the terrain and cut some unnecessary draw calls (frustum culling).

- Each chunk of terrain has its own materials: 2 alpha textures, 8 diffuse textures and 8 normal textures; the alpha textures determine where I should use each texture (I use some logic to skip unnecessary work in the pixel shader).

- For lighting I'm using the Light Pre-Pass system, so I render all the geometry twice (normals only first).

- All the meshes are stored and indexed, so when I need to draw the scene, I first gather all meshes of the same type, put their data into an instance buffer, and then do just 1 draw call for them.
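The quadtree culling step described above can be sketched like this, flattened to 2D (all names are mine, not the engine's). The key property is that when a node misses the view rectangle, its whole subtree is skipped, which is what removes the unnecessary draw calls:

```cpp
#include <cassert>
#include <vector>

// Axis-aligned rectangle on the terrain plane (x/z, y ignored).
struct Rect { float x0, z0, x1, z1; };

bool Intersects(const Rect& a, const Rect& b) {
    return a.x0 < b.x1 && b.x0 < a.x1 && a.z0 < b.z1 && b.z0 < a.z1;
}

// Collect the leaf cells (of size `leaf`) inside `node` that touch `view`.
void CullQuadtree(const Rect& node, const Rect& view, float leaf,
                  std::vector<Rect>& visible) {
    if (!Intersects(node, view))
        return;                              // skip the whole subtree
    float w = node.x1 - node.x0;
    if (w <= leaf) {                         // leaf cell: this chunk gets drawn
        visible.push_back(node);
        return;
    }
    float mx = node.x0 + w * 0.5f;
    float mz = node.z0 + (node.z1 - node.z0) * 0.5f;
    CullQuadtree({node.x0, node.z0, mx, mz}, view, leaf, visible);
    CullQuadtree({mx, node.z0, node.x1, mz}, view, leaf, visible);
    CullQuadtree({node.x0, mz, mx, node.z1}, view, leaf, visible);
    CullQuadtree({mx, mz, node.x1, node.z1}, view, leaf, visible);
}
```

A real version would test 3D AABBs against the camera frustum planes instead of a rectangle, but the recursion is the same.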


1) Just a conclusion: since I will always be facing almost the same number of triangles (it's a top-down third-person camera with a fixed zoom; maybe a little zoom will be allowed, but 99% of the time there will be none), I don't need to worry about LOD, correct?


2) Right now I'm using heightmaps to store the vertex heights of each terrain chunk; they are stored as a texture, and this way I'm getting about 1 MB per terrain chunk. But I need a better way to do this, because a plain heightmap doesn't let me do things like this:


(screenshot: sc2_diablo3_easter_egg.jpg) The terrain isn't continuous; to do this I'd need to store full x, y and z floats per vertex, which is expensive. Is there a better way to achieve the same result?
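On the cost side of question 2, one common trick (a sketch, not necessarily what those games do) is to keep x/z implicit in the grid and quantize only the height into 16 bits over a known range. A 128x128 chunk then costs 128*128*2 = 32 KB instead of 128*128*12 = 192 KB for full float3 positions, with a worst-case height error of (maxH - minH) / 65535:

```cpp
#include <cassert>
#include <cstdint>
#include <cmath>

// Quantize a height into 16 bits over the chunk's [minH, maxH] range.
uint16_t EncodeHeight(float h, float minH, float maxH) {
    float t = (h - minH) / (maxH - minH);        // normalize to 0..1
    return static_cast<uint16_t>(t * 65535.0f + 0.5f);
}

// Reconstruct the height from the stored 16-bit value.
float DecodeHeight(uint16_t q, float minH, float maxH) {
    return minH + (q / 65535.0f) * (maxH - minH);
}
```

This doesn't solve discontinuous terrain by itself; cliffs and overhangs usually come from separate meshes layered over the heightmap rather than from the heightmap format.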


3) Do they use a lot of meshes, or just bump/parallax occlusion mapping techniques?



(screenshot: look at the ground)


(screenshot: Diablo-3-6.png, the ground too)


4) Is a quadtree still a good idea for this, or is there a better way?


5) Each time I need to load an asset I load it using virtual memory; is this correct? (All my asset data (textures, meshes, etc.) is in edited custom file types.)


Sorry for my bad English; tutorials, books and examples are welcome too!

Spell special effects in DirectX

27 March 2013 - 10:48 PM

Hello, I'm currently working on a project using DirectX 10, and I'm wondering how I could achieve the same spell effects seen in games like Warcraft 3, Diablo, Torchlight, WoW, StarCraft 2, etc.


I know fire (and similar things) can be done with billboards, and particles are easy to implement, but what about spells like these:


(embedded video; 0:42 to the end).


Can someone point me to examples or articles I could read?

Light pre-pass with instanced skinning

22 March 2013 - 10:13 AM

Hello, I'm trying to add the Light Pre-Pass lighting method to my game engine, but since I use instanced skinning (hardware skinning), I don't know if rendering the geometry twice would be good (because I would need to skin twice).


Currently I'm doing forward lighting. My game uses a third-person camera (like Diablo and StarCraft) and I really, really need a good way to use many lights at the same time; almost every mesh in the scene is skinned (and instanced too, if there is more than one of the same type).


Does anyone know a good way to implement this, or a good alternative? I was thinking about using stream-out, but I don't know if it would work, because we're talking about a scene with around 200~300 skinned meshes, some of them instanced and all of them in different animation stages.


Other solutions and tutorials are welcome too! :)