About chrisendymion

  1. Hello raRaRa,   We're in the same boat! I'm working on a procedural planet engine too. My approach is close to yours: I'm using a quadtree and getting exactly the same artifacts starting at LOD 15. I don't even need to post a screenshot, because yours shows exactly the same problem. We could exchange notes on solving this.   Some differences between our projects: I use 6 grids (one per cube face). Coordinates are, for the moment, in single-precision float only. The grid size varies with parameters, but I most often use 256x256. I don't pass the 4 corners to the shaders, only an offset and a scale derived from the node being rendered. The grids always stay the same in memory; they are repositioned and rescaled per node. I use a heightmap and a normal map for each node, computed once per node between frames in a work queue. The LOD is based on the CDLOD algorithm, using each node's parent normal map for morphing.   Like you, I've thought about using local noise for the high LOD levels, but I haven't tried it yet.   My suspicion is jittering (running out of float precision), so I'll try rendering the grids relative to the eye position rather than the planet center, as described in this book: http://www.virtualglobebook.com/. Did you try that?   Good luck ;)
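The offset-and-scale scheme mentioned above can be sketched as follows. This is a minimal illustration, not the poster's actual code: it assumes a unit cube face parameterized in [0,1]^2, and all names are hypothetical.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch: map a single reusable [0,1]^2 grid onto a quadtree node.
// depth  : quadtree level of the node (root = 0)
// ix, iy : node indices within that level, each in [0, 2^depth)
struct NodeTransform { float offsetX, offsetY, scale; };

NodeTransform nodeTransform(int depth, int ix, int iy) {
    float scale = 1.0f / float(1u << depth);   // node covers 1/2^depth of the face
    return { ix * scale, iy * scale, scale };
}

// In a vertex shader this would be: facePos = offset + gridUV * scale,
// with only 'offset' and 'scale' sent as per-node constants.
float toFaceCoord(float gridU, const NodeTransform& t) {
    return t.offsetX + gridU * t.scale;
}
```

This is why only two per-node constants are needed: the grid vertices themselves never change.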
  2. I recommend this paper: http://www.vertexasylum.com/downloads/cdlod/cdlod_latest.pdf It doesn't answer every question, but there are some clues in it.
  3. Hello ;)   I'm having trouble understanding part of an algorithm for shadow volumes. The paper is by Eric Bruneton (thanks to him) and can be found here: http://www-ljk.imag.fr/Publications/Basilic/com.lmc.publi.PUBLI_Article@11e7cdda2f7_f64b69/article.pdf   Bruneton uses a texture to store deltaN, deltaZ, Zmin and Zmax.   You don't need to understand what deltaN or deltaZ are; my problem is with Zmin. Zmin is the distance from the camera to the nearest shadow-volume front face.   He writes:   "We associate with each pixel 4 values deltaN, deltaZ, Zmin, Zmax initialized to 0, 0, INFINITY, 0. In a first step we decrement (resp. increment) deltaN by 1 and deltaZ by the fragment depth z, and update Zmin and Zmax with z, for each front (resp. back) face of the shadow surface."   And in the rendering passes:   "We draw the shadow volume of the terrain into a deltaN, deltaZ, Zmin, Zmax texture. For this we use the ADD and MAX blending functions, disable depth write, and use a geometry shader that extrudes the silhouette edges (as seen from the sun)."   I understand the algorithm; everything is fine except the Zmin computation. Why is it initialized to infinity? The blend modes are ADD and MAX, so I assume ADD for RGB and MAX for alpha. With that, deltaN (red), deltaZ (green) and Zmax (alpha) are easy to compute. But ADD for Zmin (blue)? How do you get a minimum out of that?   I have some workarounds, but I'd like to understand how Bruneton did it.   I hope I was clear ;)   Thank you,   Chris
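For reference, one common workaround for exactly this problem (I can't say whether it is what Bruneton actually does) is to store the negated depth in the Zmin channel: since min(z) = -max(-z), a MAX combine on that channel tracks the minimum. Note this assumes a MAX blend on the channel in question, which is precisely why the described ADD setup is puzzling. A CPU-side illustration of the trick:

```cpp
#include <algorithm>
#include <cassert>
#include <limits>

// Sketch: track the minimum of a stream of depths using only a MAX-style
// combine, by storing the negated value. The channel starts at -INFINITY
// (i.e. Zmin starts at +INFINITY) and each fragment blends MAX(-z).
struct MinViaMax {
    float negZmin = -std::numeric_limits<float>::infinity();
    void blend(float z) { negZmin = std::max(negZmin, -z); } // MAX blend of -z
    float zmin() const { return -negZmin; }                  // recover the minimum
};
```

The same identity is why initializing the minimum to infinity makes sense: it is the identity element of the min operation.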
  4. So easy.. Shame on me ;)   Thank you both !!
  5. Hello ;)   I'm in an optimization phase for my project.   My shaders are currently compiled at build time (VS 2012) into "shadername.cso" files.   For some complex computations (atmospheric scattering, ...), many common functions are shared by multiple shaders, and they are duplicated in every HLSL file. That's not very convenient when changes are made :-(   Is there a simple way to compile the shaders with a single common HLSL file shared by all of them?   Thank you in advance.   Chris
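For what it's worth, HLSL does support #include directives when compiling through fxc or D3DCompile with an include handler, so the common functions can live in one shared file. To illustrate just the idea of sharing one common source across many shaders, here is a tiny CPU-side sketch that inlines a shared source string before compilation; all names are hypothetical, and the real fxc/D3DCompile include mechanism is the cleaner route.

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch: splice the contents of a shared "common" source
// string wherever a shader source contains the given include directive,
// before handing the result to the shader compiler.
// Assumes commonSrc does not itself contain the directive.
std::string inlineCommon(std::string shaderSrc,
                         const std::string& directive,  // e.g. "#include \"common.hlsli\""
                         const std::string& commonSrc) {
    std::string::size_type pos;
    while ((pos = shaderSrc.find(directive)) != std::string::npos)
        shaderSrc.replace(pos, directive.size(), commonSrc);
    return shaderSrc;
}
```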
  6. About fixed points

    Whoah! Thank you all so much for your help! I'm starting to understand; there are just a few more things to learn.   For the moment I'll concentrate on a single planet; I'll look at solar systems, galaxies, etc. later.   For very high precision, you say it's better to use int64. I think I need a class to convert it into more readable numbers (for coding)? Otherwise, with micrometer precision, I'll be dealing with very big plain integers (no decimal digits), which isn't very convenient. Any tips?   I understand the concept of using the camera position as the origin for more precision when converting to float. But won't subtracting the camera position from every vertex on the CPU be huge? There are a lot of vertices in a node's grid (times 6 faces)... Better to do it in the vertex shader? But if I do that, how do I send the coordinates to the shader, since the largest input format is R32G32B32 and I would need 3 x int64?   My grids are always the same; they are offset and scaled in the vertex shader based on node position and LOD.   In conclusion: when and where do I convert to float, and in what format do I send positions to the shaders?   Again, thank you for all your help ;)   PS: I know this is a hard concept for a beginner, but I'm here to learn and not afraid to spend hours on it.
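The camera-relative idea discussed above can be illustrated like this. A sketch only, assuming int64 world coordinates in micrometers (units and numbers chosen just for the demonstration): the subtraction happens in 64-bit integers on the CPU, and it only needs to be done once per node origin, not per vertex, since the grid vertices are already small offsets from that origin.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch: world positions stored as int64 micrometers.
// Converting a huge absolute coordinate straight to float loses fine
// detail; converting the small camera-relative difference keeps it.
float absoluteToFloat(int64_t worldUm) { return float(worldUm); }

float cameraRelative(int64_t worldUm, int64_t cameraUm) {
    return float(worldUm - cameraUm);  // small difference -> exact in float
}
```

At planetary magnitudes a float's spacing between representable values is hundreds of thousands of micrometers, which is exactly the jittering described earlier in the thread; near the camera the differences are tiny and float is more than enough.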
  7. About fixed points

    Thank you all for your answers!   Frob, you mention mixing number types. That's one of my doubts: do I need to convert the fixed point back to float? Float to integer loses nothing, but if I don't convert back, how will the shaders work, since the numbers will be far too big? I've read that a lot of engines use fixed point, so there must be a solution.   Stainless, thank you for the links, I'll read them now.   Vortez, I've read that using double is a bad solution; I tried it to form my own opinion, and the performance loss was too great.
  8. About fixed points

    Hello,   I'm in an optimization step for my little procedural engine. Until now I've worked with floats for full-planet coordinates, which caused artifacts at very high LOD due to float precision. After reading a lot of threads and articles, I'm considering fixed point (fractional integers) instead of float. I've never worked with fixed point, so there are some (many!) concepts I don't understand.   Where do I start using it, and when do I stop?   1- I want to choose the planet's radius freely, say 5629 kilometers, 194 meters and 29 centimeters. Using a 16-bit integer part and a 16-bit fractional part, the fixed-point radius would be 368914876 (5629.19429 * 65536), right? But in the code it starts out as a float (5629.19429, much more readable), so I must convert it before sending it to the shader?   2- My planet uses quadtree-based LOD (a cube with 6 grids). Do the grids have integer coordinates? How do I scale them to the radius? I imagine I must choose a power-of-two radius for the quadtree subdivisions?   3- If point.x is 368914876, that's a huge number in the shader (with the near/far frustum, Z buffer, etc.), so what do I do? Convert it back to float? I must scale somewhere... but if 1 equals 1 centimeter, it will still be very, very big?!   4- In the shaders, what happens when I apply the world, view and projection matrices? Is there a point where fixed point stops working?   There must be something I'm not seeing. I hope this was readable, because it's all very confusing to me ;)   Thank you,   Chris
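The 16.16 conversion in point 1 can be sketched as follows, as a minimal illustration with units in kilometers. Note that a 16.16 format only covers about ±32768 in the integer part, so a radius in kilometers fits, but finer base units would need a wider format such as int64.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Minimal 16.16 fixed-point sketch (values interpreted as kilometers here).
// 65536 = 2^16, the scale of the fractional part.
int32_t toFixed16_16(double v)    { return int32_t(std::llround(v * 65536.0)); }
double  fromFixed16_16(int32_t f) { return f / 65536.0; }
```

With rounding (rather than truncation) the radius in the question converts to 368914877; the quoted 368914876 is what truncation gives. Either way, the round trip back to a real number is accurate to within 1/65536 of a unit.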
  9. Hello Xenobrain,   Thank you very much for your post; it helps a lot. I'm on holiday right now, but I'll post some resulting ideas later.
  10. Hello Acharis, and thank you for your answer ;)   Perhaps I didn't explain myself correctly...   - My first goal was learning: learning how a full game engine works and how to make my own. I don't want the best one, like Unreal, Unity and the others, just mine.   - The demo game I want to make is about doing something with the current engine, not a big, real indie game. It will be a little, short, playable demo.   - My engine is fairly complete, separated into libraries: graphics, physics, scripting, input, etc. It's very flexible. But all the development was designed from the start around the procedural space game, so it's already exactly what I wanted. Why restart with a technology I don't know?   - I started with OpenGL but (I can't remember why) moved to DX later. Perhaps you're right, but I'm very happy with DX and it's too late now.   My request is for help finding some attractive elements to make the little demo fun to play.
  11. Hello,   I'm an indie dev working after work. I've always wanted to make my own graphics engine (DirectX) for procedural content. My final goal is a full 4X game with a complete storyline, split into episodes (each introducing more content). My engine is now in alpha, and I'm thinking about creating a little game for demo purposes.   Features that will be available:   - Procedural planet generation (not complete, but advanced)   - Volumetric lights, clouds and dynamic weather   - A physics engine for flying   - Except for the precomputed atmospheric scattering, all parameters can be changed in real time   For the demo, I'm hoping for a maximally realistic physics simulation: a little spaceship comes in from space at great speed and inserts into orbit to slow down, then drops to low orbit, descends, and touches down at a station. I don't want an arcade-style game but much more of a simulation, where errors are not forgiven: managing flight parameters, coordinates, gravity, power distribution between shields and engines, etc., all from a cockpit view. Touching down without damage should be very difficult. (Yes, I'm a Star Trek fan.)   I have the main idea in my head. But... it's not very fun yet. How can I add some attractive elements? Missions to achieve, time scores, rising levels...? What do you think?   This will just be a first little indie game for demo purposes (and free!), not a big AAA ;) And I'm making it alone...   Thank you,   Chris   PS: Some screens from the current dev version (so much work to do, and bugs to fix)
  12. Hello ;)   For my little graphics engine, I'm trying to add some volumetric effects (like clouds). To do that, I fill the entire viewing frustum with a 3D texture (128 slices at low resolution). Noise is applied to every slice, and a final pass collapses all the slices into one 2D texture, which is applied on top of the scene (full-screen quad) with alpha blending.   The volumetric clouds work very well, and I'm happy with them ;)   But now I have another problem. Since the final 2D texture has no depth information, my clouds overlap the world incorrectly. Imagine far-away clouds and close mountains: the mountains appear behind the clouds, when it should be the other way around. Some clouds need to be between the mountains and the camera, and others behind the mountains, depending on their distances. I tested using a depth map of the world (rendered from the camera) when the noise is applied, but I can't get it to work.   Any idea how to achieve this, or is the depth map my only solution?   I'm working with DirectX 11 (C++).   Thank you, and happy new year!   Chris
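The per-slice depth test described above is the usual fix: while accumulating the slices, compare each slice's view-space depth against the scene depth from the depth map, and stop (or skip) once a slice is behind solid geometry. A CPU-side sketch of that accumulation at one pixel; the names and the simple front-to-back alpha model are assumptions, not the poster's shader.

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch: composite cloud slices front to back at one pixel,
// letting only slices closer than the scene depth contribute.
struct Slice { float depth; float alpha; };

float compositeAlpha(const std::vector<Slice>& slices, float sceneDepth) {
    float alpha = 0.0f;                      // accumulated cloud opacity
    for (const Slice& s : slices) {          // slices sorted near -> far
        if (s.depth > sceneDepth) break;     // occluded by geometry: stop
        alpha += (1.0f - alpha) * s.alpha;   // front-to-back "over" blend
    }
    return alpha;
}
```

With this per-slice cutoff, a mountain between two cloud layers hides only the farther layer, which is exactly the "some clouds in front, some behind" behavior being asked for.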
  13. [SOLVED] Tessellation on IcoSphere

    Thank you for the link to the newer compiler! ;)
  14. [SOLVED] Tessellation on IcoSphere

    Oh... Never mind, forget all my posts: all the trouble was caused by the June 2010 SDK! Everything works very well now ;)   Thank you again for your help!   If someone has the same problem, add this to the hull shader file:   #if D3DX_VERSION == 0xa2b   #pragma ruledisable 0x0802405f   #endif   All that drawing and posting time for one little bug...