

Member Since 07 Jan 2000
Offline Last Active Today, 05:58 AM

Posts I've Made

In Topic: Procedural Planet - GPU normal map artifacts

01 October 2014 - 06:18 AM

Your GLSL code shows that you use FP32, not FP64, so those artifacts are expected at LODs >= 15.


One solution is to use FP64 ( dvec3 ); however, not all GPUs support all the operations you need for procedural noise.


In the I-Novae engine, I ended up using a hybrid solution: FP64 shader emulation. You can read more about it on this page:



For performance reasons many operations still work in FP32 ( only the inputs to the procedural function, i.e. the planet position, use double emulation ), but that alone significantly improves the precision at the deepest LOD levels. In my tests, I can go up to level 18 with that "trick".
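The gist of the emulation — sketched here in Python, with NumPy float32 standing in for shader FP32; this is the generic "double-float" idea, not the actual I-Novae shader code — is to carry each FP64 input as a hi/lo pair of floats, so the low part restores the digits a single float has already rounded away:

```python
import numpy as np

def split_double(x):
    """Split a float64 into a (hi, lo) float32 pair with hi + lo ~= x."""
    hi = np.float32(x)
    lo = np.float32(x - np.float64(hi))
    return hi, lo

def df_sub(a, b):
    """Emulated difference b - a of two hi/lo pairs, collapsed to float32.
    When a and b are close, the high parts cancel exactly and the low
    parts recover the digits a plain float32 subtraction would lose."""
    return np.float32(b[0] - a[0]) + np.float32(b[1] - a[1])

# Two points ~33 cm apart at a planet-sized coordinate (~6350 km):
x1, x2 = 6_350_000.123, 6_350_000.456

naive = np.float32(x2) - np.float32(x1)  # 0.5 -- one float32 ulp at this magnitude
emulated = df_sub(split_double(x1), split_double(x2))
print(naive, emulated)  # 0.5 vs ~0.333
```

Only the subtraction of the large positions needs the pair; once you are in camera-relative coordinates the magnitudes are small and plain FP32 is fine again, which is why the rest of the noise pipeline can stay single precision.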



In Topic: The possible future of games

23 April 2010 - 12:20 AM

For a technology that claims to have unlimited detail, I certainly think the end result looks like crap. Now, I understand those guys are no artists: the colors are horribly chosen and the scene is repetitive, but that's fine, I don't mind that part.

However, if you look carefully at the video when the camera gets close to some objects, like plants, you'll see that the "voxel" resolution is very low. It'd be equivalent to something like a 128x128 texture per square meter, while nowadays any half-decent 3D game will have ground and walls with 2048x2048 textures.

Which brings me to the question: if you really have the technology to display unlimited detail, why don't you demonstrate it in your videos? At the very least, show a virtual resolution equivalent to what a 3D game provides.

I don't doubt that the technology works, but the "unlimited" part is pure marketing bullshit. That, and it's all static. Show me the same scene with destructible walls, tons of walking characters and a high resolution, then I'll be impressed.


In Topic: If you could get permission to remake a game, what would it be?

18 April 2010 - 09:17 PM

Magic Carpet :)

In Topic: TitaniumGL, opengl multiwrapper for your game (opengl,d3d,multicore soft-render)

14 April 2010 - 09:37 PM

If your market is small indie developers who release small platform/puzzle games, there could be some interest in it.

However I agree with other posters that to be more generally useful, it needs to support GL 2. No offense, but Quake 3 and Return to Castle Wolfenstein are almost a decade old. You're not proving anything by supporting them.

By the way:

TitaniumGL is also capable, to render your game with multicore cpu rendering

Can you elaborate? How does it do that? Have you done any benchmarks?


In Topic: Perlin(ish) noise, and floating point precision.

12 April 2010 - 09:44 PM

Original post by bluntman
Ah, exactly who I was thinking of when I said "someone" :). Thanks for the reply!
So on your planets, what detail level do you actually go to? Have you made them 100% real world scale, or just close enough to look right?

It is 100% real-world scale, and my test planet has a radius of 6350 km. I've found that I start to get precision issues at depths 13-14, but those are still acceptable. It gets much worse at levels 15-16, and becomes totally unacceptable above 16. I need depth 16-17 to get down to meter resolution at the ground surface.
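A quick back-of-the-envelope check (my numbers, not from the original post) shows why meter-scale depths are the breaking point: the spacing between adjacent float32 values at planet-surface coordinates is already half a meter, while float64 spacing is sub-nanometer.

```python
import math

R = 6_350_000.0  # the 6350 km test planet, in meters

def ulp(magnitude, significand_bits):
    """Spacing between adjacent floats with the given significand size,
    near the given magnitude (assumes a normal binary float)."""
    return 2.0 ** (math.floor(math.log2(magnitude)) - significand_bits + 1)

print(ulp(R, 24))  # float32 (24-bit significand): 0.5 m -- same order as
                   # the meter resolution wanted at depth 16-17
print(ulp(R, 53))  # float64: ~9.3e-10 m, far below any terrain feature
```

So vertices expressed in absolute planet coordinates snap to a 0.5 m grid in FP32, which matches artifacts appearing a few LOD levels before meter resolution.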

Originally I generated the procedural geometry (mesh vertices) on the CPU so I could use doubles, and had no particular precision issues.

Original post by bluntman
I remember you mentioned somewhere that you managed to move your noise generation onto the GPU (or maybe it was that you were going to), did you manage, and if so, did you manage to still get around the precision problems (as obviously there is no double support in 99% of cards)?

Correct, that was me. One thing I was dissatisfied with in the previous version was that I only generated the mesh on the CPU, and I wanted to generate normal maps too, which was too slow on the CPU (imagine generating a 512x512 unique texture per chunk, each texel requiring 40+ octaves of noise). So I implemented the procedural generation on the GPU, and now use it both for geometry (with a read-back to the CPU) and for normal map generation.
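The per-texel cost mentioned above comes from fractal (fBm) summation of a base noise — a generic sketch, with a toy hash noise standing in for real Perlin noise and standard octave parameters assumed:

```python
import math

def value_noise(x, y):
    """Toy hash-based noise in [-1, 1); a stand-in for real Perlin noise."""
    h = math.sin(x * 12.9898 + y * 78.233) * 43758.5453
    return 2.0 * (h - math.floor(h)) - 1.0

def fbm(x, y, octaves=40, lacunarity=2.0, gain=0.5):
    """Sum `octaves` copies of the base noise at doubling frequency and
    halving amplitude -- the work done once per normal-map texel."""
    total, amp, freq = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq, y * freq)
        amp *= gain
        freq *= lacunarity
    return total

# A 512x512 normal map at 40 octaves costs 512 * 512 * 40 noise calls:
print(512 * 512 * 40)  # 10485760 evaluations per chunk
```

Ten million noise evaluations per chunk is the kind of embarrassingly parallel, branch-free workload that maps well onto a fragment shader, which is the rationale for moving it to the GPU.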

To this day, I still have the precision issues, and since it runs on the GPU I cannot use doubles (yet). I actually plan on moving the mesh generation back to the CPU, just to be able to use doubles again and fix the "cracks" in the terrain caused by that lack of precision, while keeping the normal maps on the GPU with the limited precision, but also capping the maximum depth the normal maps can reach, probably around depth 14-15.

Not an ideal solution but it's the best I can think of at the moment.

Original post by bluntman
So I converted my entire algorithm over to using double for everything (iteratively, as each successive change didn't fix the quantization problems), and I still have the same problems. I'm thinking it may have to do with using a local transform at each chunk-lod root. I have heard that double precision numbers have the capability to represent coordinates accurate to within a cm in billions of km.

That's correct, doubles have enough precision for a planetary engine. If I remember correctly, they were good enough for millimeter accuracy at a distance of 100 AUs (1 AU = the Sun-Earth distance). If you're using doubles and still have precision issues, you must have a bug in your code somewhere, like a cast to float that you forgot.
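The 100 AU figure holds up arithmetically — a quick check of my own, assuming 1 AU ≈ 1.496e11 m:

```python
import math

AU = 1.496e11                # meters (mean Sun-Earth distance)
d = 100 * AU                 # 1.496e13 m from the origin
# float64 has a 53-bit significand; adjacent values near d are spaced:
ulp64 = 2.0 ** (math.floor(math.log2(d)) - 52)
print(ulp64)                 # ~0.002 m, i.e. about 2 mm of resolution
```

So at 100 AU a double still resolves roughly 2 mm, consistent with the "millimeter accuracy" recollection; quantization coarser than that really does point to an accidental float somewhere in the pipeline.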