
voodoodrul

Member Since 19 Jun 2012
Offline Last Active Nov 16 2013 12:45 PM

Posts I've Made

In Topic: glTranslatef() very far from origin results in gaps between neighboring trans...

13 November 2013 - 12:04 AM

I switched to local rendering, placing chunks relative to the camera and keeping the camera at (0,y,0), at least as far as glTranslatef() is concerned. I also switched to doubles throughout. I can now travel to locations extremely far from the origin.
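
For anyone who finds this later, here is roughly what the per-chunk placement ends up looking like. This is only a sketch, not my actual code: it assumes LWJGL's GL11 bindings, and the camera/chunk fields (camX, chunk.worldX, etc.) are placeholder names.

    import org.lwjgl.opengl.GL11;

    public class ChunkRenderer {
        // Camera position tracked in doubles so huge coordinates stay precise enough.
        double camX, camY, camZ;

        void renderChunk(Chunk chunk) {
            GL11.glPushMatrix();
            // The chunk-to-camera difference stays small even when both positions
            // are huge, so narrowing it to float just before the GL call loses
            // almost nothing.
            double dx = chunk.worldX - camX;
            double dy = chunk.worldY - camY;
            double dz = chunk.worldZ - camZ;
            GL11.glTranslatef((float) dx, (float) dy, (float) dz);
            chunk.draw(); // placeholder: issue the chunk's geometry here
            GL11.glPopMatrix();
        }

        // Placeholder chunk type for the sketch.
        static class Chunk {
            double worldX, worldY, worldZ;
            void draw() { /* render the chunk's VBO */ }
        }
    }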

 

Thanks again, Wintertime!


In Topic: glTranslatef() very far from origin results in gaps between neighboring trans...

12 November 2013 - 08:40 PM

Thanks wintertime! You're suggesting an idea similar to the "localized world" I was thinking of. That did indeed fix the gaps. However, now I have the same problem with my camera - it jitters around when far from the origin, simply because floats don't have enough precision left at those coordinates.

 

My outer render loop does:

 

1) glLoadIdentity();

2) Look through the camera - rotate pitch/yaw, then translate to some obscenely large x,y,z

    For some reason, I have to negate all the values when I do this.  glTranslatef(-vectorPos.x, -vectorPos.y, -vectorPos.z);

3) Render the world

    First translate to camera position

4) For each chunk, translate only by the difference between the chunk's position and the camera position.
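
Roughly, in code, the loop looks like the sketch below. It's only a sketch with made-up names (pitch, yaw, vectorPos, Chunk) using LWJGL's GL11 calls; the negation in step 2 is just the usual inverse view transform - moving the world opposite to the camera.

    // Sketch of the steps above; assumes LWJGL (org.lwjgl.opengl.GL11) and
    // that pitch, yaw, vectorPos, and chunks are fields defined elsewhere.
    void renderFrame() {
        GL11.glLoadIdentity();                                        // 1)

        // 2) Camera: rotate pitch/yaw, then translate by the negated camera
        //    position (the view transform is the inverse of the camera's own
        //    transform, which is why every component gets negated).
        GL11.glRotatef(pitch, 1f, 0f, 0f);
        GL11.glRotatef(yaw, 0f, 1f, 0f);
        GL11.glTranslatef(-vectorPos.x, -vectorPos.y, -vectorPos.z);

        // 3) World: translate back out to the camera position...
        GL11.glTranslatef(vectorPos.x, vectorPos.y, vectorPos.z);

        // 4) ...then, per chunk, translate only by the chunk-to-camera difference.
        for (Chunk c : chunks) {
            GL11.glPushMatrix();
            GL11.glTranslatef(c.x - vectorPos.x, c.y - vectorPos.y, c.z - vectorPos.z);
            c.render();
            GL11.glPopMatrix();
        }
    }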

 

I wonder if I'm doing this all wrong. Should I keep the entire world near the origin (camera essentially fixed at (0,y,0)) and just move the world around relative to the origin?

 

Eh... that probably won't help. The fundamental problem is that floats don't have enough precision left at those coordinates for movement and acceleration to be smooth. I guess I need to convert to doubles instead of floats, or accept the fact that things jitter around when I go to (500000, 0, 500000) and beyond.
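
To put a rough number on it: near x = 500000 adjacent float values are about 0.03 apart, so any movement smaller than that just snaps to the next representable value. A quick check (plain Java; Math.ulp() returns the spacing between adjacent values):

    public class FloatSpacing {
        public static void main(String[] args) {
            System.out.println(Math.ulp(1000.0f));   // 6.1E-5   - fine for smooth motion
            System.out.println(Math.ulp(500000.0f)); // 0.03125  - visible snapping/jitter
            System.out.println(Math.ulp(500000.0));  // ~5.8E-11 - as a double, plenty
        }
    }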


In Topic: Procedural terrain generation - "stitching" noise together

14 July 2013 - 10:36 PM

As mentioned above, noise generated by a mathematical function will extend continuously no matter how large the region, so if the only input into your noise algorithm is the x,y position in the world, you can generate it in chunks the same way that you'd generate the whole world - just ensure that the x,y positions that get used are offset based on the chunk location.
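
In code terms that just means offsetting the sample coordinates by the chunk's world position. A sketch along these lines, where noise2d() stands in for whatever continuous 2D noise function is being used and CHUNK_SIZE is a placeholder:

    // Sketch: fill one chunk's heightmap by sampling noise at world-space
    // coordinates, so adjacent chunks line up exactly along their edges.
    static final int CHUNK_SIZE = 16;

    double[][] generateChunkHeights(int chunkX, int chunkY) {
        double[][] heights = new double[CHUNK_SIZE][CHUNK_SIZE];
        for (int x = 0; x < CHUNK_SIZE; x++) {
            for (int y = 0; y < CHUNK_SIZE; y++) {
                double worldX = chunkX * CHUNK_SIZE + x;
                double worldY = chunkY * CHUNK_SIZE + y;
                // 0.01 is an arbitrary frequency scale; the key point is that
                // the noise input depends only on world position, not the chunk.
                heights[x][y] = noise2d(worldX * 0.01, worldY * 0.01);
            }
        }
        return heights;
    }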

 

But if you use different seed values for different areas - e.g. to implement biomes or some sort of designer-specified input - then you essentially have separate regions that will be discontinuous. I solved this by using bilinear blending so that every point is actually a mix of the 4 nearest regions.
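
A sketch of that blending, where regionValue() is a placeholder for whatever a given region (seed/biome) would produce at a point:

    // Sketch: bilinearly blend the values of the four nearest regions so the
    // result stays continuous across region borders. regionValue(rx, ry, x, y)
    // is a placeholder for what region (rx, ry) would generate at (x, y).
    double blendedValue(double worldX, double worldY, double regionSize) {
        // Shift by half a region so region centers land on integer grid points.
        double gx = worldX / regionSize - 0.5;
        double gy = worldY / regionSize - 0.5;
        int rx = (int) Math.floor(gx);
        int ry = (int) Math.floor(gy);
        double fx = gx - rx; // fractional position between region centers, [0, 1)
        double fy = gy - ry;

        double v00 = regionValue(rx,     ry,     worldX, worldY);
        double v10 = regionValue(rx + 1, ry,     worldX, worldY);
        double v01 = regionValue(rx,     ry + 1, worldX, worldY);
        double v11 = regionValue(rx + 1, ry + 1, worldX, worldY);

        // Standard bilinear interpolation of the four region values.
        double top    = v00 * (1 - fx) + v10 * fx;
        double bottom = v01 * (1 - fx) + v11 * fx;
        return top * (1 - fy) + bottom * fy;
    }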

 

(That sort of thing seems to come up a lot - in this thread here http://www.gamedev.net/topic/643213-procedural-terrain-and-biomes/ and in my thread here http://www.gamedev.net/topic/639342-infinite-terrain-generation-in-chunks-need-hints-on-handling-chunk-edges/ )

Awesome. This was my next logical question and it's good to know that blending nearest regions is a legitimate way to go about it - when needed. 


In Topic: Procedural terrain generation - "stitching" noise together

14 July 2013 - 01:16 PM

You don't really need to do anything weird or tricky like you imagine if you are using Perlin noise. Perlin noise is continuous across the entire range of float/double (whatever you use) and while it does have a period due to the underlying implementation, you can use hashing tricks to make the period so large that it might as well be infinite as far as the player is concerned. If you use double precision coordinates, your world will be continuous and effectively infinite. At that point it's a simple matter of mapping out sub-regions to a discrete map chunk to get your geography.

Thanks! Looks like I was overcomplicating things... The Perlin noise algorithm I was using was designed to generate the entire map in one go. I'm now trying to hijack https://code.google.com/p/mikeralib/source/browse/trunk/Mikera/src/main/java/mikera/math/PerlinNoise.java for this purpose, but it's highly optimized code and the algorithm is difficult to interpret. I'll keep plugging away at it until it works.


In Topic: Reusing VBOs

06 July 2013 - 12:31 PM

Thanks MarkS. The community here is really helpful. I think I've solved most of my current issues and I'm getting some really decent performance out of my prototype voxel renderer. I want to share it with the world, so I'll post a link to my app here:

 

https://voodoo.arcanelogic.net/CYDI-latest.jar

 

If anyone cares, this represents about 4-5 weeks of entirely from-scratch effort to teach myself OpenGL. Despite having been a programmer for a few years, I had never worked with graphics programming before, and I just wanted to see how long it would take to get something simple off the ground.

 

My renderer uses Perlin noise to generate a randomly seeded, unique heightmap on each app restart, and uses chunked rendering to page chunks/tiles in and out.

 

Controls are Minecraft-like: Space to jump, double-tap Space to fly up, Shift to fly down. Turn off camera collision to fly outside the chunk boundaries. Move around faster with the +/- keys. Then turn up the view distance (F5), turn off vsync (F8), and take in the view.

 

Oddly, I get really excellent performance on Intel integrated graphics - on an HD 5000 I generally get 60 fps with a million exposed block faces, and 250-300 fps at view distance 11, which I think is plenty. It still holds 60 fps at view distance 33, which I think is nutty, especially for integrated graphics...

 

My GTX 690, though, holds steady at ~2450 fps. :)

