
DementedCarrot

Member Since 18 May 2011

#5301405 Dealing With Quadtree Terrain Cracks

Posted by DementedCarrot on 19 July 2016 - 02:18 PM

1) Detect edge vertices when generating a mesh.
2) Detect which edge vertices are 'odd'; these are the ones that mismatch with the next LOD and create holes.
3) Get the neighboring even-vertex edge positions.
4) Set the odd-vertex position to the average of the two neighboring even positions.

This should close your cracks: the coarser mesh's edge runs in a straight line between the two even positions, and the more detailed mesh simply gains an extra sample point at the halfway mark that now sits on that line.
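Here is a minimal sketch of step 4, assuming one edge's vertex heights are stored in order (an odd count, e.g. a 2^n + 1 grid edge); the function name and storage layout are just for illustration:

#include <cstddef>
#include <vector>

// Snap every odd edge vertex onto the line segment between its even neighbors,
// so the fine edge matches what the coarser LOD draws.
void WeldEdge(std::vector<float>& edgeHeights)
{
    for (std::size_t i = 1; i + 1 < edgeHeights.size(); i += 2) // odd vertices
    {
        edgeHeights[i] = 0.5f * (edgeHeights[i - 1] + edgeHeights[i + 1]);
    }
}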

Then your next problem is knowing when to close the cracks on a side. The most straightforward approach is probably to regenerate a mesh whenever a neighboring mesh changes LOD.

There is another approach where you store two positions per vertex (the main position and an edge-transition position) plus a bitflag mask for which edge a vertex is on (zero if it isn't on an edge). When you render the mesh you pass in a bitflag for which edges of the mesh border higher LODs. In the shader you check the vertex's edge mask against the neighboring-LOD bitmask, and if the vertex sits on an edge bordering a higher LOD you output the transition position instead.

The extra vertex data isn't that bad in the grand scheme of things, and this lets you reuse the same pre-generated mesh without recalculating it every time a neighboring LOD changes. All you have to do is pass in the neighboring-higher-LOD bitmask.
 




#5283056 Why didn't somebody tell me?

Posted by DementedCarrot on 23 March 2016 - 09:56 PM

Use [Alt + Print Screen] to take a screenshot of the active window instead of everything. It's especially useful if you ever need a screenshot of one specific window and you have 2+ monitors.

Edit to fit the topic:

I was busy cropping a specific window out of a three-monitor Print Screen capture when I was told about it.




#5282751 Billboard grass rendering-visual bug

Posted by DementedCarrot on 22 March 2016 - 04:20 PM

My guess is that you are doing one-bit alpha testing on the grass, and that the grass texture is mipmapped. The mipmap downsampling produces results that make your 1-bit alpha coverage disappear in the higher mip levels.

In the past I have fixed this with a distance-field alpha instead of a plain 1-bit alpha channel. As long as you stay with a 1-bit alpha test, all you can do is adjust the threshold to make the far mips 'thinner' or 'thicker', and both versions tend to look bad! A thinner threshold makes grass disappear exactly like you are seeing.

Here's the paper: http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf

 

You use that technique, but for grass instead of text. A distance field is much more tolerant of mip downsampling.
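For reference, a naive offline sketch of building that distance field from a 1-bit alpha mask (brute force and slow; real tools use faster sweeps, and the function name and 'spread' parameter are assumptions):

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// For each pixel, find the distance to the nearest pixel of opposite coverage,
// clamp it to 'spread', and remap so 0.5 lands exactly on the blade edge.
std::vector<std::uint8_t> BuildDistanceField(const std::vector<std::uint8_t>& alpha,
                                             int w, int h, float spread)
{
    std::vector<std::uint8_t> out(w * h);
    for (int y = 0; y < h; ++y)
    for (int x = 0; x < w; ++x)
    {
        const bool inside = alpha[y * w + x] >= 128;
        float nearest = spread;
        for (int oy = 0; oy < h; ++oy)
        for (int ox = 0; ox < w; ++ox)
            if ((alpha[oy * w + ox] >= 128) != inside)
                nearest = std::min(nearest,
                    std::hypot(float(ox - x), float(oy - y)));
        const float signedDist = inside ? nearest : -nearest;
        const float norm = 0.5f + 0.5f * signedDist / spread; // 0..1, edge at 0.5
        out[y * w + x] = std::uint8_t(std::clamp(norm, 0.0f, 1.0f) * 255.0f);
    }
    return out;
}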




#5216969 Procedural texture smoothing

Posted by DementedCarrot on 16 March 2015 - 06:03 PM

It looks like that river texture is being sampled with a point sampler: the nearest river pixel value is returned, which gives it the blocky look. You need a texture sampler with filtering that returns a blend of nearby river pixel values. Try a bilinear sampler. If that doesn't look good enough, generate mipmaps (as mhagain mentioned) and use a trilinear or anisotropic sampler so the texture values blend together smoothly.
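In OpenGL terms the fix looks roughly like this ('riverTexture' is a stand-in name, and GL 3+ is assumed for glGenerateMipmap):

glBindTexture(GL_TEXTURE_2D, riverTexture);
glGenerateMipmap(GL_TEXTURE_2D); // build the mip chain mentioned above
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);               // bilinear up close
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); // trilinear in the distance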




#5212830 How do I know if I'm an intermediateprogramming level?

Posted by DementedCarrot on 24 February 2015 - 08:18 PM

I consider an intermediate level to be when you are comfortable enough with a language that you fight with concepts instead of the language you are working with; when your thought process becomes "How does this work, and what's the best way to make it happen?" instead of "How do I do this, and why is it not compiling?" You are 'intermediate' when you can comfortably express your thoughts in code and are more concerned with the concepts behind making a piece of code work the way you want. You will know the right questions to ask to accomplish roughly anything you want to do.

 

Beyond that I consider an advanced level to be when you have specific knowledge about different types of programming. These can include graphics, networking, front/back-end web development, etc. You start to learn the best practices, interesting ways of doing things, and the specifics of a field. It's best not to put a label on how "advanced" you are, because at this point it really just depends on what you know.




#5208758 Why is scaling an mmo so difficult?

Posted by DementedCarrot on 04 February 2015 - 05:47 PM

I don't think this has been done, and there's probably a good reason, but does it seem feasible to use a cloud service to augment your own servers? If your net code were built to mirror a particular cloud service, you could write an abstracted "spin up an instance" function that prefers your own servers and falls back to the cloud service when yours are busy. This might not work well if the game requires a lot of interaction between your own instances and the cloud instances, though. I think it could work well as a way to keep the game available to everyone who wants it while you take your time expanding your own server capacity.
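A purely illustrative sketch of the kind of abstraction I mean (every name here is hypothetical):

struct ServerInstance; // opaque handle to a running game instance

class InstanceProvider {
public:
    virtual ~InstanceProvider() = default;
    virtual bool HasCapacity() const = 0;         // room left on this provider?
    virtual ServerInstance* SpinUpInstance() = 0; // boot a new game instance
};

// Prefer your own hardware; fall back to the cloud service when it's full.
ServerInstance* SpinUpPreferred(InstanceProvider& own, InstanceProvider& cloud)
{
    return own.HasCapacity() ? own.SpinUpInstance() : cloud.SpinUpInstance();
}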




#5208282 Cascaded Shadow Maps Look Bad

Posted by DementedCarrot on 02 February 2015 - 04:26 PM

You need to filter/anti-alias your shadow maps! The most straightforward approach is percentage-closer filtering (PCF), where you take a number of samples around the shadow-map projection and set the shadow intensity by how many of those samples are in shadow. Variations of this approach give you soft shadows that fall off nicely. http://http.developer.nvidia.com/GPUGems/gpugems_ch11.html is a good first read. An alternative is something like variance shadow maps, which have the advantage that you can pre-filter the shadow maps for soft shadows before applying them to the scene. There is a lot of reading material on shadow-map filtering.
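As a rough illustration, here is the PCF idea written as plain CPU code over a depth array; in practice this loop lives in the pixel shader and uses a comparison sampler, and the function and bias parameter are just for the sketch:

#include <algorithm>

// 3x3 percentage-closer filter: count how many nearby shadow-map texels the
// receiver passes, and return the lit fraction instead of a hard 0 or 1.
float PcfShadow(const float* shadowMap, int width, int height,
                int u, int v, float receiverDepth, float bias)
{
    int lit = 0;
    for (int dy = -1; dy <= 1; ++dy)
    for (int dx = -1; dx <= 1; ++dx)
    {
        const int x = std::clamp(u + dx, 0, width - 1);
        const int y = std::clamp(v + dy, 0, height - 1);
        if (receiverDepth - bias <= shadowMap[y * width + x])
            ++lit;
    }
    return lit / 9.0f; // 0 = fully shadowed, 1 = fully lit
}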
 
You could also reduce the effect by making your shadow maps larger and your view distance smaller (if you are frustum-fitting your cascades). It isn't production-ready yet, but there is ongoing research into fitting shadow cascades to the depth ranges of the world that actually matter; frustum-fitting schemes generally waste shadow-map space on depth ranges that can't be seen. For instance, if you are looking at a nearby wall, you could focus shadow-map resolution near the wall rather than spreading it across the full view distance of world hidden behind it.



#5207987 Terrain sculpting

Posted by DementedCarrot on 31 January 2015 - 08:43 PM

If there's a maximum sculpt size, you could write the updated values into a smaller texture and apply them to the full-size texture on the GPU by rendering the smaller texture onto the larger one. Bind the full-size texture as a render target, have the vertex shader position a screen quad (via an orthographic projection) so that only the modified part of the full-size texture is touched, and have the pixel shader sample from the small update texture and write out to the large one. That cuts the upload bandwidth way down, and I think it could easily handle sizes larger than 16x16.
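In GL the draw would look something like this (all handles are assumed to already exist, and the viewport stands in for the orthographic projection to restrict the write region):

void UpdateHeightmapRegion(GLuint heightmapFbo, GLuint copyProgram,
                           GLuint brushTexture, GLuint quadVao,
                           int x, int y, int w, int h)
{
    glBindFramebuffer(GL_FRAMEBUFFER, heightmapFbo); // full-size texture as target
    glViewport(x, y, w, h);                          // only the edited rectangle
    glUseProgram(copyProgram);                       // shader samples the brush tex
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, brushTexture);
    glBindVertexArray(quadVao);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);           // one quad = one sub-update
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}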

You could go a step further if you wanted and do all of the updating on the GPU. You could get fancy with render targets and write some shaders to make the modifications. Then all you would have to send over to the GPU to update a texture is an operation (up/down I presume), a position, a delta-time, and a strength. Distance strength-falloffs could be done in the shader too. This would probably be faster than doing the updates on the CPU, heh. You would ultimately have to read the data back from the GPU if you want to save the modified values. You would just have to make sure the read-back to the CPU happens after a person is done holding down a modification to avoid some render stalls.

I can help you plan this more depending on what you want to do. Are you using GL or DX?




#5205924 Projectiles in a video game

Posted by DementedCarrot on 22 January 2015 - 12:00 AM

As Nypyren linked, it's really good to use an object pool. Projectiles and particle systems create lots of short-lifetime objects that are constantly going in and out of existence, and dynamic memory allocation is actually pretty expensive compared to logic and math. It takes time for requested memory to be allocated and handed back to you, and over time lots of dynamic allocation can fragment memory.

Fragmented memory slows things down because the memory relevant to an algorithm ends up spread across random locations; it is faster to read through contiguous or nearby memory in a straight line than to jump between random addresses. Pool allocation is great for cache coherency because you allocate the space for many objects at once, which puts them close together in memory. It is also faster than dynamic allocation because the pool can immediately hand you back an object that was already allocated. Memory allocators are a fun topic with a lot of internet resources.

The best approach is to write a pool that holds your projectiles. When a bullet is fired you grab an object from the pool, and when it hits something you return that object to the pool to be reused later. Done correctly, there is no allocation or deallocation to slow the processing of your bullets down.
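A minimal fixed-capacity pool sketch ('Projectile' and its field layout are placeholders):

#include <cstddef>
#include <vector>

struct Projectile { float x, y, vx, vy; bool alive; };

class ProjectilePool {
public:
    explicit ProjectilePool(std::size_t capacity) : pool(capacity) {
        freeList.reserve(capacity);
        for (std::size_t i = 0; i < capacity; ++i)
            freeList.push_back(&pool[i]);          // one contiguous allocation
    }
    Projectile* Acquire() {                        // called when a bullet fires
        if (freeList.empty()) return nullptr;      // exhausted: grow or drop
        Projectile* p = freeList.back();
        freeList.pop_back();
        p->alive = true;
        return p;
    }
    void Release(Projectile* p) {                  // called on impact or expiry
        p->alive = false;
        freeList.push_back(p);
    }
private:
    std::vector<Projectile> pool;                  // never reallocated
    std::vector<Projectile*> freeList;             // slots ready for reuse
};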




#5191127 How do triangle strips improve cache coherency ?

Posted by DementedCarrot on 04 November 2014 - 09:41 AM

[attached image: zIK6m.png]

They improve cache coherency because the GPU reads straight through the vertex buffer. Cache misses are minimized because it walks the vertex buffer memory in a straight line, and every triangle after the first needs only one additional vertex (the next one in the buffer) to form a full triangle. When you index vertices, you are potentially jumping between random locations anywhere in the vertex buffer, depending on how the mesh's verts are connected.
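A quick sketch of how a strip expands into triangles ('ExpandStrip' is just for illustration):

#include <cstdio>

// Triangle i uses vertices i, i+1, i+2, so each new triangle reads exactly one
// new vertex and the GPU walks the buffer front to back with no random jumps.
void ExpandStrip(const int* v, int vertexCount)
{
    for (int i = 0; i + 2 < vertexCount; ++i)
    {
        if (i % 2 == 0) // winding alternates so every triangle faces the same way
            std::printf("tri(%d, %d, %d)\n", v[i], v[i + 1], v[i + 2]);
        else
            std::printf("tri(%d, %d, %d)\n", v[i + 1], v[i], v[i + 2]);
    }
}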




#5190195 What is a lobby server?

Posted by DementedCarrot on 30 October 2014 - 01:20 PM

Don't forget about RakNet either! It was just open-sourced under a BSD license after Oculus bought it. It takes care of a lot of networking work, including packet priority/reliability, data replication across client/server, and events, and it all runs on a nice reliable-UDP ("TCP over UDP") scheme that keeps things fast. It can also communicate between 32-bit and 64-bit clients and servers with no issue. It's a pretty mature library.

 

It is only free on desktop platforms, however; it will cost you money if you want to branch out to consoles.




#5175385 Quick Multitexturing Question - Why is it necessary here to divide by the num...

Posted by DementedCarrot on 21 August 2014 - 08:01 PM

Also, don't let color averaging stop you there with texture blending!

If you use linear interpolation you can blend any amount of one texture into another. With a lerp you can blend in more or less of a texture to taste, as long as the interpolation value is between 0 and 1:



vec3 red = vec3(1,0,0);
vec3 black = vec3(0,0,0);
vec3 mixedColor = mix(red, black, 0.25);

// This gives you 75% Red and 25% Black.

Another cool application is smooth texture blending. You can use color lerping on outdoor terrain to seamlessly blend different textures together, like grass and dirt, in irregular ways that break up the texturing on a mesh so it isn't uniform. You give different vertices on the mesh different lerp parameters, and vertex interpolation fills in all the values in between, so it fades from one blend percentage to the other. Check out the screenshot of the day and notice the texture blending on the ground in the back: http://www.gamedev.net/page/showdown/view.html/_/slush-games-r46850

 

Texture blending is pretty handy.




#5147532 Use 64bit precision (GPU)

Posted by DementedCarrot on 16 April 2014 - 09:31 PM

If you want to render stuff relative to the eye in float space using doubles, you:

 

1. Use doubles for your position vectors.

2. Use the double vector for every object position, and for your camera.

 

Then you have to translate your positions into a float-capable space for rendering. You translate every object position to get the position relative to the eye with:



DoubleVector3 objectPosition = object.somePosition;
DoubleVector3 cameraPosition = camera.position;
DoubleVector3 doubleRelativePosition = objectPosition - cameraPosition;

// When you translate the object by the camera position, the resulting number is representable by a float.
// Just cast the double-vector components down to floats!

FloatVector3 relativePosition;
relativePosition.x = (float)doubleRelativePosition.x;
relativePosition.y = (float)doubleRelativePosition.y;
relativePosition.z = (float)doubleRelativePosition.z;

and then that's the position you pass into the shader for rendering.

 

This is really cumbersome for a ton of objects, because you have to recompute the translation every time your camera moves. There is an extension of this method that keeps you from rebuilding relative coordinates every frame: a relative anchor point that moves with your camera. To set it up:

 

1. Create a double-vector anchor point that moves with your camera periodically. You move this anchor point when float precision starts to become insufficient to represent points inside the float-anchor area.
2. Build relative float-vector positions for everything relative to the anchor point, just as we did above with the camera position.
3. When you move far enough away from the anchor, re-locate it.
4. When the anchor moves, re-translate everything relative to the new anchor point (see the sketch after this list). This means everything has a double-vector world position and a float-vector anchor-relative position.
5. Use a regular camera view matrix to move around inside this anchor float space.
6. Draw everything normally, as if the anchor-relative position is the position and the anchor-relative camera position is the camera location.
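Here is a rough sketch of the rebase step, reusing the vector names from above ('Object', 'Distance', and the threshold value are assumptions):

#include <cmath>
#include <vector>

struct DoubleVector3 { double x, y, z; };
struct FloatVector3  { float x, y, z; };
struct Object {
    DoubleVector3 worldPosition;          // authoritative double position
    FloatVector3  anchorRelativePosition; // what actually gets rendered
};

static double Distance(const DoubleVector3& a, const DoubleVector3& b)
{
    const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

void RebaseAnchorIfNeeded(DoubleVector3& anchor, const DoubleVector3& cameraPos,
                          std::vector<Object>& objects)
{
    const double rebaseThreshold = 4096.0; // tune to your precision needs
    if (Distance(cameraPos, anchor) < rebaseThreshold)
        return;                            // floats are still precise enough

    anchor = cameraPos;                    // step 3: re-locate the anchor
    for (Object& obj : objects)            // step 4: re-translate everything
    {
        obj.anchorRelativePosition.x = float(obj.worldPosition.x - anchor.x);
        obj.anchorRelativePosition.y = float(obj.worldPosition.y - anchor.y);
        obj.anchorRelativePosition.z = float(obj.worldPosition.z - anchor.z);
    }
}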

I hope this helps!

Edits: Typo-city




#5110076 GPU to CPU stalling questions in DX11.

Posted by DementedCarrot on 17 November 2013 - 08:34 PM

When it comes to GPU-to-CPU readback in DX11, what is the main concern with stalling? If the work gets finished and the results are copied over to a staging buffer that you read from, what is the main cause of stalling aside from the transfer latency?

 

I want to generate 2D/3D noise on the GPU and copy it back to the CPU for use in the middle of a game loop, but I'm not sure what measures I should take to reduce stalling. Do I just have to wait some amount of time until I'm sure the work is done? Is there a callback function or some other way of telling that the data is ready to be read?
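The pattern I have in mind looks roughly like this (variable names are made up): queue the copy into the staging buffer one frame, then poll Map with DO_NOT_WAIT on later frames so the CPU never blocks.

#include <d3d11.h>
#include <cstring>

// Frame N: queue the copy of the finished GPU result into a staging buffer.
void QueueReadback(ID3D11DeviceContext* context,
                   ID3D11Buffer* gpuResultBuf, ID3D11Buffer* stagingBuf)
{
    context->CopyResource(stagingBuf, gpuResultBuf);
}

// Frames N+1 and later: poll without blocking until the copy has finished.
bool TryReadNoise(ID3D11DeviceContext* context, ID3D11Buffer* stagingBuf,
                  void* cpuNoise, size_t noiseByteSize)
{
    D3D11_MAPPED_SUBRESOURCE mapped;
    const HRESULT hr = context->Map(stagingBuf, 0, D3D11_MAP_READ,
                                    D3D11_MAP_FLAG_DO_NOT_WAIT, &mapped);
    if (hr == DXGI_ERROR_WAS_STILL_DRAWING)
        return false;                      // not ready yet; try again next frame
    if (FAILED(hr))
        return false;                      // a real error; handle it properly

    std::memcpy(cpuNoise, mapped.pData, noiseByteSize);
    context->Unmap(stagingBuf, 0);
    return true;
}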




#5067476 Larger Structured Buffers?

Posted by DementedCarrot on 04 June 2013 - 04:33 PM

Sorry for the post! I copied the buffer creation code from elsewhere, and forgot to remove the constant buffer flag.





