

DementedCarrot

Member Since 18 May 2011

#5191127 How do triangle strips improve cache coherency?

Posted by DementedCarrot on 04 November 2014 - 09:41 AM


They improve cache coherency because the GPU reads straight through the vertex buffer. Cache misses are minimized since memory is touched sequentially rather than at scattered addresses: every triangle after the first one only needs to read in one additional vertex (the next vertex in the vertex buffer) to form a full triangle. When you index vertices, consecutive indices can jump to arbitrary locations anywhere inside the vertex buffer memory, depending on how the mesh verts are connected.
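To make that concrete, here is a minimal sketch (mine, not from the post) that enumerates the triangles of a strip; notice that each step forward touches exactly one vertex the previous triangle hadn't read:

#include <cstdio>

int main() {
    const int vertexCount = 6; // a strip of 6 vertices -> 4 triangles

    // A strip of N vertices yields N - 2 triangles. Triangle i uses
    // vertices (i, i+1, i+2), so each new triangle reads exactly one
    // vertex beyond the previous one, walking the buffer front to back.
    for (int i = 0; i < vertexCount - 2; ++i) {
        // Odd triangles swap their first two vertices so the winding
        // order stays consistent across the strip.
        if (i % 2 == 0)
            std::printf("triangle %d: %d %d %d\n", i, i, i + 1, i + 2);
        else
            std::printf("triangle %d: %d %d %d\n", i, i + 1, i, i + 2);
    }
    return 0;
}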




#5190195 What is a lobby server?

Posted by DementedCarrot on 30 October 2014 - 01:20 PM

Don't forget about RakNet either! It was just open-sourced under a BSD license after Oculus bought it. It takes care of a lot of networking work, including packet priority/reliability, data replication across client/server, and events, and it all runs on a nice "TCP-over-UDP" reliability layer that keeps things fast. It can also communicate between 32-bit and 64-bit clients/servers with no issue. It's a pretty mature library.

It is only free on desktop platforms, however. It will cost you money if you want to branch into consoles.
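For a taste of the API, connecting a client looks roughly like this (a sketch based on RakNet's introductory example; the address and port are placeholders):

#include "RakPeerInterface.h"
#include "MessageIdentifiers.h"

int main() {
    // Get the singleton peer, open a socket, and connect out to a server.
    RakNet::RakPeerInterface* peer = RakNet::RakPeerInterface::GetInstance();
    RakNet::SocketDescriptor sd;
    peer->Startup(1, &sd, 1);
    peer->Connect("127.0.0.1", 60000, nullptr, 0);

    // Poll for packets; connection events arrive the same way game data does.
    bool connected = false;
    while (!connected) {
        for (RakNet::Packet* p = peer->Receive(); p != nullptr;
             peer->DeallocatePacket(p), p = peer->Receive()) {
            if (p->data[0] == ID_CONNECTION_REQUEST_ACCEPTED)
                connected = true; // Ready to start sending game messages.
        }
    }

    RakNet::RakPeerInterface::DestroyInstance(peer);
    return 0;
}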




#5175385 Quick Multitexturing Question - Why is it necessary here to divide by the num...

Posted by DementedCarrot on 21 August 2014 - 08:01 PM

Also, don't stop at color averaging when it comes to texture blending!

If you use linear interpolation you can blend any amount of one texture into another. With a lerp you can mix in more or less of a texture to taste, as long as the interpolation value is between 0 and 1.



vec3 red = vec3(1,0,0);
vec3 black = vec3(0,0,0);
vec3 mixedColor = mix(red, black, 0.25);

// This gives you 75% Red and 25% Black.

Another cool application is smooth texture blending. You can use color lerping on outdoor terrain to seamlessly blend different textures together, like grass and dirt, in irregular ways that break up the texturing on a mesh so it doesn't look uniform. You give different vertices on the mesh different lerp parameters, and vertex interpolation produces all of the values in between, so the surface fades from one blend percentage to the other. Check out the screenshot of the day and notice the texture blending on the ground in the back: http://www.gamedev.net/page/showdown/view.html/_/slush-games-r46850
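As a rough CPU-side illustration of what the hardware does with those per-vertex parameters (the colors and weights below are made-up values; on the GPU the rasterizer interpolates the weight across the triangle for you):

#include <cstdio>

struct Color { float r, g, b; };

// Classic lerp: t = 0 gives a, t = 1 gives b.
Color lerp(Color a, Color b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

int main() {
    Color grass = { 0.2f, 0.6f,  0.1f };
    Color dirt  = { 0.5f, 0.35f, 0.2f };

    // Two vertices of an edge carry blend weights 0.0 and 1.0; points
    // between them get interpolated weights, so the blend fades smoothly.
    for (float t = 0.0f; t <= 1.0f; t += 0.25f) {
        Color c = lerp(grass, dirt, t);
        std::printf("weight %.2f -> (%.2f, %.2f, %.2f)\n", t, c.r, c.g, c.b);
    }
    return 0;
}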


Texture blending is pretty handy.




#5147532 Use 64bit precision (GPU)

Posted by DementedCarrot on 16 April 2014 - 09:31 PM

If you want to render stuff relative to the eye in float space while storing positions as doubles, you:


1. Use doubles for your position vectors.

2. Use the double vector for every object position, and for your camera.


Then you have to translate your positions into a range that floats can represent for rendering. You translate every object position into an eye-relative position with:



DoubleVector3 objectPosition = object.somePosition;
DoubleVector3 cameraPosition = camera.position;
DoubleVector3 doubleRelativePosition = objectPosition - cameraPosition;

// When you translate the object by the camera position, the resulting number is representable by a float.
// Just cast the double-vector components down to floats!

FloatVector3 relativePosition;
relativePosition.x = (float)doubleRelativePosition.x;
relativePosition.y = (float)doubleRelativePosition.y;
relativePosition.z = (float)doubleRelativePosition.z;

and then that's the position you pass into the shader for rendering.


This is really cumbersome with a ton of objects, because you have to recompute the translation every time you move your camera. There is an extension of this method that keeps you from rebuilding relative coordinates every frame: a relative anchor point that moves with your camera (a code sketch follows the list). To do this you have to:


1. Create a double-vector anchor point that periodically moves with your camera. You relocate it when float precision becomes insufficient to represent points inside the current anchor area.

2. Build float-vector positions for everything relative to the anchor point, exactly as we did with the camera before.

3. When the camera moves far enough away from the anchor, relocate it.

4. When the anchor moves, re-translate everything relative to the new anchor point. This means everything keeps a double-vector world position and a float-vector anchor-relative position.

5. Use a regular camera view matrix to move around inside this anchor float space.

6. Draw everything normally, as if the anchor-relative position were the world position and the anchor-relative camera position were the camera location.
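Here is a minimal sketch of the anchor bookkeeping (the vector types, the threshold, and the function names are my own, not from any particular engine):

#include <cmath>

struct DVec3 { double x, y, z; };
struct FVec3 { float x, y, z; };

// Distance at which float precision around the anchor degrades too much;
// the exact threshold depends on your world scale.
const double kRebaseDistance = 4096.0;

FVec3 toAnchorSpace(const DVec3& worldPos, const DVec3& anchor) {
    // Subtract in double precision first, then narrow to float.
    return { (float)(worldPos.x - anchor.x),
             (float)(worldPos.y - anchor.y),
             (float)(worldPos.z - anchor.z) };
}

void updateAnchor(DVec3& anchor, const DVec3& cameraWorldPos) {
    double dx = cameraWorldPos.x - anchor.x;
    double dy = cameraWorldPos.y - anchor.y;
    double dz = cameraWorldPos.z - anchor.z;
    if (std::sqrt(dx * dx + dy * dy + dz * dz) > kRebaseDistance) {
        // Relocate the anchor; every object's anchor-relative float
        // position must be rebuilt with toAnchorSpace() after this.
        anchor = cameraWorldPos;
    }
}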

I hope this helps!

Edits: Typo-city




#5110076 GPU to CPU stalling questions in DX11.

Posted by DementedCarrot on 17 November 2013 - 08:34 PM

When it comes to GPU->CPU read back in DX11, what is the main concern with stalling? If work gets finished and the results are copied over to a staging buffer that you read from, what is the main cause of stalling aside from the transfer latency?


I want to generate 2D/3D noise on the GPU and copy it back to the CPU for usage in the middle of a game loop, but I'm not sure what measures I should be taking to reduce stalling. Do I just have to wait some amount of time until I'm sure the work is done? Is there a callback function or some other means of telling that the data is ready to be read?
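One pattern that addresses this (a sketch, not from the original thread; it assumes you already have a device, an immediate context, and a staging buffer the results were copied into) is to issue an event query right after the copy and poll it, mapping the staging buffer only once the query says the GPU is done:

#include <d3d11.h>
#include <cstring>

bool TryReadback(ID3D11Device* device, ID3D11DeviceContext* context,
                 ID3D11Buffer* staging, void* dst, size_t size)
{
    static ID3D11Query* query = nullptr;
    if (!query) {
        D3D11_QUERY_DESC desc = {};
        desc.Query = D3D11_QUERY_EVENT;
        device->CreateQuery(&desc, &query);
        context->End(query); // Signals once all prior GPU work completes.
    }

    // S_OK means the GPU has passed the query; anything else means
    // "not done yet", so return and poll again next frame, not blocking.
    if (context->GetData(query, nullptr, 0, 0) != S_OK)
        return false;

    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped))) {
        std::memcpy(dst, mapped.pData, size);
        context->Unmap(staging, 0);
    }
    query->Release();
    query = nullptr;
    return true;
}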




#5067476 Larger Structured Buffers?

Posted by DementedCarrot on 04 June 2013 - 04:33 PM

Sorry for the post! I copied the buffer creation code from elsewhere, and forgot to remove the constant buffer flag.




#4944580 Constant Buffers in Pixel Shader

Posted by DementedCarrot on 30 May 2012 - 01:01 AM

D'oh.

I'm still in Effect framework withdrawal.


#4941281 coordinate system conversion

Posted by DementedCarrot on 18 May 2012 - 03:01 PM

If I'm reading this right, I think you have your concepts mixed up.

XNA doesn't necessarily enforce a specific coordinate system, but in 99% of the examples on the internet the "XNA" coordinate system is a Cartesian coordinate system. A Cartesian coordinate system is simply one where the X/Y/Z coordinates define locations along three perpendicular axes, generally right/up/forward. I recommend you give http://en.wikipedia....ordinate_system a glance.

Even so, I recommend against defining the origin around your camera. Assuming your camera moves, you won't want to constantly redefine the location of every object in the world in relation to it; that code would be harder to write and much less efficient. I also recommend you look for a basic introduction to vector math and matrix transformations.

Edit: Also, are the lines you mention straight, or do they wrap around the planet? If they wrap, you are talking about a polar coordinate system.


#4814415 Object Management?

Posted by DementedCarrot on 22 May 2011 - 08:03 PM

You need to define an abstract "GameObject" class with virtual Update() and Draw() functions that have to be overridden. You can also define position/orientation/scale data inside a game object class. Every object in your game (players, bullets, etc.) derives from GameObject.

The object manager should deal in GameObjects. It should also have Update() and Draw() functions that loop through every GameObject it manages and update and draw them individually. Be sure Update comes before Draw, though.

I generally store my game objects in lists, but usually that's just because I'm too lazy to write fixed-array resizing code. It's more efficient to write a manager that works with fixed arrays (for static stuff that doesn't change often) and linked lists (for dynamic stuff like those bullets, because things can be added to and removed from a linked list without re-sorting or resizing an array internally). If you don't know the differences between linked lists and array storage, I recommend you look into them!

Furthermore, I usually keep "lists of lists" to keep separate types of objects together. This way you can update GameObjects that need to loop through specific other types of GameObject without searching through -everything- for the correct type, and you can pull lists of players or enemies separately. For that you define a generic function, "public List<T> GetTypeList<T>()", so you can get a list of a certain type back. If you don't know about generics, you should read up on those too. This comes in handy especially for things like bullets: if you're calling Update() on individual bullets that need to check enemies for collision, you don't have to search through every type of object blindly. You can do something like this in the bullet's update function:

List<Enemy> enemies = ObjectManager.GetTypeList<Enemy>(); // Grab the list of enemies.

foreach (Enemy e in enemies)
{
    if (bulletCollidesWithEnemy) // your bullet-vs-'e' collision test goes here
    {
        e.Kill();    // Kill the enemy, he's been shot!
        this.Kill(); // Kill the bullet because it's hit an enemy.
    }
}


You have to implement a function in the ObjectManager that removes GameObjects, though, something like ObjectManager.Remove(GameObject obj). The best way to remove objects depends on the type of game and the data structures you use. If you call list.Remove(), it searches the list sequentially until it finds the object to remove, which gets inefficient as your game grows, so it might be better to add a removal bool to your GameObjects and run another loop later that finds and deletes the flagged ones.

You have to be VERY careful about things that get removed while other objects still refer to them. If you remove something from the object manager and another GameObject holds a reference to it, the object won't actually go away: C#'s garbage collector won't collect it because it still has a reference, and since it's no longer managed by the object manager it could cause problems. If you remove an object, you either have to make sure all references to it are gone, OR add checks inside the functions that use those references, like "if (object.isDead())".

I would give you a copy of my ObjectManager and GameObject classes if the power supply on my programming computer hadn't recently died. :P

But yeah, back to your original post: I don't think your game objects had separate Update() and Draw() functions. Your objects need to be able to kill themselves or kill others inside update functions. Please ask more questions; I've just uber-ranted.


#4812761 What is a vertex buffer input slot in DX 11?

Posted by DementedCarrot on 18 May 2011 - 04:11 PM

How and where do you actually make use of the other slots, though? You have to specify somewhere that you're using one of them, and I'm not sure where that happens. :P

Does it happen in the shader or the C++ code? Brief examples would be great, heh.




Edit: Solved! I didn't notice that the slots of the buffers used by the shader are specified in the Input Layout. Right-on.
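Roughly, that looks like this (a sketch; the semantics, formats, and buffer names below are example values):

#include <d3d11.h>

// Positions stream from the vertex buffer bound to slot 0, colors from
// the buffer bound to slot 1. The fourth field of each element
// (InputSlot) is what ties a shader input to a vertex buffer slot.
D3D11_INPUT_ELEMENT_DESC layout[] = {
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 0,
      D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

// Binding one buffer per slot then looks like:
//   ID3D11Buffer* buffers[2] = { positionBuffer, colorBuffer };
//   UINT strides[2] = { sizeof(float) * 3, sizeof(float) * 4 };
//   UINT offsets[2] = { 0, 0 };
//   context->IASetVertexBuffers(0, 2, buffers, strides, offsets);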




#4812502 What is a vertex buffer input slot in DX 11?

Posted by DementedCarrot on 18 May 2011 - 07:21 AM

DirectX 11 has vertex buffer slots [0-15] that you specify when you bind vertex buffers to the context/device. I'm really not sure how they're used beyond actually setting them, as none of the draw calls have slot numbers tied to them. I just don't know where they factor in.

How do you make use of buffers in different slots?


Pre-emptive thanks!



