
Member Since 23 Feb 2001

#5180195 Generating a minimap

Posted by slayemin on 14 September 2014 - 12:49 AM

Hmm, I was able to get a minimap for my terrain system up and running in about 30 minutes. Here is a screenshot sample:

These are my design requirements:
1. The minimap may be any size, with no fixed relationship to the terrain's dimensions. The terrain could be 512x512, 1024x1024, 128x768, etc., while the minimap is going to be something like 256x256.

2. I want to have height information color coded into the minimap, with contour lines to give it more of a "map" look. You should be able to tell what the elevation is.



This was my approach:

1. We get a width/height for the minimap from the user and we return a Texture2D to them.

2. We're going to go through each pixel in the minimap and map it to a position on the terrain.

2a: Since the terrain and minimap dimensions are independent, I am going to want to normalize my sample point.


For example, if my minimap is 256x256 and my terrain is 512x1024 (arbitrary sizes), and I am sampling pixel (50,60) on the minimap, the normalized position is going to be:
normX = 50 / 256.0f;
normY = 60 / 256.0f;
(Make sure that division happens in floating point; integer division would just give you 0.)

Then, we sample the height map or terrain system by taking the normalized coordinate and switching it into their coordinate space...

sampleX = normX * terrainWidth;
sampleY = normY * terrainHeight;


And then we sample the terrain with these coordinates.
You would only need to interpolate between neighboring terrain texels if the minimap were larger than the terrain (but why would you ever do that?!).
I figure you can just say that a map is a rough sketch of what the terrain actually looks like. If the spacing between sample points skips a few data points on the terrain, who cares? Can the player tell the difference? Nope! So don't over-engineer it.
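The normalize-then-rescale mapping above amounts to just a few lines in any language. A quick sketch in Python (the function name is mine, not from the post):

```python
def minimap_to_terrain(px, py, minimap_w, minimap_h, terrain_w, terrain_h):
    """Map a minimap pixel to the matching terrain coordinate."""
    # Normalize against the *minimap* dimensions (0..1 range)...
    norm_x = px / minimap_w
    norm_y = py / minimap_h
    # ...then rescale into the terrain's coordinate space.
    return int(norm_x * terrain_w), int(norm_y * terrain_h)

# The example from the post: 256x256 minimap, 512x1024 terrain, pixel (50, 60)
print(minimap_to_terrain(50, 60, 256, 256, 512, 1024))  # -> (100, 240)
```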

Anyways, here is my implementation code which generated the minimap above:

public Texture2D GenerateMinimap(int width, int height)
{
	Texture2D ret = new Texture2D(BaseSettings.Graphics, width, height);
	Color[] data = new Color[width * height];

	float maxElevation = m_settings.MaxElevation;

	float t0 = 0;                             //sand height (base of the lowest band)
	float t1 = maxElevation / 4.0f;           //grass height
	float t2 = t1 * 2;                        //granite height
	float t3 = t1 * 3;                        //snow height

	Color sand = new Color(255, 128, 0);
	Color dirt = new Color(128, 64, 0);
	Color grass = new Color(0, 192, 0);
	Color granite = new Color(192, 192, 192);
	Color snow = new Color(240, 240, 240);

	for (int y = 0; y < height; y++)
	{
		for (int x = 0; x < width; x++)
		{
			//SampleTexel is expected to map the minimap coordinate into
			//terrain space (normalize, then rescale) as described above.
			float h = m_heightMap.SampleTexel(x, y);
			Color c;

			if (h % 32 == 0)
			{
				//black contour line every 32 units of elevation
				//(note: exact equality rarely triggers for float heights;
				//a small band test like (h % 32) < 1 is more robust)
				c = new Color(0, 0, 0);
			}
			else if (h < t1)
			{
				//lerp the colors within each elevation band
				float f = h / t1;
				c = Color.Lerp(sand, dirt, f);
			}
			else if (h < t2)
			{
				float f = (h - t1) / (t2 - t1);
				c = Color.Lerp(dirt, grass, f);
			}
			else if (h < t3)
			{
				float f = (h - t2) / (t3 - t2);
				c = Color.Lerp(grass, granite, f);
			}
			else
			{
				float f = (h - t3) / (maxElevation - t3);
				c = Color.Lerp(granite, snow, f);
			}

			data[y * width + x] = c;   //index by width, not height
		}
	}

	ret.SetData(data);
	return ret;
}

#5180191 Which basics do you need to know

Posted by slayemin on 14 September 2014 - 12:23 AM

Thanks everyone,


When I read my post back, while thinking of what you guys said, I understand that it is vague for other people, so sorry for that.

Basically, what I meant is: which basic aspects of the C++ language, such as functions, statements, classes, etc., are required to start making small games (like Pong at the beginning, and then all the way up to something 3D), and which aren't required but will be very handy to know?

For example: maybe you can say that knowing about and how to use classes is required, but something like polymorphism isn't (I saw this in another topic).

I hope this might be less vague and possible to answer.


C++ isn't the only language (or even the best language) you can make games in. Everything you learn about programming is another tool in your tool belt, one you can later use to better solve the problems you'll face during any programming project (it doesn't have to be games!).


Think of it like building a house. You are a carpenter. At the most basic level, you can stack boards on top of each other (like a log cabin) to build a very crude, rudimentary house. If you learn how to use a hammer and nails, you can build a slightly less crude house. Maybe, if you learn how to do "framing", you can build a house which is much more structurally sound and uses less wood. But all you have is a hammer and nails, so you're a bit limited by the tools you can use. You can build a better house if you learn how to use a saw to cut wood into smaller pieces. This would let you build something that actually looks like a shack, though you still won't be able to build anything much more complicated. Perhaps you eventually learn how to pour concrete and discover how to build a foundation for your house-building projects. This makes your houses more stable. Maybe you also learn something about load-bearing walls, and this lets you build a house with multiple stories. You find that it takes a lot of trial and error to build these houses, so rather than immediately slapping some wood together, you decide to spend a bit of time drawing up blueprints for what you're going to build. After all, it's a lot more efficient to make your errors in your blueprints and correct them there than to tear down a bit of construction. The more skills, techniques, and tools you have at your disposal, the better and faster you can build something.


The same principles apply to programming. You are just barely learning how to use a hammer and nails for making games. At best, you can build a very crude game like Pong, but you're telling us that you don't want to learn how to use the tools and techniques of the trade. By avoiding the tools and techniques which would greatly help you, you are severely limiting your capabilities. You're essentially saying, "I don't want to learn how to use a saw! Instead, when I need to cut a board in half, I will use my teeth and chew it in half by biting out teeny slivers of wood." How silly, right? Learning the fundamentals of control structures, loops, functions, and variables is the very foundation of programming. Learning how to assemble these core primitives into classes is 100% essential to building any sort of game of any complexity. Polymorphism is a great and valuable tool to have at your disposal, even if you're not going to use it all the time. It's there if you need it! Never avoid learning something new just because it's hard.


The other great big tool you'll be using every day during the programming of games is mathematics. It too is a super duper valuable and useful tool. Getting better at math is getting better at making games. Don't limit yourself and your capabilities by trying to get by without learning how to use a valuable tool.


#5179799 Which algorithm is more efficient for visibility culling? BSP or OCTree?

Posted by slayemin on 12 September 2014 - 01:52 AM

I am using an Octree in my prototype and it's working well enough. Here are some high level notes on my implementation:

* I rebuild the whole tree every frame. Yes, it's a bit more expensive than doing an in-place update, but it is a simpler solution. Good enough for a prototype!
* I store every octree region as a bounding box (in XNA, that's just two vector3's, or 24 bytes of memory)
* I only let a node exist if there is actually data inside of it, whether that's objects or child nodes.

* In order to figure out what I need to draw, I intersect the camera view frustum against the octree and get a list of intersections. The intersected objects are the only objects I need to draw.
* I maintain a static list of all objects in the octree. Rather than visiting each node recursively and updating objects, I just traverse the master list and update from there (much simpler).
* Collision detection is pretty straightforward: you only have to test an object against the objects within its containing node and that node's children. In a perfectly balanced octree, finding the containing node runs in O(log N) time (a base-8 logarithm, since each level splits space into eight children).

Before you go rushing off to implement an octree, you should back up a few steps and implement a profiler to measure how long each section of your code takes. What is its actual performance, measured in ticks or microseconds? This shouldn't take more than 2-3 hours to implement (compared to days for a good tree implementation from scratch). Once you have a good profiler to measure execution time, you can measure specific sections of your code and identify bottlenecks. It's much more scientific than guesswork. You might be optimizing a problem you don't have... e.g., if you're rendering hundreds of high-res meshes even when they're really far from the camera, you want to look at a "Level of Detail" switching scheme instead.
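To illustrate, a minimal section-timing profiler of the kind suggested can be sketched in a few lines. This Python version (names are illustrative, not from the post) accumulates wall-clock time per named section and reports it in microseconds:

```python
import time
from collections import defaultdict

class Profiler:
    """Accumulates wall-clock time per named code section."""
    def __init__(self):
        self.totals = defaultdict(float)

    def measure(self, name):
        profiler = self
        class _Section:
            def __enter__(self):
                self.start = time.perf_counter()
            def __exit__(self, *exc):
                profiler.totals[name] += time.perf_counter() - self.start
        return _Section()

    def report(self):
        for name, seconds in sorted(self.totals.items()):
            print(f"{name}: {seconds * 1e6:.1f} us")

profiler = Profiler()
with profiler.measure("update"):
    sum(range(100000))   # stand-in for your update loop
with profiler.measure("draw"):
    sum(range(10000))    # stand-in for your draw loop
profiler.report()
```

Wrapping suspect sections this way tells you where the time actually goes before you commit to a days-long optimization.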

#5176506 List of generic objects

Posted by slayemin on 27 August 2014 - 02:31 PM


Okay, I really like this technique better than generics. It's much cleaner and more straightforward, and it doesn't require creating abstract method signatures in the base Unit class. Thanks!

#5147728 Triangle and rectangle intersect find position points of rectangle that inter...

Posted by slayemin on 17 April 2014 - 03:13 PM


The first problem:

I need integer positions, so eventually I would need to cast them to int, and I never use them as floats, so rounding down or up doesn't really matter to me.

That's a valid need, but you still want to do all of your math in floating point, since your inputs are floats. Only when all of the math has been completed should you cast the result to an int (or, even better, use a rounding function: casting 1.9f to an int gives you 1, not 2).

Consider this example, done as floats:

1.0f / 3 = 0.3333333...
0.3333333 + 0.3333333 + 0.3333333 ≈ 1.0 (correct)

And done by truncating each term to an int first:

(int)(1.0f / 3) = 0
0 + 0 + 0 = 0 (wrong!)

If you keep 1/3 as a float while you do the calculations, you get 1.0, the correct answer.

This is a simple example, but the point is that the fractional parts matter and they add up. The same math equations can yield different results. The more fractional parts you throw away, the further off your calculations will be. This can lead to some weird, unexpected bugs later on, and you'll be squinting at your code for hours, saying "There is no error here, the math equations are perfect! The logic checks out!"
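The same effect is easy to reproduce in Python:

```python
# Do the math in floating point, convert once at the end:
thirds = 1/3 + 1/3 + 1/3
print(thirds)          # 1.0 -- the rounding works out

# Truncate each term first and the error compounds:
truncated = int(1/3) + int(1/3) + int(1/3)
print(truncated)       # 0 -- every fractional part was thrown away

# And prefer rounding over casting for the final conversion:
print(int(1.9))        # 1 (truncation)
print(round(1.9))      # 2 (usually what you want)
```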

#5147469 Triangle and rectangle intersect find position points of rectangle that inter...

Posted by slayemin on 16 April 2014 - 03:39 PM

The code looks a lot simpler than mine, so I tested your code to see if it worked -- and I found a few problems!

First problem: you're using integers to store the results of mathematical operations on floating point numbers. That throws away the fractional parts and gives you wrong math. So, I went ahead and changed those..


Second problem: if you try the two line segments (-1,0)->(1,0) and (20,1)->(20,-1) and run the code, you should expect no intersection. Yet the results show an intersection at (20,0), which is totally wrong.


Potential problems:

-If two lines are the same, you get infinity results. If you're only expecting one X and Y value, this could be problematic.

-If two lines are parallel, you get infinity and NaN results (you're dividing by zero). This could also be problematic if you're not testing for them.

Here's my test code:

class Program
{
	static void Main(string[] args)
	{
		int a = (int)(1.1f + 1.2f);//<-- see? you lose precision and get wrong values (a is 2, not 2.3)

		//PrintIntersectionOfTwoLines(new Vector2(-1, 0), new Vector2(1, 0), new Vector2(2, 1), new Vector2(2, -1));  //good
		//PrintIntersectionOfTwoLines(new Vector2(-1, 0), new Vector2(1, 0), new Vector2(2, 1), new Vector2(2, -1));  //wrong
		//PrintIntersectionOfTwoLines(new Vector2(-1, 0), new Vector2(1, 0), new Vector2(-1, 1), new Vector2(1, 1));    //parallel: funky values
		//PrintIntersectionOfTwoLines(new Vector2(-1, 0), new Vector2(1, 0), new Vector2(-1, 0), new Vector2(1, 0));    //same line: wrong
		PrintIntersectionOfTwoLines(new Vector2(-1, 0), new Vector2(1, 0), new Vector2(20, 1), new Vector2(20, -1));  //wrong
	}

	static void PrintIntersectionOfTwoLines(Vector2 one, Vector2 two, Vector2 three, Vector2 four)
	{
		//cached so we don't calculate these twice
		float calcFirst = (one.X * two.Y) - (one.Y * two.X);
		float calcSecond = (three.X - four.X);
		float calcThird = (one.X - two.X);
		float calcFourth = (three.X * four.Y) - (three.Y * four.X);
		float calcFifth = (three.Y - four.Y);
		float calcSixth = (one.Y - two.Y);

		//x is the intersection point on the x axis
		float x = (calcFirst * calcSecond - calcThird * calcFourth) / (calcThird * calcFifth - calcSixth * calcSecond);

		//y is the intersection point on the y axis
		float y = (calcFirst * calcFifth - calcSixth * calcFourth) / (calcThird * calcFifth - calcSixth * calcSecond);

		Console.Write("X: " + x + "\nY: " + y);
	}
}

#5147281 Triangle and rectangle intersect find position points of rectangle that inter...

Posted by slayemin on 16 April 2014 - 12:00 AM

If this is in 2D space, the math can be a bit challenging. I solved this a few years ago by digging up some musty old grade school algebra books. The general principle here is to think of your shapes as a collection of lines, or a collection of vertex points. A triangle has three vertices, a rectangle has four, a pentagon has five, etc. What we want to note is that the number of sides/vertices in a shape shouldn't really matter very much. To detect if any two shapes overlap, or intersect, we can test each line in the first shape against every line in the second shape. If no lines in the first shape intersect with lines in the second shape, then we might not have a collision (note: A shape could still be completely enclosed within another shape!).


How do we do this?

Well, we have to define a "line". Algebraically, a line is "y = mx + b", but that's the general line formula, which stretches on to infinity. For the sides of a shape, you instead want to define a line segment by a starting point and an ending point. Then, you want to do a line intersection test against every line in the other shape (we're looking at an N^2 for-loop here... watch out!)
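A side note not in the original post: if you formulate the segments parametrically and use 2D cross products, the vertical-line special cases disappear entirely, because no slope is ever computed. A Python sketch of that alternative:

```python
def segments_intersect(a1, a2, b1, b2):
    """True if segment a1->a2 properly crosses segment b1->b2.

    Parametric/cross-product form: no slopes, so vertical lines need
    no special handling. Collinear touching is NOT counted here and
    would need extra checks."""
    def cross(o, p, q):
        # z-component of (p - o) x (q - o)
        return (p[0] - o[0]) * (q[1] - o[1]) - (p[1] - o[1]) * (q[0] - o[0])

    d1 = cross(b1, b2, a1)
    d2 = cross(b1, b2, a2)
    d3 = cross(a1, a2, b1)
    d4 = cross(a1, a2, b2)
    # Proper crossing: each segment's endpoints straddle the other's line.
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

print(segments_intersect((-1, 0), (1, 0), (0, 1), (0, -1)))    # True
print(segments_intersect((-1, 0), (1, 0), (20, 1), (20, -1)))  # False
```

The second call is exactly the failing case from the earlier post: the segments' supporting lines cross at (20, 0), but the first segment ends long before reaching it.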


Anyways, the code I came up with is here:

/// <summary>
/// Given the end points for two line segments, this will tell you if those two line segments intersect each other.
/// </summary>
/// <param name="A1">First endpoint for Line A</param>
/// <param name="A2">Second endpoint for Line A</param>
/// <param name="B1">First endpoint for Line B</param>
/// <param name="B2">Second endpoint for line B</param>
/// <returns>true or false depending on if they intersect.</returns>
public static bool DoIntersect(Vector2 A1, Vector2 A2, Vector2 B1, Vector2 B2)
{
	//NOTE: Floating point precision errors can cause bugs here. This can result in false positives or false negatives.

	//Based off of the y = mx + b formula for two lines.

	//calculate the slopes for Line A and Line B
	double mx1 = (double)(A2.Y - A1.Y) / (double)(A2.X - A1.X);
	double mx2 = (double)(B2.Y - B1.Y) / (double)(B2.X - B1.X);

	//calculate the y-intercepts for Line A and Line B
	double b1 = (double)(-mx1 * A2.X) + A2.Y;
	double b2 = (double)(-mx2 * B2.X) + B2.Y;

	//calculate the point of intercept for Line A and Line B
	double x = (b2 - b1) / (mx1 - mx2);
	double y = mx1 * x + b1;

	if (double.IsInfinity(mx1) && double.IsInfinity(mx2))
	{
		//we're dealing with two vertical lines. If both lines share an X value, then we just have to compare
		//y values to see if they intersect.
		return (A1.X == B1.X) && ((B1.Y <= A1.Y && A1.Y <= B2.Y) || (B1.Y <= A2.Y && A2.Y <= B2.Y));
	}
	else if (double.IsInfinity(mx1) && !double.IsInfinity(mx2))
	{
		//Line A is a vertical line but Line B is not.
		x = A1.X;
		y = mx2 * x + b2;

		return (
			((A1.Y <= A2.Y && LTE(A1.Y, y) && GTE(A2.Y, y)) || (A1.Y >= A2.Y && GTE(A1.Y, y) && LTE(A2.Y, y))) &&
			((B1.X <= B2.X && LTE(B1.X, x) && GTE(B2.X, x)) || (B1.X >= B2.X && GTE(B1.X, x) && LTE(B2.X, x))) &&
			((B1.Y <= B2.Y && LTE(B1.Y, y) && GTE(B2.Y, y)) || (B1.Y >= B2.Y && GTE(B1.Y, y) && LTE(B2.Y, y))));
	}
	else if (double.IsInfinity(mx2) && !double.IsInfinity(mx1))
	{
		//Line B is a vertical line but Line A is not.
		x = B1.X;
		y = mx1 * x + b1;

		return (
			((A1.X <= A2.X && LTE(A1.X, x) && GTE(A2.X, x)) || (A1.X >= A2.X && GTE(A1.X, x) && LTE(A2.X, x))) &&
			((A1.Y <= A2.Y && LTE(A1.Y, y) && GTE(A2.Y, y)) || (A1.Y >= A2.Y && GTE(A1.Y, y) && LTE(A2.Y, y))) &&
			((B1.Y <= B2.Y && LTE(B1.Y, y) && GTE(B2.Y, y)) || (B1.Y >= B2.Y && GTE(B1.Y, y) && LTE(B2.Y, y))));
	}

	//figure out if the point of interception is between all the given points
	return (
		((A1.X <= A2.X && LTE(A1.X, x) && GTE(A2.X, x)) || (A1.X >= A2.X && GTE(A1.X, x) && LTE(A2.X, x))) &&
		((A1.Y <= A2.Y && LTE(A1.Y, y) && GTE(A2.Y, y)) || (A1.Y >= A2.Y && GTE(A1.Y, y) && LTE(A2.Y, y))) &&
		((B1.X <= B2.X && LTE(B1.X, x) && GTE(B2.X, x)) || (B1.X >= B2.X && GTE(B1.X, x) && LTE(B2.X, x))) &&
		((B1.Y <= B2.Y && LTE(B1.Y, y) && GTE(B2.Y, y)) || (B1.Y >= B2.Y && GTE(B1.Y, y) && LTE(B2.Y, y))));
}

/// <summary>
/// Equal-Equal: Tells you if two doubles are equivalent even with floating point precision errors
/// </summary>
/// <param name="Val1">First double value</param>
/// <param name="Val2">Second double value</param>
/// <returns>true if they are within 0.000001 of each other, false otherwise.</returns>
public static bool EE(double Val1, double Val2)
{
	return (System.Math.Abs(Val1 - Val2) < 0.000001f);
}

/// <summary>
/// Equal-Equal: Tells you if two doubles are equivalent even with floating point precision errors
/// </summary>
/// <param name="Val1">First double value</param>
/// <param name="Val2">Second double value</param>
/// <param name="Epsilon">The delta value the two doubles need to be within to be considered equal</param>
/// <returns>true if they are within the epsilon value of each other, false otherwise.</returns>
public static bool EE(double Val1, double Val2, double Epsilon)
{
	return (System.Math.Abs(Val1 - Val2) < Epsilon);
}

/// <summary>
/// Less Than or Equal: Tells you if the left value is less than or equal to the right value
/// with floating point precision error taken into account.
/// </summary>
/// <param name="leftVal">The value on the left side of comparison operator</param>
/// <param name="rightVal">The value on the right side of comparison operator</param>
/// <returns>True if the left value and right value are within 0.000001 of each other, or if leftVal is less than rightVal</returns>
public static bool LTE(double leftVal, double rightVal)
{
	return (EE(leftVal, rightVal) || leftVal < rightVal);
}


/// <summary>
/// Greater Than or Equal: Tells you if the left value is greater than or equal to the right value
/// with floating point precision error taken into account.
/// </summary>
/// <param name="leftVal">The value on the left side of comparison operator</param>
/// <param name="rightVal">The value on the right side of comparison operator</param>
/// <returns>True if the left value and right value are within 0.000001 of each other, or if leftVal is greater than rightVal</returns>
public static bool GTE(double leftVal, double rightVal)
{
	return (EE(leftVal, rightVal) || leftVal > rightVal);
}

It could probably be improved by someone smarter than me, but it works well enough for the time being. Note that math like this is susceptible to floating point precision errors so I had to write my own floating point comparison "==" methods with a small epsilon value. I'm still not 100% confident this is perfectly bug free.

Also, since the method I suggest above uses an N^2 approach, you might want to consider using a broad-phase and narrow-phase collision check (if you notice performance problems!). The broad phase check could just be sphere vs sphere intersections, where each sphere completely encloses a shape. It's very fast to implement and doesn't cost much. If the spheres intersect, then you can run the N^2 check for more precision.
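For reference, the broad-phase sphere test is just a squared-distance comparison, so it costs no square root at all. A quick sketch in Python (2D circles here for brevity; the names are mine):

```python
def spheres_overlap(center_a, radius_a, center_b, radius_b):
    """Cheap broad-phase check: do two bounding circles/spheres overlap?"""
    dx = center_a[0] - center_b[0]
    dy = center_a[1] - center_b[1]
    r = radius_a + radius_b
    # Compare squared distance to squared radius sum: no sqrt needed.
    return dx * dx + dy * dy <= r * r

# Only run the expensive N^2 edge test if the bounding spheres touch:
print(spheres_overlap((0, 0), 1.0, (3, 0), 1.0))  # False -> skip narrow phase
print(spheres_overlap((0, 0), 2.0, (3, 0), 2.0))  # True  -> run narrow phase
```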

There's one other thing you might want to consider: If you're just trying to use one shape to overlap another shape, why don't you just toggle the drawing order of the shapes? Is there a game logic reason behind it?

#5141847 Debugging Graphics

Posted by slayemin on 24 March 2014 - 06:22 PM

I've run into tons of trouble with this exact problem. After spending hours and hours on trying to figure out why my "stuff" isn't rendering, I've come up with a comprehensive checklist of things to verify and check.

If you've never rendered a model or primitive to the screen before using your current API, you want to establish a baseline by trying to do the most basic thing you can: render the simplest model/primitive you can. This is akin to writing your first "hello world" program for graphics. If you can do this, then the rest of graphics programming is simply a matter of adding on additional layers of complexity. The general debugging step then becomes a matter of adding on each subsequent layer of complexity and seeing which one breaks.

At the core, debugging is essentially just a matter of isolating and narrowing the problem down to as few possibilities as possible, then focusing in on each possibility.


This is for the C# and XNA API, but you can generalize or translate these points to your own language and API.

Let's start with the comprehensive checklist for primitive rendering (triangle lists, triangle strips, line lists, line strips):
1. Base case: Can you render a triangle to the screen without doing anything fancy?

       -Are you setting vertex positions for the three corners of the triangle? Are they different from each other? Is it rendering a triangle which should be visible in the current view?

       -Are you actually calling the "DrawPrimitive()" method, or equivalent in your API?
       -Are you using vertex colors which contrast against the background color?
       -Are you correctly applying a shader? Is the shader the correct shader? Have all shader settings been set correctly before you call the draw call?
       -Are you using a valid view and projection matrix which would actually let you view the triangle?
       -Are you using a world matrix which is transforming the triangle off screen? (You shouldn't even need a world matrix yet)
      -Are you using the right primitive type in your DrawPrimitives call? (triangle list vs triangle strip, etc)
2. Indexed vertices: Are you using an index buffer to specify the vertex drawing order?
       -Is the vertex drawing order compatible with your current cull mode? To find out, either toggle your cull mode or change your drawing order.
       -Are you actually creating an index buffer? Are you copying an array of ints into your index buffer to fill it with data? Are the array values correct?
       -If your index buffer is created, are you actually setting the graphics card's active index buffer to your index buffer?
       -Are you using "DrawIndexedPrimitives()" or your API's equivalent draw call? Are you specifying the correct number of primitives to draw?
       -Does the drawing order make sense with regard to the primitive type you're using? ie, the vertex order in a triangle strip is very different from a triangle list.
3. Vertex Data:
   -Are you using a custom vertex declaration? If yes, skip to #4.

   -Are you using a vertex buffer? If yes:
       -You must use a vertex array of some sort, at some point, to populate the vertex buffer. Verify that you're getting an array of vertices in your code. Using your IDE debugger, verify that the vertex data is correct.
      -Are you moving your vertex array data into a vertex buffer? Is the vertex buffer the correct size? Does the vertex buffer have the vertex data from your vertex array?
      -On the graphics card, are you setting the active vertex buffer before drawing? Is there an associated index buffer?
4. Custom Vertex Declarations: Are you using a custom vertex declaration?
  Yes: Then you must be defining your vertex in a Struct.
    -Does your vertex declaration include position information? If not, how are you computing the vertex position in your shader?
     -Does your vertex declaration include every field you want to use?
   -Are you creating a Vertex Declaration correctly?
       -Are your vertex elements being defined in the same order as they are in the struct fields? This is one of the few times declaration variable order really matters because it's specifying the order they appear in the struct memory block.
       -Are you correctly calculating the BYTE size of each variable in the vertex? Are you correctly calculating the field offset in bytes?

       -Are you correctly specifying the vertex element usage?

       -Are you correctly using the right usage index for the vertex element?
       -Are you specifying the correct total byte size for your custom vertex declaration?
 -Is your code correctly using the custom vertex data? ie, putting position information into a position variable.

5. High Level Shader Language (HLSL): Are you using a shader other than "BasicEffect"?
      -Are you actually loading the content for the shader and storing it in an "Effect" data structure?
      -Are you correctly initializing the effect?
      -Are you setting a "Current Technique" in your render call to one which exists in the shader?
      -Does the technique which you use include a vertex shader and a pixel shader? Are they supported by your API and graphics card?
      -Does the vertex shader require any global variables to be set? (ie, camera position, world matrices, textures, etc). Are they being set to valid data?
       -Does the vertex shader output valid data which the pixel shader can use?
       -Does the pixel shader actually output color information?
       -Does your vertex shader math and logic check out correctly? (If you don't know or aren't sure, it's time to use a shader debugger).

6. Shader debuggers:

    I'm using Visual Studio 2010, so I can't use the built-in shader debugger from VS2012. I have to use external tools. Here are the ones I've tried and my thoughts on them:
    NVidia FX Composer: It sucks. It is unstable and crashes frequently, has a high learning curve, and can't attach a shader debugger to an executable file (your game). You can't push custom vertex data into a shader and see how the shader handles it. This program is mostly useful for creating shaders for existing models.
   ATI's GPU PerfStudio: It doesn't work with DirectX 9.0, so if you're using XNA, you're out of luck. Sorry, ATI doesn't care enough. It's also a bit confusing to setup and get running.
    Microsoft PIX: It's a mediocre debugger, but it's the best one I've found. It is included in the DirectX SDK. The most useful feature is being able to attach to an EXE and capture a frame by pressing F12. You can then view every single method call used to draw that frame, along with the method parameters. This tool also lets you view every single resource (DX surfaces, vertex buffers, index buffers, rasterizer settings, etc.) on the graphics card, along with that resource's data. This is the best way to see if your vertex data and index buffer data are legit. You can also debug an individual pixel or vertex. This lets you step through your shader code (HLSL or ASM) line by line and see what the actual variable values are being set to. It's an okay debugger, but it doesn't have any intellisense or let you mouse over a variable to see its value like the Visual Studio IDE debugger does. This is the debugger I currently use to debug my shaders. The debugging workflow is a bit cumbersome, since you have to rebuild your project, start a new experiment, take a snapshot, find the frame, find the data object you want to see, and step through the shader debugger to the variable you're interested in (~2 minutes). Here are a few "nice to know" notes on PIX:
  -If you're looking at the contents of a vertex buffer:

      -Each block is 32 bits, or 4 bytes in size. Keep this in mind if you're using a custom vertex declaration to pack data into a 4 byte block (such as with Color data).

      -0xFF is displayed as a funky value: -1.#Q0
     -Each 4-byte block is displayed in the order it appears in your custom vertex declaration. Each vertex data block is your vertex declaration size / 4. (ie, 36 bytes = 36 / 4 = 9 blocks per vertex)
     -The total size of the buffer is the blocks per vertex multiplied by the number of vertices you have (ie, 9 * 3 = 27 4-byte blocks)
      -Usage: If your vertex declaration byte offsets are off by a byte or more, you should expect to see funky data in the buffer.
  -Vertex Declaration should always match the vertex declaration in your custom vertex declaration struct.

-By selecting the actual draw call in the events list and then looking at the mesh, you can see the vertex information as it appears in the pre-vertex shader (object space), the post-vertex shader (world space), and Viewport (screen space). If the vertex data doesn't look right in any of these steps, you should know where to start debugging.
   *Special note: If you're creating geometries on the graphics card within your shader, you won't see much of value in the pre-vertex shader.
-The debugger includes a shader assembly language debugger. It's nice to have but not very useful.
-The shader compiler will remove any code which isn't used in the final output of a vertex. This is extra annoying when you're trying to set values to a variable and debug them.

Model Debugging:

The same principles from the primitive rendering apply, except you have to verify that you've correctly loaded the model data into memory and are calling the right method to render a model.

One handy tip which may help you for your project: Write down each step it takes to add and render a new model within your project (ie, your projects content creation pipeline & workflow). It's easy to accidentally skip a step as you're creating new assets and end up wasting time trying to isolate the problem to that missed step. An ounce of prevention is worth a pound of cure, right?

#5134613 C# Indexer overloading?

Posted by slayemin on 25 February 2014 - 07:41 PM

I'm trying to create a class which has three dictionaries, each of which store different types of objects. I want to be able to access those dictionary items by using the dictionary key via an indexer.

/// <summary>
/// This is the asset database for all assets being used in the game.
/// </summary>
public class AssetDB
{
	Dictionary<string, Texture2D> m_textureDB;
	Dictionary<string, Model> m_modelDB;
	Dictionary<string, Effect> m_effectDB;

	public enum AssetType { }   //left empty here

	public AssetDB()
	{
		m_textureDB = new Dictionary<string, Texture2D>();
		m_modelDB = new Dictionary<string, Model>();
		m_effectDB = new Dictionary<string, Effect>();
	}

	public Texture2D this[string Name]      //ambiguous overload: differs only by return type!
	{
		get { return m_textureDB[Name]; }
		set { m_textureDB.Add(Name, value); }
	}

	public Model this[string Name]      //ambiguous overload: differs only by return type!
	{
		get { return m_modelDB[Name]; }
		set { m_modelDB.Add(Name, value); }
	}

	public Effect this[string Name]      //ambiguous overload: differs only by return type!
	{
		get { return m_effectDB[Name]; }
		set { m_effectDB.Add(Name, value); }
	}

	public void Clear() { }
}
Ideally, I'd like to interact with this class by doing something like this:


AssetDB m_db = new AssetDB();

m_db[AssetType.Texture, "Grass"] = Content.Load<Texture2D>("Textures\\Grass");
m_db[AssetType.Model, "Sphere"] = Content.Load<Model>("Models\\Sphere");

But, I don't know how to overload my indexer to do this correctly.
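(For anyone finding this later: C# doesn't allow overloads that differ only by return type, so the three indexers above can't coexist. One workaround, sketched here with invented names and plain `object` in place of the XNA types so it stands alone, is to replace the indexers with generic methods keyed on the asset's type:)

```csharp
using System;
using System.Collections.Generic;

public class AssetDB
{
    // One inner dictionary per asset type, selected by typeof(T).
    private readonly Dictionary<Type, Dictionary<string, object>> m_assets =
        new Dictionary<Type, Dictionary<string, object>>();

    public void Add<T>(string name, T asset)
    {
        if (!m_assets.ContainsKey(typeof(T)))
            m_assets[typeof(T)] = new Dictionary<string, object>();
        m_assets[typeof(T)][name] = asset;
    }

    public T Get<T>(string name)
    {
        // The type argument picks the right dictionary, so names may
        // safely collide across asset types ("Grass" texture vs. model).
        return (T)m_assets[typeof(T)][name];
    }
}

// Usage with the XNA content pipeline would look like:
//   m_db.Add("Grass", Content.Load<Texture2D>("Textures\\Grass"));
//   Texture2D grass = m_db.Get<Texture2D>("Grass");
```

The overload ambiguity disappears because the dispatch happens on the type parameter rather than the return type.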

#5134577 Voxel terrain move

Posted by slayemin on 25 February 2014 - 04:48 PM

Let's think about this.


Let's say you've got millions of voxels in your terrain environment. You realize that there's a performance issue with rendering them all at a high level of detail, so you want to render voxels at a lower level of detail when they're far away from the camera. This is the gist of what we're trying to do. But we don't want to spend a huge amount of CPU time figuring out which level of detail to use for a voxel either! In the most brute-force version of the algorithm, we'd calculate the distance from the camera position to the voxel position, and if that distance exceeds a set threshold, we'd drop the LOD by one. However, that means calculating the distance between the camera and all million voxels (which comes with an expensive square root), and any performance gains from switching to another LOD would be mostly eaten up. So, we want to keep the same idea of calculating distances to voxels, but use far fewer distance computations.

Here's where octrees come in handy, and that's why you are reading about them. You can divide your terrain into chunks of blocks, say, 16x16x16 (use powers of 2 if you can). Each chunk can be inserted into your octree. Instead of calculating the camera distance to every voxel, you calculate the camera distance to the octree bounding regions and set the LOD of all contained objects to a preset value. This dramatically reduces the number of distance checks, and it also scales very well with any number of game world objects (I assume you're going to have more than just terrain).
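As a rough sketch of that per-chunk check (all names here are invented, and comparing squared distances lets you skip the square root entirely):

```csharp
using System;

public static class ChunkLOD
{
    // Pick a level of detail for a chunk from its distance to the camera.
    // lodDistances holds the (unsquared) threshold for each LOD band, in
    // ascending order, e.g. {64, 128, 256} world units.
    public static int Pick(float camX, float camY, float camZ,
                           float chunkX, float chunkY, float chunkZ,
                           float[] lodDistances)
    {
        float dx = chunkX - camX, dy = chunkY - camY, dz = chunkZ - camZ;
        float distSq = dx * dx + dy * dy + dz * dz;   // no sqrt needed

        // Compare against squared thresholds instead of square-rooting distSq.
        for (int lod = 0; lod < lodDistances.Length; lod++)
            if (distSq < lodDistances[lod] * lodDistances[lod])
                return lod;

        return lodDistances.Length;   // beyond the last band: coarsest LOD
    }
}
```

You'd call this once per chunk (hundreds of calls) instead of once per voxel (millions), which is the whole point of the octree subdivision.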

I don't know if it's relevant to you or not, but there was a white paper a while back on rendering terrain using geomipmaps. The author had an interesting technique for deciding when to switch to a different LOD which was based not on camera distance, but on how much the terrain would "pop" if you transitioned it to a lower LOD (section 2.3.1). He basically measures the vertical change in "pop" between one LOD and another in screen space according to the camera viewing angle, and then switches the terrain LOD if the pop is below some acceptable threshold (e.g., 2 pixels). I implemented a variation of this myself and I like the results.


Here is my code for that:

/// <summary>
/// This calculates the maximum amount of vertical error when switching from one level of detail to another.
/// </summary>
private void CalcError()
{
	for (int LoD = 0; LoD < 4; LoD++)
	{
		int stepSize = (int)Math.Pow(2, LoD + 1);
		float d_max = 0;

		//traverse horizontally
		for (int z = 0; z < m_settings.TileCount; z++)
		{
			for (int x = 0; x < m_settings.TileCount; x += stepSize)
			{
				Vector3 p0 = m_verts[x + (z * (m_settings.TileCount + 1))].Position;
				Vector3 p1 = m_verts[(x + (stepSize / 2)) + (z * (m_settings.TileCount + 1))].Position;
				Vector3 p2 = m_verts[x + stepSize + (z * (m_settings.TileCount + 1))].Position;

				Vector3 P = (p0 + p2) / 2.0f;        //the phantom position of p1 is just the average of p0 and p2
				float d = (p1 - P).Length();         //find the error difference between p1 and the phantom point
				if (d > d_max)
					d_max = d;
			}
		}
		m_max_dy[LoD + 1] = d_max;            //this is the most error we'd get if we switched from LOD X -> LOD X+1
	}
}

On screen and in the game, what ends up happening is that we try to use the lowest possible LOD we can get away with without getting unwanted popping artifacts. So, if you're pointing your camera straight down and you're viewing the terrain from above, the very bottom terrain chunk will be at a very low LOD, but you can't really tell since the vertical portions are not really coming into play based on your viewing angle.
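If you want to turn that m_max_dy table into an actual LOD pick, the selection step looks roughly like this (a sketch with invented names; the projection follows the paper's idea of bounding the screen-space pop in pixels, assuming a standard perspective camera):

```csharp
using System;

public static class GeoMipLOD
{
    // maxDy[lod] is the precomputed worst-case vertical error (world units)
    // of rendering at that LOD; maxDy[0] == 0 for the full-detail mesh.
    // Returns the coarsest LOD whose projected pop stays under maxPixelError.
    public static int Pick(float[] maxDy, float distance,
                           float fovRadians, float screenHeight, float maxPixelError)
    {
        // Pixels per world unit at this distance for a perspective projection.
        float scale = screenHeight / (2.0f * distance * (float)Math.Tan(fovRadians / 2.0));

        int best = 0;
        for (int lod = 1; lod < maxDy.Length; lod++)
        {
            if (maxDy[lod] * scale < maxPixelError)
                best = lod;   // the pop would be invisible enough; drop detail
            else
                break;        // errors only grow with lod, so stop here
        }
        return best;
    }
}
```

Far-away chunks get a small `scale`, so even large vertical errors project to under a couple of pixels and the coarse LOD wins.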

#5133036 Game engine

Posted by slayemin on 20 February 2014 - 02:35 PM

Short answer: Yes.

A game engine can use any Application Programming Interface (API), such as DirectX or OpenGL (these aren't the only available APIs though). All game engines are made using programming languages.

Game engine programming is very different from game design, however. A game designer works at a much higher level of abstraction. They're more interested in the game design itself: the game mechanics, the aesthetics, and any back-end game systems. Game engine programming is focused on building the foundation/workspace within which those higher-level game design decisions can be realized. That usually involves creating the back-end infrastructure for drawing things, physics, collision detection, particle effects, sounds, asset management, etc.

#5129159 Rendering point sprites / textured quads purely on GPU using HLSL

Posted by slayemin on 05 February 2014 - 04:30 PM

D3D9 does support instancing.  You could make one vertex buffer that contains a single quad (either 4 verts + index buffer, or 6 verts), and then store the per-instance billboard information in another vertex buffer.


This is exactly what I'm trying to do. Thanks for giving me the direction I was looking for.

In case anyone else finds this thread in the future and wants to know how to instance a bunch of similar geometries on the graphics card, here are two articles and a blog post which perfectly describe the technique:
XNA 3.1: http://www.float4x4.net/index.php/2010/06/hardware-instancing-in-xna/

XNA 4.0: http://www.float4x4.net/index.php/2011/07/hardware-instancing-for-pc-in-xna-4-with-textures/


Blog post by Shawn Hargreaves: http://blogs.msdn.com/b/shawnhar/archive/2010/06/17/drawinstancedprimitives-in-xna-game-studio-4-0.aspx
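For the archive, the core of the XNA 4.0 setup described in those links looks roughly like this fragment (buffer names are mine, and this is a sketch rather than complete code; see the articles for the full version):

```csharp
// One quad shared by every instance, plus a second buffer of per-instance data.
// The instance buffer's VertexDeclaration carries each billboard's position,
// size, etc., and the '1' in its VertexBufferBinding tells the GPU to advance
// that stream once per instance instead of once per vertex.
GraphicsDevice.SetVertexBuffers(
    new VertexBufferBinding(quadVertexBuffer, 0, 0),
    new VertexBufferBinding(instanceBuffer, 0, 1));
GraphicsDevice.Indices = quadIndexBuffer;

// 4 vertices and 2 triangles per quad, drawn instanceCount times in one call.
GraphicsDevice.DrawInstancedPrimitives(
    PrimitiveType.TriangleList, 0, 0, 4, 0, 2, instanceCount);
```

The vertex shader then combines the shared quad corner with the per-instance stream to place each billboard, so the whole particle field costs one draw call.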

#5105820 Trying to simulate the flamethrower from GTA2

Posted by slayemin on 30 October 2013 - 05:56 PM

You can do this by simply using a bunch of fire particles. Your player will have a particle emitter positioned a set distance away from the center of the player, oriented according to the player's facing. There are a few things you'll want to think about when setting the properties for your emitter:
-The frequency/rate of particle emission

-The initial velocity of a particle (which should also factor in the current velocity of the emitter)

-The initial size of a particle

-etc. etc.


For your flame thrower, all you'd really have to do is create a flame particle every fraction of a second, give it an initial position and an initial velocity, and then let your particle system update all of the particles in its list of active particles. Once a particle has been emitted from the emitter, its position and velocity should be completely independent of the emitter.

It looks like the GTA2 flamethrower is shooting out flame particles at a high rate, and the lifespan of each particle is quite short. I predict that if your character is initially stationary and shooting flames, the distance of the farthest flame from the player should be at its standard length (let's call it 1 flame length). I also predict that if your character suddenly starts moving in the direction of the flames, the flame length should shorten for a brief period, since the player is moving along with the slower-moving flame particles; the flame length may be 0.8 flame lengths until the first flame emitted with the player's velocity reaches the end of its lifespan. If the player moves backwards, we should expect the flame length to extend (1.2 flame lengths?) for a brief moment (and we may even see gaps between particles if the emitter frequency is too low).

Anyways, here are a few things you can try:
-You don't need to store the angle on a flame particle. Give it a velocity instead. Velocity is a 2D or 3D vector: the orientation of the vector gives the direction of movement, and the magnitude gives the speed. Every frame, all you have to do is add the velocity vector (scaled by the frame time, if your velocity is in units per second) to the position coordinate.
-If you use round, radially symmetric fire sprites, you don't have to rotate your flames at all. If you simply must use directional sprites, you can still get the direction of travel by normalizing your velocity vector and using those normalized values to rotate your sprite. A normalized 2D velocity vector already contains [Cos(theta), Sin(theta)].
-I don't know if you're doing it already, but I'd recommend using a memory pool for your particles (you don't want to allocate and deallocate memory for each particle every time one is created and dies).
-If you're feeling crazy, you can also store an "Acceleration" value in a particle. If you remember your calculus and physics, velocity is the change in position over time, and acceleration is the change in velocity over time. (And jerk is the change in acceleration over time, such as easing onto the gas pedal of a car.) For each frame and for every active particle, you'd do: Velocity += Acceleration; Position += Velocity; If you set the acceleration to a small value opposing the velocity, your particle will gradually slow down as it travels.
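Putting the velocity/acceleration update and the memory pool together, a minimal sketch (all names and the 2D layout are just illustrative) might look like:

```csharp
using System;

public struct Particle
{
    public float X, Y;      // position
    public float VX, VY;    // velocity (units per frame)
    public float AX, AY;    // acceleration (change in velocity per frame)
    public float Life;      // remaining lifespan in seconds
    public bool Active;     // pooled: dead particles are flagged, never freed
}

public class ParticlePool
{
    private readonly Particle[] m_pool;

    public ParticlePool(int capacity) { m_pool = new Particle[capacity]; }

    public int ActiveCount
    {
        get { int n = 0; foreach (var p in m_pool) if (p.Active) n++; return n; }
    }

    // Per-frame integration, exactly as described above:
    // Velocity += Acceleration; Position += Velocity.
    public void Update(float dt)
    {
        for (int i = 0; i < m_pool.Length; i++)
        {
            if (!m_pool[i].Active) continue;
            m_pool[i].VX += m_pool[i].AX;
            m_pool[i].VY += m_pool[i].AY;
            m_pool[i].X += m_pool[i].VX;
            m_pool[i].Y += m_pool[i].VY;
            m_pool[i].Life -= dt;
            if (m_pool[i].Life <= 0f) m_pool[i].Active = false;  // recycle slot
        }
    }

    // Reuse the first dead slot instead of allocating a new particle.
    public void Emit(float x, float y, float vx, float vy, float life)
    {
        for (int i = 0; i < m_pool.Length; i++)
        {
            if (m_pool[i].Active) continue;
            m_pool[i] = new Particle { X = x, Y = y, VX = vx, VY = vy, Life = life, Active = true };
            return;
        }
    }
}
```

The emitter just calls Emit() at its chosen frequency; after that, each particle's motion is completely independent of the emitter, which matches the flamethrower behavior described above.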

#5001695 Which game engine.. an indie but experienced programmer?

Posted by slayemin on 16 November 2012 - 10:54 PM

Unity3D is my first recommendation. You can get something up and running pretty fast. Admittedly though, I'm not very experienced with Unity at the moment; I've spent about a week on it. I like that it supports C#, but the built-in IDE just doesn't compare to Visual Studio.

If Unity3D isn't suitable, you could use XNA. With your time budget, I think you could reach your goal without too much pain. Though, you'd have to recreate engine features, like a particle engine, which come out of the box with Unity and are quite powerful.

#4999452 Final Year Project

Posted by slayemin on 09 November 2012 - 04:12 PM

Considering your C++ background, I'd recommend picking up C# and going with XNA. You can have a simple game up and running in hours or a day (simple, like asteroids).

The net coding isn't too hard in C#, but it will take some planning and effort. Here are some questions to think about:
-What data do you want to send to other players? How are you going to organize it? (you're going to have to serialize and deserialize data off of a "stream")
-What happens if a player loses their connection?
-What if there is a data mismatch and your game states get out of sync?
-Are you going to use a peer-to-peer model or a client-server model? If client-server, how do you decide who is the server?

These are areas you shouldn't waste your time on:
-Shrinking the size of a data packet (you're on a LAN)
-Network and application security (This is a school project, not a public release)
-Data validation (use TCP/IP and just trust your clients)

My recommendation is to get something super simple up and running first. Then build the networking component, then make the game more complicated.
Example: Get two player-controlled tanks sitting on flat terrain, each taking turns shooting the other. Keep it simple; it should take a couple of hours. Then, write the net code to get the games to connect and talk on a LAN, and then share game state data. You've now got a basic, networked, multiplayer game! Now you can add features, update the network protocol, and see the changes in gameplay.

As far as network coding is concerned, most of the answers you're looking for are a Google search away...
The basic outline is something like this:
1. Create a thread which listens for connections on a port
2. When a connection comes in, handle it and then wait for more connections
3. When a connection is established, think of both ends as reading and writing data from a file stream which can change under your feet. Before you write data out to the network stream, it has to be converted into an array of bytes (serialization). What those bytes mean is up to you to decide. When you get a stream of bytes, it's up to you to decode and decipher their meaning as well. So, the common practice is to create a "header" which identifies the type of data being sent and the byte length of the data, followed by the data itself.
Trivial example:
suppose I have a "Mage" class like so:

class Mage
{
    unsigned int ID = 1;
    string Name;
};
If I want to send an instance of the Mage who has the name "Tim" to another computer, my serialized data would be:
[ID: 1][Length: 3][Data: "Tim"]
(but all the data would be converted to bytes)

The receiving computer will get an array of bytes containing this data. It reads the first byte, which is the ID. Then it says, "Aha, I know that I'm going to be deserializing a Mage object! The next item will be the length!", and then it reads off the value "3". Then it says, "Aha! The next value will be the data and its length will be three bytes! This data is going to be the name of the mage, so I should read it as a string!" and then it reads the next three bytes and converts them into characters which form the name "Tim".
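A minimal sketch of that wire format in C# (since that's what I recommended above), using BinaryWriter/BinaryReader over a MemoryStream; the layout just mirrors the Mage example, one byte each for the type ID and the length:

```csharp
using System;
using System.IO;
using System.Text;

public static class MageSerializer
{
    // Header layout from the example: [type ID][payload length][payload].
    public static byte[] Serialize(string name)
    {
        byte[] nameBytes = Encoding.ASCII.GetBytes(name);
        using (var ms = new MemoryStream())
        using (var w = new BinaryWriter(ms))
        {
            w.Write((byte)1);                 // ID 1 means "this is a Mage"
            w.Write((byte)nameBytes.Length);  // payload length, e.g. 3 for "Tim"
            w.Write(nameBytes);               // the name itself
            return ms.ToArray();
        }
    }

    public static string Deserialize(byte[] data)
    {
        using (var r = new BinaryReader(new MemoryStream(data)))
        {
            byte id = r.ReadByte();    // "Aha, a Mage!"
            byte len = r.ReadByte();   // "The next 'len' bytes are the name!"
            return Encoding.ASCII.GetString(r.ReadBytes(len));
        }
    }
}
```

In a real protocol you'd write this to the network stream instead of a MemoryStream, and probably widen the length field beyond one byte, but the decode-by-header idea is the same.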

You can also add extra metadata to handle the basic C.R.U.D. principles for all your objects (create, read, update, delete). You're also going to have to decide how you want to handle weird shit, like: you sent a "Create" command for an object, followed by an "Update" command, but the "Update" command gets there first, so it's trying to update an object which doesn't exist yet (which suggests adding frame numbers to the packet metadata and storing actions in a queue, but that may be getting unnecessarily complicated). It's great fun :)