
slayemin

Member Since 23 Feb 2001

#5187257 what is the correct way to move object along direction

Posted by slayemin on 15 October 2014 - 04:46 PM

You can also get a direction vector from an angle:

//theta is the heading angle in radians, in the range [0, 2*pi)
dirX = speed * cos(theta);
dirY = speed * sin(theta);

So, movement in a direction is pretty simple:

Position.X += dirX;
Position.Y += dirY;

Then you can wire up some input controls to change the direction and speed.
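For concreteness, here's a quick Python sketch of the same math (the function name and tuple representation are just for illustration):

```python
import math

def step(pos, theta, speed):
    # theta is the heading angle in radians; convert it into a
    # per-tick movement vector and advance the position.
    dir_x = speed * math.cos(theta)
    dir_y = speed * math.sin(theta)
    return (pos[0] + dir_x, pos[1] + dir_y)

pos = (0.0, 0.0)
pos = step(pos, 0.0, 2.0)           # facing +X: moves 2 units right
pos = step(pos, math.pi / 2, 2.0)   # facing +Y: moves 2 units up
```

Your input handler would then just nudge `theta` and `speed` each frame and call this every tick.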




#5186218 Your thoughts on me hiring a game developer/studio

Posted by slayemin on 10 October 2014 - 12:16 PM

I avoided the word "hire" because he'll need to feel like this is as much his project he's interested in doing.



So you're not going to pay someone to work for you?!

No, no, no. I get what you're trying to say, but this is a really bad move. If you go through the effort and trouble of finding that perfect fit for your project and team, you want to lock that person down so that they stay on the project/team! The best way to do that is to take care of them by satisfying their interests, which include getting paid well, regularly, and on time. This is business 101. You need this programmer to be a stakeholder in your project, and if you aren't a stakeholder invested in them, they aren't going to be a stakeholder invested in you. They'll jump ship as soon as something else comes along which satisfies their interests/desires better than you do. That would be a catastrophe and spell doom for your project, since it's such a large setback and it's hard to bounce back from. So don't make the foolish mistake of letting it happen in the first place. Hire people.
 

My ideas are just ideas for him to see whether he wants to be interested in doing it and pretty much making it his own, with the input of my ideas as well

I'm a programmer and I design my own games. This is a huge red flag. I underestimated the difficulty of good game design, and I now understand it takes a lot of effort to do it right. If *you* are going to be the game designer, then you need to be beyond stellar, work harder than anyone else on the team, and spell out in precise, exact detail how every minute aspect of the game is going to work together. The fact that you want to do some vague hand-waving and essentially tell the programmer to fill in the gaps with their own ideas screams that you don't know exactly what you want. There's nothing more frustrating than spending a ton of effort writing code to implement something which ends up changing completely or getting scrapped on the whims of an uncertain and constantly shifting design. If you're expecting the programmer to take control of the direction of this project, you reduce yourself to a niggling background voice which needs to be appeased while the programmer works on his own ideas of what should be built. Warning: programmers aren't necessarily good designers!
 

I tell them to make a site like this site here, and if they cannot, then I regard them as failing to meet client's expectations.


This is not necessarily the right conclusion to draw. Just because a developer fails to "build a site" doesn't mean that the developer can't do the job and is thus incompetent. No project exists in a vacuum. The project can be doomed before it even begins due to a failure in any number of things which lead up to the project and its management. I strongly recommend that you familiarize yourself with the software development life cycle. Whether you're building websites, video games, or applications, it's all the same process.
Just to refresh your memory on the lead-up to constructing software, you have the following phases:
-1. Visioning step - What would be awesome to do?
0. Feasibility assessment - Can the vision actually be done within the limitations?
1. Requirements gathering - What *must* the software do? Does it meet the vision? Is it feasible?
2. Design - How will the software work to satisfy the requirements? Is it feasible? Does it satisfy the vision?
3. Construction - Build the damned thing! Write the code! Create the assets! Make it come to life! Does it meet the design? Does it meet the requirements? Can it even work with the technical limitations/constraints?

Notice how each phase of the process depends on the phases preceding it? If any of the steps leading up to the construction phase are shit, it doesn't matter how good the programmer may be; the project is going to fail. The only way a shit project can be saved is if the programmer realizes he got set up for failure and redoes the work preceding the work he's supposed to be doing. The right move when this happens is to abandon the project/client and move on to working for/with someone who actually knows what the hell they're doing (some more unscrupulous devs will just string the client along and suck their money dry for a steady paycheck). Sure, the shit client will bad-mouth the programmer for being incompetent/incapable, but what does the programmer care? They've got better people to work for than to concern themselves with what bad clients have to say about them.
 

Yea, maybe that could be more my role, as I'm already an investor for two other companies. I feel maybe being an investor for this one also could be a possibility. But for this one, I have that itch of wanting to have that perfect game I've always wanted to play be realized. Alot of games out there may have 90% of what I'm looking for, and they are all very great games, just that I have that itch to make it 100% so if I can't find it out there already, then only other option is to try to make it. Investing, I'm always going to be doing even after I die.

I'm also an investor (95% stocks). I have also invested in my own indie game studio and put my money where my mouth is. My investment is more than just throwing money at a project/company and hoping for success; it's also a very deeply personal investment of my own time, commitment, and energy. I'm totally invested in myself, and it's a do-or-die situation. With every investment, someone somewhere down the money stream needs to stand up and make use of the financial resources they have been granted to further the goal of the project/organization. The worst way to 'invest' in this sense is to throw money at a new studio/project with a half-assed game design and non-committally hire some sham programmer to make it happen -- you need to be the leader in the trenches, bringing your troops to victory by being the point man leading the charge. If you aren't/can't, your ROI will be -100%. Even if you have $0 financially invested, you can still be heavily invested in the success of a project. If you aren't 100% invested in a project/company in every way, you can't be the leader who asks others to invest 100%.




#5184142 Geometry shader-generated camera-aligned particles seemingly lacking Z writing

Posted by slayemin on 30 September 2014 - 03:09 PM

I had this problem last week and I was struggling to figure out what exactly was going on. Then I found this super awesome article by Shawn Hargreaves:

http://blogs.msdn.com/b/shawnhar/archive/2009/02/18/depth-sorting-alpha-blended-objects.aspx

 

He explains in perfect detail what the problem is and provides some solutions. If you are using alpha blending, the "best" solution is to sort your objects based on their distance from the camera and draw them from back to front.
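To illustrate the idea, here's a minimal back-to-front (painter's) sort in Python; the tuple representation of positions and the camera is made up for the example:

```python
def painter_sort(positions, camera):
    # Sort farthest-first so nearer alpha-blended quads are drawn last,
    # on top of the ones behind them. Squared distance avoids the sqrt.
    def dist_sq(p):
        return sum((a - b) ** 2 for a, b in zip(p, camera))
    return sorted(positions, key=dist_sq, reverse=True)

camera = (0.0, 0.0, 0.0)
draw_order = painter_sort([(1, 0, 0), (5, 0, 0), (3, 0, 0)], camera)
# draw_order: farthest (5,0,0) first, nearest (1,0,0) last
```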

 

I too decided to implement my billboards in HLSL. Unfortunately, since I'm using XNA, I only have the vertex shader and pixel shader at my disposal, so I can't add extra vertices as you would be able to do in DX10+. This means that I have to include the vertex positions of the quad which gets rendered instead of inferring the vertices from a point position. In my implementation, I take advantage of instancing, which allows me to send a huge batch of vertex instance data to the GPU once and then let the shader process the primitive data and draw updates, using just one draw call.
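Because the vertices can't be generated on the GPU, each quad's four corners are expanded from the instance's center plus scaled left/up vectors in the vertex shader. A rough Python sketch of that corner expansion (names are illustrative, not from the code below):

```python
def quad_corners(center, left, up):
    # Each corner is center +/- left +/- up, matching the branch logic in the
    # vertex shaders: BL = c + (l - u), TL = c + (l + u),
    # TR = c - (l - u), BR = c - (l + u).
    def corner(sl, su):
        return tuple(c + sl * l + su * u for c, l, u in zip(center, left, up))
    return {
        "bottom_left":  corner(+1, -1),
        "top_left":     corner(+1, +1),
        "top_right":    corner(-1, +1),
        "bottom_right": corner(-1, -1),
    }

corners = quad_corners((0, 0, 0), (1, 0, 0), (0, 1, 0))
# unit left/up vectors yield a 2x2 quad centered on the origin
```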

 

Here is the HLSL code I'm using for DX9:
 

float4x4 World;
float4x4 View;
float4x4 Projection;
float3 CameraPosition;
float3 CameraUp;
float CurrentTime;			//current time in seconds
float4 Tint : COLOR0 = (float4)1;
bool UseWorldTransforms;
float4 Ambient : COLOR = (float4)1;			//the ambient light color in the scene
float3 LightDirection;			//a vector3 for the directional light
float4 LightColor : COLOR;		//the directional light color

const float Tau = 6.283185307179586476925286766559;

//------- Texture Samplers --------
Texture g_Texture;
sampler TextureSampler = sampler_state { texture = <g_Texture>; magfilter = LINEAR; minfilter = LINEAR; mipfilter=LINEAR; AddressU = wrap; AddressV = wrap;}; 

struct QuadTemplate
{
	float3 Position	    : POSITION0;
	float2 TexCoord	    : TEXCOORD0;
};

struct QuadInstance
{
	float3 Position		: POSITION1;
	float3 Velocity		: POSITION2;
	float3 Normal		: NORMAL0;
	float3 Up			: NORMAL1;
	float3 ScaleRot		: POSITION3;
	float3 EndScaleRot	: POSITION4;
	float2 Time			: POSITION5;
	float4 Color		: COLOR0;
	float4 EndColor		: COLOR1;
};

struct LineSegmentInput
{
	float3 Position		: POSITION0;
	float4 StartColor	: COLOR0;
	float4 EndColor		: COLOR1;
	float3 Velocity		: TEXCOORD0;
	float2 Time			: TEXCOORD1;
	float2 UV		    : TEXCOORD2;
};

struct VSOUT
{
    float4 Position		: POSITION0;			//screen space coordinate of pixel
	float4 Color		: COLOR;				//vertex color
	float2 TexCoord		: TEXCOORD0;			//texture coordinate
};

float3x3 CreateRotation(float myAngle, float3 rotAxis)
{
	float c = cos(myAngle);
	float s = sin(myAngle);
	float3 u = rotAxis;

	return float3x3(
	c + (u.x*u.x)*(1-c), u.x*u.y*(1-c) - u.z*s, u.x*u.z * (1-c) + u.y*s,
	u.y*u.x*(1-c) + u.z * s, c + u.y*u.y*(1-c), u.y*u.z*(1-c) - u.x*s,
	u.z*u.x*(1-c) - u.y * s, u.z*u.y*(1-c)+u.x*s, c + u.z*u.z*(1-c)
	);
}

//3D TEXTURED//////////////////////////////////////////////////////////////////////////////////
VSOUT VS_3DTex(QuadTemplate input)
{
    VSOUT output = (VSOUT)0;

    float4 worldPosition = mul(float4(input.Position, 1), World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
	output.Color = (float4)1;
	output.TexCoord = input.TexCoord;

    return output;
}

//3D Textured thick lines/////////////////////////////////////////////////////////////////
VSOUT VS_3DTexturedLine(LineSegmentInput input)
{
	VSOUT output = (VSOUT)0;

	float age = CurrentTime - input.Time.x;
	float lifeAmt = 0;														//fraction of the lifespan elapsed; a lifespan of -1 means always alive
	if(input.Time.y != -1.0f)
	{
		lifeAmt = saturate(age / input.Time.y);
	}

	float4 pos = (float4)1;
	pos.xyz = input.Position + (input.Velocity * age);
	if(UseWorldTransforms == false)
		pos.xyz += CameraPosition;

	output.Position = mul(mul(pos, View), Projection);
	output.Color = lerp(input.StartColor, input.EndColor, lifeAmt);
	output.TexCoord = input.UV;

	return output;
}

//3D colored 0px lines////////////////////////////////////////////////////////////////////
VSOUT VS_3DLineSegment(LineSegmentInput input)
{
	VSOUT output = (VSOUT)0;

	float age = CurrentTime - input.Time.x;
	float lifeAmt = 0;														//fraction of the lifespan elapsed; a lifespan of -1 means always alive
	if(input.Time.y != -1.0f)
	{
		lifeAmt = saturate(age / input.Time.y);
	}

	float4 pos = (float4)1;
	pos.xyz = input.Position + (input.Velocity * age);
	if(UseWorldTransforms == false)
		pos.xyz += CameraPosition;

	output.Position = mul(mul(pos, View), Projection);
	output.Color = lerp(input.StartColor, input.EndColor, lifeAmt);

	return output;
}

//3D Textured Quads///////////////////////////////////////////////////////////////////////
VSOUT VS_3DQuadTex(QuadTemplate input, QuadInstance instance)
{

	float age = CurrentTime - instance.Time.x;

	float lifeAmt = 0;														//the life amount is a percentage between birth and death, if we're not -1
	if(instance.Time.y != -1.0f)
	{
		lifeAmt = saturate(age / instance.Time.y);
	}

	float3 m_scale = lerp(instance.ScaleRot, instance.EndScaleRot, lifeAmt);		//linear interpolate the scale values to get current scale
	float m_rotation = instance.ScaleRot.z + (instance.EndScaleRot.z * age);	//current rotation is initial rotation + sum of rotational speed over time
	float3 m_center = instance.Position;			//this is the transformed center position for the quad.
	
	m_center +=  (instance.Velocity * age);

	//TODO: Handle the case where the normal is set to (0,1,0) or (0,-1,0)
	//Note: this is done in the application, not the shader.

	float3 m_normal = instance.Normal;										//the normal is going to be given to us and is fixed.
	//float3 m_up = float3(0,1,0);											//the up vector is simply a cross of the left vector and normal vector
	float3 m_up = instance.Up;
	float3 m_left = cross(m_normal, m_up);									//the left vector can be derived from the camera orientation and quad normal
	m_up = cross(m_left, m_normal);

	float3x3 m_rot = CreateRotation(-m_rotation, m_normal);					//Create a rotation matrix around the object space normal axis by the given radian amount.
																			//This rotation matrix must then be applied to the left and up vectors.
	m_left = mul(m_left, m_rot) * m_scale.x;								//apply rotation and scale to the left vector
	m_up = mul(m_up, m_rot) * m_scale.y;									//apply rotation and scale to the up vector

	//Since we have to orient our quad to always face the camera, we have to change the input position values based on the left and up vectors.
	//the left and up vectors are in untranslated space. We know the translation, so we just set the vertex position to be the translation added to
	//the rotated and scaled left/up vectors.

	float3 pos = (float3)0;
	if(input.Position.x == -1 && input.Position.y == -1)			//bottom left corner
	{
		pos = m_center + (m_left - m_up);	
	}
	else if(input.Position.x == -1 && input.Position.y == 1)		//top left corner
	{
		pos = m_center + (m_left + m_up);
	}
	else if(input.Position.x == 1 && input.Position.y == 1)			//top right corner
	{
		pos = m_center - (m_left - m_up);
	}
	else															//bottom right corner
	{
		pos = m_center - (m_left + m_up);
	}

	//Since we've already manually applied our world transformations, we can skip that matrix multiplication.
	//note that we HAVE to use a Vector4 for the world position because our view & projection matrices are 4x4.
	//the matrix multiplication function isn't smart enough to use a vector3. The "w" value must be 1.

	float4 worldPosition = 1.0f;
	worldPosition.xyz = pos;
	if(UseWorldTransforms == false)
		worldPosition.xyz += CameraPosition;

	VSOUT output;
    output.Position = mul(mul(worldPosition, View), Projection);
	output.Color = lerp(instance.Color, instance.EndColor, lifeAmt);
	output.TexCoord = input.TexCoord;
	return output;
}

//3D Textured point sprites///////////////////////////////////////////////////////////////////////
VSOUT VS_3DPointSpriteTex(QuadTemplate input, QuadInstance instance)
{
	/* 
	SUMMARY: A point sprite is a special type of quad which will always face the camera. The point sprite
	can be scaled and rotated around the camera-sprite axis (normal) by any arbitrary angle. Because of these
	special behaviors, we have to apply some special instructions beyond just multiplying a point by the world
	matrix.
	*/

	float age = CurrentTime - instance.Time.x;

	float lifeAmt = 0;														//the life amount is a percentage between birth and death, if we're not -1
	if(instance.Time.y != -1.0f)
	{
		lifeAmt = saturate(age / instance.Time.y);
	}

	float3 m_scale = lerp(instance.ScaleRot, instance.EndScaleRot, lifeAmt);		//linear interpolate the scale values to get current scale
	float m_rotation = (instance.ScaleRot.z * Tau) + (instance.EndScaleRot.z * Tau * age);	//current rotation is initial rotation + sum of rotational speed over time
	float3 m_center = instance.Position;			//this is the transformed center position for the quad.
	m_center +=  (instance.Velocity * age);

	float3 m_normal = normalize(CameraPosition - m_center);					//the normal is going to be dependent on the camera position and the center position
	float3 m_left = cross(m_normal, CameraUp);								//the left vector can be derived from the camera orientation and quad normal
	float3 m_up = cross(m_left, m_normal);									//the up vector is simply a cross of the left vector and normal vector
	float3x3 m_rot = CreateRotation(m_rotation, m_normal);					//Create a rotation matrix around the object space normal axis by the given radian amount.
																			//This rotation matrix must then be applied to the left and up vectors.
	m_left = mul(m_left, m_rot) * m_scale.x;								//apply rotation and scale to the left vector
	m_up = mul(m_up, m_rot) * m_scale.y;									//apply rotation and scale to the up vector

	//Since we have to orient our quad to always face the camera, we have to change the input position values based on the left and up vectors.
	//the left and up vectors are in untranslated space. We know the translation, so we just set the vertex position to be the translation added to
	//the rotated and scaled left/up vectors.

	float3 pos = (float3)0;
	if(input.Position.x == -1 && input.Position.y == -1)			//bottom left corner
	{
		pos = m_center + (m_left - m_up);	
	}
	else if(input.Position.x == -1 && input.Position.y == 1)		//top left corner
	{
		pos = m_center + (m_left + m_up);
	}
	else if(input.Position.x == 1 && input.Position.y == 1)			//top right corner
	{
		pos = m_center - (m_left - m_up);
	}
	else															//bottom right corner
	{
		pos = m_center - (m_left + m_up);
	}

	//Since we've already manually applied our world transformations, we can skip that matrix multiplication.
	//note that we HAVE to use a Vector4 for the world position because our view & projection matrices are 4x4.
	//the matrix multiplication function isn't smart enough to use a vector3. The "w" value must be 1.

	float4 worldPosition = 1.0f;
	worldPosition.xyz = pos;
	if(UseWorldTransforms == false)
		worldPosition.xyz += CameraPosition;

	VSOUT output;
    output.Position = mul(mul(worldPosition, View), Projection);
	output.Color = lerp(instance.Color, instance.EndColor, lifeAmt);
	output.TexCoord = input.TexCoord;
	return output;
}

//3D Textured Billboard///////////////////////////////////////////////////////////////////////
VSOUT VS_3DBillboardTex(QuadTemplate input, QuadInstance instance)
{
	/* 
	SUMMARY: A billboard is a special type of quad which will always face the camera, but is constrained along the
	y-axis. The billboard can be scaled and rotated around the camera-sprite axis (normal) by any arbitrary angle. 
	Because of these special behaviors, we have to apply some special instructions beyond just multiplying a point 
	by the world matrix.
	*/

	float age = CurrentTime - instance.Time.x;								//total elapsed time since birth

	float lifeAmt = 0;															//fraction of the lifespan elapsed; a lifespan of -1 means always alive
	if(instance.Time.y != -1.0f)
	{
		lifeAmt = saturate(age / instance.Time.y);
	}

	float3 m_scale = lerp(instance.ScaleRot, instance.EndScaleRot, lifeAmt);		//linear interpolate the scale values to get current scale
	float m_rotation = (instance.ScaleRot.z * Tau) + (instance.EndScaleRot.z * Tau * age);	//current rotation is initial rotation + sum of rotational speed over time
	float3 m_center = instance.Position;			//this is the transformed center position for the quad.
	m_center +=  (instance.Velocity * age);

	float3 m_normal = CameraPosition - m_center;							//the normal is going to be dependent on the camera position and the center position
	m_normal.y = 0;
	m_normal = normalize(m_normal);
	float3 m_up = float3(0,1,0);											//the up vector is simply the unit Y value
	float3 m_left = cross(m_normal, m_up);								//the left vector can be derived from the camera orientation and quad normal
	
	float3x3 m_rot = CreateRotation(m_rotation, m_normal);					//Create a rotation matrix around the object space normal axis by the given radian amount.
																			//This rotation matrix must then be applied to the left and up vectors.
	m_left = mul(m_left, m_rot) * m_scale.x;								//apply rotation and scale to the left vector
	m_up = mul(m_up, m_rot) * m_scale.y;									//apply rotation and scale to the up vector

	//Since we have to orient our quad to always face the camera, we have to change the input position values based on the left and up vectors.
	//the left and up vectors are in untranslated space. We know the translation, so we just set the vertex position to be the translation added to
	//the rotated and scaled left/up vectors.

	float3 pos = (float3)0;
	if(input.Position.x == -1 && input.Position.y == -1)			//bottom left corner
	{
		pos = m_center + (m_left - m_up);	
	}
	else if(input.Position.x == -1 && input.Position.y == 1)		//top left corner
	{
		pos = m_center + (m_left + m_up);
	}
	else if(input.Position.x == 1 && input.Position.y == 1)			//top right corner
	{
		pos = m_center - (m_left - m_up);
	}
	else															//bottom right corner
	{
		pos = m_center - (m_left + m_up);
	}

	//Since we've already manually applied our world transformations, we can skip that matrix multiplication.
	//note that we HAVE to use a Vector4 for the world position because our view & projection matrices are 4x4.
	//the matrix multiplication function isn't smart enough to use a vector3. The "w" value must be 1.

	float4 worldPosition = 1.0f;
	worldPosition.xyz = pos;
	if(UseWorldTransforms == false)
		worldPosition.xyz += CameraPosition;

	VSOUT output;
    output.Position = mul(mul(worldPosition, View), Projection);
	output.Color = lerp(instance.Color, instance.EndColor, lifeAmt);
	output.TexCoord = input.TexCoord;
	return output;
}

//3D Vertex colors only///////////////////////////////////////////////////////////////////////////
VSOUT VS_3D(float4 inPosition : POSITION, float4 inColor : COLOR)
{
    VSOUT output;

    float4 worldPosition = mul(inPosition, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
	output.Color = inColor;
	output.TexCoord = 0;

    return output;
}

VSOUT VS_2D(float4 inPos : POSITION, float4 inColor : COLOR)
{
    VSOUT Output = (VSOUT)0;

	Output.Position = inPos;
	Output.Color = inColor;

    return Output;
}
//PIXEL SHADERS///////////////////////////////////////////////////////////////////////////

float4 PS_3D(VSOUT output) : COLOR0
{
	//Output.Color = tex2D(TextureSampler, vs_output.TextureCoord);
	//Output.Color.rgb *= saturate(PSIn.LightingFactor) + xAmbient;

	//return tex2D(TextureSampler, output.TexCoord);

	return output.Color * Tint * Ambient;
}

float4 PS_2D(VSOUT vs_output) : COLOR0
{
	return vs_output.Color * Tint;
}

float4 PS_3DTex(VSOUT input) : COLOR0
{
	float4 c = tex2D(TextureSampler, input.TexCoord);
	if(c.a <= 0.1)
		discard;
	//return c * (input.Color + Ambient);
	float3 Up = (float3)0;
	Up.y = 1;

	float LightFactor = dot(Up, -LightDirection);

	return  (c * input.Color * Tint) * saturate(LightColor + Ambient);
	//return  ((c * input.Color ) * saturate((LightColor * LightFactor) + Ambient)) ;
}

//TECHNIQUES///////////////////////////////////////////////////////////////////////////
technique Technique2D
{
    pass Pass0
    {
        VertexShader = compile vs_2_0 VS_2D();
        PixelShader = compile ps_2_0 PS_2D();
    }
}

technique VertexColor3D
{
    pass Pass0
    {
        VertexShader = compile vs_2_0 VS_3D();
        PixelShader = compile ps_2_0 PS_3D();
    }
}

technique Textured3D
{
    pass Pass0
    {
        VertexShader = compile vs_2_0 VS_3DTex();
        PixelShader = compile ps_2_0 PS_3DTex();
    }
}

technique TexturedQuad3D
{
	pass Pass0
	{
		VertexShader = compile vs_3_0 VS_3DQuadTex();
		PixelShader = compile ps_3_0 PS_3DTex();
	}
}

technique TexturedPointSprite3D
{
	pass Pass0
	{
		VertexShader = compile vs_3_0 VS_3DPointSpriteTex();
		PixelShader = compile ps_3_0 PS_3DTex();
	}
}

technique TexturedBillboard3D
{
	pass Pass0
	{
		VertexShader = compile vs_3_0 VS_3DBillboardTex();
		PixelShader = compile ps_3_0 PS_3DTex();
	}
}

technique LineSegment3D
{
	pass Pass0
	{
		VertexShader = compile vs_3_0 VS_3DLineSegment();
		PixelShader = compile ps_3_0 PS_3D();
	}
}

technique TexturedLine3D
{
	pass Pass0
	{
		VertexShader = compile vs_3_0 VS_3DTexturedLine();
		PixelShader = compile ps_3_0 PS_3DTex();
	}
}

And here is my complete rendering code for drawing quads, billboards, and point sprites using the above HLSL:
 

public void Render(Camera3D camera, coreTime worldTime)
{
	
	//...snipped irrelevant code...

	if (m_settings.PainterSort)
	{
		PainterSort(camera.Position);
	}

	//rebuild the vertex and index buffers if the collection has been changed.
	if (m_dirtyBuffers > 0)
		RebuildBuffers();

	//activate our buffers
	//m_settings.GraphicsDevice.SetVertexBuffer(m_psVB);
	RasterizerState rs = m_settings.GraphicsDevice.RasterizerState;
	if (m_settings.DoubleSided == true)
	{
		RasterizerState rs2 = new RasterizerState();
		rs2.CullMode = CullMode.None;
		m_settings.GraphicsDevice.RasterizerState = rs2;
	}

	m_settings.GraphicsDevice.DepthStencilState = DepthStencilState.DepthRead;
	m_settings.GraphicsDevice.Indices = m_IB;
	m_settings.GraphicsDevice.BlendState = m_settings.BlendState;

	m_effect.Parameters["g_Texture"].SetValue(m_settings.Texture);
	m_effect.Parameters["UseWorldTransforms"].SetValue(m_settings.UseWorldTransforms);
	m_effect.Parameters["View"].SetValue(camera.View);
	m_effect.Parameters["Projection"].SetValue(camera.Projection);
	m_effect.Parameters["CameraPosition"].SetValue(camera.Position);
	m_effect.Parameters["CameraUp"].SetValue(camera.Up);
	m_effect.Parameters["CurrentTime"].SetValue((float)worldTime.TotalWorldTime.TotalSeconds);
	m_effect.Parameters["Tint"].SetValue(m_settings.Tinting.ToVector4());

	if (m_settings.UseWorldLights)
	{
		m_effect.Parameters["Ambient"].SetValue(BaseSettings.AmbientLight.ToVector4());
		m_effect.Parameters["LightColor"].SetValue(BaseSettings.AllDirLights[0].Color.ToVector4());
		m_effect.Parameters["LightDirection"].SetValue(BaseSettings.AllDirLights[0].Direction);
	}
	

	#region Draw Quads
	if (m_quadVB != null && m_quadVB.VertexCount > 0)
	{
		m_effect.CurrentTechnique = m_effect.Techniques["TexturedQuad3D"];

		m_settings.GraphicsDevice.SetVertexBuffers(
		new VertexBufferBinding(m_VB, 0, 0),
		new VertexBufferBinding(m_quadVB, 0, 1));

		foreach (EffectPass pass in m_effect.CurrentTechnique.Passes)
		{
			pass.Apply();
			//m_settings.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, m_psList.Count * 4, 0, m_psList.Count * 2);
			m_settings.GraphicsDevice.DrawInstancedPrimitives(PrimitiveType.TriangleList, 
				0, //base vertex
				0,  //min vertex index
				4, //vertex count
				0, //start index
				2, //primitive count
				m_quadVB.VertexCount  //instance count
				);
		}
	}
	#endregion

	#region Draw Point sprites
	if (m_psVB != null && m_psVB.VertexCount > 0)
	{
		m_effect.CurrentTechnique = m_effect.Techniques["TexturedPointSprite3D"];

		m_settings.GraphicsDevice.SetVertexBuffers(
		new VertexBufferBinding(m_VB, 0, 0),
		new VertexBufferBinding(m_psVB, 0, 1));

		foreach (EffectPass pass in m_effect.CurrentTechnique.Passes)
		{
			pass.Apply();
			//m_settings.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, m_psList.Count * 4, 0, m_psList.Count * 2);
			m_settings.GraphicsDevice.DrawInstancedPrimitives(PrimitiveType.TriangleList, 0, 0,
				4, //vertex count
				0, //start index
				2, //primitive count
				m_psVB.VertexCount  //instance count
				);
		}
	}
	#endregion

	#region Draw billboards
	if (m_bbVB != null && m_bbVB.VertexCount > 0)
	{
		m_effect.CurrentTechnique = m_effect.Techniques["TexturedBillboard3D"];

		m_settings.GraphicsDevice.SetVertexBuffers(
		new VertexBufferBinding(m_VB, 0, 0),
		new VertexBufferBinding(m_bbVB, 0, 1));

		foreach (EffectPass pass in m_effect.CurrentTechnique.Passes)
		{
			pass.Apply();
			//m_settings.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, m_psList.Count * 4, 0, m_psList.Count * 2);
			m_settings.GraphicsDevice.DrawInstancedPrimitives(PrimitiveType.TriangleList, 0, 0,
				4, //vertex count
				0, //start index
				2, //primitive count
				m_bbVB.VertexCount  //instance count
				);

		}
	}
	#endregion

	m_settings.GraphicsDevice.DepthStencilState = DepthStencilState.Default;
	m_settings.GraphicsDevice.RasterizerState = rs;
	m_settings.GraphicsDevice.BlendState = BlendState.Opaque;
}

Here are the two custom vertex definitions I came up with, which are necessary for drawing the instanced primitives using the above HLSL:
QuadInstanceVertex:
 

public struct QuadInstanceVertex
{

	//Optimize only if you have been able to profile a problem with the vertex byte size. Don't prematurely optimize and over-engineer.

	/// <summary>
	/// Offset from origin
	/// </summary>
	public Vector3 Position;

	/// <summary>
	/// Particle velocity
	/// </summary>
	public Vector3 Velocity;

	/// <summary>
	/// Normal direction for the quad face
	/// Quad: Set value; PointSprite: (0,0,0); BillBoard: (0,0,0)
	/// </summary>
	public Vector3 Normal;

	/// <summary>
	/// We actually have to include the up vector for quads because we just can't derive it within the shader.
	/// </summary>
	/// <remarks>
	/// The problem with trying to derive an up direction within the shader is that the shader compiler will UNROLL
	/// all of your branching logic. If you try to write any code to avoid dividing by zero, one of the branches will
	/// take that path anyways and divide by zero, causing *visual studio* to crash. So, rather than trying to run
	/// logic in the shader, we have to run it in the application.
	/// </remarks>
	public Vector3 Up;

	/// <summary>
	/// The starting width, length, and normal-axis rotation
	/// </summary>
	public Vector3 Scale;

	/// <summary>
	/// the end width, length and normal-axis rotational speed.
	/// length and width with be lerp'd, rotational speed will be added to initial value
	/// </summary>
	public Vector3 EndScale;

	/// <summary>
	/// Crucial timing values for the vertex shader.
	/// X = Spawn time of the particle/quad
	/// Y = lifespan of the quad; 
	///       -1: always alive
	///        0: dead
	///       0+: alive
	/// </summary>
	/// <remarks>Your quad manager is responsible for removing a quad/particle when the lifespan reaches zero.</remarks>
	public Vector2 Time;

	/// <summary>
	/// The starting color.
	/// </summary>
	public Color Color;

	/// <summary>
	/// The ending color. Current value will be lerp'd between this and start color.
	/// </summary>
	public Color EndColor;


	
	/// <summary>
	/// Creates a particle with the following properties.
	/// </summary>
	/// <param name="pos">The position offset from the origin</param>
	/// <param name="norm">the normal of the face</param>
	/// <param name="scaleRot">initial length and width</param>
	/// <param name="color">initial color</param>
	/// <param name="zrot">initial rotation around the z-axis</param>
	/// <param name="endScaleRot">end length and width</param>
	/// <param name="t">x = spawn time; y = lifespan (-1: infinite, 0: dead; gt 0: alive)</param>
	/// <param name="dz">rotational speed</param>
	/// <param name="vel">velocity of the particle (if you want it to move)</param>
	/// <param name="endColor">end color of the particle</param>
	public QuadInstanceVertex(Vector3 pos, Vector3 vel, Vector3 norm, Vector3 up, Vector3 scaleRot, Vector3 endScaleRot, Color color, Color endColor, Vector2 t)
	{
		Position = pos;
		Velocity = vel;
		Normal = norm;
		Up = up;

		Scale = scaleRot;
		EndScale = endScaleRot;

		Color = color;
		EndColor = endColor;
		
		Time = t;
	}

	/// <summary>
	/// Creates a quad instance with the given properties.
	/// </summary>
	/// <param name="pos">The position offset from the origin</param>
	/// <param name="norm">The normal of the face</param>
	/// <param name="up">The up direction of the face</param>
	/// <param name="scaleRot">Width, length, and rotation around the normal axis</param>
	/// <param name="color">The color tint</param>
	public QuadInstanceVertex(Vector3 pos, Vector3 norm, Vector3 up, Vector3 scaleRot, Color color)
	{
		Position = pos;
		Velocity = Vector3.Zero;
		Normal = norm;
		Up = up;

		Scale = scaleRot;
		EndScale = scaleRot;

		Color = color;
		EndColor = color;
		
		Time = new Vector2(0,-1);
	}

	/*Note: The semantic usage index must be unique across ALL vertex buffers. The geometry vertex buffer already uses Position0 and TexCoord0.*/

	public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration(
		new VertexElement( 0, VertexElementFormat.Vector3, VertexElementUsage.Position, 1),          //12 (pos)
		new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Position, 2),         //12 (velocity)
		new VertexElement(24, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),           //12 (norm)
		new VertexElement(36, VertexElementFormat.Vector3, VertexElementUsage.Normal, 1),           //12 (up)
		new VertexElement(48, VertexElementFormat.Vector3, VertexElementUsage.Position, 3),         //12 (scale/rot)
		new VertexElement(60, VertexElementFormat.Vector3, VertexElementUsage.Position, 4),         //12 (end scale/rot)
		new VertexElement(72, VertexElementFormat.Vector2, VertexElementUsage.Position, 5),         //8  (time)
		new VertexElement(80, VertexElementFormat.Color, VertexElementUsage.Color, 0),              //4  (color)
		new VertexElement(84, VertexElementFormat.Color, VertexElementUsage.Color, 1)               //4  (end color)
		); 

	public const int SizeInBytes = 88;	//sum of the element sizes above: 6*12 + 8 + 2*4 = 88
}

QuadVertex:

/// <summary>
/// A vertex structure with Position and Texture coordinate data
/// </summary>
public struct QuadVertex : IVertexType
{
	/*So, we're gonna get funky here. The plan: the R,G,B components of a color denote any color TINT for the quad, and since we also have an alpha channel,
	 we can store the CornerID of the vertex within it! (This version of the struct only carries Position and UV, though.)*/
	public Vector3 Position;
	public Vector2 UV;


	/// <summary>
	/// Creates a vertex which contains position and texture UV info
	/// </summary>
	/// <param name="position">The position of the vertex, relative to the center</param>
	/// <param name="uv">The UV texture coordinates</param>
	public QuadVertex(Vector3 position, Vector2 uv)
	{
		Position = position;
		UV = uv;
	}

	public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration(
		new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),                              //12 bytes
		new VertexElement(12, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0)                    //8 bytes
		);

	public const int SizeInBytes = 20;

	VertexDeclaration IVertexType.VertexDeclaration
	{
		get { return VertexDeclaration; }
	}
}

Quad Class:

public class Quad
{
	static QuadVertex[] m_verts;
	static int[] m_indices;

	public int Key = -1;

	//This is a vertex which contains all of our instance info. You can use it as either a data container
	//or as a vertex to be used by a vertex shader.
	public QuadInstanceVertex Info;

	private void Init(Vector3 position, Vector3 velocity, Vector3 normal, Vector3 up, Vector3 startSize, Vector3 endSize, Color startColor, Color endColor, Vector2 time)
	{

		BuildVerts();
		BuildIndices();

		Info = new QuadInstanceVertex(position, velocity, normal, up, startSize, endSize, startColor, endColor, time);
	}

	public Quad()
	{
		BuildVerts();
		BuildIndices();
	}

	/// <summary>
	/// Creates a quad based on the given values
	/// </summary>
	/// <param name="position">The center position of the quad</param>
	/// <param name="normal">The normal direction indicates the facing direction for the quad</param>
	/// <param name="up">The up direction of the quad</param>
	/// <param name="size">This is the scaled size of the quad</param>
	/// <param name="orientation">This is the orientation of the quad around the normal axis</param>
	public Quad(Vector3 position, Vector3 normal, Vector3 up, float size, float orientation = 0)
	{
		Init(position, Vector3.Zero, normal, up, new Vector3(size,size, orientation), new Vector3(size,size, orientation), Color.White, Color.White, new Vector2(0,-1));
	}

	/// <summary>
	/// Creates a POINT SPRITE at the given location. Use HLSL code for the rest.
	/// </summary>
	/// <param name="position">the center position of the point sprite</param>
	/// <param name="size">the size of the point sprite</param>
	/// <param name="orientation">the rotation around the normal axis for the point sprite</param>
	public Quad(Vector3 position, float size, float orientation = 0)
	{
		Init(position, Vector3.Zero, Vector3.Zero, Vector3.Up, new Vector3(size, size, orientation), new Vector3(size, size, orientation), Color.White, Color.White, new Vector2(0, -1));
	}

	/// <summary>
	/// Creates a generalized quad for use with hardware instancing.
	/// </summary>
	/// <param name="position">This is the position in the game world</param>
	/// <param name="velocity">This is how much the quad moves each frame</param>
	/// <param name="normal">QUAD Only: This is the facing direction of the quad. Point sprites and billboards will derive this value based on camera position.</param>
	/// <param name="startSize">Starting scale and rotation: X = width, Y = height, Z = initial radian rotation</param>
	/// <param name="endSize">Ending scale and rotation: X = width, Y = height, Z = change in rotation over time</param>
	/// <param name="startColor">The starting color values for tinting. Use Color.White if you don't want tinting</param>
	/// <param name="endColor">The ending color values for tinting. Use Color.White if you don't want tinting</param>
	/// <param name="time">X = Birth time in gametime seconds. Y = lifespan in seconds. Set lifespan to -1 if the quad is static. Default: (0, -1)</param>
	public Quad(Vector3 position, Vector3 velocity, Vector3 normal, Vector3 up, Vector3 startSize, Vector3 endSize, Color startColor, Color endColor, Vector2 time)
	{
		Init(position, velocity, normal, up, startSize, endSize, startColor, endColor, time);
	}

	static void BuildIndices()
	{
		if (m_indices == null)
		{
			m_indices = new int[6];

			//create the indices for this quad. Note: winding order is clockwise.
			m_indices[0] = 0;
			m_indices[1] = 1;
			m_indices[2] = 2;
			m_indices[3] = 0;
			m_indices[4] = 2;
			m_indices[5] = 3;
		}
	}

	/// <summary>
	/// This gets the six indices for this quad.
	/// The indices can then be inserted into an index buffer.
	/// </summary>
	/// <returns>Six indices for drawing a triangle list</returns>
	public static int[] Indicies
	{
		get
		{
			if (m_indices == null)
				BuildIndices();

			return m_indices;
		}
	}

	static void BuildVerts()
	{
		if (m_verts == null)
		{
			m_verts = new QuadVertex[4];

			m_verts[0] = new QuadVertex(new Vector3(-1, -1, 0), new Vector2(0, 1));    //bottom left corner
			m_verts[1] = new QuadVertex(new Vector3(-1, 1, 0), new Vector2(0, 0));    //top left corner
			m_verts[2] = new QuadVertex(new Vector3(1, 1, 0), new Vector2(1, 0));    //top right corner
			m_verts[3] = new QuadVertex(new Vector3(1, -1, 0), new Vector2(1, 1));    //bottom right corner
		}
	}

	public static QuadVertex[] Verts
	{
		get
		{
			if (m_verts == null)
				BuildVerts();
			
			return m_verts;
		}
	}

}



#5181090 I need some guidance please

Posted by slayemin on 17 September 2014 - 01:37 PM

Okay... Let's take a step back. (Warning: ample amounts of coffee has been consumed!)

1. It's turn based.

2. It's a card game

3. It's multiplayer

4. It's all in your head at the moment

 

Therefore, the first step you NEED to take is to get it out of your head and onto something solid. As it stands, there is no reason this needs to be a computer game yet. What you want to do right now is prototype the hell out of this card game. That's really easy for a turn based multiplayer card game. Get a bunch of paper, some scissors, pens/pencils, a couple friends, and make some cards. Explain to your friends how the card game is played and then play it. Does it work? Do the mechanics work? Is it fun? Do you actually have a game on your hands? What needs to be tweaked/modified? It's super easy for you to modify all of the game rules when they aren't written into code!

 

Once you have a super fun, well balanced card game prototype working in the physical world, THEN think about creating it on the computer. You'll then seriously know what works and what doesn't work and you won't be scratching your head at the keyboard wondering if a game mechanic works and trying to test out ideas in isolation by writing lots of code which may or may not get thrown away. You'll have the game you want to make, so the rest of the struggle in your development will only consist of writing up the code and the networking, rather than trying to design and develop simultaneously (don't do this! okay?! Design first, develop second! trust me. Doing this will save you months of time and rework!!! You can thank me later.)

 

Let's talk about why you NEED to get this game out of your head and onto a physical medium (documentation, spreadsheets, cards, rulebooks, etc). IF you are building this game by yourself, you need to keep your rules straight. If they sit in your head, you will forget. Head space knowledge shifts like the winds, so tamp it down by writing it down. If you don't, you're going to have constantly changing requirements which will be a nightmare to code for.

IF you are going to bring OTHER PEOPLE onto the project, then you need to COMMUNICATE to them what needs to be built in very explicit detail! Communication is the word of the day. If there's anything YOU need to do really well, it is communicate your idea perfectly so that everyone understands it exactly the same, exactly how you envisioned it, and everyone on the team knows exactly what needs to be done. None of this can happen if the whole idea is in your head. Nobody is a mind reader. They can't reach in and figure out what's going on in there, and if you say something, there is a high chance it will be interpreted wrong. Therefore, super good documentation, preferably a physical prototype of the game, and a way for all team members to talk to each other spontaneously in a face-to-face setting will be the best thing you can do for your project at this stage.

Chances are also very good that the idea in your head seems to work in your head, but there's this nasty thing about head space ideas: they are like dreams. If you look at them at a quick glance, it all appears rosy and clean. But if you start focusing in on something in particular, it gets murky and uncertain. Determining how exactly that vision should look at the granular level is what separates the pros from the amateurs. Thus, documenting and prototyping the details is to your benefit because it brings up these weak points in the vision and forces you to address them.

 

Once you have all of this done, and only when you have this done, should you start looking at building the game in the digital space.




#5180763 Quad trees vs R-trees vs Spacial hashmaps

Posted by slayemin on 16 September 2014 - 11:33 AM

I've only implemented quad trees and octrees. Here is what I like about them:
*They are easy to understand conceptually

*They don't take very long to implement (4-12 hours)

*They can be used recursively, which shortens code

*The runtime is O(log N)

*Rather than trying to collide with objects directly, you collide with the object container first to cull out unneeded collision checks
 

Here's what I don't like:

*They can be a pain in the ass to debug. How many branches deep do you want to sift through in your IDE's debugger before you find your object?

*Generally, you just dump the tree and rebuild it every frame. What a waste of CPU time :(  (also, it's a waste of programmer time to try to optimize it -- not because it can't be done, but because of all the bugs you'll probably introduce with the additional complexity)

 

I'm sure there are better solutions out there (maybe KD trees?). Today, I'd still implement a quadtree or octree (or maybe try a balanced KD tree).
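If you want to see the "collide with the container first" idea in actual code, here's a minimal point quadtree sketch. It's illustrative only -- the QuadTree class, its Insert/Query methods, and the tuple-of-floats points are my own names for this post, not from XNA or any engine:

```csharp
using System;
using System.Collections.Generic;

// Minimal point quadtree: a node stores points until it hits capacity,
// then splits into four child regions and pushes its points down.
public class QuadTree
{
    const int Capacity = 4;                 // max points per node before subdividing
    readonly float x, y, w, h;              // this node's region (top-left corner + size)
    readonly List<(float px, float py)> points = new List<(float px, float py)>();
    QuadTree[] children;                    // null until the node splits

    public QuadTree(float x, float y, float w, float h)
    { this.x = x; this.y = y; this.w = w; this.h = h; }

    bool Contains(float px, float py)
        => px >= x && px < x + w && py >= y && py < y + h;

    public bool Insert(float px, float py)
    {
        if (!Contains(px, py)) return false;
        if (children == null && points.Count < Capacity)
        { points.Add((px, py)); return true; }

        if (children == null) Subdivide();
        foreach (var c in children)
            if (c.Insert(px, py)) return true;
        return false;
    }

    void Subdivide()
    {
        float hw = w / 2, hh = h / 2;
        children = new[]
        {
            new QuadTree(x,      y,      hw, hh),
            new QuadTree(x + hw, y,      hw, hh),
            new QuadTree(x,      y + hh, hw, hh),
            new QuadTree(x + hw, y + hh, hw, hh),
        };
        foreach (var p in points)
            foreach (var c in children)
                if (c.Insert(p.px, p.py)) break;
        points.Clear();
    }

    // Collide with the container first: we only descend into regions
    // that overlap the query rectangle, culling everything else.
    public void Query(float qx, float qy, float qw, float qh,
                      List<(float, float)> results)
    {
        bool overlaps = qx < x + w && qx + qw > x && qy < y + h && qy + qh > y;
        if (!overlaps) return;
        foreach (var p in points)
            if (p.px >= qx && p.px < qx + qw && p.py >= qy && p.py < qy + qh)
                results.Add(p);
        if (children != null)
            foreach (var c in children) c.Query(qx, qy, qw, qh, results);
    }
}

public static class Demo
{
    public static void Main()
    {
        var tree = new QuadTree(0, 0, 100, 100);
        // Five points forces one subdivision (capacity is 4).
        tree.Insert(10, 10); tree.Insert(60, 60); tree.Insert(10, 60);
        tree.Insert(60, 10); tree.Insert(25, 25);

        var hits = new List<(float, float)>();
        tree.Query(0, 0, 50, 50, hits);   // only (10,10) and (25,25) fall inside
        Console.WriteLine(hits.Count);    // 2
    }
}
```

Note that Query tests the node's rectangle before it ever looks at individual points -- that region check is where the culling win comes from.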




#5180622 Using C++ codebase

Posted by slayemin on 15 September 2014 - 11:51 PM

Your questions are a bit vague, even though you're trying to be precise. I'll try to answer them though...

 

"What is the general structure of a larger C++ program?"

That really depends on what the program is trying to do and how it is architected. This is very hard to answer because it's similar to asking "What is the general structure of a building?"

So... for most C++ programs, you usually have tons of files broken down by classes. Usually you have a header file which defines all of the class declarations, variables, function prototypes, etc. Then the CPP file usually contains the implementations of the "stuff" described in the header file. I'm sure you already know that though, so I'm not sure what you're really asking. The second bit is how all of those classes interact with each other (aka, object oriented programming). When enough classes are used often enough and are general enough to be used all over the place, people create "libraries" which expose the commonly used data structures (such as lists/vectors, hash tables, sorting methods, etc). Usually you don't care about the implementation so long as it works as expected.

 

I think that if you're trying to incorporate the exposed functions from "math.h" into UE4, you'd probably want to create wrappers for each of the methods in math.h you'd like to use in UE4. But, as I recall, UE4 already comes with a robust math library so you're probably wasting your time and instead should be trying to figure out how to use their math library. I'm sure the differences in the implementation of Sine and Cosine yield the same results, right? Regardless, it doesn't sound right that you should be having trouble with including a header file into your program. Usually when you include a header file, you're giving the compiler a path to a distinct file which exists somewhere on the hard drive. When the compiler compiles the program, it takes all of the header files and cpp files and creates a single binary executable. If your project includes or references an external library (like a DLL), then the program it compiles will try to reference the DLL instead of adding its binary into your executable. For this reason, it is very important that the DLL on all computers which run your program are the same version or support backwards compatibility. Many games will bundle the dependent DLL's as a part of the installation process just to make sure that the right versions are being used.

If you're using Visual Studio, usually you can step through your code line by line, and step into functions to see where the implementation lives (or just highlight a method and press F12). This should tell you where any calls to math.h are being made. This also gives you a pretty good idea on how stuff is laid out in the framework you're using. For most programs/frameworks/libraries, the best way to get an understanding of their layout is to read the Application Programming Interface documentation. It's always specific to the API, so that's about all the help I can give you on program internals... Hope this helps!




#5180615 Can you guys help get me started making a 3d game?

Posted by slayemin on 15 September 2014 - 10:52 PM

1. Forget about programming languages for a bit and focus on the mathematics. The mathematics is 1000% more important than picking a programming language. The fact is this: You can make a 3D game in any programming language, but the mathematics is going to be the same regardless of whatever language you pick. Programming languages are interchangeable for the most part. Once you know how to do the math and think like a programmer, you will only be asking "How do I implement XYZ in this programming language?! Oh, let me just look that up in the API.". So, the best mathematics you can get good at to prepare yourself for 3D programming are the following:
A) Linear Algebra (vectors, matrices, dot products, cross products, etc)
B) Trigonometry

C) Algebra
D) Calculus
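To give a concrete taste of (A), here's a tiny sketch of how the dot and cross products answer everyday gameplay questions like "is the enemy in front of me, and on my left or right?" (the VecMath class and its names are made up for illustration):

```csharp
using System;

public static class VecMath
{
    // Dot product: positive when two directions point the same general way,
    // zero when perpendicular, negative when opposed.
    public static float Dot(float ax, float ay, float bx, float by)
        => ax * bx + ay * by;

    // 2D "cross product" (the z component of the 3D cross): its sign tells
    // you which side of a direction a point lies on.
    public static float Cross(float ax, float ay, float bx, float by)
        => ax * by - ay * bx;

    public static void Main()
    {
        // Player faces +X; the enemy sits up and to the right of them.
        float facingX = 1, facingY = 0;
        float toEnemyX = 1, toEnemyY = 1;

        bool inFront = Dot(facingX, facingY, toEnemyX, toEnemyY) > 0;
        bool onLeft  = Cross(facingX, facingY, toEnemyX, toEnemyY) > 0; // counterclockwise of facing (Y-up)

        Console.WriteLine($"in front: {inFront}, on left: {onLeft}");
    }
}
```

Notice there's nothing engine-specific here -- the same two formulas work in any language and any 3D framework.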

 

2. Don't focus on building a team quite yet. Focus on getting really good at building stuff. You can use "programmer art" as placeholders for the art assets you'll eventually need.

 

3. You actually don't want a tutorial. They're too short and usually don't get deep enough. You want a few really good books. The beauty of books is that they're usually comprehensive, very well written, and you don't have to filter out any chaff to get to the good stuff. It's all good!

 

4. You didn't ask this, but your next question needs to be a question you ask yourself: "How much of my life am I actually willing to commit/dedicate to making games?" If the answer isn't "Decades!", then you need to think long and hard about your motivations. It has literally taken me 16 years to get to where I am today. I have been too stubborn and persistent to quit, and I slogged through some really hard work to get here. Are you willing to do that? To go through the pain and dedication it takes to really get the skills it takes to make a game, and the grind it takes to slog through the hard parts of making a game, ie, the non-glorious, mundane, not fun, repetitious parts?




#5180195 Generating a minimap

Posted by slayemin on 14 September 2014 - 12:49 AM

Hmm, I was able to get a minimap for my terrain system up and running in about 30 minutes. Here is a screenshot sample:
minimap.jpg


These are my design requirements:
1. The minimap dimensions may be any size. There's no relationship to the actual terrain system. The terrain could be 512x512, 1024x1024, 128x768, etc. The minimap is going to be something like 256x256.

2. I want to have height information color coded into the minimap, with contour lines to give it more of a "map" look. You should be able to tell what the elevation is.

 

 

This was my approach:

1. We get a width/height for the minimap from the user and we return a Texture2D to them.

2. We're going to go through each pixel in the minimap and map it to a position on the terrain.

2a: Since the terrain and minimap dimensions are independent, I am going to want to normalize my sample point.

 

For example, if my minimap is 256x256 and my terrain is 512x1024 (arbitrary size), and I am sampling the pixel (50,60) on the minimap, the normalized position is going to be:
normX = 50 / 256.0f;
normY = 60 / 256.0f;


Then, we sample the height map or terrain system by taking the normalized coordinate and switching it into their coordinate space...

sampleX = normX * terrainWidth;
sampleY = normY * terrainHeight;

 

And then we sample the terrain with these coordinates.
You would only need to super-sample (blend) neighboring texels if the minimap is much smaller than the terrain, and even then it's overkill for a minimap.
I figure you can just say that a map is a rough sketch of what the terrain actually looks like. If the spacing between sample points skips a few data points on the terrain, who cares? Can the player tell the difference? Nope! So don't over-engineer it.


Anyways, here is my implementation code which generated the minimap above:
 

public Texture2D GenerateMinimap(int width, int height)
{
	Texture2D ret = new Texture2D(BaseSettings.Graphics, width, height);
	Color[] data = new Color[width * height];

	float maxElevation = m_settings.MaxElevation;

	float t0 = 0;                           //sand height
	float t1 = maxElevation / 4.0f;           //grass height
	float t2 = t1 * 2;    //granite height
	float t3 = t1 * 3;    //snow height

	Color sand = new Color(255, 128, 0);
	Color dirt = new Color(128, 64, 0);
	Color grass = new Color(0, 192, 0);
	Color granite = new Color(192, 192, 192);
	Color snow = new Color(240, 240, 240);

	for (int y = 0; y < height; y++)
	{
		for (int x = 0; x < width; x++)
		{
			float h = m_heightMap.SampleTexel(x, y);
			Color c;

			if (h % 32 == 0)
				c = new Color(0, 0, 0);
			else
			{
				if (h < t1)
				{
					//should lerp colors
					float f = h / t1;
					c = Color.Lerp(sand, dirt, f);
				}
				else if (h >= t1 && h < t2)
				{
					float f = (h - t1) / (t2 - t1);
					c = Color.Lerp(dirt, grass, f);
				}
				else if (h >= t2 && h < t3)
				{
					float f = (h - t2) / (t3 - t2);
					c = Color.Lerp(grass, granite, f);
				}
				else
				{
					float f = (h - t3) / (maxElevation - t3);
					c = Color.Lerp(granite, snow, f);
				}
			}

			data[y * width + x] = c;	//index row-major by width (y * height + x only works for square minimaps)
		}
	}

	ret.SetData<Color>(data);
	return ret;
}



#5180191 Which basics do you need to know

Posted by slayemin on 14 September 2014 - 12:23 AM

Thanks everyone,

 

When I read my post back while thinking about what you guys said, I understand that it is vague for other people,

so sorry for that.

Basically what I meant is: Which basic aspects of the C++ language, such as functions, statements, classes, etc.

are required to start making small games (like pong at the beginning and then all the way up to something 3D) 

and which aren't required, but will be very handy to know.

For example: Maybe you can say that knowing about and how to use classes is required, but something like polymorphism isn't (I saw this in another topic)

 

I hope this might be less vague and possible to answer,

Dalphin

C++ isn't the only language (or even the best language) you can make games in. Everything you learn about programming is another tool you get to put in your tool belt which you can later use to better solve the programming problems you'll face during any programming project (it doesn't have to be games!).

 

Think of it like building a house. You are a carpenter. At the most basic level, you can stack boards on top of each other (like a log cabin) to build a very crude, rudimentary house. If you learn how to use a hammer and nails, you can build a slightly less crude house. Maybe, if you learn how to do "framing", you can build together a house which is much more structurally sound and uses less wood. But, all you have is a hammer and nails, so you're a bit limited by the tools you can use. You can build a better house if you can learn how to use a saw to cut wood into smaller pieces. This would let you build something that actually looks something like a shack, though you still won't be able to build anything much more complicated. Perhaps, you eventually learn how to pour concrete and discover how to build a foundation for your house building projects. This makes your houses more stable. Maybe you also learn something about load bearing walls and this lets you build a house with multiple stories. You find that it takes a lot of trial and error to build the houses, so rather than starting to slap some wood together, you decide to spend a bit of time drawing up blueprints for what you're going to build. After all, it's a lot more efficient to make your errors in your blueprints and correct them than to tear down a bit of construction. The more skills, techniques, and tools you have at your disposal, the better and faster you can build something. 

 

The same principles apply to programming. You are just barely learning how to use a hammer and nails for making games. At best, you can build a very crude game like pong, but you're telling us that you don't want to learn how to use the tools and techniques of the trade. By avoiding learning how to use tools/techniques which will greatly help you, you are severely limiting your capabilities. You're essentially saying, "I don't want to learn how to use a saw! Instead, when I need to cut a board in half, I will use my teeth and chew it in half by biting out teeny slivers of wood." How silly, right? Learning the fundamentals of control structures, loops, functions and variables is the very foundation of programming. Learning how to assemble these core primitives into classes is 100% essential to building any sort of game with any complexity. Polymorphism is a great and valuable tool to have at your disposal, even if you're not going to use it all the time. It's there if you need it! Never avoid learning something new because it's hard or new.

 

The other great big tool you'll be using every day during the programming of games is mathematics. It too is a super duper valuable and useful tool. Getting better at math is getting better at making games. Don't limit yourself and your capabilities by trying to get by without learning how to use a valuable tool.

 




#5179799 Which algorithm is more efficient for visibility culling? BSP or OCTree?

Posted by slayemin on 12 September 2014 - 01:52 AM

I am using an Octree in my prototype and it's working well enough. Here are some high level notes on my implementation:

* I rebuild the whole tree every frame. Yes, it's a bit more expensive than doing an in-place update, but it is a simpler solution. Good enough for a prototype!
* I store every octree region as a bounding box (in XNA, that's just two vector3's, or 24 bytes of memory)
* I only let a node exist if there is actually data inside of it, whether its objects or child nodes.

* In order to figure out what I need to draw, I intersect the camera view frustum against the octree and get a list of intersections. The intersected objects are the only objects I need to draw.
* I maintain a static list of all objects in the octree. Rather than visiting each node recursively and updating objects, I just traverse the master list and update from there (so much more simple).
* Collision detection is pretty straightforward: You only have to test for collision of an object against all objects within its containing node and any child nodes. In a perfectly balanced octree, this runs in O(log base 8 N).


Before you go rushing off to implement an Octree, you should back up a few steps and implement a profiler to measure how long each section of your code takes. What is its actual performance (measured in ticks or microseconds)? This shouldn't take more than 2-3 hours max to implement (compared to days for a good tree implementation from scratch). Once you have a good profiler to measure execution time, you can measure specific sections of your code and identify bottlenecks. It's much more scientific than guesswork. You might be optimizing a problem you don't have... e.g., maybe you're rendering hundreds of high res meshes even when they're really far from the camera, in which case you want to look at a "Level of Detail" switching scheme instead.
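To show how cheap that profiler is to build, here's a rough C# sketch built on System.Diagnostics.Stopwatch (which is a real .NET class). The SectionProfiler class and its Begin/End/Report methods are my own invention for this post, not a standard API:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;

// Tiny section profiler: accumulates elapsed ticks per named code section.
public class SectionProfiler
{
    readonly Dictionary<string, long> ticks = new Dictionary<string, long>();
    readonly Stopwatch watch = new Stopwatch();
    string current;

    public void Begin(string section)
    {
        current = section;
        watch.Restart();
    }

    public void End()
    {
        watch.Stop();
        ticks.TryGetValue(current, out long t);
        ticks[current] = t + watch.ElapsedTicks;   // accumulate across frames
    }

    public void Report()
    {
        // Stopwatch.Frequency is ticks-per-second, so convert to milliseconds.
        foreach (var kv in ticks)
            Console.WriteLine($"{kv.Key}: {kv.Value * 1000.0 / Stopwatch.Frequency:F3} ms");
    }
}

public static class ProfilerDemo
{
    public static void Main()
    {
        var prof = new SectionProfiler();

        prof.Begin("Update");
        System.Threading.Thread.Sleep(5);   // stand-in for update logic
        prof.End();

        prof.Begin("Draw");
        System.Threading.Thread.Sleep(2);   // stand-in for rendering
        prof.End();

        prof.Report();   // prints roughly "Update: 5.xxx ms" and "Draw: 2.xxx ms"
    }
}
```

In a real game loop you'd call Begin/End around each suspect section every frame and dump Report every few seconds; the bottleneck usually jumps right out.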




#5176506 List of generic objects

Posted by slayemin on 27 August 2014 - 02:31 PM

...

Okay, I really like this technique better than generics. It's so much cleaner and straight forward and doesn't require creating abstract method signatures in the base Unit class. Thanks!




#5147728 Triangle and rectangle intersect find position points of rectangle that inter...

Posted by slayemin on 17 April 2014 - 03:13 PM

 

The first problem:

I need integer positions so eventually i would need to cast them to int and i never use it as float, so rounding down or up doesn't really matter to me.

That's a valid need, but you still want to do all your math using floating point numbers since your inputs are floats. When all of your math has been completed, then cast the result into an int (or even better, use a rounding function! casting 1.9f into an int will result in 1, not 2.).

Consider this example:
1.0f / 3 = 0.3333333f
0.3333333f + 0.3333333f + 0.3333333f = 1.0f

If you keep 1/3 as a float and then do the calculations, you get 1.0, the correct answer.

(int)(1.0f / 3) = 0
0 + 0 + 0 = 0 (wrong!)

This is a simple example, but the point is that the fractional numbers matter and add up. The same math equations can yield different results. The more fractions you write off, the more off your calculations will be. This can lead to some weird, unexpected bugs later on and you'll be squinting at your code for hours, saying "There is no error here, the math equations are perfect! The logic checks out!"
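Here's the same idea as a runnable C# snippet (TruncationDemo is a made-up name for this post) so you can watch the truncation happen:

```csharp
using System;

public static class TruncationDemo
{
    public static void Main()
    {
        float third = 1.0f / 3.0f;           // ~0.3333333
        int truncated = (int)(1.0f / 3.0f);  // cast to int truncates the fraction away: 0

        Console.WriteLine(third + third + third);             // 1 -- floats kept the fraction
        Console.WriteLine(truncated + truncated + truncated); // 0 -- the fraction was discarded early
        Console.WriteLine((int)1.9f);                         // 1, not 2 -- a cast truncates
        Console.WriteLine((int)Math.Round(1.9f));             // 2 -- round first, then cast
    }
}
```

The rule of thumb: do all the math in floats (or doubles), then round/cast once at the very end.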




#5147469 Triangle and rectangle intersect find position points of rectangle that inter...

Posted by slayemin on 16 April 2014 - 03:39 PM

The code looks a lot more simple than mine, so I tested your code to see if it worked -- and I found a few problems!

First problem: You're using integers to store the results of mathematical operations on floating point numbers. This gives you wrong math. So, I went ahead and changed those.

 

Second problem: If you try the two line segments (-1,0)->(1,0) and (20,1) -> (20, -1) and run the code, you should expect no intersection. Yet, the results show an intersection at (20,0), which is totally wrong -- the math is intersecting the infinite lines through those segments, not the segments themselves.

 

Potential problems:

-If two lines are the same, you get infinity results. If you're only expecting one X and Y value, this could be problematic.

-If two lines are parallel, you get infinity and NaN results (you're dividing by zero). This could also be problematic if you're not testing for them.

Here's my test code:
 

class Program
{
	static void Main(string[] args)
	{
		int a = (int)(1.1f + 1.2f);//<-- see? you lose precision and get wrong values.
		Console.Write(a);
		//PrintIntersectionOfTwoLines(new Vector2(-1, 0), new Vector2(1, 0), new Vector2(2, 1), new Vector2(2, -1));  //good
		//PrintIntersectionOfTwoLines(new Vector2(-1, 0), new Vector2(1, 0), new Vector2(2, 1), new Vector2(2, -1));  //good
		//PrintIntersectionOfTwoLines(new Vector2(-1, 0), new Vector2(1, 0), new Vector2(2, 1), new Vector2(2, -1));  //wrong
		//PrintIntersectionOfTwoLines(new Vector2(-1, 0), new Vector2(1, 0), new Vector2(-1, 1), new Vector2(1, 1));    //parallel: funky values
		//PrintIntersectionOfTwoLines(new Vector2(-1, 0), new Vector2(1, 0), new Vector2(-1, 0), new Vector2(1, 0));    //same line: wrong
		PrintIntersectionOfTwoLines(new Vector2(-1, 0), new Vector2(1, 0), new Vector2(20, 1), new Vector2(20, -1));  //wrong
	}

	static void PrintIntersectionOfTwoLines(Vector2 one, Vector2 two, Vector2 three, Vector2 four)
	{
	//To not calculate it two times
		float calcFirst = (one.X*two.Y)-(one.Y*two.X);
		float calcSecond = (three.X-four.X);
		float calcThird = (one.X-two.X);
		float calcFourth = (three.X*four.Y)-(three.Y*four.X);
		float calcFifth = (three.Y-four.Y);
		float calcSixth = (one.Y-two.Y);

	//x is the intersection point on x axis
		float x = (calcFirst * calcSecond - calcThird * calcFourth) / (calcThird * calcFifth - calcSixth * calcSecond);

	//y is the intersection point on y axis
		float y = (calcFirst * calcFifth - calcSixth * calcFourth) / (calcThird * calcFifth - calcSixth * calcSecond);

		Console.Write("X: " + x + "\nY: " + y);
	}
}



#5147281 Triangle and rectangle intersect find position points of rectangle that inter...

Posted by slayemin on 16 April 2014 - 12:00 AM

If this is in 2D space, the math can be a bit challenging. I solved this a few years ago by digging up some musty old grade school algebra books. The general principle here is to think of your shapes as a collection of lines, or a collection of vertex points. A triangle has three vertices, a rectangle has four, a pentagon has five, etc. What we want to note is that the number of sides/vertices in a shape shouldn't really matter very much. To detect if any two shapes overlap, or intersect, we can test each line in the first shape against every line in the second shape. If no lines in the first shape intersect with lines in the second shape, then we might not have a collision (note: A shape could still be completely enclosed within another shape!).

 

How do we do this?

Well, we have to define a "line". Algebraically, it would be defined as "y = mx + b", but that's a general line formula which stretches on to infinity. For the sides of a shape, you want to define a line by a starting point and an ending point. Then, you want to do a line intersection test against every other line in the other shape (We're looking at an N^2 for loop here... watch out!)

 

Anyways, the code I came up with is here:

/// <summary>
/// Given the end points for two line segments, this will tell you if those two line segments intersect each other.
/// </summary>
/// <param name="A1">First endpoint for Line A</param>
/// <param name="A2">Second endpoint for Line A</param>
/// <param name="B1">First endpoint for Line B</param>
/// <param name="B2">Second endpoint for line B</param>
/// <returns>true or false depending on if they intersect.</returns>
public static bool DoIntersect(Vector2 A1, Vector2 A2, Vector2 B1, Vector2 B2)
{

	//NOTE: Floating point precision errors can cause bugs here, producing false positives or false negatives.

	//Based off of the Y = mx + b formula for two lines.

	//calculate the slopes for Line A and Line B
	double mx1 = (double)(A2.Y - A1.Y) / (double)(A2.X - A1.X);
	double mx2 = (double)(B2.Y - B1.Y) / (double)(B2.X - B1.X);  

	//calculate the y-intercepts for Line A and Line B
	double b1 = (double)(-mx1 * A2.X) + A2.Y;
	double b2 = (double)(-mx2 * B2.X) + B2.Y;

	//calculate the point of intercept for Line A and Line B
	double x = (b2 - b1) / (mx1 - mx2);
	double y = mx1 * x + b1;

	if (double.IsInfinity(mx1) && double.IsInfinity(mx2))
	{
		//we're dealing with two vertical lines. If both lines share an X value, we just have to
		//compare y ranges to see if they overlap. Sort each segment's y values first so the test
		//works no matter which endpoint of each segment is higher.
		double aMinY = System.Math.Min(A1.Y, A2.Y), aMaxY = System.Math.Max(A1.Y, A2.Y);
		double bMinY = System.Math.Min(B1.Y, B2.Y), bMaxY = System.Math.Max(B1.Y, B2.Y);
		return EE(A1.X, B1.X) && LTE(bMinY, aMaxY) && LTE(aMinY, bMaxY);
	}
	else if (double.IsInfinity(mx1) && !double.IsInfinity(mx2))
	{
		//Line A is a vertical line but Line B is not.
		x = A1.X;
		y = mx2 * x + b2;

		return (
			((A1.Y <= A2.Y && LTE(A1.Y , y) && GTE(A2.Y, y)) || (A1.Y >= A2.Y && GTE(A1.Y, y) && LTE(A2.Y, y))) &&
			((B1.X <= B2.X && LTE(B1.X, x) && GTE(B2.X, x)) || (B1.X >= B2.X && GTE(B1.X, x) && LTE(B2.X, x))) &&
			((B1.Y <= B2.Y && LTE(B1.Y, y) && GTE(B2.Y, y)) || (B1.Y >= B2.Y && GTE(B1.Y, y) && LTE(B2.Y, y))));
	}
	else if (double.IsInfinity(mx2) && !double.IsInfinity(mx1))
	{
		//Line B is a vertical line but line A is not.
		x = B1.X;
		y = mx1 * x + b1;

		return (
		((A1.X <= A2.X && LTE(A1.X, x) && GTE(A2.X, x)) || (A1.X >= A2.X && GTE(A1.X, x) && LTE(A2.X, x))) &&
		((A1.Y <= A2.Y && LTE(A1.Y, y) && GTE(A2.Y, y)) || (A1.Y >= A2.Y && GTE(A1.Y, y) && LTE(A2.Y, y))) &&
		((B1.Y <= B2.Y && LTE(B1.Y, y) && GTE(B2.Y, y)) || (B1.Y >= B2.Y && GTE(B1.Y, y) && LTE(B2.Y, y))));
	}

	//figure out if the point of interception is between all the given points
	return (
		((A1.X <= A2.X && LTE(A1.X, x) && GTE(A2.X, x)) || (A1.X >= A2.X && GTE(A1.X, x) && LTE(A2.X, x))) &&
		((A1.Y <= A2.Y && LTE(A1.Y, y) && GTE(A2.Y, y)) || (A1.Y >= A2.Y && GTE(A1.Y, y) && LTE(A2.Y, y))) &&
		((B1.X <= B2.X && LTE(B1.X, x) && GTE(B2.X, x)) || (B1.X >= B2.X && GTE(B1.X, x) && LTE(B2.X, x))) &&
		((B1.Y <= B2.Y && LTE(B1.Y, y) && GTE(B2.Y, y)) || (B1.Y >= B2.Y && GTE(B1.Y, y) && LTE(B2.Y, y))));
}

/// <summary>
/// Equal-Equal: Tells you if two doubles are equivalent even with floating point precision errors
/// </summary>
/// <param name="Val1">First double value</param>
/// <param name="Val2">Second double value</param>
/// <returns>true if they are within 0.000001 of each other, false otherwise.</returns>
public static bool EE(double Val1, double Val2)
{
	return (System.Math.Abs(Val1 - Val2) < 0.000001f);
}

/// <summary>
/// Equal-Equal: Tells you if two doubles are equivalent even with floating point precision errors
/// </summary>
/// <param name="Val1">First double value</param>
/// <param name="Val2">Second double value</param>
/// <param name="Epsilon">The delta value the two doubles need to be within to be considered equal</param>
/// <returns>true if they are within the epsilon value of each other, false otherwise.</returns>
public static bool EE(double Val1, double Val2, double Epsilon)
{
	return (System.Math.Abs(Val1 - Val2) < Epsilon);
}

/// <summary>
/// Less Than or Equal: Tells you if the left value is less than or equal to the right value
/// with floating point precision error taken into account.
/// </summary>
/// <param name="leftVal">The value on the left side of comparison operator</param>
/// <param name="rightVal">The value on the right side of comparison operator</param>
/// <returns>True if the left value and right value are within 0.000001 of each other, or if leftVal is less than rightVal</returns>
public static bool LTE(double leftVal, double rightVal)
{
	return (EE(leftVal, rightVal) || leftVal < rightVal);
}

/// <summary>
/// Greater Than or Equal: Tells you if the left value is greater than or equal to the right value
/// with floating point precision error taken into account.
/// </summary>
/// <param name="leftVal">The value on the left side of comparison operator</param>
/// <param name="rightVal">The value on the right side of comparison operator</param>
/// <returns>True if the left value and right value are within 0.000001 of each other, or if leftVal is greater than rightVal</returns>
public static bool GTE(double leftVal, double rightVal)
{
	return (EE(leftVal, rightVal) || leftVal > rightVal);
}

It could probably be improved by someone smarter than me, but it works well enough for the time being. Note that math like this is susceptible to floating point precision errors, so I had to write my own floating point "==" comparison methods with a small epsilon value. I'm still not 100% confident this is perfectly bug free.
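For what it's worth, the infinite-slope special cases can be avoided entirely with the orientation (cross-product) formulation of the segment test, which never divides. A Python sketch of that alternative (my names, not the poster's code), ignoring the collinear-overlap edge case for brevity:

```python
def cross(o, a, b):
    """2D cross product of vectors o->a and o->b.
    Positive = counter-clockwise turn, negative = clockwise, zero = collinear."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(a1, a2, b1, b2):
    """Proper intersection test: each segment's endpoints must lie on
    opposite sides of the other segment's supporting line."""
    d1 = cross(b1, b2, a1)
    d2 = cross(b1, b2, a2)
    d3 = cross(a1, a2, b1)
    d4 = cross(a1, a2, b2)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))
```

Because there's no slope computation, vertical segments need no special handling, and the epsilon machinery can shrink to a single tolerance on the cross products if you need it.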

Also, since the method I suggest above uses an N^2 approach, you might want to consider using a broad-phase and narrow-phase collision check (if you notice performance problems!). The broad phase check could just be sphere vs sphere intersections, where each sphere completely encloses a shape. It's very fast to implement and doesn't cost much. If the spheres intersect, then you can run the N^2 check for more precision.
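As a sketch of that broad phase, the sphere (circle in 2D) test is a single comparison; a minimal Python version (names are mine) using squared distances to avoid the square root:

```python
def circles_overlap(c1, r1, c2, r2):
    """Broad-phase check: two circles overlap when the distance between
    their centers is at most the sum of their radii. Comparing squared
    values avoids a sqrt call."""
    dx = c1[0] - c2[0]
    dy = c1[1] - c2[1]
    return dx * dx + dy * dy <= (r1 + r2) * (r1 + r2)
```

Only when this cheap test passes do you fall through to the N^2 line-vs-line narrow phase.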

There's one other thing you might want to consider: if you're just trying to draw one shape overlapping another, why not simply toggle the drawing order of the shapes? Or is there a game logic reason behind it?




#5141847 Debugging Graphics

Posted by slayemin on 24 March 2014 - 06:22 PM

I've run into tons of trouble with this exact problem. After spending hours and hours on trying to figure out why my "stuff" isn't rendering, I've come up with a comprehensive checklist of things to verify and check.

If you've never rendered a model or primitive to the screen before using your current API, you want to establish a baseline by trying to do the most basic thing you can: render the simplest model/primitive you can. This is akin to writing your first "hello world" program for graphics. If you can do this, then the rest of graphics programming is simply a matter of adding on additional layers of complexity. The general debugging step then becomes a matter of adding on each subsequent layer of complexity and seeing which one breaks.

At the core, debugging is essentially just a matter of isolating and narrowing the problem down to as few possibilities as possible, then focusing in on each possibility.

 

This is for the C# and XNA API, but you can generalize or translate these points to your own language and API.

Let's start with the comprehensive checklist for primitive rendering (triangle lists, triangle strips, line lists, line strips):
1. Base case: Can you render a triangle to the screen without doing anything fancy?
    No:

       -Are you setting vertex positions for the three corners of the triangle? Are they different from each other? Is it rendering a triangle which should be visible in the current view?

       -Are you actually calling the "DrawPrimitive()" method, or equivalent in your API?
       -Are you using vertex colors which contrast against the background color?
       -Are you correctly applying a shader? Is the shader the correct shader? Have all shader settings been set correctly before you call the draw call?
       -Are you using a valid view and projection matrix which would actually let you view the triangle?
       -Are you using a world matrix which is transforming the triangle off screen? (You shouldn't even need a world matrix yet)
      -Are you using the right primitive type in your DrawPrimitives call? (triangle list vs triangle strip, etc)
2. Indexed vertices: Are you using an index buffer to specify the vertex drawing order?
    Yes:
       -Is the vertex drawing order compatible with your current cull mode? To find out, either toggle your cull mode or change your drawing order.
       -Are you actually creating an index buffer? Are you copying an array of ints into your index buffer to fill it with data? Are the array values correct?
       -If your index buffer is created, are you actually setting the graphics card's active index buffer to your index buffer?
       -Are you using "DrawIndexedPrimitives()" or your API's equivalent draw call? Are you specifying the correct number of primitives to draw?
      -Does the drawing order make sense with regard to the primitive type you're using? ie, the vertex order in a triangle strip is very different from a triangle list.
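As a quick illustration of the cull-mode point in step 2 above, whether a triangle winds clockwise or counter-clockwise comes down to the sign of its signed area. A hedged Python sketch (the CCW-positive convention here assumes a Y-up coordinate system; a Y-down screen space flips it):

```python
def winding(v0, v1, v2):
    """Twice the signed area of triangle (v0, v1, v2). Positive means
    counter-clockwise in a Y-up coordinate system."""
    area2 = (v1[0] - v0[0]) * (v2[1] - v0[1]) - (v2[0] - v0[0]) * (v1[1] - v0[1])
    if area2 > 0:
        return "ccw"
    return "cw" if area2 < 0 else "degenerate"
```

If your triangles come back "cw" while the cull mode rejects clockwise faces (or vice versa), either reverse the index order or toggle the cull mode.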
3. Vertex Data:
   -Are you using a custom vertex declaration? If yes, skip to #4.

   -Are you using a vertex buffer? If yes:
       -You must use a vertex array of some sort, at some point, to populate the vertex buffer. Verify that you're getting an array of vertices in your code. Using your IDE debugger, verify that the vertex data is correct.
      -Are you moving your vertex array data into a vertex buffer? Is the vertex buffer the correct size? Does the vertex buffer have the vertex data from your vertex array?
      -On the graphics card, are you setting the active vertex buffer before drawing? Is there an associated index buffer?
4. Custom Vertex Declarations: Are you using a custom vertex declaration?
  Yes: Then you must be defining your vertex in a Struct.
    -Does your vertex declaration include position information? If not, how are you computing the vertex position in your shader?
     -Does your vertex declaration include every field you want to use?
   -Are you creating a Vertex Declaration correctly?
       -Are your vertex elements being defined in the same order as they are in the struct fields? This is one of the few times declaration variable order really matters because it's specifying the order they appear in the struct memory block.
       -Are you correctly calculating the BYTE size of each variable in the vertex? Are you correctly calculating the field offset in bytes?

       -Are you correctly specifying the vertex element usage?

       -Are you correctly using the right usage index for the vertex element?
       -Are you specifying the correct total byte size for your custom vertex declaration?
 -Is your code correctly using the custom vertex data? ie, putting position information into a position variable.
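The byte-size and offset bookkeeping from step 4 can be sanity-checked with simple arithmetic. A hypothetical Python sketch (the field names and sizes are illustrative: a Vector3 is 3 floats = 12 bytes, a packed Color is 4 bytes):

```python
def layout_offsets(fields):
    """Given (name, byte_size) pairs in struct order, return each field's
    byte offset plus the total vertex size (the stride)."""
    offsets, offset = {}, 0
    for name, size in fields:
        offsets[name] = offset
        offset += size
    return offsets, offset

# Example: position (float3), normal (float3), color (packed 4 bytes),
# texcoord (float2) -> offsets 0, 12, 24, 28 and a 36-byte stride,
# i.e. 36 / 4 = 9 four-byte blocks per vertex in a tool like PIX.
```

If the offsets you hand to your vertex declaration don't match this running sum, expect exactly the "funky data" symptoms described in the PIX notes below.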

5. High Level Shader Language (HLSL): Are you using a shader other than "BasicEffect"?
      -Are you actually loading the content for the shader and storing it in an "Effect" data structure?
      -Are you correctly initializing the effect?
      -Are you setting a "Current Technique" in your render call to one which exists in the shader?
      -Does the technique which you use include a vertex shader and a pixel shader? Are they supported by your API and graphics card?
      -Does the vertex shader require any global variables to be set? (ie, camera position, world matrices, textures, etc). Are they being set to valid data?
       -Does the vertex shader output valid data which the pixel shader can use?
       -Does the pixel shader actually output color information?
       -Does your vertex shader math and logic check out correctly? (If you don't know or aren't sure, it's time to use a shader debugger).

6. Shader debuggers:

    I'm using Visual Studio 2010, so I can't use the built-in shader debugger from VS2012. I have to use external tools. Here are the ones I've tried and my thoughts on them:
    NVidia FX Composer: It sucks. It is unstable and crashes frequently, has a high learning curve, and can't attach a shader debugger to an executable file (your game). You can't push custom vertex data into a shader and see how the shader handles it. This program is mostly useful for creating shaders for existing models.
   ATI's GPU PerfStudio: It doesn't work with DirectX 9.0, so if you're using XNA, you're out of luck. Sorry, ATI doesn't care enough. It's also a bit confusing to setup and get running.
    Microsoft PIX: It's a mediocre debugger, but the best one I've found. It is included in the DirectX SDK. The most useful feature is being able to attach to an EXE and capture a frame by pressing F12. You can then view every single method call used to draw that frame, along with the method parameters. This tool also lets you view every single resource (DX surfaces, vertex buffers, index buffers, rasterizer settings, etc) on the graphics card, along with that resource's data. This is the best way to see if your vertex data and index buffer data are legit. You can also debug an individual pixel, which lets you step through your shader code (HLSL or ASM) line by line and see what values the variables are actually being set to. It's an okay debugger, but it doesn't have any intellisense or let you mouse over a variable to see its value like the Visual Studio IDE debugger does. This is the debugger I currently use to debug my shaders. The debugging workflow is a bit cumbersome since you have to rebuild your project, start a new experiment, take a snapshot, find the frame, find the data object you want to see, and step through the shader debugger to the variable you're interested in (~2 minutes). Here are a few "nice to know" notes on PIX:
  -If you're looking at the contents of a vertex buffer:

      -Each block is 32 bits, or 4 bytes in size. Keep this in mind if you're using a custom vertex declaration to pack data into a 4 byte block (such as with Color data).

      -0xFF is displayed as a funky value: -1.#Q0
     -Each 4-byte block is displayed in the order it appears in your custom vertex declaration. Each vertex data block is your vertex declaration size / 4. (ie, 36 bytes = 36 / 4 = 9 blocks per vertex)
     -The total size of the buffer is the blocks per vertex multiplied by the number of vertices you have (ie, 9 * 3 = 27 4-byte blocks)
      -Usage: If your vertex declaration byte offsets are off by a byte or more, you should expect to see funky data in the buffer.
  -The vertex declaration shown in the debugger should always match your custom vertex declaration struct.

-By selecting the actual draw call in the events list and then looking at the mesh, you can see the vertex information as it appears in the pre-vertex shader (object space), the post-vertex shader (world space), and Viewport (screen space). If the vertex data doesn't look right in any of these steps, you should know where to start debugging.
   *Special note: If you're creating geometries on the graphics card within your shader, you won't see much of value in the pre-vertex shader.
-The debugger includes a shader assembly language debugger. It's nice to have but not very useful.
-The shader compiler will remove any code which isn't used in the final output of a vertex. This is extra annoying when you're trying to set values to a variable and debug them.


Model Debugging:

The same principles from the primitive rendering apply, except you have to verify that you've correctly loaded the model data into memory and are calling the right method to render a model.

One handy tip which may help you for your project: write down each step it takes to add and render a new model within your project (ie, your project's content creation pipeline & workflow). It's easy to accidentally skip a step as you're creating new assets and end up wasting time trying to isolate the problem to that missed step. An ounce of prevention is worth a pound of cure, right?





