
All Activity

This stream auto-updates     

  1. Past hour
  2. Hello, I have limited programming experience in C++, but have always had a passion for games. Where do I start? I have a rough idea of the type of game I want to develop and am ready to commit the time necessary to learn new languages. Are mobile games too difficult to begin with? Should I begin learning the basics of keyboard controls before attempting touch screens? I would appreciate any input or advice! Thanks! Nick1
  3. Off The Grid is a thrilling first-person survival horror game. We don't have much of the storyline written yet, but we know it will be set in a remote location, maybe a forest. Our player, who has no character name at this time, is an adventurer exploring historical places with deep pasts to them: mines, hotels, etc. It's going to be an AAA release on multiple platforms such as PC, Mac, Xbox, and PS4. If you're interested in this project you can join our Discord here and share your ideas as we go along with development. The game is expected to release in early 2019. Thank you for reading, Matical Studios
  4. Is there a doctor in the house?

    Just a point of order, but "obsession" refers to repetitive thoughts. The term "compulsion" refers to repetitive actions. So, it would be Button Pressing Compulsion Disorder (BPCD) if anything. Doesn't play as nicely as an acronym but could still get you sympathy at parties ("My wife was diagnosed with BPCD and she keeps pushing for a cure...").
  5. One thing you could do to make your 2D enemies behave in a more interesting manner is physics. You don't have to do anything complicated; just give them some momentum and inertia, and it will make the way they move more interesting, rather than just heading at a target in a straight line. https://gamedevelopment.tutsplus.com/tutorials/how-to-create-a-custom-2d-physics-engine-the-basics-and-impulse-resolution--gamedev-6331

    Simple AI can also make a 2D enemy more interesting. This can include:
    - Target leading
    - Dodging or taking cover
    - Following another object's path (not heading toward it, but keeping an array of its last points and moving through them)

    Pathfinding can help your enemies seem more intelligent (there are a million articles about this): https://en.wikipedia.org/wiki/A*_search_algorithm

    Flocking is always cool: http://harry.me/blog/2011/02/17/neat-algorithms-flocking/
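    To make the momentum-and-inertia idea concrete, here is a minimal C++ sketch. The `Vec2` and `Enemy` types and the tuning constants are illustrative, not from any particular engine: the point is that limiting acceleration and adding drag makes the enemy curve and overshoot instead of beelining at the target.

```cpp
#include <cmath>

// Minimal 2D vector (illustrative, not from any library).
struct Vec2 {
    float x, y;
    Vec2 operator+(Vec2 o) const { return {x + o.x, y + o.y}; }
    Vec2 operator-(Vec2 o) const { return {x - o.x, y - o.y}; }
    Vec2 operator*(float s) const { return {x * s, y * s}; }
    float length() const { return std::sqrt(x * x + y * y); }
    Vec2 normalized() const {
        float l = length();
        return l > 0.0f ? Vec2{x / l, y / l} : Vec2{0.0f, 0.0f};
    }
};

struct Enemy {
    Vec2 pos{0, 0}, vel{0, 0};
    float maxAccel = 40.0f; // how quickly it can change direction (tuning value)
    float drag = 0.5f;      // fraction of velocity lost per second (tuning value)

    // Steer toward the target with momentum: acceleration is limited, so the
    // enemy overshoots and curves instead of snapping onto a straight line.
    void update(Vec2 target, float dt) {
        Vec2 desiredDir = (target - pos).normalized();
        vel = vel + desiredDir * (maxAccel * dt); // accelerate toward target
        vel = vel * (1.0f - drag * dt);           // drag keeps speed bounded
        pos = pos + vel * dt;
    }
};
```

    Run a few seconds of updates and the enemy swings past the target and back, which already reads as more "alive" than a constant-speed straight line.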
  6. 3D Collision Resolution - Sequential Impulses?

    This looks conceptually correct. Did you add gravity into the linear velocities before the solve? E.g. v += g * dt

    Another thing I am not sure about is your effective mass (k). I use:

    me = invM1 + invM2 + dot( r1 x n, invI1 * (r1 x n) ) + dot( r2 x n, invI2 * (r2 x n) )

    It is totally possible (and also likely) that our formulas are equivalent! I just cannot tell from looking alone and would need to write it down.
  7. Make sure you also take a look at Erin's old Box2D *Lite* from 2006 (not the full Box2D library) download here, so you can see a straightforward implementation. Then you can verify your own equations and understanding by comparing with working code: http://box2d.org/files/GDC2006/Box2D_Lite.zip The solver is very easy to port to 3D. Getting collision detection in 3D is another story. For a full 3D example you can try reading my own code here (basically a Box2D *Lite* port to 3D): https://github.com/RandyGaul/qu3e
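    For reference, here is a minimal C++ sketch of one velocity iteration of a sequential-impulse contact solve in the Box2D Lite style, using the effective-mass formula quoted above. It assumes a scalar (sphere-like) inverse inertia for brevity; a real 3D solver would use a 3x3 world-space inverse inertia tensor. All names here are illustrative, not taken from Box2D Lite or qu3e.

```cpp
#include <cmath>
#include <algorithm>

struct V3 { float x, y, z; };
inline V3 operator+(V3 a, V3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
inline V3 operator-(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
inline V3 operator*(float s, V3 a) { return {s * a.x, s * a.y, s * a.z}; }
inline float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
inline V3 cross(V3 a, V3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

struct Body {
    float invMass;
    float invInertia; // scalar inverse inertia (sphere assumption, for brevity)
    V3 v;             // linear velocity
    V3 w;             // angular velocity
};

// One velocity iteration for a single contact.
// r1, r2: contact point relative to each body's centre of mass; n: contact normal (1 -> 2).
// accumulated: the clamped running total of normal impulse across iterations.
float SolveContact(Body& b1, Body& b2, V3 r1, V3 r2, V3 n, float& accumulated) {
    // Effective mass along n:
    // k = invM1 + invM2 + (r1 x n) . invI1 (r1 x n) + (r2 x n) . invI2 (r2 x n)
    V3 rn1 = cross(r1, n), rn2 = cross(r2, n);
    float k = b1.invMass + b2.invMass
            + dot(rn1, b1.invInertia * rn1)
            + dot(rn2, b2.invInertia * rn2);

    // Relative velocity at the contact point along the normal.
    V3 dv = (b2.v + cross(b2.w, r2)) - (b1.v + cross(b1.w, r1));
    float vn = dot(dv, n);

    float lambda = -vn / k;

    // Accumulated clamping (Box2D Lite style): total impulse stays non-negative.
    float old = accumulated;
    accumulated = std::max(old + lambda, 0.0f);
    lambda = accumulated - old;

    // Apply equal and opposite impulses.
    V3 P = lambda * n;
    b1.v = b1.v - b1.invMass * P;  b1.w = b1.w - b1.invInertia * cross(r1, P);
    b2.v = b2.v + b2.invMass * P;  b2.w = b2.w + b2.invInertia * cross(r2, P);
    return lambda;
}
```

    For two unit-mass bodies colliding head-on at the centre of mass (r1 = r2 = 0), k reduces to invM1 + invM2 and a single iteration zeroes the approach velocity, which is a handy sanity check against your own solver.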
  8. Today
  9. Rpg - Moba Hybrid

    Well, I'm not exactly new to writing design concepts... I do them as a side hobby for the fun of it. If I could turn it into a career I would, but unfortunately it shall remain a hobby for now. I also mathematically work out my own board game concepts when I have free time. This is just one of my ideas.

    Mad Army of Thou

    The player controls a custom character (a MOBA without champions, how chaotic). The custom character will have a weapon, armor, trousers, and shoes of the player's choice before the match begins. Players will also have 4 ability rune slots, 3 regular and 1 ultimate. This lets abilities stay suited to each play style, in a play-it-your-way fashion. The equipment will affect the character's appearance as well as gameplay, to provide the individual feel of a MOBA-RPG hybrid.

    Ever get tired of the same old minions in a boring 3-lane tower map of pure repetition? Customize your own minion layout with 100 different minion types. Players will craft their own minions (robotics? Jk... Sci-fi!!!) and get 10 starting minions. Minions can then be assigned a spawn point and the route the player wishes them to take in PvP and survival mode. Minions are not used in story mode unless it's a boss fight or a 1v1 story AI.

    The PvP map will have beacons; the goal is to capture the most beacon camps before 20 minutes to win the game. Beacons are spawn points for minions, and players can access the beacon menu at any allied beacon to adjust their own personal minion spawns. It's a massive 5v5 map; keep in mind players can only set the placement of their personal minion army, not an ally's, so communication is key when choosing where a minion should go. Each minion type has unique stats. As for balancing minion stats and player equipment, I have a formula to keep the game balanced.

    Minion stats example:
    Mice - health 500, respawn 10 seconds, damage 50, attack speed .5, movement speed 10, range 1, passive: none
    Fly - health 300, respawn 15 seconds, damage 50, attack speed .7, movement speed 10, range 6, passive: none
    Spirit - health 300, respawn 20, damage 70, attack speed .8, speed 10, range 5, passive: Lifesteal 30%

    All minions are obtainable by playing story runs, raids, survival, and PvP, with some exclusive to certain modes.

    Story - maps are the size of an arena, large but not too large. Easter eggs with tons of rewards, and I do love writing eggs. It's not an egg if it was easy, and it won't have a valuable reward unless it was well deserved. Can be co-op with a 5-man team; no minions to help you, unfortunately. There are a total of 10 story missions and maps to play on. The more players, the better the rewards, but the harder the difficulty.

    Raids - in a big open space or a narrow single-lane bridge, players have to work together, solo or in a 5-man or 10-man co-op, to defeat the enemy. The player whose army scored the most gets more loot. Score chart for raid loot: 1st - 20 items, 2nd - 18 items, 3rd - 16 items, 4th/5th - 12 items, 6th to 8th - 10 items, 9th/10th - 8 items.

    Survival - defend a single base camp from an onslaught of enemy minions; if 20 touch the base, it's GG. The game ends when all 20 health is lost, but players get loot depending on how many enemies they have defeated. Loot from rare minion kills is shared with the whole team; uncommon and below drops depend on individual kills.

    I have a lot more balancing and design in my journal for this concept and many more, and I thought, why not share it for feedback?
  10. Cardicus

    Version B.34 released (thought I'd give an update every ten versions or so). Still no summon sickness, but wall/tower/cactus can now be cast on any allied square. Barbed and Regenerate mechanics added. Slightly smarter AI. A few other things added, including an All Chat in the multiplayer lobby (also a weak curse filter added).
  11. 3D Forward Plus Lighting

    All this is in my .fx file:

//--------------------------------------------------------------------------------------
// File: Deferred.fx
//
// Deferred Rendering
//--------------------------------------------------------------------------------------

//--------------------------------------------------------------------------------------
// Global Variables
//--------------------------------------------------------------------------------------
#ifndef BLOCK_SIZE
#pragma message( "BLOCK_SIZE undefined. Default to 8.")
#define BLOCK_SIZE 8 // should be defined by the application.
#endif

// The matrices (4x4 matrix of floats) for transforming from 3D model to 2D projection (used in vertex shader)
float4x4 WorldMatrix;
float4x4 ViewMatrix;
float4x4 ProjMatrix;
float4x4 ViewProjMatrix;
float4x4 InvViewMatrix;

// Viewport Dimensions
float ViewportWidth;
float ViewportHeight;

// Lights are stored in a structure so we can pass lists of them
struct SPointLight
{
    float3 LightPosition;
    float  LightRadius;
    float4 LightColour;
};

// Point lights for forward-rendering. The deferred implementation passes the lights in as a vertex buffer
// (although that is not a requirement of deferred rendering - could use these variables instead)
static const int MaxPointLights = 256;   // Maximum number of point lights the shader supports (forward-rendering only)
int NumPointLights;                      // Actual number of point lights currently in use (forward-rendering only)
SPointLight PointLights[MaxPointLights]; // List of point lights (forward-rendering only)

// Other light data
float3 AmbientColour;
float3 DiffuseColour;
float3 SpecularColour;
float  SpecularPower;
float3 CameraPos;
float  CameraNearClip;

// Textures
Texture2D DiffuseMap; // Diffuse texture map (with optional specular map in alpha)
Texture2D NormalMap;  // Normal map (with optional height map in alpha)

// G-Buffer when used as textures for the lighting pass
Texture2D GBuff_DiffuseSpecular; // Diffuse colour in rgb, specular strength in a
Texture2D GBuff_WorldPosition;   // World position at pixel in rgb (xyz)
Texture2D GBuff_WorldNormal;     // World normal at pixel in rgb (xyz)

// Samplers to use with the above textures
SamplerState TrilinearWrap
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

SamplerState LinearClamp
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Clamp;
    AddressV = Clamp;
};

// Nearly always sample the g-buffer with point sampling (i.e. no bilinear, trilinear etc.) because we don't want to introduce blur
SamplerState PointClamp
{
    Filter = MIN_MAG_MIP_POINT;
    AddressU = Clamp;
    AddressV = Clamp;
};

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

cbuffer ScreenToViewParams : register(b3)
{
    float4x4 InverseProjection;
    float2   ScreenDimensions;
}

// Convert clip space coordinates to view space
float4 ClipToView(float4 clip)
{
    // View space position.
    float4 view = mul(InverseProjection, clip);
    // Perspective projection.
    view = view / view.w;
    return view;
}

// Convert screen space coordinates to view space.
float4 ScreenToView(float4 screen)
{
    // Convert to normalized texture coordinates
    float2 texCoord = screen.xy / ScreenDimensions;

    // Convert to clip space
    float4 clip = float4(float2(texCoord.x, 1.0f - texCoord.y) * 2.0f - 1.0f, screen.z, screen.w);

    return ClipToView(clip);
}

struct Plane
{
    float3 N; // Plane normal.
    float  d; // Distance to origin.
};

Plane ComputePlane(float3 p0, float3 p1, float3 p2)
{
    Plane plane;

    float3 v0 = p1 - p0;
    float3 v2 = p2 - p0;

    plane.N = normalize(cross(v0, v2));

    // Compute the distance to the origin using p0.
    plane.d = dot(plane.N, p0);

    return plane;
}

struct Frustum
{
    Plane planes[4]; // left, right, top, bottom frustum planes.
};

struct Sphere
{
    float3 c; // Center point.
    float  r; // Radius.
};

bool SphereInsidePlane(Sphere sphere, Plane plane)
{
    return dot(plane.N, sphere.c) - plane.d < -sphere.r;
}

bool SphereInsideFrustum(Sphere sphere, Frustum frustum, float zNear, float zFar)
{
    bool result = true;

    // First check depth
    // Note: Here, the view vector points in the -Z axis so the
    // far depth value will be approaching -infinity.
    if (sphere.c.z - sphere.r > zNear || sphere.c.z + sphere.r < zFar)
    {
        result = false;
    }

    // Then check frustum planes
    for (int i = 0; i < 4 && result; i++)
    {
        if (SphereInsidePlane(sphere, frustum.planes[i]))
        {
            result = false;
        }
    }

    return result;
}

struct Cone
{
    float3 T; // Cone tip.
    float  h; // Height of the cone.
    float3 d; // Direction of the cone.
    float  r; // Bottom radius of the cone.
};

bool PointInsidePlane(float3 p, Plane plane)
{
    return dot(plane.N, p) - plane.d < 0;
}

bool ConeInsidePlane(Cone cone, Plane plane)
{
    // Compute the farthest point on the end of the cone to the positive space of the plane.
    float3 m = cross(cross(plane.N, cone.d), cone.d);
    float3 Q = cone.T + cone.d * cone.h - m * cone.r;

    // The cone is in the negative halfspace of the plane if both the tip of the cone
    // and the farthest point on the end of the cone to the positive halfspace of the
    // plane are both inside the negative halfspace of the plane.
    return PointInsidePlane(cone.T, plane) && PointInsidePlane(Q, plane);
}

bool ConeInsideFrustum(Cone cone, Frustum frustum, float zNear, float zFar)
{
    bool result = true;

    Plane nearPlane = { float3(0, 0, -1), -zNear };
    Plane farPlane  = { float3(0, 0,  1),  zFar };

    // First check the near and far clipping planes.
    if (ConeInsidePlane(cone, nearPlane) || ConeInsidePlane(cone, farPlane))
    {
        result = false;
    }

    // Then check frustum planes
    for (int i = 0; i < 4 && result; i++)
    {
        if (ConeInsidePlane(cone, frustum.planes[i]))
        {
            result = false;
        }
    }

    return result;
}

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

//--------------------------------------------------------------------------------------
// Forward Rendering and Common Structures
//--------------------------------------------------------------------------------------

// This structure describes generic vertex data to be sent into the vertex shader
// Used in forward rendering and when creating the g-buffer in deferred rendering
struct VS_INPUT
{
    float3 Pos    : POSITION;
    float3 Normal : NORMAL;
    float2 UV     : TEXCOORD0;
};

// This structure contains the vertex data transformed into projection space & world space, i.e. the result of the
// usual vertex processing. Used in forward rendering for the standard pixel lighting stage, but also used when
// building the g-buffer - the main geometry processing doesn't change much for deferred rendering - it's all about
// how this data is used next.
struct PS_TRANSFORMED_INPUT
{
    float4 ProjPos       : SV_Position;
    float3 WorldPosition : POSITION;
    float3 WorldNormal   : NORMAL;
    float2 UV            : TEXCOORD0;
};

// For both forward and deferred rendering, the light flares (sprites showing the position of the lights) are rendered
// as a particle system (this is not really to do with deferred rendering, just a visual nicety). Because the particles
// are transparent (additive blending), they must be rendered last, and they can't use deferred rendering (see lecture).
// This is the input to the particle pixel shader.
struct PS_LIGHTPARTICLE_INPUT
{
    float4 ProjPos : SV_Position;
    float2 UV      : TEXCOORD0;
    nointerpolation float3 LightColour : COLOR0; // Passed to the pixel shader so the flare can be tinted. See below about "nointerpolation"
};

//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

struct COMPUTE_SHADER_INPUT
{
    uint3 groupID          : SV_GroupID;          // 3D index of the thread group in the dispatch.
    uint3 groupThreadID    : SV_GroupThreadID;    // 3D index of local thread ID in a thread group.
    uint3 dispatchThreadID : SV_DispatchThreadID; // 3D index of global thread ID in the dispatch.
    uint  groupIndex       : SV_GroupIndex;       // Flattened local index of the thread within a thread group.
};

cbuffer DispatchParams : register(b4)
{
    uint3 numThreadGroups;
    uint3 numThreads;
}

RWStructuredBuffer<Frustum> out_Frustums : register(u0);

[numthreads(BLOCK_SIZE, BLOCK_SIZE, 1)]
void CS_ComputeFR(COMPUTE_SHADER_INPUT IN)
{
    // View space eye position is always at the origin.
    const float3 eyePos = float3(0, 0, 0);

    // Compute the 4 corner points on the far clipping plane to use as the frustum vertices.
    float4 screenSpace[4];
    // Top left point
    screenSpace[0] = float4(IN.dispatchThreadID.xy * BLOCK_SIZE, -1.0f, 1.0f);
    // Top right point
    screenSpace[1] = float4(float2(IN.dispatchThreadID.x + 1, IN.dispatchThreadID.y) * BLOCK_SIZE, -1.0f, 1.0f);
    // Bottom left point
    screenSpace[2] = float4(float2(IN.dispatchThreadID.x, IN.dispatchThreadID.y + 1) * BLOCK_SIZE, -1.0f, 1.0f);
    // Bottom right point
    screenSpace[3] = float4(float2(IN.dispatchThreadID.x + 1, IN.dispatchThreadID.y + 1) * BLOCK_SIZE, -1.0f, 1.0f);

    // Now convert the screen space points to view space
    float3 viewSpace[4];
    for (int i = 0; i < 4; i++)
    {
        viewSpace[i] = ScreenToView(screenSpace[i]).xyz;
    }

    Frustum frustum;
    // Left plane
    frustum.planes[0] = ComputePlane(eyePos, viewSpace[2], viewSpace[0]);
    // Right plane
    frustum.planes[1] = ComputePlane(eyePos, viewSpace[1], viewSpace[3]);
    // Top plane
    frustum.planes[2] = ComputePlane(eyePos, viewSpace[0], viewSpace[1]);
    // Bottom plane
    frustum.planes[3] = ComputePlane(eyePos, viewSpace[3], viewSpace[2]);

    if (IN.dispatchThreadID.x < numThreads.x && IN.dispatchThreadID.y < numThreads.y)
    {
        uint index = IN.dispatchThreadID.x + (IN.dispatchThreadID.y * numThreads.x);
        out_Frustums[index] = frustum;
    }
}

Texture2D DepthTextureVS : register(t3);

// Precomputed frustums for the grid.
StructuredBuffer<Frustum> in_Frustums : register(t9);

// Global counter for current index into the light index list.
RWStructuredBuffer<uint> o_LightIndexCounter : register(u2);
RWStructuredBuffer<uint> t_LightIndexCounter : register(u3);

// Light index lists and light grids.
RWStructuredBuffer<uint> o_LightIndexList : register(u4);
RWStructuredBuffer<uint> t_LightIndexList : register(u5);
RWTexture2D<uint2> o_LightGrid : register(u6);
RWTexture2D<uint2> t_LightGrid : register(u7);

// Per-tile values shared across the thread group.
groupshared uint    uMinDepth;
groupshared uint    uMaxDepth;
groupshared Frustum GroupFrustum;

// Opaque geometry light lists.
groupshared uint o_LightCount;
groupshared uint o_LightIndexStartOffset;
groupshared uint o_LightList[BLOCK_SIZE * BLOCK_SIZE];

// Transparent geometry light lists.
groupshared uint t_LightCount;
groupshared uint t_LightIndexStartOffset;
groupshared uint t_LightList[BLOCK_SIZE * BLOCK_SIZE];

Texture2D LightCountHeatMap : register(t10);
RWTexture2D<float4> DebugTexture : register(u1);

// Add the light to the visible light list for opaque geometry.
void o_AppendLight(uint lightIndex)
{
    uint index; // Index into the visible lights array.
    InterlockedAdd(o_LightCount, 1, index);
    if (index < BLOCK_SIZE * BLOCK_SIZE)
    {
        o_LightList[index] = lightIndex;
    }
}

// Add the light to the visible light list for transparent geometry.
void t_AppendLight(uint lightIndex)
{
    uint index; // Index into the visible lights array.
    InterlockedAdd(t_LightCount, 1, index);
    if (index < BLOCK_SIZE * BLOCK_SIZE)
    {
        t_LightList[index] = lightIndex;
    }
}

[numthreads(BLOCK_SIZE, BLOCK_SIZE, 1)]
void CS_main(COMPUTE_SHADER_INPUT IN)
{
    CS_ComputeFR(IN);

    // Calculate min & max depth in threadgroup / tile.
    int2  texCoord = IN.dispatchThreadID.xy;
    float fDepth   = DepthTextureVS.Load(int3(texCoord, 0)).r;
    uint  uDepth   = asuint(fDepth);

    if (IN.groupIndex == 0) // Avoid contention by other threads in the group.
    {
        uMinDepth = 0xffffffff;
        uMaxDepth = 0;
        o_LightCount = 0;
        t_LightCount = 0;
        GroupFrustum = in_Frustums[IN.groupID.x + (IN.groupID.y * numThreadGroups.x)];
    }

    GroupMemoryBarrierWithGroupSync();

    InterlockedMin(uMinDepth, uDepth);
    InterlockedMax(uMaxDepth, uDepth);

    GroupMemoryBarrierWithGroupSync();

    float fMinDepth = asfloat(uMinDepth);
    float fMaxDepth = asfloat(uMaxDepth);

    // Convert depth values to view space.
    float minDepthVS = ScreenToView(float4(0, 0, fMinDepth, 1)).z;
    float maxDepthVS = ScreenToView(float4(0, 0, fMaxDepth, 1)).z;
    float nearClipVS = ScreenToView(float4(0, 0, 0, 1)).z;

    // Clipping plane for minimum depth value
    // (used for testing lights within the bounds of opaque geometry).
    Plane minPlane = { float3(0, 0, -1), -minDepthVS };

    // Cull lights
    // Each thread in a group will cull 1 light until all lights have been culled.
    for (uint i = IN.groupIndex; i < MaxPointLights; i += BLOCK_SIZE * BLOCK_SIZE)
    {
        Sphere sphere = { PointLights[i].LightPosition, PointLights[i].LightRadius };
        if (SphereInsideFrustum(sphere, GroupFrustum, nearClipVS, maxDepthVS))
        {
            // Add light to light list for transparent geometry.
            t_AppendLight(i);

            if (!SphereInsidePlane(sphere, minPlane))
            {
                // Add light to light list for opaque geometry.
                o_AppendLight(i);
            }
        }
    }

    // Wait till all threads in group have caught up.
    GroupMemoryBarrierWithGroupSync();

    // Update global memory with visible light buffer.
    // First update the light grid (only thread 0 in group needs to do this)
    if (IN.groupIndex == 0)
    {
        // Update light grid for opaque geometry.
        InterlockedAdd(o_LightIndexCounter[0], o_LightCount, o_LightIndexStartOffset);
        o_LightGrid[IN.groupID.xy] = uint2(o_LightIndexStartOffset, o_LightCount);

        // Update light grid for transparent geometry.
        InterlockedAdd(t_LightIndexCounter[0], t_LightCount, t_LightIndexStartOffset);
        t_LightGrid[IN.groupID.xy] = uint2(t_LightIndexStartOffset, t_LightCount);
    }

    GroupMemoryBarrierWithGroupSync();

    // Now update the light index list (all threads).
    // For opaque geometry.
    for (uint o = IN.groupIndex; o < o_LightCount; o += BLOCK_SIZE * BLOCK_SIZE)
    {
        o_LightIndexList[o_LightIndexStartOffset + o] = o_LightList[o];
    }
    // For transparent geometry.
    for (uint t = IN.groupIndex; t < t_LightCount; t += BLOCK_SIZE * BLOCK_SIZE)
    {
        t_LightIndexList[t_LightIndexStartOffset + t] = t_LightList[t];
    }

    // Update the debug texture output.
    if (IN.groupThreadID.x == 0 || IN.groupThreadID.y == 0)
    {
        DebugTexture[texCoord] = float4(0, 0, 0, 0.9f);
    }
    else if (IN.groupThreadID.x == 1 || IN.groupThreadID.y == 1)
    {
        DebugTexture[texCoord] = float4(1, 1, 1, 0.5f);
    }
    else if (o_LightCount > 0)
    {
        float  normalizedLightCount   = o_LightCount / 50.0f;
        float4 lightCountHeatMapColor = LightCountHeatMap.SampleLevel(LinearClamp, float2(normalizedLightCount, 0), 0);
        DebugTexture[texCoord] = lightCountHeatMapColor;
    }
    else
    {
        DebugTexture[texCoord] = float4(0, 0, 0, 1);
    }
}

//--------------------------------------------------------------------------------------
// Forward rendering shaders - nothing particularly new here
//--------------------------------------------------------------------------------------

// This vertex shader transforms the vertex into projection space & world space and passes on the UV, i.e. the usual vertex processing
PS_TRANSFORMED_INPUT VS_TransformTex(VS_INPUT vIn)
{
    PS_TRANSFORMED_INPUT vOut;

    // Transform the input model vertex position into world space
    float4 modelPos = float4(vIn.Pos, 1.0f);
    float4 worldPos = mul(modelPos, WorldMatrix);
    vOut.WorldPosition = worldPos.xyz;

    // Further transform the vertex from world space into view space and into 2D projection space for rendering
    float4 viewPos = mul(worldPos, ViewMatrix);
    vOut.ProjPos = mul(viewPos, ProjMatrix);

    // Transform the vertex normal from model space into world space
    float4 modelNormal = float4(vIn.Normal, 0.0f);
    vOut.WorldNormal = mul(modelNormal, WorldMatrix).xyz;

    // Pass texture coordinates (UVs) on to the pixel shader, the vertex shader doesn't need them
    vOut.UV = vIn.UV;

    return vOut;
}

// Pixel shader that calculates per-pixel lighting and combines with diffuse and specular map.
// Basically the same as previous pixel lighting shaders except this one processes an array of lights rather than a
// fixed number. Obviously, this isn't efficient for a large number of lights, which is the point of using deferred
// rendering instead of this.
float4 PS_PixelLitDiffuseMap(PS_TRANSFORMED_INPUT pIn) : SV_Target
{
    ////////////////////
    // Sample texture

    // Extract diffuse material colour for this pixel from a texture
    float4 DiffuseMaterial = DiffuseMap.Sample(TrilinearWrap, pIn.UV);
    // clip( DiffuseMaterial.a - 0.5f ); // Discard pixels with alpha < 0.5 - the model in this lab uses a lot of alpha transparency, but this impacts performance

    // Renormalise normals that have been interpolated from the vertex shader
    float3 worldNormal = normalize(pIn.WorldNormal);

    ///////////////////////
    // Calculate lighting

    // Calculate direction of camera
    float3 CameraDir = normalize(CameraPos - pIn.WorldPosition); // Position of camera - position of current pixel (in world space)

    // Sum the effects of each light
    float3 TotalDiffuse  = AmbientColour;
    float3 TotalSpecular = 0;
    for (int i = 0; i < NumPointLights; i++)
    {
        float3 LightVec       = PointLights[i].LightPosition - pIn.WorldPosition;
        float  LightIntensity = saturate(1.0f - length(LightVec) / PointLights[i].LightRadius); // Tweaked attenuation approach
        float3 LightDir       = normalize(LightVec);

        float3 Diffuse = LightIntensity * PointLights[i].LightColour.rgb * max(dot(worldNormal, LightDir), 0);
        TotalDiffuse += Diffuse;

        float3 halfway = normalize(LightDir + CameraDir);
        TotalSpecular += Diffuse * pow(max(dot(worldNormal, halfway), 0), SpecularPower);
    }

    ////////////////////
    // Combine colours

    // Combine maps and lighting for final pixel colour
    float4 combinedColour;
    combinedColour.rgb = DiffuseMaterial.rgb * TotalDiffuse + SpecularColour * TotalSpecular; // The models in this lab have no specular in texture alpha, so use specular colour from X-file
    combinedColour.a = 1.0f;

    return combinedColour;
}

// Dummy vertex shader for the light particle system geometry shader below. The geometry shader does all the work
VS_POINTLIGHT_INPUT VS_LightParticles(VS_POINTLIGHT_INPUT vIn)
{
    return vIn;
}

// Pixel shader to render the flares at the centre of each light, nothing special here
float4 PS_LightParticles(PS_LIGHTPARTICLE_INPUT pIn) : SV_Target
{
    // Tint texture with colour of the light
    float3 diffuse = DiffuseMap.Sample(TrilinearWrap, pIn.UV).rgb * pIn.LightColour;
    return float4(diffuse, 0.0f);
}

//--------------------------------------------------------------------------------------
// States
//--------------------------------------------------------------------------------------
// States are needed to switch between additive blending for the lights and no blending for other models

RasterizerState CullNone // Cull none of the polygons, i.e. show both sides
{
    CullMode = None;
    FillMode = SOLID;
};

DepthStencilState DepthWritesOn // Write to the depth buffer - normal behaviour
{
    DepthFunc = LESS;
    DepthWriteMask = ALL;
};

BlendState NoBlending // Switch off blending - pixels will be opaque
{
    BlendEnable[0] = FALSE;
};

//--------------------------------------------------------------------------------------
// Techniques
//--------------------------------------------------------------------------------------

// A particle system of lights (just the sprite to show the location, not the effect of the light).
// Rendered as camera-facing quads with additive blending
technique11 ForwardPlus
{
    pass P0
    {
        SetVertexShader(CompileShader(vs_5_0, VS_TransformTex()));
        SetGeometryShader(NULL);
        SetPixelShader(CompileShader(ps_5_0, PS_PixelLitDiffuseMap()));
        SetComputeShader(CompileShader(cs_5_0, CS_main()));
        SetBlendState(NoBlending, float4(0.0f, 0.0f, 0.0f, 0.0f), 0xFFFFFFFF);
        SetRasterizerState(CullNone);
        SetDepthStencilState(DepthWritesOn, 0);
    }
}

ForwardPlus.fx
  12. 3D Forward Plus Lighting

    How about you show us what you have managed so far, and where specifically you are stuck?
  13. Need to map Xbox360 controller.

    I have a Logitech Gamepad F310. The reason I said it was Xbox 360 is because my computer treats it as such. I can map my controller buttons to jump, but I CANNOT map the left trigger to the keyboard's left shift to run. Can someone help me out? I'm using Unity, BTW.
  14. Hi, I am working on a project where I'm trying to use Forward Plus rendering on point lights. I have a simple reflective scene with many point lights moving around it. I am using an effects file (.fx) to keep my shaders in one place. I am having a problem with my compute shader code: I cannot get it to calculate the tiles and lighting properly. Is there anyone willing to help me set up my compute shader? Thank you in advance for any replies and interest!
  15. Is there a doctor in the house?

    Button Obsession Sickness Syndrome, or BOSS. (I am not a doctor and arbitrarily chose words that would spell "BOSS".) L. Spiro
  16. GDC 2017 Kicks Off

    The Game Developers Conference 2017 kicks off this week. Every year 25,000+ developers descend upon San Francisco, CA for a week full of education, networking, and - let's be honest - parties!

    This week you can watch for updates, news, pictures, and more from GameDev.net. We'll be speaking with several companies about their latest products and technologies for game developers, but we'll also talk to developers, go to events and parties, and try to give you a general feel of GDC this year. Beyond general press releases and information, our approach will include short opinion posts, most with pictures and/or video. Of course, you can get coverage from just about every other media source out there, including Twitter and Facebook, but we hope you'll find us an informative and entertaining source for an inside look at GDC 2017.

    So what do we have to look forward to? Personally, this is my 14th straight year at GDC. I remember when it was less than 10,000 people and held in San Jose, and a lot has changed. In fact, it's almost irresponsible to say "a lot has changed", because it's practically unfathomable to have predicted how much has changed at GDC and across the industry. Every year is an adventure, and while many times it seems like not much has changed, ultimately you leave realizing a lot has. We have two GameDev.net staff joining us this year to help with press coverage - Emily and Andrea. They'll be feeding me their reports, and I'll be posting them.

    As for the week ahead, let's start with parties. Unfortunately, most companies planned their parties for Tuesday evening. The trend has been moving toward earlier in the week over the last few years - the BIG parties are typically Wednesday and/or Thursday evening, so companies are trying to get bodies in the door early. Unfortunately it might have backfired, as everyone made the move to Tuesday. This isn't necessarily a bad thing, but for our coverage it means you'll get the inside look at just one party on Tuesday night, due to a commitment made before the rest of the invites flowed in. We do have more on Wednesday and Thursday evening, but I learned long ago that you have to pace yourself.

    Wednesday we're meeting with a variety of companies, ranging from VR to audio and developer services. We'll get a first-hand look at some of the latest technologies available to developers to enhance immersion in their games. Thursday is a mix of more meetings and free range on the Expo floor. We hope to swing by the Independent Games Pavilion to talk to the IGF winners (presented Wednesday evening) and check out other interesting games at IGF and GDC Play - a new area created for developers to showcase their games. Friday we're leaving wide open. We've learned that plans change over the course of GDC week, and it's better to stay flexible than to stick to a rigid plan. Of course, we will also be going to a few sessions and will provide key points from them as well. Wednesday in particular we'll be attending two of the Classic Game Postmortems, for Oregon Trail and Sid Meier's Civilization - a personal favorite.

    For me, GDC17 starts tomorrow evening with a 1.5-hour flight from San Diego to San Francisco, CA. My only plans are to get dinner and maybe meet up with a few people to give them GameDev.net t-shirts.

    Are you attending GDC? If so, leave a comment. And if you aren't making it to GDC this year, let us know what you'd like to see in the comments below.
  17. Yesterday
  18. Old guy seeks help!

    I don't think you can get much simpler than Game Maker if you want visual scripting. But you can take a look at Godot, which also has visual scripting, and see if you like it more. Or even GameSalad. Unreal also has a visual scripting tool, but it's overkill for almost any beginner. If you go for a traditional programming approach, you can try Love2D, a simple 2D framework. It uses Lua, which is a simple language. And take some time to learn maths. For simple 2D games, vectors would be a good start, as they allow you to make fluid movement for your games (characters, enemies and so on) and help when programming simple steering behaviors (like follow or pursuit). You can learn the basics in a day, and you will get better as you use them.
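    As a taste of how far basic vector maths goes, here is a small C++ sketch of the "pursuit" behaviour mentioned above: instead of heading at where the target is, the character heads at where the target will be. The `Vec2` type, the function names, and the rough intercept-time estimate are all illustrative (and `speed` is assumed to be positive).

```cpp
#include <cmath>

// Tiny 2D vector, enough for basic steering (illustrative, not from any library).
struct Vec2 { float x, y; };
Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
Vec2 operator-(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
Vec2 operator*(Vec2 a, float s) { return {a.x * s, a.y * s}; }
float length(Vec2 a) { return std::sqrt(a.x * a.x + a.y * a.y); }
Vec2 normalize(Vec2 a) {
    float l = length(a);
    return l > 0.0f ? a * (1.0f / l) : Vec2{0.0f, 0.0f};
}

// Pursuit: lead the target by predicting where it will be, using the
// current distance divided by our speed as a rough time-to-intercept.
Vec2 PursuitStep(Vec2 pos, Vec2 targetPos, Vec2 targetVel, float speed, float dt) {
    float t = length(targetPos - pos) / speed;   // rough time to intercept
    Vec2 predicted = targetPos + targetVel * t;  // lead the target
    return pos + normalize(predicted - pos) * (speed * dt);
}
```

    Swap `predicted` for `targetPos` and you get plain "follow"; the one-line difference is a nice first exercise once vectors click.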
  19. Problem with sleep

    I go to the gym 3 times a week, so for me that's not a problem.
  20. No, the data passed to the sendto() function is the same as what you will see in the packet on the wire. Instead, you need to look back up the call stack to see who calls this function, and then figure out how that data is generated. Sometimes, setting write breakpoints (data watchpoints) on the buffers being constructed will help. (This is obviously easier if the buffers are at a predictable address in memory.)
  21. Problem with sleep

    In addition to what others have said, make sure you're getting sufficient exercise. Try to get at least 30 minutes of moderate exercise 5 times a week. In addition to all the normal health benefits, I find it much easier to get to sleep when I have been active than when I've been sedentary for the whole day.
  22. Old guy seeks help!

    I've been searching online for somewhere to ask this, and I'm not sure this is the right place, so I'm sorry if I'm in the wrong section. The thing is, I'm a passionate gamer and have been gaming since the late 1970s (yes, I'm that old), and I would love to create some simple games to have fun with. Now, I'm not stupid enough to think I'm going to code another Ultima V, Morrowind or Elite, but I'd love to be able to make simple games like the old arcade games: Dig Dug, Space Invaders, Galaxians, etc. I have tried Game Maker and have been toying with it for close to a year, but it's just over my head, and even following tutorials, videos and books I'm still pretty clueless, so that's out. I am looking for something I can buy to create simple games with. I don't wish to pay a subscription, so something like Construct 3 is out; I just want one price. Could you recommend something really easy to use for an old guy who utterly failed maths? Thank you!
  23. Web Development Decisions.

    All very interesting, glad I asked. What do you all mean by a CSS framework, please? Sincerely, Josheir
  24. Problem with sleep

    That's interesting. I'll try it tomorrow to see if it helps me at least a little bit, and I'll share my experience.
  25. Problem with sleep

    Thinking about algorithms or implementation details is the programmer version of a common cause of insomnia in which your mind is simply too active to get to sleep or to stay asleep. I have horrible insomnia (at one point last week I was approaching 100 hours with only 10-16 hours of sleep), and so I have strategies for fighting some of these problems. I tend not to program just before bed and instead engage in a long multi-hour wind-down involving video games (Super Smash for Wii U), chess, chat, and piano. It's good to have a variety of things you can do to pass a few hours before bed after programming. I am about to see a sleep specialist and will have better advice to share afterwards, but as grumpyOldDude points out, it does tend to worsen with age. In my case sound sleep is almost impossible to obtain, regardless of what I do during the day, and it is impossible to say whether that was always going to be the case, as I always naturally tended towards the nocturnal life, or whether it got worse because I was unable to fix some tendency, such as an over-active mind, earlier. L. Spiro
  26. Problem with sleep

    I'll keep that in mind, thanks for the tip.
  27. This Week in Game-Guru - 02/19/2018

    This week's update will be short and brief, though a few major items are worth mentioning, along with some notes at the end about my own works.

    ENGINE UPDATE PROGRESS REPORT
    This week was a HUGE week for Game-Guru. Lee dropped a new public preview update onto our heads, and aside from a minor issue with one of my levels loading a strange view (due to something wonky with the AI waypoint system causing my camera to screw up), it is a MASSIVE improvement. Speeds are way up. The weird stutter and lag is gone. Crashes caused by decals are gone. Other associated fixes are still on the way. Impressively, Preben (a forum member who does a lot of code fixes for us) has been getting increasingly involved, and it shows. Updates to the update came in quick succession, with minor bugfixes often deployed in several same-day hotfixes. It's a huge jump. Expect a little delay before the next major release as they transition this public preview to an actual Game-Guru release around April. For now, it's a great gift and seems to run really well on my system. Full notes and information can be found both here: https://forum.game-guru.com/thread/219024 And in the news update here: https://www.game-guru.com/news-post/gameguru-new-public-preview

    NEW PRODUCTS IN THE STORE
    This week on the store we have several new items. You're not dreaming - my name is actually up there for once. Mstockton has put in a low-priced 'burn pile' for junkyards. I like it, though I do wish the texturing was a little better. That said, it's impressively low priced, so it could definitely find a home in your inventory. https://www.tgcstore.net/pack/11005 We also have several updates - Teabone has added a 'gain health by drinking water' script. It's expressly listed as being similar to the ones in Skyrim and Fallout. https://www.tgcstore.net/product/31049 Graphix has also included something eerily reminiscent of Fallout - a 'mech hangar'.
This is a nice piece, though I do wish he'd done a little more to differentiate it from Fallout 4's power armor station; I worry that one might run into some legal snags if they used it in a product for sale. https://www.tgcstore.net/pack/11006 AlexGCC has put together a DAMN fine building called the 'derelict building'. It unfortunately has a slightly ajar door, but the overwhelmingly high quality of the model makes up for that little misgiving, and it's priced to sell. https://www.tgcstore.net/product/31046 Lastly, my own product - the Camerakit (aka camkit) - is on the store. This product is expressly designed to make camera work for security monitors, cutscenes, etc. much easier for the average user. More on how and why down below, but for now, check out this video: It can be picked up from the store here: https://www.tgcstore.net/product/31048 (NOTE - it's on sale until 2/25!)

FREE STUFF
The realm of free stuff has really slowed down a lot. This is all I could find this week - an updated copy of the sandstone building that comes with the base Game-Guru system: https://forum.game-guru.com/attachment/18255

THIRD PARTY TOOLS
It looks like OldPman is still improving the Normalizator utility, which I understand will have full PBR support if not now, then soon. The latest update is as follows: "Hey. Pirate Myke and Earthling45, thanks for the feedback. I'm working on expanding the toolkit at the moment and there will be many opportunities added. A square brush, applying text, stamps is the first on my desktop. Adding the color of the texture and several other filters are now also in development. If you had in mind the choice of the color of the brush, then to the left of the alpha under the paint button, there is a rectangle clicking on which you open the colorpicker." - OldPMan This is a really good utility I need to get my mitts on when I get some free time.

RANDOM ACTS OF CREATIVITY
There's a lot of stuff on youtube.
I found a new project with a purported '239 hours' of development. That's quite a lot for a single project! https://www.youtube.com/watch?v=xP4I-Xq1auA There's also this great lighting tutorial by DuchenKuke: https://www.youtube.com/watch?v=d6-7m6QLQMA He also made a pretty good mountain-making tutorial showcasing an interesting method of using the flattener tool to create realistic shapes. https://www.youtube.com/watch?v=RlLS7Kts9mI And a totally Russian (I think?) tutorial. Feel free to watch it if you dare, though I recommend using auto-translate to follow it. https://www.youtube.com/watch?v=eb1zCS8kPxw Lastly, Teabone (a generally awesome forum-bro) has been doing some really great scripting work of his own: https://www.youtube.com/watch?v=uVjoOoSRzwg

IN MY OWN WORKS
I just wanted to bring up a few things going on in my world here. Foremost, I have my Camera Kit (shown above!) on the store. This project has been a while in the making and has some really cool features such as tracking, re-use, multi-camera setups, cutscene-style location triggers and more. I may add a trigger zone in the future, but for now there's more than enough to get people started. This kit came from my work with the Notepad++ API I made. I realized camera controls in this game engine could be fairly complex, and I wondered if it'd be easier to have a camera with a target you could simply point to, instead of calculating Euler angles and rotations. I mean, who really wants to use a rotation matrix just to figure out where to place a camera?! So I created a very simple camera system, which blossomed from there into an easy-to-use kit for the 'everyman'. Simply place an object, attach the camera initiator, then choose its target. Easy! The multi-cam system works the same, except the camera and initiator are separated so you can have multiple cameras. Next, I submitted my proposal to a publisher. What, you ask?
Let me say that again - I submitted my Game-Guru book proposal to a publisher. I've been in contact with several prominent forum members, Lee, Rick V, and a publishing house (CRC Press). I am currently working on a very lengthy print book which will be a start-to-finish guide to Game-Guru. I'm looking forward to working on it over the next year; it will eat a lot of my spare cycles, but I don't mind in the slightest. The book will probably be on the order of 430 pages, with tons of pictures, tutorials, a complete walkthrough of how to build a high-quality game, and more. Keep an eye out for more details as I complete the work on it. View the full article
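The "point the camera at a target instead of hand-computing rotations" idea behind the Camera Kit can be sketched in a few lines. This is an engine-agnostic illustration in Python; the function name and angle conventions are my own assumptions, not Game-Guru's actual API:

```python
import math

def look_at_angles(cam_pos, target_pos):
    """Return (yaw, pitch) in degrees that aim a camera at a target.
    Convention (an assumption for this sketch): Y is up, yaw is measured
    in the X-Z ground plane, pitch is elevation, and roll stays zero."""
    dx = target_pos[0] - cam_pos[0]
    dy = target_pos[1] - cam_pos[1]  # vertical difference
    dz = target_pos[2] - cam_pos[2]
    yaw = math.degrees(math.atan2(dx, dz))
    pitch = math.degrees(math.atan2(dy, math.hypot(dx, dz)))
    return yaw, pitch
```

The user only ever supplies two positions; the trigonometry that replaces the Euler-angle bookkeeping is hidden inside the helper, which is essentially what a "camera initiator plus target" workflow does for the end user.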
  28. Hi all, I posted here a few weeks ago to get some pointers on my SAT collision detection implementation. The help I got was great, so I thought I'd pester this forum again with another question: what are the best ways to handle 3D collision resolution? If anyone has any resources on the topic, I'd love to see them. I have looked over a few so far, but sometimes I feel like I'm reading another language when looking into this topic. Having done some preliminary research, sequential impulses seems to be the way to go, but I'm having trouble understanding it; the mathematics has me confused straight away. Looking at Erin Catto's GDC slides on the subject, it seems I need to calculate a normal impulse first to resolve penetration. I'm going to walk through my process so far, because I don't feel I have interpreted the calculations correctly, and hopefully I can get some pointers on where things may be going wrong. First I calculate the relative velocity between the two objects:

relV = v2 + w2 x r2 - v1 - w1 x r1

(where v is the linear velocity, w is the angular velocity, and r is the contact point relative to the center of mass). Then I project that onto the contact normal n:

vn = relV . n

Then I calculate the effective mass term kn:

kn = invM1 + invM2 + [invI1(r1 x n) x r1 + invI2(r2 x n) x r2] . n

(where invM is the inverse mass and invI is the inverse inertia tensor in world space). Combining these calculations gives the normal impulse magnitude:

Pn = max(-vn / kn, 0)

and multiplying that magnitude by the contact normal gives the impulse vector:

P = Pn * n

This is added to/subtracted from the linear velocities by scaling it with each object's inverse mass:

linearVel += P * invM

and applied to the angular velocities via the relative contact point:

angularVel += invI(r x P)

The resulting velocities should be adjusted by a basic normal impulse to counteract any penetration occurring between the colliding objects (I'm not yet dealing with velocity correction), but currently I don't seem to be getting any impulse (Pn = 0). Maybe that's just the case since I haven't added any bias impulses yet?
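To sanity-check the derivation in the question above, here is a minimal sketch of the normal-impulse step in Python. To keep it short, it treats the world-space inverse inertias as scalars (a simplifying assumption that holds for uniform spheres; a full solver would use 3x3 inverse inertia tensors). All helper names are my own, not from Catto's slides:

```python
def cross(a, b):
    """3D cross product of (x, y, z) tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def scale(a, s):
    return (a[0]*s, a[1]*s, a[2]*s)

def add(a, b):
    return (a[0]+b[0], a[1]+b[1], a[2]+b[2])

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def resolve_normal_impulse(v1, w1, v2, w2, r1, r2, n,
                           inv_m1, inv_m2, inv_i1, inv_i2):
    """One normal-impulse step matching the equations in the post.
    inv_i1/inv_i2 are SCALAR world-space inverse inertias (sphere
    assumption). Returns the updated velocities and the magnitude Pn."""
    # relV = v2 + w2 x r2 - v1 - w1 x r1
    rel_v = sub(add(v2, cross(w2, r2)), add(v1, cross(w1, r1)))
    vn = dot(rel_v, n)  # relative velocity along the contact normal
    # kn = invM1 + invM2 + [invI1(r1 x n) x r1 + invI2(r2 x n) x r2] . n
    ang1 = cross(scale(cross(r1, n), inv_i1), r1)
    ang2 = cross(scale(cross(r2, n), inv_i2), r2)
    kn = inv_m1 + inv_m2 + dot(add(ang1, ang2), n)
    pn = max(-vn / kn, 0.0)  # impulse magnitude, clamped at zero
    p = scale(n, pn)         # impulse vector
    # Apply equal and opposite impulses to the two bodies.
    v1 = sub(v1, scale(p, inv_m1))
    v2 = add(v2, scale(p, inv_m2))
    w1 = sub(w1, scale(cross(r1, p), inv_i1))
    w2 = add(w2, scale(cross(r2, p), inv_i2))
    return v1, w1, v2, w2, pn
```

Note that Pn is nonzero only when vn is negative, i.e. the bodies are approaching along the normal; if the contact normal points the wrong way or the bodies are already separating, Pn clamps to zero, which is one plausible cause of the "no impulse" symptom described above.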
  1. Load more activity
  • Advertisement