

Matt Carr

Member Since 29 Apr 2006
Offline Last Active Apr 06 2014 02:01 AM

Topics I've Started

Every Semicolon: A video capture of a game made from start to finish

24 February 2012 - 07:26 PM

Before starting our new game project I had a thought: I could record the entire process from start to finish and put it online, both as a learning tool for others and as a way of receiving feedback on what I do, so I might learn some new things myself and improve the game before it's released. This will be a fairly long series: while the game is not massive, it's certainly not a small weekend game-jam sort of production.

So far I've recorded two videos, one 2 hours long and the other 4. Talking to yourself while working for 4 hours straight without devolving into inane rambling is difficult (so far impossible), but it's something I want to improve on as the series continues. I'd really appreciate feedback on any aspect of the videos, be it the game itself, the video production, what I talk about, or my programming and development techniques.

Here is Part 1:

http://www.youtube.com/watch?v=eaGnrQPWOss
(I recommend watching fullscreen in 1080p on YouTube.)

Some of the events in this video:
  • I start a new Unity project and begin setting up the folder structure
  • I discuss some general concepts of the Unity engine
  • I write some basic scripts and show how to apply and use them
  • I show how to set up and use prefabs
  • I show some shader development and get fooled by Unity 3.5's new linear lighting, which sends me on a shader debugging session
  • I write a new custom editor window for measuring distance between objects
Here is Part 2:

http://www.youtube.com/watch?v=jy0jPFANGpg
(I recommend watching fullscreen in 1080p on YouTube.)

Some of the events in this video:
  • I develop the initial elements of a camera manager to handle smoothly repositioning the camera
  • I create the first basic siege weapon in the game, the catapult, with placeholder graphics
  • I create the top and side aiming modes with placeholder controls and graphics
  • I create the first "bullet" and set it up to be shot into the fortress
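The "smoothly repositioning the camera" behaviour mentioned above is language-agnostic at its core. Here is a minimal Python sketch of framerate-independent exponential smoothing toward a target position; the function name and parameter values are my own illustration, not taken from the videos:

```python
import math

def approach(current, target, speed, dt):
    """Move a fraction of the remaining distance each frame.

    Using 1 - exp(-speed * dt) makes the smoothing independent
    of the framerate, unlike a fixed lerp factor per frame.
    """
    t = 1.0 - math.exp(-speed * dt)
    return tuple(c + (g - c) * t for c, g in zip(current, target))

# Simulate ~5 seconds at 60 fps: the camera converges on the target.
pos = (0.0, 10.0, -10.0)
target = (5.0, 10.0, 0.0)
for _ in range(300):
    pos = approach(pos, target, speed=4.0, dt=1.0 / 60.0)
```

In a Unity camera manager the same math would typically run in `Update` using `Time.deltaTime`.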
A little about me: I work full time as the lead programmer of a team building large "serious game" projects in Unity, and I work on my own stuff (like the game in this series) on the side in my spare time. I've been working with Unity almost every day for around 3 years.

I really want to evolve and improve these videos, so any feedback is appreciated. In particular, if you really hate anything I'm doing, please let me know and I'll try to change it.

I'll also be posting each video on my Journal and will update this thread as new videos go up.

Cluster rendering with Unity

15 March 2010 - 10:58 AM

We're potentially going to be working with some display systems requiring a rendering cluster in the near future, and I'd like to use Unity since we've used it effectively on previous projects. I'm not especially knowledgeable about cluster rendering at this stage, so I was hoping someone here might be able to fill in the blanks.

From what I know about the hardware we would run on, the systems have hardware frame locking between them (NVIDIA Quadro cards, I'm guessing). Where my knowledge on the subject fails is in knowing what else is required (if anything) on the software side to have synced display from all the systems. I'd then need to figure out whether whatever is required could be done in Unity.

Also, I don't see any issue with using Unity's networking to keep the camera and dynamic object transform info synced, but maybe I'm assuming too much there. I would think that updating should take place at some consistent delta and the random seed should be synced at launch, but other than that the systems should be able to run the application autonomously while receiving input info from the server and occasional authoritative transform data checks.

If anyone could shed some light on this for me I'd be grateful. I'm hoping the time required to get this working with Unity will be significantly less than creating a workable engine/framework with something like OpenSceneGraph, which has cluster rendering support built in. The answer I'm hoping for is "run with V-Sync on and the hardware frame locking will take care of the rest", but I won't cross my fingers.
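The software-side scheme described above (a fixed update delta, a random seed shared at launch, and identical inputs distributed from a server) amounts to deterministic lockstep: every node advances the same simulation independently and stays in agreement. A minimal Python sketch of that idea, with all names hypothetical:

```python
import random

class ClusterNode:
    """One render node in a hypothetical lockstep cluster."""

    def __init__(self, seed):
        # Every node is launched with the same seed, so any
        # "random" behaviour is identical across the cluster.
        self.rng = random.Random(seed)
        self.time = 0.0
        self.state = 0.0

    def step(self, frame_input, dt=1.0 / 60.0):
        # A fixed delta (not wall-clock time) keeps the
        # simulations advancing identically on every machine.
        self.time += dt
        self.state += frame_input + self.rng.random() * dt

# Three nodes receiving the same per-frame input from a server
# end every frame in exactly the same state.
nodes = [ClusterNode(seed=42) for _ in range(3)]
for frame_input in [0.1, 0.0, 0.25]:
    for n in nodes:
        n.step(frame_input)
```

The hardware frame lock then only has to guarantee that the identically-computed frames are presented at the same instant.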

Slope based terrain pathfinding

15 May 2008 - 08:23 PM

I've been thinking about how I would set up the pathfinding nodes/edges for the heightmap-based terrain in the RTS game I'm working on. The method I've come up with thus far is as follows:

1. Get all vertices and edges from the terrain mesh
2. Create a node at every vertex and a pathing edge for every edge
3a. Loop through every edge, finding its slope angle
3b. If angle > maxAngle, delete the edge and mark the edge's nodes (vertices) as red
4a. Loop through every edge
4b. If the edge's nodes (vertices) are both red, delete the edge
4c. If the edge has 1 red node, mark the other node as blue
5. Loop through every edge and delete those that don't have 2 blue nodes
6. Loop through every node and delete all those that aren't blue

(I called the nodes blue and red in this instance to correlate with the picture below.)

After that, assuming I'm not mistaken, there should remain only an edging around any rises or falls that exceed the slope limit. This picture should help show what I mean. The green edges are those that exceed the slope limit. The red nodes are those connected to edges that exceed the slope limit. The blue nodes are those connected to edges with one red node. The yellow edges are those that have 2 blue nodes. The yellow edges and blue nodes are the ones that would be kept in the end.

Given that I now have the boundaries, I could run through each remaining node, check it against every other node to see if there is a boundary (yellow edge) between them (on the x-z plane), and if not, create a new edge between them. I'm pretty sure that after that I'd have all I need for pathfinding to anywhere.

At run time, when pathfinding from A to B, I would:

1. Get the closest node to A and pathfind to the closest node to B
2. Check if there's any blockage between the closest node and the next node on the path; if not, drop the closest node and go straight to the next node
3. At the second-last node on the path, check if there's any blockage to B; if not, go straight to B

I'm pretty sure this will all work, but I don't want to start implementing it without being sure, so my questions are: Is this going to work, and/or is there a better way? Also, what would be the best way to incorporate dynamic blockages at runtime (e.g. built buildings and other units)? I would think ignoring them when doing all the pathfinding and just using flocking to get around them.

[Edited by - Matt Carr on May 18, 2008 12:14:44 AM]
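The red/blue marking steps above can be sketched directly. Here is a minimal Python version of steps 3-6, assuming vertices are `(x, y, z)` positions with y as height; the function and variable names are my own, and in practice the mesh data would come from the engine:

```python
import math

def boundary_edges(vertices, edges, max_angle_deg):
    """Return the 'yellow' edges and 'blue' nodes from the post's steps 3-6.

    vertices: mapping of node id -> (x, y, z); edges: list of (a, b) id pairs.
    """
    red, flat = set(), []
    for a, b in edges:
        ax, ay, az = vertices[a]
        bx, by, bz = vertices[b]
        run = math.hypot(bx - ax, bz - az)          # 3a: slope of this edge
        angle = 90.0 if run == 0 else math.degrees(math.atan(abs(by - ay) / run))
        if angle > max_angle_deg:
            red.update((a, b))                      # 3b: too steep -> drop edge, ends go red
        else:
            flat.append((a, b))

    blue = set()
    for a, b in flat:
        if (a in red) != (b in red):                # 4c: exactly one red end
            blue.add(b if a in red else a)          #     -> the other end is blue
    # 4b and 5 combine to: keep only edges whose BOTH ends are blue.
    yellow = [(a, b) for a, b in flat if a in blue and b in blue]
    return yellow, blue                             # 6: keep only blue nodes

# A 3x3 patch with a cliff along the far row: the surviving edges form
# the boundary ring one step back from the steep edges.
verts = {0: (0, 0, 0), 1: (1, 0, 0), 2: (2, 0, 0),
         3: (0, 0, 1), 4: (1, 0, 1), 5: (2, 0, 1),
         6: (0, 5, 2), 7: (1, 5, 2), 8: (2, 5, 2)}
edge_list = [(0, 1), (1, 2), (3, 4), (4, 5), (6, 7), (7, 8),
             (0, 3), (1, 4), (2, 5), (3, 6), (4, 7), (5, 8)]
yellow, blue = boundary_edges(verts, edge_list, 45.0)
```

One caveat worth checking against the picture: in this sketch, flat interior edges far from any slope are also discarded by step 5 (their ends never become blue), which matches the stated goal of keeping only the edging around rises and falls.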

Rendering problem in 3DS Max

29 April 2006 - 07:37 PM

I'm having a problem when I render my model. Below is a screenshot from the 3DS Max viewport showing my model looking normal, but when I render it, it comes out with a weird wireframe and is extremely bright, as you can see further below. I've tried changing options in the render window and the environment lighting. I don't know much about 3DS Max in general, especially the rendering side of things. Does anyone know what's causing this and how to fix it?
