
slayemin's Journal

Quick screenshots

Posted 23 September 2014 · 895 views

I'm excited enough to post an early screenshot of my prototype. I just got dynamic directional lighting to work! I have a sun which arcs across the sky and acts as a directional light source with color being dependent on its position in the sky. The sky color is also a function of the sun position in the sky, so I have a nice, seamless transition between day and night.
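The gist of the sun/sky blend can be sketched outside of HLSL like this (a Python stand-in; the color values, function names, and time-of-day mapping are made-up placeholders, not the actual shader):

```python
import math

def lerp(a, b, t):
    """Linear interpolation between two RGB tuples."""
    return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

def sun_state(time_of_day):
    """time_of_day in [0, 1): 0 = midnight, 0.5 = noon.
    Returns (sun direction, light color, sky color)."""
    angle = (time_of_day - 0.25) * 2.0 * math.pi   # sunrise at t = 0.25
    direction = (math.cos(angle), math.sin(angle), 0.0)
    height = max(direction[1], 0.0)                # 0 at horizon, 1 at zenith

    sunset    = (1.0, 0.5, 0.2)                    # warm tint near the horizon
    noon      = (1.0, 1.0, 0.95)                   # near-white at midday
    night_sky = (0.02, 0.02, 0.08)
    day_sky   = (0.45, 0.65, 1.0)

    light_color = lerp(sunset, noon, height)       # directional light color
    sky_color   = lerp(night_sky, day_sky, height) # seamless day/night sky
    return direction, light_color, sky_color
```

Because both colors are continuous functions of the sun's height, the day/night transition is seamless by construction.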

Early Morning:
Attached Image

Mid Afternoon:
Attached Image

Evening Sunset:
Attached Image

Attached Image

It's not perfect (and it doesn't have to be for prototyping), but I'm starting to get a lot better at writing HLSL shaders. I'm not going to bother writing any shadow code though; that's just excessive polish (and I don't know how, aside from expensive ray tracing through the scene).

Month 3 Review

Posted 05 September 2014 · 1,175 views
It's been a busy month. I've bolded the topic heads so you can skip around.

Intel Buzz Workshop
The Intel-sponsored seminar at DigiPen was kind of a waste of time. The fact that it's sponsored by Intel should tell you a bit about the tone of the seminar (technical sales pitch and evangelism). The first talk was a high-level heads-up on new chips coming down the pipeline. They've been working really hard to reduce the amount of power consumed by their chips, and they were trying to pimp out their on-board video. I don't work that close to the metal, so it's all a bit irrelevant to my daily work. I was hoping to get some free hardware and make some contacts, but that didn't happen. I'm not good at socializing. One thing I learned though: Intel really wants to be a go-to resource for game developers. Check out their tools for game developers.

They also had the Sr. Architect for DirectX at Microsoft give us a 30-minute presentation on DirectX 12. What's the difference between DX11 and DX12? Performance and power usage. They demoed a procedurally generated asteroid field of 60,000 unique asteroids rendered in real time, switching between DX11 and DX12 on the fly. Under DX11, power usage was significant, performance hovered around 30fps, and the GPU fans would spin up and make noise. Switching to DX12 rendered the same scene at 60fps with half the power consumption, and the fans slowed down and made less noise. Neat and impressive stuff, except ultimately I'm going to be using a game engine which uses the upcoming tech. I use whatever the engine uses. So, I'm not quite their target audience?

The highlight of the seminar was a presentation by the Oculus Rift lead programmer on the current state of their research and development. They're still working out a lot of the issues with the OR headset, and still focusing on getting the display working just right. The challenge differs a bit from traditional development. With a typical monitor, your screen refreshes at about 60Hz. With virtual reality, the bare minimum is 90Hz ("or go home!", he says), and 125Hz is preferred. The display resolution also needs to be a lot larger (>1080), and the scene needs to be rendered for two displays. So, do the math: 125 frames per second on two high-resolution displays = a ton of graphical processing. What that translates into for developers targeting virtual reality: scale back your scenes by about 7 years in tech. In 2014, make a game that looks like 2007 and you'll be fine. For most indie developers, that shouldn't be too hard if you're going to target VR (which is crazy!).
Oculus Rift has been focusing on the display very hard, but not on sound and user input. Here are the issues they still need to tackle:
1. Sound tech has been stagnant for the last 15 years. Everyone just does stereo and calls it 'good enough'. The problem is that the baseline headphone has not changed. Consumers aren't willing to pay for the good stuff, and therefore, devs don't see a financial incentive to support the high tech headphones. I personally think the OR folks have an opportunity to integrate superior sound hardware into their headset. Build it, go big, and we'll support it.
2. User input: With virtual reality, mouse and keyboard input breaks the immersion and locks someone to a desk. You also frequently need to see your keyboard, and how can you do that while wearing a headset? Personally, I think the solution is the Leap Motion hardware. The only potential drawback is the rate at which your hands and arms fatigue. But maybe that just requires additional stamina on the gamer's part? I mean, the Xbox Kinect has been reading full-body motions for years, so it's not too much to ask for.
Anyways, I'd love to make an Oculus Rift version of our game with leap motion support, but... that's too much for a two person indie studio with limited finances and time. Maybe for version 2.0 or as a post-release patch. It would certainly make for a good market opportunity to seize early on. When the OR ships and everyone buys it up, the next thing they're going to look for are games built for the platform.

PAX 2014:
So, that was the "big event" happening in Seattle on the last weekend of August. People from all over the world fly into our home town to attend a four-day event celebrating "gaming", in the loosest sense of the term. I see it as a celebration of geek culture more than anything else. I didn't go this year. Apparently all of the tickets sold out within minutes of going on sale. I also don't have anything to exhibit yet, so there's no point in paying money to market something I don't have. Maybe next year, or the year after that. We'll see if I even make it that far.

A week before the event, we went over to visit a fellow indie studio to play test their first upcoming game called "Mekazoo". They were putting the finishing touches on their PAX build and putting on an exhibit. I had never seen or heard of their game before, so I was a virgin play tester who would run into all of the problems an uninitiated player would stumble on. And stumble I did! As I was playing, the designer and producer sat behind me and made lots of notes about where I was having trouble and what was and wasn't working. Afterwards, we had a short post-game interview for feedback.

This last Sunday, there was a social event called "Devs & Bevs" put on by some indie game devs at the Hard Rock Cafe for local game developers and PAX attendees. I went to check it out. Over 300 people had signed up for it on Facebook, so I wanted to get there early in case seating was limited. I was expecting informal panel discussions and a lot of good postmortem lessons learned from the various indie dev sponsors. Instead, it was a crowded room of PAX attendees all trying to shout over each other while some of the devs played their games on a big-screen projector. It wasn't what I was expecting, and I realized that I really am not at all social, so I finished my beer and left. Note of insight to self: I am realizing just how bad I am at starting conversations with total strangers. That's going to come back and bite me in the ass some time in the future.

Game Progress and News:
I switched my prototype from 2D sprites back into my 3D engine. Maybe that was a bad move and I'm an idiot for making it, but here's my reasoning:
In favor:
1. I had worked on my 3D game engine for 10 months. It works and it already solves a lot of the problems I'm starting to run into. Why re-solve problems I've already solved in 3D? (note: completely ignore the 10 month cost and be as impartial as possible)
2. Cameras are all taken care of. If I go back to 3D, I don't have to worry about panning, zooming, rotating, etc.
3. Object coordinates are in 3D world space, not screen space. This means I don't have to mess around with the inverted Y-axis in screen space. The math is also consistent: in math, zero degrees is at the 3 o'clock position, 90 degrees is at noon, and 45 degrees is at the 1:30 position. In screen coordinates, angles wind clockwise, so 90 degrees is at 6 o'clock. It's an annoyance and a source of gotchas.
4. My game is going to be in 3D, so why not prototype it in 3D as well? My flying units will actually fly. Arrows will actually have trajectories. High ground will make a difference.
5. I can actually *show* people the exact camera controls and fiddle around with it, and I can figure out when I want to do a level of detail swap between a low poly model and a map symbol (like in Supreme Commander). You gotta see it in action to understand it, right?

Against:
1. Working in 3D can be significantly slower. I'm not just moving sprites around anymore, I'm moving models in 3D space. This adds time to my workflow.
2. I have to do some significant porting to get my objects from 2D into 3D. Most of the changes are just coordinate conversions and rendering changes. The solution logic should be relatively unchanged.
3. Some of the problems I have to worry about in 3D aren't problems in 2D (such as selection boxes and picking points).
4. Hello?! Prototype! What's the opportunity cost of spending a month switching to 3D? Where could you be in the 2D version now?

I decided to go forward into 3D. So, it's five steps back and one step forward. Essentially, it's a bit of a 'restart', except all of my high-level code, behaviors, and back-end architecture are already done. This returned me to where I left off on my engine, except now I'm trying to make it work with what I'm actually building.

I discovered that my user input system was still lacking and under-designed. Before I abandoned my engine, I had decided it would be a "smart thing to do" to switch to a Model-View-Controller framework -- because I was introduced to it in university and liked the separation of roles. I was starting to think about my engine in those terms anyway, so I might as well make the leap and be done with it. Then, after doing a half-assed job at that implementation, I decided I was ready to implement a robust input handling system based on ApochPiQ's fantastic article on the subject. So both were half-finished, half-working systems (not that they were poorly done!). I also had a half-completed GUI system, so I could create windows, buttons, tool tips, and maybe a text box. Good enough for now!

So, there's a slight challenge with the MVC framework. You are making a game which may support multiple players. This is where MVC starts to shine: if you think of the core of your game as the "Model", and you give each of the players a viewer and a controller, you can support as many players as you want. The game model can run anywhere (i.e., a network server) and game clients can consist of just the viewer and controller. The viewer is only responsible for rendering the current state of the game model. The controller is only responsible for collecting user inputs. So, here's the first question: where in the framework does the player camera belong? Initially, I thought it went into the viewer. The camera is your view into the world, right?! Well, what if you want to move the camera? Then all of a sudden, the controller has to send input commands to the viewer! Uh... okay? That's probably not supposed to happen. And what happens if we attach the camera to a character in the game? So, the initial intuition is wrong. The camera belongs in the Model because it is a part of the game world. This lets your controller and viewer stay independent of each other, which is good. It just goes to show that first intuitions can be wrong! Pause and think about what you're doing, and you'll save time.
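To make the camera-in-the-Model idea concrete, here's a minimal sketch (a Python stand-in; the class and method names are my own invention, not the engine's):

```python
class Camera:
    def __init__(self):
        self.position = [0.0, 10.0, 0.0]

class GameModel:
    """The Model owns one camera per player, because the camera
    is an object living in the game world."""
    def __init__(self, num_players):
        self.cameras = {p: Camera() for p in range(num_players)}

    def move_camera(self, player, delta):
        cam = self.cameras[player]
        cam.position = [a + b for a, b in zip(cam.position, delta)]

class Controller:
    """Collects user input and issues commands to the Model only."""
    def __init__(self, model, player):
        self.model, self.player = model, player

    def on_key_w(self):
        self.model.move_camera(self.player, (0.0, 0.0, 1.0))

class Viewer:
    """Reads the Model's camera to render; never talks to the Controller."""
    def __init__(self, model, player):
        self.model, self.player = model, player

    def render(self):
        cam = self.model.cameras[self.player]
        return "rendering from %s" % (cam.position,)
```

Note that the Controller never touches the Viewer: moving the camera is just another command against the game world, so both sides stay independent.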

So, a player can view the game world and change their view using WASD and mouse-look commands. What about graphical user interfaces (GUIs)? The main game world gives the player a view of the battlefield with a lot of units. The player can left-click on a unit to select it, and right-click to give it a movement order. So far, so good, right? The player then opens up their spell book. This causes a spell book GUI to overlay the existing battlefield. What happens to the selected units on the battlefield if the player left-clicks around on the spell book? What if the spell book overlaps a battlefield button and the player clicks there? What should happen is that the spell book is the only consumer of inputs while it is active.

Here are a few notes and revisions from my own implementation of ApochPiQ's original article:

-I decided to combine all of my inputs into a single "InputCombo" action, which contains a single 32-bit integer bitmask for the keyboard and mouse states. I separate input data into four 8-bit chunks. The first byte contains the key being pressed, which fortunately ranges from 0->255. The second byte contains a bitmask of all combination keys I want to check (Ctrl + Alt + Shift). The third byte contains the mouse button states as bitmasks (supporting up to 8 buttons on a mouse). The fourth byte is a bitmask of all of the input events we want to trigger on (key down, key released, key pressed, drag, etc). The result is a unique input combo signature which I can use as a hash key. I do this for every key, so if you are holding down Ctrl + A + S, you get two input combos: Ctrl + A and Ctrl + S. Each may or may not be a hash key in our set. If neither of these match, I "atomize" my combinations by breaking down the key combinations to get "Ctrl", "S", and "A" as new hash keys to test, and then pass them through the input cycle again. If an input combo is already atomic and it's unhandled, it just gets discarded.

-Since I have hashable keys, I can create a dictionary (or hash table) which maps a key combo to a delegate (aka, a function pointer). This is my "Input Context". Every input context needs to be able to receive a reference to a list of input combos and try to handle them as inputs. Each GUI has its own input context. I have a stack of active GUIs for each player. If the first GUI can handle an input combo, it removes the input from the list and fires off the corresponding function pointer within the GUI. All unhandled input combos are then passed down the stack to underlying GUIs (except if a GUI is modal, in which case it consumes all inputs, handled or not).

-Each GUI is a separate class which inherits from a GUILayer class. Each GUI object has its own list of events and an Input Context which maps input combinations to those events. The GUI has a list of controls which can also fire off an event, so it's not infeasible for a GUI button to fire off the same event as a keyboard button (i.e., a hotkey).

-Some GUIs should pause the game world while active (though, reconsider this for multiplayer games).
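Putting the notes above together, the byte packing and the GUI input-context stack might look like this sketch (a Python stand-in; the flag values, names, and atomize policy are illustrative assumptions, not the actual implementation):

```python
# Hypothetical flag values mirroring the four-byte layout described above.
MOD_CTRL, MOD_ALT, MOD_SHIFT = 0x01, 0x02, 0x04
EVT_KEY_DOWN = 0x01

def pack_combo(key=0, modifiers=0, mouse=0, events=0):
    """Pack key / modifier mask / mouse mask / event mask into one
    32-bit integer, one byte per field, usable as a hash key."""
    return (key & 0xFF) | ((modifiers & 0xFF) << 8) \
         | ((mouse & 0xFF) << 16) | ((events & 0xFF) << 24)

def atomize(combo):
    """Break an unhandled combo into atomic combos (bare key, bare
    modifiers) that can be passed through the input cycle again."""
    key, modifiers = combo & 0xFF, (combo >> 8) & 0xFF
    events = combo & 0xFF000000          # keep the event byte in place
    atoms = [key | events] if key else []
    for bit in (MOD_CTRL, MOD_ALT, MOD_SHIFT):
        if modifiers & bit:
            atoms.append((bit << 8) | events)
    return atoms

class InputContext:
    """Maps packed combos to handlers; each GUI layer owns one.
    A modal context consumes every input, handled or not."""
    def __init__(self, bindings, modal=False):
        self.bindings, self.modal = dict(bindings), modal

    def handle(self, combos):
        leftover = []
        for combo in combos:
            if combo in self.bindings:
                self.bindings[combo]()   # fire the bound delegate
            elif not self.modal:
                leftover.append(combo)
        return leftover

def dispatch(gui_stack, combos):
    """Feed combos to each active GUI, topmost first; unhandled
    combos trickle down unless a modal GUI swallows them."""
    for context in gui_stack:
        combos = context.handle(combos)
        if context.modal or not combos:
            break
    return combos
```

So with a modal spell book stacked over the battlefield, a combo bound in the spell book fires there, while a battlefield hotkey pressed at the same time is simply swallowed and never reaches the battlefield.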

Software Engineering Tools:
#1 MS Paint: It's the best tool for architecting software quickly. The usability could be slightly better, but it is the fastest and most robust tool in my arsenal. Don't let its simplicity fool you; that's its greatest strength!
#2 Pen and notebook: Superior to whiteboards only because you can save the paper. Don't be afraid to use up paper. Paper is cheap! You can get another 100 page notebook for $5 at an office supplies store. If you need to, you can scan the paper into digital format for sharing.
#3 Whiteboard & markers: Great for drawing things up very quickly and with maximum freedom, but you can't copy, paste, or save your drawings. A digital camera is the best way to convert this data to digital form.
#4 Photoshop: Sure you get layers, but the usability is not as well suited for quickly architecting software. You can't easily draw boxes!
#5 Notepad++: "It's just a text editor though!" Yes it is. Don't forget that software engineering is about communicating relationships between objects. You don't necessarily need diagrams for everything. Sometimes a few sentences are sufficient to get the job done fastest.
#6 Visio: It's pretty much garbage. This is surprising because it's supposed to be a robust tool designed specifically for diagramming "stuff" like software. Unfortunately, it has two glaring problems:
1) It is very limited in the symbols / graphics you can use.
2) It is very restrictive in how you can use the symbols it gives you.
I am willing to bet that some people would rank Visio as #1. Sure, you can make pretty diagrams, but it takes too long. By the time you're done, I'm already coding.

Much like painting and paint brushes, at the end of the day it doesn't matter how good the tools are, but how good the craftsman is at using them effectively.

On Rigorous Testing:
Doubt the correctness of your algorithm until it has been tested rigorously. Just because your method appears to work as expected in a few preliminary cases doesn't mean that it works as expected in all cases... unless you've proven it through testing. Test your edge cases. Test unexpected inputs. If your function takes a range of values, test every value with small step sizes. The whole focus of your effort is not to prove that the algorithm works, but to prove that it can't be broken.
"I ain't got time for that!" you might say, "How much testing can I actually get away with safely?"
Proportion your testing based on how many other bits of code are going to rely on it working perfectly, and on the consequences of unexpected behavior. If your function tells you the angle between two vectors and a ton of your math depends on it being correct, then test that shit with every vector variation. If, on the other hand, it's just a mesh with a few misplaced vertices due to a calculation error, it's probably even safe to leave that bug in there for non-production stuff.
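As an example of sweeping a range with small step sizes rather than spot-checking, here's a hypothetical angle-between-vectors function tested across the whole circle (a Python sketch; the function and tolerances are my own, not code from the engine):

```python
import math

def angle_between(a, b):
    """Angle in radians between two 2D vectors, with the cosine
    clamped so float error can't push acos out of its domain."""
    dot = a[0] * b[0] + a[1] * b[1]
    na, nb = math.hypot(*a), math.hypot(*b)
    cos_theta = max(-1.0, min(1.0, dot / (na * nb)))
    return math.acos(cos_theta)

# Sweep the whole input space in small steps instead of trying
# a couple of "nice" vectors and calling it good.
steps = 180
for i in range(steps):
    for j in range(steps):
        ta = 2 * math.pi * i / steps
        tb = 2 * math.pi * j / steps
        a = (math.cos(ta), math.sin(ta))
        b = (math.cos(tb), math.sin(tb))
        expected = abs(ta - tb)
        if expected > math.pi:
            expected = 2 * math.pi - expected
        assert abs(angle_between(a, b) - expected) < 1e-6, (i, j)
```

The clamp on `cos_theta` is exactly the kind of detail this sweep catches: without it, float error makes `dot / (na * nb)` land a hair outside [-1, 1] for parallel vectors, and `acos` throws.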

Personal Example: I have a 3D terrain mesh based off of a height map. For any position X/Y on the terrain, I need to find the corresponding height. This isn't as easy as it seems at first glance because each X/Y value may not exactly correspond to a vertex position. Let's say we have a terrain tile with four corners:

| Description | Pos   | Height |
|-------------|-------|--------|
| TopLeft     | [0,0] | 3.0    |
| TopRight    | [1,0] | 4.0    |
| BottomLeft  | [0,1] | 3.2    |
| BottomRight | [1,1] | 3.0    |
If we get an input position of [0,0], the height is easy to find because it's the top left vertex.
What if we get an input value of [0.25, 0.75]? There's nothing fast and easy to look up now, so we have to do a bilinear interpolation of all four corners to get a height value. However you end up solving it, it needs to support a few extra things: What if a tile isn't 1x1? What if we change the dimensions of our sample space? Again, test everything!

If you see a slight bug, where things are just off by even a teensy bit here and there, you've got a problem that needs to be fixed ASAP. Those types of bugs start off as small and seemingly insignificant things, but then you build systems on top of the existing systems which depend on the buggy portion to work as expected. Then, when it doesn't, those systems have bugs too! Now you have two bugs to fix and it's going to take a lot longer. If instead you hack together a dirty fix, you're essentially shoelacing your code. Add enough shoelaces and special fixes and you're going to have a huge mess on your hands which takes forever to fix. Don't let that happen. Test.
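A bilinear height lookup along these lines might be sketched as follows (Python; the [row][col] indexing, the clamp at the far edge, and the tile_size parameter are my assumptions, not the engine's code):

```python
def terrain_height(x, y, heights, tile_size=1.0):
    """Bilinearly interpolate the terrain height at world (x, y).
    'heights[row][col]' holds vertex heights; rows advance with y.
    tile_size handles tiles that aren't 1x1."""
    gx, gy = x / tile_size, y / tile_size
    # Clamp so a query exactly on the far edge stays in the last tile.
    col = min(int(gx), len(heights[0]) - 2)
    row = min(int(gy), len(heights) - 2)
    fx, fy = gx - col, gy - row        # fractional position inside the tile

    top    = heights[row][col] + (heights[row][col + 1] - heights[row][col]) * fx
    bottom = heights[row + 1][col] + (heights[row + 1][col + 1] - heights[row + 1][col]) * fx
    return top + (bottom - top) * fy

# The four-corner tile from the table above.
tile = [[3.0, 4.0],
        [3.2, 3.0]]
```

For the table's tile, `terrain_height(0.0, 0.0, tile)` hits the TopLeft vertex exactly (3.0), while `terrain_height(0.25, 0.75, tile)` blends all four corners and comes out to about 3.175.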

I always keep in mind Johannes Kepler. He was an astronomer from centuries ago who discovered that the planets don't actually orbit the sun in perfect circles. He took very careful measurements of planetary movement over time and was finding some very minute discrepancies in where the planets *should* be if they moved in circles, and where he actually observed them. He could chalk up the error to slight malfunctions in his instruments, but he was careful enough to be able to rule that out. Most people would have ignored these very slight errors, but he didn't. It turns out that there was a fundamental problem in the understanding of the model of the solar system. By using ellipses (with very close foci) to represent planetary orbits, he was able to match the observations with the modeled expectations. I think this is a fantastic approach! So, if you take that same degree of rigor towards testing and development, where you don't even ignore the slightest unexplained bits output by your code, you can fix bugs before they even occur. Your "model" is your pen and paper conceptual/mathematical model. Your "observed data" is the actual output you get in your debugger. If they all match perfectly in every scenario, and you can't break it, you've got a solid algorithm.

Good testing of code also takes into account its performance. I've heard a lot about it, but never really did it in practice... until two days ago. What it revealed stunned me.

So, the general rule of thumb for optimization is that you don't want to optimize an algorithm unless your profiling shows that it is really using up a lot of your resources. I finally ran into a scenario where I had a ton of code complexity and when I issued a move order to any unit, my framerate would drop from about 120fps to about 15-20fps in a timespan of 10-15 seconds. What's causing this slowdown?!
My first guess was that it was the new code I had just added. So, I commented it out and tried again. Still slow. I couldn't perfectly reproduce the original conditions, but I could reliably get the slowness again. Eventually I figured out that every time my units moved, the game slowed down. Okay, maybe one of my algorithms doesn't scale well for 50 creatures? That was my next guess, so I commented out a bunch of code to rule it out. Nope, still slow. Okay, what if I remove all but one creature? Surely that can't cause any algorithmic slowdown, right? Wrong. Still slow. At this point, I was just guessing and poking around in hundreds of thousands of lines of code. That's never good. So, I did what I had been longing to do: I built a system for profiling the performance of my code and created a sweet bar-graph of performance history. It only took 3 hours to build and test. Keep in mind that a game can have either a CPU-bound bottleneck or a GPU-bound bottleneck. I decided to measure both, and found that my GPU frame time was growing linearly after a creature moved. From there it's a simple feat to start isolating sections of code, and within 5 minutes I had traced the slowdown to some debug code I had put into my Octree to visualize its regions. Remove that bit, try again: performance is perfectly smooth. Add everything back in: no noticeable hit on performance.
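The core idea of that kind of profiler is just timing named sections and keeping a history per section; a minimal sketch (a Python stand-in with invented names, not the actual bar-graph tool):

```python
import time
from collections import defaultdict

class Profiler:
    """Minimal section timer: wrap suspect code in profile('name')
    and compare accumulated times to isolate a bottleneck."""
    def __init__(self):
        self.history = defaultdict(list)   # section name -> sample times

    def profile(self, name):
        profiler = self
        class _Section:
            def __enter__(self):
                self.start = time.perf_counter()
            def __exit__(self, *exc):
                profiler.history[name].append(time.perf_counter() - self.start)
        return _Section()

    def average_ms(self, name):
        samples = self.history[name]
        return 1000.0 * sum(samples) / len(samples)

profiler = Profiler()
with profiler.profile("octree_debug_draw"):
    time.sleep(0.001)   # stand-in for the suspect rendering code
```

Once every suspect section is wrapped, isolating a slowdown is just a matter of sorting sections by accumulated time instead of guessing from how complicated the code looks.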

One other really nice benefit of performance monitoring is that it takes a lot of the guesswork out of the actual performance of sections of code. You don't have to say, "I think this code looks complicated and has a lot of mathematical operations, therefore it must run slow and needs to be optimized" (often it doesn't, if you actually measure it!). Instead, you can say, "I ran a profiler and stress tested this, and found that it reduces framerate by 1 frame per second in the worst-case scenario."

Progress Screenshots:
Lastly, I thought I'd share two screenshots of the prototype progress so far.
This is from the 28th of July:
Attached Image

This is from the 4th of September (~1 month later):
Attached Image
