## Using a MIDI Controller as a Debugging Aid

Today, we’re going to talk about how to configure a MIDI controller to act as a debugging aid for a game – or any software development project. Why a MIDI controller?

[Photo credit: Wikimedia Commons / CC BY-SA 3.0]

Why would you want to do this? It’s very common to have a set of internal variables that we want to tweak on the fly, while the game is running, until things are just right: animation, physics, AI, gameplay and difficulty, graphics, UI, and more. Typically people end up homebrewing a variety of tricks to edit these variables. You might ‘borrow’ a couple of keyboard keys and throw in an on-screen text display. You might have a developer console command and type in the values. If you really get fancy, you might even have a TCP debug channel that can control things from another UI, maybe even on another computer.

These options all have their pros and cons. Why add MIDI as another option to the mix?

- It’s really easy and safe to disable for releases. We’re probably not using MIDI for any legitimate user purpose, after all.
- It doesn’t require any UI, on any screen.
- It supports platforms like mobile, where keystroke debugging can be particularly problematic.
- It’s quick to edit a large range of parameters in tandem.
- Editing without explicit numerical values avoids the temptation to stick to “convenient” numbers.
- It’s easy to hand this setup to a non-technical person and let them toy around.
- Tactile editing is just plain fun.

How does a MIDI controller work? Over the MIDI protocol, naturally. MIDI was standardized in 1983 as a way for musical tools like keyboards, samplers, sequencers, and all sorts of computerized devices to communicate with each other. While in ye olden days the connection was made over a 5-pin DIN cable, many modern controllers have USB interfaces. These are class-compliant devices, which means you can simply plug them into a computer and go, without any drivers or mandatory support software.
MIDI itself is a slightly wonky protocol, based strictly around events. There’s a wide range of events, but for our purposes we only really need four: Note On, Note Off, Control Change, and Program Change. Note On/Off are easy – they’re your key down/up equivalents for a MIDI controller. Control Change (CC) represents a value change on an analog control, like a knob or slider. Program Change changes an entire bank or preset.

Note what’s not in there: state. The whole thing is designed to be stateless, which means no state queries either. But many MIDI controllers do have state, in the physical position of their knobs or sliders. That leads us to the dreaded MIDI CC jump. In short, what happens when a value in our software is set to 0 (min) and the user twists a knob that is sitting at 127 (max)? The knob will transmit a CC with a value of 126 attached, causing the variable it’s attached to to skyrocket. This desynchronization between software and hardware state can be confusing, inconvenient, and downright problematic.

Choosing a MIDI controller

[Photo credit: Behringer/MUSIC Group]

Enter the Behringer X-Touch Mini. For a trivial $60 street price at the time of writing, take a look at what it gives us:

- 8 rotary encoders with LED collars for value display
- Volume slider
- 24 push buttons (8 push-knob, 16 LED) with configurable behavior (momentary/toggle)
- Dual control layers, so multiply everything above by two
- Simple class-compliant USB connectivity

Now back up and read that again. Rotary encoders. LED collars. Get it? Encoders aren’t knobs – they spin freely. The value isn’t based on the position of a potentiometer, but digitally stored and displayed using the LEDs around the outside of each encoder. Same goes for the button states. This is a controller that can not only send CC messages but receive them too, and update its own internal state accordingly. (The volume slider is not digital and will still cause CC jumps.)
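For controls that can still jump, one common mitigation in MIDI software, often called "soft takeover" or "pickup", is to ignore incoming CC values until the hardware position catches up with the software's value. A minimal sketch, not from this article's code; the names here are mine:

```cpp
#include <cstdlib>

// Sketch of "soft takeover": discard incoming CC messages until the hardware
// knob passes close to the value the software already holds, so a stale knob
// position can't make the variable jump. Names are illustrative only.
struct SoftTakeover
{
    int softwareValue = 0;   // current value on the software side (0-127)
    bool pickedUp = false;   // has the knob "caught up" to the software yet?

    // Returns true if the incoming CC value should be applied.
    bool onControlChange(int ccValue)
    {
        if(!pickedUp)
        {
            // Only pick up once the knob is within a small window of the
            // software value; until then, drop the message.
            if(std::abs(ccValue - softwareValue) > 2)
                return false;
            pickedUp = true;
        }
        softwareValue = ccValue;
        return true;
    }
};
```

With the X-Touch Mini's encoders this is unnecessary, since the controller's state can simply be overwritten from software, but it's a reasonable fallback for the slider or for cheaper pot-based controllers.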
Setting up the X-Touch Mini

[Photo credit: Promit Roy / CC BY-SA 3.0]

Behringer offers a program called X-Touch Editor, which you’ll want to download. Despite the terrible English and oddball UI, it allows you to configure the X-Touch Mini’s behavior and save it persistently onto the controller’s internal memory. You can change the display style of the LEDs (I recommend fan) and change the buttons between momentary (like a keyboard button) and toggle (push on, push again off). It also offers other options for MIDI behavior, but I recommend leaving that stuff where it is. For my purposes, I set the center row of buttons to act as toggles, and left the rest of it alone.

Understanding the MIDI protocol

At this stage it might be helpful to download a program called MIDI-OX. This is a simple utility that lets you send and monitor MIDI messages, which is very useful in understanding exactly what’s happening when you are messing with the controller. Note that MIDI devices are acquired exclusively – X-Touch Editor, MIDI-OX, and your own code will all lock each other out.

The MIDI messages we’ll be using have a simple structure: a four-bit message ID and a four-bit channel selection, followed by one or two data bytes. The X-Touch Mini is configured to use channel 11 for its controls out of the box. Here are the relevant message IDs:

```cpp
enum MidiMessageId
{
    MIDI_NoteOn = 144,
    MIDI_NoteOff = 128,
    MIDI_ControlChange = 176,
    MIDI_ProgramChange = 192,
};
```

Channels are 0-based in the protocol, so you’ll add 10 to these values when submitting them. NoteOn and NoteOff are accompanied by a note ID (like a key code) and a velocity (pressure). The key codes go from 0-23 for layer A, representing the buttons left to right and top to bottom. When you switch to layer B, they’ll switch over to 24-47. You’ll receive On and Off to reflect the button state, and you can also send On and Off back to the controller to set Toggle mode buttons on or off.
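To make the byte layout concrete: the four-bit message ID occupies the high nibble of the status byte and the zero-based channel the low nibble, which is why you add 10 for channel 11. A quick sketch (the helper names are mine, purely for illustration):

```cpp
#include <cstdint>

// The status byte packs the message ID (high nibble) with the 0-based channel
// (low nibble). For channel 11 (0-based value 10), a Control Change status
// byte is 176 + 10 = 186. Helper names are illustrative, not from a library.
uint8_t makeStatusByte(uint8_t messageId, uint8_t channel)
{
    return messageId + (channel & 0x0F); // e.g. 176 + 10 = 186
}

uint8_t messageIdOf(uint8_t status) { return status & 0xF0; } // high nibble
uint8_t channelOf(uint8_t status)   { return status & 0x0F; } // low nibble
```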
This is really handy when hooked up to internal boolean variables: light on = true, light off = false. We’re not interested in the pressure field, but it does need to be set correctly. The values are 127 for NoteOn and 0 for NoteOff.

ControlChange (CC) will appear any time the knobs are rotated, with the knob ID and the current value as data bytes. The default range is 0-127; I recommend leaving it that way and rescaling it appropriately in software. For layer A, the knob IDs go from 1-8 (yes, they’re one-based) and the slider is assigned to 9. On layer B that’ll be 10-17 and 18. You can transmit identical messages back to the controller to set the value on its end, and the LED collar will automatically update to match.

The X-Touch will never send you a ProgramChange (PC) by default. However, on the current firmware, it will ignore messages that don’t apply to the currently active layer. You can send it a Program Change to toggle between layer A (0) and B (1), and then send the rest of your data for that layer to sync everything properly. PC only has a single data byte attached, which is the desired program.

Writing the MIDI glue code

Go ahead and grab RtMidi. It’s a nice little open source library – really just a single header and source file pair – that supports Windows, Linux, and Mac OS X. (I have an experimental patch for iOS support as well that may go up soon.) I won’t cover how to use the library in detail here, as that’s best left to the samples – the code you need is right on their homepage – but I will give a quick overview. You’ll need to create two objects: RtMidiIn for receiving data, and RtMidiOut for sending it. Each of these has to be hooked to a “port” – since multiple MIDI devices can be attached, this allows you to select which one you want to communicate with. The easiest thing to do here is just to search the port lists for a string name match.
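Before wiring up callbacks, it's worth pinning down the 0-127 rescaling mentioned above: it's just a linear remap in each direction. A minimal sketch under the same assumptions, with helper names of my own (not from RtMidi):

```cpp
#include <algorithm>

// Remap between the 0-127 MIDI CC range and a variable's [min, max] range,
// in both directions. Names are illustrative only.
float ccToValue(unsigned char cc, float min, float max)
{
    return min + (max - min) * (cc / 127.0f);
}

unsigned char valueToCc(float value, float min, float max)
{
    float t = (value - min) / (max - min);
    t = std::min(1.0f, std::max(0.0f, t));               // clamp to [0, 1]
    return static_cast<unsigned char>(t * 127.0f + 0.5f); // round to nearest
}
```

Rounding to the nearest step (rather than truncating) keeps a full round trip from drifting the value downward by one CC step each time it bounces between software and controller.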
At this point it’s just a question of signing up for the appropriate callbacks and parsing out their data, and then sending the correct messages back. The last step is to bidirectionally synchronize variables in your code to the MIDI controller. I did it with some template/macro nonsense:

```cpp
void GameStage::MidiSyncOut()
{
    if(_midiOut)
    {
        _midiOut->Send(MIDI_ProgramChange, 0);
        MidiVars(1, 0, 0, 0);
        _midiOut->Send(MIDI_ProgramChange, 1);
        MidiVars(1, 0, 0, 0);
        _midiOut->Send(MIDI_ProgramChange, 0);
    }
}

void GameStage::NoteOn(unsigned int channel, unsigned int note, unsigned int velocity)
{
    MidiVars(0, 0, note, velocity);
}

void GameStage::NoteOff(unsigned int channel, unsigned int note, unsigned int velocity)
{
    MidiVars(0, 0, note, velocity);
}

void GameStage::ControlChange(unsigned int channel, unsigned int control, unsigned int value)
{
    MidiVars(0, 1, control, value);
}

template<typename T1, typename T2, typename T3>
void MidiVariableOut(const T1& var, T2 min, T3 max, unsigned int knob, MidiOut* midiOut)
{
    float val = (var - min) / (max - min);
    val = clamp(val, 0.0f, 1.0f);
    unsigned char outval = (unsigned char)(val * 127);
    midiOut->Send(MIDI_ControlChange + 10, knob, outval);
}

template<typename T1, typename T2, typename T3>
void MidiVariableIn(T1& var, T2 min, T3 max, unsigned int knob, unsigned int controlId, unsigned int noteOrCC, unsigned char value)
{
    if(noteOrCC && knob == controlId)
    {
        float ratio = value / 127.0f;
        var = T1(min + (max - min) * ratio);
    }
}

void MidiBoolOut(const bool& var, unsigned int button, MidiOut* midiOut)
{
    if(var)
        midiOut->Send(MIDI_NoteOn + 10, button, 127);
    else
        midiOut->Send(MIDI_NoteOff + 10, button, 0);
}

void MidiBoolIn(bool& var, unsigned int button, unsigned int controlId, unsigned int noteOrCC, unsigned char value)
{
    if(!noteOrCC && button == controlId)
    {
        var = value > 0;
    }
}

#define MIDIVAR(var, min, max, knob) inout ? MidiVariableOut(var, min, max, knob, _midiOut) : MidiVariableIn(var, min, max, knob, controlId, noteOrCC, value)
#define MIDIBOOL(var, button) inout ? MidiBoolOut(var, button, _midiOut) : MidiBoolIn(var, button, controlId, noteOrCC, value)

void GameStage::MidiVars(unsigned int inout, unsigned int noteOrCC, unsigned int controlId, unsigned int value)
{
    if(!_midiOut)
        return;

    MIDIVAR(_fogTopColor.x, 0.0f, 1.0f, 1);
    MIDIVAR(_fogTopColor.y, 0.0f, 1.0f, 2);
    MIDIVAR(_fogTopColor.z, 0.0f, 1.0f, 3);
    MIDIVAR(_fogBottomColor.x, 0.0f, 1.0f, 4);
    MIDIVAR(_fogBottomColor.y, 0.0f, 1.0f, 5);
    MIDIVAR(_fogBottomColor.z, 0.0f, 1.0f, 6);
    MIDIVAR(CoCScale, 1.0f, 16.0f, 8);
    MIDIBOOL(MUSIC, 8);
    MIDIBOOL(RenderEnvironment, 9);
    MIDIBOOL(DepthOfField, 10);
}
```

Probably not going to win any awards for that, but it does the trick. MidiVars does double duty as an event responder and a full data uploader. MidiSyncOut just includes the PC messages to make sure both layers are fully updated. Although I haven’t done it here, it would be very easy to data-drive this, attach variables to MIDI controls from the dev console, etc.

Once everything’s wired up, you have a completely independent physical display of whatever game values you want, ready to tweak and adjust to your heart’s content at a moment’s notice. If any of you have particularly technically minded designers/artists, they’re probably already using MIDI controllers along with independent glue apps that map them to keyboard keys. Why not cut out the middleman and have explicit engine/tool support?

View the full article

## Bandit's Shark Showdown - Can Video Games Help Stroke Victims?

It's been a long time since I've posted, but with good reason. For the last couple months, I've been busy building our newest game: Bandit's Shark Showdown!
Released as a launch title for Apple TV and now out on iOS, the game represents the latest generation of our animation/physics tech, and will power the next round of stroke rehabilitation therapies that we're developing. Maybe you don't know what I'm talking about - that's okay, because the New Yorker published a feature on us this week for their tech issue: Can Video Games Help Stroke Victims? And here's the game on the iTunes App Store: Bandit's Shark Showdown! Our Apple TV trailer:

## Sony A77 Mark II: EVF Lag and Blackout Test

I’m planning to review this camera properly at some point, but for the time being, I wanted to do a simple test of what the parameters of EVF lag and blackout are.

Let’s talk about lag first. What do we mean? The A77 II uses an electronic viewfinder, which means that the viewfinder is a tiny LCD panel showing a feed of what the imaging sensor currently sees. This view takes camera exposure and white balance into account, allowing you to get a feel for what the camera is actually going to record when the shutter fires. However, downloading and processing the sensor data, and then showing it on the LCD, takes time. Your shutter timing needs to compensate for this lag; if you hit the shutter at the exact moment an event occurs on screen, the lag is how late you will actually fire the shutter as a result.

How do we test the lag? Well, the A77 II’s rear screen shows exactly the same display as the viewfinder, presumably with very similar lag. So all we have to do is point the camera at an external timer, and photograph both the camera and the timer simultaneously. And so that’s exactly what I did. Note that I didn’t test whether any particular camera settings affected the results. The settings are pretty close to defaults. “Live View Display” is set to “Setting Effect ON”.
These are the values I got, across 6 shots, in milliseconds:

32, 16, 17, 34, 17, 33 = 24.8 ms average

I discarded a few values due to illegible screen (mid transition), but you get the picture. The rear LCD, and my monitor, are running at a 60 Hz refresh rate, which means that a new value appears on screen every ~16.67 ms. The lag wobbles between one and two frames, but this is mostly due to the desynchronization of the two screens' refresh intervals. It’s not actually possible to measure any finer using this method, unfortunately. However, the average gives us a good ballpark value of effectively 25 ms. Consider that a typical computer LCD screen is already going to be in the 16 ms range for lag, and TVs are frequently running in excess of 50 ms. This is skirting the bottom of what the fastest humans (pro gamers etc) can detect. Sony’s done a very admirable job of getting the lag under control here.

Next up: EVF blackout. What is it? Running the viewfinder is essentially a continuous video processing job for the camera, using the sensor feed. In order to take a photo, the video feed needs to be stopped, the sensor needs to be blanked, the exposure needs to be taken, the shutter needs to be closed, the image downloaded off the sensor into memory, then the shutter must open again and the video feed must be resumed. The view of the camera goes black during this entire process, which can take quite a long time.

To test this, I simply took a video of the camera while clicking off a few shots (1/60 shutter) in single shot mode. Here’s a GIFed version at 20 fps. By stepping through the video, I can see how long the screen is black. These are the numbers I got, counted in 60 Hz video frames:

17, 16, 16, 17, 16, 16 = 272 ms average

The results here are very consistent; we’ll call it a 0.27 second blackout time. For comparison, Canon claims that the mirror blackout on the Canon 7D is 0.055 seconds, so this represents a substantial difference between the two cameras.
It also seems to be somewhat worse than my Panasonic GH4, another EVF-based camera, although I haven’t measured it. I think this is an area where Sony needs to do a bit more work, and I would love to see a firmware update to try and get this down at least under 200 ms.

It’s worth noting that the camera behaves differently in burst mode, exhibiting the infamous “slideshow” effect. At either the 8 or 12 fps setting, the screen shows the shot just taken rather than a live feed. This quantization makes “blackout time” slightly meaningless, but it can present a challenge when tracking erratically moving subjects.

View the full article

## Time Capsule Draft: “Speculating About Xbox Next”

I was digging through my Ventspace post drafts, and I found this writeup that I apparently decided not to post. It was written in March of 2012, a full year and a half before the Xbox One arrived in the market. In retrospect, I’m apparently awesome. On the one hand, I wish I’d posted this up at the time, because it’s eerily accurate. On the other hand, the guesses are actually accurate enough that this might have looked to Microsoft like a leak, rather than speculation. Oh well. Here it is for your amusement. I haven’t touched a thing about it.

I’ve been hearing a lot of rumors, though the credibility of any given information is always suspect. I have some supposed info about the specs on the next Xbox, but I’m not drawing on any of that info here. I’m dubious about at least some of the things I heard, and it’s not good to spill that kind of info if you’re trying to maintain a vaguely positive relationship with a company anyway. So what I’m presenting here is strictly speculation based on extrapolation of what we’ve seen in the past and overall industry and Microsoft trends. I’m also assuming that MS is fairly easy to read and that they’re unlikely to come out of left field here.

8 GB shared memory. The original Xbox had 64 MB of shared memory. The Xbox 360 has 512, a jump of 8x.
This generation is dragging along a little longer, and memory prices have dropped violently in the last year or so. I would like to see 16 GB actually, but the consoles always screw us on memory and I just don’t think we’ll be that lucky. 4 GB is clearly too low; they’d be insane to ship a console with that now. As for the memory type, we’re probably talking simple (G)DDR3 shared modules. The Xboxes have always been shared memory and there’s no reason for them to change that now. Expect some weird addressing limitations on the GPU side.

Windows 8 kernel. All indications are that the WinCE embedded kernel is being retired over the next two years (at least for internal use). There’s a substantial tech investment in Windows 8, and I think we’re going to see the desktop kernel roll out across all three screens. (HINT HINT.) iOS and Android are both running stripped desktop kernels, and the resources in current mobile platforms make WinXP’s minimum hardware requirements look comically low. There is no reason to carry the embedded kernel along any longer. I wouldn’t want to be a CE licensee right now.

x86-64, 8×2 threads, out-of-order CPU. There are three plausible CPU architectures to choose from: x86, ARM, and PowerPC. Remember what I said about the Windows 8 kernel? There’s no Windows 8 PPC build, and we’re not going to see PowerPC again here. ARM is of course a big focus right now, but the design parameters of the current chips simply won’t accommodate a console. They’re not fast enough and that can’t be easily revised. That pretty much leaves us with x86. The only extant in-order x86 architecture is Intel Atom, which sucks. I think they’ll get out-of-order for free from the existing architectures. As far as the CPU, 8-core is essentially the top of the market right now, and I’m assuming they’ll hyperthread it. They’ll probably steal a core away from the OS, and I wouldn’t be surprised if they disable another core for yield purposes.
That means six HT cores, which is a simple doubling of the current Xbox. I have a rumored clock speed, but have decided not to share. Think lower rather than higher.

DirectX 11 GPU — AMD? DX11 class should be blatantly obvious. I have reason to believe that AMD is the supplier, and I did hear a specific arch but I don’t believe it. There’s no word in NVIDIA land about a potential contract, either. No idea if they’re giving the design ownership to MS again or anything like that; all I know is the arrows are all pointed the same way. There are some implications for the CPU here.

Wifi N and Gigabit ethernet. This is boring standard consumer networking hardware. No surprises here.

Optical drive? — I don’t think they want to have one. I do think they have to have one, though you can definitely expect a stronger push towards digital distribution than ever. There’s no choice but to support Blu-ray at this point. Top tier games simply need the space. I suspect that we’ll see a very large (laptop grade) hard drive included in at least some models. Half a terabyte large, with larger sizes later in the lifecycle. That is purely a guess, though.

AMD Fusion APU? — I’m going to outlandishly suggest that a Fusion APU could be the heart of this console. With an x86 CPU and a mainstream Radeon core in about the right generation, the existing Fusion product could be retooled for use in a console. Why not? It already has the basic properties you want in a console chip. The big sticking points are performance and heat. It’s easy to solve either one but not both at once, and we all know what happened last time Microsoft pushed the heat envelope too far. If it is Fusion architecture, I would be shocked if they were to actually integrate the CPU and GPU dies.

Kinect. — Here’s another outlandish one: Every Xbox Next will include a Kinect (2?), in the box.
Kinect has been an enormous winner for Microsoft so far on every single front, and this is where they’re going to draw the battle lines against Nintendo and Sony. Nintendo’s control scheme is now boring to the general public, with the Wii U being introduced to a resounding “meh”. PS Move faded into irrelevance the day it was launched. For the first time in many years, the Xbox is becoming the casual gamers’ console and they’re going to hammer that advantage relentlessly. Microsoft is also pushing use of secondary features (eg microphone) for hardcore games — see Mass Effect 3.

$500. Yes, it’s high, although not very high once you adjust for inflation. The Xbox 360 is an extremely capable device, especially for the not-so-serious crowd. It’s also pure profit for Microsoft, and really hitting its stride now as the general public’s long tail console. There’s no need to price its successor aggressively, and the stuff I just described is rather expensive besides. A $600 package option at launch would not be surprising.

November 2013. As with the last two Xboxes, it will be launched for the holiday season. Some people were saying it would be announced this year, but the more I think about it, the less it makes sense to do so. There’s no way it’s launching this year, and they’re not going to announce it a year and some ahead of time. E3 2013 will probably be the real fun.

There are some problems with the specs I’ve listed so far. AMD doesn’t produce the CPU I described. Not that the rumors match any other known CPU, but Intel is closer. I don’t think one of the Phenom X6 designs is a credible choice. The Xbox 360 CPU didn’t match any existing chips either, so this may not really be a problem. The total package price would have to be quite high with a Kinect 2 included. The Xbox 360 may function as a useful buffer against being priced out of the market.
View the full article

## Quick tip: Retina mode in iOS OpenGL rendering is not all-or-nothing

Some of you are probably working on Retina support and performance for your OpenGL based game for iOS devices. If you’re like us, you’re probably finding that a few of the devices (*cough* iPad 3) don’t quiiite have the GPU horsepower to drive your fancy graphics at retina resolutions. So now you’re stuck with 1x and 4x MSAA, which performs decently well but frankly looks kind of bad. It’s a drastic step down in visual fidelity, especially with all the alpha blend stuff that doesn’t antialias. (Text!)

Well it turns out you don’t have to choose such a drastic step. Here’s the typical enable-retina code you’ll find on StackOverflow or whatever:

```objc
if([[UIScreen mainScreen] respondsToSelector:@selector(scale)] && [[UIScreen mainScreen] scale] == 2)
{
    self.contentScaleFactor = 2.0;
    eaglLayer.contentsScale = 2.0;
}
//some GL setup stuff
...
//get the correct backing framebuffer size
int fbWidth, fbHeight;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &fbWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &fbHeight);
```

The respondsToSelector bit is pretty token nowadays – what was that, iOS 3? But there’s not much to it. Is the screen a 2x scaled screen? Great, set our view to 2x scale also. Boom, retina. Then we ask the GL runtime what we are running at, and set everything up from there. The trouble is it’s a very drastic increase in resolution, and many of the early retina devices don’t have the GPU horsepower to really do nice rendering. The pleasant surprise is, the scale doesn’t have to be 2.0. Running just a tiny bit short on fill?
```objc
if([[UIScreen mainScreen] respondsToSelector:@selector(scale)] && [[UIScreen mainScreen] scale] == 2)
{
    self.contentScaleFactor = 1.8;
    eaglLayer.contentsScale = 1.8;
}
```

Now once you create the render buffers for your game, they’ll appear at 1.8x resolution in each direction, which is very slightly softer than 2.0 but much, much crisper than 1.0. I waited until after I Am Dolphin cleared the Apple App Store approval process, to make sure that they wouldn’t red flag this usage. Now that it’s out, I feel fairly comfortable sharing it. This can also be layered with multisampling (which I’m also doing) to fine tune the look of poly edges that would otherwise give away the trick. I use this technique to get high resolution, high quality sharp rendering at 60 fps across the entire range of Apple devices, from the lowly iPhone 4S, iPod 5, and iPad 3 on up.

View the full article

## I Am Dolphin – Kinect Prototype

I’d hoped to write up a nice post for this, but unfortunately I haven’t had much time lately. Releasing a game, it turns out, is not at all relaxing. Work doesn’t end when you hit that submit button to Apple. In the meantime, I happened to put together a video showing a prototype of the game, running off Kinect control. I thought you all might find it interesting, as it’s a somewhat different control scheme than the touch screen. Personally I think it’s the best version of the experience we’ve made, and we’ve had several (touch screen, mouse, PS Move, Leap, etc). Unlike the touch screen version, you get full 3D directional control. We don’t have to infer your motion intention. This makes a big difference in the feeling of total immersion.

View the full article

## Our New Game: I Am Dolphin

After an incredibly long time of quiet development, our new game, I Am Dolphin, will be available this Thursday, October 9th, on the Apple/iOS App Store. This post will be discussing the background and the game itself; I'm planning to post more technical information about the game and development in the future. This depends somewhat on people reading and commenting - tell me what you want to know about the work and I'm happy to answer as much as I can.

For those of you who may not have followed my career path over time: A close friend and I have spent quite a few years doing R&D with purely physically driven animation. There's plenty of work out there on the subject; ours is not based on any of it and takes a completely different approach. About three years ago, we met a neurologist at the Johns Hopkins Hospital who helped us set up a small research group at Hopkins to study biological motion and create a completely new simulation system from the ground up, based around neurological principles and hands-on study of dolphins at the National Aquarium in Baltimore. Unlike many other physical animation systems, including our own previous work, the new work allows the physical simulation to be controlled as a player character. We also developed a new custom in-house framework, called the Kata Engine, to make the simulation work possible.
One of the goals in developing this controllable simulation was to learn more about human motor control, and specifically to investigate how to apply this technology to recovery from motor impairments such as stroke. National Geographic was kind enough to write some great articles on our motivations and approach: Virtual Dolphin On A Mission, and John Krakauer's Stroke of Genius.

Although the primary application of our work is medical and scientific, we've also spent our spare time to create a game company, Max And Haley LLC, and a purely entertainment focused version of the game. This is the version that will be publicly available in a scant few days. Here is a review of the game by AppUnwrapper:

[quote]I got my hands on the beta version of the game, and it's incredibly impressive and addictive. I spent two hours playing right off the bat without even realizing it, and have put in quite a few more hours since. I just keep wanting to come back to it. iPhones and iPads are the perfect platform for the game, because they allow for close and personal, tactile controls via simple swipes across the screen.[/quote]

I now have three shipped titles to my name; I'd say this is the first one I'm really personally proud of. It's my firm belief that we've created something that is completely unique in the gaming world, without being a gimmick. Every creature is a complete physical simulation. The dolphins you control respond to your swipes, not by playing pre-computed animation sequences but by actually incorporating your inputs into the drive parameters of the underlying simulation. The end result is a game that represents actual motion control, not gesture-recognition based selection of pre-existing motions.

As I said at the beginning of the post, this is mostly a promotional announcement. However, this is meant to be a technical blog, not my promotional mouthpiece. I want to dig in a lot to the actual development and technical aspects of this game.
There's a lot to talk about in the course of developing a game with a three person (2x coder, 1x artist) team, building a complete cross-platform engine from nothing, all in the backdrop of an academic research hospital environment. Then there's the actual development of the simulation, which included a lot of interaction with the dolphins, the trainers, and the Aquarium staff. We did a lot of filming (but no motion capture!) in the course of the development as well; I'm hoping to share some of that footage moving forward. Here's a slightly older trailer - excuse the wrong launch date on this version. We decided to slip the release by two months after this was created - that's worth a story in itself. It is not fully representative of the final product, but our final media isn't quite ready. Note that everything you see in the trailer is real gameplay footage on the iPad. Every last fish, shark, and cetacean is a physical simulation with full AI. ## Our New Game: I Am Dolphin After an incredibly long time of quiet development, our new game, I Am Dolphin, will be available this Thursday, October 9th, on the Apple/iOS App Store. This post will be discussing the background and the game itself; I’m planning to post more technical information about the game and development in the future. This depends somewhat on people reading and commenting – tell me what you want to know about the work and I’m happy to answer as much as I can. For those of you who may not have followed my career path over time: A close friend and I have spent quite a few years doing R&D with purely physically driven animation. There’s plenty of work out there on the subject; ours is not based on any of it and takes a completely different approach. 
About three years ago, we met a neurologist at the Johns Hopkins Hospital who helped us set up a small research group at Hopkins to study biological motion and create a completely new simulation system from the ground up, based around neurological principles and hands-on study of dolphins at the National Aquarium in Baltimore. Unlike many other physical animation systems, including our own previous work, the new work allows the physical simulation to be controlled as a player character. We also developed a new custom in-house framework, called the Kata Engine, to make the simulation work possible. One of the goals in developing this controllable simulation was to learn more about human motor control, and specifically to investigate how to apply this technology to recovery from motor impairments such as stroke. National Geographic was kind enough to write some great articles on our motivations and approach: Virtual Dolphin On A Mission and John Krakauer’s Stroke of Genius. Although the primary application of our work is medical and scientific, we’ve also spent our spare time creating a game company, Max And Haley LLC, and a purely entertainment focused version of the game. This is the version that will be publicly available in a scant few days. Here is a review of the game by AppUnwrapper.[quote]I got my hands on the beta version of the game, and it's incredibly impressive and addictive. I spent two hours playing right off the bat without even realizing it, and have put in quite a few more hours since. I just keep wanting to come back to it. iPhones and iPads are the perfect platform for the game, because they allow for close and personal, tactile controls via simple swipes across the screen.[/quote] I now have three shipped titles to my name; I’d say this is the first one I’m really personally proud of. It’s my firm belief that we’ve created something that is completely unique in the gaming world, without being a gimmick. Every creature is a complete physical simulation. The dolphins you control respond to your swipes, not by playing pre-computed animation sequences but by actually incorporating your inputs into the drive parameters of the underlying simulation. The end result is a game that represents actual motion control, not gesture-recognition based selection of pre-existing motions. As I said at the beginning of the post, this is mostly a promotional announcement.
However, this is meant to be a technical blog, not my promotional mouthpiece. I want to dig into the actual development and technical aspects of this game. There’s a lot to talk about in the course of developing a game with a three person (2x coder, 1x artist) team, building a complete cross-platform engine from nothing, all against the backdrop of an academic research hospital environment. Then there’s the actual development of the simulation, which included a lot of interaction with the dolphins, the trainers, and the Aquarium staff. We did a lot of filming (but no motion capture!) in the course of the development as well; I’m hoping to share some of that footage moving forward. Here’s a slightly older trailer – excuse the wrong launch date on this version. We decided to slip the release by two months after this was created – that’s worth a story in itself. It is not fully representative of the final product, but our final media isn’t quite ready. Note that everything you see in the trailer is real gameplay footage on the iPad. Every last fish, shark, and cetacean is a physical simulation with full AI.
## Game Code Build Times: RAID 0, SSD, or both for the ultimate in speed?
I’ve been in the process of building and testing a new machine using Intel’s new X99 platform. This platform, combined with the new Haswell-E series of CPUs, is the new high end of what Intel is offering in the consumer space. One of the pain points for developers is build time. For our part, we’re building in the general vicinity of 400K LOC of C++ code, some of which is fairly complex — it uses standard library and boost headers, as well as some custom template stuff that is not simple to compile. The worst case is my five-year-old home machine, an i5-750 compiling to a single magnetic drive, which turns in a six minute full rebuild time. Certainly not the biggest project ever, but a pretty good testbed and real production code. I wanted to find out what storage system layout would provide the best results.
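For anyone who wants to reproduce this kind of comparison, a minimal sketch of a rebuild-timing harness. The build command here is a stand-in, not my actual invocation; substitute whatever performs a clean rebuild on your project:

```python
import subprocess
import time

def time_rebuild(cmd, runs=3):
    """Time a full rebuild several times and report the best result.

    `cmd` is a placeholder for whatever performs a clean rebuild
    (e.g. invoking your build tool after deleting intermediates).
    Taking the best of several runs reduces noise from background tasks.
    """
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, shell=True, check=True)
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    # Hypothetical build command; substitute your own.
    print(f"best of 3: {time_rebuild('echo building...'):.2f}s")
```

Running each configuration from the same cleaned state, with the same warm OS file cache policy, is what makes the numbers comparable.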
Traditionally game developers used RAID 0 magnetic arrays for development, but large capacity SSDs have now become common and inexpensive enough to entertain seriously for development use. I tested builds on three different volumes:
- A single Samsung 850 Pro 512 GB (boot)
- A RAID 0 of two Crucial MX100 512 GB
- A RAID 0 of three WD Black 4 TB (7200 rpm)
Both RAID setups were blank. The CPU is an i7-5930K hex-core (12 threads) and I’ve got 32 GB of memory on board. Current pricing for all of these storage configurations is broadly similar. Now then, the results. Will the Samsung drive justify its high price tag? Will the massive bandwidth of two striped SSDs scream past the competitors? Can the huge magnetic drives really compete with the pinnacle of solid state technology? Who will win? Drumroll… They’re all the same. All three configurations run my test build in roughly 45 seconds, the differences between them being largely negligible. In fact it’s the WD Blacks that posted the fastest time at 42s. The obvious takeaway is that all of these setups are past the threshold where something else is the bottleneck. That something in this case is the CPU, and more specifically the overall hardware thread count. Overclocking the CPU from 3.5 GHz to 4.5 GHz did nothing to help. I’ve heard of some studios outfitting their engineers with dual Xeon setups, and it’s not looking so crazy to do so when employee time is on the line. (The potential downside is that the machine starts to stray significantly from what the game will actually run on.) Given the results, and the sizes of modern game projects, I’d recommend using an inexpensive 500 GB SSD for a boot drive (Crucial MX100, Sandisk Ultra II, Samsung 840 EVO), and stocking up on the WD Blacks for data. Case closed. But… as long as we’re here, why don’t we take a look at what these drives are benchmarking at? The 850 Pro is a monster of a drive.
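Before looking at the raw numbers, note that CrystalDiskMark-style throughput figures convert directly to operations per second: divide by the transfer size. A quick sketch, where the sample figure is illustrative rather than a measurement from these specific drives:

```python
def mb_s_to_iops(mb_per_s, block_bytes):
    """Convert a throughput figure (decimal MB/s) to operations per second."""
    return mb_per_s * 1_000_000 / block_bytes

# Illustrative: a 4K random read rate of 0.72 MB/s works out to
# roughly 176 ops/sec, the territory 7200 rpm magnetic drives live in.
print(round(mb_s_to_iops(0.72, 4096)))
```

This is why the 4K random numbers below look so lopsided on a MB/s scale: a few hundred IOPS versus tens of thousands.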
Those striped MX100s might be the real heroes though; ATTO shows them flirting with a full gigabyte per second of sequential transfer. Here are the raw CrystalDiskMark numbers for the Samsung 850 Pro, the 2x Crucial MX100 in RAID 0, and the 3x WD Black 4TB in RAID 0: I don’t claim that these numbers are reliable or representative. I am only posting them to provide a general sense of the performance characteristics involved in each choice. The SSDs decimate the magnetic drive setup for random ops, though the 512 KB values are respectable. I had expected the 4K random read, for which SSDs are known, to have a significant impact on build time, but that clearly isn’t the case. The WDs are able to dispatch 177 of those per second; despite being 33x slower than the 850 Pro, this is still significantly faster than the compiler can keep up with. Even in the best case scenarios, a C++ compiler won’t be able to clear out more than a couple dozen files a second.
## A Pixel Is NOT A Little Square
It’s good to review the fundamentals sometimes. Written in 1995 and often forgotten: A Pixel Is Not A Little Square.
## Neuroscience Meets Games
It's been a long time since I've written anything, so I thought I'd drop off a quick update. I was in San Francisco last week for a very interesting and unusual conference: ESCoNS. It's the first meeting of the Entertainment Software and Cognitive Neurotherapy Society. Talk about a mouthful! The attendance was mostly doctors and research lab staff, though there were people in from Activision, Valve, and a couple more industry representatives. The basic idea is that games can have a big impact on cognitive science and neuroscience, particularly as it applies to therapy. This conference was meant to get together people who were interested in this work, and at over 200 people, attendance was fairly substantial for what seems like a rather niche pursuit.
For comparison's sake, GDC attendance is generally in the vicinity of 20,000 people. The seminal work driving this effort is really the findings by Daphne Bavelier at the University of Rochester. All of the papers are available for download as PDFs, if you are so inclined. I imagine some background in psychology, cognitive science, or neurology is helpful to follow everything that's going on. The basic take-away, though, is that video games can have dramatic and long-lasting positive effects on our cognitive and perceptual abilities. Here's an NPR article that is probably more helpful to follow as a lay-person with no background. One highlight: [quote]Bavelier recruited non-gamers and trained them for a few weeks to play action video games. [...] Bavelier found that their vision remained improved, even without further practice on action video games. "We looked at the effect of playing action games on this visual skill of contrast sensitivity, and we've seen effects that last up to two years."[/quote] Another rather interesting bit: [quote]Brain researcher Jay Pratt, professor of psychology at the University of Toronto, has studied the differences between men and women in their ability to mentally manipulate 3-D figures. This skill is called spatial cognition, and it's an essential mental skill for math and engineering. Typically, Pratt says, women test significantly worse than men on tests of spatial cognition. But Pratt found in his studies that when women who'd had little gaming experience were trained on action video games, the gender difference nearly disappeared.[/quote] As it happens, I've wound up involved in this field as well. I had the good fortune to meet a doctor at the Johns Hopkins medical center/hospital who is interested in doing similar research. The existing work in the field is largely focused on cognition and perception; we'll be studying motor skills.
Probably lots of work with iPads, Kinect, Wii, PS Move, and maybe more exotic control devices as well. There are a lot of potential applications, but one early angle will be helping stroke patients to recover basic motor ability more quickly and more permanently. There's an added component as to why we're doing this research. My team believes that by studying the underlying neurology and psychology that drives (and is driven by) video games, we can actually turn the research around and use it to produce games that are more engaging, more interactive, more addictive, and just more fun. That's our big gambit, and if it pans out we'll be able to apply a more scientific and precise eye to the largely intuitive problem of making a good game. Of course the research is important for its own sake and will hopefully lead to a lot of good results, but I went into games and not neurology for a reason ;)
## Our game Slug Bugs released! FREE!
We've made Slug Bugs free for a limited time. Why wouldn't you try it now? I've been quiet for a very long time now, and this is why. We've just shipped our new game, Slug Bugs! It uses our Ghost binaural audio, which I've teased several times in the past, and is also wicked fun to play. Please check it out! A little later in the week I'll probably discuss the development more.
## Understanding Subversion's Problems
I've used Subversion for a long time, even CVS before that. Recently there's a lot of momentum behind moving away from Subversion to a distributed system, like git or Mercurial. I myself wrote a series of posts on the subject, but I skipped over the reasons WHY you might want to switch away from Subversion. This post is motivated in part by Richard Fine's post, but it's a response to a general trend and not his entry specifically. SVN is a long-time stalwart as version control systems go, created to patch up the idiocies of CVS.
It's a mature, well understood system that has been and continues to be used in a vast variety of production projects, open and closed source, across widely divergent team sizes and workflows. Never mind the hyperbole; SVN is good by practically any real world measure. And like any real world production system, it has a lot of flaws in nearly every respect. A perfect product is a product no one uses, after all. It's important to understand what the flaws are, and in particular I want to discuss them without advocating for any alternative. I don't want to compare to git or explain why it fixes the problems, because that distorts the actual problems and implies that distributed version control is the solution. It can be a solution, but the problems reach a bit beyond that. Committing vs publishing Fundamentally, a commit creates a revision, and a revision is something we want as part of the permanent record of a file. However, a lot of those revisions are not really meant for public consumption. When I'm working on something complex, there are a lot of points where I want to freeze frame without actually telling the world about my work. Subversion understands this perfectly well, and the mechanism for doing so is branches. The caveat is that this always requires server round-trips, which is fine as long as you're in the office with a fast, highly available server, but it fails the moment you're traveling or your connection to the server drops for whatever reason. Subversion cannot queue up revisions locally. It has exactly two reference points: the revision you started with and the working copy. In general though, we are working in high availability environments and making a round trip to the server is not a big deal. Private branches are supposed to be the solution to this problem of work-in-progress revisions.
Do everything you need, with as many revisions as you want, and then merge to trunk. Simple as that! If only merges actually worked. SVN merges are broken Yes, they're broken. Everybody knows merges are broken in Subversion and that they work great in distributed systems. What tends to happen is people gloss over why they're broken. There are essentially two problems in merges: the actual merge process, and the metadata about the merge. Neither works in SVN. The fatal mistake in the merge process is one I didn't fully understand until reading HgInit (several times). Subversion's world revolves around revisions, which are snapshots of the whole project. Merges basically take diffs from the common root and smash the results together. But the merged files didn't magically drop from the sky -- we made a whole series of changes to get them where they are. There's a lot of contextual information in those changes which SVN has completely and utterly forgotten. Not only that, but the new revision it spits out necessarily has to jam a potentially complicated history into a property field, and naturally it doesn't work. For added impact, this context problem shows up without branches if two people happen to make more than trivial unrelated changes to the same trunk file. So not only does the branch approach not work, you get hit by the same bug even if you eschew it entirely! And invariably the reason this shows up is that you don't want to make small changes to trunk. Damned if you do, damned if you don't. Newer version control systems are typically designed around changes rather than revisions. (Critically, this has nothing at all to do with decentralization.) By defining a particular 'version' of a file as a directed graph of changes resulting in a particular result, there's a ton of context about where things came from and how they got there.
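That change-graph idea can be sketched in a few lines. This is a simplified illustration of the concept, not any real system's data model; notice how the identifier falls out as a hash of the change's content plus its ancestry, rather than a tidy sequential number:

```python
import hashlib

class Change:
    """One node in a change graph: a description plus links to its parents."""
    def __init__(self, description, parents=()):
        self.description = description
        self.parents = tuple(parents)
        # The identifier is derived from the change's content and its
        # ancestry, so it cannot be a simple incrementing revision number.
        h = hashlib.sha1(description.encode())
        for p in self.parents:
            h.update(p.id.encode())
        self.id = h.hexdigest()[:10]

    def history(self):
        """Walk all ancestors: the context a snapshot-based merge throws away."""
        seen, stack = [], [self]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.append(node)
                stack.extend(node.parents)
        return [n.description for n in seen]

# Two parallel lines of work from a common root, then a merge node
# that keeps links to both -- nothing about how we got here is lost.
root = Change("initial import")
a = Change("fix physics timestep", [root])
b = Change("tune AI aggression", [root])
merge = Change("merge branches", [a, b])
print(merge.history())
```

A merge node here carries its full ancestry by construction, which is exactly the information SVN tries to squeeze into a mergeinfo property after the fact.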
Unfortunately the complex history tends to make assignment of revision numbers complicated (and in fact impossible in distributed systems), so you are no longer able to point people to r3359 for their bug fix. Instead it's a graph node, probably assigned some arcane unique identifier like a GUID or hash. File system headaches .svn. This stupid little folder is the cause of so many headaches. Essentially it contains all of the metadata from the repository about whatever you synced, including pristine, unmodified versions of the files. But if you forget to copy it (because it's hidden), Subversion suddenly forgets all about what you were doing. You just lost its tracking information, after all. Now you get to do a clean update and a hand merge. Overwrite it by accident, and now Subversion will get confused. And here's the one that gets me every time with externals like boost -- copy folders from a different repository, and all of a sudden Subversion sees folders from something else entirely and will refuse to touch them at all until you go through and nuke the folders by hand. Nope, you were supposed to SVN export it, never mind that the offending files are marked hidden. And of course because there's no understanding of the native file system, move/copy/delete operations are all deeply confusing to Subversion unless it's the one handling those changes. If you're working with an IDE that isn't integrated into source control, you have another headache coming because IDEs are usually built for rearranging files. (In fact I think this is probably the only good reason to install IDE integration.) It's not clear to me if there's any productive way to handle this particular issue, especially cross platform. I can imagine a particular set of rules -- copying or moving files within a working copy does the same to the version control, moving them out is equivalent to delete. (What happens if they come back?)
This tends to suggest integration at the filesystem layer, and our best bet for that is probably a FUSE implementation for the client. FUSE isn't available on Windows, though apparently a similar tool called Dokan is. Its maturity level is unclear. Changelists are missing Okay, this one is straight out of Perforce. There's a client side and a server side to this, and I actually have the client side via my favorite client SmartSVN. The idea on the client is that you group changed files together into changelists, and send them off all at once. It's basically a queued commit you can use to stage. Perforce adds a server side, where pending changelists actually exist on the server, you can see what people are working on (and a description of what they're doing!), and so forth. Subversion has no idea of anything except when files are different from their copies living in the .svn shadow directory, and that's only on the client. If you have a couple different live streams of work, separating them out is a bit of a hassle. Branches are no solution at all, since it isn't always clear upfront what goes in which branch. Changelists are much more flexible. Locking is useless The point of a lock in version control systems is to signal that it's not safe to change a file. The most common use is for binary files that can't be merged, but there are other useful situations too. Here's the catch: Subversion checks locks when you attempt to commit. That's how it has to work. In other words, by the time you find out there's a lock on a file, you've already gone and started working on it, unless you obsessively check repository status for files. There's also no way to know if you're putting a lock on a file somebody has pending edits to. The long and short of it is if you're going to use a server, really use it. Perforce does. There's no need to have the drawbacks of both centralized and distributed systems at once. I think that's everything that bothers me about Subversion. 
What about you? ## I am at GDC I arrive in SFO for GDC tonight, give me a shout if you'd like to meet! ## Evaluation: Git Late copy of a Ventspace post. Last time I talked about Mercurial, and was generally disappointed with it. I also evaluated Git, another major distributed version control system (DVCS). Short Review: Quirky, but a promising winner. Git, like Mercurial, was spawned as a result of the Linux-BitKeeper feud. It was written initially by Linus Torvalds, apparently during quite a lull in Linux development. It is for obvious reasons a very Linux focused tool, and I'd heard that performance is poor on Windows. I was not optimistic about it being usable on Windows. Installation actually went very smoothly. Git for Windows is basically powered by MSYS, the same Unix tools port that accompanies the Windows GCC port called MinGW. The installer is neat and sets up everything for you. It even offers a set of shell extensions that provide a graphical interface. Note that I opted not to install this interface, and I have no idea what it's like. A friend tells me it's awful. Once the installer is done, git is ready to go. It's added to PATH and you can start cloning things right off the bat. Command line usage is simple and straightforward, and there's even a 'config' option that lets you set things up nicely without having to figure out what config file you want and where it lives. It's still a bit annoying, but I like it a lot better than Mercurial. I've heard some people complain about git being composed of dozens of binaries, but I haven't seen this on either my Windows or Linux boxes. I suspect this is a complaint about old versions, where each git command was its own binary (git-commit, git-clone, git-svn, etc), but that's long since been retired. Most of the installed binaries are just the MSYS ports of core Unix programs like ls. I was also thrilled with the git-svn integration. 
Unlike with Mercurial, the support is built in and flat out works with no drama whatsoever. I didn't try committing back into the Subversion repository from git, but apparently there is fabulous two-way support. Creating a git repository was simple enough, but it can be time-consuming, since git-svn replays every single check-in from Subversion into git. I tested on a small repository with only about 120 revisions, which took maybe two minutes. This is where I have to admit I have another motive for choosing Git. My favorite VCS frontend comes in a version called SmartGit. It's a stand-alone (not shell integrated) client that is free for non-commercial use and works really well. It even handled SSH beautifully, which I'm thankful for. It's still beta technically, but I haven't noticed any problems. Now the rough stuff. I already mentioned that Git for Windows comes with a GUI that is apparently not good. What I discovered is that getting git to authenticate from Windows is fairly awful. In Subversion, you actually configure users and passwords explicitly in a plain-text file. Git doesn't support anything of the sort; their 'git-daemon' server allows fully anonymous pulls and can be configured for anonymous-push only. Authentication is entirely dependent on the filesystem permissions and the users configured on the server (barring workarounds), which means that most of the time, authenticated Git transactions happen inside SSH sessions. If you want to do something else, it's complicated at best. Oh, and good luck with HTTP integration if you choose a web server other than Apache. I have to imagine running a Windows based git server is difficult. Let me tell you about SSH on Windows. It can be unpleasant. Most people use PuTTY (which is very nice), and if you use a server with public key authentication, you'll end up using a program called Pageant that provides that service to various applications.
Pageant doesn't use OpenSSH compatible keys, so you have to convert the keys over, and watch out because the current stable version of Pageant won't do RSA keys. Git in turn depends on a third program called Plink, which exists to help programs talk to Pageant, and it finds that program via the GIT_SSH environment variable. The long and short of it is that getting Git to log into a public key auth SSH server is quite painful. I discovered that SmartGit simply reads OpenSSH keys and connects without any complications, so I rely on it for transactions with our main server. I am planning to transition over to git soon, because I think that the workflow of a DVCS is better overall. It's really clear, though, that these are raw tools compared to the much more established and stable Subversion. It's also a little more complicated to understand; whether you're using git, Mercurial, or something else it's valuable to read the free ebooks that explain how to work with them. There are all kinds of quirks in these tools. Git, for example, uses a 'staging area' that clones your files for commit, and if you're not careful you can wind up committing older versions of your files than what's on disk. I don't know why -- seems like the opposite extreme from Mercurial. It's because of these types of issues that I favor choosing the version control system with the most momentum behind it. Git and Mercurial aren't the only two DVCS out there; Bazaar, monotone, and many more are available. But these tools already have rough (and sharp!) edges, and by sticking to the popular ones you are likely to get the most community support. Both Git and Mercurial have full blown books dedicated to them that are available electronically for free. My advice is that you read them. ## Evaluation: Mercurial Copy of a Ventspace post. I've been a long time Subversion user, and I'm very comfortable with its quirks and limitations. 
It's an example of a centralized version control system (CVCS), which is very easy to understand. However, there's been a lot of talk lately about distributed version control systems (DVCS), of which there are two well-known examples: git and Mercurial. I've spent a moderate amount of time evaluating both, and I decided to post my thoughts. This entry is about Mercurial. Short review: A half-baked, annoying system. I started with Mercurial, because I'd heard anecdotally that it's more Windows-friendly and generally nicer to work with than git. I was additionally spurred by reading the first chapter of HgInit, an e-book by Joel Spolsky of 'Joel on Software' fame. Say what you will about Joel -- it's a concise and coherent explanation of why distributed version control is, in a general sense, preferable to centralized. Armed with that knowledge, I began looking at what's involved in transitioning from Subversion to Mercurial. Installation was smooth. Mercurial's site has a Windows installer ready to go that sets everything up beautifully. Configuration, however, was unpleasant. The Mercurial guide starts with this as your very first step: Yes, because what I've always wanted from my VCS is for it to be a hassle every time I move to a new machine. Setting up extensions is similarly a pain in the neck. More on that in a moment. Basically Mercurial's configurations are a headache. Then there's the actual VCS. You see, I have one gigantic problem with Mercurial, and it's summed up by Joel: This is an incredibly awkward design decision. The basic idea, I guess, is that somebody got really frustrated about forgetting to check in changes and decided this was the solution. My take is that this is a stupid restriction that makes development unpleasant. When I'm working on something, I usually have several related projects in a repository. (Mercurial fans freely admit this is a bad way to work with it.)
Within each project, I usually wind up making a few sets of parallel changes. These changes are independent and shouldn't be part of the same check-in. The idea with Mercurial is, I think, that you simply produce new branches every time you do something like this, and then merge back together. Should be no problem, since branching is such a trivial operation in Mercurial. So now I have to stop and think about whether I should be branching every time I make a tweak somewhere? Oh but wait, how about the extension mechanism? I should be able to patch in whatever behavior I need, and surely this is something that bothers other people! As it turns out, that's definitely the case. Apart from the branching suggestions, there are not one but half a dozen extensions to handle this problem, all of which have their own quirks and pretty much all of which involve jumping back into the VCS frequently. This is apparently a problem the Mercurial developers are still puzzling over. Actually there is one tool that's solved this the way you would expect: TortoiseHg. Which is great, save two problems. Number one, I want my VCS features to be available from the command line and front-end both. Two, I really dislike Tortoise. Alternative Mercurial frontends are both trash and an unbelievable pain to set up. If you're working with Mercurial, TortoiseHg and command line are really your only sane options. It comes down to one thing: workflow. With Mercurial, I have to be constantly conscious about whether I'm in the right branch, doing the right thing. Should I be shelving these changes? Do they go together or not? How many branches should I maintain privately? Ugh. Apart from all that, I ran into one serious show stopper. Part of this test includes migrating my existing Subversion repository, and Mercurial includes a convenient extension for it. Wait, did I say convenient?
I meant borderline functional: The silver lining is there are apparently third party tools to handle this that are far better, but at this point Mercurial has tallied up a lot of irritations and I'm ready to move on. Spoiler: I'm transitioning to git. I'll go into all the gory details in my next post, but I found git to be vastly better to work with.
## BioReplicant Crowd Simulation
Been burning the midnight oil on this for a couple weeks. What do you think? YouTube link Vimeo link (looks nicer) Also can't help but notice that YouTube HD's encode quality is awful.
## NHibernate Is Pretty Cool
Hit Ventspace to read my latest rants. My last tech post was heavily negative, so today I'm going to try and be more positive. I've been working with a library called NHibernate, which is itself a port of a Java library called Hibernate. These are very mature, long-standing object relational mapping systems that I've started exploring lately. Let's recap. Most high end storage requirements, and nearly all web site storage, are handled using relational database management systems, RDBMS for short. These things were developed starting in 1970, along with the now ubiquitous SQL language for working with them. The main SQL standard was laid down in 1992, though most vendors provide various extensions for their specific systems. Ignoring some recent developments, SQL is the gold standard for handling relational database systems. When I set out to build SlimTune, one of the ideas I had was to eschew the fairly crude approach that most performance tools take with storage and build it around a fully relational database. I bet that I could make it work fast enough to be usable for profiling, and simultaneously more expressive and flexible. The ability to view the profile live as it evolves is derived directly from this design choice. Generally speaking I'm really happy with how it turned out, but there was one mistake I didn't understand at the time. SQL is garbage.
(Damnit, I'm being negative again.) I am not bad at SQL, I don't think. I know for certain that I am not good at SQL, but I can write reasonably complex queries and I'm fairly well versed in the theory behind relational databases. The disturbing part is that SQL is very inconsistent across database systems. The standard is missing a lot of useful functionality -- string concatenation, result pagination, etc -- and when you're using embedded databases like SQLite or SQL Server Compact, various pieces of the language are just plain missing. Databases also have more subtle expectations about what operations may or may not be allowed, how joins are set up, and even syntactical details about how to refer to tables and so on. SQL is immensely powerful if you can choose to only support a limited subset of database engines, or if your query needs are relatively simple. Tune started running into problems almost immediately. The visualizers in the released version are using a very careful balance of the SQL subset that works just so on the two embedded engines that are in there. It's not really a livable development model, especially as the number of visualizers and database engines increases. I needed something that would let me handle databases in a more implementation-agnostic way. After some research it became clear that what I needed was an object/relational mapper, or ORM. Now an ORM does not exist to make databases consistent; that's mostly a side effect of what they actually do, which is to hide the database system entirely. ORMs are actually the most popular form of persistence layers. A persistence layer exists to allow you to convert "transient" data living in your code to "persistent" data living in a data store, and back again. Most code is object oriented and most data stores are relational, hence the popularity of object/relational mapping.
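NHibernate itself is a .NET library, but the core transient-to-persistent idea can be sketched in a few lines of Python against SQLite. This toy mapper (the Player class and player table are made up for illustration) hides the SQL behind save and load calls; real ORMs layer identity maps, caching, lazy loading, and change tracking on top of the same basic mapping:

```python
import sqlite3

class Player:
    """A transient object living in code."""
    def __init__(self, name, score):
        self.name = name
        self.score = score

class Mapper:
    """Toy persistence layer: callers never see the SQL or the schema."""
    def __init__(self, conn):
        self.conn = conn
        conn.execute(
            "CREATE TABLE IF NOT EXISTS player "
            "(id INTEGER PRIMARY KEY, name TEXT, score INTEGER)"
        )

    def save(self, obj):
        # Object -> row.
        cur = self.conn.execute(
            "INSERT INTO player (name, score) VALUES (?, ?)",
            (obj.name, obj.score),
        )
        return cur.lastrowid

    def load(self, rowid):
        # Row -> object.
        name, score = self.conn.execute(
            "SELECT name, score FROM player WHERE id = ?", (rowid,)
        ).fetchone()
        return Player(name, score)

conn = sqlite3.connect(":memory:")
m = Mapper(conn)
pid = m.save(Player("Haley", 9000))
print(m.load(pid).name)
```

Swap SQLite for another engine inside the mapper and the calling code doesn't change; that's the implementation-agnosticism, with the dialect differences absorbed as a side effect.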
After some reading, I picked NHibernate as my ORM of choice, augmented by Fluent mapping to get away from the XML mess that NH normally uses. It's gone really well so far, but over the course of all this I've learned that it's very important to understand one thing about persistence frameworks: they are not particularly generalized tools, by design. Every framework, NH included, has very specific ideas about how the world ought to work. They tend to offer various degrees of customization, but you're expected to adhere to a specific model, and straying too far from that model will result in pain. Persistence frameworks are simple and effective tools, but they sacrifice both performance and flexibility to get there. (Contrast with SQL, which is fast and flexible but a PITA to use.) Composite keys? Evil! Dynamic table names? No way!

I found that NHibernate was amongst the best when it came to allowing me to bend the rules -- or flat out break them. Even so, SlimTune is a blend of NH and native database code, falling back to RDBMS-specific techniques in areas that are performance sensitive or outside the ORM's world-view. For example, I use database-specific SQL queries to clone tables for snapshots. That's not something you can do in NH, because the table itself is an implementation detail. I also use database-specific techniques to perform high-volume database work, as NH is explicitly meant for OLTP and not major bulk operations.

Despite all the quirks, I've been really pleased with NHibernate. It's solved some major design problems in a relatively straightforward fashion, despite the somewhat awkward learning curve and lots of bizarre problem solving due to my habit of using a relational database as a relational database. It provides a query language that is largely consistent across databases, and very effective tools for building queries dynamically without error-prone string processing.
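NHibernate itself is a .NET library, but the core contract of any persistence layer -- objects in, rows out, and back again -- is small enough to sketch in a toy. This is pure illustration, not how NH works internally (real frameworks add identity maps, lazy loading, and dialect translation), and every class and column name here is invented:

```python
# Toy persistence layer: map a dataclass to a table once, then move
# objects between memory ("transient") and the database ("persistent").
import sqlite3
from dataclasses import dataclass, fields, astuple

@dataclass
class Function:
    id: int
    name: str
    hit_count: int

class Mapper:
    def __init__(self, conn, cls):
        self.conn, self.cls = conn, cls
        cols = ", ".join(f.name for f in fields(cls))
        conn.execute(f"CREATE TABLE {cls.__name__} ({cols})")

    def save(self, obj):  # transient object -> persistent row
        marks = ", ".join("?" for _ in fields(self.cls))
        self.conn.execute(
            f"INSERT INTO {self.cls.__name__} VALUES ({marks})", astuple(obj))

    def load_all(self):   # persistent rows -> transient objects
        rows = self.conn.execute(f"SELECT * FROM {self.cls.__name__}")
        return [self.cls(*row) for row in rows]

conn = sqlite3.connect(":memory:")
mapper = Mapper(conn, Function)
mapper.save(Function(1, "Update", 42))
print(mapper.load_all())  # [Function(id=1, name='Update', hit_count=42)]
```

The calling code never writes SQL, which is exactly why the underlying engine stops mattering -- and also why dynamic table names and snapshot cloning fall outside the model.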
Most importantly, it makes writing visualizers for SlimTune, and work all around, much smoother, and that means more features more quickly. So yeah, I like NHibernate. That said, I also like this rant. Positive thinking!

## Windows Installer is Terrible

As usual, this was copied from Ventspace. I find Windows Installer to be truly baffling. It's as close to the heart of Windows as any developer tool gets. It is technology that literally every single Windows user interacts with, frequently. I believe practically every single team at Microsoft works with it, and even major applications like Office, Visual Studio, and Windows Update use it. So I don't understand: why is Installer such a poorly designed, difficult to use, and generally infuriating piece of software?

Let's recap on the subject of installers. An installer technology should facilitate two basic tasks. One, it should allow a developer to smoothly install their application onto any compatible system, exposing a UI that is consistent across every installation. Two, it should allow the user to completely reverse (almost) any installation at will, in a straightforward and, again, consistent fashion. Windows, Mac OS X, and Linux take three very different approaches to this problem, with OS X being almost indisputably the most sane. Linux is fairly psychotic under the hood, but the idea of a centralized package repository (almost like an "app store" of some kind) is fairly compelling, and the dominant implementations are excellent.

And then we have Windows. The modern, recommended approach is to use MSI-based setup files, which are basically embedded databases and show a mostly similar UI. And then there's InstallShield, NSIS, InnoSetup, and half a dozen other installer technologies that are all in common use. Do you know why that is? It's because Windows Installer is junk.

Let us start with the problem of consistency.
This is our very nice, standard-looking SlimDX SDK installation package:

And this is what it looks like if you use Visual Studio to create your installer:

Random mix of fonts? Check. Altered dialog proportions for no reason? Check. Inane question that makes no sense to most users? Epic check. Hilariously amateur-looking default clip art? Of course.

Okay, so maybe you don't think the difference is that big. Microsoft was never Apple, after all. But how many of those childish-looking VS-based installers do you see on a regular basis? Not very many. That's because the installer creation built into Visual Studio, Microsoft's premier, idol-of-the-industry development tool, is utter garbage. Not only is its UI awful, it fails to expose most of the useful things MSI can actually do, or that most developers want to do. Even the traditionally expected "visual" dialog editor is half-baked; it never quite made it into the oven. You just get a series of bad templates with static properties.

Microsoft also provides an MSI editor, which looks like this:

Wow! I've always wanted to build databases by hand from scratch. Why not just integrate the functionality into Access?

In fact, Microsoft is now using external tools to build installers. Office 2007's installer is written using the open source WiX toolset. Our installer is built using WiX too, and it's an unpleasant but workable experience. WiX essentially translates the MSI database schema verbatim into an XML schema, and automates some of the details like generating unique IDs. It's pretty much the only decent tool for creating MSI files of any significant complexity, especially if buying InstallShield is just too embarrassing (or too expensive, at $600 up to $9500). By the way, Visual Studio 2010 now includes a license for InstallShield Limited Edition. I think that counts as giving up.

Even then, the thing is downright infuriating. You cannot tell it to copy the contents of a folder into an installer; there is literally no facility for doing so. You have to manually replicate the entire folder hierarchy, and every single file, interspersed with explicit, uniquely identified Component blocks, all in XML. And all of those components have to be explicitly referenced by Feature blocks. SlimDX now ships a self-extracting 7-zip archive for the samples, mainly because the complexity of the install script was unmanageable and had to be rebuilt with the help of a half-baked C# tool each release.
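To make the verbosity concrete, here's roughly what shipping just two sample files looks like in WiX 3 syntax. Every directory, Id, and GUID below is invented for illustration:

```xml
<!-- Each file needs its own Component with a unique GUID, and each
     Component must then be re-listed by reference inside a Feature.
     Multiply this by thousands of sample files and the problem is clear. -->
<Directory Id="INSTALLDIR" Name="MyApp">
  <Directory Id="SamplesDir" Name="Samples">
    <Component Id="SampleOneComp" Guid="PUT-GUID-HERE-1111">
      <File Id="SampleOneFile" Source="Samples\SampleOne.cs" />
    </Component>
    <Component Id="SampleTwoComp" Guid="PUT-GUID-HERE-2222">
      <File Id="SampleTwoFile" Source="Samples\SampleTwo.cs" />
    </Component>
  </Directory>
</Directory>

<Feature Id="SamplesFeature" Title="Samples" Level="1">
  <ComponentRef Id="SampleOneComp" />
  <ComponentRef Id="SampleTwoComp" />
</Feature>
```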

Anyone with half a brain might observe at this point that copying a folder from your machine to a user's machine is mostly what an installer does. In terms of software design, it's the first god damned use case.

Even all of that might be okay if it weren't for one critical problem; lots of decent software systems have no competent toolset, after all. Unfortunately, it turns out that the underlying Windows Installer engine is also a piece of junk. The most obvious problem is its poor performance (I have an SSD, four cores, and eight gigabytes of RAM -- what is it doing for so long before installation starts?), but even that can be overlooked. I am talking about one absolutely catastrophic, completely unacceptable design flaw.

Windows Installer cannot handle dependencies.

Let that sink in. Copying a local folder to the user's system is use case number one. Setting up dependencies is, I'm pretty sure, the very next thing on the list. And Windows Installer cannot even begin to contemplate it. You expect your dependencies to be installed via MSI, because it's the standard installer system, and they usually are. Except... Windows Installer can't chain MSIs. It can't run one MSI as a child of another. It can't run one MSI after another. It sure can't conditionally install subcomponents in separate MSIs. Trying to run two MSI installs at once on a single system will fail. (Oh, and MS licensing doesn't even allow you to integrate any of their components directly in DLL form, the way OS X does. Dependencies are MSI or bust.)

The way to set up dependencies is to write your own custom bootstrap installer. Yes, Visual Studio can create the bootstrapper, assuming your dependencies are one of the scant few that are supported. However, we've already established that Visual Studio is an awful choice for any installer-related tasks. In this case, the bootstrapper will vomit out five mandatory files, instead of embedding them in setup.exe. That was fine when software was still on media, but it's ridiculous for web distribution.

Anyway, nearly any interesting software requires a bootstrapper, which has to be pretty much put together from scratch, and there's no guidelines or recommended approaches or anything of the sort. You're on your own. I've tried some of the bootstrap systems out there, and the best choice is actually any competing installer technology -- I use Inno. Yes, the best way to make Windows Installer workable is to actually wrap it in a third party installer. And I wonder how many bootstrappers correctly handle silent/unattended installations, network administrative installs, logging, UAC elevation, patches, repair installs, and all the other crazy stuff that can happen in installer world.
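The core of every one of those homegrown bootstrappers looks roughly the same: invoke msiexec on each prerequisite in sequence, because Windows Installer refuses to run two installs at once. A minimal sketch, with invented file names -- a real bootstrapper would actually run each command with subprocess, treat exit codes 0 and 3010 (success, reboot required) as success, and then deal with logging, UAC elevation, and repair/patch scenarios on top:

```python
# Build the msiexec command line for each prerequisite MSI, forwarding
# the standard silent-install flags when requested. This only constructs
# and prints the commands; executing them is left to the real tool.
PREREQS = ["vcredist_x86.msi", "dotnetfx.msi", "slimdx_sdk.msi"]

def build_command(msi: str, silent: bool = False) -> list[str]:
    cmd = ["msiexec", "/i", msi]
    if silent:
        cmd += ["/qn", "/norestart"]  # no UI, suppress automatic reboot
    return cmd

for msi in PREREQS:
    print(" ".join(build_command(msi, silent=True)))
```

Even this trivial sequencing is something the engine itself simply will not do for you.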

One more thing. The transition to 64 bit actually made everything worse. See, MSIs can be built as 32 bit or 64 bit, and of course 64 bit installers don't work on 32 bit systems. 32 bit installers are capable of installing 64 bit components though, and can be safely cordoned off to exclude those pieces when running on a 32 bit system. Except when they can't. I'm not sure exactly how many cases of this there are, but there's one glaring example -- the Visual C++ 2010 64 bit merge module. (A merge module is like a static library, but for installers.) It can't be included in a 32 bit installer, even though the VC++ 2008 module had no problem. The recommended approach is to build completely separate 32 and 64 bit installers.
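The "cordoning off" is normally done with a condition on the component, keyed off the built-in VersionNT64 property -- a sketch in WiX 3 syntax, with invented Ids and paths:

```xml
<!-- A 32-bit package can carry a 64-bit component and skip it on 32-bit
     systems via a condition. Merge modules like the VC++ 2010 x64 one
     break this scheme, because they cannot be merged into a 32-bit
     package at all. Ids, GUID, and file path here are invented. -->
<Component Id="NativeDll64" Guid="PUT-GUID-HERE-3333">
  <Condition>VersionNT64</Condition>
  <File Id="NativeDll64File" Source="x64\SlimDX.dll" />
</Component>
```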

Let me clarify the implications of that statement. Building two separate installers leaves two choices. One choice is to let the user pick the correct installation package. What percentage of Windows users do you think can even understand the selection they're supposed to make? It's not Linux; the people using the system don't know arcane details like the bit-size of their OS installation. (Which hasn't stopped developers from asking people to choose anyway.) That leaves you one other choice, which is to -- wait for it -- write a bootstrapper.

Alright, now I'm done. Despite all these problems, apparently developers everywhere just accept the status quo as perfectly normal and acceptable. Or maybe there's a "silent majority" not explaining to Microsoft that their entire installer technology, from top to bottom, is completely mind-fucked.

## Selling Middleware

So a few days ago, we published a video demo of our BioReplicant technology. In particular, we published it without saying much. No explanation of how it works, what problems it solves, or how it could be used. That was a very important and carefully calculated decision. I felt it was critical that people be allowed to see our technology without any tinting or leading on our part. Some of the feedback was very positive, some very negative, and a whole lot in between. I'm sure we'll get an immense amount more from GDC, but this initial experience has been critical in understanding what people want and what they think we're offering.

To a large extent, people's expectations do not align with what BioReplicants actually does. Our eventual goal is to meet those expectations, but in the meantime there is a very tricky problem of explaining what our system actually does for them. I think that will continue to be a problem, exacerbated by the fact that on the surface, we seem to be competing with NaturalMotion's Euphoria product, and in fact we've encouraged that misconception.

In truth, it's not the case. We aren't doing anything like what NM does internally, and all we're really doing is trying to solve the same problem every game has to solve. Everybody wants realistic, varied, complex, and reactive animations for their game. Everybody! And frankly, they don't need Euphoria or BioReplicants to do it. There's at least three GDC talks this year on the subject. That's why it's important to step back and look at why middleware even exists.

The rest of this post is at Ventspace. Probably one of my best posts in a long time, actually.