Promit's Ventspace

Evaluation: Mercurial

Posted 07 October 2010 · 623 views

Copy of a Ventspace post.


I've been a long-time Subversion user, and I'm very comfortable with its quirks and limitations. It's an example of a centralized version control system (CVCS), which is very easy to understand. However, there's been a lot of talk lately about distributed version control systems (DVCS), of which there are two well-known examples: git and Mercurial. I've spent a moderate amount of time evaluating both, and I decided to post my thoughts. This entry is about Mercurial.

Short review: A half-baked, annoying system.

I started with Mercurial, because I'd heard anecdotally that it's more Windows friendly and generally nicer to work with than git. I was additionally spurred by reading the first chapter of HgInit, an e-book by Joel Spolsky of 'Joel on Software' fame. Say what you will about Joel -- it's a concise and coherent explanation of why distributed version control is, in a general sense, preferable to centralized. Armed with that knowledge, I began looking at what's involved in transitioning from Subversion to Mercurial.

Installation was smooth. Mercurial's site has a Windows installer ready to go that sets everything up beautifully. Configuration, however, was unpleasant. The Mercurial guide starts with this as your very first step:
As first step, you should teach Mercurial your name. For that you open the file ~/.hgrc with a text-editor and add the ui section (user interaction) with your username:
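
For reference, that step amounts to hand-editing an ini-style file; the name and address below are placeholders:

[ui]
username = John Doe <john@example.com>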

Yes, because what I've always wanted from my VCS is for it to be a hassle every time I move to a new machine. Setting up extensions is similarly a pain in the neck. More on that in a moment. Basically, Mercurial's configuration is a headache.

Then there's the actual VCS. You see, I have one gigantic problem with Mercurial, and it's summed up by Joel:
Whereas, in Mercurial, all commands always apply to the entire tree. If your code is in c:\code, when you issue the hg commit command, you can be in c:\code or in any subdirectory and it has the same effect.
This is an incredibly awkward design decision. The basic idea, I guess, is that somebody got really frustrated about forgetting to check in changes and decided this was the solution. My take is that this is a stupid restriction that makes development unpleasant.
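
To make the difference concrete, here's a sketch assuming a hypothetical c:\code repository containing projectA and projectB, both with modified files:

:: Subversion: commit scope follows the working directory,
:: so this commits only the changes under projectA
cd c:\code\projectA
svn commit -m "tweak projectA"

:: Mercurial: commit scope is always the entire repository,
:: so the same sequence commits the projectB changes too
cd c:\code\projectA
hg commit -m "tweak projectA"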

When I'm working on something, I usually have several related projects in a repository. (Mercurial fans freely admit this is a bad way to work with it.) Within each project, I usually wind up making a few sets of parallel changes. These changes are independent and shouldn't be part of the same check-in. The idea with Mercurial is, I think, that you simply produce new branches every time you do something like this, and then merge back together. Should be no problem, since branching is such a trivial operation in Mercurial.
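
In practice, I believe the intended workflow looks roughly like this (the branch name is hypothetical):

hg branch parser-fixes
:: ...edit away...
hg commit -m "fix the parser"
hg update default
hg merge parser-fixes
hg commit -m "merge parser fixes"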

So now I have to stop and think about whether I should be branching every time I make a tweak somewhere?

Oh, but wait -- what about the extension mechanism? I should be able to patch in whatever behavior I need, and surely this is something that bothers other people! As it turns out, that's definitely the case. Apart from the branching suggestions, there are not one but half a dozen extensions to handle this problem, all of which have their own quirks and pretty much all of which involve jumping back into the VCS frequently. This is apparently a problem the Mercurial developers are still puzzling over.
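
One concrete example: the bundled record extension grafts interactive partial commits onto the all-or-nothing model. You enable it in ~/.hgrc:

[extensions]
record =

Then you run hg record instead of hg commit, and approve or reject each hunk at a prompt. It works, but notice that we're back to editing config files again.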

Actually, there is one tool that's solved this the way you would expect: TortoiseHg. Which is great, save two problems. Number one, I want my VCS features to be available from both the command line and the front-end. Number two, I really dislike Tortoise. The alternative Mercurial frontends are trash, and an unbelievable pain to set up besides. If you're working with Mercurial, TortoiseHg and the command line are really your only sane options.

It comes down to one thing: workflow. With Mercurial, I have to be constantly conscious about whether I'm in the right branch, doing the right thing. Should I be shelving these changes? Do they go together or not? How many branches should I maintain privately? Ugh.

Apart from all that, I ran into one serious showstopper. Part of this test includes migrating my existing Subversion repository, and Mercurial includes a convenient extension for it. Wait, did I say convenient? I meant borderline functional:
Subversion's Python bindings are a prerequisite. The bindings (generated with SWIG) are installed separately on Windows, and can be found on http://subversion.tigris.org/ . Note that you can't do this with the Win32 Mercurial binaries -- there's no way to install the Subversion bindings into its built-in Python library. So you'll need to use a Mercurial installed on top of a stand-alone Python, and you may also need to do something like "set HG=python c:\Python25\Scripts\hg" to override the default Win32 binaries if you have those installed also. For Mac OS X, the easiest way is to install the CollabNet Subversion build, and then copy the content of /opt/subversion/lib/svn-python to the site-package directory of the python installation.
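
For context, the conversion itself is supposed to be a one-liner once the extension is enabled (the repository URL is a placeholder):

[extensions]
convert =

hg convert http://example.com/svn/myproject myproject-hg

It's everything leading up to that one-liner that falls apart on Windows.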

The silver lining is there are apparently third party tools to handle this that are far better, but at this point Mercurial has tallied up a lot of irritations and I'm ready to move on.

Spoiler: I'm transitioning to git. I'll go into all the gory details in my next post, but I found git to be vastly better to work with.


BioReplicant Crowd Simulation

Posted 19 August 2010 · 462 views

Been burning the oil on this for a couple weeks. What do you think?
YouTube link
Vimeo link (looks nicer)

Also can't help but notice that YouTube HD's encode quality is awful.


NHibernate Is Pretty Cool

Posted 22 July 2010 · 411 views

Hit Ventspace to read my latest rants.

My last tech post was heavily negative, so today I'm going to try and be more positive. I've been working with a library called NHibernate, which is itself a port of a Java library called Hibernate. These are very mature, long-standing object relational mapping systems that I've started exploring lately.

Let's recap. Most high-end storage requirements, and nearly all web site storage, are handled using relational database management systems, RDBMS for short. These things were developed starting in 1970, along with the now ubiquitous SQL language for working with them. The main SQL standard was laid down in 1992, though most vendors provide various extensions for their specific systems. Ignoring some recent developments, SQL is the gold standard for handling relational database systems.

When I set out to build SlimTune, one of the ideas I had was to eschew the fairly crude approach that most performance tools take with storage and build it around a fully relational database. I bet that I could make it work fast enough to be usable for profiling, and simultaneously more expressive and flexible. The ability to view the profile live as it evolves is derived directly from this design choice. Generally speaking I'm really happy with how it turned out, but there was one mistake I didn't understand at the time.

SQL is garbage. (Dammit, I'm being negative again.)

I am not bad at SQL, I don't think. I know for certain that I am not good at SQL, but I can write reasonably complex queries and I'm fairly well versed in the theory behind relational databases. The disturbing part is that SQL is very inconsistent across database systems. The standard is missing a lot of useful functionality -- string concatenation, result pagination, etc. -- and when you're using embedded databases like SQLite or SQL Server Compact, various pieces of the language are just plain missing. Databases also have more subtle expectations about what operations may or may not be allowed, how joins are set up, and even syntactical details about how to refer to tables and so on.

SQL is immensely powerful if you can choose to only support a limited subset of database engines, or if your query needs are relatively simple. Tune started running into problems almost immediately. The visualizers in the released version are using a very careful balance of the SQL subset that works just so on the two embedded engines that are in there. It's not really a livable development model, especially as the number of visualizers and database engines increases. I needed something that would let me handle databases in a more implementation-agnostic way.

After some research it became clear that what I needed was an object/relational mapper, or ORM. Now an ORM does not exist to make databases consistent; that's mostly a side effect of what they actually do, which is to hide the database system entirely. ORMs are actually the most popular form of persistence layers. A persistence layer exists to allow you to convert "transient" data living in your code to "persistent" data living in a data store, and back again. Most code is object oriented and most data stores are relational, hence the popularity of object/relational mapping.

After some reading, I picked NHibernate as my ORM of choice, augmented by Fluent mapping to get away from the XML mess that NH normally uses. It's gone really well so far, but over the course of all this I've learned it's very important to understand one thing about persistence frameworks. They are not particularly generalized tools, by design. Every framework, NH included, has very specific ideas about how the world ought to work. They tend to offer various degrees of customization, but you're expected to adhere to a specific model and straying too far from that model will result in pain.
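
To give a flavor of it, here's a minimal sketch of a Fluent mapping; the Call entity and its properties are hypothetical stand-ins rather than Tune's actual schema:

using FluentNHibernate.Mapping;

public class Call
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual long HitCount { get; set; }
}

//maps the Call class to a table, no XML required;
//NH wants the properties virtual so it can proxy them
public class CallMap : ClassMap<Call>
{
    public CallMap()
    {
        Id(x => x.Id);
        Map(x => x.Name);
        Map(x => x.HitCount);
    }
}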

Persistence frameworks are very simple and effective tools, but they sacrifice both performance and flexibility to get there. (Contrast to SQL, which is fast and flexible but a PITA to use.) Composite keys? Evil! Dynamic table names? No way! I found that NHibernate was amongst the best when it came to allowing me to bend the rules -- or flat out break them. Even so, Tune is a blend of NH and native database code, falling back to RDBMS-specific techniques in areas that are performance sensitive or outside of the ORM's world-view. For example, I use database-specific SQL queries to clone tables for snapshots. That's not something you can do in NH because the table itself is an implementation detail. I also use database-specific techniques to perform high-volume database work, as NH is explicitly meant for OLTP and not major bulk operations.

Despite all the quirks, I've been really pleased with NHibernate. It's solved some major design problems in a relatively straightforward fashion, despite the somewhat awkward learning curve and lots of bizarre problem solving due to my habit of using a relational database as a relational database. It provides a query language that is largely consistent across databases, and very effective tools for building queries dynamically without error-prone string processing. Most importantly, it makes writing visualizers for Tune all around much smoother, and that means more features more quickly.
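
As an illustration of that dynamic query building, here's a minimal sketch using the criteria API; it assumes an open ISession named session and the hypothetical Call entity from the mapping sketch above:

using NHibernate;
using NHibernate.Criterion;

//find the 50 hottest calls without writing any dialect-specific SQL;
//the paging below becomes LIMIT/OFFSET, TOP, or ROWNUM depending on the dialect
var hotCalls = session.CreateCriteria<Call>()
    .Add(Restrictions.Gt("HitCount", 1000L))
    .AddOrder(Order.Desc("HitCount"))
    .SetFirstResult(0)
    .SetMaxResults(50)
    .List<Call>();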

So yeah, I like NHibernate. That said, I also like this rant. Positive thinking!


Windows Installer is Terrible

Posted 15 July 2010 · 772 views

As usual, this was copied from Ventspace.

I find Windows Installer to be truly baffling. It's as close to the heart of Windows as any developer tool gets. It is technology which literally every single Windows user interacts with, frequently. I believe practically every single team at Microsoft works with it, and that even major applications like Office, Visual Studio, and Windows Update are using it.

So I don't understand. Why is Installer such a poorly designed, difficult to use, and generally infuriating piece of software?

Let's recap on the subject of installers. An installer technology should facilitate two basic tasks. One, it should allow a developer to smoothly install their application onto any compatible system, exposing a UI that is consistent across every installation. Two, it should allow the user to completely reverse (almost) any installation at will, in a straightforward and again consistent fashion. Windows, Mac OSX, and Linux take three very different approaches to this problem, with OSX being almost indisputably the most sane. Linux is fairly psychotic under the hood, but the idea of a centralized package repository (almost like an "app store" of some kind) is fairly compelling and the dominant implementations are excellent.

And then we have Windows. The modern, recommended approach is to use MSI based setup files, which are basically embedded databases and show a mostly similar UI. And then there's InstallShield, NSIS, InnoSetup, and half a dozen other installer technologies that are all in common use. Do you know why that is? It's because Windows Installer is junk.

Let us start with the problem of consistency. This is our very nice, standard-looking SlimDX SDK installation package:

And this is what it looks like if you use Visual Studio to create your installer:

Random mix of fonts? Check. Altered dialog proportions for no reason? Check. Inane question that makes no sense to most users? Epic check. Hilariously amateur looking default clip art? Of course.

Okay, so maybe you don't think the difference is that big. Microsoft was never Apple, after all. But how many of those childish-looking VS-based installers do you see on a regular basis? It's not very many. That's because the installer creation built into Visual Studio, Microsoft's premier development tool and idol of the industry, is utter garbage. Not only is the UI for it awful, it fails to expose most of the useful things MSI can actually do, or that most developers want to do. Even the traditionally expected "visual" half-baked dialog editor never made it into the oven. You just get a series of bad templates with static properties. Microsoft also provides an MSI editor, which looks like this:

Wow! I've always wanted to build databases by hand from scratch. Why not just integrate the functionality into Access?

In fact, Microsoft is now using external tools to build installers. Office 2007's installer is written using the open source WiX toolset. Our installer is built using WiX too, and it's an unpleasant but workable experience. WiX essentially translates the database schema verbatim into an XML schema, and automates some of the details of generating unique IDs etc. It's pretty much the only decent tool for creating MSI files of any significant complexity, especially if buying InstallShield is just too embarrassing (or expensive, $600 up to $9500). By the way, Visual Studio 2010 now includes a license for InstallShield Limited Edition. I think that counts as giving up.

Even then, the thing is downright infuriating. You cannot tell it to copy the contents of a folder into an installer. There is literally no facility for doing so. You have to manually replicate the entire folder hierarchy, and every single file, interspersed with explicit uniquely identified Component blocks, all in XML. And all of those components have to be explicitly referenced in Feature blocks. SlimDX now ships a self-extracting 7-zip archive for the samples mainly because the complexity of the install script was unmanageable, and had to be rebuilt with the help of a half-baked C# tool each release.
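
To illustrate, here's a trimmed sketch of the XML involved -- the IDs, GUID, and paths are placeholders -- and now imagine it repeated for a samples tree with hundreds of files:

<Directory Id="INSTALLDIR" Name="MyApp">
  <Component Id="ReadmeComponent" Guid="PUT-GUID-HERE">
    <File Id="ReadmeFile" Source="docs\Readme.txt" KeyPath="yes" />
  </Component>
  <!-- ...one Component block per file, written out by hand... -->
</Directory>

<Feature Id="MainFeature" Title="Main" Level="1">
  <ComponentRef Id="ReadmeComponent" />
  <!-- ...and one ComponentRef per Component... -->
</Feature>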

Anyone with half a brain might observe at this point that copying a folder on your machine to a user's machine is mostly what an installer does. In terms of software design, it's the first god damned use case.

Even all of that might be okay if it weren't for one critical problem. Lots of decent software systems have no competent toolset. Unfortunately it turns out that the underlying Windows Installer engine is also a piece of junk. The most obvious problem is its poor performance (I have an SSD, four cores, and eight gigabytes of RAM -- what is it doing for so long before installation starts?), but even that can be overlooked. I am talking about one absolutely catastrophic, completely unacceptable design flaw.

Windows Installer cannot handle dependencies.

Let that sink in. Copying a local folder to the user's system is use case number one. Setting up dependencies is, I'm pretty sure, the very next thing on the list. And Windows Installer cannot even begin to contemplate it. You expect your dependencies to be installed via MSI, because it's the standard installer system, and they usually are. Except...Windows Installer can't chain MSIs. It can't run one MSI as a child of another. It can't run one MSI after another. It sure can't conditionally install subcomponents in separate MSIs. Trying to run two MSI installs at once on a single system will fail. (Oh, and MS licensing doesn't even allow you to integrate any of their components directly in DLL form, the way OSX does. Dependencies are MSI or bust.)

The way to set up dependencies is to write your own custom bootstrap installer. Yes, Visual Studio can create the bootstrapper, assuming your dependencies are one of the scant few that are supported. However, we've already established that Visual Studio is an awful choice for any installer-related tasks. In this case, the bootstrapper will vomit out five mandatory files, instead of embedding them in setup.exe. That was fine when software was still on media, but it's ridiculous for web distribution.

Anyway, nearly any interesting software requires a bootstrapper, which has to be pretty much put together from scratch, and there are no guidelines or recommended approaches or anything of the sort. You're on your own. I've tried some of the bootstrap systems out there, and the best choice is actually any competing installer technology -- I use Inno. Yes, the best way to make Windows Installer workable is to actually wrap it in a third party installer. And I wonder how many bootstrappers correctly handle silent/unattended installations, network administrative installs, logging, UAC elevation, patches, repair installs, and all the other crazy stuff that can happen in installer world.
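
For the curious, the Inno approach boils down to a script along these lines (file names and switches are placeholders for whatever your dependencies actually need): carry the payload in [Files], then invoke the MSIs and prerequisite installers from [Run].

[Files]
Source: "vcredist_x86.exe"; DestDir: "{tmp}"
Source: "MyApp.msi"; DestDir: "{tmp}"

[Run]
Filename: "{tmp}\vcredist_x86.exe"; Parameters: "/q"; StatusMsg: "Installing prerequisites..."
Filename: "msiexec.exe"; Parameters: "/i ""{tmp}\MyApp.msi"" /qb"; StatusMsg: "Installing application..."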

One more thing. The transition to 64 bit actually made everything worse. See, MSIs can be built as 32 bit or 64 bit, and of course 64 bit installers don't work on 32 bit systems. 32 bit installers are capable of installing 64 bit components though, and can be safely cordoned off to exclude those pieces when running on a 32 bit system. Except when they can't. I'm not sure exactly how many cases of this there are, but there's one glaring example -- the Visual C++ 2010 64 bit merge module. (A merge module is like a static library, but for installers.) It can't be included in a 32 bit installer, even though the VC++ 2008 module had no problem. The recommended approach is to build completely separate 32 and 64 bit installers.

Let me clarify the implications of that statement. Building two separate installers leaves two choices. One choice is to let the user pick the correct installation package. What percentage of Windows users do you think can even understand the selection they're supposed to make? It's not Linux; the people using the system don't know arcane details like what bit-size their OS installation is. (Which hasn't stopped developers from asking people to choose anyway.) That leaves you one other choice, which is to -- wait for it -- write a bootstrapper.

Alright, now I'm done. Despite all these problems, apparently developers everywhere just accept the status quo as perfectly normal and acceptable. Or maybe there's a "silent majority" not explaining to Microsoft that their entire installer technology, from top to bottom, is completely mind-fucked.


Selling Middleware

Posted 05 March 2010 · 486 views

So a few days ago, we published a video demo of our BioReplicant technology. In particular, we published it without saying much. No explanation of how it works, what problems it solves, or how it could be used. That was a very important and carefully calculated decision. I felt it was critical that people be allowed to see our technology without any tinting or leading on our part. Some of the feedback was very positive, some very negative, and a whole lot in between. I'm sure we'll get an immense amount more from GDC, but this initial experience has been critical in understanding what people want and what they think we're offering.

To a large extent, people's expectations do not align with what BioReplicants actually does. Our eventual goal is to meet those expectations, but in the meantime there is a very tricky problem of explaining what our system actually does for them. I think that will continue to be a problem, exacerbated by the fact that on the surface, we seem to be competing with NaturalMotion's Euphoria product, and in fact we've encouraged that misconception.

In truth, it's not the case. We aren't doing anything like what NM does internally, and all we're really doing is trying to solve the same problem every game has to solve. Everybody wants realistic, varied, complex, and reactive animations for their game. Everybody! And frankly, they don't need Euphoria or BioReplicants to do it. There are at least three GDC talks this year on the subject. That's why it's important to step back and look at why middleware even exists.


The rest of this post is at Ventspace. Probably one of my best posts in a long time, actually.


BioReplicant Keeps Walking

Posted 03 March 2010 · 335 views

Click for High Def video.

This is what we've been working on for the last several months at AR Labs.
Forget falls.
Forget tackles.
BioReplicant keeps walking.
info@actionreactionlabs.com for more information.
-----------------------
BioReplicants is a completely reactive procedural animation system for use in video games. No key framing, motion capture, or precomputed animations were used. Everything you see here was generated in real-time, reacting to human input. Oh, and it's efficient enough to run on an iPhone.

We know he looks crazy. Sure we could've made it realistic, but it's just not that interesting to watch. BioReplicant can keep going even through bone crushing impacts, and we think that's pretty cool.

We'll be showing off the LIVE DEMO at GDC. Catch up with us to try it out!


How to Serialize Interfaces in .NET

Posted 23 February 2010 · 284 views

Late copy from Ventspace.

I'm working on some final touches for SlimTune's next version, and one of them involves persisting the launcher settings between application runs. Launching is handled by an interface ILauncher, which can be set to any number of things via a reflected list of inherited types. A PropertyGrid is used to configure the settings, and all the underlying code ever sees is the interface. SlimTune's a plugin based C# app, and this is all pretty standard.

When it came to persisting this data across sessions, I figured it'd be no big deal -- I'll just serialize the object out to isolated storage, and deserialize it again when I need it. There's one hang-up, though. Serializers (or at least XmlSerializer) can't handle interfaces! Worse still, the so-called solutions I found online were utterly ludicrous. It turns out this is actually an incredibly easy problem to solve, and mainly involves stopping and thinking about what you're doing for about five seconds.

Alright, so we can't serialize an interface, but we can serialize any concrete type. Same goes for the deserialization process. The answer is simple: store the concrete type with the serialized data.

//save the launcher configuration to isolated storage
//(needs System.IO, System.IO.IsolatedStorage, and System.Xml.Serialization)
var isoStore = IsolatedStorageFile.GetUserStoreForApplication();
using(var configFile = new IsolatedStorageFileStream(ConfigFile, FileMode.Create, FileAccess.Write, isoStore))
{
    var launcherType = m_launcher.GetType();
    //write the concrete type so we know what to deserialize
    string launcherTypeName = launcherType.AssemblyQualifiedName;
    var sw = new StreamWriter(configFile);
    sw.WriteLine(launcherTypeName);

    //write the object itself
    var serializer = new XmlSerializer(launcherType);
    serializer.Serialize(sw, m_launcher);
    //make sure everything hits the stream before the file closes
    sw.Flush();
}




We simply ask the interface to give us its real type, and record it to the file before serializing. Okay, so the result won't be a legal XML file, but how often is that actually a problem? Now the deserialize side of the equation:

//try and load a launcher configuration from isolated storage
var isoStore = IsolatedStorageFile.GetUserStoreForApplication();
using(var configFile = new IsolatedStorageFileStream(ConfigFile, FileMode.Open, FileAccess.Read, isoStore))
{
    //read the concrete type to deserialize
    var sr = new StreamReader(configFile);
    var launcherTypeName = sr.ReadLine();
    var launcherType = Type.GetType(launcherTypeName, true);

    //read the actual object
    var serializer = new XmlSerializer(launcherType);
    m_launcher = (ILauncher)serializer.Deserialize(sr);
}




Reversing things, we first read the type that was written to file, and reconstruct the actual concrete type that goes with that string. Then we know exactly what to deserialize, and XmlSerializer is happy to oblige.

Now that wasn't so hard, was it?


ClickOnce Support in SlimDX

Posted 21 February 2010 · 507 views

Man, it's been a long time since I wrote about SlimDX. We've released the February 2010 version, so go ahead and grab that if you're so inclined. This version is mostly bug fixes, for both us and Microsoft. DirectX 11 should be much more usable, although we're still working towards stronger D2D and DWrite implementations. In the meantime, I wanted to discuss a feature that was included too late for documentation to catch up: ClickOnce support.

ClickOnce has actually been on the to-do list for a very long time, but it ended up quite far down the priority list thanks to a lack of user demand. For the February 2010 release we've gone ahead and included it. The installer will set it up for VS 2008 and VS 2010; we've dropped 2005 support across the board so no ClickOnce there. There are a few quirks to setting it up properly though, so I just want to explain what you need to do in order to make sure it works properly.

First of all, make sure you've got a reference to the GAC version of SlimDX in your project (via the .NET tab in Add References). Also check the properties of the SlimDX reference; the default is Copy Local = True and that should be set to False.
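
For reference, that combination shows up in the .csproj as an ordinary reference with Copy Local disabled; a minimal sketch (the full assembly name is elided here):

<Reference Include="SlimDX">
  <Private>False</Private>
</Reference>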

Once you've done that, go into your project's properties and select the Publish tab. This is where you set up all of the ClickOnce options and actually produce a distribution. If you press Application Files, you should see SlimDX.dll listed as Prerequisite (Auto). If it's something else, you've got the previous step wrong.

Next, hit the Prerequisites... button. Somewhere in the listbox, you'll see "SlimDX Runtime (February 2010)". Check the box and press OK.

That's it. Pretty easy, huh? Now when you publish, the SlimDX runtime will be included with your application and run automatically as part of the ClickOnce installation.





