
Promit's Ventspace

Developing a Graphics Driver I

Posted 09 June 2007

Well, I've finished my first week. I want to write about what it's like to work on developing a graphics driver. First, we need to cover some basic architecture. Then I'll talk about the actual process. And remember, I'm on the DirectX driver team, so I'm going to focus on that. (I looked at OpenGL too, but I'm less clear on the architecture.) Not surprisingly, most development is currently focused on Windows Vista. We need to bring the Vista driver up to par with the XP driver, both in terms of correctness and performance.

If you take a D3D application, it's actually sitting on top of several layers. The first layer is the D3D runtime itself. What is underneath that changes somewhat between XP and Vista. In XP, D3D is on top of a kernel mode driver (KMD), which interacts with the HAL and a miniport driver (also kernel). There's a splitting of responsibilities between the various pieces. D3D essentially accumulates and validates information, and then forwards it on to the kernel driver at appropriate times. (Draw calls, present calls, etc.) The miniport driver handles low-level communication details. Everything else is handled by the main kernel driver. The driver is very much on its own here, and has a relatively direct connection to the application.

In Vista, things shifted around and the splitting changed. The main graphics driver no longer lives in the kernel, but rather in user mode as a normal DLL which D3D loads for you. D3D communicates with this layer through a series of calls and callbacks. There's an equivalent, separate layer for OpenGL. Both the OGL and D3D user mode drivers communicate with a common kernel mode layer. The KMD interacts with Vista's backend graphics scaffolding. In particular, Vista takes over on two fronts: resource management and GPU scheduling. In XP and earlier, the driver was solely responsible for deciding how allocations and deallocations were placed and managed, as well as how DMA was handled and when the GPU did what. All of those responsibilities have been taken over by the OS* now; Vista provides the scheduling, virtual memory, and so on. The driver negotiates with Vista over all of that, and it's worth noting that direct pointers are no longer manipulated, because physical addresses are not always available to the driver. (Quite rarely, actually.) Kernel mode switches in this setup happen only when information needs to be communicated down to the kernel layer: when the user mode command buffer is full, when a present happens, or in a couple of other situations that force the KMD to become involved.
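
To make that last point a little more concrete, here's a minimal sketch of the flow in my own made-up terms. None of these names (UserModeContext, SubmitToKernel, and so on) are the real WDDM interfaces or anything we actually ship; they exist only to show when the kernel transition happens.

```cpp
// Hypothetical sketch of a user mode driver's command buffer under the Vista
// model. The only times we cross into the kernel are when the buffer fills
// up or when a present forces a flush.
#include <cassert>
#include <cstddef>
#include <cstring>

struct KernelContext {};  // stand-in for a handle to the kernel mode driver

// Stand-in for handing a filled command buffer down to the KMD (and from
// there to the OS scheduler). In reality this is the expensive transition.
void SubmitToKernel(KernelContext*, const unsigned char*, std::size_t) {}

class UserModeContext {
public:
    explicit UserModeContext(KernelContext* kmd) : m_kmd(kmd), m_used(0) {}

    // Draw calls just append a command packet to the user mode buffer...
    void Draw(const unsigned char* packet, std::size_t bytes) {
        assert(bytes <= kBufferSize);
        if (m_used + bytes > kBufferSize)
            Flush();                     // ...unless it's full: kernel transition
        std::memcpy(m_buffer + m_used, packet, bytes);
        m_used += bytes;
    }

    // A present always forces the accumulated commands down to the kernel.
    void Present() { Flush(); }

private:
    void Flush() {
        if (m_used == 0)
            return;
        SubmitToKernel(m_kmd, m_buffer, m_used);
        m_used = 0;
    }

    static const std::size_t kBufferSize = 64 * 1024;
    KernelContext* m_kmd;
    std::size_t m_used;
    unsigned char m_buffer[kBufferSize];
};
```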

It should be clear by this point that the interactions between the various components are more complex under Vista. It shouldn't be a surprise, of course -- virtual memory, scheduling, etc weren't going to come for free. For the most part, the existing NVIDIA driver architecture seems to have fit in quite well; I assume that the designs of the ATI and NVIDIA drivers were both major factors in the design of the new Windows Display Driver Model (WDDM, formerly LDDM for Longhorn). Admittedly I don't know what things looked like pre-Vista, but right now it all looks fairly sane. The behavior of the driver isn't that different between XP and Vista; the main difference is that under Vista, some of the pieces aren't written by us anymore.

One particularly uncomfortable question is whether or not Vista is inherently slower than XP when it comes to graphics. That's a sticky discussion, and like all performance-related discussions, it's hideously complex and intertwined. All I can really tell you is that Vista is for the most part slower than XP right now -- benchmarks by hardware sites have established that quite soundly. There are many issues to be worked through, both in our drivers and in Vista itself. We have plenty of tuning to do, and MS has some mistakes to fix. I suspect that things will get much, much better after Vista SP1 is released, thanks to work on the part of both companies. (Also, I can't share the expected timeline for that. Sorry.) Things are getting better every day and will continue to improve.

That's enough for now. In the next part, I'll discuss the actual process of working on the driver.

* This is described in internal documentation under the sub-heading: "All Your Bufs are Belong to OS".

Stop! Internship time.

Posted 07 June 2007

So as some of you may know, I'm currently working for NVIDIA. I was hired as an intern on the DirectX driver team, and started on Monday. Just getting settled in now, and I'll mainly be doing performance tuning work for the Vista driver.

Warning: the rest of this post will read like an HR/marketing recruiting presentation.

This place is amazing. First of all, it's Silicon Valley, so the weather is pretty much permanently nice. The NVIDIA campus is fairly compact, consisting of 6 buildings. There's a cafeteria, which is quite good and seems to have a solid variety. The food isn't free, but everybody gets a $5/meal credit (excluding breakfast, I think). I eat lunch for about $0.50 a day. I may start having dinner more often here too. The food services are run by Aramark, who I happen to like. (They were running the food services at school as well.) There's actually a bus that goes direct from my place to NVIDIA and back, but the latest bus in the morning leaves at 7:30 and arrives at 7:45, and in the evening leaves at 5:30 and arrives at 5:45. These are, of course, not natural times of day for an engineer, and nobody is around when I come in. I will be obtaining a car shortly, though that has been a huge headache. More on that some other time.

The culture is very much a relaxed, open environment, much like Microsoft was. Of particular interest is that there are no offices here. (Everybody at MS has an office except for some of the HR people.) Everybody has a cubicle. I'm told the CEO merely has a wall and a table. I don't mind it, except that with an office you can use speakers for music, which doesn't really work in a cube farm. I'm sharing my cube with another intern, but I believe all the full time guys get their own. I have two machines, an XP machine where I work and a Vista machine where stuff actually runs. I remote debug the Vista machine from the XP machine. There's a phone too, not that I ever use it.

Then of course there's the engineering. There's a particular dedication to care and quality here. Everyone strives to do their job and do it well. That's been emphasized over and over, and it's clearly quite fundamental to everything that happens here. Apart from that, I've had the privilege of seeing the details of how the hardware and drivers work, which has been extremely enlightening, even though I've just scratched the surface. Naturally I can't really talk about it. It's really quite stunning though, how beautifully well done the unified driver is. It's not simple, and it's not hack free, and it's not small. Still, the immense challenge it handles is just incredible. It's a single codebase that works across at least nine operating systems, with every GeForce NVIDIA has ever made (which is probably a couple dozen), and with all the versions of OpenGL and Direct3D over the years.

The interns are treated like any new hire here. That means I'm doing real work -- in particular, fixing the performance of a rather high-profile game on Vista. Of course, it also means no hand-holding. There are no tutorials here, people. I have several million lines of code, and a bunch of architectural documentation. Doesn't bother me, but I and the other interns are all expected to take our bugs and figure out what's going on with relatively little guidance. There are a vast number of in-house tools for doing all sorts of different things (like, a dozen or more), and there are internal docs to get you up to speed. These tools are seriously awesome. And after hours, plenty of employees like to stay and game and eat dinner here rather than going home.

I could get used to this.

[EDIT] I almost forgot. These are the expensive cars I've seen so far:
* Porsches (Boxster, Carrera, 911 Turbo)
* Corvettes (Stingray, C3, C4, C5, C6)
* Lots of high end Mercedes
* Ferraris (F430)
* Aston Martin V12 Vanquish
* Bentley Continental GT
* Dodge Viper
* Jaguars (don't know models)
* Lamborghini Gallardo

The Famous Language Thread List

Posted 29 May 2007

This one enjoys fairly significant popularity. (Note that only threads containing significant discussion are included.)

1) Professional Games Made In C#?
2) Java for game development?
3) Java----C/C++
4) c++ or c#
5) Question about Java Vs. C# Vs. C++
6) Java Games?
7) Java is fast?
8) Secondary Language:VB or Java?
9) What makes C++ so powerful?
10) C# games and cheating...
11) Is C# good enough for system utility programming
12) MC++ vs. C#
13) Which language is best for a 3d Games Engine?
14) C# vs C++ as a choice for development
15) Is Java the Future?
16) why C# and not Java?
17) What do you think of the D language?
18) my c++ d c# benchmark!
19) The Definitive Guide to Language Selection
20) Sharp Java
21) C++ or C#?
22) C++ or C#?
23) Java disadvantages
24) C++ or C#?
25) Visual C++.net vs Visual C#.net
26) C# - huh?
27) which language should i learn?
28) C or C++ or C#
29) learn C or C++ ??
30) Is C still useful in gamedev?
31) Why C# XNA When Everyone Wants C/C++
32) JIT compiled code vs native machine code
33) C++ or C?

This particular list is my top ten, because of the sheer frequency with which they occur. 12 days, 10 threads.
1) c++ or c# (5/1/06)
2) Java for game development? (5/2/06)
3) Java Games? (5/3/06)
4) Java----C/C++ (5/3/06)
5) MC++ vs. C# (5/4/06)
6) What makes C++ so powerful? (5/9/06)
7) C# games and cheating... (5/9/06)
8) Is C# good enough for system utility programming (5/9/06)
9) Which language is best for a 3d Games Engine? (5/11/06)
10) C# vs C++ as a choice for development (5/12/06)

Principles of Software Engineering

Posted 12 May 2007

Here it is. A first round of Promit's Principles of Software Engineering. I may expand on this list in future posts, but this will get us started nicely.

1) All good software engineering is fundamentally motivated by KISS and YAGNI.
It's a fairly straightforward observation that the complexity of software increases as functionality and flexibility are increased. This manifests itself across the codebase. You can see it in the basic interfaces and APIs, in the user interface, and in the checkbooks as you spend millions of dollars maintaining a super-flexible mess of goo. Good software strives for simplicity and elegance, not infinite configurability. Don't try to design systems that solve every conceivable problem that could arise. If an unforeseen problem does arise, refactor to accommodate it, keeping things as simple as is reasonable.
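
As a tiny illustration of the spirit of this (my example, not from any particular codebase): the first function below tries to anticipate needs nobody has yet; the second does the one thing actually required today and can be refactored when a real need shows up.

```cpp
#include <fstream>
#include <string>

// The "infinitely configurable" instinct: flags and hooks nobody asked for.
// (Declaration only; the point is the surface area, not the body.)
bool SaveTextFlexible(const std::string& path, const std::string& data,
                      bool compress = false, bool encrypt = false,
                      void (*onProgress)(float) = 0);

// KISS/YAGNI: solve today's problem simply; refactor if a real need appears.
bool SaveText(const std::string& path, const std::string& data) {
    std::ofstream out(path.c_str());
    out << data;
    return out.good();
}
```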
2) A wide range of experience in multiple languages, paradigms, and environments is critical to a developer's ability to design software.
Every developer should know a large variety of languages. Lisp, OCaml and/or Haskell, assembly, and C are on my list of important languages to know. Personally, I would not advocate developing in any of these languages (well, maybe Haskell), and I give my condolences to those of you who are forced to. Still, it's important to know these languages and many others because they give you a wider pool of knowledge and ideas to draw on. Many modern ideas in C++ are borrowed from Lisp, albeit in a rather contrived and roundabout manner. Software engineering isn't just about knowing the language you happen to be using and keeping a copy of Design Patterns on your desk. The more languages you learn, the more tools you have at hand, no matter what you're developing.

3) There are no legitimate uses of the singleton design pattern.
And I mean it. There is no correct way to use a singleton. There is no situation in which it makes sense. Not for logging, not for resource managers, and certainly not for your renderer or related objects. Everyone I've talked to has been a little bit hazy here, saying that maybe, in a particular scenario, a singleton could be a good solution. I am rejecting that outright. Singletons are a useless antipattern. If you have one, your design is broken.
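
For the record, here's a minimal sketch of what I mean by the alternative (the class names are mine, purely illustrative): instead of a globally accessible single instance, construct the object once at startup and hand it to whatever needs it.

```cpp
#include <iostream>
#include <string>

// The pattern I'm rejecting: a single, globally reachable instance.
class SingletonLog {
public:
    static SingletonLog& Instance() {
        static SingletonLog s_instance;   // hidden global state
        return s_instance;
    }
    void Write(const std::string& msg) { std::cout << msg << '\n'; }
private:
    SingletonLog() {}
};

// What I'd do instead: an ordinary object, owned by whoever creates it and
// passed explicitly to the code that needs it. Want two loggers in a test? Fine.
class Log {
public:
    void Write(const std::string& msg) { std::cout << msg << '\n'; }
};

class Renderer {
public:
    explicit Renderer(Log& log) : m_log(log) {}
    void DrawFrame() { m_log.Write("frame drawn"); }
private:
    Log& m_log;
};

int main() {
    Log log;                  // created once, at a well-defined point
    Renderer renderer(log);   // the dependency is explicit and visible
    renderer.DrawFrame();
    return 0;
}
```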
If you want some actual discussion of these points, feel free to post a forum thread. I just wanted to list the points here; each one merits a discussion of its own.

[EDIT] Reordered the principles.

Why I Hate Developing in OpenGL

Posted 09 May 2007

I've decided that I'm pretty tired of repeating this all the time, so I may as well write it all down.
  • The official spec doc is completely useless; you have to refer to the extension specs for any kind of useful information.
  • The extensions thing is irritating, and you usually end up using several dozen by the time you're done doing anything vaguely serious.
  • Everything's a freaking GLuint. I end up wrapping things up in classes for textures, VBs, IBs, etc in order to get real type safety. By the time I'm done, things pretty much look like D3D.
  • Can't store vertex declarations in a convenient object, like you can in D3D. I usually end up writing vertex declarations myself and building a function that calls the *Pointer functions appropriately (roughly the kind of thing sketched after this list).
  • Every time you change VBO, your vertex setup is trashed and you have to call all the Pointer functions again.
  • Binding to do everything means you can't even make simple assertions about the state of the pipeline between two draw calls. That is, when I'm getting ready to submit a batch, I cannot guarantee that the current VBO has not changed since the last draw call, because any MapBuffer, BufferData, etc calls in the middle probably included a BindBuffer.
  • No way to compile a GLSL shader into a quickly loadable binary form.
  • No coherent SDK type docs. If you want to find out about something, you figure out what it was named when it was an extension, and look it up.
  • Lack of a tool suite comparable to D3D (debug runtime, PIX). There are a few passable tools for free, and then there's gDEBugger, if you can afford to shell out for it. There's just so much more that's readily available in D3D.
  • Lots and lots of unnecessary cruft in the API. To be fair, D3D 9 is guilty too. Both D3D 10 and OGL 3 seek to solve this problem; the difference is that one has materialized as a product.
  • FBO and antialiasing don't mix. (Ok, so this is actually Ysaneya's complaint, but I'm willing to take his word on it.) There's a new extension for this, but only the GeForce 8 series exposes it.
  • GLSL can't be used on hardware that predates shader 2.0. This is getting less important as time goes on, but it's an irritating limit in the meantime.
  • Developing GLSL on an NVIDIA card is a pain, because they simply run it through a slightly modified Cg frontend. Long story short, a lot of illegal GLSL code will pass through on NVIDIA hardware, whereas ATI will flag it appropriately.
  • There are some bizarre cases of strictness in GLSL. For example, there's no implicit casting of literal integers to floats/doubles. So a 1.0 will compile, but 1 will break.
  • You can't really query if all GLSL functionality is available or not. The singular example is the noise() function. Very nearly nobody implements it, choosing instead to return a constant (usually black). You can't detect this failure, at all.
  • Lack of a D3DX equivalent. Math, image loading, etc. Getting a simple OpenGL application working without completely reinventing the wheel requires tapping about a half dozen libraries.
  • Related to the above, there's no D3DX Effect style functionality. If you've ever had the misfortune of working with CgFX, you know it's not really a great option.
  • You can't change the multisample mode without destroying the entire render window outright.
  • You can't create an FBO that is depth and stencil without another depth_stencil extension. That extension exists in NV and EXT forms, but no non-NVIDIA cards currently make it available.
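
Since a couple of the items above mention it, here's roughly the kind of wrapping I end up doing. This is just a sketch, assuming an extension loader like GLEW provides the buffer and vertex-attribute entry points; the class names are mine, not part of any real API.

```cpp
#include <GL/glew.h>
#include <cstddef>
#include <vector>

// A typed wrapper so a vertex buffer can't be confused with a texture or an
// index buffer, even though underneath it's all just a GLuint.
class VertexBuffer {
public:
    VertexBuffer()  { glGenBuffers(1, &m_id); }
    ~VertexBuffer() { glDeleteBuffers(1, &m_id); }
    void Bind() const { glBindBuffer(GL_ARRAY_BUFFER, m_id); }
private:
    GLuint m_id;
    VertexBuffer(const VertexBuffer&);             // non-copyable
    VertexBuffer& operator=(const VertexBuffer&);
};

// A poor man's vertex declaration: a list of attributes plus a function that
// re-issues the *Pointer calls, since binding a different VBO trashes them.
struct VertexAttrib {
    GLuint index;
    GLint size;
    GLenum type;
    GLsizei stride;
    std::size_t offset;
};

struct VertexDeclaration {
    std::vector<VertexAttrib> attribs;

    void Apply(const VertexBuffer& vb) const {
        vb.Bind();
        for (std::size_t i = 0; i < attribs.size(); ++i) {
            const VertexAttrib& a = attribs[i];
            glEnableVertexAttribArray(a.index);
            glVertexAttribPointer(a.index, a.size, a.type, GL_FALSE, a.stride,
                                  reinterpret_cast<const void*>(a.offset));
        }
    }
};
```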

Detailed explanation of GDNet's downtime

Posted 12 February 2007

If you're reading this, save a copy. I'm revealing deeply guarded secrets of GDNet's internal operations. There's no telling when this post will be annihilated. Don't say I didn't warn you.

As many of you have noticed, GDNet has been down over the last ten days or so. While the site leadership wishes to keep the exact happenings secret, I feel you are all entitled to the true story.

GameDev.Net, LLC does not run off a conventional data center. Such systems are not able to handle our unique site patterns. It was run in a conventional data center several years back, but stresses from many things, particularly the splurge of MMO threads in Help Wanted, would frequently damage the data center's physical construction, as well as the minds of those who worked there. Replacing the north wall of a data center gets old real fast.

When the newest revision of the site was rolled out we switched to a radically different architecture, molded around the highly concurrent and efficient phone switching systems used during the 1950s. However, one critical innovation was made. Instead of having women manage the switching, we hired a team of experienced monkeys to do it. They were far more resilient, and the added dexterity in their feet made them highly efficient. The team of monkeys was led by a rhino with several decades of experience in monkey management. The entire setup was designed by none other than Steve Irwin himself (rest in peace buddy). This safari zone resides in Fuchsia City, Texas, and continues to form the backbone of everything you see here today.

When GDNet went down, several of us went to the safari zone to investigate the problem. We discovered that the rhino had been attacked and drugged, and all of the monkeys were missing. We believe the attack was provoked when the drugs did not affect the rhino as quickly as the invaders had expected. The rhino sustained several injuries but he did recover quickly. The monkeys, however, were a different story, and it was that story which led to our extended downtime.

Upon examination of the outer perimeter of the safari zone, we discovered crumbs of a mysterious substance that was later identified as banana chips in a forensics lab. We believe that after the rhino was down, these chips were used to lure the monkeys away from the safari zone central building. (This building is a bomb shelter with heavy blast doors. It was designed to sustain nuclear attacks.) This trail of crumbs ended in the visitor center. We believe that this is where the switch monkeys were finally subdued.

A word about the visitor center. We here at GameDev.Net, LLC, are committed to quality in everything we do. Our visitor center trinkets are not your normal trinkets. Most are worth in excess of several thousand dollars. We sell them to our esteemed visitors at a fraction of that price. Review of sales records and interviews with the (human) staff at the visitor center revealed that some unknown masked men had purchased several trinkets earlier in the day; no official visits had been scheduled that day. (Now, you may be thinking that masked men coming into a visitor center unannounced to buy things is unusual. Clearly you've never visited Texas.)

We put out a federal call to keep an eye out for any GameDev trinkets that might appear on the black market or eBay. A few days later, an eBay auction did appear, selling "Slightly damaged GameDev trinkets". The description indicated that the damage was limited to being knocked around by monkeys. We had them! Via a federal database of eBay sellers, we were able to link into the GPS system and satellite imagery to observe the criminals from space. By applying some well known automatic image enhancement technology (you've seen this stuff on TV, it's not that new), we were able to increase the effective resolution of the satellite imagery by a factor of ten, thereby reading the license plates, Burger King nametags, and other identifying marks from the criminals.

Unfortunately, the FBI was less than subtle in their approach and the criminals were tipped off a few minutes early when the raid was about to begin. They made a break for it (though the monkeys were left at the house and safely rescued). The car chase was quite epic -- I was personally driving the police vehicle (a Corvette C6) that finally stopped their getaway vehicle. While original footage is not available, we have used the recorded telemetry to reconstruct the chase. That video is available.

The monkeys were discovered completely unharmed and actually fairly happy about their surprise vacation. They were relieved to get back to work though. (They're very serious about what they do.) They were flown back to the safari zone and are now hard at work. As for the perpetrators of this most vile act, we do have them in custody, but we are not sure about their motives yet. While I cannot reveal more details there, I can assure you that we are using the latest and greatest in torture interrogation techniques to determine more.
