Why don't game engines run in kernel mode?

7 comments, last by Hodgman 8 years, 10 months ago

Hello, I've been wondering: why aren't game engines written as drivers running in kernel mode, to get maximum performance? They could then manage their memory the way they like, without the indirection of translating process address space to actual absolute addresses. Apart from the potential freeze/BSOD if there is a fault in the code, there don't seem to be any other setbacks? Assuming there are no faults in the code, wouldn't it be utterly superior to running it in user mode?


No. Ignoring for a moment that the virtual-to-physical address space mapping is done in hardware through the MMU, which has a built-in cache, so the address translation is pretty much free: the main bottleneck in games these days is either CPU/memory/IO (which kernel mode can't really help much with) or graphics API overhead (which kernel mode would make worse, since there is no usable graphics API in kernel mode, so you'd have to basically reimplement a D3D or OpenGL runtime in kernel mode for every version of every display driver you intend to support, which is not realistic).

Add to that the fact that switching from user to kernel mode is not free, and that kernel mode development is harder (and more difficult for cross-platform development); that all code has bugs, and in kernel mode any bug has the potential to, say, silently corrupt your physical memory; that since Vista any Windows drivers you use must be signed or just won't be loaded; that the driver would be sitting there idling for no reason even when the game is not running; and that you'd be unable to run the game on a system you don't have administrative privileges on... just, no.

User mode exists for a reason, and there really isn't much performance to be gained in kernel mode anyway compared to the work you'd need to put in to make it work to begin with.
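To put a rough number on the "switching from user to kernel mode is not free" part, here's a minimal micro-benchmark sketch (Linux/glibc assumed; the exact cost varies wildly with the CPU and kernel mitigations) that times a deliberately trivial syscall:

#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    enum { N = 1000000 };
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; ++i)
        syscall(SYS_getpid);   /* forces a genuine user->kernel->user round trip each call */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("~%.0f ns per syscall round trip\n", ns / N);
    return 0;
}

Even at a few hundred nanoseconds per transition, it only matters if you cross the boundary an enormous number of times per frame, which a well-designed engine simply doesn't do.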

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”

Apart from the potential freeze/BSOD if there is a fault in the code, there don't seem to be any other setbacks? Assuming there are no faults in the code

That's a huge ass assumption.
Remember those guys mad at Steam because it deleted their entire documents folder?
Imagine a game corrupting your entire filesystem, or overclocking your motherboard, or disabling the fans to the point where something melts and you end up with irreversible hardware damage.

I mean, there are AAA games today that you need to kill with task manager because they didn't close correctly. In kernel space that means you have to reboot.

A single error in kernel mode can be utterly catastrophic. Fortunately the vast majority of them are caught by the OS, either because they did something clearly illegal or thanks to OS safety checks, which is why, when a BSOD triggers, the entire system is brought to a halt as a precaution.

Today we barely see BSODs because of 1. software maturity, and 2. engineers trying very hard to move as much as possible out of kernel space.

since there is no usable graphics API in kernel mode, so you'd have to basically reimplement a D3D or OpenGL runtime in kernel mode for every version of every display driver you intend to support, which is not realistic). Add to that the fact that switching from user to kernel mode is not free, and that kernel mode development is harder (and more difficult for cross-platform development),

Add to that, "hooks" like xc360 (XBox360 gamepad emulator), Steam integration, and other similar hooks would be very, very hard hard.

This is kinda like saying cars are too slow, so rather than further develop the cars to reduce waste and improve their performance we just strap rockets to them.

Except in this case even if that didn't already sound like a bad idea, the rockets would barely carry the car's weight and have a high chance of exploding on ignition.

This is kinda like saying cars are too slow, so rather than further develop the cars to reduce waste and improve their performance we just strap rockets to them.


You mean more like "cars are so slow, so we'll ride the engine by sitting on top, no seats or steering wheel and just try to lean to steer..." :lol:

Assuming there are no faults in the code


Statistically, games are amongst the most complex, and therefore most buggy, of all the applications you'd ever run on your system. I'd not trust EA, Bethesda or any other game outfit to run in kernel mode; just the DRM stuff they bundle that runs in kernel mode causes enough issues to start with...
Some games used to boot into their own custom OS. That would be good for high performance games and would eliminate many OS-related problems and slowdowns, but most people don't want to reboot just to play a game.

This is my thread. There are many threads like it, but this one is mine.

Some games used to boot into their own custom OS. That would be good for high performance games and would eliminate many OS-related problems and slowdowns, but most people don't want to reboot just to play a game.


Oldschool!

Yes, this was popular on older systems like the Amiga, where the game would be a bootable disk bypassing Workbench. You'd still have routines similar to a BIOS, but back then most games avoided such things and wrote directly to the hardware.

The main reason for bypassing an OS would be to directly access graphics and sound devices, but these days those devices are so heavily abstracted that you'd be cutting off your nose to spite your face: without the driver stack and OS you'd only have a basic 2D framebuffer, nothing more...

It might be a weird idea in 2015, but I'd be curious whether it would be helpful for consoles to have DOS-like operating systems. In DOS you were provided all the basic OS services and drivers you needed for a typical application of that era, but you could also bypass the OS completely and talk directly to the hardware if you really wanted to, and your program ran in kernel mode and ruled over the system. Since the hardware is always the same, there would be only one I/O interface to code against per console. (So it wouldn't be like the actual DOS from the early 90s, where you had to code for all possible drivers.)
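For instance, under real-mode DOS the video memory was just sitting there for any program to poke. A tiny sketch in old 16-bit Turbo C style (the far-pointer cast and the 0xB800 text-mode segment are the classic idiom; assumes 80x25 colour text mode):

int main(void)
{
    /* The colour text-mode framebuffer lives at segment 0xB800.
       Each screen cell is two bytes: character, then attribute. */
    unsigned char far *vga = (unsigned char far *)0xB8000000UL;

    vga[0] = 'A';   /* character at row 0, column 0        */
    vga[1] = 0x1F;  /* attribute: white text on blue back  */
    return 0;
}

No driver, no API, no permission check: the program and the hardware are the whole stack.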

I don't think console manufacturers would go in that direction with modern consoles though, for several reasons.

  1. Next-gen graphics APIs like Vulkan and DirectX 12 are already doing a great job of removing as much overhead as possible from the driver by moving a lot of responsibilities to the application, and they do so in a safe way that doesn't expose dangerous kernel-mode instructions (see the sketch after this list). Going kernel-mode would take away the overhead of the syscall (context switching, reading the interrupt vector, etc.), but that's about it.
  2. It would limit the manufacturer's ability to improve their console's performance by providing better drivers if some studios are bypassing their drivers.
  3. It would be dangerous for the manufacturer's precious DRM. A buffer overflow happens quickly, and it's quite useful when it's running in kernel mode. ;) Find one game with a buffer overflow and you can probably rootkit the OS with a DRM bypass, and then it's game over.
  4. Since games and saves are all on the hard drive, a fault in one game's code could affect more than that game itself; if it corrupts the file system, it breaks a lot more than just that game. That's a risk that neither the developer nor the console manufacturer would want to take. Besides, only the brave would play Bethesda games. :P
  5. Privileged code is inherently more difficult to debug. Even your debug printout and code dump functions can get corrupted if something goes wrong.
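To illustrate point 1: with Vulkan the application itself records command buffers and hands them to the queue in explicit submits, work that an older driver would have done for you behind the scenes. A minimal sketch (compute, to keep it short; the command buffer, queue and pipeline are assumed to already exist, the pipeline's shader is assumed to need no descriptor sets, and error checking is omitted):

#include <vulkan/vulkan.h>

void record_and_submit(VkCommandBuffer cmd, VkQueue queue, VkPipeline pipeline)
{
    VkCommandBufferBeginInfo begin = {0};
    begin.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;

    /* The application, not the driver, builds the command list. */
    vkBeginCommandBuffer(cmd, &begin);
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, pipeline);
    vkCmdDispatch(cmd, 64, 1, 1);
    vkEndCommandBuffer(cmd);

    /* One explicit submission covers the whole batch of recorded work. */
    VkSubmitInfo submit = {0};
    submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.commandBufferCount = 1;
    submit.pCommandBuffers = &cmd;
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
}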

So yeah, console games are probably going to stay in user mode for now, and run huge OSes. But as a PC gamer this is a good thing, because I don't want developers to get too comfortable with kernel-mode tricks, otherwise I'll get fewer games to play. :)

Modern consoles often do have the user/kernel mode split, but -

Next-gen graphics APIs like Vulkan and DirectX 12 are already doing a great job of removing as much overhead as possible from the driver by moving a lot of responsibilities to the application, and they do so in a safe way that doesn't expose dangerous kernel-mode instructions. Going kernel-mode would take away the overhead of the syscall (context switching, reading the interrupt vector, etc.), but that's about it.

Vulkan/D3D12 are inspired by consoles' graphics APIs.
Consoles typically do allow user-mode games to do a lot of dangerous stuff, such as writing instructions directly to some hardware, like the GPU.
i.e. you already have the choice of calling higher-level GL-style functions, or writing specially crafted bitwise commands directly into buffers that are flushed to the GPU front-end. Sometimes the GPU command buffer might be hardcoded to a certain memory address (a memory-mapped IO register), so code like this might push a packet into a queue which will enable the blending register when executed by the GPU later in the frame:
*(size_t*)0xc1000000 = 42;
:D
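A slightly less terse (and still entirely made-up) sketch of that pattern; every address, packet header and register number here is hypothetical, but the shape (volatile writes into a memory-mapped ring buffer, then a write-pointer kick so the GPU front-end picks the work up later) is roughly what this style of submission tends to look like:

#include <stdint.h>

/* Hypothetical addresses and packet layout, for illustration only. */
#define HYPOTHETICAL_RING_BASE   ((volatile uint32_t *)0xC1000000u)
#define HYPOTHETICAL_RING_WPTR   ((volatile uint32_t *)0xC1000FF0u)
#define HYPOTHETICAL_RING_WORDS  1024u

static uint32_t wptr;   /* CPU-side copy of the write cursor */

static void push_word(uint32_t w)
{
    HYPOTHETICAL_RING_BASE[wptr % HYPOTHETICAL_RING_WORDS] = w;
    ++wptr;
}

void enable_blending(void)
{
    push_word(0x00010042u);          /* made-up header: "set register" packet */
    push_word(1u);                   /* made-up payload: blend enable = 1     */
    *HYPOTHETICAL_RING_WPTR = wptr;  /* kick: tell the GPU new work arrived   */
}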

Often user-mode code will also be responsible for allocating virtual address space, allocating physical memory regions, and binding the two together. Typically the OSes won't implement virtual memory (paging to disk to magically make physical memory seem larger), though it may be possible for a game to implement it itself.
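A rough analogue of that "reserve address space first, attach physical backing later" split, using the ordinary Windows user-mode API rather than any console SDK:

#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    SIZE_T size = 64 * 1024 * 1024;

    /* Step 1: claim a 64 MB range of virtual addresses, with no memory behind it yet. */
    void *base = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
    if (!base) return 1;

    /* Step 2: later, back just the first 1 MB of that range with committed pages. */
    void *usable = VirtualAlloc(base, 1024 * 1024, MEM_COMMIT, PAGE_READWRITE);
    if (!usable) return 1;

    memset(usable, 0, 1024 * 1024);   /* only the committed part is touchable */
    printf("reserved %p, committed first 1 MB\n", base);

    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}

Keeping the two steps separate is what lets an engine lay out a stable address map up front and stream physical memory in and out behind it.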

Also, there have been notorious bugs where buffer overruns in user-mode game code have actually led to the downfall of DRM hardware!

Despite this, they do typically strive to balance security and stability. It's usually possible to allocate executable memory and dynamically write executable code into it (e.g. JIT'ing), but it will typically be forbidden by the certification requirements as it's such a vulnerable attack vector.
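For reference, the JIT pattern being talked about is just this kind of thing (x86-64 Linux sketch; on a console the equivalent would normally be blocked outright or flagged at certification, and hardened desktop systems may also refuse writable+executable mappings):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* x86-64 machine code for: mov eax, 42 ; ret */
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;

    memcpy(buf, code, sizeof code);

    int (*fn)(void) = (int (*)(void))buf;   /* treat the bytes as a function */
    printf("jitted function returned %d\n", fn());

    munmap(buf, 4096);
    return 0;
}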
Most of the certification requirements actually focus on the user experience: things like loading screens always animating so the user knows the system hasn't hung, responding to OS requests in a timely manner, leaving enough system resources spare for the OS to use for background tasks, using consistent naming and iconography, filtering user content consistently, etc.

This topic is closed to new replies.
