Protecting shader code?

31 comments, last by InvalidPointer 14 years, 7 months ago
Quote:Original post by Yann L
The system I propose would entirely function in HW. In order to extract the decryption keys, one would have to RE the GPU itself, which is insanely expensive and complicated (or you need an insider at the GPU manufacturer, or hack the central registry servers, etc). Certainly not uncrackable, but much better than sending plain text high level shader code to the driver, which is an invitation to copy'n'paste.

Indeed. This is a couple orders of magnitude harder than the current situation, but I can imagine a sufficiently motivated hacker writing a simple kernel module to dump the decrypted executable from VRAM. Executable formats are well-documented and relatively easy to disassemble, so your code is still exposed.

Now, there are ways to stop this attack vector, but it's a tradeoff between performance and complexity. Plain binary shaders would get you most of the benefits for the least hassle.

I find it rather interesting that no IHV has managed to create a "binary shader" extension for general use, even though this has been one of the most-requested features since 2003! There must be significant technical hurdles blocking binary shaders - I distinctly recall ATI bashing the D3D shader compiler and actually *undoing* its optimizations before passing the code through its own optimizer. On Khronos' side, the most recent attempt was in OpenGL ES 2.0 and is basically dead in the water (no driver I know of implements it).

There seems to be more to this than typical ARB incompetence.
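
For reference, the ES 2.0 attempt boils down to glShaderBinary. A minimal sketch of how that path is meant to work - the format token and the blob come from an offline, vendor-specific compiler, so both are placeholders here:

#include <GLES2/gl2.h>

/* Minimal sketch of the OpenGL ES 2.0 binary shader path. The blob and
 * the vendorBinaryFormat token come from an offline, vendor-specific
 * compiler; both are placeholders in this example. */
GLuint LoadBinaryShader(GLenum type, GLenum vendorBinaryFormat,
                        const void *blob, GLsizei blobSize)
{
    GLuint shader = glCreateShader(type);
    glShaderBinary(1, &shader, vendorBinaryFormat, blob, blobSize);
    if (glGetError() != GL_NO_ERROR)   /* unsupported format, bad blob... */
    {
        glDeleteShader(shader);
        return 0;
    }
    return shader;                     /* attach to a program and link as usual */
}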

[OpenTK: C# OpenGL 4.4, OpenGL ES 3.0 and OpenAL 1.1. Now with Linux/KMS support!]

The main problem for me is that it's hard to think of any game that has gained a competitive advantage simply by copying graphical techniques. By the time someone "steals" your shader code, your game is already out, or close to being out. They're automatically behind you. Furthermore, they may have to do additional work on the engine and assets to even be able to use your techniques.

In the end, my opinion as a game-player is that the artistic assets are probably more important than the shader code. Having a good-enough engine displaying an excellent artistic style is a better situation than having a technically advanced engine displaying a bland or unoriginal artistic style.

I'm just not sure that hiding shader techniques is much of a competitive advantage. I'm even more doubtful that it's enough of a competitive advantage to warrant spending development time to protect them.
I agree that it doesn't really make any sense for *games*, but graphics are used in more fields than games. Companies in those fields tend to be very competitive and extremely protective of their IP (you know, armies of lawyers, node licenses/DRM, tightly-controlled hardware - all kinds of fun stuff!)

Yann L's million-dollar R&D figure kinda hinted at this. :)

[OpenTK: C# OpenGL 4.4, OpenGL ES 3.0 and OpenAL 1.1. Now with Linux/KMS support!]

Quote:Original post by Fiddler
Indeed. This is a couple orders of magnitude harder than the current situation, but I can imagine a sufficiently motivated hacker writing a simple kernel module to dump the decrypted executable from VRAM. Executable formats are well-documented and relatively easy to disassemble, so your code is still exposed.

That would be quite trivial to block. The GPU's memory manager would just have to make sure that the portion of VRAM holding decrypted data is never mapped into CPU-accessible address space (a 'no external access' flag for the memory pages concerned would do the trick). It could even use dedicated, high-speed VRAM for this purpose. If done right, there would be no technical way to access this information from software, not even by accessing the GPU on a per-register basis.
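
(Purely to illustrate the kind of bookkeeping involved - nothing below exists in any real driver or GPU, and every name is invented:)

#include <stdint.h>

/* Conceptual sketch only: a per-page flag in a hypothetical GPU memory
 * manager. No real hardware or driver exposes anything like this. */
typedef struct
{
    uint64_t vram_address;        /* physical location of the page in VRAM   */
    unsigned cpu_mappable  : 1;   /* may be mapped through the PCIe aperture */
    unsigned no_ext_access : 1;   /* the proposed 'no external access' bit   */
} GpuPageEntry;

/* The memory manager would reject any mapping or DMA request touching a
 * page with no_ext_access set, so decrypted shader code stays GPU-only. */
static int CanExposeToCpu(const GpuPageEntry *page)
{
    return page->cpu_mappable && !page->no_ext_access;
}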

Quote:Original post by Fiddler
There seems to be more to this than typical ARB incompetence.

Yeah, there are lots of technical issues. Specifically, finding an appropriate intermediate format that would still allow all the optimizations needed for different GPU architectures (e.g. scalar vs. vector), and that doesn't give a performance advantage to one specific manufacturer.

Quote:
By the time someone "steals" your shader code, your game is already out, or is close to being out.

It's not about a single game. It's about engine sub-licensing. Your engine is much more valuable to potential licensees if it has a totally unique selling point that the competition cannot easily reproduce.

Quote:
In the end, my opinion as a game-player seems to be that the artistic assets are probably more important than shader code. Having a good-enough engine displaying an excellent artistic style is a better situation than having a technically advanced engine displaying a bland or unoriginal artistic style.

Certainly true for current-generation engines. But with the advent of GPUs with more and more programmability (think Larrabee), shaders will become more and more important. We're just at the beginning here. A few years back, most titles still used the fixed-function pipeline. Imagine you somehow manage to code a fully realtime GI solver. That would definitely be something you'd like to keep to yourself as long as possible, because it would give you a tremendous competitive advantage.

And yeah, there's more to the graphics industry than games. But most of this also applies to future cutting-edge games.
Quote:Original post by Yann L
Quote:Original post by Fiddler
Indeed. This is a couple orders of magnitude harder than the current situation, but I can imagine a sufficiently motivated hacker writing a simple kernel module to dump the decrypted executable from VRAM. Executable formats are well-documented and relatively easy to disassemble, so your code is still exposed.

That would be quite trivial to block. The GPU's memory manager would just have to make sure that the portion of VRAM holding decrypted data is never mapped into CPU-accessible address space (a 'no external access' flag for the memory pages concerned would do the trick). It could even use dedicated, high-speed VRAM for this purpose. If done right, there would be no technical way to access this information from software, not even by accessing the GPU on a per-register basis.


Here is where the performance/complexity trade-off enters the picture. What if you overflow the dedicated VRAM (eviction strategies, re-decryption)? What about optimization? Would you really put a shader optimizer on the GPU instead of in the drivers? (Optimization would have to come *after* decryption!) What if a key or a pool of keys is compromised?

Myself, I'd be happy with regular binary blobs and some fast VRAM dedicated to multisampled framebuffers.

Quote:And yeah, there's more to the graphics industry than games. But most of this also applies to future cutting-edge games.


Yet cutting-edge games seem content to ship text-based shaders. Doom 3, Oblivion, Crysis...

I'd argue it's not such a big issue for games, simply because your competitor has already lost if he needs to copy your shaders (he will ship after you and fall behind the curve). If anything, moddable shaders add longevity - Doom and Oblivion have had their shaders modded to hell and back and the modders are still going strong.

Does anyone know how Unreal-based games ship their shaders (text or binaries)?

Edit: spelling dammit!

[Edited by - Fiddler on September 18, 2009 2:43:25 AM]

[OpenTK: C# OpenGL 4.4, OpenGL ES 3.0 and OpenAL 1.1. Now with Linux/KMS support!]

Quote:Original post by Fiddler
Quote:Original post by Yann L
Quote:Original post by Fiddler
Indeed. This is a couple orders of magnitude harder than the current situation, but I can imagine a sufficiently motivated hacker writing a simple kernel module to dump the decrypted executable from VRAM. Executable formats are well-documented and relatively easy to disassemble, so your code is still exposed.

That would be quite trivial to block. The GPU's memory manager would just have to make sure that the portion of VRAM holding decrypted data is never mapped into CPU-accessible address space (a 'no external access' flag for the memory pages concerned would do the trick). It could even use dedicated, high-speed VRAM for this purpose. If done right, there would be no technical way to access this information from software, not even by accessing the GPU on a per-register basis.


Here is where the performance/complexity trade-off enters the picture. What if you overflow the dedicated VRAM (eviction strategies, re-decryption)? What about optimization? Would you really put a shader optimizer on the GPU instead of in the drivers? (Optimization would have to come *after* decryption!) What if a key or a pool of keys is compromised?

Myself, I'd be happy with regular binary blobs and some fast VRAM specifically for multisampled framebuffers.

Quote:And yeah, there's more to the graphics industry than games. But most of this also applies to future cutting-edge games.


Yet cutting-edge games seem content to ship text-based shaders. Doom 3, Oblivion, Crysis...

I'd argue it's not such a big issue for games, simply because your competitor has already lost if he needs to copy your shaders (he will ship after you and fall behind the curve). If anything, moddable shaders add longevity - Doom and Oblivion have had their shaders modded to hell and back and the modders are still going strong.

Does anyone know how Unreal-based games ship their shaders (text or binaries)?

Text. They're in the Engine\Shaders subfolder of the install directory. I will also openly admit that I stole their color levels control shamelessly (all two lines or whatever) as it's a fascinating postprocessing effect I haven't really seen used before. At least in games. It's only one element in the entire toolbox, however-- I've certainly written plenty of the other components myself, especially in the reflectance department. Not even from a tutorial or anything :)
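
For anyone who hasn't met a levels control before: the per-channel remap is basically the Photoshop formula. A generic sketch (not the actual Unreal code, and the parameter names are made up):

#include <math.h>

/* Generic per-channel 'levels' remap (Photoshop-style), just to show how
 * small the effect really is. Parameter names are invented; this is not
 * lifted from Unreal's shaders. */
float Levels(float c, float inBlack, float inWhite, float gamma,
             float outBlack, float outWhite)
{
    float t = (c - inBlack) / (inWhite - inBlack);          /* rescale input  */
    if (t < 0.0f) t = 0.0f; else if (t > 1.0f) t = 1.0f;    /* clamp          */
    t = powf(t, 1.0f / gamma);                              /* midtone gamma  */
    return outBlack + t * (outWhite - outBlack);            /* rescale output */
}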

Point is, there are other, more important things to be worrying about.

EDIT: To clarify, Unreal didn't even appear to use it much-- then again the phrase 'Unreal brown/gray' had to start somewhere :)
clb: At the end of 2012, the positions of jupiter, saturn, mercury, and deimos are aligned so as to cause a denormalized flush-to-zero bug when computing earth's gravitational force, slinging it to the sun.
You're all doing it wrong. This is the only sure way of protecting your IP.
----------
Gonna try that "Indie" stuff I keep hearing about. Let's start with Splatter.
Quote:Original post by Schrompf
You're all doing it wrong. This is the only sure way of protecting your IP.


eeeewww, this looks ugly
mat3 tbnMatrix = mat3(t.x, b.x, n.x, t.y, b.y, n.y, t.z, b.z, n.z);

I wonder if the driver will create 9 MOV instructions.

But it is a funny post anyway.

Personally, all I want is binary blob support (compile once on the user's machine and store on disk for faster startup).
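
Something along these lines, assuming a driver that actually exposes the OES_get_program_binary extension (on a real build the entry point would be fetched via eglGetProcAddress; error handling trimmed):

#include <stdio.h>
#include <stdlib.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

/* Sketch of the 'compile once, cache on disk' idea via the
 * OES_get_program_binary extension. Assumes the extension is present. */
void CacheLinkedProgram(GLuint program, const char *path)
{
    GLint size = 0;
    glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH_OES, &size);
    if (size <= 0)
        return;

    void   *blob    = malloc((size_t)size);
    GLenum  format  = 0;
    GLsizei written = 0;
    glGetProgramBinaryOES(program, size, &written, &format, blob);

    FILE *f = fopen(path, "wb");
    if (f)
    {
        fwrite(&format, sizeof(format), 1, f);   /* format token first   */
        fwrite(blob, 1, (size_t)written, f);     /* then the raw binary  */
        fclose(f);
    }
    free(blob);
}

/* Next run: read the file back and feed it to glProgramBinaryOES; if the
 * link status comes back false (driver update, different GPU), fall back
 * to compiling from source. */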

An inaccessible memory location on the video card? Sounds good, but I don't understand how it would work. A flag that makes it impossible for the OS to read it? That's not possible unless hardware changes are made: some new type of VRAM would need to be designed.

I doubt that it would take 10 years for _some_ kind of protection to make it to market. I'm sure they are already working on it and multiple solutions are ready. As soon as the industry pressures NVIDIA, Intel and AMD, they will implement it within a year.
Sig: http://glhlib.sourceforge.net
an open source GLU replacement library. Much more modern than GLU.
float matrix[16], inverse_matrix[16];
glhLoadIdentityf2(matrix);
glhTranslatef2(matrix, 0.0, 0.0, 5.0);
glhRotateAboutXf2(matrix, angleInRadians);
glhScalef2(matrix, 1.0, 1.0, -1.0);
glhQuickInvertMatrixf2(matrix, inverse_matrix);
glUniformMatrix4fv(uniformLocation1, 1, GL_FALSE, matrix);
glUniformMatrix4fv(uniformLocation2, 1, GL_FALSE, inverse_matrix);
Just a note, GLES2 supports binary blob shaders; the idea of having something pre-compiled to send to the driver is not exactly vendor-specific; one could imagine a system where one sends pre-compiled byte code to a driver which in turn works its own mojo on it.

Right now, I'd just like the ability to send compiled shaders to cut down on start-up times, i.e. each GPU architecture one targets has a compiler whose output you can feed to a driver, though this might open an entirely different ugly can of worms (note: one can use the nVidia Cg compiler to generate nVidia asm, so if you target nVidia cards only you could do that).
Close this Gamedev account, I have outgrown Gamedev.
I, at least, dream of such solutions, but it will never happen, since you have to transfer the code. When we remove the layer between CPU and GPU it might become more possible, but that will still present many problems. Think about DVD encryption: it was broken just as fast as it was released. The same goes for your code: if it has anything worth protecting, it will be broken within moments of hitting the shelves. Shader code I would worry less about. Save it on the hard drive in as encrypted a form as possible, have the engine load that, and have it processed for GPU retrieval.
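
In other words, something like this - LoadWholeFile and DecryptShaderBlob are hypothetical helpers, and the plaintext still reappears in memory and reaches the driver, so it only stops casual copy-and-paste:

#include <stdlib.h>
#include <GL/gl.h>

/* Sketch of the 'encrypted on disk, decrypted by the engine' approach.
 * LoadWholeFile and DecryptShaderBlob are hypothetical helpers; on desktop
 * GL the glCreateShader/glShaderSource entry points come from a loader. */
extern char *LoadWholeFile(const char *path, int *size);      /* hypothetical */
extern char *DecryptShaderBlob(const char *data, int size);   /* hypothetical */

GLuint LoadEncryptedShader(GLenum type, const char *path)
{
    int   size   = 0;
    char *cipher = LoadWholeFile(path, &size);
    char *source = DecryptShaderBlob(cipher, size);  /* plaintext lives here... */

    GLuint shader = glCreateShader(type);
    const char *src = source;
    glShaderSource(shader, 1, &src, NULL);           /* ...and the driver sees it */
    glCompileShader(shader);

    free(source);
    free(cipher);
    return shader;
}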
Bring more Pain

This topic is closed to new replies.
