If the device isn't available, both calls return the same status. While the success of the Present call is not logically tied to the success of the EndDraw call, in practice you can detect a "device lost" condition if either of them returns that status.
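As a minimal sketch of that pattern, using mock types and an illustrative Status enum in place of the real HRESULT codes (in actual Direct2D/DXGI code you would compare against the real "device lost" HRESULTs, e.g. D2DERR_RECREATE_TARGET):

```cpp
#include <cassert>

// Illustrative stand-in for the real "device lost" HRESULTs.
enum class Status { Ok, DeviceLost };

// Mock device; a real app would call ID2D1RenderTarget::EndDraw and
// the swap chain's Present instead.
struct MockDevice {
    bool lost = false;
    Status EndDraw() { return lost ? Status::DeviceLost : Status::Ok; }
    Status Present() { return lost ? Status::DeviceLost : Status::Ok; }
    void Recreate()  { lost = false; }  // recreate device-dependent resources
};

// Returns true if device-dependent resources had to be recreated.
bool RenderFrame(MockDevice& dev) {
    Status draw    = dev.EndDraw();
    Status present = dev.Present();
    // Either call reporting the lost state is enough to trigger recreation.
    if (draw == Status::DeviceLost || present == Status::DeviceLost) {
        dev.Recreate();
        return true;
    }
    return false;
}
```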
That is correct usage - SetDefaults sets the shader registers to the default values parsed from the shader source.
Since the register assignments of the constants should be assumed to vary between different shader instances, the API user should always call the constant table's SetDefaults after setting the corresponding shader, and then call the various Set* methods on the constants that need to differ from the defaults.
A constant table is just a map of shader symbols to their corresponding registers. By itself, it does not remember the values you set through it at all. When you call Set* on the interface, the implementation immediately calls Set*ShaderConstants on the associated device, using its map to determine which constant registers correspond to the symbolic constant name you give it.
The point is, the buffer of the constant table is not a buffer of the constant values you set through it - the buffer just holds the mapping between constant registers and constant names (and the default values).
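To make the write-through behavior concrete, here is a simplified model (not the real ID3DXConstantTable interface; all names here are illustrative): the table holds only the name-to-register mapping and the parsed defaults, and every Set* call forwards straight to the device.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Stand-in for the device's constant register file.
struct Device {
    std::vector<float> registers = std::vector<float>(256, 0.0f);
    void SetShaderConstant(int reg, float value) { registers[reg] = value; }
};

// Simplified model of a constant table: it maps symbol names to registers
// and remembers the defaults parsed from the shader source, but it never
// stores the values set through it.
struct ConstantTable {
    std::map<std::string, int> registerOf;  // symbol -> register index
    std::map<std::string, float> defaults;  // symbol -> default value

    void SetDefaults(Device& dev) {
        for (const auto& [name, value] : defaults)
            dev.SetShaderConstant(registerOf.at(name), value);
    }

    // Set* forwards to the device immediately; nothing is cached here.
    void SetFloat(Device& dev, const std::string& name, float value) {
        dev.SetShaderConstant(registerOf.at(name), value);
    }
};
```

Usage follows the pattern described above: after binding a shader, call SetDefaults, then override only the constants that differ.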
Roslyn cannot distinguish between benign and malicious intent. Something like encrypting all the files in "My Documents" is just business as usual from the system's point of view, but the majority of users certainly don't want a game to do that.
The overarching problem is that it is impossible to recognise harmful code in advance. And when all of your users get hacked by a malicious content pack that slipped past your inspection, it largely becomes your problem.
With reflection, you can circumvent such whitelists.
In general, it is not wise to trust user-written code at all. It is unrealistic to assume that one could take into account all possible attack vectors.
In "game makers", the user usually produces just data (or very primitive logic as in LBP), and STILL some users are able to hack the systems via said data.
Case in point, I played Mario Maker two days ago and stumbled upon a level named something like "this will crash the game" - which it indeed did. It is not far-fetched to think that such a level could then execute arbitrary code by using the level data as the injection vector, even though the MM level system was presumably not designed to run user code at all.
Case in point 2: by trivially editing some save data of certain Wii games, one used to be able to cause a buffer overflow which could then be used to run arbitrary code, including overwriting the system firmware with a custom one loaded from USB or SD. The trivial edit? Change a save file name to be just slightly longer than the buffer allocated for it. The vulnerable games, which I won't mention here, were not particularly obscure either.
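To illustrate the class of bug (purely illustrative; this is not the actual Wii code, and the buffer size and function names are made up): a loader that copies a save name into a fixed buffer without checking the length can be overflowed by exactly this kind of "slightly too long" name, whereas a defensive loader refuses anything that does not fit.

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Illustrative fixed-size buffer a naive loader might copy a save name
// into. Copying without a length check would let a long name overwrite
// whatever lives past the buffer (return addresses included).
constexpr std::size_t kNameBufferSize = 32;

// A defensive loader validates the attacker-controlled length instead of
// trusting the save data.
bool LoadSaveName(const std::string& nameFromSaveFile,
                  char (&buffer)[kNameBufferSize]) {
    if (nameFromSaveFile.size() >= kNameBufferSize)
        return false;  // would overflow: refuse the save
    std::memcpy(buffer, nameFromSaveFile.c_str(), nameFromSaveFile.size() + 1);
    return true;
}
```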
The only difference between self-signed and purchased certificates is that the latter has a chain of trust established. This means that clients can trust your server's identity without asking the user, as a trusted party has signed your certificate and therefore it can be assumed that it hasn't been tampered with. The actual encryption algorithm(s) and cryptographic strength are exactly the same whether you self-sign your certificate or purchase the signature for it.
SSL/TLS by itself does not do anything that would render it incompatible with URL rewriting. That said, you need to be careful about the protocol prefix and/or the port, if your system uses them somehow. HTTP's default port is 80, while HTTPS's is 443 (although almost all servers can be configured to listen on different ports regardless of the protocol). For example, if you hard-code some resource paths as absolute URLs, note the small but real difference between http://something/example.jpg and https://something/example.jpg.
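For instance, a common rewrite setup (sketched here for Apache with mod_rewrite enabled; adjust if your HTTPS listener is not on the default port 443) redirects plain-HTTP requests to the HTTPS equivalent of the same URL, which sidesteps hard-coded protocol prefixes in links:

```apache
# Redirect any plain-HTTP request to the HTTPS equivalent of the same URL.
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```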
The domain name is used when verifying the certificate owner's identity, along with the signature trust. An SSL client is free to ignore the protection that the domain name association gives (for example, common browsers will let you proceed even if there is no match), but this also decreases the effective security and manageability of the system.
For development, self-signed certs are ok because developers can of course trust a certificate they themselves created. But the chain of trust is very important when giving access to other people, so purchased signatures are the way to go when you publish.
Effects, especially realistic ones, are hard to do from scratch; there is no way around that. Usually, the effects you listed are implemented as a creative mix of billboards, particles, shaders and textures.
However, engines like Unreal already implement most of the effects you listed. Ask yourself: are you writing a game or a game engine?