If you are not writing something that needs to be cross-platform, there is no reason whatsoever not to use Visual Studio Community edition. I don't know what wintertime is concerned about above: a "Microsoft account" is just a banal free account that you might even already have if, say, you ever set up an outlook.com or hotmail email address. Anyway, it takes about a minute to create one and is free.
I would not agree with that. I'm using MSVC at work and QtCreator with MinGW at home for my hobby projects. The only thing I miss at home is MSVC's debugging tools (although QtCreator has certainly become more comfortable since I started using it). For everything else I need, QtCreator (and a compiler with extremely decent C++11/C++14 support) does an equivalent or better job. That said, for someone new to C++ I would still recommend starting out with MSVC. It's simpler to find help on the web, and especially precompiled libraries.
If you do care about being cross-platform there is still no reason to use Dev-C++ which hasn't been updated in a decade at this point.
While I can agree with the verdict (avoid Dev-C++ at all costs), the statement as given is not completely correct. Bloodshed Dev-C++ has not been updated in more than a decade, but there are two or three much newer forks around (although I think one of them is a bit abandoned again). However, I still would not advise anyone to use any Dev-C++ at this point. Code::Blocks and QtCreator are decent alternatives to MSVC (still not what I would suggest to a newbie, though, as said above).
Certainly true, but you do not need to provide a copy constructor (actually, in most cases it is harmful to write one by hand) since the compiler will generate one for you (provided all members can be copied).
I don't think that will work at all. dynamic_cast can, by definition, fail to convert types. If it fails it returns a null-pointer. If you are certain a cast should always work then you should use static_cast, not dynamic_cast.
OxygineInitiationSettings* settings = dynamic_cast<OxygineInitiationSettings*>(&initSettings);
if (settings != nullptr) ...
The optimising-away behaviour is exactly what is happening.
The compiler is saying "what a load of bollocks, cannot possibly happen" and throws it away.
Sadly that isn't the case, it can and does happen.
I think we all have done things at some point we aren't proud of and which relied on various levels of undefined behavior. However, I would say what makes people annoyed with you here is that you have the arrogance to blame resulting problems on the compiler. Sometimes you have to do nasty undefined things to get something extremely important finished in time. But if you need to do that, you sure as hell do not change compiler versions during your project lifetime. And if you do, you gracefully accept that you might have to fix all of your undefined behavior without blaming your tool.
It has to do with dynamic objects with multiple class inheritance in a multi-threaded environment.
Nothing in that list really explains or excuses what I have seen above. Out of interest, what kind of library/middleware are you talking about? It might be useful for an early discard decision.
So problem solved: It just wasn't using the latest value of the default parameter for every run of the program. Think I've experienced similar problems before...
Why does this sometimes happen when you don't have to manually rebuild for most code changes and it automatically knows to use the latest versions of the altered files? You'd think any minor change would be important and that the compiler would make sure to account for everything all of the time.
As I guessed in my first post, your dependencies are not properly set up. For example, when using MSVC you need to add all relevant files (anything that might get changed at some point) to the project, especially headers. You also have several options to modify/disable dependencies between files in the project, but unless you have changed those settings you should usually be fine.
The code as you have posted it should be working as intended. Something else must be going wrong, although with what little information is given I'm at a loss as to what exactly. If I were forced to guess, I would put a little money on a dependency screw-up, though.
Someone said that I should not use public keys because they are suspected of being susceptible to quantum cracking. Is this true for all algorithms or just certain ones? Also, people tell me I should transmit the key over the Internet, but if I'm not using public key cryptography, that's idiotic! So there seems to be a conflict here.
I'm not an expert, but my understanding is that it's a general property of public-key algorithms.
You said yourself that transferring the one-time-pad to the other party is not a problem. If you can do that, you can transfer the key for a symmetric cypher as well.
If that assumption was based on public-key cryptography being safe, you have to find a different method or trust that public-key cryptography remains strong enough. Maybe forward secrecy is relevant for you? I don't, however, have much interest in public-key cryptography.
samoth made a suggestion for 512-bit encryption. Why not 1024- or 2048- or 4096-bit? The point is, obviously I couldn't have infinity-bit encryption (though that's essentially what an OTP does, in a way), but why stop at 512?
Because 256 bits are already overkill to the best knowledge available today.
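A quick back-of-envelope check of why 256 bits is considered overkill against brute force (the 10^18 guesses-per-second figure is an assumed, generously fast attacker):

```latex
2^{256} \approx 1.16 \times 10^{77} \text{ keys}
\qquad
\frac{1.16 \times 10^{77}}{10^{18}\ \text{keys/s}}
\approx 1.16 \times 10^{59}\ \text{s}
\approx 3.7 \times 10^{51}\ \text{years}
```

Going to 512, 1024, or 4096 bits just makes an already astronomically infeasible search even more so, while costing performance.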
There were also suggestions of adding rounds and layering multiple algorithms over each other, etc. I've read that this is a bad idea, because in some cases it can actually weaken security, and it could potentially be hard to predict whether it will be strengthened or weakened.
If you are not an actual expert in cryptography you should not just do that, or should at least stick to modifications which have already been adequately discussed in the cryptographic community.
Also, one of the things that really bugs me about cryptography is that for the most part, it's not provably secure. It's so complex that there's usually no mathematical way to absolutely prove the difficulty of cryptanalysis, because someone will come up with a better way eventually. In many cases, there may be a theoretical limit to how easily an algorithm can be broken, but it seems to me like it can't usually be proven. It's the same thing with compression algorithms, or most kinds of data encoding, really. You just have to test it a billion times and then inductively assume that it works. But with compression algorithms, the worst thing that can happen is the file grows (and you can prevent that anyway, so really the worst thing is that it doesn't shrink), but with cryptography, the consequences can be catastrophic.
Then you don't rely on one cipher but on several with independent keys. Finding a fatal flaw in one cipher somewhere during your lifetime is possible, but unlikely. Finding fatal flaws in two or more ciphers during your lifetime is increasingly closer to impossible. Good candidates could be Rijndael (now known as AES), Serpent, and Twofish, since they were the finalists to become AES. Also, it is by no means certain that there will ever be a way to break a given cipher. For example, AES is used extensively (including by several governments) and the best attempt on it is still the purely theoretical attack I quoted from Wikipedia. Twofish has a similar purely theoretical attack under extremely special circumstances, and there is even a newer successor, Threefish.
[...] And you should shoot the other guy because you can't trust him to keep the secret.
I'm not an expert in the area, but that feels to me like the wrong way to go about it. In a lot of places in the world (especially those where an actual terror attack would really be noticed), just shooting someone is bound to cause at least a little bit of investigation. Shouldn't your priority be to make it look like a plausible just-an-accident scenario?
Well, he wants to get some data from A to B through a hostile environment. Standard internet public-key cryptography is obviously one way to go. But he can also send it 'in the clear' but pre-encrypted with a symmetric block cipher. Personally I would favor symmetric block ciphers because I know a bit more about their strength and attack feasibility than I do about public-key cryptography. Symmetric ciphers are also closer to the one-time pad he originally targeted (well, 'still targets', although by now it's pretty clear they are an added complication without adding anything).