> Someone said that I should not use public keys because they are suspected of being susceptible to quantum cracking. Is this true for all algorithms or just certain ones? Also, people tell me I should transmit the key over the Internet, but if I'm not using public key cryptography, that's idiotic! So there seems to be a conflict here.

I'm not an expert, but my understanding is that it's a general property of public-key algorithms.

You said yourself that transferring the one-time pad to the other party is not a problem. If you can do that, you can transfer the key for a symmetric cipher as well.
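To make that point concrete, here is a minimal sketch (function names are mine, purely illustrative): a one-time pad must be at least as long as *all* the traffic it protects, while a symmetric key is a fixed, tiny size. So any channel that can deliver the pad can trivially deliver the key.

```python
import secrets

def otp_encrypt(message: bytes, pad: bytes) -> bytes:
    # A one-time pad must be at least as long as the message,
    # and must never be reused.
    assert len(pad) >= len(message)
    return bytes(m ^ p for m, p in zip(message, pad))

message = b"attack at dawn" * 1000          # 14,000 bytes of traffic
pad = secrets.token_bytes(len(message))     # pad: as large as all the traffic
symmetric_key = secrets.token_bytes(32)     # e.g. a 256-bit key: just 32 bytes

ciphertext = otp_encrypt(message, pad)
assert otp_encrypt(ciphertext, pad) == message  # XOR is its own inverse

# Whatever channel can deliver the 14,000-byte pad
# can certainly deliver the 32-byte symmetric key.
print(len(pad), len(symmetric_key))
```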

If that assumption was based on public-key cryptography being safe, you have to find a different method, or trust that public-key cryptography will remain strong enough. Maybe forward secrecy is relevant for you? I don't, however, have much interest in public-key cryptography.

> samoth made a suggestion for 512-bit encryption. Why not 1024- or 2048- or 4096-bit? The point is, obviously I couldn't have infinity-bit encryption (though that's essentially what an OTP does, in a way), but why stop at 512?

Because 256 bits are already overkill to the best knowledge available today.
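Some rough back-of-the-envelope arithmetic shows why (the attacker speed of 10^18 guesses per second is an arbitrary assumption, chosen to be generous): brute force doubles in cost with every added key bit, and 2^128 is already out of reach.

```python
# Brute-force search space doubles with every key bit.
keys_128 = 2 ** 128
keys_256 = 2 ** 256

# Hypothetical attacker testing 10^18 (a billion billion) keys per second:
guesses_per_second = 10 ** 18
seconds_per_year = 60 * 60 * 24 * 365

years_128 = keys_128 // guesses_per_second // seconds_per_year
print(f"2^128 keys: ~{years_128:.2e} years at 10^18 guesses/s")

# Going from 256 to 512 bits multiplies the search space by another 2^256,
# but even 2^128 is already unreachable, so the extra bits buy nothing practical.
print(f"2^256 / 2^128 = 2^{256 - 128}")
```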

> There were also suggestions of adding rounds and layering multiple algorithms over each other, etc. I've read that this is a bad idea, because in some cases it can actually weaken security, and it could potentially be hard to predict whether it will be strengthened or weakened.

If you are not an actual expert in cryptography, you should not do that, or you should at least stick to modifications which have already been adequately discussed in the cryptographic community.
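The "layering" idea from the quote, done the discussed way (a cascade where each layer has its own independent key, decrypted in reverse order), can be sketched like this. The keystream here is a toy built from SHA-256 in counter mode purely for illustration, and the function names are mine; in practice each layer would be a vetted cipher.

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode generates a keystream
    # that is XORed over the data. Illustrative only -- not vetted crypto.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def cascade_encrypt(plaintext: bytes, keys: list[bytes]) -> bytes:
    # Layer several ciphers, each with its own independent key.
    data = plaintext
    for key in keys:
        data = keystream_xor(key, data)
    return data

def cascade_decrypt(ciphertext: bytes, keys: list[bytes]) -> bytes:
    # Undo the layers in reverse order -- the general pattern for
    # cascades of non-commuting ciphers.
    data = ciphertext
    for key in reversed(keys):
        data = keystream_xor(key, data)
    return data

keys = [b"independent key one", b"independent key two"]
ct = cascade_encrypt(b"secret message", keys)
assert cascade_decrypt(ct, keys) == b"secret message"
```

The point of independent keys is that breaking the cascade requires breaking every layer; a shared key would let one broken layer compromise the rest.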

Then you don't rely on one cipher but on several with independent keys. Finding a fatal flaw in one cipher somewhere during your lifetime is possible, but unlikely. Finding fatal flaws in two or more ciphers during your lifetime is increasingly close to impossible. Good candidates would be Rijndael (now known as AES), Serpent and Twofish, since they were the finalists to become AES.

> Also, one of the things that really bugs me about cryptography is that for the most part, it's not *provably* secure. It's so complex that there's usually no mathematical way to absolutely prove the difficulty of cryptanalysis, because someone will come up with a better way eventually. In many cases, there may be a theoretical limit to how easily an algorithm *can* be broken, but it seems to me like it can't usually be *proven*. It's the same thing with compression algorithms, or most kinds of data encoding, really. You just have to test it a billion times and then inductively assume that it works. But with compression algorithms, the worst thing that can happen is the file grows (and you can prevent that anyway, so really the worst thing is that it doesn't shrink), but with cryptography, the consequences can be catastrophic.

Also, it is by no means certain that there will *ever* be a way to break a cipher. For example, AES is used extensively (including by several governments), and the best attempt on it is still the purely theoretical attack I quoted from Wikipedia. Twofish has a similar purely theoretical attack under extremely special circumstances, and there is even a newer cipher from the same designers, Threefish.
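Assuming the attack meant here is the commonly cited biclique attack on AES-128 (roughly 2^126 operations, versus 2^128 for brute force), a quick calculation shows why "purely theoretical" is the right description:

```python
# Best published key-recovery attack on AES-128 (biclique) is commonly
# cited at roughly 2^126 operations, vs 2^128 for brute force.
brute_force = 2 ** 128
biclique = 2 ** 126          # rounded; often quoted as ~2^126.1
speedup = brute_force / biclique
print(f"speedup over brute force: {speedup:.0f}x")

# Even at 10^18 AES evaluations per second, 2^126 operations take:
years = biclique / 1e18 / (3600 * 24 * 365)
print(f"~{years:.1e} years")
```

A factor-of-four speedup over an attack that would take longer than the age of the universe is interesting to cryptographers but irrelevant in practice.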