

#5226500 Random Number Generation

Posted by Bacterius on 30 April 2015 - 07:41 AM

Someone said that I should not use public keys because they are suspected of being susceptible to quantum cracking. Is this true for all algorithms or just certain ones? Also, people tell me I should transmit the key over the Internet, but if I'm not using public key cryptography, that's idiotic! So there seems to be a conflict here.


It's not really clear yet what quantum computers are capable of. It's known that some mathematical problems underlying current public key schemes are not hard in a quantum computation model (notably, Shor's algorithm would defeat RSA, all of elliptic-curve cryptography, and the Diffie-Hellman key exchange, given a sufficiently large quantum computer). There are other, less popular public key algorithms - NTRU (which is covered by patents), other lattice-based schemes, McEliece (based on error-correcting codes) - that do not currently appear vulnerable to quantum algorithms. But since these have not seen as much study (they are not widely used) and quantum computing is still in its infancy, it's possible that they are vulnerable too and that further research will reveal it. Post-quantum cryptography, as it is called, is an active area of study.


But anyway, as I mentioned before, an unconditionally secure public key scheme is logically impossible, since a computationally unbounded attacker can always recover the private key from the public one.




why stop at 512?


You usually want to use the smallest key you can afford to use (taking into account potential future cryptanalysis if needed) because it's more efficient. It makes no sense to use a 4096-bit symmetric key when a 256-bit one will do just fine: if there is a miracle breakthrough that can defeat some 256-bit cipher instantly, what exactly makes you think a similar breakthrough is less likely for a 4096-bit cipher? Key length is not everything; a 4096-bit ROT13 is still just ROT13.




There were also suggestions of adding rounds and layering multiple algorithms over each other, etc. I've read that this is a bad idea, because in some cases it can actually weaken security, and it could potentially be hard to predict whether it will be strengthened or weakened.


That is correct. Unless the algorithm was specifically designed by its authors so that the number of rounds can be increased at the user's discretion, there is no guarantee that adding rounds will increase (or, indeed, not decrease) the cipher's security. That said, recent algorithms do tend to be parameterizable to accommodate different needs.


On the other hand, cascading ciphers (with independent keys) is perfectly fine if the cascade is designed correctly. Done right, it cannot decrease security, and the cascade is provably at least as secure as the strongest cipher in it. You do have to be careful, because you can really shoot yourself in the foot with a badly designed cascade, but provably secure cascades are easy to design if you are remotely competent.
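
Schematically, a cascade is just E2(k2, E1(k1, m)) with independently generated keys. A sketch of the shape of it (the two "ciphers" below are single-byte placeholders, not real cryptography; in real code you'd use two different vetted ciphers from a real library, e.g. something like AES-CTR and ChaCha20):

    #include <cstdint>
    #include <vector>

    using Bytes = std::vector<std::uint8_t>;

    // Placeholder "ciphers" -- they only demonstrate the composition.
    Bytes cipher1(Bytes data, std::uint8_t key1) {
        for (auto& b : data) b ^= key1;                    // NOT real crypto
        return data;
    }
    Bytes cipher2(Bytes data, std::uint8_t key2) {
        for (auto& b : data) b = std::uint8_t(b + key2);   // NOT real crypto
        return data;
    }

    // The cascade: encrypt with cipher1 under k1, then cipher2 under k2.
    // k1 and k2 must be generated independently; deriving one key from the
    // other is exactly the kind of design mistake that voids the argument.
    Bytes cascadeEncrypt(const Bytes& plaintext, std::uint8_t k1, std::uint8_t k2) {
        return cipher2(cipher1(plaintext, k1), k2);
    }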




Also, one of the things that really bugs me about cryptography is that for the most part, it's not provably secure. It's so complex that there's usually no mathematical way to absolutely prove the difficulty of cryptanalysis, because someone will come up with a better way eventually. In many cases, there may be a theoretical limit to how easily an algorithm can be broken, but it seems to me like it can't usually be proven. It's the same thing with compression algorithms, or most kinds of data encoding, really. You just have to test it a billion times and then inductively assume that it works. But with compression algorithms, the worst thing that can happen is the file grows (and you can prevent that anyway, so really the worst thing is that it doesn't shrink), but with cryptography, the consequences can be catastrophic.


That's right. A lot of optimization problems in physics have no exact solutions either, yet engineers still manage to build bridges that don't collapse (and you entrust your life to those engineers every time you walk or drive across one). Similarly, I'd say cryptography is doing quite well and has proven its worth so far, and the field is progressing rapidly. It might not be philosophically satisfying, but I'd say there is no real problem here beyond a lack of trust in the field (which is understandable, but it's really no different from the bridge example, and you can't be an expert in everything anyway).

#5226437 What is enumerate?

Posted by Bacterius on 30 April 2015 - 12:50 AM

Yeah, the Python docs are really good. Make sure you're looking at the right docs for your version though! (2 vs 3)

#5226254 Random Number Generation

Posted by Bacterius on 29 April 2015 - 08:26 AM

And that's the paradox. Obviously I wouldn't want to tell you anything specific, lest it defeat the purpose of encrypting anything in the first place. Suffice it to say, I want to be able to send data that I will be 100% certain that if any snooper were to intercept it, it would be impossible for them to see the information. (And yes I know about people being hit with wrenches, etc.)


As has been explained countless times in this thread, computational security guarantees will get you there, no problem. You don't need "perfect security"; in fact you don't want it, because in practice it is much more brittle than a straightforward application of modern cryptographic techniques. And if you are asking how to generate random numbers, then you certainly do not have the kind of resources needed to achieve, and more importantly maintain, perfect security in an imperfect world where data gets lost, people forget things, and opponents don't play by the rules.


Furthermore, this mindset of "I want 100% security and nobody will tell me otherwise" is precisely why people gravitate towards the OTP, and why they are advised against it. It's not a sustainable security model. It is a mistake. You are not the first person to set out to implement the next big super-duper unbreakable encryption, and you certainly will not be the last. Meanwhile, the rest of the planet, grounded in reality, uses modern, real-world cryptography, which buys them conveniences such as very fast, secure encryption; public key schemes, which power modern e-commerce (it's actually provably impossible to design an unconditionally secure public key scheme, by the way; how unfortunate); reliable digital signatures; and cool gadgets like zero-knowledge proofs and homomorphic encryption, which in turn enable things like electronic voting, secret sharing, and the like.


Yes, the OTP is cute and alluring, it has a tendency to enthrall and capture the imagination, blah blah blah, we get it. But it really isn't all it's cracked up to be, and if I may say so myself, there are much more interesting things out there to learn about than the OTP. It is ultimately a fairly boring, unenlightening algorithm that isn't very useful in practice, and it is also quite hard to use correctly (putting aside for the moment that security-oriented software is hard to write robustly in general, no matter how simple the protocols involved): key generation is hard, key distribution is even harder, and key reuse is fatal.


Oh, sorry, I thought I made that clear in a previous post. It will be encrypted, sent to a user over the Internet, then decrypted by that user. The data will be sent in real time and not stored on any hard drive. I have a method of sharing the key and that shouldn't be an issue.


I see no mention of authentication here. If you're going to share the protocol, then share it in its entirety. If knowledge of the protocol weakens it, then you have completely failed to understand Kerckhoffs's principle. Is it robust against impersonation? Against a man-in-the-middle? If the receiver does not understand what he receives (or the message is detected to have been modified in transit), what procedures should the two parties follow? Is the protocol interactive? Is it vulnerable to replay attacks? Can the parties' entropy sources be poisoned, and if so, is that detectable? If one party is compromised, does the protocol still offer forward secrecy? Does it support more than two users, if needed? Does it offer any kind of deniable encryption (which anything using the OTP will probably want, otherwise what the hell is the point)? These are a small sample of the kinds of questions that really matter; a protocol doesn't become perfectly secure just because it happens to use the OTP.


I'm sorry to have to say this, but based on your previous posts, I believe you don't know nearly as much about cryptography and computer security as you think you do (your second post already made that quite plain; it sounds like you stumbled across the Wikipedia page on the OTP and said to yourself "I want this").

#5226233 Random Number Generation

Posted by Bacterius on 29 April 2015 - 07:15 AM

The word virtually is interesting, as it implies "not entirely".  I don't intend to use any overly complex protocols anyway.


Not exactly sure what your point is here, because I was not actually talking about one-time pads, and my very next sentence started with "the only exceptions I know of". Also, one-time pads do not offer perfect security. They offer perfect secrecy; you are still lacking authentication and integrity checking, and for those you need an unconditionally secure message authentication code (MAC) if you wish to preserve the unconditional security properties, which is quite a bit more challenging to implement than the OTP itself. (It's funny how people get hung up on the "perfect secrecy" bit but completely skip the next chapter, which is "secrecy without authenticity is worthless".)


Actually, I do know quite a bit about security, especially cryptography, which is why I'm posting this.  I've been going over and over it in my mind, trying to find a reasonable solution, and it seems like true random data is the only thing with absolutely perfect security (at least in terms of the quality of the key).


Then maybe you could share what your actual problem is, so that other people who also know quite a bit about security can peer-review it and offer opinions and possible alternative solutions. There's a pattern I've noticed: people who fixate on one particular feature they need invariably fail to mention why they need it, and become somewhat abrasive when called out on it. That annoys me quite a bit, because people who claim to be knowledgeable in the field should know that transparency is a key attribute. But if you really want to go ahead with your (in my and others' opinions, questionable) solution, the answer to your problem was given on page 1, and I'm not sure this thread is a great place to talk about pop-sci quantum physics and discuss the nature of the universe.

#5226216 Random Number Generation

Posted by Bacterius on 29 April 2015 - 03:26 AM

DES, 3DES, Blowfish (they were once recommended but have since had vulnerabilities discovered)


Technically, 3DES has no known weaknesses that can be practically exploited; it's just dog slow compared to the more modern (and hardware-accelerated) algorithms available today, so it makes no sense to use it. Blowfish is also kinda slow, but its only known weakness is a too-small block size, which lets an attacker build distinguishers after a few gigabytes of data have been encrypted under the same key; that doesn't reveal the key, but it's not a feature you want in an encryption algorithm. The only (fatal) flaw of DES is its too-small key size; beyond that, it has no practically exploitable flaws in its internal structure, which is exactly why 3DES is a thing.


But really, it's pretty easy to pick a good encryption algorithm these days, and in any case almost no real world vulnerabilities are predicated on the failure of security properties of a low-level cryptographic primitive, which are incredibly robust today. They virtually all occur as either side channels in the implementation of the higher-level protocols, or edge cases in said protocols that happen to leak sensitive information. The only exceptions I know of are related to very high profile events such as the FLAME forgery, and it generally requires immense amounts of computational power, not to mention skill, to pull off something like this. In other words, I would be much more concerned with how the protocols are implemented than the theoretical properties of the algorithms used.

#5226104 Conway's Game of Life

Posted by Bacterius on 28 April 2015 - 11:15 AM

This is an instance of a really large class of logic errors that can be surprisingly hard to debug. The problem is that you're modifying the board in place. When you check the neighbours of the (0, 0) cell, you immediately overwrite that cell with its new state; then you move on to the (1, 0) cell, which has (0, 0) as a neighbour, except (0, 0) now holds its time-T+1 value, while the game of life rule says each cell at time T+1 depends on its neighbours at time T. That mismatch is exactly the bug you're observing, see?


Basically you want to make a copy of the board and put your changes in there, and then replace the old board with the new one once every cell has been checked.
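
Something like this, as a minimal sketch (board type and names are mine; adapt to however your board is stored):

    #include <array>

    constexpr int W = 8, H = 8;
    using Board = std::array<std::array<bool, W>, H>;

    int liveNeighbours(const Board& b, int x, int y) {
        int n = 0;
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                if (dx == 0 && dy == 0) continue;
                int nx = x + dx, ny = y + dy;
                if (nx >= 0 && nx < W && ny >= 0 && ny < H && b[ny][nx])
                    ++n;
            }
        return n;
    }

    Board step(const Board& current) {
        Board next{};  // all writes go here; `current` is never modified
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) {
                int n = liveNeighbours(current, x, y);  // always reads time T
                next[y][x] = current[y][x] ? (n == 2 || n == 3) : (n == 3);
            }
        return next;   // caller then does: board = step(board);
    }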


PS: for what it's worth, I had the exact same bug years ago back when I was starting to program, when I first played around with cellular automata... took me a whole week to figure it out!!

#5226085 Random Number Generation

Posted by Bacterius on 28 April 2015 - 09:39 AM

True randomness is hard. (Technically, impossible on a deterministic machine.)


If you really need to generate megabytes of random bits, there are ways to do it locally (of varying quality). Unix systems provide /dev/random, which is cryptographically secure.


You probably won't be able to draw megabytes from /dev/random in any reasonable amount of time anyway (unless the configuration has been tampered with), because entropy is difficult to collect on your average system: the kernel's estimates are conservative and there aren't *that* many entropy sources available. If you really need "real" random bits, there are small, fairly affordable USB devices that use signal noise or quantum effects to produce large amounts of them. Google "true random number generator" or "hardware random number generator" (HRNG).
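
If you just want to pull bytes from the kernel CSPRNG yourself, here's a minimal sketch (Unix only; note I'm reading the non-blocking /dev/urandom rather than /dev/random):

    #include <cstddef>
    #include <fstream>
    #include <stdexcept>
    #include <vector>

    // Read `count` bytes from the kernel CSPRNG. /dev/urandom never blocks;
    // /dev/random may stall waiting for the entropy estimate to refill.
    std::vector<unsigned char> randomBytes(std::size_t count) {
        std::ifstream dev("/dev/urandom", std::ios::binary);
        if (!dev)
            throw std::runtime_error("cannot open /dev/urandom");
        std::vector<unsigned char> buf(count);
        dev.read(reinterpret_cast<char*>(buf.data()),
                 static_cast<std::streamsize>(buf.size()));
        if (static_cast<std::size_t>(dev.gcount()) != count)
            throw std::runtime_error("short read from /dev/urandom");
        return buf;
    }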


But like others have said above, you seem to be misguided. The output of a cryptographically secure random number generator is designed to be computationally indistinguishable from a stream of "truly random" bits. And when you think about it, that's all you need: it doesn't matter that the output bits aren't "really" random, because there is no test anyone without the original seed can perform in reasonable time to detect that they aren't! So is there really any observable, measurable difference? And they are much cheaper to generate (you only need a couple hundred truly random bits to seed the CSPRNG, which then produces a virtually infinite pseudorandom stream), which is the whole point.


In fact, and I like to point this out to people, the probability that someone manages to distinguish the output of a CSPRNG from a truly random stream of bits is far, far lower than the probability of your hardware random number generator failing and producing correlated, non-random bits (and also lower than the probability of the computer running the CSPRNG failing). There is, in any case, a physical limit to how reliable any process can be made, and a CSPRNG's failure probability happens to fall below that threshold.

#5225672 Why Does Everyone Tell Newbies To Make Games?

Posted by Bacterius on 26 April 2015 - 11:24 AM

In some ways, many game engines/frameworks available today are simply too high-level for writing games like Pong or Tetris in their most basic form; with their 3D-oriented abstractions (cameras, projections, models), interactive physics systems, node-based programming and whatnot, they are more likely to confuse and discourage you than to teach you anything.

If you are starting out just moving up from text-based games, all you really want is an event loop and a canvas, so that's what I'd recommend.

#5224857 C++ Ternary operator ?: surprising conversion rules.

Posted by Bacterius on 22 April 2015 - 09:05 AM

Well, if I didn't assign it to anything, or return it, there would be no need to convert it at all, would there?


The expression still has to evaluate to some typed value because of the way the grammar is defined (and parsed). That the expression eventually ends up unused doesn't free the compiler from the obligation to reject invalid code. Also consider:

(cond ? new B() : new C())->foo(...);

If both B and C had a member function foo inherited from A, which of the three would you expect to be called? What if B has a member function foo but C doesn't? What if only A does? Giving meaning to such expressions seems like a pretty big source of confusion, IMHO. But I think I agree with BitMaster above in that the operator simply wasn't designed with polymorphism in mind.
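
For what it's worth, the usual workaround is to give the operator its common type yourself by casting one operand; a compilable sketch (toy types, obviously):

    #include <iostream>

    struct A { virtual ~A() = default; virtual void foo() { std::cout << "A::foo\n"; } };
    struct B : A { void foo() override { std::cout << "B::foo\n"; } };
    struct C : A { void foo() override { std::cout << "C::foo\n"; } };

    int main(int argc, char**) {
        bool cond = (argc > 1);
        // (cond ? new B() : new C())->foo();  // ill-formed: B* and C* have no common type
        // Casting one operand gives the conditional a common type, A*:
        A* p = cond ? static_cast<A*>(new B()) : new C();
        p->foo();   // virtual dispatch: prints B::foo or C::foo at runtime
        delete p;
    }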

#5224853 C++ Ternary operator ?: surprising conversion rules.

Posted by Bacterius on 22 April 2015 - 08:41 AM

Probably the most logical explanation is that the standard simply doesn't define the ternary operator to infer the strictest common base type of its operands, so it doesn't. I also think it would be kind of weird if the ternary operator had a return type that depended on all the types visible from its call site. (I suspect the only reason you "expect" it to return a value of type A* is that that's what your example function returns; otherwise, having the compiler resolve the ternary's type to some arbitrary base type would be a genuinely surprising "feature", in my opinion.)


But I'm sure someone later in the thread will post some C++ template madness that does just that, for kicks :D

#5224408 is it possible to program one app with different languages?

Posted by Bacterius on 19 April 2015 - 09:25 PM

Of course, directly calling a function in a program (or library) written in another language is not the only way to communicate with it. You can make any two programs communicate via the network or IPC; this is how DB queries and remote procedure calls generally work. That's also an example of programming an "app" with different languages: write the backend in one language and the client in another, and they can still communicate. This is probably not what you meant (you probably mean within the same process, e.g. Lua and C/C++), but I thought I'd point it out.

#5222834 Limited lifetime of returned objects issue

Posted by Bacterius on 12 April 2015 - 06:48 PM

I don't think monitor names are unique - for instance, my two monitors have the same name except one ends in 21" and the other in 22" - and in my experience this kind of reverse lookup tends to be brittle and unreliable. I see in the docs that you can register a callback to be notified of monitor connection/disconnection; I would personally use that to show the user the *available* monitors and to make sure you never hold onto a pointer to a disconnected monitor.
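
Assuming the library here is GLFW 3 (your description of monitor pointers and connection callbacks sounds like it; if it's something else, the same idea applies), a rough sketch:

    #include <GLFW/glfw3.h>
    #include <cstdio>

    // Called by GLFW whenever a monitor is plugged in or removed.
    void onMonitor(GLFWmonitor* monitor, int event) {
        if (event == GLFW_CONNECTED)
            std::printf("monitor connected: %s\n", glfwGetMonitorName(monitor));
        else if (event == GLFW_DISCONNECTED)
            std::printf("monitor disconnected; drop any cached pointer to it\n");
        // Re-query glfwGetMonitors() here and rebuild whatever list the UI shows.
    }

    int main() {
        if (!glfwInit()) return 1;
        glfwSetMonitorCallback(onMonitor);
        // ... run the application; cached GLFWmonitor* values stay valid only
        // until the callback reports them disconnected ...
        glfwTerminate();
    }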

#5222612 Is this C code using pointers valid?

Posted by Bacterius on 11 April 2015 - 10:23 AM


			current = (serverInfo*)realloc((*pServers), sizeof(serverInfo) * (serverCount+1));
			if (current)
				pServers = &current;
				return -1;
This part is broken.... I think (the logic is confusing).
You reallocate the input data, so it seems that you want to pass that new allocation back out to the caller. If so, it should be:
(*pServers) = (serverInfo*)realloc((*pServers), sizeof(serverInfo) * (serverCount+1));
Anywhere where you're taking the address of a local variable should be cause for great inspection of the logic :)



Careful with this though: if realloc fails and returns a null pointer, you've lost your original pointer and you're screwed! :D Unless the caller knows it has to keep a copy of the input pointer, which can get complicated fast in more complex situations.
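
The usual pattern is to realloc into a temporary and only overwrite the caller's pointer on success; a sketch reusing the OP's names (the serverInfo definition is a stand-in):

    #include <cstddef>
    #include <cstdlib>

    struct serverInfo { int id; };   // stand-in for the OP's struct

    // Returns 0 on success, -1 on failure. On failure *pServers is left
    // untouched, so the caller's original allocation is never lost.
    int growServers(serverInfo** pServers, std::size_t serverCount) {
        serverInfo* grown = (serverInfo*)std::realloc(
            *pServers, sizeof(serverInfo) * (serverCount + 1));
        if (!grown)
            return -1;          // *pServers still points at the old block
        *pServers = grown;      // publish the new block only on success
        return 0;
    }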

#5222610 Is this C code using pointers valid?

Posted by Bacterius on 11 April 2015 - 10:08 AM

Consider using array notation whenever appropriate; it's often a good idea to distinguish pointers to single objects from pointers used as arrays in your code, and as a bonus it helps you reason about the different levels of indirection involved. In fact, non-temporary arrays that see regular use should probably end up wrapped in a struct, with proper access functions and clear ownership.


Just because it's written in C doesn't mean it has to be hemorrhaging asterisks and ampersands; giving your code higher-level structure beyond what the syntax offers is part of what differentiates high-quality, readable, maintainable code from spaghetti code full of side effects :)
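
To illustrate the kind of wrapping I mean, a quick sketch (names are mine; in actual C++ you would of course just use std::vector):

    #include <cstddef>
    #include <cstdlib>

    struct IntArray {
        int*        data;       // owned buffer
        std::size_t count;      // elements in use
        std::size_t capacity;   // elements allocated
    };

    // Append a value, growing the buffer as needed.
    // Returns false on allocation failure, leaving the array intact.
    bool intArrayPush(IntArray* a, int value) {
        if (a->count == a->capacity) {
            std::size_t newCap = a->capacity ? a->capacity * 2 : 8;
            int* grown = (int*)std::realloc(a->data, newCap * sizeof(int));
            if (!grown) return false;    // original buffer untouched
            a->data = grown;
            a->capacity = newCap;
        }
        a->data[a->count++] = value;
        return true;
    }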



I voted your answer down FRex because without any concrete examples it is unnecessarily negative.

Saying that it's "obfuscated on purpose" and that "it's not even proper C" is not helpful unless you demonstrate how to do it 'properly'.

Just my thoughts.

I am not really an expert in C so I can't offer any advice on how sound/unsound the code is.

However, the code doesn't look awful at all. ;)


FRex may be a bit blunt, but he does raise some good points; the code has a serious flaw. Besides the main problem of the reallocated array not being returned to the caller (you want *pServers = current), the rest mostly has to do with structuring the code: architectural concerns, avoiding redundant variables, tighter variable scoping, maybe using snprintf to build the server name from the server index instead of going through itoa and a temporary buffer... although those aren't the pointer problems the OP asked for comments on.