The expanded key material for encryption and decryption is the same, as far as I know (unless some strange variant exists that differs), so if your encryption routines work, it suggests your decryption routines are wrong. Perhaps you could post them so we can take a look.
Bacterius - Member Since 28 Feb 2011
Posted by Bacterius on 28 March 2014 - 07:23 AM
I've had a think, and I've decided that using TLS isn't necessary. This is an open source project, so people can figure out the protocol either way. And there's no way they can retrieve the password from the hash.
However, I won't remove the encryption currently in place, because it's been a learning experience. I'll switch to using RSA key exchange just because I can.
Ignoring absolutely all of the advice in this thread is your call, but "people can figure out the protocol either way", "there's no way they can retrieve the password from the hash", "it's been a learning experience", and "just because I can" really aren't good ways to approach security-related code, and they don't inspire confidence. There's a reason it's recommended not to roll your own. Learning experiences in cryptography should be confined to personal experimentation; as soon as you're handling user information, you are morally and ethically obligated* to secure it to the best of your ability, and if that means using existing, industry-standard, proven technology, or contracting an expert to implement or audit your code, so be it. Just be aware that by making this choice, you are almost certainly not acting in your or your users' best interest.
* Depending on what your program does, you may in fact be legally required to submit such code for an audit, and be held legally responsible for protecting your users' privacy. This probably doesn't apply in your case; usually it applies to banking authentication and transactions, storing employee payroll information, and so on. But it is worth keeping in mind that if a significant data breach occurs, and a company is found to have not acted responsibly by using a home-made security backend, things can occasionally become very, very unpleasant for them.
Posted by Bacterius on 26 March 2014 - 10:06 PM
What I am trying to understand is how they (amplitude and frequency) are used/implemented in code (specifically the code you posted).
When you're not sure what equation you should be using to describe some motion, or are unsure what the variables represent, it's a good habit to do a quick dimensional analysis of the equation, using the simple rules below:
- dimensions multiply and divide as usual
- you cannot add or subtract different dimensions
- transcendental functions are dimensionless
Using this on the equation Alvaro posted, you can see that the frequency (as inverse time, i.e. in Hz) is multiplied by the time variable to give a dimensionless value; the sine of that is dimensionless, and is then multiplied by the amplitude (a peak displacement), giving a result that is itself a displacement, which checks out and is what you wanted. Using the same reasoning, you can deduce that the frequency cannot be a vector (unless time is a vector too, which would imply that your x and y coordinates are subject to different times, probably not what you want), and so on. As you can see, this gives a quick way to check what units a variable should have and whether a physics equation "makes sense", and it's also handy for verifying that you didn't make an implementation or logic error somewhere.
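Alvaro's exact code isn't reproduced in this thread, but the standard form being discussed is x(t) = A sin(2*pi*f*t). As a hypothetical sketch (function and parameter names are my own), the dimensional check reads off directly from the code:

```cpp
#include <cmath>

const double kPi = 3.141592653589793;

// Sketch of the oscillation equation under discussion:
//   amplitude  [metres]  - peak displacement A
//   frequency  [1/s]     - frequency f in Hz
//   t          [seconds] - time
// Dimensional check: frequency * t is dimensionless, sin() of a
// dimensionless value is dimensionless, and multiplying by the amplitude
// yields metres, matching the displacement on the left-hand side.
double displacement(double amplitude, double frequency, double t)
{
    return amplitude * std::sin(2.0 * kPi * frequency * t);
}
```

Note that if `frequency` were a vector here, `frequency * t` would stop being a plain dimensionless scalar, which is exactly the mismatch the analysis catches.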
Posted by Bacterius on 25 March 2014 - 05:35 AM
Were you aware that the linked page describes an approach for solving existing puzzles rather than for generating puzzles? We can still help you to understand it if you wish though!
If the algorithm to solve an arbitrary puzzle is sufficiently constructive, it is often not hard to reverse it to produce an algorithm to generate a solvable puzzle (though such an algorithm can probably more readily be obtained by simply reversing the game rules, as frob suggests).
Posted by Bacterius on 20 March 2014 - 06:40 PM
You want a much higher shininess than 20 for water. Water is extremely reflective, and is usually a near-perfect mirror at most viewing angles (though you don't want to go that far in a shader, because of aliasing), so make that a lot higher, somewhere around 128-256. As it is now, your water looks like plastic.
You mention wave amplitude; that usually doesn't matter. If you've ever watched a pond or a lake in calm weather, there are almost no waves, yet the water sparkles. Why? Because those tiny waves are actually sharp (high frequency), so only a small part of them faces the right direction to reflect lots of light towards you. What does that mean translated to graphics? It means your normal map (and possibly your wave heightmap) isn't high-resolution enough, and is being blurred so much in the process that the wave normals lose all their high frequencies, resulting in a yucky "jello", almost flat appearance.
In shallow water, caustics play a huge part in lighting: light that refracts into the water is reflected off the bottom and back towards the surface, then refracted towards you, which tends to produce a kind of "ambient lighting" term for water (consider your last two pictures: all those unlit areas on the water surface should still get some light).
And finally, water is not air, it's a liquid. So you can't just render the plane (interface) between water and air and expect it to suffice. Light loses energy in a liquid, and is scattered by it, so the bottom of your ocean shouldn't be visible at all, except near the shore where it appears slightly blurred and tinted. Sufficiently deep water takes a color depending on its composition, usually ranging from light blue/green for shallow seawater to deep blue for deep seawater. Obviously it's unrealistic to simulate all of these in a game, but a cheap way to approximate it is to have a kind of "fog" in water depending on the distance between your terrain and the water plane (by reading the depth buffer) with exponential decay and a green/blue tint. That will almost certainly make your water look much better.
Fixing that and then adding proper reflection/refraction with a little caustic map should dramatically improve the quality of your water.
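The depth-based "fog" idea can be sketched in plain C++ for clarity (in practice this would live in the fragment shader, with the water depth reconstructed from the depth buffer). The struct, function name, and the absorption constant are invented for this example, not taken from any real engine:

```cpp
#include <cmath>

struct Color { float r, g, b; };

// Exponential "water fog": blend the sea-bottom color toward a water tint
// based on how far light travels through the water (Beer-Lambert style
// decay). absorption is a tunable per-unit-length extinction coefficient.
Color applyWaterFog(Color bottom, Color waterTint, float waterDepth, float absorption)
{
    // transmittance -> 1 at the shore (bottom fully visible),
    // transmittance -> 0 in deep water (color converges to the tint).
    float t = std::exp(-absorption * waterDepth);
    return { bottom.r * t + waterTint.r * (1.0f - t),
             bottom.g * t + waterTint.g * (1.0f - t),
             bottom.b * t + waterTint.b * (1.0f - t) };
}
```

At zero depth this returns the bottom color unchanged, and sufficiently deep water converges to the green/blue tint, which matches the behavior described above.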
Posted by Bacterius on 20 March 2014 - 06:18 PM
does having 2 cards use up the bus twice as quick?
Each card has its own bus, and they can also communicate directly over the SLI bridge (for synchronization, and I believe some types of resources are shared) without going the long way around: out through one card's PCI-E bus, into system memory, and back in through the other card's PCI-E bus.
Posted by Bacterius on 19 March 2014 - 09:16 PM
Then I run into the problem where I don't know what I'm handling. I guess requiring some sort of standard interface is required.
Yes, if you don't know what you're handling then you can't do anything with it. Though if you know that they are all unsigned integer types, you might be able to use a compile-time sizeof() to work out how many bits you're getting from the RNG, and use that to piece together a digit; I don't know if such a thing is idiomatic in C++, though.
To what extent should things be required? Should RNG classes also be expected to provide things like floats in the range [0, 1], and other things?
To be perfectly honest, there is no convention. Some people (like me) advocate separating the RNG from the type (distribution) of data generated, so that the RNG provides only a stream of bits (either as a literal stream of bits, or in chunks of 32/64 bits, whichever is easier), and then one class uses that to produce floating-point values, another to produce a normal distribution, another to produce integers between 0 and N, and so on. Other people prefer to couple the two together, so that you have one big "Random" object that usually doesn't expose its output directly, but is able to convert it to what you need on the fly. It's a trade-off between composability and convenience, but even if you don't separate them in code, I feel it is crucially important to understand the difference between a pseudorandom bit generator and a probability distribution.
Both approaches are sufficient in terms of features you need, but of course the code needed to make use of them will not be the same in both cases. At the end of the day, you (so far) only want to generate uniform "big integers" in a specific range. As I've shown in my previous post, you can build this using only a single primitive: "give me N random bits" (or "give me N random 32-bit integers", and so on). That is what your RNG needs to be able to do. There's no way to tell which particular signature is best-suited without knowing more about your code and its potential use cases, but they are all equivalent and sufficient.
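The separated design can be sketched in a few lines. Everything here is invented for illustration (the class names, the interface); the generator is a toy xorshift32 using Marsaglia's classic shift constants, standing in for whatever real PRNG you'd use:

```cpp
#include <cstdint>

// The generator's only job: emit raw 32-bit chunks of the bit stream.
// No distribution logic lives here.
class BitGenerator {
    uint32_t state;
public:
    explicit BitGenerator(uint32_t seed) : state(seed ? seed : 1) {}
    uint32_t next32() {
        state ^= state << 13;   // xorshift32 (Marsaglia's constants)
        state ^= state >> 17;
        state ^= state << 5;
        return state;
    }
};

// One of many possible distribution layers stacked on top of the raw
// bit stream: uniform doubles in [0, 1).
class UniformUnit {
public:
    double operator()(BitGenerator& g) const {
        return g.next32() / 4294967296.0;  // divide by 2^32
    }
};
```

A normal distribution, a bounded-integer distribution, etc. would each be another small class taking a `BitGenerator&`, which is the composability the separated design buys you.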
Posted by Bacterius on 19 March 2014 - 05:59 AM
Really? I highly doubt that, can anyone confirm?
Another thing the int32_t family of types does is guarantee two's complement arithmetic.
The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.
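Those guarantees are checkable at compile time. A small sketch (note that the conversion in the last check was technically implementation-defined before C++20, but the exact-width types' mandated two's complement representation makes -1 the value you get in practice on any implementation that provides them):

```cpp
#include <cstdint>
#include <climits>

// If <cstdint> provides these types at all, the standard pins down their
// representation exactly: exact width, no padding, two's complement.
static_assert(sizeof(int32_t) * CHAR_BIT == 32, "int32_t is exactly 32 bits");
static_assert(sizeof(int8_t) * CHAR_BIT == 8, "int8_t is exactly 8 bits");

// Under two's complement, the bit pattern 0xFF reinterpreted as a signed
// 8-bit integer is -1.
static_assert(static_cast<int8_t>(0xFF) == -1, "two's complement wraparound");
```

Note the exact-width types are optional: an implementation without a matching hardware type simply doesn't define them, which is why code often falls back to `int_least32_t`.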
Posted by Bacterius on 17 March 2014 - 12:40 AM
I'm using this call mostly for the Rabin-Miller Primality Test; is a uniform distribution actually required for this task?
Yes. If the base is not chosen uniformly, then the worst-case bounds of the primality test no longer hold. As usual, in practice a very small bias is unlikely to make a difference to the operation of the program, but if you can get it right, you should, because bias will screw you over eventually (it actually gets quite measurable in some cases). But, yeah, let's get this thread back on its rails.
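To make the uniformity point concrete, here's a sketch of a Miller-Rabin round for 64-bit integers; the "at most 1/4 of bases are strong liars" bound is stated over a base drawn uniformly from [2, n-2], which is the one line that matters here. This is illustration only (no arbitrary precision, and it leans on the GCC/Clang `__int128` extension for the modular multiply):

```cpp
#include <cstdint>
#include <random>

static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m) {
    return (unsigned __int128)a * b % m;   // GCC/Clang extension
}

static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m) {
    uint64_t r = 1; b %= m;
    for (; e; e >>= 1) { if (e & 1) r = mulmod(r, b, m); b = mulmod(b, b, m); }
    return r;
}

bool probablyPrime(uint64_t n, int rounds, std::mt19937_64& rng) {
    if (n < 4) return n == 2 || n == 3;
    if (n % 2 == 0) return false;
    uint64_t d = n - 1; int s = 0;
    while (d % 2 == 0) { d /= 2; ++s; }        // write n-1 = d * 2^s, d odd
    // The base MUST be uniform over [2, n-2] for the error bound to hold.
    std::uniform_int_distribution<uint64_t> pickBase(2, n - 2);
    for (int i = 0; i < rounds; ++i) {
        uint64_t x = powmod(pickBase(rng), d, n);
        if (x == 1 || x == n - 1) continue;
        bool witness = true;
        for (int r = 1; r < s; ++r) {
            x = mulmod(x, x, n);
            if (x == n - 1) { witness = false; break; }
        }
        if (witness) return false;             // definitely composite
    }
    return true;                               // probably prime
}
```

With a uniform base, each round has error probability at most 1/4 for composite n; a biased base generator voids that guarantee.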
In my opinion, if you'd like to keep it as "pure" as possible, then all you fundamentally need is a function that generates a uniformly distributed random integer of n digits (or bits; which unit you choose is a tradeoff between putting the digit/bit conversion into the random engine or elsewhere in the code) for arbitrary n, which makes the integer uniformly distributed between 0 and D^n - 1 (or 2^n - 1, for bits). At that point your random engine is finished, and you can specialize a distribution class to take those integers and generate any distribution you'd like from them, such as a more specific range, floating-point values, even numbers only, whatever.
So that kind of builds on top of the design of <random>, and in fact with some work it might be possible to implement your class such that it can be directly dropped into existing <random> classes so you can seamlessly leverage the functionality that's already there, but I don't know enough to tell you how to do that. And of course it's not the only approach, but it's the one I find the most logical from a design point of view (that you should try to make your integer class behave like an integer, such that you only need to implement a transitional function that generates the simplest uniform distribution possible from a bitstring, at which point all the bit-fiddling is over and you can use existing numerical methods to transform that distribution into anything you need).
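The "give me n uniform random bits" primitive might look like this, building the number as a little-endian vector of 32-bit digits from a raw bit source (`std::mt19937` stands in for whatever engine you end up with; the function name and digit layout are invented for this sketch):

```cpp
#include <cstdint>
#include <cstddef>
#include <random>
#include <vector>

// Generate a uniform random integer in [0, 2^nbits - 1], stored as
// little-endian 32-bit digits.
std::vector<uint32_t> randomBits(std::size_t nbits, std::mt19937& rng) {
    std::size_t ndigits = (nbits + 31) / 32;   // round up to whole digits
    std::vector<uint32_t> digits(ndigits);
    for (auto& d : digits)
        d = rng();                             // 32 fresh bits per digit
    // Mask off the excess bits in the top digit so every value in
    // [0, 2^nbits - 1] is equally likely, and nothing above is possible.
    std::size_t rem = nbits % 32;
    if (rem)
        digits.back() &= (uint32_t(1) << rem) - 1;
    return digits;
}
```

Everything else (bounded ranges, distributions) can then be built on top of this single primitive, which is exactly the layering described above.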
I guess I could remove the most general function; it would have generated a number up to as large as allowed by the implementation. If one really wanted this behavior, they could generate a random number with a bit count of either (digit_bits * max_digits) or 2^size_type_bits - 1, whichever is less, ignoring the fact that it'd potentially fail to allocate. The other two have an upper bound defined by either a number that is the divisor, or a number of bits.
Yes, I think you should remove that function. Arbitrary precision arithmetic suggests there is no upper bound (besides available memory) and conceptually it doesn't make much sense to speak of generating a random number on an unbounded interval. The n-digit/bit version I mentioned previously is general enough.
(btw: have you considered using the Baillie-PSW primality test? It's a bit of an unsung hero, but it tends to be much more efficient than Miller-Rabin computationally speaking from a practical perspective, though it is somewhat of a conjecture mathematically speaking. Anyway, just throwing it out there.)
Posted by Bacterius on 16 March 2014 - 11:50 PM
Everything is working great now! Glad you guys know the idiosyncrasies of MinGW. I would never have thought of that.
It's not MinGW in particular; all C/C++ compilers work that way, even command-line MSVC I believe. IDEs just hide it from you.
Posted by Bacterius on 16 March 2014 - 10:47 PM
I think you want -ld3d9 and -ld3dx9. The "-l" part is the flag; the "lib" prefix and the ".a"/".so"/".dll" extension are added automatically. I'm not sure why it's designed that way, probably historical reasons, but you only want to pass "-l" plus the name of the library without the "lib" prefix and the extension; the linker does the rest.
Posted by Bacterius on 16 March 2014 - 10:27 PM
What does "generating a random number" mean? Is it a arbitrary precision decimal class, otherwise what should the maximum number be? Anyway, each of these can be achieved efficiently by simply generating each digit at random followed by some post-processing. I don't think having a templated digit size should be a problem - the generation process is much more efficient if the digits are powers of two, but I think an algorithm can handle arbitrary digit sizes (and you can always specialize as needed).
So for the interface I'd go with a function like "int<digit_size> rand_range(int<digit_size> n, random_engine engine)" or something like that, along with "rand_bits" that takes the number of bits needed (ideally as a plain unsigned integer, not an arbitrary precision integer).
The RNG state itself should IMHO be separate from the way it is used in the code; it's not a "random digit generator", it's a PRNG, and so it should behave like one, with functions to generate random bits, a random integer, a random double, and so on (or just random bits, if you opt for separating the underlying random generator from the distribution to generate, which has its advantages). Another alternative is to extend your RNG class with a random arbitrary-precision integer function, but I don't really like that approach, as it couples the two classes.
I would try to promote interoperability with other RNG states. How were you thinking of proceeding? Is your RNG state custom-made specifically for your class, or were you planning on basing it on something standard (e.g. the C++11 random engines)?
Posted by Bacterius on 16 March 2014 - 09:16 PM
You need to actually pass all your source files. i.e.:
>g++ main.cpp engine.cpp otherfiles.cpp -o MinDX.exe -static-libgcc -static-libstdc++ -s
Otherwise they won't get compiled. The header files can be found because they are included on demand via #include directives, and the current working directory is implicitly searched for them. But because object files are modular, you usually compile each source file separately and then link them together. That can be done in a single command as given above, but usually you'd create a makefile or use an IDE so that only the source files which changed get recompiled, instead of restarting the compilation from scratch every single time.
Posted by Bacterius on 16 March 2014 - 06:46 PM
Do you know the outer product? The dot product is the inner product: it multiplies a 1*n matrix (row vector) with an n*1 matrix (column vector) and gives a 1*1 matrix (a scalar). The outer product multiplies an n*1 matrix with a 1*n matrix to obtain an n*n matrix. This is what you want here: p and q are column vectors (3*1) and their transposes are row vectors (1*3), so multiplying them together gives you a 3*3 matrix. You use the same rules as ordinary matrix multiplication to compute this outer product.
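Spelled out in code (the type aliases and function name are just for this sketch), each entry of the outer product is simply M[i][j] = p[i] * q[j], which is what the row-times-column rule of matrix multiplication collapses to when one factor is 3*1 and the other is 1*3:

```cpp
#include <array>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// Outer product: column vector p (3x1) times row vector q^T (1x3)
// gives a 3x3 matrix with entries M[i][j] = p[i] * q[j].
Mat3 outer(const Vec3& p, const Vec3& q) {
    Mat3 m{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            m[i][j] = p[i] * q[j];
    return m;
}
```

A handy sanity check: the trace of outer(p, q) equals the inner (dot) product of p and q, since both sum p[i] * q[i].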
Posted by Bacterius on 16 March 2014 - 05:45 AM
in mathematics people often use x, y and x', y',
so this x_, y_ simulates that, in my opinion (imo it is also not so bad; it makes the code look clearer than x1, y1, x2, y2)
naming is a hard part of programming
your proposition of src/dst is also not good if taken strictly,
because in a line you do not really get a source point and a destination point; I'm not sure I even really get a begin point and an end point
(just two points). Now I'm closer to using
DrawLine(int px, int py, int qx, int qy, color);
Well, if you go that way, the function is poorly named, because a line has no beginning and no end. So when you say "draw a line", what you really mean is "draw a segment" (which does have a beginning and an end point). And I don't think using an underscore as a substitute for a prime (') character is idiomatic; usually underscores are used as substitutes for whitespace.