


#5144015 Why are my ray traced reflections wrong?

Posted by Bacterius on 02 April 2014 - 10:24 PM

Since your plane doesn't appear to be reflecting light in the same way as your spheres regardless of the reflection formula you use, I would check that your surface normal is pointing the right way for both objects. It's almost certain that your sphere surface normals are pointing into the spheres while your plane surface normal is pointing upwards (or vice versa).


Remember: the surface normal is typically defined as always pointing "outwards", that is, if V is the direction of the ray as it hits the geometry, then dot(V, N) < 0 (so that dot(R, N) > 0 where R is the reflected ray). You can define it the other way, with it pointing inwards if you want, but you have to be consistent and make sure this is true for all your geometry, since the reflection formula kind of "needs to know" which way to reflect your incident vector, and that depends on the orientation of the normal vector.


It's fairly easy to enforce if you're only doing reflection, since you can do the dot product check and flip your normal as needed. It gets messy once you start doing refraction/transparency: non-watertight meshes are physically meaningless there (they have no boundary), and you can't just flip normals, since you need to keep track of which object(s) your ray is currently inside of. You have to be a lot more careful in those situations to make sure your geometry is self-consistent. For opaque reflection, though, you can just hack the surface normal to point in whichever direction you need and you're good to go (again, because with reflection only, your ray is always "outside" and you know it).
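To make the "flip as needed" trick concrete, here is a small sketch (in Python rather than whatever language the raytracer uses, and not the poster's actual code) of the standard reflection formula with the orientation check applied first:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale(v, s):
    return tuple(x * s for x in v)

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def reflect(V, N):
    """Reflect incident direction V about surface normal N (both unit length).

    Flips N if it faces the same way as V, enforcing dot(V, N) < 0 as
    described above. This is fine for reflection only; don't do it
    blindly once refraction is involved.
    """
    if dot(V, N) > 0:
        N = scale(N, -1.0)
    return sub(V, scale(N, 2.0 * dot(V, N)))
```

With this check, a ray travelling straight down reflects straight up regardless of whether the surface normal was stored pointing up or down, which is exactly the consistency the post is asking for.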


EDIT: if you could show your code for the GetSurfaceNormal() method, that should help track down the bug.

#5143354 AES Encryption

Posted by Bacterius on 30 March 2014 - 08:08 PM

The expanded key material for encryption and decryption is the same AFAIK (unless there exists some strange variant where it is not), so if your encryption routines work, that suggests your decryption routines are wrong. Perhaps you could post them so we can take a look.

#5142840 Cryptography - am I doing it right?

Posted by Bacterius on 28 March 2014 - 07:23 AM

Thanks guys! :)

I've had a think, and I've decided that using TLS isn't necessary. This is an open source project, so people can figure out the protocol either way. And there's no way they can retrieve the password from the hash.

However I won't remove the encryption currently in place, because it's been a learning experience. I'll switch to using RSA key exchange just because I can. :)


Ignoring absolutely all of the advice in this thread is your call, but "people can figure out the protocol either way", "there's no way they can retrieve the password from the hash", "it's been a learning experience", and "just because I can" really aren't good reasons to write security-related code, and don't inspire confidence. There's a reason it's recommended not to roll your own. Learning experiences in cryptography should be confined to personal experimentation; as soon as you're handling user information, you are morally and ethically obligated* to secure it to the best of your ability, and if that means using existing, industry-standard, proven technology, or contracting an expert to implement or audit your code, so be it. Just be aware that by making this choice, you are almost certainly not acting in your or your users' best interest.


* Depending on what your program does, you may in fact be legally required to submit such code for an audit, and be held legally responsible for protecting your users' privacy. This probably doesn't apply in your case; usually it applies to banking authentication and transactions, storing employee payroll information, and so on. But it is worth keeping in mind that if a significant data breach occurs and a company is found not to have acted responsibly because it used a home-made security backend, things can occasionally become very, very unpleasant for them.

#5142466 Need help with 2D oscillation

Posted by Bacterius on 26 March 2014 - 10:06 PM

What I am trying to understand is how they (amplitude and frequency) are used/implemented in code (specifically the code you posted).


When you're not sure what equation you should be using to describe some motion, or are unsure what the variables represent, it's a good habit to do a quick dimensional analysis on the equation, using the simple rules below:

- dimensions multiply and divide as usual

- you cannot add or subtract different dimensions

- the argument and result of a transcendental function (sin, exp, log, ...) are dimensionless


Using this on the equation Alvaro posted, you can see that the frequency (as inverse time, i.e. in Hz) is multiplied with the time variable to give a dimensionless value; the sine of that is dimensionless, and is then multiplied with the amplitude (a peak displacement), giving a result that is itself a displacement, which checks out and is what you wanted. Using the same reasoning you can deduce that the frequency cannot be a vector (unless time is a vector too, which would imply that your x and y coordinates are subject to different times, probably not what you want), and so on. As you can see, this gives a quick way to check what units a variable should have and whether a physics equation "makes sense", and is also handy to verify that you didn't make an implementation/logic error somewhere.
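Alvaro's exact code isn't reproduced here, but for the standard oscillation form x(t) = A sin(2*pi*f*t) the analysis reads off directly; a minimal sketch with the units noted in comments:

```python
import math

def displacement(amplitude, frequency_hz, t_seconds):
    """Displacement of a simple oscillator at time t.

    Units: amplitude in metres, frequency in Hz (1/s), t in seconds.
    """
    # [1/s] * [s] -> dimensionless, as the sine's argument must be
    phase = 2.0 * math.pi * frequency_hz * t_seconds
    # [m] * sin(dimensionless) -> metres, a displacement as required
    return amplitude * math.sin(phase)
```

At t = 0.25 s with f = 1 Hz the sine peaks, so displacement(2.0, 1.0, 0.25) is (numerically) 2.0; swap in a vector for the frequency and the phase stops being dimensionless, which is exactly what the rule above flags.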

#5141969 auto generate mahjong level with at least 1 way to win

Posted by Bacterius on 25 March 2014 - 05:35 AM

Were you aware that the linked page describes an approach for solving existing puzzles rather than for generating puzzles?  We can still help you to understand it if you wish though!


If the algorithm to solve an arbitrary puzzle is sufficiently constructive, it is often not hard to reverse it to produce an algorithm to generate a solvable puzzle :) (though such an algorithm can probably more readily be obtained by simply reversing the game rules, as frob suggests).
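To illustrate the "reverse the game rules" idea, here is a deliberately simplified toy model (tiles only block the tile below them in the same stack, unlike real mahjong layouts, and all names are made up): the board is dealt by playing the game backwards, pushing matched pairs, so undoing the pushes in reverse order is a guaranteed solution.

```python
import random

def generate_board(num_stacks, num_pairs, tile_kinds, rng=None):
    """Deal a board by playing in reverse: each step pushes two tiles of
    the same kind onto two different stacks. The recorded pushes, undone
    in reverse order, form a valid solution, so the board is solvable
    by construction."""
    rng = rng or random.Random()
    board = [[] for _ in range(num_stacks)]
    placements = []
    for _ in range(num_pairs):
        kind = rng.choice(tile_kinds)
        a, b = rng.sample(range(num_stacks), 2)  # two distinct stacks
        board[a].append(kind)
        board[b].append(kind)
        placements.append((a, b))
    return board, list(reversed(placements))

def solve_with(board, moves):
    """Replay a solution: each move pops a matching pair off stack tops."""
    board = [list(s) for s in board]
    for a, b in moves:
        if not board[a] or not board[b] or board[a][-1] != board[b][-1]:
            return False
        board[a].pop()
        board[b].pop()
    return all(not s for s in board)
```

The key property carries over to the real game: at the moment a step is undone, every later step has already been undone, so that step's two tiles are necessarily removable.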

#5140810 Good specular for water surface

Posted by Bacterius on 20 March 2014 - 06:40 PM

You want a much higher shininess than 20 for water. Water is extremely reflective, and is usually a near perfect mirror at most viewing angles (though you don't want to do that in a shader because of aliasing), so make that a lot higher, maybe somewhere near 128-256. As it is now your water looks like plastic.


You mention wave amplitude - that usually doesn't matter, if you've ever watched a pond or a lake in calm weather, there are almost no waves, yet the water sparkles. Why? Because those tiny waves are actually sharp (high frequency), so only a small part of them is able to face the right direction to reflect lots of light towards you. What does that mean when translated to graphics? It means your normal map (and possibly wave heightmap) isn't high resolution enough, and is being blurred so much in the process that the wave normals lose all their high frequencies, resulting in a yucky "jello", almost flat appearance.


In shallow water, caustics play a huge part in lighting: light that refracts into the water is reflected off the bottom and back towards the surface, then refracted towards you, which tends to produce a kind of "ambient lighting" term for water (consider your last two pictures - all those unlit areas on the water surface should still get some light).


And finally, water is not air, it's a liquid. So you can't just render the plane (interface) between water and air and expect it to suffice. Light loses energy in a liquid, and is scattered by it, so the bottom of your ocean shouldn't be visible at all, except near the shore where it appears slightly blurred and tinted. Sufficiently deep water takes a color depending on its composition, usually ranging from light blue/green for shallow seawater to deep blue for deep seawater. Obviously it's unrealistic to simulate all of these in a game, but a cheap way to approximate it is to have a kind of "fog" in water depending on the distance between your terrain and the water plane (by reading the depth buffer) with exponential decay and a green/blue tint. That will almost certainly make your water look much better.


Fixing that and then adding proper reflection/refraction with a little caustic map should dramatically improve the quality of your water.
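The depth-fog hack described above might look like this per pixel (a Python sketch rather than shader code; the absorption constant is an arbitrary artistic choice, not a physical value):

```python
import math

def water_fog(bottom_color, water_tint, water_depth, absorption=0.5):
    """Blend the seabed colour toward the water tint with exponential decay.

    water_depth is the distance the view ray travels through water
    (in a shader, derived from the depth buffer); water_tint is the
    green/blue colour deep water converges to.
    """
    t = math.exp(-absorption * water_depth)  # 1 at the surface, -> 0 when deep
    return tuple(b * t + w * (1.0 - t)
                 for b, w in zip(bottom_color, water_tint))
```

At zero depth the bottom shows through untinted; at large depths the result converges to the water tint, so the ocean floor fades out instead of staying fully visible.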

#5140807 sli and the pci express bus use

Posted by Bacterius on 20 March 2014 - 06:18 PM

does having 2 cards use up the bus twice as quick?


Each card has its own bus, and they can also communicate (synchronization, and I believe some types of resources are shared) over the SLI bridge, without going the long way around: through one card's PCI-E bus to system memory, and back down the other card's PCI-E bus.

#5140525 Random Arbitrary Precision Numbers

Posted by Bacterius on 19 March 2014 - 09:16 PM

Then I run into the problem where I don't know what I'm handling. I guess requiring some sort of standard interface is required.


Yes, if you don't know what you're handling then you can't do anything with it ;) Though if you know that they are all unsigned integer types, you might be able to use a compile-time sizeof() to work out how many bits you're getting from the RNG, and use that to piece together a digit; I don't know if such a thing is idiomatic in C++ though.


To what extent should things be required? Should RNG classes also be expected to provide things like floats in the range [0, 1], and other things?


To be perfectly honest there is no convention. Some people (like me) advocate for separating the RNG from the type (distribution) of data generated, so that the RNG provides only a stream of bits (either as a literal stream of bits, or by chunks of 32/64 bits, whichever is easier) and then another class uses that to produce floating point values, another class uses that to produce a normal distribution, another class uses that to produce integers between 0 and N, and so on. Other people prefer to couple the two together, so that you have this big "Random" object that usually doesn't expose its output directly, but is able to convert it to what you need on the fly. It's a trade-off between composability and convenience, but even if you don't separate them in code I feel it is crucially important to understand the difference between a pseudorandom bit generator and a probability distribution.


Both approaches are sufficient in terms of features you need, but of course the code needed to make use of them will not be the same in both cases. At the end of the day, you (so far) only want to generate uniform "big integers" in a specific range. As I've shown in my previous post, you can build this using only a single primitive: "give me N random bits" (or "give me N random 32-bit integers", and so on). That is what your RNG needs to be able to do. There's no way to tell which particular signature is best-suited without knowing more about your code and its potential use cases, but they are all equivalent and sufficient.
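The separation the first camp advocates can be sketched in a few lines (Python stand-ins with made-up names, not a suggested production design): one class that only emits raw bits, and distribution classes that consume them.

```python
import random

class BitSource:
    """The "engine" half: produces nothing but uniform random bits.

    Backed by Python's Mersenne Twister purely for illustration.
    """
    def __init__(self, seed):
        self._rng = random.Random(seed)

    def bits(self, n):
        """Return n uniformly random bits as a non-negative integer."""
        return self._rng.getrandbits(n)


class UniformBelow:
    """A "distribution" half: uniform integers in [0, bound), built
    on raw bits via rejection sampling."""
    def __init__(self, bound):
        self.bound = bound
        self.nbits = max(1, (bound - 1).bit_length())

    def sample(self, source):
        while True:
            x = source.bits(self.nbits)
            if x < self.bound:  # reject out-of-range draws to stay uniform
                return x
```

Floats in [0, 1), normal variates, and so on would simply be further classes consuming the same BitSource, which is the composability the split buys you.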

#5140296 Exactly what's the point of 'int32_t', etc.

Posted by Bacterius on 19 March 2014 - 05:59 AM


Another thing the int32_t family of types does is guarantee two's complement arithmetic.

Really? I highly doubt that, can anyone confirm?



It does.




The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.
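The guaranteed representation can be observed from any language that exposes raw bytes; for instance, Python's struct module uses two's complement for its standard-size signed formats, so the exact byte patterns intN_t mandates can be checked directly:

```python
import struct

# Standard-size struct formats are two's complement, so these byte
# patterns match what int8_t / int32_t guarantee in C:
assert struct.pack('<b', -1) == b'\xff'              # int8_t: -1 is all ones
assert struct.pack('<b', -128) == b'\x80'            # int8_t minimum value
assert struct.pack('<i', -2) == b'\xfe\xff\xff\xff'  # int32_t, little-endian
```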

#5139645 Random Arbitrary Precision Numbers

Posted by Bacterius on 17 March 2014 - 12:40 AM

I'm using this call mostly for the Rabin-Miller Primality Test; is a uniform distribution actually required for this task?


Yes. If the base is not chosen uniformly then the worst-case bounds of the primality test no longer hold. As usual, in practice a very small bias is unlikely to make a difference in the operation of the program, but if you can get it right, you should, because a bias will screw you over eventually (and it actually gets quite measurable in some cases). But, yeah, let's get this thread back on its rails.
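For reference, here is a sketch (not code from the thread) of Miller-Rabin with the base drawn uniformly from [2, n - 2]; Python's randrange is itself built on rejection sampling over raw bits, which is what avoids the modulo bias discussed above:

```python
import random

def is_probable_prime(n, rounds=20, rng=None):
    """Miller-Rabin primality test with uniformly chosen bases."""
    rng = rng or random.Random()
    if n < 2:
        return False
    for p in (2, 3, 5, 7):          # catch small n and easy composites
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2**s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)  # uniform base in [2, n - 2]
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # a is a witness: n is composite
    return True
```

A base taken as `random_bits % (n - 3) + 2` instead would slightly over-represent small bases, which is precisely the kind of bias that silently weakens the worst-case error bound.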


In my opinion, if you'd like to keep it as "pure" as possible, then all you fundamentally need is a function that generates a uniformly distributed random integer of n digits (or bits - which unit you choose is a tradeoff between putting the digit/bit conversion into the random engine or elsewhere in the code) for arbitrary n, which makes the integer uniformly distributed between 0 and D^n - 1 (or 2^n - 1, if bits). At that point your random engine is finished, and you can specialize a distribution class to take those integers and generate any distribution you'd like from them, such as a more specific range, floating-point values, even numbers only, etc. etc.. whatever.


So that kind of builds on top of the design of <random>, and in fact with some work it might be possible to implement your class such that it can be directly dropped into existing <random> classes so you can seamlessly leverage the functionality that's already there, but I don't know enough to tell you how to do that. And of course it's not the only approach, but it's the one I find the most logical from a design point of view (that you should try to make your integer class behave like an integer, such that you only need to implement a transitional function that generates the simplest uniform distribution possible from a bitstring, at which point all the bit-fiddling is over and you can use existing numerical methods to transform that distribution into anything you need).




I guess I could remove the most general function; it would have generated a number up to as large as allowed by the implementation. If one really wanted this behavior, they could generate a random number with a bit count of either (digit_bits * max_digits) or 2^size_type_bits - 1, whichever is less, ignoring the fact that it'd potentially fail to allocate. The other two have an upper bound defined by either a number that is the divisor, or a number of bits.


Yes, I think you should remove that function. Arbitrary precision arithmetic suggests there is no upper bound (besides available memory) and conceptually it doesn't make much sense to speak of generating a random number on an unbounded interval. The n-digit/bit version I mentioned previously is general enough.


(btw: have you considered using the Baillie-PSW primality test? it's a bit of an unsung hero, but in practice it tends to be much more efficient than Miller-Rabin, even though its error bound is somewhat of a conjecture mathematically speaking - anyway, just throwing it out there)

#5139638 MinGW compilation problem

Posted by Bacterius on 16 March 2014 - 11:50 PM

Thanks Bacterius.


Everything is working great now! Glad you guys know the idiosyncrasies of MinGW. I would never have thought of that. :)


It's not MinGW in particular; all C/C++ compilers work that way - even command-line MSVC, I believe. IDEs just hide it :)

#5139628 MinGW compilation problem

Posted by Bacterius on 16 March 2014 - 10:47 PM

I think you want -ld3d9 and -ld3dx9: the "-l" part is the flag itself, and the "lib" prefix and ".a"/".dll"/".so" extension are added automatically. I'm not sure why it's designed that way, probably historical reasons :) but yeah, you only want to pass "-l" plus the name of the library without the "lib" prefix and the extension; the compiler does the rest.

#5139623 Random Arbitrary Precision Numbers

Posted by Bacterius on 16 March 2014 - 10:27 PM

What does "generating a random number" mean? Is it an arbitrary precision decimal class? Otherwise, what should the maximum number be? Anyway, each of these can be achieved efficiently by simply generating each digit at random followed by some post-processing. I don't think having a templated digit size should be a problem - the generation process is much more efficient if the digit size is a power of two, but the algorithm can handle arbitrary digit sizes (and you can always specialize as needed).


So for the interface I'd go with a function like "int<digit_size> rand_range(int<digit_size> n, random_engine engine)" or something like that, along with "rand_bits" that takes the number of bits needed (ideally as a plain unsigned integer, not an arbitrary precision integer).
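A sketch of how rand_bits and rand_range could fit together (the engine interface and names like next_u32 are made up for illustration, not from the thread): build the big integer from fixed-width words, then get a bounded range by rejection.

```python
import random

class Engine32:
    """Stand-in generator exposing 32-bit words (hypothetical interface)."""
    def __init__(self, seed):
        self._rng = random.Random(seed)

    def next_u32(self):
        return self._rng.getrandbits(32)

def rand_bits(nbits, engine):
    """Assemble an arbitrary-precision integer of nbits uniform random
    bits by concatenating 32-bit words from the engine."""
    x = 0
    for _ in range((nbits + 31) // 32):
        x = (x << 32) | engine.next_u32()
    return x >> (-nbits % 32)  # drop the excess low-order bits

def rand_range(n, engine):
    """Uniform integer in [0, n], via rejection sampling on rand_bits."""
    k = n.bit_length()
    while True:
        x = rand_bits(k, engine)
        if x <= n:
            return x
```

Since each draw uses exactly bit_length(n) bits, the rejection loop accepts with probability greater than 1/2 per iteration, so the expected cost is at most two draws.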


The RNG state itself should IMHO be separate from the way it is used in the code, it's not a "random digit generator", it's a PRNG and so should behave that way, with functions to generate random bits, a random integer, a random double, and so on... (or just random bits, if you opt for separating the underlying random generator from the distribution to generate, which has its advantages). Another alternative is to extend your RNG class with a random arbitrary precision integer function, but I don't really like that approach as it couples the two classes.


I would try and promote interoperability with other RNG states, how were you thinking of proceeding? Is your RNG state custom-made specifically for your class, or were you planning on basing it on something standard (e.g. the C++11 random engines)?

#5139604 MinGW compilation problem

Posted by Bacterius on 16 March 2014 - 09:16 PM

You need to actually pass all your source files. i.e.:

>g++ main.cpp engine.cpp otherfiles.cpp -o MinDX.exe -static-libgcc -static-libstdc++ -s

Otherwise they won't get compiled. The header files can be found because they are included on demand via #include directives, and the current working directory is implicitly searched for header files. But due to the modular nature of object files, you usually compile each source file separately and then link them together. That can be done in a single command as given above, but usually you'd prefer to create a makefile or use an IDE, so that only the source files which changed are recompiled and you don't restart the compilation from scratch every single time.

#5139580 vector into matrix

Posted by Bacterius on 16 March 2014 - 06:46 PM

Do you know the outer product? The dot product is the inner product, which multiplies a 1*n matrix (row vector) with an n*1 matrix (column vector) and gives a 1*1 matrix (a scalar). The outer product multiplies an n*1 matrix with a 1*n matrix to obtain an n*n matrix. This is what you want here: p and q are column vectors (3*1) and their transposes are row vectors (1*3). Multiplying them together gives you a 3*3 matrix. You use the same rules as ordinary matrix multiplication to compute this outer product.


Reading: http://en.wikipedia.org/wiki/Outer_product
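A minimal illustration (plain Python, not tied to any particular math library): the outer product p q^T is just ordinary matrix multiplication of an n*1 column by a 1*n row.

```python
def outer(p, q):
    """Outer product p q^T of two length-n vectors: the n*n matrix
    whose (i, j) entry is p[i] * q[j]."""
    return [[pi * qj for qj in q] for pi in p]
```

For p = (1, 2, 3) and q = (4, 5, 6) this yields the 3*3 matrix [[4, 5, 6], [8, 10, 12], [12, 15, 18]], each row being q scaled by one component of p.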