

Member Since 28 Feb 2011
Offline Last Active Yesterday, 11:16 PM

#5269266 Criticism of C++

Posted by on 04 January 2016 - 03:48 PM

Which is just the issue... pointer arithmetic that lands outside the bounds of an array is undefined behavior. I'm not going to argue that this makes 99% of all programs ill-formed (with haphazard results) because every pointer arithmetic bears a result that is outside the bounds of some array (and given the fact that there is argv, there is always at least one other array).


... what? Not some array: the array that is involved in the pointer arithmetic expression! And while that specific part of the standard seems arbitrary in light of the modern, unified, fully byte-addressable memory model of today's architectures, it makes more sense in the context of segmented memory architectures. There, C still gives you only, say, a single int* pointer type, but you could have two int arrays in two different memory segments, and it's just not possible to meaningfully subtract the two array pointers, or add an integer to one array to somehow reach the other. With this in mind it makes sense not to let distinct arrays interact in any way (not that most code does this anyway).


EDIT: I think I see your misunderstanding now; the standard states that a pointer not otherwise part of an array may be treated as a one-element array in the context of pointer arithmetic (it is actually very clear on that point)


I agree some aspects of undefined behaviour can seem punishing in that a meaning could have been assigned to the operation that everyone would have been happy with and it would have made life much simpler. But, these things were decided upon a long time ago, and in many cases there were historical reasons for why the standard is written a certain way.

#5268988 How long does the C/C++ preprocessor keep going?

Posted by on 03 January 2016 - 08:12 AM

The algorithm followed by the preprocessor is actually quite elaborate and involves stacked contexts to correctly handle nested and/or recursive macros (for instance if a token is expanded using some macro, then that macro will not be considered in expanding the resulting token). Probably the most direct way to fully understand the system, if you want to, is to read the docs and maybe even the internal docs.


But in most cases the preprocessor can be thought of as a process that does a single pass over the input stream, keeps track of all preprocessor macros encountered so far, and tries to expand each token according to these macros. If a token is encountered before the macro that's supposed to handle it is defined, the token will be left untouched.


Be aware that despite its apparent simplicity, advanced use of the preprocessor is typically not portable; things will work on a particular compiler and will break horribly on another because it doesn't expand your tokens correctly or somesuch. If your preprocessor definitions are growing out of control, consider reviewing your design to see if all these macros are needed, and then think about using a more specialized preprocessing system like m4 or similar.

#5267590 Multiplication support in GLM

Posted by on 22 December 2015 - 10:28 PM

It doesn't look like anything geometrically, it's just component-wise multiplication. One typical use is in blending, where you might multiply colors together channel-wise. Not all vec3's are geometric, sometimes they are (ab?)used as a bag of three related values.

#5267102 What will change with HDR monitor ?

Posted by on 19 December 2015 - 11:12 PM

Since the original question was "What will change with HDR monitor?": if the monitors truly are HDR (as Hodgman said, "HDR" is quite overused by marketing, so what an "HDR monitor" really means depends), then one thing that will change for sure is your electricity bill. A monitor that can display a dynamic range around 4-10 times higher is bound to consume considerably more electricity when the "shiny stuff" lights up the pixels.


Funny how the "staying green" slogan is suddenly relevant here.


If the monitors had a high enough dynamic range then our GPUs might in contrast use less power, as they won't need to do things like tonemapping, bloom, lens flares and so on; the monitor and our eyes will do it automatically, so we may actually save electricity in the long run! (although we may overall lose due to people accidentally setting fire to their homes and losing their eyesight after displaying a rendered sun on their monitor)

#5264540 PRNG Question

Posted by on 01 December 2015 - 11:28 PM

Bacterius, so you're implying I should use a cryptographic algorithm to create my random numbers?


I'm saying that it's one surefire way to do it (that also provides a bunch more advantages when done right, such as an extremely small memory footprint, the ability to seed, zero issues with poorly distributed seeds, good parallelization properties, optimal quality of output, and effectively infinite period). Sure, they may be slightly slower than a highly optimized dedicated generator, but:


 - random number generation is almost never the bottleneck in software as almost all generators are completely CPU-bound (and if it is, there are ways around that, and if you seriously need every last clock cycle for some embedded or HPC task then you are definitely an outlier and already know the tradeoff between output quality and the amount of work you'll need to put in to actually achieve that)

 - on the other hand, random number quality is actually a much bigger deal, and you avoid plenty of subtle issues with the way "dedicated" generators expect to be used, e.g. "weak seeds" or the all-too-common "yeah, the first few bytes kinda don't really look random, just skip them" defect


So, yes, it's a heavy-duty solution, but it's a flexible one that I've come to rely on and that has never once failed me. Why settle for anything less when the cost is so low, and the benefits so great? I have nothing against things like the Mersenne twister or xorshift, they are nice and noteworthy algorithms in their own right, I just find them inferior in almost every way to the above when it comes to practical matters. Necessary reading if you are interested in knowing more.

#5264452 PRNG Question

Posted by on 01 December 2015 - 12:56 PM

As far as I understand, numbers produced by a PRNG will only be random relative to other numbers from that same sequence of numbers, not relative to numbers from a different sequence (with a different seed).


Only true for crappy PRNG's.


If you want to generate random numbers from a sequence of seeds (not a sequence of random numbers from a singular seed), look into hash functions instead.


Sure, what you're looking for is the term "DRBG" (deterministic random bit generator), which can easily be built from hash functions, though hash-based ones tend to be a bit slow. Building them from block ciphers is more efficient.

#5264280 Casting a vector of arrays

Posted by on 30 November 2015 - 12:44 PM


I don't know what C++ standard you use, but it's not true in C++11, which says at http://en.cppreference.com/w/cpp/language/types

I wouldn't say it's not true. char is still implied to be unsigned, but it is a distinct type from unsigned char as far as the compiler is concerned.


If you perform numeric operations with char, the result is the same value as with unsigned char. Also, if you compare the memory bits of a char and an unsigned char, they match for every value. They are just considered distinct types, so "unsigned char" in C++ code is just water for the compilation mill.



Whether char is signed or not is up to the implementation. Hell, modern compilers even let you choose, e.g. gcc with -funsigned-char and -fsigned-char.

#5263801 Lerp vs fastLerp

Posted by on 27 November 2015 - 05:35 AM


when does the first form produce a different result than the second form?


They are equivalent.

a * (1 - t) + b * t   // distribute a over (1 - t)
= a - a*t + b*t       // rearrange terms
= a + b*t - a*t       // factor the common term t out
= a + (b - a) * t


I think OP is aware of that, he is probably asking regarding floating-point. The two expressions are not equivalent in general; for instance if a = +infinity, b = 0, t = 0, then one of them evaluates to +infinity while the other one evaluates to NaN. I'm sure you can also come up with all sorts of fun examples where the two expressions aren't equivalent using catastrophic cancellation, denormal numbers, or other such IEEE 754 goodies.

#5263296 C# System.Numerics.Vectors slow?

Posted by on 23 November 2015 - 12:38 PM

Those look like SIMD registers to me: the xmm# registers are your CPU's SIMD registers. It just seems that the C# compiles to some pretty inefficient output. I see some extraneous shifting and moving. It's funny because it is doing MOVAPS but immediately performs MULSS instead of MULPS. It looks like it's pulling out single pieces of your matrix one at a time, placing them into the xmm registers, and then performing MULSS on a single scalar, rather than loading up 4 at a time and doing MULPS. I have no idea why though, (here comes random guesses) maybe the memory alignment is poor, or the transforms are stored transposed in memory.




On AMD64 the SSE instruction set is used by default for any floating-point computations instead of the x87 FPU.

#5260542 Unrestricted while loop?

Posted by on 04 November 2015 - 12:41 PM

Most GPU's are non-preemptive, meaning that once your shader is running... it will run to completion no matter what, without letting other stuff like, say, rendering your desktop, run in parallel. In other words your screen will freeze and if you have a driver watchdog it will reset the driver after some number of seconds.


To do what you want you should just get each invocation of the shader to do N passes on each pixel, and accumulate the results, where N is a reasonable number (large enough that you're not invoking a shader run for every pass, but not so large that your shader takes too long to run). If you are using a dedicated second graphics card not connected to a monitor then N can be as large as you want.


That way you can see the results of the iterative passes, N passes at a time. You'll know if N is too large if your desktop feels choppy to use.

#5257800 combating udp flood attacks

Posted by on 18 October 2015 - 12:48 PM

This will not make any difference. Even if you instantly drop all packets from a set of IP's in the kernel's network stack (with iptables or whatever) the packets are still flowing through your network pipe connecting you to the rest of the internet, degrading quality of service (which is the "denial of service" part). Nothing you do locally will prevent that, the DOS attack does not even have to target the port your application is using, the DOS attack is done on a server, not on an individual application.


As I understand it the common technique to mitigate DOS attacks these days is to put your servers behind load-balancers with an absolutely colossal bandwidth that cannot (easily) be flooded enough to prevent legitimate users from passing through, and having these load-balancers do the filtering work in response to incoming traffic as needed, e.g. cloudflare. You generally rent those as a service.


If that is overkill for you, then I think just using operating system tools (again, like iptables) will do nicely for your usecase if you just need to stop a client from connecting every now and then. It's transparent to your application and you can write scripts to automate the process. For instance with iptables:

$ iptables -A INPUT -s <ip-address> -j DROP

and your application will never hear from that IP again (remember to remove the rule eventually, because the IP could have gotten reused, because processing rules takes nonzero time, and because perma-banning IP's is generally bad form anyway).

#5257728 Checking if a bit is set, when it's not the first

Posted by on 17 October 2015 - 05:12 PM


return x & (1 << bit);

Which works by masking everything but that bit and checking if the result is nonzero.

#5257288 What is This doing A Function With a defined function inside it.

Posted by on 14 October 2015 - 10:44 PM

As for performance, the compiler is pretty good at inlining lambdas when appropriate, especially when they don't capture anything...

#5256062 C# Garbage Collection and performance/stalls

Posted by on 07 October 2015 - 12:56 PM

Use value types like structs for temporary allocations, these are limited to their scope and are efficiently released when the program's flow exits their scope. Then you don't need to bother the GC with a million tiny 16-byte objects that are created and immediately destroyed every frame.

#5255970 Unpredictable, ubiquitously rng seed

Posted by on 07 October 2015 - 03:04 AM

So again, the mechanism I propose is:
* Player chooses a move MOVE and a long (say 128-bit) random number SALT.
* First message is HASH(CONCATENATE(MOVE,SALT)).
* Once you have the message(s) from the other player(s), you submit a second message, which is CONCATENATE(MOVE,SALT).
* Collect the second message(s) from the other player(s), and verify HASH(SECOND_MESSAGE) == FIRST_MESSAGE.
* Compute the XOR of all the salts, and use that as the seed for a PRNG to be used in resolving the turn.


Another benefit of this approach is that the seed will be uniformly random as long as at least one player generates a uniformly random salt. If all players collude, they can force the seed to be the same as last turn's, or anything really, but this scenario is not interesting: if all players collude then they need not perform any commitment scheme to begin with and can just do whatever.


Although I would personally use PRF(SALT, MOVE) with the salt keying the pseudorandom function family (HMAC with a strong hash function is ubiquitous as a PRF and will do nicely) instead of concatenating the salt and the move. Naive concatenation of bitstrings feeding into plain hash functions tends to lead to subtle but devastating vulnerabilities such as length extension attacks, especially if not all of the inputs are fixed-length. If you're not careful you could accidentally (and completely invisibly) destroy the binding properties of your scheme.