

#5267102 What will change with HDR monitor?

Posted by on 19 December 2015 - 11:12 PM

Since the original question was "What will change with HDR monitor?": if the monitors truly are HDR (as Hodgman said, "HDR" is quite overused by marketing, so what "HDR monitor" really means depends), then one thing that will change for sure is your electricity bill. A monitor that can show a picture with a dynamic range around 4-10 times higher is bound to consume much more electricity when the "shiny stuff" lights up the pixels.


Funny how the "staying green" slogan is also important.


If the monitors had a high enough dynamic range then our GPUs might, in contrast, use less power, as they won't need to do things like tonemapping, bloom, lens flares and so on; the monitor and our eyes will do it automatically, so we may actually save electricity in the long run! (Although we may lose overall due to people accidentally setting fire to their homes and losing their eyesight after displaying a rendered sun on their monitor.)

#5264540 PRNG Question

Posted by on 01 December 2015 - 11:28 PM

Bacterius, so you're implying I should use a cryptographic algorithm to create my random numbers?


I'm saying that it's one surefire way to do it (that also provides a bunch more advantages when done right, such as an extremely small memory footprint, the ability to seed, zero issues with poorly distributed seeds, good parallelization properties, optimal quality of output, and effectively infinite period). Sure, they may be slightly slower than a highly optimized dedicated generator, but:


 - random number generation is almost never the bottleneck in software as almost all generators are completely CPU-bound (and if it is, there are ways around that, and if you seriously need every last clock cycle for some embedded or HPC task then you are definitely an outlier and already know the tradeoff between output quality and the amount of work you'll need to put in to actually achieve that)

 - on the other hand, random number quality is actually a much bigger deal, and you avoid plenty of subtle issues with the way "dedicated" generators expect to be used, e.g. "weak seeds" or the all-too-common "yeah, the first few bytes kinda don't really look random, just skip them" defect


So, yes, it's a heavy-duty solution, but it's a flexible one that I've come to rely on and that has never once failed me. Why settle for anything less when the cost is so low, and the benefits so great? I have nothing against things like the Mersenne twister or xorshift, they are nice and noteworthy algorithms in their own right, I just find them inferior in almost every way to the above when it comes to practical matters. Necessary reading if you are interested in knowing more.

#5264452 PRNG Question

Posted by on 01 December 2015 - 12:56 PM

As far as I understand, numbers produced by a PRNG will only be random relative to other numbers from that same sequence of numbers, not relative to numbers from a different sequence (with a different seed).


Only true for crappy PRNGs.


If you want to generate random numbers from a sequence of seeds (not a sequence of random numbers from a singular seed), look into hash functions instead.


Sure. What you're looking for is the term "DRBG" (deterministic random bit generator), which can easily be built from hash functions, though hash-based ones tend to be a bit slow. It's more efficient to build them from block ciphers.
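As a sketch of the hash-based flavour (Python here for concreteness; the class name and interface are made up for illustration, and this toy is not a vetted construction like the Hash_DRBG of NIST SP 800-90A):

```python
import hashlib

class HashDRBG:
    """Toy hash-based DRBG: output blocks are SHA-256(seed || counter)."""

    def __init__(self, seed: bytes):
        self.seed = seed
        self.counter = 0

    def random_bytes(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            # Each output block hashes the seed with a fresh counter value.
            block = hashlib.sha256(
                self.seed + self.counter.to_bytes(8, "big")
            ).digest()
            self.counter += 1
            out += block
        return out[:n]
```

Same seed, same stream; different seeds, unrelated streams; and the whole generator state is just the seed plus a counter.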

#5264280 Casting a vector of arrays

Posted by on 30 November 2015 - 12:44 PM


I don't know what C++ standard you use, but it's not true in C++11, which says at http://en.cppreference.com/w/cpp/language/types

I wouldn't say it's not true. char is still implied to be unsigned, but it is a distinct type from unsigned char as far as the compiler is concerned.


If you perform numeric operations with char, it will produce the same value as with unsigned char. Also, if you compare the memory bits of a char and an unsigned char, they will match for every value. They are just considered distinct types, so "unsigned char" in C++ code is just grist for the compilation mill.



Whether char is signed or not is up to the implementation. Hell, modern compilers even let you choose, e.g. gcc with -funsigned-char and -fsigned-char.

#5263801 Lerp vs fastLerp

Posted by on 27 November 2015 - 05:35 AM


when does the first form produce a different result than the second form?


They are equivalent.

a * (1 - t) + b * t    // multiply a into (1 - t)
= a - a * t + b * t    // rearrange to see the end result better...
= a + (b * t - a * t)  // group the terms containing t
= a + (b - a) * t      // ...and factor t out


I think OP is aware of that; he is probably asking about floating-point. The two expressions are not equivalent in general: for instance, if a = +infinity, b = 0, t = 0, then the first evaluates to +infinity while the second evaluates to NaN. I'm sure you can also come up with all sorts of fun examples where the two expressions aren't equivalent using catastrophic cancellation, denormal numbers, or other such IEEE 754 goodies.
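The infinity counterexample is easy to check directly (a quick sketch in Python, used here just to exercise IEEE 754 doubles):

```python
import math

def lerp_distributed(a, b, t):
    # The "a * (1 - t) + b * t" form.
    return a * (1 - t) + b * t

def lerp_factored(a, b, t):
    # The "a + (b - a) * t" form.
    return a + (b - a) * t

inf = float("inf")
# lerp_distributed(inf, 0.0, 0.0) = inf * 1 + 0 * 0          -> inf
# lerp_factored(inf, 0.0, 0.0)    = inf + (0 - inf) * 0
#                                 = inf + nan                 -> nan
```

Algebraically identical, numerically not.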

#5263296 C# System.Numerics.Vectors slow?

Posted by on 23 November 2015 - 12:38 PM

Those look like SIMD registers to me; the xmm# registers are your CPU's SIMD registers. It just seems that the C# compiles to some pretty inefficient output. I see some extraneous shifting and moving. It's funny because it does a MOVAPS but then immediately performs MULSS instead of MULPS. It looks like it's pulling out single pieces of your matrix one at a time, placing them into the xmm registers, and then performing MULSS on a single scalar, rather than loading up 4 at a time and doing MULPS. I have no idea why, though (here come random guesses): maybe the memory alignment is poor, or the transforms are stored transposed in memory.




On AMD64 the SSE instruction set is used by default for any floating-point computations instead of the x87 FPU.

#5260542 Unrestricted while loop?

Posted by on 04 November 2015 - 12:41 PM

Most GPUs are non-preemptive, meaning that once your shader is running, it will run to completion no matter what, without letting other stuff (like, say, rendering your desktop) run in parallel. In other words your screen will freeze, and if you have a driver watchdog it will reset the driver after some number of seconds.


To do what you want, you should get each invocation of the shader to do N passes on each pixel and accumulate the results, where N is a reasonable number (large enough that you're not invoking a shader run for every pass, but not so large that your shader takes too long to run). If you are using a dedicated second graphics card not connected to a monitor, then N can be as large as you want.


That way you can see the results of the iterative passes, N passes at a time. You'll know if N is too large if your desktop feels choppy to use.
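The host-side loop might look something like this sketch (Python as pseudocode for the driver side; `step` stands in for one pass of your shader, and the inner loop stands in for one GPU dispatch doing `batch` passes):

```python
def run_chunked(total_iters, n_per_dispatch, step, state=0.0):
    # Instead of one enormous shader invocation doing all `total_iters`
    # passes (which would freeze the display), dispatch N passes at a
    # time and keep the accumulated state in a buffer between dispatches.
    done = 0
    while done < total_iters:
        batch = min(n_per_dispatch, total_iters - done)
        for _ in range(batch):  # one dispatch of `batch` passes
            state = step(state)
        done += batch
        # ...present/inspect intermediate results here; the desktop
        # stays responsive because each dispatch is short...
    return state
```

Tuning `n_per_dispatch` is exactly the N trade-off described above.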

#5257800 combating udp flood attacks

Posted by on 18 October 2015 - 12:48 PM

This will not make any difference. Even if you instantly drop all packets from a set of IPs in the kernel's network stack (with iptables or whatever), the packets are still flowing through the network pipe connecting you to the rest of the internet, degrading quality of service (which is the "denial of service" part). Nothing you do locally will prevent that; the DoS attack does not even have to target the port your application is using, because the attack is on the server, not on an individual application.


As I understand it, the common technique to mitigate DoS attacks these days is to put your servers behind load balancers with absolutely colossal bandwidth that cannot (easily) be flooded enough to prevent legitimate users from passing through, and to have these load balancers do the filtering work in response to incoming traffic as needed (e.g. Cloudflare). You generally rent those as a service.


If that is overkill for you, then I think just using operating system tools (again, like iptables) will do nicely for your use case if you just need to stop a client from connecting every now and then. It's transparent to your application and you can write scripts to automate the process. For instance with iptables:

$ iptables -A INPUT -s <attacker-ip> -j DROP

and your application will never hear from that address again (remember to remove the rule eventually, because the IP could get reassigned, because processing rules takes nonzero time, and because perma-banning IPs is generally bad form anyway).

#5257728 Checking if a bit is set, when it's not the first

Posted by on 17 October 2015 - 05:12 PM


return x & (1 << bit);

Which works by masking out every bit except that one; the result is nonzero exactly when the bit is set.
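The same idiom wrapped in a helper, sketched in Python (the function name is made up here):

```python
def is_bit_set(x, bit):
    # Mask out every bit except `bit`; the masked value is nonzero
    # exactly when that bit is set, so compare against zero.
    return (x & (1 << bit)) != 0
```

For example, `is_bit_set(0b1010, 1)` is True while `is_bit_set(0b1010, 2)` is False.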

#5257288 What is This doing A Function With a defined function inside it.

Posted by on 14 October 2015 - 10:44 PM

As for performance, the compiler is pretty good at inlining lambdas when appropriate, especially when they don't capture anything...

#5256062 C# Garbage Collection and performance/stalls

Posted by on 07 October 2015 - 12:56 PM

Use value types like structs for temporary allocations, these are limited to their scope and are efficiently released when the program's flow exits their scope. Then you don't need to bother the GC with a million tiny 16-byte objects that are created and immediately destroyed every frame.

#5255970 Unpredictable, ubiquitously rng seed

Posted by on 07 October 2015 - 03:04 AM

So again, the mechanism I propose is:
* Player chooses a move MOVE and a long (say 128-bit) random number SALT.
* First message is HASH(CONCATENATE(MOVE,SALT)).
* Once you have the message(s) from the other player(s), you submit a second message, which is CONCATENATE(MOVE,SALT).
* Collect the second message(s) from the other player(s), and verify HASH(SECOND_MESSAGE) == FIRST_MESSAGE.
* Compute the XOR of all the salts, and use that as the seed for a PRNG to be used in resolving the turn.


Another benefit of this approach is that the seed will be uniformly random as long as at least one player generates a uniformly random salt. If all players collude they can force the seed to be the same as last turn's, or anything really, but that scenario is not interesting: if every player is colluding, they don't need a commitment scheme to begin with and can just do whatever.


Although I would personally use PRF(SALT, MOVE), with the salt keying the pseudorandom function family (HMAC with a strong hash function is ubiquitous as a PRF and will do nicely), instead of concatenating the salt and the move. Naively concatenating bitstrings and feeding them into plain hash functions tends to lead to subtle but devastating vulnerabilities, such as length-extension attacks, especially if not all of the inputs are fixed-length. If you're not careful you could accidentally (and completely invisibly) destroy the binding properties of your scheme.
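A minimal sketch of the whole scheme with that HMAC-as-PRF variant (Python; the helper names `commit`, `verify`, and `shared_seed` are made up for illustration):

```python
import functools
import hashlib
import hmac
import secrets

def commit(move):
    # Pick a random 128-bit salt and commit to `move` as HMAC(salt, move).
    salt = secrets.token_bytes(16)
    commitment = hmac.new(salt, move, hashlib.sha256).digest()
    return salt, commitment

def verify(move, salt, commitment):
    # Recompute the commitment from the revealed (move, salt) pair.
    expected = hmac.new(salt, move, hashlib.sha256).digest()
    return hmac.compare_digest(expected, commitment)

def shared_seed(salts):
    # XOR all revealed salts; uniform if at least one salt is uniform.
    return functools.reduce(
        lambda a, b: bytes(x ^ y for x, y in zip(a, b)), salts
    )
```

Each player publishes the commitment first, reveals (move, salt) second, and everyone runs `verify` on the reveals before XOR-ing the salts into the turn's PRNG seed.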

#5255870 Unpredictable, ubiquitously rng seed

Posted by on 06 October 2015 - 01:01 PM

Isn't it a problem that if I receive the other player's move I can pick my move in response? You can still solve that problem with my two-step approach (I am sure I am not the first person to think of it, but I don't know of a better name).


It's called a commitment scheme.

#5254750 Why did COD: AW move every file into the same folder where .exe is?

Posted by on 30 September 2015 - 04:06 AM

Besides, why do you care? It works and that's enough.


I downvoted your post because with that reasoning we'd all still be programming software in 8-bit assembly. There's nothing wrong with asking why some games do some things differently than others; it can spark interesting discussions, especially regarding file packing, asset loading, etc., even if the actual answer is as simple as "no reason, it just worked well enough that way". Answering "why do you care" is no way to foster a community of thoughtful and well-rounded programmers.

#5254329 Singleton and template Error (C++)

Posted by on 27 September 2015 - 11:41 PM

I have to agree with Sean here. I am seriously tired of dealing with the cognitive overhead of dependency injection frameworks that force me to pass the same damn object around everywhere because it's sufficiently basic and low-level that everything needs it; at the end of the day it's still a global, because everything everywhere has a reference to the exact same object. Things are just as bad as with a global, except now I have a dependency injection framework on top, and my code's simple logic is obscured by a weird meta-language describing its relationship with other simple dependencies (parameter-passing in constructors/factories, annotations, XML dependency trees; pick your poison).

I do not see a problem with having genuine services as long as their interface is sane, nor am I against explicitly codifying abstract dependencies when it makes sense and adds value. I have a brain and my colleagues do as well, yet at work I find that most of it is spent fighting against "modern" design patterns and contorting straightforward code to fit someone else's idea of "decoupled code" instead of actually implementing business logic, probably producing far more bugs as a result. As with everything else in life, design principles become bad and dangerous when taken in excess. If in doubt, stop and think.


Anyway, frob, I found it interesting that you used the word "generally" followed by an unconditional statement. What, in your opinion, would be an uncommon use case where a singleton or service locator would be a sensible design choice?