samoth

Member Since 18 Jan 2008

#5301733 Trusting The Client

Posted by samoth on 21 July 2016 - 06:21 AM

The Bloom filter idea sounds great, but I'm unsure of how you'd implement it in practice. If you're treating it as a bunch of false positive collisions, what's the client supposed to do in the time while it waits for the server to report that it's not a real collision?

Does it need to do something? It's perfectly acceptable if it shows "kaboom" when the next tick's server reply comes in (the OP says 20 updates per second, so that's not long), is it not? The player will still be standing on the trap then.

Normally, a game has a pattern more or less like this: Client says "I'm moving to XYZ", and starts simulating locally to cover the network latency, and shows the move to the user, assuming the move will be OK. Eventually, the server confirms the move (or well, does not).
In this case, the server's reply might very well be: "Move confirmed. Oh by the way, you stepped on a trap".


In real life, when you step on a land mine, there is a fraction of a second while you're already lifting your foot again before you realize the "click", and then the mine only goes off half a second or so later. That seems to be a "works perfectly OK" method of building land mines. They wouldn't do that if it didn't work. I'd assume nobody will find a similar (indeed much shorter) delay disturbing or even immersion-breaking in a game.


#5301727 Trusting The Client

Posted by samoth on 21 July 2016 - 05:36 AM

I'm feeling uneasy with trusting the client. Without even addressing the paragraph containing fireballs: You make sure that people are not speedhacking by taking a delta. OK, fine.

So that means as long as I do not move faster than running speed, I am allowed to walk over water and over chasms. I am also allowed to fly and attack my enemies (who cannot reach me with melee) with missile attacks from above, as long as I am not moving faster than X.

Having the client do the physics can be OK, but there certainly needs to be a validation for plausibility, or you need a means of making sure that cheating is not that easy and has an overwhelming chance of being detected.

One example of how such an offload can be done: let the client do the collision checking against traps that the player can step on, using a Bloom filter (or a spatial hash with enough collisions). Bloom filters have false positives, so the knowledge the client can extract and use to its advantage is limited. When the client reports "hit something (maybe)", the server checks whether something really happened. If a client doesn't report every so often, or if some explicit verification markers aren't reported, it is cheating.
It's hard for the client to selectively drop only the harmful events, because it doesn't know what it is that just gave a "positive" result; it might be a true positive or a false positive. On the other hand, the server only has to do the heavy lifting once every 20 or 30 (or even fewer) steps, not every step.
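
To make the idea concrete, here is a minimal sketch of such a filter. The hash function, sizes, and names are made up for illustration; this is not production code:

#include <cstddef>
#include <cstdint>
#include <vector>

// Rough sketch of a trap Bloom filter. The server builds it from the real
// trap cells and ships it to the client; the client can only ever learn
// "maybe a trap here", never the exact trap list.
struct TrapFilter {
    std::vector<uint64_t> bits;
    explicit TrapFilter(std::size_t num_bits) : bits((num_bits + 63) / 64) {}

    // Cheap made-up mix; any decent hash with different seeds works.
    static uint64_t hash(int32_t x, int32_t y, uint64_t seed) {
        uint64_t h = seed ^ ((uint64_t(uint32_t(x)) << 32) | uint32_t(y));
        h *= 0x9E3779B97F4A7C15ull;
        return h ^ (h >> 31);
    }
    void set(uint64_t h)        { bits[(h >> 6) % bits.size()] |= 1ull << (h & 63); }
    bool test(uint64_t h) const { return bits[(h >> 6) % bits.size()] & (1ull << (h & 63)); }

    void add(int32_t x, int32_t y) {               // server side: insert a real trap
        set(hash(x, y, 1)); set(hash(x, y, 2)); set(hash(x, y, 3));
    }
    bool maybe_trap(int32_t x, int32_t y) const {  // client side: false = definitely safe,
        return test(hash(x, y, 1)) &&              // true = report "hit something (maybe)"
               test(hash(x, y, 2)) && test(hash(x, y, 3));
    }
};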


#5301376 Vulkan is Next-Gen OpenGL

Posted by samoth on 19 July 2016 - 10:06 AM

Async compute is looking to be a killer feature for DX12 / Vulkan titles. The benchmarks indicate that Pascal's implementation of async compute isn't half as good as AMD's.

Ah, one should be careful with such a blanket statement (unless you have some more detailed information?). Benchmarks are rarely unbiased and never fair, and interpreting the results can be tricky. I'm not trying to defend nVidia here, and you might quite possibly even be right, but based solely on the linked results, this is a bit of a hasty conclusion.

From what one can see in that video, one possible interpretation is "Yay, AMD is so awesome, nVidia sucks". However, another possible interpretation might be "AMD sucks less with Vulkan than with OpenGL". Or worded differently, AMD's OpenGL implementation is poor, with Vulkan they're up-to-par.

Consider the numbers. The R9 Fury is meant to attack the GTX980 (which by the way is Maxwell, not Pascal). With Vulkan, it has more or less the same FPS, give or take one frame. It's still way slower than the Pascal GPUs, but comparing against these wouldn't be fair since they're much bigger beasts, so let's skip that. With OpenGL, it is however some 30+ percent slower than the competing Maxwell card.

All tested GPUs gain from using Vulkan, but for the nVidia ones it's in the 6-8% range while for the AMDs it's in the 30-40% range. I think there are really two ways of interpreting that result.


#5301361 can c++ be used as a scripting language

Posted by samoth on 19 July 2016 - 08:09 AM

In a development build, each script is compiled into a single shared library for hot-reloading, fast iteration times, etc.
In a retail build, all scripts are compiled into the game executable, so the builds that go through QA are also what ends up in the user's hands - 100% identical.

Well no, that was my point. While it's the correct thing to do (from a security point of view), it does not lead to 100% identical code being executed.

Not only is the function binding and calling different between "same executable" and "in a shared library" (this is rather pedantic, since it should hopefully make no difference, but who knows), but also, more importantly, you can expect a modern compiler to perform whole-program optimizations on "same executable" code that are simply impossible across shared modules where the respective part isn't present at link time.

Hopefully, you never see a difference (and I guess most of the time you won't), but you can never be 100.0% sure (only 99%). The things that are most nasty to debug are the ones where it works fine on one machine, but doesn't on another, and then it turns out that's because they were running kind-of-the-same code, but not exactly.
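
For illustration, the dev-build binding being described typically looks something like this (a sketch with made-up names, not the OP's code; LoadLibrary/GetProcAddress would be the Windows analogue). The point is that the call goes through a pointer obtained at runtime, which the optimizer cannot see through:

#include <dlfcn.h>   // POSIX dynamic loading
#include <cstdio>

typedef void (*script_update_fn)(float dt);

int main() {
    // Dev build: bind the script at runtime so it can be swapped out.
    void* lib = dlopen("./scripts/enemy_ai.so", RTLD_NOW);
    if (!lib) { std::fprintf(stderr, "%s\n", dlerror()); return 1; }

    // The compiler cannot inline or whole-program-optimize across this call;
    // in the retail build the same function would be a direct, optimizable call.
    auto update = reinterpret_cast<script_update_fn>(dlsym(lib, "script_update"));
    if (update) update(0.016f);

    // Hot reload = dlclose, recompile the script, dlopen again.
    dlclose(lib);
    return 0;
}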


#5301057 Slavery, Include Or Not?

Posted by samoth on 17 July 2016 - 04:10 AM

Slavery, like religion or race, is one of those things that will always be instrumentalized by someone. Often, the extent stays "harmless" to your business, but sometimes it does not, and you only know afterwards.

If you don't like risking that, you should stay away from the topic altogether.

Personally, I think it's OK to have slavery in the game, because it's simply a fact of life. Even in the 21st century it is, let alone the 16th. You should just try not to make it too much of the "fun driving center" of the game, or some weird concoction such as slave-ship Tetris (like the game Tom Sloper pointed out), since that will inevitably aggravate people.

That slave Tetris game is problematic on so many levels, but I think none of it was intended. It begins with such a bizarre caricature of the black main character that you almost fall off your chair (both laughing in despair and thinking: wow, that is... really courageous, or stupid). But if you look at the white characters, they don't look that much different. The (predominantly white European) characters in the sequel (yes, there is one!) look just the same, so I guess it's simply the artist's style, not intent. Then of course, the Tetris game itself: not only is the Tetris inventor well known for his eagerness to go to court, but playing it with bodies (... bodies of crude caricatures in contorted positions) is just asking for trouble. Well yeah, it's not far from the historical truth, but that doesn't mean people want to play it for entertainment. The sequel ("Plague") has the same minigame with plague corpses, and a few others (place the leech) which are kind of... well, odd.
All in all, I think the developers either have a weird sense of taste or were just being a bit stupid. I don't think there was really an intent to insult anyone. But what you're intending is not necessarily what you get.


#5300707 When I Drag A Piece Really Fast, My Mouse Drops It, Why?

Posted by samoth on 14 July 2016 - 07:44 AM

I think part of the problem (in addition to what Alberth said, which is what finally causes it to occur) is that your program logic for dragging is a bit wrong. Or maybe not "wrong", but at least odd.

The "correct" logic for dragging looks somewhat like this:
1. if you get a mouse down event:
- remember position
- remember "mouse down"

2. if you get a mouse up event, decide:
- was I dragging? ---> stop doing that, drop object right where it is
- otherwise ---> if mouse went up, it must have gone down before, so send a "click" event to object under position if there is one (might be a clickable button)

3. if you get a move event: is the mouse button known to be already down AND is the spot where it was pressed inside draggable object?
---> no: do nothing, unless state is already "is dragging"
---> yes: set state to "is dragging", take delta from current pos and mouse down pos, move object by that amount
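
And the same thing as code, a rough sketch (Object and the find_* hit-test helpers are placeholders, not code from this thread):

struct Object {
    float x = 0, y = 0;
    void move_by(float dx, float dy) { x += dx; y += dy; }
    void click() { /* clickable-button behaviour goes here */ }
};

Object* find_draggable_at(float, float) { return nullptr; }  // stubs; a real
Object* find_clickable_at(float, float) { return nullptr; }  // game hit-tests here

struct DragState {
    bool    mouse_down = false;
    bool    dragging   = false;
    float   down_x = 0, down_y = 0;
    Object* target = nullptr;

    void on_mouse_down(float x, float y) {        // step 1: remember position + state
        mouse_down = true;
        down_x = x; down_y = y;
        target = find_draggable_at(x, y);
    }
    void on_mouse_up(float x, float y) {          // step 2: drop where it is, or click
        if (!dragging) {
            if (Object* o = find_clickable_at(x, y)) o->click();
        }
        mouse_down = false;
        dragging   = false;
        target     = nullptr;
    }
    void on_mouse_move(float x, float y) {        // step 3: only drag if armed
        if (!mouse_down || !target) return;
        dragging = true;
        target->move_by(x - down_x, y - down_y);  // move by the delta since last event
        down_x = x; down_y = y;                   // re-base so deltas stay incremental
    }
};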




#5300432 Getting address of an item in a std::vector

Posted by samoth on 12 July 2016 - 02:03 PM

I beg to differ on a couple of points. Not altogether, but still somewhat. :)

No, that's not how ownership semantics work.[...] In a level-based game, assets are loaded for a particular level, and the sound-player is an object within that level. The level's assets are only unloaded when the level is unloaded, at which point, all level-specific objects (such as sound-players within the level) will also have been destroyed.

Yes, that is true, if you can arrange it that way, e.g. in most level-based games. It may not be that easy, however. For example, in an open-world game, some rare unit comes along (say, a wandering boss creature; I seem to remember Ryzom had these). The only working solution to the "you know because of contract" type of ownership is to load all bosses (whether one walks past you or not, and regardless of which one it is) and keep them all loaded. Because hey, you never know what happens in half an hour, maybe. But that may be undesirable. It may be more desirable to only load the boss that is showing, on the clients that can see it. And free the memory once it's dead.
Another example would be a game with voice acting where an NPC tells some long, sad story about how his father was killed by wolves and how he lost his aunt's ring, which you are to retrieve. This is something you want to stream in once, listen to, and never use again. You definitely don't want to keep it around forever or load it ahead of time "just because", especially since there are maybe 50 such NPCs in this city.

The nice thing about the shared pointer approach is that you simply don't care (and you don't even know how long that will be, nor do you have to care!). What's needed stays in RAM, what's not needed is kicked. Eventually.

The performance impact of shared_ptr isn't great in single-threaded code, but is downright stupid in multi-threaded situations. If you saw the actual behavior of that solution written out as plain old C code -- without the use of shared_ptr to hide its stupidity behind an abstraction -- you'd fire whoever wrote it... at least in the context of a game engine.
e.g. The hidden reference counters used internally by shared pointers are thread-safe (via atomic increments), but the shared pointers themselves are not, so they need to be externally synchronized by a mutex/etc...

That's wrong, or rather it is correct but inconsequential.

The reference counter is updated with atomic increments/decrements, that's right. And this is a good thing most of the time (indeed always, except when you are strictly single-threaded, then it's wasting some cycles). The impact of this is however rather low. You do not load 250,000 assets every second. You do not copy around shared pointers 250,000 times per second. That's not what happens. For the comparatively low number of copies that you need to make (few hundred), the overhead of the atomic increments is very acceptable in comparison to the advantages that you buy with them. Yes, it's not free... but by no means a limiting factor unless you do very stupid things.

The pointer itself is, as you point out correctly, not thread-safe. But that is a good thing, not a bad thing. Not stupid, not in any way. You cannot create or reassign a shared pointer in a thread-safe manner. Yeah. Who cares! You don't want to do that anyway.

The shared pointers are created solely by the "manager". One thread, no concurrency. After that, they are, by all practical means, read-only objects (until destroyed). They are deleted solely by the last user decrementing the refcount (whoever that may be). Again, one thread, no concurrency anywhere (not anywhere where the fact that the pointer is not thread-safe would matter, anyway).

The weak pointers are admittedly upped concurrently (well, not necessarily, but at least possibly), but that's fine. The library guarantees that the refcount is managed correctly and once upping the weak pointer to a shared pointer has succeeded, the pointer is valid. Always, no exceptions. No mutex needed.
And if it didn't succeed, the pointer is invalid by definition, so you couldn't care less whether the value is garbled.
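
In code, the user's side of this is just the standard lock idiom (plain C++, not code from this thread):

#include <memory>

struct Asset { /* pixels, samples, ... */ };

void use_asset(std::weak_ptr<Asset> weak_asset) {
    if (std::shared_ptr<Asset> a = weak_asset.lock()) {
        // lock() bumped the refcount atomically: 'a' is guaranteed valid here,
        // no matter what the manager thread does meanwhile. No mutex anywhere.
    }
    // else: the manager kicked the asset; request it again or report an error.
}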

To use your wording: Shoot the guy who uses mutexes because they're not needed. :)

every lookup causes an atomic increment, hopefully with relaxed memory ordering

No. Every copy, and every upping of a weak pointer (which is a kind of copy), causes an atomic increment. Unless you do something very stupid and copy the shared pointers around all the time, the number of actual copies is comparatively low.
On the other hand, you can look up (dereference) the smart pointer a million times without any atomic operation. Because yeah, it's not thread-safe. Luckily.

The overall cost of upping the pointer is very moderate (I intentionally avoid saying "locking" because it gives a wrong impression). Something like once per frame per asset. How many textures and meshes do you have? A hundred? So that's 200 atomic increments per frame. And...? The problem being?

Consider how many cycles it costs to look up an asset in a map, or find it with a linear search in a vector of objects -- nobody even bothers about that. Why not? Well because it happens relatively rarely.

Like I said in my first sentence: It's maybe not a perfect fit for everybody, but I think it is overall a valid approach.


#5300362 Getting address of an item in a std::vector

Posted by samoth on 12 July 2016 - 05:37 AM

While maybe not the perfect solution that fits all, I can see how storing shared pointers could be argued to be a correct, and good, solution.

Does the manager (alone) own the assets? Well, then you should definitely store unique pointers and give out raw pointers. Raw pointers are non-owning (technically wrong, they might be owning, but assuming they are non-owning is a good strategy; Stroustrup says so, too).

But is that the complete truth? While the renderer is using a mesh or texture, or while a sound is playing, shouldn't that user somehow own the asset, too? At least... a bit? How does it otherwise know the asset doesn't suddenly disappear? Well yeah, if there is only one thread... problem solved. But quite possibly, the manager runs in a different thread. What now?

Of course, the asset manager who alone owns the objects could simply never delete any, problem solved. If only RAM were infinite. Well, it could delete only assets that are no longer in use. How does it know? It could only ever delete an object after the user (... each user) has told it: "OK, I'm done!". Hopefully none of the 10 programmers on your team ever forgets to do that on any of the roughly 50,000 occasions in the game.

Another solution would be to only ever delete on discrete sync points when it's known for sure that nobody uses anything. Safe, but not terribly efficient, and users must re-validate assets after each such checkpoint, too.

Ah, of course, reference counters! But wait, we need a way to communicate back the information about who is using stuff, and meh... Luckily, this is just what shared pointers are already doing.

Storing shared_ptr and giving weak_ptr to the users is an elegant solution to the problem. Less than ideal performance-wise in single-threaded contexts, but virtually everybody uses multithreading nowadays anyway. Plus, the overhead of copying a shared pointer really is no biggie compared to doing a DMA transfer to the graphics card or to reading a sector from disk.

If the manager who (overall) owns the assets decides that we are getting too close to the high water mark, and it is maybe time to start thinking about deleting some, it just tosses its shared_ptr. If nobody is currently using the object, it goes out of existence, but if someone is currently using (and thus owning a little bit) the object, it is only destroyed afterwards. Yes, this is "not a precise science", you have no direct, hard control about when something is deleted. But it works reliably.

This is just how the REPEATABLE READ isolation level works in transactional databases, or the principle of RCU. While you are reading, you are guaranteed that your view of the world doesn't change. The world manager may have a different view, and it does not know exactly how long you will keep holding onto an old state, but you both know that everything is consistent at all times (from each point of view), and no bad things can happen.

How does a user use the object? Well, at some point it asks the manager for an asset and receives a weak_ptr which it keeps around. Once it actually wishes to use the data, it locks the pointer. This either succeeds or fails, with no other choices in between.

If locking succeeded, the user now has a temporary shared pointer, which guarantees that no evil things can happen while it is using the data. Once done, the user drops the shared pointer. Most likely, the object continues to exist, but the user doesn't know for sure until he locks it for the next time.

If locking failed, the user knows that the manager decided to kick the object, and it must again request the asset, or produce an error, or whatever.
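
Put together as code, the whole scheme looks roughly like this (a sketch under the assumptions above; all names are made up):

#include <map>
#include <memory>
#include <string>

struct Asset { std::string path; /* pixels, samples, ... */ };

class AssetManager {
    std::map<std::string, std::shared_ptr<Asset>> assets_;  // the manager's owning refs
public:
    // Users only ever receive weak pointers.
    std::weak_ptr<Asset> request(const std::string& path) {
        std::shared_ptr<Asset>& slot = assets_[path];
        if (!slot) slot = std::make_shared<Asset>(Asset{path});  // load on demand
        return slot;
    }
    // Too close to the high water mark? Toss our shared_ptr. The asset dies
    // now, or after the last current user drops its temporary shared_ptr.
    void evict(const std::string& path) { assets_.erase(path); }
};

void render(std::weak_ptr<Asset>& handle, AssetManager& mgr, const std::string& path) {
    if (std::shared_ptr<Asset> a = handle.lock()) {
        // locking succeeded: the data cannot vanish while 'a' is alive
    } else {
        // locking failed: the manager kicked it; re-request (or report an error)
        handle = mgr.request(path);
    }
}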


#5299863 Does Object Pooling with a Vector in C++ have problems with memory?

Posted by samoth on 09 July 2016 - 04:27 AM

Wasn't that Knuth

The quote is often attributed to Knuth, but he himself denied being the original "author" (what's the right word for that?).

That's in direct conflict with the allocation/free strategy for shared_ptr (or any smart pointer).

There is no conflict, technically. shared_ptr supports the concept of "custom deleters", which addresses this perfectly: deletion then returns the object to the pool for a later allocation.
There is arguably a conflict insofar as it is semantically wrong and totally contradicts the purpose, though.
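
For illustration, the custom-deleter mechanism looks like this (Bullet and the pool are made up; note the deleter captures the pool, so the pool must outlive every pointer it hands out):

#include <memory>
#include <vector>

struct Bullet { float x = 0, y = 0, vx = 0, vy = 0; };

struct BulletPool {
    std::vector<Bullet*> free_list;

    std::shared_ptr<Bullet> acquire() {
        Bullet* b;
        if (free_list.empty()) {
            b = new Bullet{};
        } else {
            b = free_list.back();
            free_list.pop_back();
        }
        // The deleter runs when the last shared_ptr dies: recycle, don't free.
        return std::shared_ptr<Bullet>(b, [this](Bullet* p) { free_list.push_back(p); });
    }
};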

Ownership is not shared, there is exactly one well-defined owner. It's not like shared_ptr couldn't be used for single owners, but that is not the "correct" usage of that smart pointer type.

Second, and equally important, the whole point of pooling and stuff is trying to optimize (in my opinion prematurely, but anyway). shared_ptr is thread-safe, so this is really the worst possible thing to use in this context. In the best case (assuming a no-shit library implementation), you pay for two atomic CAS, in the worst case, you have locks.

Third, you now have a couple of smart pointers that do not really serve a purpose, but they have to go somewhere! That's two extra pointers plus a counter (yes, shared_ptr is two pointers, not one) to be stored in a container when the intuitive, straightforward thing would be to simply store bullets. Plus extra indirection.

All bullets do the same thing every simulation step, all the time. Always. They move according to some set of "physics" rules, and maybe they hit something. Or not. Or they "die" (fall on ground, disappear after this simulation step). This perfectly maps to "iterate over array", followed by "erase-remove". No changing owners, no sharing owners, no complicated stuff.
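
Which, as code, is about as simple as it gets (a sketch; the "physics" and the death rule are placeholders):

#include <algorithm>
#include <vector>

struct Bullet { float x = 0, y = 0, vx = 0, vy = 0; bool dead = false; };

void simulate(std::vector<Bullet>& bullets, float dt) {
    for (Bullet& b : bullets) {                    // iterate over array
        b.x += b.vx * dt;                          // some set of "physics" rules
        b.y += b.vy * dt;
        if (b.y <= 0.0f) b.dead = true;            // e.g. fell on the ground
    }
    bullets.erase(std::remove_if(bullets.begin(), bullets.end(),   // erase-remove
                                 [](const Bullet& b) { return b.dead; }),
                  bullets.end());
}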


#5299751 Does Object Pooling with a Vector in C++ have problems with memory?

Posted by samoth on 08 July 2016 - 03:56 AM

I really, really, really hate it when someone asks a question, and someone comes up quoting Hoare as a reply, but... premature optimization is, well... you know what.

While thinking so much all the time about how to optimize a total of 10 bullets (granted, your final game might have 1,000 instead of 10, but that's still pretty trivial), you did things like manually initializing a vector to N elements in the pool constructor rather than simply calling vector's constructor, which does the exact same thing, only with one allocation and zero memmoves rather than five of them. So basically, you are anti-optimizing.
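
That is, the whole pool setup collapses to one line (N bullets, one allocation, value-initialized; the count is illustrative):

#include <vector>

struct Bullet { float x = 0, y = 0; };   // placeholder

std::vector<Bullet> pool(1000);          // instead of 1000 push_backs in a loop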

Then there is all that copying around from a second store which has already been mentioned, which, no matter how you un-anti-optimize it, will still be slower than storing the objects right away, will use twice as much memory, and will pollute the cache, all for no good reason. Plus, your code is much more complex than it needs to be. Thinking about that complexity leads to silly oversights such as the one mentioned above.

Write clear, straightforward, understandable code. Make sure it works correctly. That is the absolute first thing to get right. Correct first. Always.

Only then see if it's necessary to optimize. Most likely the answer will be "no", even if you fire 10,000 bullets. If it runs fast enough, then by all means don't touch anything.


#5298704 what means these error?

Posted by samoth on 01 July 2016 - 08:54 AM

Is there, and I am asking in all seriousness, actually a reason why this has to be multithreaded?

Really, you have such an awful bag of problems (most of them related to multithreading), and all you are doing is processing some window messages and reading back a text control's value. This can be done very easily and efficiently in a single thread. The "gains" from parallel processing are zero.

It's not like reading a text control's value blocks or takes noticeable time, or something.

The same goes for your extensive use of lambdas. Yes, lambdas are a legitimate thing to use, and yeah they're kinda cool, the newest bleeding edge C++14 stuff that the pros use, heh. But they are intended to make your code easier to read, not harder. It took me a minute to even realize that the code in your first example was valid C++ and would compile at all. That's almost never a good sign.


#5298651 Hundreds of timers...

Posted by samoth on 30 June 2016 - 11:17 AM

A priority queue is the right thing, but it may be more than you need. A simple array (vector / deque) will do, too.

Sorting an array is fast, especially sorting a mostly-sorted array. Removing elements from the beginning of an array is a memmove.

This reduces inserting new timers to one write followed by one sort operation (one for all inserts, not one for each!), and removing old timers to a memmove, or if you use a deque, possibly something less costly.

Checking timers is, depending on how you implement it, reduced to checking one integer, or to checking M integers, where M is the number of timers that fire during this very simulation step (with M << N). This is very, very little work. Start at index zero, iterating linearly. Stop as soon as evt.time > current_time. Remove the elements before the current element. Done.
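
As a sketch (made-up names; the sorted container plus one sort per batch of inserts is the whole trick):

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <deque>

struct Timer {
    uint64_t fire_time;   // simulation time at which this timer fires
    int      payload;     // hypothetical: identifies what to do when it fires
};

std::deque<Timer> timers;  // kept sorted by fire_time

void add_timer(uint64_t t, int payload) { timers.push_back({t, payload}); }

// One sort for all inserts this step, not one per insert. Sorting a
// mostly-sorted container is cheap.
void commit_inserts() {
    std::sort(timers.begin(), timers.end(),
              [](const Timer& a, const Timer& b) { return a.fire_time < b.fire_time; });
}

void update(uint64_t current_time) {
    std::size_t fired = 0;
    while (fired < timers.size() && timers[fired].fire_time <= current_time) {
        // fire(timers[fired].payload);   // whatever "fire" means in your game
        ++fired;
    }
    timers.erase(timers.begin(), timers.begin() + fired);  // cheap on a deque
}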


#5297962 what means these error?

Posted by samoth on 25 June 2016 - 03:24 AM

The ?? () means that the debugger was unable to figure out the name of the function or the source file name of the location where the trap happened, or the name of a function altogether, in any context.

Which can happen for a variety of reasons (no symbols present being one, a third-party module being another; with GCC, inlined functions can be tricky too, since GCC can actually optimize in debug mode).

Normally, the debugger will try its best to tell you the name of the function, and the source code line, and where it was called from, and so on. It does that by looking at the return addresses on the stack, and trying to translate them to something readable one by one.


#5297687 what means these error?

Posted by samoth on 23 June 2016 - 06:45 AM

every index starts on zero

Yes it does. But your code effectively looks like this:

1. set vector to have exactly 100 elements
2. whenever XYZ, do the following
    increment i, don't care what value it has
    take address of vector[i], don't care whether it's outside bounds
    pass address to another thread, write to it
    now that it's too late, check whether (i == 100), and let it wrap to -1
3. repeat at (2)

It's correct that the index starts at zero, but it reaches a value (100) which is out of bounds. Valid indices go from 0 to 99, not 100. readdata[100] is one past the last element, i.e. the position of readdata.end(), and the standard defines dereferencing end() as undefined behavior.

I don't have time to thoroughly read through your new code, but this here again smells of trouble:

if(readdata.size()<100)
   readdata.resize(i+1);

Not only is this not terribly efficient (unlike reserve, resize will not grow geometrically, but will do exactly as you say, so resizing to 101, 102, 103, and 104 will do 4 reallocations and deep copies!). More urgently, resizing will, almost guaranteed, change the address of the vector's backing storage, and thus pointers which are still being held in one of your writing threads will be invalid. What's bad is that you cannot even rely on the addresses you're writing to being invalid. Depending on how the container and the CRT manage memory, they might be "valid", that is, it might seem to "work" although the code is completely wrong. This is a recipe for disaster.
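
One way to keep the addresses stable, assuming the 100-element cap is real (a sketch; the element type is just for illustration):

#include <vector>

std::vector<int> readdata;          // whatever the real element type is

void init() {
    readdata.reserve(100);          // one allocation up front, done
    // push_back never reallocates while size() < capacity(), so pointers to
    // existing elements stay valid. Concurrent writes from other threads
    // still need their own synchronization, though.
}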




#5297561 Is there efficient FREE way yo encode / encrypt the resource files ? ( like u...

Posted by samoth on 22 June 2016 - 04:22 AM

I recommend against DES. It offers next to no protection, and it is comparatively slow.

The "next to no protection" bit is not so much an issue because finally you cannot prevent someone from stealing your stuff anyway if the data plus the executable that can decrypt said data is on their harddisk. You can make it somewhat harder, and that's it. But DES is slow in addition to that, and that makes it a bad candidate.

I'm using XOR encryption over a simple (non-secure) pseudorandom generator. That's about as fast as memcpy, and admittedly not very secure, but it stops the most obvious, stupid attempts of unsavvy people looking into, or tampering with, data files (even more so as the data files are compressed). As stated above, you cannot do this securely anyway; that's wasted time. To anyone not knowing the trade, this is a showstopper, and that's the intent. To anyone with the least bit of experience, it's laughable. Annoying, but pretty much as good as you can get unless you control the hardware.

XOR-encryption is mighty fine, really. You don't even need a particularly clever, or even "random" random generator. Anything that is not immediately obvious and already implemented in editors with a plugin or script (like ROT13) and that prevents identical input values from having the same identical output values every time will do. Just enough so you cannot see patterns with the naked eye and so you can't select "un-ROT13" from the editor's menu to get something readable.
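
Something along these lines (a sketch; xorshift64 is an arbitrary choice of non-obvious generator, not necessarily what anyone ships):

#include <cstddef>
#include <cstdint>

// XOR the buffer with a cheap PRNG stream. Calling it again with the same
// seed decrypts.
void xor_crypt(uint8_t* data, std::size_t len, uint64_t seed) {
    uint64_t s = seed ? seed : 0x9E3779B97F4A7C15ull;  // xorshift state must not be 0
    for (std::size_t i = 0; i < len; ++i) {
        s ^= s << 13;                                  // one xorshift64 step
        s ^= s >> 7;
        s ^= s << 17;
        data[i] ^= uint8_t(s);                         // same operation both ways
    }
}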

If you are in super paranoia mode, you could use a simple, fast Feistel cipher like TEA (no need to even bother with the improved versions; even half the number of rounds will do). A few lines of portable code, no huge state. And... totally sufficient.
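
For reference, the complete cipher really is just this (the published reference routine by Wheeler and Needham; 64-bit block, 128-bit key; decryption runs the rounds in reverse):

#include <cstdint>

void tea_encrypt(uint32_t v[2], const uint32_t k[4]) {
    uint32_t v0 = v[0], v1 = v[1], sum = 0;
    const uint32_t delta = 0x9E3779B9;   // the golden-ratio constant
    for (int i = 0; i < 32; ++i) {       // 32 rounds; fewer will do for obfuscation
        sum += delta;
        v0 += ((v1 << 4) + k[0]) ^ (v1 + sum) ^ ((v1 >> 5) + k[1]);
        v1 += ((v0 << 4) + k[2]) ^ (v0 + sum) ^ ((v0 >> 5) + k[3]);
    }
    v[0] = v0;
    v[1] = v1;
}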

The advantage of a block cipher over a stream cipher is that changing one byte always changes the entire output block. That makes byte-tampering with a hex editor even harder since you don't just change one byte but e.g. 16 of them at a time. But the problem of tampering is solved better with a checksum. Of course, with a checksum, you have the same issue again. It's stored on the user's computer, so the total amount of security that you can achieve is... somewhat limited. But you can sure make the life of someone trying to change a wall texture ID to transparent somewhat harder.

The bad news is, computer security is like dying from being struck by lightning. Unless you are a total idiot, and provided you take minimum precautions (such as not standing on an open field with a metal rod in your hands during a thunderstorm), getting struck by lightning is extremely unlikely. It's so rare it doesn't happen once in a lifetime. But when it happens... it literally happens once in a lifetime. Once is enough.
Encryption safely prevents a hundred million unsavvy users from accessing something you don't want them to read, and the chance that they find a way is very small. But once is enough. If one dedicated (and skilled) person finds a way, the next day the 99.9 million remaining people will use the same click-to-crack patcher. And unluckily, there's more than one person out there with spare time and nothing better to do than to try.



