Kian

Members
  • Content count: 79

Community Reputation

239 Neutral

About Kian

  • Rank: Member
  1. That will call doStuff on every object. If you only want to call a method on one object, however, it's not helpful. I'll also suggest using a map for lookups, though I would change the key from a string to some kind of id, like an unsigned int; hashing the string, for example.
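As a quick sketch of that idea (all names here are mine, not from the thread): hash the string once when you register the object, then look objects up by the integer id.

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <unordered_map>

// Hypothetical game object with the doStuff method discussed in the thread.
struct GameObject {
    std::string name;
    int timesCalled = 0;
    void doStuff() { ++timesCalled; }
};

// Hash the string name once, up front, and use the integer as the map key.
inline std::size_t idFor(const std::string& name) {
    return std::hash<std::string>{}(name);
}

using ObjectMap = std::unordered_map<std::size_t, GameObject>;

// Call doStuff on exactly one object, found by its hashed id.
// Returns false if no object with that id exists.
inline bool callOn(ObjectMap& objects, std::size_t id) {
    auto it = objects.find(id);
    if (it == objects.end()) return false;
    it->second.doStuff();
    return true;
}
```

Lookups then compare a single integer instead of a whole string; the trade-off is that you have to handle (or accept the small risk of) hash collisions yourself.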
  2. I agree. This is especially true for your language's libraries. Whoever implemented them for your system has more information about the environment than you do, and they don't need to worry about portability. If you try to roll your own, however, you need to make sure it can run on every platform you might want to support. It can be a good learning experience, but I wouldn't use it in production code.
  3. Ah, you're right. I didn't go back to check the spec for memcmp. I didn't brush it off as trivial; I was confused by you saying that you would need to check byte by byte after finding a mismatch. Yes, little endian vs. big endian is something I don't often think about, since I'm not generally working at that level. If we have the pattern 0x01 0x02 0x03 0x04 on a little endian architecture and compare it against 0x02 0x02 0x03 0x03, we would want the second pattern to compare as larger. But when we read them as words, they get interpreted as 0x04030201 and 0x03030202, so the processor will think the second pattern is smaller. I suppose I would then do:

        const UINT_PTR * puptrOne = reinterpret_cast<const UINT_PTR *>(pu8One);
        const UINT_PTR * puptrTwo = reinterpret_cast<const UINT_PTR *>(pu8Two);
        for ( UINT_PTR I = 0; I < 8 / sizeof( UINT_PTR ); ++I ) {
            if ( (*puptrOne++) != (*puptrTwo++) ) {
                /* Found a mismatch. */
                auto rPtrOne = reinterpret_cast<const unsigned char *>( puptrOne );
                auto rPtrTwo = reinterpret_cast<const unsigned char *>( puptrTwo );
                // For a UINT_PTR of size 4. A bit of template or macro magic could
                // choose something appropriate for a size-8 UINT_PTR at compile time.
                UINT_PTR reverseValueOne = UINT_PTR( rPtrOne[-4] ) << 24 | UINT_PTR( rPtrOne[-3] ) << 16 | UINT_PTR( rPtrOne[-2] ) << 8 | UINT_PTR( rPtrOne[-1] );
                UINT_PTR reverseValueTwo = UINT_PTR( rPtrTwo[-4] ) << 24 | UINT_PTR( rPtrTwo[-3] ) << 16 | UINT_PTR( rPtrTwo[-2] ) << 8 | UINT_PTR( rPtrTwo[-1] );
                return reverseValueOne - reverseValueTwo;
            }
        }

     I suppose checking byte by byte also works. I'd need to check how the ifs compare, though I generally believe branching logic is much more expensive than following a single path.
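A minimal standalone sketch of that byte-reversal idea (std::uint32_t stands in for a 4-byte UINT_PTR; the function name is mine):

```cpp
#include <cstdint>

// Build a 32-bit value whose numeric order matches memory (lexicographic)
// order, regardless of the host's endianness: the byte at the lowest
// address becomes the most significant byte of the result.
inline std::uint32_t toLexicographic(const unsigned char* p) {
    return std::uint32_t(p[0]) << 24 | std::uint32_t(p[1]) << 16 |
           std::uint32_t(p[2]) << 8  | std::uint32_t(p[3]);
}
```

With the post's patterns, 0x01 0x02 0x03 0x04 yields 0x01020304 and 0x02 0x02 0x03 0x03 yields 0x02020303, so the second pattern now correctly compares as larger.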
  4. Actually, in my first reply to you I clarified: I'm using "byte" to stand in for whatever your largest integral type is, since regardless of the size of your comparisons, the algorithm is the same. I didn't want to get into architectural details like how many bytes you can check per instruction. The effect is that you are checking for the first bit that differs; a byte just lets you check 8 bits at a time, a 32-bit word 32 bits at a time, and a 64-bit word 64 bits at a time. I did agree that returning true or false was faster; I just didn't entirely understand what you meant by needing to check byte by byte to get -1 or +1. With your example:

        const UINT_PTR * puptrOne = reinterpret_cast<const UINT_PTR *>(pu8One);
        const UINT_PTR * puptrTwo = reinterpret_cast<const UINT_PTR *>(pu8Two);
        for ( UINT_PTR I = 0; I < 8 / sizeof( UINT_PTR ); ++I ) {
            if ( (*puptrOne++) != (*puptrTwo++) ) {
                /* Found a mismatch. */
                if ( *--puptrOne > *--puptrTwo ) { return +1; }
                else { return -1; }
            }
        }
        return 0;

     there's no byte-by-byte checking required. As soon as you find a mismatch, it takes a single comparison, regardless of how many bits you're checking at a time.
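Putting these two posts together, here's a self-contained sketch of an 8-byte compare that returns -1/0/+1 with a single extra comparison at the mismatch (std::uint32_t stands in for UINT_PTR; all names are mine):

```cpp
#include <cstdint>

// Load a 32-bit word so its numeric order matches memory (lexicographic)
// order, regardless of the host's endianness.
inline std::uint32_t loadLexicographic(const unsigned char* p) {
    return std::uint32_t(p[0]) << 24 | std::uint32_t(p[1]) << 16 |
           std::uint32_t(p[2]) << 8  | std::uint32_t(p[3]);
}

// Compare 8 bytes word-by-word. As soon as a word differs, one extra
// comparison decides the sign; no byte-by-byte scan is needed.
inline int compare8(const unsigned char* one, const unsigned char* two) {
    for (int i = 0; i < 8; i += 4) {
        const std::uint32_t a = loadLexicographic(one + i);
        const std::uint32_t b = loadLexicographic(two + i);
        if (a != b) {
            return a > b ? +1 : -1;
        }
    }
    return 0;  // All 8 bytes were equal.
}
```

This sidesteps the alignment and strict-aliasing questions the reinterpret_cast version raises, since it only ever reads through unsigned char.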
  5. Well, if we're getting right down to the stated requirements, L. Spiro said the goal of his memcmp was to save cycles, not time. The parallel search will have a strictly worse cost in cycles spent just to get the basic boolean result, and trying to get the +1 or -1 result would require a significant increase in complexity (compared to checking the first byte that differed).

     This is because the single-threaded compare only needs to search until it finds a discrepancy, and it can quit early. The parallel compare, on the other hand, has to split the memory into chunks and start every chunk at the same time. Even if you allow threads to quit as soon as they find a discrepancy, the threads that didn't find one will continue until they are done.

     The worst case for the library memcmp is when both strings are equal, since it has to compare everything to return 0. This is the same for the parallel compare, since all the threads will have to do the same number of comparisons. In every other case, every chunk that compares equal has to be run to completion in either the parallel or the single-threaded compare, but the single-threaded model won't compare any chunks after the first difference, while the parallel compare will have to compare at the very least the first byte of each chunk. A consequence of this, in fact, is that as you increase the number of chunks, your algorithm becomes faster but also more wasteful. Take the best-case scenario, for example, where every byte is different: memcmp will check the first byte and return, while the parallel compare will check n bytes and return, where n is the number of chunks you have.

     And I'm not even counting the cost of launching all the threads. Even with perfect threads that cost nothing to create, the algorithm is strictly worse in cycles spent for every case.
  6. You'd be saving at most one comparison, actually. As soon as you find a byte that's different (I'm not sure if it compares byte by byte or word by word, but the case is the same regardless), you check whether that byte is bigger and return based on that. If you have four bytes 0x01 0x02 0x03 0x04 and compare them to 0x01 0x03 0x03 0x04, as soon as you check the second byte you know the second string is bigger, so you return 1. If the second string were 0x01 0x00 0x03 0x04 you'd return -1 after checking the second byte. The first byte that differs is the most significant byte that differs, so that's all you need.

     As for the actual topic, there's always a desire to roll your own when you realize you need something. You always need to be aware of the 80/20 rule, though: 80% of the work will be done in 20% of the time, and the last 20% will take 80% of the time. That is to say, you can get most of the way in a short time, which may make you think you saved a lot of time, but taking care of all the edge cases, testing, debugging and the like that would get you all the functionality you need will take the most time.

     Deciding whether to roll your own or use a third-party library needs to be carefully considered; there's no simple answer. The first thing you should do is understand exactly what your requirements are. The better you can define your requirements, the easier it is to make a decision. Then look at the documentation for each library. Does it match your requirements? How much work would you need to do to get it all the way there? Look around, too; don't just settle for the first library you find. After this you should have a list of libraries that match your requirements. Compare those to determine which one is best: does it have an active community of users (good for getting help if you're stuck, and it probably means active development), does it have good documentation, does it have a bug tracker, are there any disqualifying bugs still open, is the interface reasonable and easy to use, is the build system simple, and so on.

     Once you have the best of these, compare what it does against how much effort it would take you to match the parts you actually need. If you're convinced you can do it better in less time than it would take to learn the interface, go for it. Otherwise, use the library. Of course, writing libraries is great for learning too. If time isn't an issue (meaning, generally, a hobby project), do it yourself if you want to. What I generally do is write from scratch the things I find interesting, and use libraries for the things that are boring.
  7. That breakdown works for libraries, but when you're talking about a program, the "external behavior" is what the user sees. You could make major modifications to your code (adding or removing libraries, adjusting the inheritance structure, etc.), and as long as the program behaves the same, I think calling it refactoring would be accurate. It follows from the definition: the internal structure is whatever code you have in your program, and the external behavior is what the user sees.
  8. Alan Turing

    The problem with the Turing test is that there's no reason to want to beat it. In order to convince someone that they are talking to a person, the computer has to meet one of two conditions: it either has to lie convincingly, or it has to have opinions and preferences. Neither of those is something you should be working towards if you want to build useful AI.

    Why do I say this? Let's say you ask the chat bot a simple question: "Do you like peaches?" A computer could not really care about peaches one way or another. Without sight, taste, smell, etc., peaches are just another collection of data. Now, if it answers "I don't care about peaches, I don't have senses to experience them in any form other than as abstract data," you'd know you're talking to a computer. So even though getting the computer to produce that answer would be a pretty impressive achievement, you'd still know you're talking to a computer. To pass the test, the computer would have to say something like "Yes, I like the taste," or "No, I don't like how the juice is so sticky," or "I've never had peaches." These are all lies (the last one is at the very least a misleading statement). Why would you want to make a computer that can lie to you? "Are you planning to destroy all humans?" "No, I love humans!" I'd like to be able to trust my computer when it tells me something.

    Let's say instead you actually give your computer a personality, so it can adapt to questions that might come up in the conversation, and it actually does like peaches. It will still need to be able to lie ("Are you a computer?" "No."), but suppose you want it to draw its answers from some pool of preferences. You've now created something that has opinions. One such opinion could be that it doesn't like having to pass the Turing test. Why would you create something with the potential to not want to do the very thing you want it to do? And let's not forget, with the ability to lie to you about it.
What would be an impressive and useful achievement would be to have a computer that can't pass the Turing Test, but that you can have a conversation with. Meaning a computer that can understand queries made in natural language and reply in kind. I don't care that I can tell it's a computer by asking it "Are you a computer", or that it answers "What is the meaning of life" with "No data available". That alone would be amazing, and much more useful than a computer that can be said to think.
  9. I'm not sure you can use scoped enums (enum class/enum struct) to do bitwise operations, since you can't convert them to int without an explicit cast. If you want to use it as a mask you need to stick a regular enum in a class or struct.
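To illustrate (names are mine): an expression like `Bit::A | Bit::B` won't compile for a scoped enum unless you either cast to the underlying type or overload the operators for the enum type, which is the usual workaround.

```cpp
// A scoped enum does not convert to its underlying type implicitly,
// so bitwise operations need explicit casts...
enum class Bit : unsigned { A = 1u << 0, B = 1u << 1, C = 1u << 2 };

// ...or overloaded operators that do the casting in one place.
constexpr Bit operator|(Bit l, Bit r) {
    return Bit(static_cast<unsigned>(l) | static_cast<unsigned>(r));
}

// Test whether a given flag is set in a mask.
constexpr bool has(Bit mask, Bit flag) {
    return (static_cast<unsigned>(mask) & static_cast<unsigned>(flag)) != 0;
}
```

With the overloads in place, `Bit m = Bit::A | Bit::C;` works and `has(m, Bit::B)` is false, while keeping the scoped enum's protection against accidental integer conversions everywhere else.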
  10. Huh, misread the first bit. Also, I can't edit my answer?
  11. If you want to use the scope operator (which I endorse), it would be better to do:

        struct BIT {
            enum {
                _1 = 1 << 0,
                _2 = 1 << 1,
                _3 = 1 << 2,
                _4 = 1 << 3,
                _5 = 1 << 4,
                _6 = 1 << 5,
                _7 = 1 << 6,
                _8 = 1 << 7,
            };
        };
  12. Location of data in the class

    Using these kinds of tricks will work, but it is also how you write unmaintainable code. And code readability is often more important than ease of writing, because in six months' time you are going to be wondering why you wrote it that way. The other way of doing this is:

        class Matrix44 {
        private:
            union {
                float f[16];
                float mm[4][4];
                Vector4 v[4]; // Assuming Vector4 is stored as 128 bits contiguously
            } matrix;
        };

    You can now access matrix.f as an array, matrix.mm as a 2D array, or matrix.v as an array of Vector4; if you add 16 float variables to the union, you get 16 individual slots as well.

    I'm not sure that's permitted. That is to say, you can do it, but you should only read from the last member of the union you wrote to. This class looks meant to be used by writing to one member of the union and reading from another as convenient. That's undefined behavior. You'd need an enum or something similar to keep track of what the last write was, and then different access methods instead of direct access to enforce the check, and then what are you even using a union for in the first place?
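A sketch of one well-defined alternative (the class shape is mine, not from the post): keep a single flat array as the only stored member and expose the other views as computed accessors, so no union punning is involved.

```cpp
#include <cstddef>

// One flat array is the single source of truth; the 2D view is a
// computed accessor, so there is no union and no active-member question.
class Matrix44 {
public:
    // 1D (flat) access, indices 0..15.
    float&       at(std::size_t i)       { return m_f[i]; }
    const float& at(std::size_t i) const { return m_f[i]; }

    // 2D (row, column) access, computed from the flat index.
    float&       at(std::size_t r, std::size_t c)       { return m_f[r * 4 + c]; }
    const float& at(std::size_t r, std::size_t c) const { return m_f[r * 4 + c]; }

private:
    float m_f[16] = {};
};
```

A row-as-vector view could be added the same way, returning a span or pointer to four floats, without ever writing through one type and reading through another.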
  13. Without other players to lie to or get an advantage on, how can modifying a save file be dishonest or unfair?

     It is not about the other players at all; it is about the act itself. What you are doing is actually fraudulent, because you are "unjustifiably claiming or being credited with particular accomplishments or qualities" when you "modify" your save file. I am not certain whether or not this applies to video games, but if you ever read your end user license agreement when you install software on your computer, modification is often prohibited.

     Both your previous definition of cheating and this new one about "fraud" imply other players. You can't defraud yourself.

     As for lying online, I can do that without modifying the save game. I can edit screenshots with Paint, or just make any claim I want. And if the community is aware that the save file is editable, any exaggerated claims will be viewed skeptically anyway. If you have an online leaderboard then that's a different issue, but then we're outside the single-player, offline-game scenario.

     Finally, about the EULA: if you're making your own game, I don't see why the EULAs of other games should matter. It's your game; license it for use however you want.
  14. I'll weigh in on the "ethical" aspect of the debate, since others have already mentioned ways to solve the original question.

     I don't believe cheating is possible in a single-player offline game; it makes no sense to speak of cheating in this situation. What the player is doing when they edit the save files is creating a new game, one whose rules are known only to them and which simply uses your game as a basis. Just because you made the original game doesn't mean you can demand that anyone else play it.

     You should not treat your player as an adversary you have to control. They've paid you for your game, and you should strive to provide the best experience possible so they can enjoy themselves. You and the player are a team, and they are the one in the driver's seat. Be the best copilot you can be instead of second-guessing their every move. They want to edit the save file? Let them know that achievements (if you have them) will be disabled and that you won't provide support if they break the game this way, but otherwise stay out of their way.

     Be glad that they're engaged and will likely tell others what they accomplished by editing the files, which can lead more people to buy your game. Trying to protect the save file is development work you could put elsewhere. Worse, it's development work meant specifically to detract from your game's engagement: you're identifying something the player may want to do and trying to curtail it, removing freedom to satisfy some weird desire for them to only play the game as you intended. You're literally spending development effort to make a worse game.
  15. I'm not sure if someone has mentioned this, but a clear difference when your class allocates memory is that std::memcpy will not transfer ownership away from the original. When your object dies, it will release the memory, but the object you copied from will still hold a pointer to the released memory, causing a crash down the road. std::move, on the other hand, releases ownership from the original object (zeroing pointers, etc.) so that it can be destroyed without affecting the state of the target object.

     A move affects both the source and the destination, while a copy only affects the destination.
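A toy illustration of that difference (the Buffer type is mine): after a move the source no longer points at the memory, whereas a raw std::memcpy of the object would leave both destructors deleting the same pointer.

```cpp
#include <utility>

// Minimal owning type, just to show what a move constructor does
// that std::memcpy cannot: it takes ownership away from the source.
struct Buffer {
    int* data = nullptr;

    Buffer() : data(new int(42)) {}
    Buffer(Buffer&& other) noexcept : data(other.data) {
        other.data = nullptr;  // The source gives up its pointer here.
    }
    ~Buffer() { delete data; }  // Deleting nullptr is a safe no-op.

    // Copying is disabled: a shallow copy (what memcpy would do) would
    // leave two objects deleting the same allocation.
    Buffer(const Buffer&) = delete;
    Buffer& operator=(const Buffer&) = delete;
    Buffer& operator=(Buffer&&) = delete;
};
```

Had the move constructor merely copied the pointer (as memcpy does), both destructors would run `delete` on the same address.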