About Zlodo

  1. The first time by writing the load function, and a second time by writing the save function. Sure, it doesn't look too bad in a simple five-line example, but if you have a bunch of those scattered across various objects storing a vaguely complicated data model, errors are easy to make, and they can be the annoying kind to debug. Again, obviously not in a small-scale example; it just doesn't scale up well.
  2. Efficiency is not the only concern here. Correctness is also paramount, especially if there is a lot of data to describe. Printf/scanf solutions as suggested above seem like a good way to summon a myriad of small, annoying-to-find bugs, because of the lack of type safety and the need for the programmer to specify the format twice: once in the read function, once in the write function. It is also harder to extend without risking breaking things. People don't complicate these things just for fun, but mainly to make maintenance easier.
  3. A* A star vs huge levels

    For nav mesh generation, it can definitely be automated, but it's indeed not trivial. I work on a substantially large open world game where the nav meshes are generated procedurally. You may want to look into Recast/Detour, an open source pathfinding framework that can also generate navmeshes from an arbitrary collision mesh: https://github.com/recastnavigation/recastnavigation
  4. I think I got it. Your input is indeed ISO-8859-1, in which ü is 252 (0xfc). This also corresponds to the Unicode code point for that character, so it's supposed to work as input for the TTF_RenderGlyph_Blended function that you use to render the glyphs, and give you the right glyph (according to the docs it expects a Unicode character code as input: https://www.libsdl.org/projects/SDL_ttf/docs/SDL_ttf.html#SEC54 ). But since character 252 ends up being the superscript-3 character, and since you expect ü to be 129, it seems that the font you're using maps characters according to extended ASCII (an old thing called code page 437: http://www.theasciicode.com.ar/ , https://en.wikipedia.org/wiki/Code_page_437 ), in which 252 is the superscript-3 character and ü is 129. So... I think it might be the font. Try grabbing another TTF font to see if it fixes the issue.
  5. Well, you need to debug then. Verify that you generate your font texture properly, verify that you get to the right position in your font texture according to the input character code, and make sure that the input character code itself makes sense (if you get 1 with the test above, the string is probably ISO-8859-1, in which case ü should have the code 0xfc according to http://www.fileformat.info/info/charset/ISO-8859-1/list.htm). If you only have the issue with characters > 127, then you likely still have a signed/unsigned issue involving a char somewhere.
  6. Try this:

     ```cpp
     std::string test( "é" );
     cout << test.size() << endl;
     ```

     If it displays 2, then the string is very likely encoded in UTF-8. If it displays 1, then it's probably ISO-8859-1. But if you want a 100% surefire way to programmatically determine the encoding of a string you're given, there is none. If it's a string literal, you have to make an assumption about what the compiler will give you. If you read it from a text file, you have to make sure of the encoding the file was saved in. This is why some text formats, such as XML, have a header that explicitly indicates the encoding (for instance <?xml version="1.0" encoding="ISO-8859-1"?>).
  7. SVN vs Perforce

    One of the reasons Perforce can scale to huge projects is that you can define rules to automatically "forget" the content of older revisions of certain types of files. For instance, you can have it store only the last 4 revisions of PNG files. This makes it possible to store absolutely all of the assets of even a huge game, such as a large open world title, at the expense of truncating the history. But in those cases, keeping the entire history is not feasible anyway. I don't know if Subversion has this feature nowadays.
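In Perforce this is expressed with filetype modifiers: `+S<n>` stores only the `n` most recent revisions of a file. A sketch of a typemap entry (the depot path is invented for illustration):

```
# Hypothetical typemap entry: treat PNGs as binary and keep only the
# 4 most recent revisions; older revision contents are purged.
TypeMap:
        binary+S4  //depot/assets/....png
```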
  8. You can just replace the line

     ```cpp
     char c = s[si];
     ```

     in draw() and measure() with

     ```cpp
     uint8_t c = s[si];
     ```

     The problem with the characters still being wrong is probably an encoding issue. Using a single unsigned char like that for accented Latin characters would work if your input string were encoded in ISO-8859-1, but nowadays it is probably UTF-8. It depends where your string comes from, really; if it's directly from the C++ source itself, it may be encoded as pretty much anything depending on the compiler, but it's likely to be UTF-8, which encodes every character with a code above 127 as two or more bytes. https://en.wikipedia.org/wiki/UTF-8 So (if your text is indeed UTF-8) you have to decode it to get the proper character codes. It's pretty easy to do, but there are also libraries around (such as http://utfcpp.sourceforge.net/) that can do it for you.
  9. char is, on most common platforms, signed (strictly speaking, its signedness is implementation-defined). So if you put a value above 127 in a char, it actually becomes a negative value. So with start = 32 and end = 154, you actually have start = 32 and end = -something, and then everything blows up. Specifically, this is probably what causes the crash:

     ```cpp
     _regLength = ce - cs + 1; // _regLength = -something...
     glm::ivec4* glyphRects = new glm::ivec4[_regLength]; // ...which is probably
     // converted to an unsigned here, yielding +a gazillion, making the
     // allocation fail and leaving you with a nice null pointer in glyphRects
     ```

     The solution is basically to avoid char whenever possible and use uint8_t instead. When you read characters from a string (which are usually char), cast them to uint8_t first thing, before doing anything else.
  10. Move your + operator into A as a friend function and it works:

      ```cpp
      typedef std::size_t Size;

      template < typename T >
      class A
      {
      public:
          template < Size x >
          class B
          {
              T asdf[x];
          };

          template < Size x >
          friend A operator + ( T value, const B<x>& vec )
          {
              return A();
          }
      };

      int main( int argc, char** argv )
      {
          A<float>::B<2> b;
          5.0f + b; // Works
      }
      ```
  11. In the above example, a smart compiler (like Clang) would determine whether someFunc can be proven not to have side effects. In a nutshell, it goes through functions recursively, depth first, and marks as free of side effects any function that doesn't write to memory other than the stack and only calls other functions already marked as side-effect free. Of course, if the function is defined in a different translation unit, the compiler has to assume that it potentially has side effects (perhaps it would be nice if this information were exported in object files), but IMO that's more of a case for link-time code generation than for micro-optimizing using additional local variables.
  12. Criticism of C++

    I just want to point out that it is not really fair to characterize the C creators as lazy. You have to keep in mind that they wrote the first C compiler in assembly, on a PDP-11 (https://en.m.wikipedia.org/wiki/PDP-11), which had memory measured in kilobytes and a CPU that was probably slower than the cheapest microcontroller on the market nowadays. A lot of things that may sound trivial to us now, with our multi-core CPUs running at frequencies measured in gigahertz, might have been incredibly expensive back then.
  13. Unity a Dilemma

    An area where Flash is, perhaps surprisingly, very much alive is the GUIs/HUDs of PC and console games, due to the widespread use of Scaleform (a Flash interpreter designed to be embedded in C++ games). So there are some jobs out there where knowledge of both Flash and C++ is highly desirable.
  14. Criticism of C++

    Can we not have a civil discourse on the pros/cons of a language without resorting to this? I fully agree with that sentiment. "Use a different language" is a defeatist approach that seems to assume that languages are immutable things that can never be improved, or that what we have now is as good as it gets.

    Complex applications such as games often have many different parts that, under the "use the right language for the job" mantra, would call at a minimum for one language for the tools, one for the engine, and one for the high-level scripting. Perhaps even yet another one for UI scripting, such as ActionScript or JavaScript. The downside is that you end up writing heaps of glue code to interface things written in different languages. Or at least heaps of interface descriptions for tools to generate the glue. Or ugly macros, or whatever else you use to work around the lack of a good introspection system.

    I wrote a lot of tools to do this kind of thing over the years, and I have come up with solutions that I find sufficiently elegant, but I still find it generally clunky. And regardless of how much you automate it, it is always much uglier than necessary. The impedance mismatch between all the languages results in interface APIs whose semantics are always unnatural to some extent in the language in which they are exposed.

    It also makes refactoring much harder. Want to move this bit of logic from here to there? Oops, you moved to a country that speaks a different programming language. Time to rewrite it. There's something to be said for the simplicity of using the same language for everything. Or at least fewer different languages.
  15. Optimisation notwithstanding, maybe it's just a process issue. Can't you run your generation process automatically during the night?