About AthosVG

  1. You might also want to look into sprite atlases for improving CPU-side performance, although they address a different area of CPU overhead (sprite atlases allow the SpriteBatch to pack more sprites into a single draw call), and that would likely break transparency. In any case, if you want to improve performance, don't forget to profile first.
  2. C# Basic C# quiz

    A little off-topic, but it throws the result away in case Address were a function rather than a property, i.e. Person.GetAddress().City will compile but discard the result. This precisely. This kind of question can start a discussion much more easily and can teach you a lot more about the programmer. As has been said before, these are all 'spot-the-bug' questions where you either catch it or go 'aaah' (which doesn't tell you much; either the candidate didn't know, or they knew what's going on but happened to overlook it). Interviews can be stressful, so it becomes easier to overlook such things. Starting an actual conversation rather than turning it into an exam will make candidates feel more at ease and bring the interview more in line with their normal workflow. I get that you're interested in the interviewee's knowledge of C#, but I suggest you leave exam questions to exams as much as possible and start conversations instead, for example by asking 'what is the difference between a reference type and a value type in C#?'. For this particular example, you could, if you're really keen on it, ask the code-example question while or after discussing the topic. I know that in normal problem solving you get no such hint about the topic, but in this case it at least shows the candidate that some additional knowledge is required to answer the question.
  3. Entity-Component systems

    Not necessarily. You can work around it with using declarations (although they impose some duplication):

```cpp
class Light
{
public:
    float GetIntensity() const;
};

class Spotlight : private Light
{
public:
    using Light::GetIntensity;
};
```

It really depends on how you structure things. You can do this if you group all the components together, like in one big array/vector (which is what I did), rather than having each object store its own components. Upon adding/retrieving/deleting a component, you simply navigate to the correct vector containing those components (one approach is to use their typeid() with std::type_index as the key for an unordered_map, just to throw in one option). But yes, this is generally only possible if you store the components together, rather than having them stored by the object/entity itself. Yes, it was somewhat of an off-topic remark; it was related to not having the dynamic addition and removal of components altogether. On a side note, I think something you need to look out for is that you have already spent hours on solving problems you haven't encountered yet. You can keep racking your brain over the perfect solution, but you don't seem to have run into the problem yet. Have you actually tried an approach to GameObjects that can modify behavior at runtime? The only reason I arrived at the way I structure things was that I noticed the amount of boilerplate I was producing and managed to reduce that problem over time, by restructuring things and applying certain patterns based on the problems I was actually facing, starting with plain old composition to structure my 'entities'.
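A minimal sketch of the grouped-storage idea described above. The names (ComponentStore, Add, Count, Position) are made up for illustration, not from the original posts: components of each type live together in one vector, and std::type_index built from typeid(T) selects the right vector.

```cpp
#include <cstddef>
#include <memory>
#include <typeindex>
#include <unordered_map>
#include <utility>
#include <vector>

// Example component type for the sketch.
struct Position {
    Position(float x, float y) : x(x), y(y) {}
    float x, y;
};

// All components of one type live together in a vector; the map finds
// the right vector via std::type_index built from typeid(T).
class ComponentStore {
public:
    template <typename T, typename... Args>
    T& Add(Args&&... args) {
        auto component = std::make_shared<T>(std::forward<Args>(args)...);
        T& ref = *component;
        byType_[std::type_index(typeid(T))].push_back(std::move(component));
        return ref;
    }

    template <typename T>
    std::size_t Count() const {
        auto it = byType_.find(std::type_index(typeid(T)));
        return it == byType_.end() ? 0 : it->second.size();
    }

private:
    // shared_ptr<void> erases the type but keeps the correct deleter.
    std::unordered_map<std::type_index,
                       std::vector<std::shared_ptr<void>>> byType_;
};
```

This is only one way to key the storage; the point is that adding, retrieving, and deleting all reduce to a map lookup followed by vector operations.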
  4. Entity-Component systems

    If you want to avoid composition and the forwarding functions it brings along, sure, but I'd prefer private inheritance where possible; you're primarily intending to substitute for composition, after all. Not if you don't have anything virtual, of course: you don't need a polymorphic Component class, if you need a Component class at all. You can work around this with templates, considering you seem to want to make the adding, removing, etc. of components very generic. Otherwise you could of course just stick with plain old composition.
  5. __declspec(selectany)

    Yes, they see the same x. If x is defined as 5 in a.cpp, b.cpp will see it as 5. If some function in b.cpp assigns the value 10 to x, then a.cpp will see x as 10. Note how in this example the declaration doesn't need to be marked extern, assuming you aren't defining it in a.cpp or b.cpp; after all, in each compilation unit you are referring to the same x anyway, the one declared in h.hpp. What extern allows you to do is declare a variable x that is defined somewhere, some place, but not necessarily in this compilation unit. The same goes for functions and their definitions, although those are implicitly extern. Perhaps it is easier to illustrate with functions. Say you have this in a.cpp:

```cpp
int func();
int func2() { return func(); }
```

and this in b.cpp:

```cpp
int func() { return 0; }
```

This will compile and link fine, because for the first line of a.cpp it is already clear that it is a declaration and not a definition. For a variable this would be ambiguous, hence you mark it with extern to indicate that it is merely a declaration; it is defined later, i.e., the symbol's definition is provided somewhere else, externally. I hope this clarifies that extern is basically just a declaration of the variable and that you can define it later, like with any function. Yes, for all non-static data members you can have a constant expression as an in-class initializer. For const static data members too, but I believe only for the built-in types (int, char, short, etc.); others need to be defined out of line, though I'm not up to date on the exact rules for that one. As for inlining, I'll base my answer off the inlining of functions, for which compilers also emit a non-inline version to take the address of, so: yes, but the variable also still exists despite the inlining's substitution. However, I'm not 100% sure whether that would work the same way.
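Flattened into a single translation unit (so it can actually run), the two-file function example behaves the same way:

```cpp
// The forward declaration makes func usable before its definition appears,
// just as the declaration in a.cpp refers to the definition in b.cpp.
int func();                       // declaration only (a.cpp's first line)
int func2() { return func(); }    // calls func through the declaration
int func() { return 0; }          // the definition (b.cpp's role)
```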
  6. __declspec(selectany)

    Extern does not guarantee that at all. It only says that the definition for the given symbol may exist in another compilation unit, which typically prevents the compiler from complaining that some variable is not defined before use. Neither extern nor static says anything about which file either should be in. Note that the following is completely valid when written in the same file:

```cpp
extern int x;
int x = 5;
```

What static, for a static member, does say is that initialization needs to be done outside of the class body, with the below again being valid when written in a single file:

```cpp
class Foo
{
    static int x;
};

int Foo::x = 5;
```

Don't forget how includes work; once preprocessing is done, the above scenario ends up in a single file regardless ;). Extern in the end just indicates that the definition may be in some other compilation unit (as does static on a member, actually). The symbol is simply externally linked, but that does not disallow you from having the definition inside the same compilation unit as one of its declarations. As for why you would want multiple definitions, you might want to read a little more about the entirety of the COMDAT folding story in this series, though in this specific case with __declspec(selectany), I've only seen it used in some convoluted way to mimic weak linking, which is apparently only supported by Microsoft's linker in undocumented fashion.
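Both same-file patterns discussed above compile and run as-is when combined into one translation unit (the member is made public here purely so it can be inspected):

```cpp
// extern declaration followed by its definition in the same file: valid.
extern int x;
int x = 5;

// A static data member is declared in-class and defined out of line.
class Foo {
public:
    static int x;
};
int Foo::x = 7;
```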
  7. Well, you mostly point it out yourself: __declspec(align(x)) (note the double underscore, by the way) has largely been superseded by alignas. If you really need to support older versions of C++ where alignas was not present, it might be of use, though I'd use something like this myself:

```cpp
#if ...
#define ALIGNAS(x) alignas(x)
#else
#define ALIGNAS(x) __declspec(align(x))
#endif
```

But it is of course not necessary, and you can stick with the pre-C++11 version if you want to ensure that backwards compatibility. Note that GCC/Clang typically require __attribute__((aligned(x))) instead, if you were planning to support either by any chance.
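The standard alignas form is easy to verify with alignof (the struct name Vec4 is just for illustration):

```cpp
// Request 16-byte alignment for the whole struct; alignof reports it back.
struct alignas(16) Vec4 {
    float v[4];
};
```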
  8. Usually you only apply this optimisation where the material is equal, else you have exactly this problem. The best you can usually do if you want to batch-render these as a collapsed mesh is to use texture atlases. Packing more information into the vertices can work if you really want this for some of the uniform parameters, such as an array of colors from which the correct one is picked by an index stored in the vertex, but then you still need the same set of shaders etc. Perhaps it's worth describing what you're looking to get out of it, because even if it were possible to merge them into a single mesh and have multiple materials for that combined mesh, it might still not really reduce the number of draw calls (which is what combining static meshes is often used for).
  9. 1 - The normal stack size is around 4 MB if I recall correctly, but that doesn't mean it can't be deviated from, hence you sometimes get a crash (though someone more knowledgeable on this may know of other reasons). Also, if you're not sure the stack is the same, there lies an explanation as well: you should compare the actual data on the stack, because that may be a cause too. If it's not the exact same playthrough, that is the more likely reason, I'd say. 2 - 1 MB is kind of a safe bet to start out with and, afaik, will be the minimum supported on modern systems. 3 - Just take a look at the stack. You can attach a debugger like Visual Studio, or just run it in that environment in the first place (after all, you're hitting it pretty consistently). Once it hits any exception, it should break and you can view the current stack. Such high stack usage is quite worrisome. It can indicate a large amount of recursion (i.e. a function calling itself), with the stop condition perhaps being incorrect (pretty common); large stack allocations, such as by large arrays or simply huge objects (but I'd consider this less likely); and, even less likely, a lot of indirection, though I'd consider it very unlikely that you'd hit the stack limit with just regular function calls, especially in release. Regardless, having to increase the default stack size is something I'd definitely not consider a decent solution at all, so you should definitely look into the cause.
  10. Needed: Const Policy

    Just a notice in advance: all of this kind of stuff will always be opinion-biased (though I'd say this is an easier one). My short version: use const on every variable and member function whenever you can, as long as you don't need const_cast or mutable to do so, but be consistent about the scenarios in which you apply it. Long version: const is mostly just communication to any other programmer, in the sense of "I will not modify this variable" or "this member function will not change the state of the object". In the end, these could be false, as you can circumvent both 'promises' with const_cast and mutable members, so there's little compilers actually optimise for here, afaik (i.e. it's quite unlikely to make a performance difference). I consider that communication to the programmer of great use. If a function accepts some object by reference, it's good if it's marked const so that I know it will (probably) not modify it. In fact, if it's not marked const, I tend to assume the function will modify it (hence I'd advise at least always using const for parameters passed by reference). If a member function is marked const, I know it will (probably and hopefully) not modify the object's state, making it easier to follow. The same applies to local variables. If a variable is marked const at its declaration, I know it will not be modified, which can be good to know when a local variable is passed into a function, since you can't see at first glance whether that function accepted it by reference, possibly modifying the variable in the process. I know that's not a possibility if it was marked const in the first place, since the reference has to be a const reference for it to compile. Of course, with descriptive names, having to know this can be unnecessary in the first place, so I wouldn't worry too much about it. I'd primarily get in the habit of using const references for parameters and marking member functions const.
Marking non-reference parameters const is less of an issue, as it doesn't change anything on the caller's side, since the value is copied anyway. Adding const in other situations doesn't do much harm, but I'd be careful to stay consistent, so that later you don't have to figure out why something wasn't marked const (I speak from experience). Marking half your local variables const and the other half not, even though they aren't changed either, is arguably worse to read.
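A small sketch of the two habits recommended above; the names CountSpaces and Counter are made up for illustration:

```cpp
#include <cstddef>
#include <string>

// const reference parameter: promises not to modify the caller's string.
std::size_t CountSpaces(const std::string& text) {
    std::size_t count = 0;
    for (char c : text) {
        if (c == ' ') {
            ++count;
        }
    }
    return count;
}

class Counter {
public:
    void Increment() { ++value_; }        // mutates state, so not const
    int Value() const { return value_; }  // const member function: read-only
private:
    int value_ = 0;
};
```

Passing a const-qualified local into CountSpaces compiles precisely because the parameter is a const reference; a function taking `std::string&` would be rejected, which is the guarantee described above.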
  11. Getting nullptr is considering yourself lucky. Uninitialized variables don't default to zero, though a debug build may do that for you. Also, treating warnings as errors is something I'd strongly recommend as well.
  12. Critical errors in CryEngine V code

    I'll give my two cents on this. I don't think I have commented on this series before, and I intend this to be the first and last comment I post in it, as I think this topic has been brought up enough. The only article in this series I could actually get into was the collaboration with the guys from Epic Games on improvements to Unreal Engine. If I recall correctly, there was an actual description of the workflow, as well as how some of the bugs were solved and why some were unsolvable. That, I consider educational and worth reading. To me, as has been posted before, there is little educational about this one. Aside from the intention of the article, which the writing has a clear emphasis towards, I really question, for myself and the author, what the difference is between this article and me using a trial of PVS and viewing the result when run over Crytek's source code. It is not just the tone of the writing; it is not only the conclusion, nor only the intent of the article (buying your product), because even if you fixed all that, I feel like you're providing almost no content beyond an XML file containing the results of a run of your tool. If you are really trying to put in the effort of creating quality content with these articles, I would focus on what is actually interesting about these errors. Were you able to find out what the consequences of a bug were? What was the fix? What side effects would such a fix have? Even that is stretching it, because personally I'm not so sure whether that starts to make things actually interesting, but it may at least give more insight into why we should find the 'bugs'/'issues' you find interesting. Perhaps there are no consequences at all, for all we know; have you verified this whatsoever? Also, focus on quality, not quantity.
We are all aware of how common bugs are; you even assume the high number CryEngine has in your introduction, but we also know that there has probably never been a game shipped without bugs, nor are bugs necessarily interesting in the first place. To the developer it may be nice to filter out those if/else statements with equal expressions in both branches, but I really couldn't give a damn; 'fixing' them would still do the same thing, and is that really worth an article? Finally, although this ties into my other comments, are any of these articles any different from each other? The list of errors might differ, although they probably contain plenty of duplicates, but I honestly haven't seen your conclusions state anything different whatsoever. This feedback is perhaps a little harshly put, but I think it's been brought up plenty before. I understand there are people in favor of these articles too, and I have occasionally scrolled through the errors as well to look around; in rare cases they can be an interesting read. On the other hand, I've also used static code analyzers, and to be fair, they really do tell me roughly the same thing as these articles, albeit with fewer words and a little less product advertisement ;). I don't intend to discourage you from posting at all, but I would seriously reconsider the value of these articles. Given your second-to-last paragraph and the general setup of these articles, you might even want to consider turning them into blog posts or something similar.
  13. Short-circuit evaluation is only available for the relevant boolean operators, && and || (the XOR operator, ^, isn't, since you always need to evaluate both operands anyway). You could say those operators are somewhat 'special', yes, but it is definitely different from what you're expecting. The conditional operators can easily be expressed within the language. For example,

```cpp
if (condition && otherCondition)
```

can be changed to

```cpp
if (condition)
{
    if (otherCondition)
    {
```

which makes sense, because at run time you'll know the value of the first operand and, based on that, can determine whether you need to evaluate the second at all. Having to write those nested if-statements in all scenarios would become a little tiring, though! The evaluation process you describe is vastly different, however. You basically want to pass the argument, which is the result of a function, and only afterwards, once you have entered the function and determined you don't need the value you asked C# to compute upon entering, decide not to compute it after all. More importantly, what if gdc had side effects, such as printing something to console output? You called the function, so you'd at least expect the result to show up at the time of calling it! Such side effects make something like that near impossible, and so is deciding in advance whether a function will have side effects. However, this isn't a problem for either of the boolean operators. What you're looking for is called lazy evaluation. There's at least one language I know that supports it, which is Haskell, a functional programming language without side effects, which makes things quite a bit easier ;). For this case, I'd simply try to modify the structure so that the function you're calling computes gdc itself, and you pass it the arguments you would have passed to gdc. That way, gdc is only computed if the function you call actually evaluates the arguments.
Of course, storing/caching the results of particular calls to gdc would help too, if performance really is a problem.
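The short-circuit behavior is easy to observe with a visible side effect; a minimal sketch (ShortCircuitDemo and the counter are made-up names for illustration):

```cpp
// Counts how often the right-hand operand is actually evaluated.
bool ShortCircuitDemo(int& rightEvaluations) {
    auto right = [&rightEvaluations] {
        ++rightEvaluations;  // visible side effect
        return true;
    };

    bool a = false && right();  // right() is skipped: the left side decides
    bool b = true && right();   // right() runs exactly once
    return !a && b;
}
```

If && did not short-circuit, the counter would end up at 2 instead of 1; the skipped call is exactly the behavior that cannot be extended to arbitrary function arguments in a language with side effects.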
  14. Are you referring to IntelliSense's attempt at pointing out errors/suggestions via syntax highlighting? Because, as pointed out, syntax highlighting and coloring are the same thing. If that's all you want, try the above, although I'd suggest you use a different editor (e.g. Notepad++) altogether in that case.
  15. Animating the drawing of text?

    A mask as described above is likely a good approach. Additionally, if you really require high precision (or have a really long title), a single mask might not suffice, so you could do one per letter or something similar. Expensive, but I doubt performance is going to matter in your title screen. The disadvantage will probably be authoring that animation. You can always opt to look for an existing tool that does this for you and record that instead.