
#5308051 Some HDR, tonemapping and bloom questions

Posted by on 26 August 2016 - 09:45 AM

HDR simply means using a larger-than-backbuffer texture format, such as R16G16B16A16, while doing lighting (e.g. adding up light contributions, which can exceed the [0, 1] range), and then applying tonemapping/bloom post-processing effects later in the pipeline?


Tonemapping is about calculating the average luminance (essentially light intensity?) and smoothing it out over the final image?


Actually, as I've been told, HDR just means that first step of storing/calculating lighting in a larger range. Tonemapping is the process of getting this information into a format that can be displayed on regular displays. If we ever had a full HDR monitor, no tonemapping would be needed anymore.
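
To illustrate what such a mapping does, here is the simplest common tonemap curve (Reinhard) as a sketch; real pipelines apply something like this per pixel in a shader, usually scaled by an exposure value derived from the average luminance:

// the simplest tonemap operator: compresses [0, inf) into [0, 1)
float ReinhardTonemap(float hdrValue)
{
    return hdrValue / (1.0f + hdrValue);
}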


If so, where does bloom fit into all this? Are tonemapping and bloom the same thing? Does it make sense to use only one of them, or only both together? (Tonemap vs Bloom vs Tonemap+Bloom)


Bloom is just a light-bleeding effect that is naturally caused by the limitations of a lens. In CG, we use a dedicated bloom pass to simulate this behaviour. So you can have tonemapping without bloom, but bloom works way better with HDR (instead of having a bloom threshold of 0.9, you can now have a value of 9000.0, which will only bloom really bright light sources, for example).
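
To make the threshold idea concrete, here is a minimal CPU-side sketch of a bright-pass filter that would feed the bloom blur; in a real renderer this runs as a fullscreen shader pass over the HDR buffer, and the Color struct here is just for illustration:

#include <algorithm>
#include <cstddef>
#include <vector>

struct Color { float r, g, b; };

// keep only the part of each pixel that exceeds the threshold; with HDR input the
// threshold can sit far above 1.0, so only really bright sources bloom
std::vector<Color> BrightPass(const std::vector<Color>& hdrImage, float threshold)
{
    std::vector<Color> result(hdrImage.size());
    for(std::size_t i = 0; i < hdrImage.size(); ++i)
    {
        const Color& c = hdrImage[i];
        result[i] = { std::max(c.r - threshold, 0.0f),
                      std::max(c.g - threshold, 0.0f),
                      std::max(c.b - threshold, 0.0f) };
    }
    return result;
}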


When calculating the average luminance, is it wise to do it in a compute shader or just mipmap the whole image and read the 1x1 level?


Regarding the different methods of calculating the average luminance, I would personally do it manually, either via a compute shader or repeated fullscreen downsample passes. I have found that the automatic methods fail to produce accurate results, which might depend on the hardware, yet it happened on all of my PCs. The problem I was facing with auto-downsampling was that it wouldn't correctly average the neighbouring pixels when downsampling; it seemed to be using a point filter. The effect was that a single dark area in a specific portion of the screen would cause the average luminance to drop drastically. I haven't checked this in a while, but back then, using manual downsampling actually fixed it.
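
For reference, this is what the average-luminance step boils down to, written as a CPU-side sketch (the usual log-average / geometric mean, which the downsample chain or compute shader approximates on the GPU; the Color struct is again just for illustration):

#include <cmath>
#include <cstddef>
#include <vector>

struct Color { float r, g, b; };

float AverageLuminance(const std::vector<Color>& hdrImage)
{
    if(hdrImage.empty())
        return 0.0f;

    double sum = 0.0;
    for(const Color& c : hdrImage)
    {
        // Rec. 709 luminance weights
        const float lum = 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b;
        sum += std::log(1e-4 + lum); // small epsilon avoids log(0) on black pixels
    }
    // geometric mean, so a few very bright pixels don't dominate the result
    return static_cast<float>(std::exp(sum / hdrImage.size()));
}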


Do tonemapping and bloom require special 3D assets to function properly? I am using Assimp, for example, and am stuck with its restrictions when it comes to asset parsing. The best thing would be if it could just adapt to whatever scene is rendered.


No, not at all. Models don't need to change based on the rendering model anyway, and textures can also stay the same (since e.g. the color texture of a model actually represents the amount of light being absorbed, which stays the same with or without HDR).

#5305716 Is it pathetic to never get actually helpful answers here?

Posted by on 14 August 2016 - 04:01 AM

Since we are on the topic:


A one-line question is less likely to get you a meaningful answer than a medium-length question that gives at least some context and explanation.


For example, unlike Alberth, I wouldn't feel like searching through all your previous topics to see what had happened. So with that in mind, I wouldn't be able to answer even this question. If you gave a short example, it would be easier to see what had happened. Like: "For example, the last time I asked about this, I just got people telling me to do something that was not part of my question".


I just had a brief look at your previous questions, and it seems most of them are kind of short like this. So maybe one thing that could be a reason, and that you can improve on, is giving more information with your question. Nobody likes to read walls of text, but at least show some background and tell us what you actually expect; then you might get answers closer to what you are actually looking for :)

#5303546 Ecs Inheritance Problem (C++)

Posted by on 01 August 2016 - 03:25 PM

EDIT: Pardon me, I missed lexingtons post in between and thought EarthBanana was talking about my code.


In this case, EarthBanana is right, the template serves no purpose. In my example in post #2, the template would auto-generate the TypeId for you. If you haven't worked with templates before, this is an excellent exercise to do so - it was where I originally learned templates myself :) So I would recommend looking at my original code example, reading up on C++ templates and specifically CRTP, and then trying to get a working version that doesn't require you to manually declare "TypeId = X" for every component type (as I originally wrote, that approach is really unsafe and error-prone).

#5303450 Ecs Inheritance Problem (C++)

Posted by on 01 August 2016 - 06:30 AM

Have global arrays of the same fixed size for each component type where the entity ID is a literal index into the array. Creating a new entity anywhere just increases the array size for all component types by 1.


Nononono. Don't use global arrays, for anything, especially in an ECS.


I'm not even going to comment on the implications and evilness of globals from a design standpoint, but what this means is that you can only ever have one set of entities. Is this a restriction you want to live with? Now, in an extremely simple and small game, this might be enough. But in anything just a little more complex, being able to have only one set of entities/components is not good. Here are just a few examples of things I had to do myself/know about:


- Being able to store the default state of all entities/components of the current scene, to implement a "play" mode in an editor like Unity does

- Allowing multiple scenes to be active (essentially what Unity allows/requires for menus/UIs). This could be solved by merging the other scenes' components into the global array, but that would be way messier than having multiple separate component arrays.

- Being able to suspend an inactive scene/map. Think of either a menu or a world map that temporarily replaces the current scene, where the game goes back to normal after leaving it. Some games like Paper Mario 2 also keep at least the last 2 maps you left active, so when you go back and forth the NPCs are still there, with zero load time.

- Implementing a scene/entity/prefab-preview. Simply any secondary view of a different set of entities than what your current scene is displaying.


And that's just what I came up with from my limited set of needs. Sure, if you don't plan on having an editor, most of those points are not a requirement, and there exist workarounds for the rest, but why would you do that when the clean solution is only barely more complicated?


Said solution is, instead of creating a global, you create a context/world/collection-structure, and pass this in to the systems.

struct World
{
    struct PositionMeshComponents
    {
        std::vector<vec3> positions;
        std::vector<mesh> meshes;
    } positionMeshComponents;

    struct PositionWeaponInventoryComponents
    {
        std::vector<vec3> positions;
        std::vector<weapon> weapons;
        std::vector<inventory> inventories;
    } positionWeaponInventoryComponents;
};

system.Run(world); // now I can create as many worlds as I want, and the system won't mind

That's just that, but since you really insisted on those structures being global, I thought such a detailed answer was warranted :)


As for the rest of this idea: I have implemented something similar, but as an optimization on top of everything else. Basically, I have separate vectors of components, but when I access a certain subset of those, like "Position + Mesh", instead of iterating over everything, it looks up a collection/subset of those from a cache. I didn't have the problem of the type-ids becoming too big like you mentioned, because the array I create for this collection is tightly packed, using some template magic:

const auto& vEntities = world.GetEntityCollection<Position, Mesh>();

for(auto& entity : vEntities)
{
    auto& position = entity.GetComponent<Position>();
    auto& mesh = entity.GetComponent<Mesh>();
}

// internally, the object returned by GetEntityCollection would look something like this:

// variadic templates. This is <Position, Mesh> in the code above
template<typename... Args>
struct EntityCollection
{
    // 0 = Position
    // 1 = Mesh
    BaseComponent* vComponents[numEntities][sizeof...(Args)];

    // now we have the problem of how to map "Position" to index 0, and "Mesh" to index 1
    // this can be solved by templates & the std::tuple class

    using Tuple = std::tuple<Args...>; // this creates an std::tuple<Position, Mesh>. We pretty much just need the type, to get the implicit component ID in this array

    template<typename Component>
    Component& GetComponent(unsigned int entityId) // entityId is not the actual UID of the entity, but goes from 0...numEntities in this collection
    {
        constexpr auto componentId = tupleIndex<Component, Tuple>(); // helper function (written with the help of Stack Overflow) that returns the position of a type in the tuple's template argument list
        // note that this is known at compile time - so the compiler could potentially optimize the access even more

        return *(Component*)vComponents[entityId][componentId];
    }
};
This is how it looks for me. Most of this is only possible thanks to C++11, since I rely on variadic templates and std::tuple, but I don't think you will get much better space/runtime than this. You only need to allocate space for what is needed, and the array can be traversed completely contiguously. Oh, and one important distinction: in your system, entities would be stored directly inside these collections based on their components. I just put in pointers to components instead, which requires an additional indirection but is necessary, as entities can be in many different collections (Position, Mesh, Movement, Collision, Sensor etc. might well be processed by 3 different systems).
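
The tupleIndex helper mentioned in the snippet isn't shown; for reference, here is one way such a helper can be written (a sketch using recursive template specialization - the actual implementation may differ):

#include <tuple>

template<typename T, typename Tuple>
struct TupleIndex;

// the searched-for type sits at the front: index 0
template<typename T, typename... Rest>
struct TupleIndex<T, std::tuple<T, Rest...>>
{
    static constexpr unsigned int value = 0;
};

// otherwise, drop the front element and add 1
template<typename T, typename First, typename... Rest>
struct TupleIndex<T, std::tuple<First, Rest...>>
{
    static constexpr unsigned int value = 1 + TupleIndex<T, std::tuple<Rest...>>::value;
};

template<typename T, typename Tuple>
constexpr unsigned int tupleIndex()
{
    return TupleIndex<T, Tuple>::value;
}

static_assert(tupleIndex<int, std::tuple<float, int>>() == 1, "int is the second entry");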


Applying this optimization gave a huge speed boost, but outside of special benchmarking tests (10000+ entities), the old system was more than fast enough for everything. So you shouldn't design your system around something like this. If you have a good level of abstraction, popping in such a collection/cache system should be fairly trivial and can be done whenever you feel you actually need it.


Really, I can't stress it enough - even the most primitive ECS implementation should be fast enough. I started with a system where every entity owned its own components (which were created separately on the heap, btw). Every access to world.GetEntities<Components>() would iterate over all entities and fill a dynamic vector (of unknown size, so not even reserve() was possible). Then the system itself would iterate over all the fitting entities again, and there were something like 20-30 systems. And it still ran at around 2000 FPS (for a 2D game with 50-100 entities per screen). I recently applied pooling/caching as discussed here and it reduced the overhead of the entity system by quite a lot - but as said, I didn't notice any real difference outside of debug mode, 16x speed mode and synthetic benchmarks.

So what I'm saying is: always choose the design that's best to work with first, and then, if you find your system is too slow (or you just feel like cranking up a bit of performance you don't need - because hopefully you are making a hobby project where it doesn't matter, and not trying to ship a game while wasting time needlessly tweaking performance just for the sake of it :) ), start thinking about stuff like pooling/caching and whatnot.

#5303338 Ecs Inheritance Problem (C++)

Posted by on 31 July 2016 - 10:47 AM

dynamic_cast relies on RTTI (noticing a trend?)... the data for which will be primarily stored on disk at runtime. This means most dynamic_cast operations will necessarily contain a fetch from disk in order to retrieve that data. Not only does this make the dynamic_cast itself slow, but it can really hold hostage your ability to exercise hardware caching mechanisms for other systems, like streaming implementation.


Really? This calls for "citation needed", as I don't see why RTTI should store its data on disk (why not keep it in RAM/memory?), and a quick Google search turned up nothing to back the claim that RTTI will access data on disk.


The concepts presented by Juliean are also basically the same way that I solved this problem (though I explicitly avoided creating virtual tables on my Component objects), and I don't consider it particularly unclean.


Yeah, avoiding virtual methods really is a good thing when dealing with components. If you can avoid the vtable altogether, you can just store the component type as a member variable instead :)  (In my case I can't yet, as I require virtual methods for some editor functionality... at some point I will separate editor and runtime data altogether though, because it's much cleaner that way).


I also make use of hash tables rather than maps to get average-case constant-time lookups, under the assumption that I will be executing lookups more often than I will be executing inserts or deletes.


Note that std::unordered_map, which I mentioned, is a hashtable :) - though in the case of entities it can be disadvantageous because of the sheer size of the data structure - 8 bytes (map) vs 32 bytes (unordered_map). In my test cases, using unordered_map was way slower than map, probably because it more than doubles the size of an entity, effectively making the cache hit rate much worse. In any case, benchmark it yourself before deciding - apart from "reserve" on the hashtable, switching the two out should be fairly easy.

#5303320 Ecs Inheritance Problem (C++)

Posted by on 31 July 2016 - 08:23 AM

The way I handled this (and my early implementation was very close to the somewhat popular EntityX-framework):


Each component class/type gets a unique id, counting from 0 onwards. This is handled via CRTP, but could be done otherwise. Either way, now instead of dynamic_cast you can just static_cast and check against your own type id (you've thus implemented a simplistic, fast, but less flexible runtime type system).


Then, in a simple implementation, you can either 1) change your vector to an std::map/unordered_map and store each component directly under its type index, or 2) simply store the component at its type index in the vector and fill all unused entries with nullptr. The latter could potentially waste memory if you have many components and many of your entities use, say, only numbers 31 and 39 and nothing else, though I don't know the exact memory footprint of map/unordered_map in comparison.


Though this is not the cleanest/purest of designs for an ECS (and it goes against the idea of cache coherence that ECS heavily implies), it should be sufficient most of the time. Now, if you want/need to optimize at some point, instead of storing the components in the entity, store them in the system/a specific component pool. Then, when accessing components for a system (like in your example, position + velocity), instead of querying all the entities and retrieving the components from them, you ask the pools for the components and build a subset of all components that are in both pools. If you want to go even further, you can cache this operation so it does not have to be redone unless you added entities/components, but I really wouldn't worry about that until it proves to be a major bottleneck in your code.
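
To make the "build a subset from both pools" step a bit more concrete, here is a rough sketch, assuming each pool maps an entity id to its component (the names and the map-based layout are just for illustration, not any particular library):

#include <cstdint>
#include <unordered_map>
#include <utility>
#include <vector>

using EntityId = std::uint32_t;

struct Position { float x, y; };
struct Velocity { float x, y; };

// collect pointers to the components of every entity that has both a Position and a Velocity
std::vector<std::pair<Position*, Velocity*>> BuildSubset(
    std::unordered_map<EntityId, Position>& positionPool,
    std::unordered_map<EntityId, Velocity>& velocityPool)
{
    std::vector<std::pair<Position*, Velocity*>> subset;
    for(auto& entry : positionPool)
    {
        auto it = velocityPool.find(entry.first);
        if(it != velocityPool.end())
            subset.push_back({ &entry.second, &it->second });
    }
    return subset;
}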


Now back to the type-system, to give you an idea how this is handled, have a look at this code:

struct ACCLIMATE_API BaseComponent
{
    using Family = unsigned int; ///< Component type id

    virtual Family GetFamily(void) const = 0;
    static Family GetFamilyCount(void);

    static Family GenerateFamilyId(void); // returns family_count and increases it by 1

    static Family family_count; /**<    Counter of instantiated components.
        *   Each component of a different type that is created, or whose
        *   family function is called the first time, will increase this
        *   counter. The counter is used as the ID of the next component. */
};


* Component struct

/// The component parent struct
/** Each component must derive from this struct. It automatically sets its
*   unique family id on creation, which can then be accessed via a static
*   method, so you can compare the type of a component object with the type
*   of a component struct. */
template <typename Derived> //template to pick implementation
struct Component :
    public BaseComponent
{
    /** Returns the "family" type of the component.
     *  This method sets the type id for the specific component struct
     *  on the first call, and returns it from there on. The id
     *  is consistent within one run of the program and cannot
     *  be altered, but it may differ between runs, depending
     *  on the first creation order of the components.
     *  @return The unique id of this component type */
    static Family family(void)
    {
        //increase base family count on first access
        static const Family familyId = GenerateFamilyId();
        return familyId;
    }

    Family GetFamily(void) const override final
    {
        return family();
    }
};

Now every component will derive from Component<> in this manner:

class Position :
    public Component<Position>
{
};

This will ensure that each different component type has a unique type id (accessible via Type::family() or baseComponent.GetFamily()). Now you can write methods like this:

Entity entity;

auto pPosition = entity.GetComponent<Position>();

These are able to use the static runtime type id to store/access components reliably.
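
To tie it together, here is a sketch of how the entity could use the family id internally, following option 2 from above (components stored at their family index, unused slots as nullptr). It builds on the BaseComponent/Component code above; the Entity internals themselves are just illustrative, not lifted from my engine:

#include <vector>

class Entity
{
public:
    template<typename T>
    T* GetComponent(void)
    {
        const auto family = T::family(); // the static id generated via the CRTP base
        if(family >= m_vComponents.size())
            return nullptr; // this entity never had such a component
        return static_cast<T*>(m_vComponents[family]);
    }

    template<typename T>
    void AddComponent(T* pComponent)
    {
        const auto family = T::family();
        if(family >= m_vComponents.size())
            m_vComponents.resize(family + 1, nullptr);
        m_vComponents[family] = pComponent;
    }

private:
    std::vector<BaseComponent*> m_vComponents; // index == family id
};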

#5301211 Finding that balance between optimization and legibility.

Posted by on 18 July 2016 - 08:30 AM

2) Eliminate all branches (even if it means more calculations)


Except when the calculations actually outweigh the cost of a mispredicted branch... right? I'm not sure about the details, but shouldn't this misprediction cost be something like ~100 cycles on modern desktop CPUs? So if you can skip calculations that take significantly longer than that, a branch is the better choice.


Also, on desktop, branches that are easy to predict have very little cost. Take something that checks for a memory allocation error and terminates the program when one occurs: the branch will always be false anyway, so the only cost should be the branching operation itself. That's different on consoles without branch prediction (I don't think the current generation added it, did they?), but I haven't programmed on consoles myself so far, so I can't say much about it.

#5300597 Overall Strategy For Move-Semantics? [C++11]

Posted by on 13 July 2016 - 02:25 PM



So, something I've been wondering for a while: how do you generally account for move semantics/rvalue references? What I mean is, in order to take advantage of move semantics, I've generally been doing the following up to now:


1) Give all classes move-ctors, where possible and advantageous. Since I'm making heavy use of STL wrappers, most classes have functioning default move-ctors already, the exceptions being classes making use of std::unique_ptr and the like.


2) When I know that a function will take an object that is expensive to copy, but that it will only be passed a temporary in all foreseeable use cases (e.g. a local function that is only called for one-time object initialization/serialization), I declare the parameter as &&, like this:

class MyClass
{
public:
    using MyVector = std::vector<ComplexClass>;

    void Function(MyVector&& vData)
    {
        m_vData = std::move(vData);
    }

private:
    MyVector m_vData;
};

However, things get a bit more complicated when I cannot foresee how a variable is going to be used, or when I know I will call the function both with temporaries and with fixed data members that cannot be moved anyway.


So, to all you fellow C++11 users, how do you take care of this? I basically see 4 options:


1) Do not account for it at all. Just use

void Function(const MyVector& vData)
{
    m_vData = vData;
}

like you would've done without C++11 and move semantics, and accept the additional copies, memory allocations and deletions. It doesn't matter in most cases (so we're basically in premature-optimization land), and/or maybe the compiler can figure it out by itself (which I doubt in any case where the compiler will not inline the function, but I'm by far no expert).


2) Write both versions, for move and no-move operations:

void Function(const MyVector& vData)
{
    m_vData = vData;
}

void Function(MyVector&& vData)
{
    m_vData = std::move(vData);
}

Obviously this takes care of both use cases, but it requires additional work for every function that benefits from move semantics, and produces code duplication for non-trivial functions. Also, it gets really messy once there are multiple parameters that could have move semantics.


3) Write only the move-semantic version

void Function(MyVector&& vData)
{
    m_vData = std::move(vData);
}

and when calling the function with a non-temporary, explicitly create a temporary:

const MyVector vDataFromSomewhere;
Function(MyVector(vDataFromSomewhere)); // explicit copy into a temporary, which is then moved in

While this is just as efficient for this case as before (the temporary I create will be moved in), it requires additional typing for every non-temporary I pass in (so I now have to specify how I want to pass it, via either a temporary ctor or std::move, for every parameter there is, ugh).


4) Now, the next option is pretty much what sparked this question. I found out that I could do this:

void Function(MyVector vData)
{
    m_vData = std::move(vData);
}

MyVector vTemporaryData;
Function(std::move(vTemporaryData));

This will move vTemporaryData into vData, and then vData into m_vData. So this means that for non-temporaries I don't have to do any additional typing, and it should also be equally efficient. For temporaries it should work just like with MyVector&&, though in both cases there is an additional move-ctor call that would otherwise be avoided (though I imagine the compiler might be able to optimize this out, plus a move-ctor call is really nothing compared to copying a vector of 1000 elements).




So, I know this is not the most important problem in the world and I probably shouldn't worry about it, but it's just something I found interesting and wanted some opinions/real-world stories on. Do you account for move semantics in your code (if you generally use C++11, of course)? If so, do you use any of the four options I presented, or do you decide on a per-case basis like I used to (or maybe there is something completely different that I didn't see, like option 4 for the longest time)?


I find option 4 the best in this regard, though it feels weird and unusual to suddenly be passing all the expensive objects by value instead of by reference... though I guess the same applied before I found out that I could safely return stuff like MyVector from functions thanks to RVO/move semantics.

#5300582 How to automate texture deleting?

Posted by on 13 July 2016 - 12:36 PM

By the way, I have the 4th edition of the C++ Programming Language by the Bjarne guy, which covers c++11, and I haven't even started it. Should I read it or should I search for something that covers C++14. What do you think?


C++11 is fine. Before starting on C++14 you need C++11 anyway, because 14 is for the most part a small addition to 11, and 11 has far more ground-breaking and important changes than 14. So just learn C++11, see how you can apply it to your everyday work, and then figure out what C++14 adds on top of that.

#5300511 How to automate texture deleting?

Posted by on 13 July 2016 - 06:44 AM

Second question: I have 10-20 textures, for every enemy, and I'm wondering: is it better to load all the textures in one place/function ( loadMedia() for example ), or to load every texture in its own enemy class, depending on the enemy?


Definitely load them in one place. Enemies should only reference a texture, and neither own nor load it themselves. A texture should just be an attribute, not functionality, of an enemy class, so it could look something like this:

class Enemy
{
    SDL_Texture* pTexture; // I'm not familiar with SDL, so that's just pseudo-code
};

where you set pTexture when creating an enemy. There are lots of things you can do to improve upon this design, but it should be a good starting point.
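
As a rough sketch of the "load them in one place" idea, assuming SDL2 with the SDL_image extension (IMG_LoadTexture); the TextureCache name and structure are made up for illustration, and error handling is omitted:

#include <SDL.h>
#include <SDL_image.h>
#include <string>
#include <unordered_map>

class TextureCache
{
public:
    explicit TextureCache(SDL_Renderer* pRenderer) : m_pRenderer(pRenderer) {}

    ~TextureCache()
    {
        for(auto& entry : m_textures)
            SDL_DestroyTexture(entry.second); // the cache owns every texture
    }

    SDL_Texture* Get(const std::string& path)
    {
        auto it = m_textures.find(path);
        if(it != m_textures.end())
            return it->second; // already loaded, just hand out the pointer

        SDL_Texture* pTexture = IMG_LoadTexture(m_pRenderer, path.c_str());
        m_textures[path] = pTexture;
        return pTexture;
    }

private:
    SDL_Renderer* m_pRenderer;
    std::unordered_map<std::string, SDL_Texture*> m_textures;
};

// usage when spawning: enemy.pTexture = cache.Get("media/slime.png");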

#5300449 Getting address of an item in a std::vector

Posted by on 12 July 2016 - 04:28 PM

... it will cause a heap corruption. Which I am guessing is caused by 'data' being in the wrong place, created by the class member 'data_set()'.


Are you using a C++11-conformant compiler, in the case of MSVC at least version 2015 or above? If not, then your problem most likely has to do with your "asset" class not having an explicit copy constructor.


What happens here is that when you push a second asset into the vector, it may have to allocate a new memory buffer and copy the assets from the old buffer over. Since you don't have a copy constructor, it will perform a shallow copy, essentially just setting the memblock pointer to the value from the old asset instance. Then the old instance is destroyed, calling "delete memblock" in the destructor (note: if(p) delete p; is not required, delete on nullptr is a valid operation). Anyway, the memory "memblock" points to is now already deleted, but there is a second instance referencing the same memory, namely the copied instance! And once this one is destroyed (e.g. when you add another asset, or when the vectorAsset variable goes out of scope), it will call the Asset destructor, which will try to delete the now-invalid memory region from before, via the memblock member.


The reason I mentioned C++11 is that with it there is a move-ctor, which can also be implicitly generated (in Visual Studio only from 2015 onwards), and this problem would not happen with that. So if you are already using such a setup, your issue lies elsewhere. Either way, this is something you should address, because it is almost certain to create problems at some point, e.g. if you forget to take a reference to an asset and instead take a value copy by accident. What you should do is one of the following:


1) Create a custom copy constructor that makes a deep copy of the memory block owned by the asset.

2) Use a wrapper like std::string, which will automatically take care of that without you having to write a custom copy-ctor.

3) Disallow copying of the asset altogether (by making the copy-ctor private or deleting it via "= delete"), though this will require you to use C++11 and provide a move-ctor instead.


Option 3 would be preferable, though if you require assets to be copyable at some point, you will have to use either 1 or 2 anyway. Having a move-ctor in C++11 really is a bonus though, because otherwise adding any single asset has a huge overhead whenever it requires copying all existing assets via deep copies.
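
As a sketch of option 3, assuming an Asset class that owns a raw char buffer named memblock (as in the discussion; everything else here is illustrative):

#include <cstddef>

class Asset
{
public:
    Asset(char* pBlock, std::size_t size) : memblock(pBlock), size(size) {}

    Asset(const Asset&) = delete;            // no accidental shallow copies
    Asset& operator=(const Asset&) = delete;

    Asset(Asset&& other) noexcept :          // moving just steals the pointer
        memblock(other.memblock), size(other.size)
    {
        other.memblock = nullptr;
        other.size = 0;
    }

    ~Asset()
    {
        delete[] memblock; // assuming the buffer came from new char[]; delete[] on nullptr is fine
    }

private:
    char* memblock;
    std::size_t size;
};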


Posted by on 11 July 2016 - 06:33 PM

In what way? Obviously there'd be the extra instruction to move the value out of the register, but what other performance do you lose? It probably will create a dependency, but OTOH if you used the original value there'd be a dependency anyway. Maybe I'm missing something?


You will trigger a very, very costly load-hit-store, which is especially expensive on consoles but also far from trivial on desktop CPUs. The same happens for conversions from int to float, and to put it briefly, as a TL;DR of the article: you mess up your CPU's pipelining by doing so, which is way worse than an additional move instruction.

#5300270 Spatial Partitioning in a Hyper Light Drifter Kind of Game (Question)

Posted by on 11 July 2016 - 06:03 PM

Yup, size is the important factor here. Let's say you wanted to make exactly Hyper Light Drifter. I'll just go all out and say:


A simple grid will be enough. Given the number of interactive NPCs per screen, a simple O(n^2) "check every NPC against every other NPC" approach, e.g. for collision detection, will be entirely fast enough for this game (from what I've seen there is a maximum of ~10 moving NPCs active at once, mostly enemies, and until you get to multiple hundreds or maybe thousands of items, n^2 algorithms will generally be fast enough).


As long as you don't foresee having something like 10000 NPCs that all need full movement with collision detection at the same time, using something like quadtrees will not gain you much - yes, you will increase performance, but by a factor that doesn't matter. You have 16.66 ms that you can fill. Why would you not choose the faster algorithm, though? Because of the added complexity, which takes development time that could be spent on more important tasks.


(Thinking about it, for a small number of "entities" the simple each-vs-every style algorithms might actually even be faster than having to update and query a complicated data structure like a quadtree, due to cache locality.)
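
For reference, the each-vs-every check mentioned above is about as simple as it gets - a sketch, assuming circle-shaped entities:

#include <cstddef>
#include <vector>

struct Entity { float x, y, radius; };

bool Overlaps(const Entity& a, const Entity& b)
{
    const float dx = a.x - b.x;
    const float dy = a.y - b.y;
    const float r = a.radius + b.radius;
    return dx * dx + dy * dy < r * r; // compare squared distances, no sqrt needed
}

void CheckCollisions(std::vector<Entity>& entities)
{
    for(std::size_t i = 0; i < entities.size(); ++i)
    {
        for(std::size_t j = i + 1; j < entities.size(); ++j) // each pair is tested only once
        {
            if(Overlaps(entities[i], entities[j]))
            {
                // resolve the collision between entities[i] and entities[j]
            }
        }
    }
}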


I'd also like to know how should I store the map. For instance, if there is a zone that is 500x500 tiles, should I separate the map in different smaller "zones"? Would that increase performance?


Well, this won't magically increase performance by itself; it depends on what you do with those zones. For example, when rendering a uniformly sized tilemap, you can easily render any size of tilemap by looking only at as many tiles as the screen can fit, like in this pseudo-code:

const Vector2 startTile = cameraPos / TILE_SIZE;
const Vector2 numTilesToLookAt = screenSize / TILE_SIZE;
const Vector2 endTile = startTile + numTilesToLookAt;

for(int i = startTile.x; i < endTile.x; i++)
{
    for(int j = startTile.y; j < endTile.y; j++)
    {
        // draw tile (i, j)
    }
}

This runs equally fast for a 100*100 tilemap as for a 100000*10000000 tilemap, so you gain nothing by dividing the tilemap into zones. In fact, you might make matters worse, as you will now have to handle drawing from multiple zones instead of just accessing the one zone vector.

I would probably only do this separation if you have to - if your tilemap is so huge, without transitions, that you cannot possibly fit it into memory at once, or if it is huge and you have to append/remove huge blocks at runtime. So not the common case, from what I can see.

#5299917 Finding that balance between optimization and legibility.

Posted by on 09 July 2016 - 05:25 PM

So regardless of how you calculated what your cache hit rate is supposed to be, did you actually measure it?


Second, your function has a whole bunch of if-conditions, some of which are easy to predict (idx == -1), and others that aren't.

for( i = 0; i < 8; i++ )
{
    if( rads == rads_cache[i] )
        break;
}
if( i == 8 )
{
    i = idx;
    rads_cache[i] = rads;
    cosf_cache[i] = cosf( rads );
    sinf_cache[i] = sinf( rads );
    idx = ( idx + 1 ) % 8;
}

While you need to measure this, at least "if( rads == rads_cache[i] )" is going to be very hard if not impossible to predict (the loop might be unrolled, and if not it might be easier to predict, but there's still no guarantee, and the same goes for the i == 8 check). So if your use case really is cos/sin being called exactly 3 times with the same value, then there has got to be a cleaner solution. If it's really just random calls that sometimes share the same value, having an unpredictable branch will cost you more than calling sinf/cosf multiple times. As always, make sure to actually profile, but in general, the times when caching individual trigonometric functions was worth it are over (especially at the cost of a branch). If you can cache a whole set of calculations/trig functions, sure, but if it's just 3x sinf+cosf vs. sinf+cosf+branch, I'd generally take the former (and don't forget to actually measure, to see what is really faster, let alone whether this even makes a measurable difference).

#5298500 [MSVC] Why does SDL initialize member variables?

Posted by on 29 June 2016 - 03:33 AM

I think you're missing the point that the "S" in "SDL" stands for "Security".


It's nothing to do with assisting debugging, it's nothing to do with spotting non-security issues in your code.


So in the case of a pointer it's either initialized or it's not, and if it's not initialized then it's going to be pointing at some memory address that's effectively random. Hence security: a 3rd party could use your uninitialized pointer to gain access to some other arbitrary data in your process address space.


Truth be told, I really didn't get that "security" means this kind of security, lol. I just thought security meant making the program less prone to crashing, but seeing how it also incorporates the /GS switch (for protecting against buffer overruns), it makes perfect sense.


So it seems that is my answer right there. In this case you will obviously want to have SDL turned on all the time in all builds, because there is no point in protecting only your development/debug build. Given that, I think I'll just turn it off altogether; as of now I don't require the additional security (I dare you to hack my offline games :D ), and I certainly don't want the overhead of additional checks for things that I could easily have found using something like cppcheck.