Getting direct access to vector<unique_ptr<My_Class>>, but there are these factors...


Hello.

So I have a


std::vector<std::unique_ptr<My_Class>> list;

And I want to have direct access to the My_Class object, like this:


My_Class* ptrMy_Class = list[id].get();

The issues:

The list is going to have elements added and erased at run time.

That means list[id] can get erased, and ptrMy_Class has no way to check whether the object is still valid.

So how do I get a pointer or a reference to the object, short of giving each My_Class a unique ID and looping over the whole list checking for a matching ID?

The downside of that is that each My_Class will try to access other My_Class objects at least once a second, and list.size() is more than 1000 at any given time.


Aren't you better off with a map?

Like dejaime said, you're probably looking for std::map.

You might also want to consider using std::shared_ptr instead of std::unique_ptr, as you would have to remove a unique_ptr from the list/map when saving it as a separate variable (or you could use the get() method, but that would defeat the purpose of smart pointers).

Depends on what the id represents and where it comes from (how you calculate/generate it).

If it comes from an ever-increasing id counter of some kind, use unordered_map<int32_t, unique_ptr<Object>>.

If it comes from finding a free index in the vector, keep the vector, and instead of list.erase(), call list[id].reset(). This leaves the (now empty) unique_ptr in the vector, so the slot can be reused for the same id, and calling list[id].get() will give you nullptr.
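A minimal sketch of the counter-plus-unordered_map idea (AddObject and FindObject are hypothetical names, not from this thread):


#include <cstdint>
#include <memory>
#include <unordered_map>

struct Object { /* ... */ };

std::unordered_map<std::int32_t, std::unique_ptr<Object>> objects;
std::int32_t nextId = 0; // ever-increasing id counter

std::int32_t AddObject() {
    std::int32_t id = nextId++;
    objects[id] = std::make_unique<Object>();
    return id;
}

// A stale id simply fails the lookup, so callers can detect erased objects.
Object* FindObject(std::int32_t id) {
    auto it = objects.find(id);
    return it != objects.end() ? it->second.get() : nullptr;
}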

Don't use shared_ptr unless you know for a fact the lifetime of the object needs to be managed by multiple things.

The point of smart pointers is automatic object destruction for stuff allocated on the heap. If you're using a unique_ptr, you never want to save the underlying raw pointer somewhere else, except for temporary reads during which you know the object will still be alive.

devstropo.blogspot.com - Random stuff about my gamedev hobby

The problem here is that "id" is not really an ID but an index into an array (or vector, as it happens). Which, as you noticed, isn't unique and may even change if you insert/delete something at a lower index.

There are two ways around that issue. The first one is the easiest, as proposed by dejaime: Use a map. It's the correct container for "look up object by key". Then you just need to make sure your keys are unique, which is easy (just increment a global counter, or return a static counter which you increment inside a global gen_id() function).
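For illustration, such a gen_id() could be as small as this (a sketch, not code from the thread):


#include <cstdint>

// Hands out a new unique key on every call.
std::uint64_t gen_id() {
    static std::uint64_t next = 0;
    return next++;
}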

Of course, a map is slightly more expensive than a vector on average (objects are not stored in a contiguous area, lookup usually involves 3-4 indirections, and iteration isn't as trivial as ++pointer), so in some cases this just isn't fast enough (usually it is, though!).

In that case, you might keep an implementation backed by a vector, but use handles rather than indices and pointers. What is the difference? A handle uses (abuses) some of the bits in its integer value to encode a revision number. Two handles refer to the same object if both the index and the revision number are the same (i.e. the complete binary value is identical), so you can tell whether an object has been deleted and replaced (same index, different revision) or is still the one you expect.
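A sketch of such a handle, assuming an arbitrary split of 20 index bits and 12 revision bits in a 32-bit value (the widths are my assumption, not from the post):


#include <cstdint>

struct Handle {
    std::uint32_t value; // [ 12-bit revision | 20-bit index ]

    static Handle make(std::uint32_t index, std::uint32_t rev) {
        return Handle{ (rev << 20) | (index & 0xFFFFFu) };
    }
    std::uint32_t index() const { return value & 0xFFFFFu; }
    std::uint32_t rev()   const { return value >> 20; }
};

// Same object only if the whole value matches; a reused slot gets a new
// revision, so stale handles compare unequal instead of dangling.
bool IsSame(Handle a, Handle b) { return a.value == b.value; }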

That is of course a lot more work to implement (and debug!), but it may be somewhat more efficient due to being more compact and more cache-friendly, and lookups only involve a small, constant number of steps. There is ready-to-use code for that in one of the early Game Programming Gems books too, but don't ask me in which one.

I would first profile whether it's worth the trouble, though. For 90% of all cases, using std::map is just good enough (and it is the correct thing to do, it will not make people frown when they look at your code!).

I feel that map will be slow.

Thinking about how many times I will have to search for an object, and that erasing pieces from a vector is also a downside, I decided:
I will have a vector in which I hold all the data. When I want an object removed, I just reset its pointer and save that vector position as empty.
The issue: if a lot of objects get removed and few are added, the vector starts to bloat because it never sizes down. But it gives me easy access, and I can add something that compacts it when 33% of it is just empty slots.


std::vector<std::unique_ptr<My_Class>> list;
std::vector<int> listEmptyPos;

void EraseListElement(int vecID) {
    list[vecID].reset();            // destroys the object; the slot now holds nullptr
    listEmptyPos.push_back(vecID);  // records the slot as empty for later reuse
}

EraseListElement(921); // adds 921 to listEmptyPos and resets the pointer in list
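For completeness, a matching add function could reuse freed slots before growing the vector (AddListElement is a hypothetical name, sketched under the same assumptions):


int AddListElement(std::unique_ptr<My_Class> obj) {
    if (!listEmptyPos.empty()) {
        int id = listEmptyPos.back(); // reuse a previously freed slot
        listEmptyPos.pop_back();
        list[id] = std::move(obj);
        return id;
    }
    list.push_back(std::move(obj)); // no free slot: grow the vector
    return static_cast<int>(list.size()) - 1;
}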

I really need it to be fast; compared to using a map, the gains outweigh the downsides when the size is always going to be 1000+ objects.

Thanks for the suggestions; I posted my solution above.


I feel that map will be slow.

If you found a solution that works for you, great!

However, this kind of reasoning is something you should avoid. Whether you feel, think or guess something is slow is highly irrelevant. The compiler might do all kinds of magic to make any assumptions you have invalid, or your assumptions might be wrong from the get-go.

If you think something is too slow, profile it. That'll tell you if it actually is too slow, or if your bottleneck is elsewhere. If the bottleneck is somewhere else, all your optimizations, time spent and hair loss due to tearing at your own scalp might be for nothing.

If the code is too slow, then profiling before and after changes will tell you if the change actually made it better or worse. In a lot of cases, "clever optimizations" from a programmer can cause the application to run slower.

Of course, this isn't to say that e.g. algorithm knowledge is bad or wasteful -- high level changes related to algorithms are probably one of the best ways of optimizing something, and knowing whether an algorithm is O(n) or O(n^n) can matter a lot.

Also, another quote which might be of relevance:

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

- Donald Knuth

And, just to re-iterate: the only way to know which parts are critically in need of optimization is by using some sort of profiling tool.

Hello to all my stalkers.


I feel that map will be slow.
premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

- Donald Knuth

I agree, but if I am making something I will make it right from the start, instead of spending time redoing it when I find out it's slow. When I know it will be slow, the chances that the compiler would optimize it away are close to none. There may be something outside my knowledge, so profiling would be the choice, but I just don't have the time, and I am certain that the vector approach I described will give me great performance for maybe 10-15% more work, which ends up being ten minutes of coding when I know exactly what I am doing and what I want.


when I know exactly what I am doing and what I want

Do you? Early optimization is a problem when you do not. Believe me, when I am starting a program, I never know precisely what I want. I only really understand my needs once part of the code is done, and possibly after redoing some bits here and there.

Considering that you'd be using a vector to simulate a map, you'd be wasting a lot of memory, and instead of letting the map take care of itself, you'd be concentrating all of its relocation into a single operation.

If your containers will really be so big that a map wouldn't suffice, believe me, your cleanup will halt the program for fractions of a second, causing FPS drops and other issues. This can also affect your physics engine by generating exceptionally big "delta times" for it to simulate with. In addition, searching the way you mentioned would result in O(n) complexity or higher (any empty slots make it worse than O(n)), while std::map has O(log n).

In other words, trying to force a vector into a map would probably result in a lot more work for something that performs slower and uses more memory.

Unless you have good algorithms to insert, remove and search (such as a red-black tree, for example), some neat mechanism to avoid runtime stalls when rebalancing, and possibly an automated memory pool to control your vector... but that is basically recreating the map container.


I agree, but if I am making something I will make it right from the start, instead of spending time redoing it when I find out it's slow

This is a good intent, but it works exactly the other way around. First, you try to do a correct implementation. Correctness is always first.

That is, you choose the algorithms and containers which are "correct" for what you want to achieve (in this case, mapping a key to an object → a map). Then you make sure that the program indeed works correctly and is robust for any possible input. And only then do you make sure that it performs at the required speed on the minimum spec hardware. If it does, all is good, job complete.

Only if you find out that the requirements (say, the game runs fluidly at 30 fps) are not met on the minimum hardware spec that you target do you start to tweak things like this. And you never tweak a part before profiling, because without measuring you have no way of knowing whether you are turning the correct knob. Guessing is a bad advisor.

There is that saying: A good horse jumps over every obstacle. But an excellent horse doesn't jump higher than it needs to.

