Quote:Original post by Hyrcan
Quote:If you want to convince yourself stl is a joke, just make a class with a vector of ints and a couple other primitive members, basically a typical object of some kind. Then make a vector of these classes, each with a dozen ints in the child vector. Add about a million to the vector, then sort it and see how long that takes to complete(for comparison, I can run the same code with my own implementation in about 150ms).
Funny thing is, that takes a few ms using the STL on my machine. So, what now?
You sorted a vector of a million classes on a random key value in the class, each containing a child vector with a dozen items in it, in a few milliseconds, without taking some kind of shortcut? Not that I don't believe you, but I don't believe you: I've recently benchmarked this pretty thoroughly in VC++, and every benchmark I've seen points to other implementations being about the same or even worse.
The result I get is that sorting 1 million ints in a plain array takes about 135ms with the STL. That's a pretty good result, but how often do you have a static array of a million ints?
It takes about 450ms if you use a vector instead of an array. That's kind of crappy, not enough to slit your wrists over, but since a vector is the real-world case for any large data set it does kind of suck, especially when the sort is part of a much more complex algorithm. And that case comes up again and again.
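For reference, something like this is what I'm timing (a minimal sketch, not my exact benchmark code; release build, and the numbers obviously depend on your compiler and machine):

```cpp
#include <algorithm>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <vector>

int main()
{
    const std::size_t N = 1000000;

    // Fill a raw array and a vector with the same million random ints.
    std::vector<int> v(N);
    int* a = new int[N];
    for (std::size_t i = 0; i < N; ++i)
        a[i] = v[i] = std::rand();

    // Time std::sort on the raw array.
    std::clock_t t0 = std::clock();
    std::sort(a, a + N);
    std::clock_t t1 = std::clock();

    // Time std::sort on the vector.
    std::sort(v.begin(), v.end());
    std::clock_t t2 = std::clock();

    std::cout << "array:  " << (t1 - t0) * 1000.0 / CLOCKS_PER_SEC << " ms\n";
    std::cout << "vector: " << (t2 - t1) * 1000.0 / CLOCKS_PER_SEC << " ms\n";

    delete[] a;
    return 0;
}
```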
If you move to the scenario I describe, it will still be executing the day you die unless you replace std::swap and optimize the copy constructor (and doing that for hundreds of classes is not a fun prospect). Which to me is kind of ridiculous. The interface alone is bad enough to suffer through, but outside of benchmarks almost everything you actually want to sort is something more complicated, as shown in the sketch below: a linear octree node, a vertex, that kind of thing.
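To make the scenario concrete, here's a rough sketch (the class name and members are placeholders, not my actual benchmark code). The std::swap specialization covers the swap half; the insertion-sort passes inside most std::sort implementations still go through the copy constructor, which is the other half of the work:

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// Placeholder for the kind of "typical object" being sorted: a couple
// of primitive members plus a child vector holding a dozen ints.
struct Thing
{
    int key;
    float weight;
    std::vector<int> children;

    Thing() : key(std::rand()), weight(0.0f), children(12, 0) {}

    // Cheap member swap: exchanges the inner vector's buffers instead
    // of copying (and reallocating) a dozen ints per element swap.
    void swap(Thing& other)
    {
        std::swap(key, other.key);
        std::swap(weight, other.weight);
        children.swap(other.children);
    }
};

// "Replace std::swap" for this type so the sort's swap-based passes
// use the cheap version instead of copy-construct-and-assign.
namespace std
{
    template<> inline void swap(Thing& a, Thing& b) { a.swap(b); }
}

bool byKey(const Thing& a, const Thing& b) { return a.key < b.key; }

int main()
{
    std::vector<Thing> things;
    things.reserve(1000000);
    for (int i = 0; i < 1000000; ++i)
        things.push_back(Thing());

    // Without the swap above (and a cheap copy for the insertion-sort
    // passes), every element move here drags the child vector along.
    std::sort(things.begin(), things.end(), byKey);
    return 0;
}
```

Whether a given std::sort actually picks up a user-provided swap (via a std::swap specialization or ADL) varies by implementation, which is part of why doing this for hundreds of classes gets tedious.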
Quote:Original post by Codeka
Quote:Original post by thatguyfromthething
Quote:It's certainly not possible without having a profiler
Profiler is another misconception, it's like saying you need a debugger to figure out bugs. It's just raw data, it catches things that amount to mistakes but it really tells you very little. Like the debugger, it is easy to misuse as a crutch that negates actual thought - it's not there to debug logic, but to catch simple mistakes and that's it.
Are you telling me you don't use a profiler when doing performance analysis? If not a profiler then what? How do you know "Implementation A" is faster or slower than "Implementation B" if you don't profile?
You can't get very far just optimizing the obvious; whole-program performance is almost always the bigger issue, and it only gets bigger as your program grows larger and more complex.
Code that runs great in a simple test can cause problems in a large, long-running program, and when people say things like "code size doesn't matter", or that there's no reason to write your own data structures, it tells me they haven't dealt with these issues before. Especially if they say there's no reason to code them at all. If all you've ever used is one implementation and you've never written one yourself, how can you think you know anything about it?
Maybe the confusion is in thinking containers = all data structures.
[Edited by - thatguyfromthething on February 26, 2010 5:35:30 PM]