Max array size C++?

Started by
24 comments, last by phresnel 13 years, 12 months ago
I think people have been missing the context here.

Quote:Original post by TutenStain
I need a way to store a lots of numbers for a .3ds model loader.


Why do you need to store "lots" of numbers? What does "lots" mean? Exactly 5,000? Why 5,000, and not any other number?
Quote:Original post by Zahlman
I think people have been missing the context here.

Quote:Original post by TutenStain
I need a way to store a lots of numbers for a .3ds model loader.


Why do you need to store "lots" of numbers? What does "lots" mean? Exactly 5,000? Why 5,000, and not any other number?


It really depends on the model itself. I'm just coding the loader, and I don't want to put a limitation on the 3D artist. If an object requires 10,000,000 vertices (unlikely), I want to be able to support it, even though it might run a bit slow.
Quote:
It really depends on the model itself. I'm just coding the loader, and I don't want to put a limitation on the 3D artist. If an object requires 10,000,000 vertices (unlikely), I want to be able to support it, even though it might run a bit slow.

Why aren't you using std::vector, then?
Quote:Original post by jpetrie
Quote:
It really depends on the model itself. I'm just coding the loader, and I don't want to put a limitation on the 3D artist. If an object requires 10,000,000 vertices (unlikely), I want to be able to support it, even though it might run a bit slow.

Why aren't you using std::vector, then?


Isn't it slower?
Quote:Original post by TutenStain
Quote:Original post by jpetrie
Quote:
It really depends on the model itself. I'm just coding the loader, and I don't want to put a limitation on the 3D artist. If an object requires 10,000,000 vertices (unlikely), I want to be able to support it, even though it might run a bit slow.

Why aren't you using std::vector, then?


Isn't it slower?

Slower than what?

Simple answer: No. It's as fast as any other array.

In time the project grows, the ignorance of its devs it shows, with many a convoluted function, it plunges into deep compunction, the price of failure is high, Washu's mirth is nigh.

Quote:Original post by jpetrie
Quote:
It really depends on the model itself. I'm just coding the loader, and I don't want to put a limitation on the 3D artist. If an object requires 10,000,000 vertices (unlikely), I want to be able to support it, even though it might run a bit slow.

Why aren't you using std::vector, then?


I do not want to derail the question. You are bound by the stack size if you use static arrays. To get around that problem you can use dynamic memory; then you have roughly your virtual address space available (32-bit / 64-bit).

But it seems you want to hard-code the array size. Why the hell do you want to do that?! Use exactly the size needed. This can probably be read out of the model format before reading the individual components.

Finally, here is a general rule of thumb:

Do not allocate arrays of things with new. It is error-prone and you gain little in performance or memory consistency. Use standard containers.

If you use standard containers, use std::vector if you *need* contiguous memory. In all other cases use std::list, std::deque or std::set, depending on your needs. (std::map is a special case for key/value associations.)

http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=204
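A minimal sketch of the "read the size out of the format first" idea, assuming the .3ds vertex-list chunk (0x4110), which begins with a 16-bit vertex count; `Vertex` and `readVertices` are hypothetical names, not part of any real loader:

```cpp
#include <cstdint>
#include <vector>

struct Vertex { float x, y, z; };

// Size the storage exactly once the count is known, instead of
// hard-coding an array bound. In the .3ds format the vertex count
// is stored before the vertex data, so it can be parsed first.
std::vector<Vertex> readVertices(std::uint16_t count /* parsed from the chunk */) {
    std::vector<Vertex> vertices;
    vertices.reserve(count);       // one allocation, no hard-coded limit
    for (std::uint16_t i = 0; i < count; ++i) {
        Vertex v{};                // real code would read three floats here
        vertices.push_back(v);
    }
    return vertices;
}
```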
Rioki - http://www.rioki.org
I got it working with the new command. Might not be perfect but works for my purposes. Thanks for the suggestions.

int n = 10; // n = number of objects
float (*x)[100000] = new float[n][100000]; // n blocks of 100000 floats; needs delete[] x; later
Quote:Original post by TutenStain
I got it working with the new command. Might not be perfect but works for my purposes. Thanks for the suggestions.

int n = 10; // n = number of objects
float (*x)[100000] = new float[n][100000]; // n blocks of 100000 floats; needs delete[] x; later
I'd still suggest using std::vector. If you resize() it first, element access is exactly as fast as with a raw array, and it has the added benefits of automatically cleaning up its memory when it goes out of scope, offering optional bounds-checked access via at(), and being easily copyable without you having to implement any of that yourself. (Note that operator[] itself does no bounds checking, so you don't pay for it unless you ask.)

If you want to use new and arrays for learning, then go ahead, but the best way to do it, and the way with the least future problems is with a std::vector.
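A minimal sketch of the vector version, mirroring the sizes from the new[] snippet above; `makeStorage` is a hypothetical helper name:

```cpp
#include <cstddef>
#include <vector>

// n objects of perObject floats each, sized up front so element
// access afterwards is as fast as a raw array.
std::vector<std::vector<float>> makeStorage(std::size_t n, std::size_t perObject) {
    return std::vector<std::vector<float>>(n, std::vector<float>(perObject, 0.0f));
}

// Usage:
//   auto x = makeStorage(10, 100000);
//   x[3][42] = 1.0f;            // same indexing as the raw array
//   float v = x.at(3).at(42);   // optional bounds-checked access
//   // no delete[] needed: freed automatically when x goes out of scope
```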
From the tests I've done, vector is exactly as fast as a vanilla array for access. The only thing it's slower on is allocation, which you probably only do at level load anyway. So yeah, vectors are just as fast, dynamically sized, and they relieve you of the burden of memory management.
How are you measuring allocation speed? There is no particular reason why std::vector should be noticeably slower for a one-off allocation. Common implementations only need to set two additional values, one of which you would need to track anyway if you weren't using a container.

Of course, if you are measuring the time to push_back() many elements, then that is a different matter entirely.
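A rough illustration of that difference, counting how often the vector's storage moves via data() (an observation trick for this sketch, not a real benchmark): growing through repeated push_back() reallocates and copies, while reserve() up front gives a single allocation.

```cpp
#include <cstddef>
#include <vector>

// Count how many times push_back() forces the vector's storage to move.
// With reserve(n) first there is one up-front allocation and no moves;
// without it, the buffer is reallocated each time capacity is exhausted.
std::size_t countReallocations(std::size_t n, bool reserveFirst) {
    std::vector<int> v;
    if (reserveFirst) v.reserve(n);
    std::size_t reallocations = 0;
    const int* lastData = v.data();
    for (std::size_t i = 0; i < n; ++i) {
        v.push_back(static_cast<int>(i));
        if (v.data() != lastData) {   // storage moved => a reallocation happened
            ++reallocations;
            lastData = v.data();
        }
    }
    return reallocations;
}
```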

This topic is closed to new replies.
