TutenStain

Max array size C++?


I need a way to store lots of numbers for a .3ds model loader. The problem is that it crashes when I try to increase the array size.

Works:

float X[2][5000]; float Y[2][5000]; float Z[2][5000];

Instant crash:

float X[10][5000]; float Y[10][5000]; float Z[10][5000];

Have I reached some kind of limit on the array size, or what?

Or allocate dynamically. Each array takes 5000 * 10 * 4 = 200,000 bytes (with 4-byte floats), and there's plenty of room for that on the heap:

float *X = new float[10 * 5000];   // one contiguous 10 x 5000 block on the heap
X[(y * 5000) + x] = 5.0f;          // element [y][x] of that block
delete[] X;                        // free it when you're done

Quote:
Original post by _fastcall
Probably; you might have reached the stack size limit. You can increase the stack size in Visual C++, under Project Properties, Linker, System.


What do I need to change? This is what I see: http://img541.imageshack.us/img541/3460/51844983.jpg

Quote:
Original post by Decrius
Or allocate dynamically. Each array takes 5000 * 10 * 4 = 200,000 bytes (with 4-byte floats), and there's plenty of room for that on the heap:

float *X = new float[10 * 5000];
X[(y * 5000) + x] = 5.0f;
delete[] X;


Does that still retain the [10][5000] look? I want to be able to access it like [object][vertex] (I don't know how else to explain it).

I wouldn't change the stack size at all; allocate the array with new[] instead. Then you're bound by the available address space (~2 GB on a 32-bit system), not the stack size.

Quote:
Original post by TutenStain
Does that still retain the [10][5000] look? I want to be able to access it like [object][vertex] (I don't know how else to explain it).


No, it doesn't. But in practice you do the same thing; the 2D access just expands to:

X[object][vertex] -> X[(object * max_vertices) + vertex], which equals *(X + (object * max_vertices) + vertex)

So yes, you do need to remember max_vertices.
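A minimal sketch of that flattened indexing, using hypothetical names (max_objects, max_vertices, idx) and the 10 x 5000 sizes from the original post:

const int max_objects  = 10;
const int max_vertices = 5000;

// maps a 2D [object][vertex] pair onto the 1D block
inline int idx(int object, int vertex) { return object * max_vertices + vertex; }

void example()
{
    float *X = new float[max_objects * max_vertices];
    X[idx(3, 42)] = 5.0f;   // the element a 2D array would call X[3][42]
    delete[] X;
}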

Quote:
Original post by Decrius
Quote:
Original post by TutenStain
Does that still retain the [10][5000] look? I want to be able to access it like [object][vertex] (I don't know how else to explain it).


No, it doesn't. But in practice you do the same thing; the 2D access just expands to:

X[object][vertex] -> X[(object * max_vertices) + vertex], which equals *(X + (object * max_vertices) + vertex)

So yes, you do need to remember max_vertices.


Makes sense. But can't I do it this way:

float (*x)[1000] = new float[1000][1000];

Quote:
Original post by mind in a box
That's not how it works. This would be right:

const int X = 1000;
const int Y = 5000;
float* Foo = new float[X*Y];

And then access it like Decrius said.



Thanks, I think I got it. I'll come back here if I have any questions.

Thanks again!

Quote:
Original post by TutenStain
Makes sense. But can't I do it this way:
float (*x)[1000] = new float[1000][1000];


Yes, that will work so long as you know all but the first dimension at compile time.

int n = 10;
float (*x)[1000] = new float[n][1000]; // works
float (*y)[n] = new float[1000][n]; // doesn't work

You may also be interested in boost::array and boost::multi_array.
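For reference, a minimal boost::multi_array sketch, assuming Boost is available (the 10 x 5000 dimensions are the hypothetical counts from earlier in the thread):

#include <boost/multi_array.hpp>

void example()
{
    // a 2D view over one contiguous block; both dimensions can be runtime values
    boost::multi_array<float, 2> X(boost::extents[10][5000]);
    X[3][42] = 5.0f;        // natural [object][vertex] access
}                           // memory is released automatically here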

I think people have been missing the context here.

Quote:
Original post by TutenStain
I need a way to store lots of numbers for a .3ds model loader.


Why do you need to store "lots" of numbers? What does "lots" mean? Exactly 5,000? Why 5,000, and not any other number?

Quote:
Original post by Zahlman
I think people have been missing the context here.

Quote:
Original post by TutenStain
I need a way to store lots of numbers for a .3ds model loader.


Why do you need to store "lots" of numbers? What does "lots" mean? Exactly 5,000? Why 5,000, and not any other number?


It really depends on the model itself. I'm just coding the loader, and I don't want to put a limitation on the 3D artist. If an object requires 10,000,000 vertices (unlikely), I want to be able to support it, even though it might run a bit slow.

Quote:

It really depends on the model itself. I'm just coding the loader, and I don't want to put a limitation on the 3D artist. If an object requires 10,000,000 vertices (unlikely), I want to be able to support it, even though it might run a bit slow.

Why aren't you using std::vector, then?

Quote:
Original post by jpetrie
Quote:

It really depends on the model itself. I'm just coding the loader, and I don't want to put a limitation on the 3D artist. If an object requires 10,000,000 vertices (unlikely), I want to be able to support it, even though it might run a bit slow.

Why aren't you using std::vector, then?


Isn't it slower?

Quote:
Original post by TutenStain
Quote:
Original post by jpetrie
Quote:

It really depends on the model itself. I'm just coding the loader, and I don't want to put a limitation on the 3D artist. If an object requires 10,000,000 vertices (unlikely), I want to be able to support it, even though it might run a bit slow.

Why aren't you using std::vector, then?


Isn't it slower?

Slower than what?

Simple answer: No. It's as fast as any other array.

Quote:
Original post by jpetrie
Quote:

It really depends on the model itself. I'm just coding the loader, and I don't want to put a limitation on the 3D artist. If an object requires 10,000,000 vertices (unlikely), I want to be able to support it, even though it might run a bit slow.

Why aren't you using std::vector, then?


I don't want to sidetrack the question. You are bound by the stack size if you use static arrays. To get around that you can use dynamic memory; then you have roughly your virtual address space (32-bit / 64-bit) available.

But it seems you want to hard-code the array size. Why the hell would you do that?! Use exactly the size needed; it can probably be read out of the model file before reading the individual components.

Finally, here is a general rule of thumb:

Do not allocate arrays of things with new. It is error prone and you gain little in performance or memory consistency. Use the standard containers.

If you use the standard containers, use std::vector when you *need* contiguous memory. In all other cases use std::list, std::deque or std::set, depending on your needs. (std::map is a special case for key/value associations.)

http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=204
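As a rough sketch of that advice (loadCoords and its arguments are hypothetical; a real .3ds loader would read the count out of the file's chunk headers first):

#include <cstddef>
#include <cstdio>
#include <vector>

// size the container from the count stored in the file instead of hard-coding a limit
std::vector<float> loadCoords(std::FILE *file, std::size_t count)
{
    std::vector<float> coords(count);   // exactly as large as this model needs
    if (count != 0)
        std::fread(&coords[0], sizeof(float), count, file);
    return coords;
}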

I got it working with new. It might not be perfect, but it works for my purposes. Thanks for the suggestions.

int n = 10;                                 // n = number of objects
float (*x)[100000] = new float[n][100000];  // 100000 = max vertices per object

Quote:
Original post by TutenStain
I got it working with new. It might not be perfect, but it works for my purposes. Thanks for the suggestions.

int n = 10;                                 // n = number of objects
float (*x)[100000] = new float[n][100000];  // 100000 = max vertices per object
I'd still suggest using std::vector. If you resize() it first, it's exactly as fast as an array, and it has the added benefits of cleaning up its memory automatically when it goes out of scope, offering bounds-checked access (via at(), or in debug builds), and being easily copyable without you having to implement any of that yourself.

If you want to use new and raw arrays as a learning exercise, then go ahead, but the best way to do it, and the way with the fewest future problems, is std::vector.
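A minimal sketch of that approach, reusing the thread's hypothetical 10 x 5000 sizes:

#include <vector>

int main()
{
    std::vector<float> X;
    X.resize(10 * 5000);                // one allocation up front, elements zeroed
    int object = 3, vertex = 42;        // hypothetical indices
    X[object * 5000 + vertex] = 5.0f;   // same indexing as the new[] version
    return 0;                           // no delete[] needed; X frees itself here
}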

From the tests I've done, vector is exactly as fast as a vanilla array for access. The only thing it's slower on is allocation, which you probably only do at level load anyway. So yeah, vectors are just as fast, dynamically sized, and they take the burden of memory management off you.

How are you measuring allocation speed? There is no particular reason why std::vector should be noticeably slower for a one-off allocation. Common implementations only need to set two additional values, one of which you would have to track yourself if you weren't using a container anyway.

Of course, if you are measuring the time to push_back() many elements, then that is a different matter entirely.
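For illustration, a small sketch of the difference (count is just a hypothetical element count):

#include <cstddef>
#include <vector>

void example(std::size_t count)
{
    std::vector<float> a(count);        // one-off allocation, elements zero-initialized

    std::vector<float> b;
    b.reserve(count);                   // one allocation up front...
    for (std::size_t i = 0; i < count; ++i)
        b.push_back(0.0f);              // ...so these push_backs never reallocate

    std::vector<float> c;
    for (std::size_t i = 0; i < count; ++i)
        c.push_back(0.0f);              // without reserve(), the buffer regrows and copies as it fills
}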

From my experience, you shouldn't use raw arrays unless you have a darn good reason for doing so.

For most things you are going to use std::vector, which should be just as fast as (sometimes faster than) a raw array, and it has lots of built-in functionality.

The only time a vector is slow is when you allocate a lot of them; if you allocate 10 million tiny vectors, for example, it's going to take a while.

If you don't need the resizing functionality of std::vector, you can use std::tr1::array, which in Visual Studio 2010 is just plain std::array. It has most of std::vector's interface, but it's fixed-size and allocates faster. I haven't found ANY reason to use a raw array in place of a std::array, so I just quit using them altogether.

Additionally, you should use things like shared_ptr or auto_ptr instead of raw pointers.
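A tiny sketch of the fixed-size container mentioned above (std::array in Visual Studio 2010; std::tr1::array or boost::array on older toolchains):

#include <array>    // <tr1/array> or <boost/array.hpp> on older compilers

void example()
{
    std::array<float, 3> position = { 1.0f, 2.0f, 3.0f };  // fixed size, no heap allocation
    position[0] = 5.0f;
    // size(), begin()/end() and the bounds-checked at() are all available, but it cannot grow
}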

Aargh aargh aargh, why do people still worry about this or carry misconceptions and misinformation? It is 2010, for heaven's sake. We are talking about a core component of the language's standard library, which has been standardized for over a decade.

D:

Just use the std::vector already. The easiest thing is to wrap the "line" of 10 elements (I assume that is an exact count) into some kind of structure (maybe you can give a meaningful name to each one? If not, just toss an array[10] into the struct) and make a vector of that struct.
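A minimal sketch of that suggestion (Line and lineCount are hypothetical names):

#include <cstddef>
#include <vector>

struct Line
{
    float v[10];                        // the fixed "line" of 10 elements
};

void example(std::size_t lineCount)
{
    std::vector<Line> lines(lineCount); // grows to whatever the model actually needs
    if (!lines.empty())
        lines[0].v[3] = 5.0f;           // access stays close to the original [i][j] style
}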

vector<vector<vector<vector<int> > > > hyperArray;

edit: that is, if you want an array where each sub-array has its own size. If you want a surface-like data structure, where the whole thing has e.g. width*height elements, you can write a 2D interface over a single vector (you could also use a vector of vectors, but that would be slightly more work):

#include <vector>

class Foo {
    unsigned int width, height;
    std::vector<int> surface;
public:
    Foo () : width(320), height(240), surface(width * height) {
    }

    int operator() (int x, int y) const {
        return surface[x + y * width];
    }

    int &operator() (int x, int y) {
        return surface[x + y * width];
    }
};
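And a quick usage sketch of the wrapper above:

void example()
{
    Foo surface;                        // 320 x 240, as set in the constructor
    surface(10, 20) = 5;                // assign through the int& overload
    int value = surface(10, 20);        // read it back
    (void)value;                        // silence the unused-variable warning
}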

