
## Vector Math Class Storage: Array vs. Separate Members


17 replies to this topic

### #1 Ectara  Members

Posted 14 December 2012 - 09:47 AM

It is my understanding that

    struct myStruct {
        int x, y, z;

        int operator [](int index) {
            return (&x)[index];
        }
    };

    myStruct s;
    s.x = 0;


and

    struct myStruct {
        int a[3];

        int operator [](int index) {
            return a[index];
        }

        int x(void) {
            return a[0];
        }

        int y(void) {
            return a[1];
        }

        int z(void) {
            return a[2];
        }
    };

    myStruct s;
    s.a[0] = 0;


are nearly equivalent in terms of speed on modern compilers. The problem arises when I need to access the data by subscript, and also use it like an array of floats (as with OpenGL). The question is which is more tedious to use:

1. Ensuring that every platform supports telling the compiler how to align and pad the members, so that an unsafe cast of the address of the first member to a pointer type can be subscripted or returned, while still being able to refer to each member by name

or

2. Storing the members as an array, allowing me to return a pointer and subscript safely and reliably, and providing inlined x(), y(), and z() member functions that return a[0], a[1], and a[2] respectively

?

I'd prefer the second, as deterministic type safety seems like a better choice.

The downfalls of each, as far as I can tell:

1. Requiring alignment and padding to both be controlled the same way no matter the compiler (if the compiler can't or doesn't do it right, the program could crash!), and an unsafe cast from a single member to a pointer of its type, knowing it will be accessed like an array.

2. Takes a couple more keystrokes to access the data (s.a[0] or s.x() vs s.x), and a little tougher to conceptually grasp.

I believe that a properly inlined s.x() or s.a[0] would be just as fast/slow as s.x; both would dereference a this pointer, and both would use a constant offset from the beginning of the struct (and possibly array) to reach the data, so it seems like it makes no difference in speed.

The pros of each:

1. Simple to use and understand; behaves how you expect a vector class might.

2. Performs reliably, and safely.

Which would be preferable, knowing that my goal is portability, and reliability (I'd rather not depend on compiler pragmas, macros, and settings)? Does the array method have any serious implications, like speed?

### #2 Álvaro  Members

Posted 14 December 2012 - 10:27 AM

I think either method would work fine, and in practice you wouldn't have to worry about things like padding or alignment. The standard only guarantees that method 2 would work. Whether that's important to you is mostly a matter of personal preference.

### #3 Ectara  Members

Posted 14 December 2012 - 01:49 PM

> I think either method would work fine, and in practice you wouldn't have to worry about things like padding or alignment. The standard only guarantees that method 2 would work. Whether that's important to you is mostly a matter of personal preference.

I plan to use SSE or whichever intrinsics are available if the processor has it, so there's a chance that the data must be non-padded and 16 byte aligned to use SSE, for example. Sounds like method 2 is the way to go for me; I try to go for standards compliance when possible.

Edited by Ectara, 14 December 2012 - 01:51 PM.

### #4 Shannon Barber  Moderators

Posted 18 December 2012 - 08:57 PM

ints cannot require padding; the platform is busted if that happens.
If you are concerned about a hidden fourth int being added to provide padding for an array of points, that can and would happen with both structures.
Alignment might be an issue, but it would be the same in both cases as well.

For both structures you need to set the packing to 0.

In the x, y, z case the compiler is free to rearrange the elements, but I've never seen it happen for like types, and you can typically turn that off with the same setting/pragma mechanism that you set the packing with.
- The trade-off between price and quality does not exist in Japan. Rather, the idea that high quality brings on cost reduction is widely accepted.-- Tajima & Matsubara

### #5 Ectara  Members

Posted 18 December 2012 - 10:18 PM

> ints cannot require padding; the platform is busted if that happens.

> In the x, y, z case the compiler is free to rearrange the elements, but I've never seen it happen for like types, and you can typically turn that off with the same setting/pragma mechanism that you set the packing with.

Section 9.2.12:
"Nonstatic data members of a (non-union) class declared without an intervening access-specifier are allocated so that later members have higher addresses within a class object. The order of allocation of nonstatic data members separated by an access-specifier is unspecified (11.1). Implementation alignment requirements might cause two adjacent members not to be allocated immediately after each other; so might requirements for space for managing virtual functions (10.3) and virtual base classes (10.1)."

Unless an access-specifier intervenes between x, y, and z, they must all be in order of declaration. Additionally, it explicitly states that adjacent members might not be allocated immediately after each other, and makes no mention of any types. It's possible that ints could have no padding, or 1024 bytes. I wouldn't want to program on such a system, but it's possible.

The int type was a placeholder, as the actual datatype was irrelevant; I'll use a generic class name from now on. I was more concerned about the difference in ease of use between the two; having them all be separate members allows the compiler to do any of several unpredictable things that could break the code when porting it to another platform. Thus, I decided to go with the array of values, to ensure that each value will be positioned immediately after the previous.

The alignment is already taken care of through means outside of the scope of this example.
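As an aside, one way to detect such an implementation at compile time (a sketch with an assumed `Vec3` name, not from the thread) is a pair of static assertions over `offsetof`:

```cpp
#include <cstddef>

// Hypothetical Vec3 for illustration. If either assertion fails, the
// "cast &x to an array" trick from method 1 is unsafe on this compiler,
// because padding or reordering broke the assumed layout.
struct Vec3 { float x, y, z; };

static_assert(sizeof(Vec3) == 3 * sizeof(float),
              "padding between members: &x cannot be indexed as an array");
static_assert(offsetof(Vec3, y) == 1 * sizeof(float) &&
              offsetof(Vec3, z) == 2 * sizeof(float),
              "members are not contiguous in declaration order");
```

This doesn't make method 1 portable, but it turns a potential runtime crash on an exotic platform into a compile error.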

### #6 iMalc  Members

Posted 19 December 2012 - 12:44 AM

Ah, I've held onto this link for many years, for just such an occasion.
In this thread you'll find the perfect answer I believe. You can have your cake and eat it too...
http://www.gamedev.net/topic/261920-a-slick-trick-in-c/
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms

### #7 Shannon Barber  Moderators

Posted 19 December 2012 - 01:16 AM

Knowing 9.2.12 guarantees order of like types, you can accomplish this with a few lines of code.
If you make them templates, you can toss the static back in if you want to force an instantiation.
I think that would be better in an explicit source file, though (i.e. it's clear which object file it's in, not spread across multiple!)

    struct v3 {
        float x, y, z;
        const float& operator[](int i) const { return (&x)[i]; }
        float& operator[](int i) { return (&x)[i]; }
    };

    struct v4 {
        float w, x, y, z;
        const float& operator[](int i) const { return (&w)[i]; }
        float& operator[](int i) { return (&w)[i]; }
    };

Edited by Shannon Barber, 19 December 2012 - 01:33 AM.

- The trade-off between price and quality does not exist in Japan. Rather, the idea that high quality brings on cost reduction is widely accepted.-- Tajima & Matsubara

### #8 Ectara  Members

Posted 19 December 2012 - 10:59 AM

> Knowing 9.2.12 guarantees order of like types, you can accomplish this with a few lines of code.
> If you make them templates, you can toss the static back in if you want to force an instantiation.
> I think that would be better in an explicit source file, though (i.e. it's clear which object file it's in, not spread across multiple!)
>
>     struct v3 {
>         float x, y, z;
>         const float& operator[](int i) const { return (&x)[i]; }
>         float& operator[](int i) { return (&x)[i]; }
>     };
>
>     struct v4 {
>         float w, x, y, z;
>         const float& operator[](int i) const { return (&w)[i]; }
>         float& operator[](int i) { return (&w)[i]; }
>     };

I've seen this solution before, and despite it looking fancy, it ignores what was also stated above: "Implementation alignment requirements might cause two adjacent members not to be allocated immediately after each other; so might requirements for space for managing virtual functions (10.3) and virtual base classes (10.1)." It may be implementation-defined to have 4-byte int members aligned to 8-byte boundaries. Such a machine would be inefficient, but that makes no difference to the fact that the standard allows an implementation that would cause this example to break. Thus, I've avoided it.

The most important thing that has caused me to make my decision to use the array method is having the guarantee that there will be no padding between elements in the array; it's not the first time I've made this decision.

However, there's now another reason: a lot of this code is designed to be parallelized, and the approach I'm using requires the members be in union with an aligned SIMD datatype, __m128. The most common solutions are:
    struct myStruct {
        union {
            struct { int x, y, z; };
            __m128 v;
        };

        ...
    };


and

    struct myStruct {
        union {
            int a[4];
            __m128 v;
        };

        ...
    };


The issue arises where the generic templated class would use either of the implementations outlined in my OP, but the SSE version would use either of these specialized implementations. The thing is, anonymous structures are not standards-compliant! C11 just added support for them, but C++11 still has none. Anonymous unions are explicitly allowed, but there is no mention of anonymous structs. Thus, in order to comply with the standard, the specialization with the union must contain a named object of the inner structure's type.

However, to keep this consistent with the generic version, the generic version must contain an unnecessary inner structure holding the members. You thereby lose all benefit of fewer keystrokes and a simpler concept, because the concept just became more obfuscated in order to comply with the standard!

So, in conclusion, you can either have code that works some of the time, in some circumstances, on certain compilers, or code that is guaranteed to work the way you expect because it abides by the rules. The individual-member method works only so long as no specialization requires the members to be in a union (aligned and unpadded, in this case), and you either use a switch, or load a temporary array with the values and subscript it in the subscript operator. Otherwise, you take a trip through implementation-defined-behavior land.

### #9 Álvaro  Members

Posted 19 December 2012 - 11:06 AM

> I've seen this solution before, and despite it looking fancy, it ignores what was also stated above: "Implementation alignment requirements might cause two adjacent members not to be allocated immediately after each other; so might requirements for space for managing virtual functions (10.3) and virtual base classes (10.1)." It may be implementation-defined to have 4-byte int members aligned to 8-byte boundaries. Such a machine would be inefficient, but that makes no difference to the fact that the standard allows an implementation that would cause this example to break. Thus, I've avoided it.

What's hard to imagine is an architecture that would require an alignment for members of the same type different from the alignment required of elements in an array. That's why I said that both will work in practice.

If you worry that your compiler might put an infinite loop before returning from main because the standard allows it (at least that's my reading of the section on "observable behavior"), then you may also worry about a compiler putting padding between members of the same basic type. Otherwise, not really something to stress about.

### #10 Ectara  Members

Posted 19 December 2012 - 11:10 AM

> Ah, I've held onto this link for many years, for just such an occasion.
> In this thread you'll find the perfect answer I believe. You can have your cake and eat it too...
> http://www.gamedev.n...ick-trick-in-c/

The content looks interesting, but I'm having a hard time understanding how the pointer array winds up pointing to the actual data, unless I'm misreading it.

### #11 Ectara  Members

Posted 19 December 2012 - 11:12 AM

> I've seen this solution before, and despite it looking fancy, it ignores what was also stated above: "Implementation alignment requirements might cause two adjacent members not to be allocated immediately after each other; so might requirements for space for managing virtual functions (10.3) and virtual base classes (10.1)." It may be implementation-defined to have 4-byte int members aligned to 8-byte boundaries. Such a machine would be inefficient, but that makes no difference to the fact that the standard allows an implementation that would cause this example to break. Thus, I've avoided it.

> What's hard to imagine is an architecture that would require an alignment for members of the same type different from the alignment required of elements in an array. That's why I said that both will work in practice.
>
> If you worry that your compiler might put an infinite loop before returning from main because the standard allows it (at least that's my reading of the section on "observable behavior"), then you may also worry about a compiler putting padding between members of the same basic type. Otherwise, not really something to stress about.

I agree with you, you won't likely encounter this behavior. I'm just saying, if I have a choice between standards compliance and not standards compliance, and they both produce the same results for roughly the same cost, then I have no reason to be non-compliant.

### #12 Álvaro  Members

Posted 19 December 2012 - 11:20 AM

> Ah, I've held onto this link for many years, for just such an occasion.
> In this thread you'll find the perfect answer I believe. You can have your cake and eat it too...
> http://www.gamedev.n...ick-trick-in-c/

> The content looks interesting, but I'm having a hard time understanding how the pointer array winds up pointing to the actual data, unless I'm misreading it.

I just tried this and it seems to work:
    #include <cstddef>
    #include <iostream>

    struct Vector {
        float x, y, z;
        static float Vector::* const a[3];

        float operator[](std::size_t i) const {
            return this->*a[i];
        }

        float &operator[](std::size_t i) {
            return this->*a[i];
        }
    };

    float Vector::* const Vector::a[3] = {&Vector::x, &Vector::y, &Vector::z};

    int main() {
        Vector v = {1.0f, 2.0f, 3.0f};
        for (std::size_t i = 0; i != 3; ++i)
            std::cout << v[i] << '\n';
    }


### #13 SiCrane  Moderators

Posted 19 December 2012 - 11:35 AM

> The content looks interesting, but I'm having a hard time understanding how the pointer array winds up pointing to the actual data, unless I'm misreading it.

The important thing is that they aren't regular pointers; they are pointers to members. Pointers to members are effectively offsets into the class. So &Vector::x would resolve to an offset of zero from the beginning of the class, &Vector::y would be something like 4 bytes into the class, and &Vector::z would be something like 8 bytes into the class. The ->* operator then takes a pointer to a class object and uses the pointer to member to get at the member at that offset.

### #14 Ectara  Members

Posted 19 December 2012 - 12:41 PM

> I just tried this and it seems to work:

> The content looks interesting, but I'm having a hard time understanding how the pointer array winds up pointing to the actual data, unless I'm misreading it.

> The important thing is that they aren't regular pointers; they are pointers to members. Pointers to members are effectively offsets into the class. So &Vector::x would resolve to an offset of zero from the beginning of the class, &Vector::y would be something like 4 bytes into the class, and &Vector::z would be something like 8 bytes into the class. The ->* operator then takes a pointer to a class object and uses the pointer to member to get at the member at that offset.

Hm... I guess it's something that has no analogue in C. I was curious about ->* being its own operator, and being overloadable. I'll have to look more into this concept. Thank you all for the new information. This may allow me to implement something new.

### #15 SiCrane  Moderators

Posted 19 December 2012 - 03:23 PM

The closest C analogue to pointers to members is the offsetof() macro, though in order to do anything useful with offsetof() you need to do manual pointer manipulation that ->* takes care of for you in C++.
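To make the analogy concrete, here is a sketch of that manual manipulation (hypothetical `Vec` and `get_member` names): `offsetof` yields the member's byte offset, and pointer arithmetic recovers the member from a base address, which `->*` does for you in C++.

```cpp
#include <cstddef>

// Illustration only: a C-style stand-in for `v .* member_pointer`.
struct Vec { float x, y, z; };

// Reads the float located byte_offset bytes into v, the way C code
// would combine offsetof() with a char* cast.
float get_member(const Vec& v, std::size_t byte_offset) {
    const char* base = reinterpret_cast<const char*>(&v);
    return *reinterpret_cast<const float*>(base + byte_offset);
}
```

Usage would look like `get_member(v, offsetof(Vec, y))`, mirroring `v.*a[1]` from the pointer-to-member version above.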

### #16 NightCreature83  Members

Posted 19 December 2012 - 04:27 PM

> I think either method would work fine, and in practice you wouldn't have to worry about things like padding or alignment. The standard only guarantees that method 2 would work. Whether that's important to you is mostly a matter of personal preference.

> I plan to use SSE or whichever intrinsics are available if the processor has it, so there's a chance that the data must be non-padded and 16 byte aligned to use SSE, for example. Sounds like method 2 is the way to go for me; I try to go for standards compliance when possible.

If you are moving to SSE, why even implement this or care about its speed, as moving from SSE to the FPU involves a load-hit-store (LHS) anyway? Most libraries I have seen just use the SSE intrinsics for the platform and have an array version for normal vector and point types, with no aligned allocations required. However, they all caution not to move data out of the SSE intrinsics when you can avoid it.

Optimised C++ uses the SSE instructions anyway (when enabled in settings) as soon as you do any floating point work, even if it is just a lone divide where you just need a float result; this confused me when debugging a release build for something else.

Worked on titles: CMR:DiRT2, DiRT 3, DiRT: Showdown, GRID 2, theHunter, theHunter: Primal, Mad Max

### #17  Members

Posted 19 December 2012 - 06:29 PM

> If you are moving to SSE, why even implement this or care about its speed, as moving from SSE to the FPU involves a load-hit-store (LHS) anyway?

Last time I checked, LHS isn't a significant issue unless you happen to be using a PowerPC-based processor...

### #18 Ectara  Members

Posted 19 December 2012 - 09:23 PM

> If you are moving to SSE, why even implement this or care about its speed, as moving from SSE to the FPU involves a load-hit-store (LHS) anyway? Most libraries I have seen just use the SSE intrinsics for the platform and have an array version for normal vector and point types, with no aligned allocations required. However, they all caution not to move data out of the SSE intrinsics when you can avoid it.

I don't understand what you are saying. I have a math vector that uses SSE if it can; if not, it uses the same interface to do all of the calculations normally. It uses compiler intrinsics for SSE. This is worth implementing, and worth the care. Just about all of the operations will be done SSE-side anyway; all operations in the SSE specialization will use SSE only. I've already written it, and the FPU gets no real use after initializing the vector, until the results are inspected, at which point the data gets loaded back into FPU registers to operate upon each element outside of the vector.

In short, the vector isn't the problem here. There will not be a bottleneck until another part of the code decides it needs to read or write the floating point data individually, which will have the penalties that you describe. However, if one is going to do so few mathematical operations that more time is spent converting or moving between processing units on the processor, then they probably should just do the math normally, without a specialized, high-speed vector class. In my opinion, when the code that needs these actions performed incurs this penalty, it accepts the risks and the responsibility. If it ever gets to the point that the vector is the true bottleneck, not anything that uses it, then I will take action.

> Optimised C++ uses the SSE instructions anyway (when enabled in settings) as soon as you do any floating point work, even if it is just a lone divide where you just need a float result; this confused me when debugging a release build for something else.

The current version of GCC emits aligned SSE instructions even when the data is unaligned, causing the program to crash at the highest two optimization settings. Relying on the compiler to do the heavy lifting is not always an option; the compiler can fail, and is failing right now for the very case you describe.
