
Vector Math Class Storage: Array vs. Separate Members


Ectara    3097
It is my understanding that
[CODE]
struct myStruct{
    int x, y, z;

    int operator [](int index){
        return (&x)[index];
    }
};

myStruct s;
s.x = 0;
[/CODE]

and

[CODE]
struct myStruct{
    int a[3];

    int operator [](int index){
        return a[index];
    }

    int x(void){
        return a[0];
    }

    int y(void){
        return a[1];
    }

    int z(void){
        return a[2];
    }
};

myStruct s;
s.a[0] = 0;
[/CODE]

are nearly equivalent in terms of speed on modern compilers. The problem arises when I need to access the vector by subscript and also use it like an array of floats (as with OpenGL). So, which is more tedious to use:

1. Ensuring that every platform lets me tell the compiler how to align and pad the members, so that the address of the first member can be unsafely cast to a pointer and then subscripted or returned, while still being able to refer to each member by name

or

2. Storing the members as an array, allowing me to return a pointer and subscript safely and reliably, and providing inlined x(), y(), and z() member functions that return a[0], a[1], and a[2] respectively

?

I'd prefer the second, as deterministic type safety seems like a better choice.
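
For illustration, here is a rough sketch of how I picture option 2 in practice (the float type and the data() name are just placeholders of mine, not settled design):

[CODE]
struct myVector{
    float a[3];

    // Named accessors; trivial enough that any compiler should inline them.
    float x(void) const { return a[0]; }
    float y(void) const { return a[1]; }
    float z(void) const { return a[2]; }

    float operator [](int index) const { return a[index]; }
    float &operator [](int index) { return a[index]; }

    // Raw pointer for APIs (e.g. OpenGL) that expect an array of floats.
    const float *data(void) const { return a; }
};

// Usage: something like glVertex3fv(v.data());
[/CODE]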

The downfalls of each, as far as I can tell:

1. Alignment and padding must both be controlled the same way no matter the compiler (if the compiler can't or doesn't do it right, the program could crash!), and it requires an unsafe cast from a single member to a pointer of its type, knowing it will be accessed like an array.

2. Takes a couple more keystrokes to access the data (s.a[0] or s.x() vs. s.x), and is a little tougher to grasp conceptually.

I believe that a properly inlined s.x() or s.a[0] would be just as fast/slow as s.x; both would dereference a this pointer, and both would use a constant offset from the beginning of the struct (and possibly array) to reach the data, so it seems like it makes no difference in speed.

The pros of each:

1. Simple to use and understand; behaves how you expect a vector class might.

2. Performs reliably, and safely.

Which would be preferable, knowing that my goal is portability, and reliability (I'd rather not depend on compiler pragmas, macros, and settings)? Does the array method have any serious implications, like speed?

alvaro    21246
I think either method would work fine, and in practice you wouldn't have to worry about things like padding or alignment. The standard only guarantees that method 2 would work. Whether that's important to you is mostly a matter of personal preference.

Ectara    3097
[quote name='Álvaro' timestamp='1355502452' post='5010648']
I think either method would work fine, and in practice you wouldn't have to worry about things like padding or alignment. The standard only guarantees that method 2 would work. Whether that's important to you is mostly a matter of personal preference.
[/quote]
I plan to use SSE or whichever intrinsics are available if the processor supports them, so there's a chance that the data must be unpadded and 16-byte aligned to use SSE, for example. Sounds like method 2 is the way to go for me; I try to go for standards compliance when possible.
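
To illustrate the kind of constraint I mean, a rough sketch (names are placeholders; it assumes a C++11 compiler for alignas, and older compilers would need __attribute__((aligned(16))) or __declspec(align(16)) instead):

[CODE]
#include <xmmintrin.h>

// Sketch only: 16-byte alignment and a four-element array so the data
// fills exactly one SSE register with no padding surprises.
struct alignas(16) myVector{
    float a[4];

    float operator [](int index) const { return a[index]; }
    float &operator [](int index) { return a[index]; }
};

// _mm_load_ps requires its source to be 16-byte aligned.
inline __m128 loadVector(const myVector &v){
    return _mm_load_ps(v.a);
}
[/CODE]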

Shannon Barber    1681
ints cannot require padding. The platform is busted if that happens.
If you are concerned about a hidden fourth int being added to provide padding for an array of points, that can/would happen with both structures.
Alignment might be an issue, but it would be the same in both cases as well.

For both structures you need to set the packing to 0.

In the x,y,z case the compiler is free to rearrange the elements, but I've never seen it happen for like types, and you can typically turn that off with a setting/pragma mechanism similar to the one you use to set the packing.
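
If you want to catch a compiler that does insert padding, a compile-time size check is a cheap sanity test (just a sketch; C++11 static_assert, or the usual negative-array-size trick on older compilers):

[source lang="cpp"]
struct v3
{
    float x,y,z;
};

// Refuses to compile if the compiler padded or enlarged the members.
static_assert(sizeof(v3) == 3 * sizeof(float), "unexpected padding in v3");
[/source]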

Ectara    3097
[quote name='Shannon Barber' timestamp='1355885826' post='5012291']
ints cannot require padding. The platform is busted if that happens.
[/quote]
[quote name='Shannon Barber' timestamp='1355885826' post='5012291']
In the x,y,z case the compiler is free to rearrange the elements, but I've never seen it happen for like types, and you can typically turn that off with a setting/pragma mechanism similar to the one you use to set the packing.
[/quote]
Section 9.2.12:
"Nonstatic data members of a (non-union) class declared without an intervening access-specifier are allocated so that later members have higher addresses within a class object. The order of allocation of nonstatic data members separated by an access-specifier is unspecified (11.1). Implementation alignment requirements might cause two adjacent members not to be allocated immediately after each other; so might requirements for space for managing virtual functions (10.3) and virtual base classes (10.1)."

Unless an access-specifier intervenes between x, y, and z, they must all be laid out in order of declaration. Additionally, it explicitly states that adjacent members might not be allocated immediately after each other, and it makes no exception for any particular types. It's possible that ints could have no padding between them, or 1024 bytes of it. I wouldn't want to program on such a system, but it's possible.

The int type was a placeholder, as the actual datatype was irrelevant; I'll use a generic class name from now on. I was more concerned about the difference in ease of use between the two; having them all be separate members allows the compiler to do any of several unpredictable things that could break the code when porting it to another platform. Thus, I decided to go with the array of values, to ensure that each value will be positioned immediately after the previous.

The alignment is already taken care of through means outside of the scope of this example.

iMalc    2466
Ah, I've held onto this link for many years, for just such an occasion.
In this thread you'll find the perfect answer, I believe. You can have your cake and eat it too...
[url="http://www.gamedev.net/topic/261920-a-slick-trick-in-c/"]http://www.gamedev.net/topic/261920-a-slick-trick-in-c/[/url]

Shannon Barber    1681
Knowing 9.2.12 guarantees the order of like types, you can accomplish this with a few lines of code.
If you make them templates you can toss the static back in if you want to force an instantiation.
I think that would be better in an explicit source file, though (i.e. it's clear which object file it's in, not spread across multiple!).

[source lang="cpp"]
struct v3
{
    float x,y,z;
    const float& operator[](int i) const { return (&x)[i]; }
    float& operator[](int i) { return (&x)[i]; }
};
struct v4
{
    float w,x,y,z;
    const float& operator[](int i) const { return (&w)[i]; }
    float& operator[](int i) { return (&w)[i]; }
};
[/source]

Ectara    3097
[quote name='Shannon Barber' timestamp='1355901393' post='5012361']
Knowing 9.2.12 guarantees the order of like types, you can accomplish this with a few lines of code.
If you make them templates you can toss the static back in if you want to force an instantiation.
I think that would be better in an explicit source file, though (i.e. it's clear which object file it's in, not spread across multiple!).

[source lang="cpp"]
struct v3
{
    float x,y,z;
    const float& operator[](int i) const { return (&x)[i]; }
    float& operator[](int i) { return (&x)[i]; }
};
struct v4
{
    float w,x,y,z;
    const float& operator[](int i) const { return (&w)[i]; }
    float& operator[](int i) { return (&w)[i]; }
};
[/source]
[/quote]
I've seen this solution before, and despite it looking fancy, it ignores what was also stated above: "Implementation alignment requirements might cause two adjacent members not to be allocated immediately after each other; so might requirements for space for managing virtual functions (10.3) and virtual base classes (10.1)." It may be implementation-defined to have 4-byte int members aligned to 8-byte boundaries. A machine that requires that would be inefficient, but it doesn't change the fact that the standard allows for an implementation that would cause this example to break. Thus, I've avoided it.

The most important thing behind my decision to use the array method is the guarantee that there will be no padding between elements in the array; it's not the first time I've made this decision.

However, there's now another reason: a lot of this code is designed to be parallelized, and the approach I'm using requires the members to be in a union with an aligned SIMD datatype, __m128. The most common solutions are:
[CODE]
struct myStruct{
    union {
        struct { int x, y, z; };
        __m128 v;
    };

    ...
};
[/CODE]

and

[CODE]
struct myStruct{
    union {
        int a[4];
        __m128 v;
    };

    ...
};
[/CODE]

The issue arises because the generic templated class would use one of the implementations outlined in my OP, while the SSE version would use one of these specialized implementations. The thing is, anonymous structures are not standards-compliant! C11 just added support for them, but C++11 still has none. Anonymous unions are explicitly allowed, but there is no mention of anonymous structs. Thus, in order to comply with the standard, the specialization with the union must contain a named object of the inner structure's type. However, to keep the generic version consistent, it must contain an unnecessary inner structure holding the members. So you lose all the benefit of fewer keystrokes and a simpler concept, because the concept just became more obfuscated in order to comply with the standard! A sketch of what that ends up looking like is below.
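
Here is roughly what I mean (a sketch with placeholder names, not my actual code):

[CODE]
#include <xmmintrin.h>

struct myStruct{
    struct members{ float x, y, z, w; };

    union {
        members m;  // a named object of a named inner struct type
        __m128 v;
    };

    // operators omitted
};

// Element access now reads s.m.x instead of s.x:
// myStruct s;
// s.m.x = 0.0f;
[/CODE]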

So, in conclusion, you can either have code that works some of the time, in some circumstances, on certain compilers, or have code that is far more likely to work the way you expect because it abides by the rules. The individual-member method works so long as there are no specializations that require the members to be in a union (and aligned and unpadded, in this case), and so long as the subscript operator either uses a switch (sketched below) or loads a temporary array with the values and subscripts that. Otherwise, you take a trip through implementation-defined-behavior land.
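
And here is a sketch of what I mean by using a switch in the subscript operator for the individual-member form (placeholder names again):

[CODE]
struct myVector{
    float x, y, z;

    // Subscripts without ever casting &x and treating it as an array.
    float operator [](int index) const {
        switch(index){
        case 0:  return x;
        case 1:  return y;
        default: return z;
        }
    }
};
[/CODE]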

alvaro    21246
[quote name='Ectara' timestamp='1355936352' post='5012489']
I've seen this solution before, and despite it looking fancy, it ignores what was also stated above: "Implementation alignment requirements might cause two adjacent members not to be allocated immediately after each other; so might requirements for space for managing virtual functions (10.3) and virtual base classes (10.1)." It may be implementation-defined to have 4-byte int members aligned to 8-byte boundaries. A machine that requires that would be inefficient, but it doesn't change the fact that the standard allows for an implementation that would cause this example to break. Thus, I've avoided it.
[/quote]

What's harder to imagine is an architecture that would require alignment of members of the same type different from the alignment required of elements in an array. That's why I said that they will both work in practice.

If you worry that your compiler might put an infinite loop before returning from main because the standard allows it (at least that's my reading of the section on "observable behavior"), then you may also worry about a compiler putting padding between members of the same basic type. Otherwise, not really something to stress about.

Ectara    3097
[quote name='iMalc' timestamp='1355899486' post='5012351']
Ah, I've held onto this link for many years, for just such an occasion.
In this thread you'll find the perfect answer I believe. You can have your cake and eat it too...
[url="http://www.gamedev.net/topic/261920-a-slick-trick-in-c/"]http://www.gamedev.n...ick-trick-in-c/[/url]
[/quote]
The content looks interesting, but I'm having a hard time understanding how the pointer array winds up pointing to the actual data, unless I'm misreading it.

Ectara    3097
[quote name='Álvaro' timestamp='1355936781' post='5012491']
[quote name='Ectara' timestamp='1355936352' post='5012489']
I've seen this solution before, and despite it looking fancy, it ignores what was also stated above: "Implementation alignment requirements might cause two adjacent members not to be allocated immediately after each other; so might requirements for space for managing virtual functions (10.3) and virtual base classes (10.1)." It may be implementation-defined to have 4-byte int members aligned to 8-byte boundaries. A machine that requires that would be inefficient, but it doesn't change the fact that the standard allows for an implementation that would cause this example to break. Thus, I've avoided it.
[/quote]

What's harder to imagine is an architecture that would require alignment of members of the same type different from the alignment required of elements in an array. That's why I said that they will both work in practice.

If you worry that your compiler might put an infinite loop before returning from main because the standard allows it (at least that's my reading of the section on "observable behavior"), then you may also worry about a compiler putting padding between members of the same basic type. Otherwise, not really something to stress about.
[/quote]
I agree with you; you likely won't encounter this behavior. I'm just saying that if I have a choice between standards compliance and non-compliance, and both produce the same results for roughly the same cost, then I have no reason to be non-compliant.

alvaro    21246
[quote name='Ectara' timestamp='1355937014' post='5012496']
[quote name='iMalc' timestamp='1355899486' post='5012351']
Ah, I've held onto this link for many years, for just such an occasion.
In this thread you'll find the perfect answer I believe. You can have your cake and eat it too...
[url="http://www.gamedev.net/topic/261920-a-slick-trick-in-c/"]http://www.gamedev.n...ick-trick-in-c/[/url]
[/quote]
The content looks interesting, but I'm having a hard time understanding how the pointer array winds up pointing to the actual data, unless I'm misreading it.
[/quote]

I just tried this and it seems to work:
[code]#include <iostream>
#include <cstddef>

struct Vector {
    float x, y, z;
    static float Vector::* const a[3];

    float operator[](size_t i) const {
        return this->*a[i];
    }

    float &operator[](size_t i) {
        return this->*a[i];
    }
};

float Vector::* const Vector::a[3] = {&Vector::x, &Vector::y, &Vector::z};

int main() {
    Vector v = {1.0, 2.0, 3.0};
    for (size_t i=0; i!=3; ++i)
        std::cout << v[i] << '\n';
}
[/code]

SiCrane    11839
[quote name='Ectara' timestamp='1355937014' post='5012496']
The content looks interesting, but I'm having a hard time understanding how the pointer array winds up pointing to the actual data, unless I'm misreading it.
[/quote]
The important thing is that they aren't regular pointers; they are pointers to members. Pointers to members are effectively offsets into the class. So &Vector::x would resolve to an offset at the beginning of the class, &Vector::y would be something like 4 bytes into the class, and &Vector::z would be something like 8 bytes into the class. The ->* operator then takes a pointer to a class object and uses the pointer to member to get at the member at that offset.
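
A stripped-down illustration of just that mechanism (the names here are made up for the example):

[code]
struct Point { float x, y; };

// member is an offset-like handle; ->* applies it to the object p points at.
float get(const Point *p, float Point::*member) {
    return p->*member;
}

// get(&pt, &Point::y) reads pt.y without naming the member at the call site.
[/code]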

Ectara    3097
[quote name='Álvaro' timestamp='1355937603' post='5012504']
I just tried this and it seems to work:
[/quote]
[quote name='SiCrane' timestamp='1355938536' post='5012511']
[quote name='Ectara' timestamp='1355937014' post='5012496']
The content looks interesting, but I'm having a hard time understanding how the pointer array winds up pointing to the actual data, unless I'm misreading it.
[/quote]
The important thing is that they aren't regular pointers, they are pointers to members. Pointers to members are effectively offsets into the class. So &Vector::x would resolve to an offset to the beginning of the class, &Vector::y would be something like 4 bytes into the class and &Vector::z would be something like 8 bytes into the class. The ->* operator then takes a pointer to a class object and uses the pointer to member to get at the member at that offset.
[/quote]
Hm... I guess it's something that has no analogue in C. I was curious about ->* being its own operator, and being overloadable. I'll have to look more into this concept. Thank you all for the new information. This may allow me to implement something new.

SiCrane    11839
The closest C analogue to pointers to members is the offsetof() macro, though in order to do anything useful with offsetof() you need to do manual pointer manipulation that ->* takes care of for you in C++.
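
Roughly, the hand-rolled equivalent looks like this (a sketch; it relies on the type being a plain standard-layout struct so offsetof() is well-defined):

[code]
#include <cstddef>

struct Vector { float x, y, z; };

// What ->* does for you, spelled out with offsetof() and pointer arithmetic.
float get_y(const Vector *v) {
    const char *base = reinterpret_cast<const char *>(v);
    return *reinterpret_cast<const float *>(base + offsetof(Vector, y));
}
[/code]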

NightCreature83    5002
[quote name='Ectara' timestamp='1355514596' post='5010714']
[quote name='Álvaro' timestamp='1355502452' post='5010648']
I think either method would work fine, and in practice you wouldn't have to worry about things like padding or alignment. The standard only guarantees that method 2 would work. Whether that's important to you is mostly a matter of personal preference.
[/quote]
I plan to use SSE or whichever intrinsics are available if the processor has it, so there's a chance that the data must be non-padded and 16 byte aligned to use SSE, for example. Sounds like method 2 is the way to go for me; I try to go for standards compliance when possible.
[/quote]
If you are moving to SSE, why even implement this or care about its speed, as moving from SSE to FPU involves an LHS (load-hit-store) anyway? Most libraries I have seen just use the SSE intrinsics for the platform and have an array version for normal vector and point types, with no aligned allocations required. However, they all caution against moving away from the SSE intrinsics when you can avoid it.

Optimised C++ uses the SSE instructions anyway (when enabled in settings) as soon as you do any floating-point work, even if it is just a lone divide where you need a float result; this confused me when debugging a release build for something else.

Adam_42    3629
[quote name='NightCreature83' timestamp='1355956039' post='5012619']
If you are moving to SSE, why even implement this or care about its speed, as moving from SSE to FPU involves an LHS (load-hit-store) anyway?[/quote]

Last time I checked, LHS isn't a significant issue unless you happen to be using a PowerPC-based processor...

Ectara    3097
[quote name='NightCreature83' timestamp='1355956039' post='5012619']
If you are moving to SSE, why even implement this or care about its speed, as moving from SSE to FPU involves an LHS (load-hit-store) anyway? Most libraries I have seen just use the SSE intrinsics for the platform and have an array version for normal vector and point types, with no aligned allocations required. However, they all caution against moving away from the SSE intrinsics when you can avoid it.
[/quote]
I don't understand what you are saying. I have a math vector that uses SSE if it can; if not, it uses the same interface to do all of the calculations normally. It uses compiler intrinsics for SSE. This is worth implementing, and worth caring about. Just about all of the operations will be done SSE-side anyway; all operations in the SSE specialization will use SSE only. I've already written it, and the FPU gets no real use after initializing the vector until the results are inspected, where the data gets loaded back into FPU registers to operate upon each element outside of the vector.

In short, the vector isn't the problem here. There will not be a bottleneck until another part of the code decides it needs to read or write the floating-point data individually, which will incur the penalties that you describe. However, if one is going to do so few mathematical operations that more time is spent converting or moving data between processing units, then one probably should just do the math normally, without a specialized, high-speed vector class. In my opinion, when the code that needs these actions performed incurs this penalty, it accepts the risks and the responsibility. If it ever gets to the point that the vector is the true bottleneck, rather than anything that uses it, then I will take action.
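
For context, here is a rough sketch of the shape of the SSE specialization I'm describing (simplified, with placeholder names; not my actual code):

[CODE]
#include <xmmintrin.h>

struct vec4{
    union {
        float a[4];   // element access for the occasional read-back
        __m128 v;     // the __m128 member already forces 16-byte alignment
    };

    // All arithmetic stays SSE-side; no FPU round trip per operation.
    vec4 operator +(const vec4 &rhs) const {
        vec4 result;
        result.v = _mm_add_ps(v, rhs.v);
        return result;
    }
};
[/CODE]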

[quote name='NightCreature83' timestamp='1355956039' post='5012619']
Optimised C++ uses the SSE instructions anyway (when enabled in settings) as soon as you do any floating-point work, even if it is just a lone divide where you need a float result; this confused me when debugging a release build for something else.
[/quote]
The current version of GCC emits aligned SSE instructions even if the data is unaligned, causing the program to crash on the highest two optimization settings. Relying on the compiler to do the heavy lifting is not always an option. The compiler can fail, and is failing right now for the very case you state.
