# Custom size_t

## Recommended Posts

Hi,

I recently thought about size_t vs std::size_t and came to the conclusion that std::size_t should be used if the code is C++.
Is that conclusion correct?

I then was curious about Unreal Engine and saw this:

```cpp
template<typename T32BITS, typename T64BITS, int PointerSize>
struct SelectIntPointerType
{
    // nothing here; it is an error if the partial specializations fail
};

template<typename T32BITS, typename T64BITS>
struct SelectIntPointerType<T32BITS, T64BITS, 8>
{
    typedef T64BITS TIntPointer; // select the 64 bit type
};

template<typename T32BITS, typename T64BITS>
struct SelectIntPointerType<T32BITS, T64BITS, 4>
{
    typedef T32BITS TIntPointer; // select the 32 bit type
};

typedef SelectIntPointerType<uint32, uint64, sizeof(void*)>::TIntPointer UPTRINT;

typedef UPTRINT SIZE_T;
```

Any thoughts about it? Is it a good and safe approach?
Thank you

##### Share on other sites

Is there a reason you need a target-platform-agnostic integer representation of a pointer to void? It doesn't even make sense to me for serialization/marshalling, since if you're using C++, you really should not be using void* for anything except interfacing with non-C++ code.

The traditional "size-type" is just "whatever unsigned integer is most efficient for the CPU to operate upon". That can be all sorts of sizes, on all sorts of platforms.

##### Share on other sites

I don't know how old the Unreal engine is, but that code may pre-date various std:: members by years.

So indeed, today you would probably write it differently, but existing code doesn't change by itself. That code has likely been working and stable for many years already, and the developers' attention is on other parts of the code.

Then there is also the problem of backwards compatibility. Can you really be sure that all old data files on every platform can still be loaded if you change it?

##### Share on other sites

Short version: yes, use std::size_t in C++ and plain size_t in C. There are a handful of cases where you want more than 32 bits even on 32-bit targets (e.g. when dealing with the size of multi-gigabyte data files that are never loaded into memory in their entirety). In those cases, use std::uint64_t instead. But don't try to reinvent size_t.

As for the Unreal engine code, I can think of two possibilities:

• Workaround for crappy console SDKs.  This may be a valid reason for reinventing size_t, but only if and when you actually run into it.
• NIH syndrome.  This is never a good reason for doing anything.

##### Share on other sites

I imagine the root of this redundancy is the fact that Microsoft's toolchain does (did?) not support C99, and by inference standard C++. C99 introduced size_t and uintptr_t, and C++ has always provided std::size_t and, since C++11 (which is based on C99 instead of C89), std::uintptr_t. If you want cross-platform portability you often have to work around the non-conforming platforms or else give in and accept lock-in.

Always good for party conversation, that Microsoft toolchain.  They try.

##### Share on other sites
2 hours ago, Bregma said:

Microsoft's toolchain does (did?) not support C99, and by inference standard C++

What's the connection between C99 support and C++98/03/11/14/17 support?  C99 syntax isn't valid in any compliant C++ compiler, right?

##### Share on other sites

I'm pretty sure that size_t in C predates C99 anyway.

##### Share on other sites

Well.... In my case I often end up storing size values in structures, and I don't want a size that's too big because it just takes up extra space. In general all my structures are well below 65535 bytes, so I often end up using uint16_t for a size so I can pack it in with other small bits of data. Likewise, they may be defining their own size type because it's important to know exactly how big it is at various points in the library, for instance for alignment somewhere.

##### Share on other sites
15 hours ago, Gnollrunner said:

Well.... In my case I often end up storing size values in structures, and I don't want a size that's too big because it just takes up extra space. In general all my structures are well below 65535 bytes, so I often end up using uint16_t for a size so I can pack it in with other small bits of data. Likewise, they may be defining their own size type because it's important to know exactly how big it is at various points in the library, for instance for alignment somewhere.

For custom byte sizes and object counts you should simply use the appropriate unsigned type, with automatic conversions to and from size_t and related types; you might add some form of compile-time assertion that your size type isn't larger than size_t, but in practice you can just try.

If you care about platform-dependent alignment of size_t struct fields, you gain nothing by using the wrong types: shorter types are incorrect, and longer types waste space and can mask bugs. Realistically the struct is going to be completely different on 32-bit and 64-bit platforms, so you should write it accordingly with preprocessor macros, alignof(), etc. instead of trying to be clever.

##### Share on other sites
Posted (edited)
16 minutes ago, LorenzoGatti said:

so you should write it accordingly with preprocessor macros, alignof() etc. instead of trying to be clever.

I don't want to get into a technical discussion, but alignof() aligns; it doesn't necessarily help you pack. Having fields of a specific size is still often useful if you are trying to optimize things. Also, alignof is C++11 and I don't know when this was originally written. In any case I'm just commenting on why they might be using their own type instead of std::size_t; I'm not really recommending anything.

Edited by Gnollrunner
