

using static initialisation for parallelisation?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

  • You cannot reply to this topic
27 replies to this topic

#1 fir   Members   -  Reputation: -448


Posted 13 June 2014 - 04:44 AM

Months ago I found something like a 'bug' in my code. It appears when turning

static DWORD processID = 0;
processID = GetCurrentProcessId();

into

static DWORD processID = GetCurrentProcessId();

my exe size jumped from 25k to 75k, and I also noticed a slowdown of my game frame (from 3 ms to 4 ms). I still do not know how this is possible (the static initialization should only run once, not each frame) or how to explain it. Is it some cache effect that appears when the exe grows in size? I have no idea.

 

There was an answer from samoth:

 


"One notable difference of whether you compile C or C++ is for example that C++ (at least C++11, but GCC since ca. version 4.6 also does that for C++03) mandates initialization of function-local static variables being thread-safe. I'm mentioning that since you use that feature (maybe without being aware)."

 

If so, I understand that the C++ language did that so it can do static initializations multithreaded. So I mean, code like this:

static int one = f1();    // potentially referencing and updating other data
static int two = f2();
static int three = f3();
static int four = f4();

int main() { printf("\n done."); }

can/will execute on many cores in parallel? Will main then wait until the last of the initializing functions finishes its work? So it could potentially be used as a cheap form of multithreading?

 

 

 




#2 Bregma   Crossbones+   -  Reputation: 4848


Posted 13 June 2014 - 05:12 AM

No. It did it so that if you are initializing function-local statics in a multi-threaded environment, it will have defined results.

It certainly does not affect namespace-level static initialization, nor does it imply anything about the introduction of a threaded environment during application initialization.


Stephen M. Webb
Professional Free Software Developer

#3 fir   Members   -  Reputation: -448


Posted 13 June 2014 - 05:25 AM

No. It did it so that if you are initializing function-local statics in a multi-threaded environment, it will have defined results.

It certainly does not affect namespace-level static initialization, nor does it imply anything about the introduction of a threaded environment during application initialization.

I did not understand that (partly due to my weak English).

So, will this maybe work?

 

int main()
{
    static int one = f1();    // potentially referencing and updating other data
    static int two = f2();
    static int three = f3();
    static int four = f4();

    printf("\n done.");
}
 
 

If this is not done to allow some parallel work, what is it for? And what is the mechanism by which it slowed my program and grew it by 50 KB?



#4 phantom   Moderators   -  Reputation: 6892


Posted 13 June 2014 - 06:53 AM

If this is not done to allow some parallel work, what is it for?
And what is the mechanism by which it slowed my program and grew it by 50 KB?


No, that will not do anything in parallel either.
It is there so that when you call functions from threads, the static data is set up in a thread-safe manner.
To ensure this, the runtime has to add some code/mutex locks to make sure two threads can't try to initialise at the same time.

void foo()
{
static int one = f1(); // potentially referencing and updating other data
static int two = f2();
static int three = f3();
static int four = f4();
}

int main()
{
parallel_invoke(foo, foo); // some function which will run supplied functions on multiple threads
printf("\n done.");
}


So, in the above code 'foo' is invoked on two threads via the helper function. In C++03 the static initialisation of the function's variables would not be safe as both threads might end up running the setup functions. In C++11 only one thread would.

#5 Bregma   Crossbones+   -  Reputation: 4848


Posted 13 June 2014 - 07:43 AM


what is the mechanism by which it slowed my program and grew it by 50 KB?

I suspect, without proof, that it pulled in some library code to wrap and serialize the initialization of the function-local statics, and the slowdown is because that code gets executed.  Thread serialization generally requires context switches and pipeline stalls.  Without knowing the code, I suspect that code path needs to be executed every time so it can check to see if the object has been initialized.


Stephen M. Webb
Professional Free Software Developer

#6 fir   Members   -  Reputation: -448


Posted 13 June 2014 - 07:53 AM

 

If this is not done to allow some parallel work, what is it for?
And what is the mechanism by which it slowed my program and grew it by 50 KB?


No, that will not do anything in parallel either.
It is there so that when you call functions from threads, the static data is set up in a thread-safe manner.
To ensure this, the runtime has to add some code/mutex locks to make sure two threads can't try to initialise at the same time.


void foo()
{
    static int one = f1(); // potentially referencing and updating other data
    static int two = f2();
    static int three = f3();
    static int four = f4();
}

int main()
{
    parallel_invoke(foo, foo); // some function which will run supplied functions on multiple threads
    printf("\n done.");
}



So, in the above code 'foo' is invoked on two threads via the helper function. In C++03 the static initialisation of the function's variables would not be safe as both threads might end up running the setup functions. In C++11 only one thread would.

 

I do not understand. Isn't the static data initialized at startup (pre-main) time?

If you call it from 2 threads, these statics are shared, but if they were initialized before, in pre-main time, then there can be no collision between threads initializing them (as the threads do not initialize them; they are already initialized)(?)

I'm not quite sure, as I have done only a little multithreading in my life, but I understand that static data is not safe at all, so you need thread-local storage(?)



#7 fir   Members   -  Reputation: -448


Posted 13 June 2014 - 08:02 AM

 


what is the mechanism by which it slowed my program and grew it by 50 KB?

I suspect, without proof, that it pulled in some library code to wrap and serialize the initialization of the function-local statics, and the slowdown is because that code gets executed.  Thread serialization generally requires context switches and pipeline stalls.  Without knowing the code, I suspect that code path needs to be executed every time so it can check to see if the object has been initialized.

 

 

 

I can provide the code if you want

 
#define WIN32_LEAN_AND_MEAN
#define WIN32_EXTRA_LEAN
#include <windows.h>
 
#include <psapi.h>
 
#include "..\allmyheaders\allmyheaders.h" //that was temporary skip it
 

long working_set_size = -1;
long paged_pool_size = -1;
long nonpaged_pool_size = -1;
 
static PROCESS_MEMORY_COUNTERS process_memory_counters;
 
long GetMemoryInfo()
{
 
    static HANDLE hProcess;
 
    static int initialised = 0;
    if(!initialised)
    {
     static DWORD processID = GetCurrentProcessId(); // <---THIS CRITICAL LINE
 
     hProcess = OpenProcess(  PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, processID );
 
     if (NULL == hProcess)  ERROR_EXIT("cant open process for get memory info");
 
     initialised = 1;
 
 
    }
 
 
    if(GetProcessMemoryInfo(hProcess, &process_memory_counters, sizeof(process_memory_counters)))
    {
      working_set_size    = process_memory_counters.WorkingSetSize;
      paged_pool_size     = process_memory_counters.QuotaPagedPoolUsage;
      nonpaged_pool_size  = process_memory_counters.QuotaNonPagedPoolUsage ;
    }
 
    if(0)
      CloseHandle( hProcess );
 
    return working_set_size;
}
 

Damn that bug that cuts the rest of the post after the /code tag (it ate 20 lines of text).

 

This is code for measuring memory consumption in my Windows program. It is called each frame, but the init block runs only once, as you can see (a technique with an 'initialised' flag of my own 'invention', though probably other people use it too, as it works fine).

Understanding the reason for this and what is really going on under the hood would be very welcome, as this is a slowdown and exe-bloat pitfall for me (it is obviously not needed in my case; as I said, breaking that line in two makes execution faster and the exe smaller).


Edited by fir, 13 June 2014 - 08:07 AM.


#8 Hodgman   Moderators   -  Reputation: 28591


Posted 13 June 2014 - 08:07 AM

No, function-scope statics are initialized only when the function is first executed. Imagine there's a bool that's initialized before main, which is checked in an "if !initialized" on every function call.

#9 fir   Members   -  Reputation: -448


Posted 13 June 2014 - 08:26 AM

No, function-scope statics are initialized only when the function is first executed. Imagine there's a bool that's initialized before main, which is checked in an "if !initialized" on every function call.

 

If so, all right; then I understand the possibility of a collision.

Anyway, this is a pitfall/trap for me: something implicitly slowing down and bloating my program.

There should be a large text: * * * WARNING: POSSIBLE CODE SLOWDOWN (reason here) * * *

It would be nice to know what is really put there. Just some critical section around this function? Why does it slow things down so much if it is only run once?

Does the MinGW compiler have more such slowdown pitfalls? (I'm compiling pure C WinAPI code, trying to be very careful about any slowdown in my generated code; it would be very important to me to be sure that I have avoided all such overhead slowdowns.)



#10 phantom   Moderators   -  Reputation: 6892


Posted 13 June 2014 - 08:45 AM

It would be nice to know what is really put there. Just some critical section
around this function? Why does it slow things down so much if it is only run once?


The initialisation code is only called once, but the 'is initialised' flag must be checked every time the function is run; depending on the compiler this could involve locking a mutex before the check, which is not a cheap operation.
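Roughly, the compiler-inserted machinery behaves like the following hand-written sketch (the names `get_one` and the stand-in `f1` are hypothetical; real implementations call `__cxa_guard_acquire`/`__cxa_guard_release` with proper memory ordering rather than using a plain bool):

```cpp
#include <mutex>

int f1() { return 123; }      // hypothetical stand-in for an expensive initializer

namespace detail {
    bool guard = false;       // the hidden "is initialised" flag, checked every call
    std::mutex guard_mutex;   // serializes competing first-time callers
    int one_storage;          // storage for the static variable itself
}

// Roughly what `static int one = f1();` expands to. Note: with a plain bool
// this is the classic double-checked-locking pattern and needs atomics or
// barriers to be formally correct; real runtimes handle that internally.
int get_one() {
    if (!detail::guard) {                                 // fast path, every call
        std::lock_guard<std::mutex> lock(detail::guard_mutex);
        if (!detail::guard) {                             // re-check under the lock
            detail::one_storage = f1();
            detail::guard = true;
        }
    }
    return detail::one_storage;
}
```

The per-call cost on the fast path is one flag load and branch; the mutex is only touched during the race to initialize.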

#11 fir   Members   -  Reputation: -448


Posted 13 June 2014 - 08:53 AM

 

It would be nice to know what is really put there. Just some critical section
around this function? Why does it slow things down so much if it is only run once?

The initialisation code is only called once, but the 'is initialised' flag must be checked every time the function is run; depending on the compiler this could involve locking a mutex before the check, which is not a cheap operation.

 

But I use tens of these 'is initialised' flags in every program and I see no slowdown with them. Does the C++ standard mean that if I use any static variable it will guard it all with locks? Hell no, I hope.



#12 fir   Members   -  Reputation: -448


Posted 13 June 2014 - 09:03 AM

 


what is the mechanism by which it slowed my program and grew it by 50 KB?

I suspect, without proof, that it pulled in some library code to wrap and serialize the initialization of the function-local statics, and the slowdown is because that code gets executed.  Thread serialization generally requires context switches and pipeline stalls.  Without knowing the code, I suspect that code path needs to be executed every time so it can check to see if the object has been initialized.

 

 

I never heard the word 'serialization' used in that sense (serialisation usually meant saving some kind of data to disk), though this meaning is quite usable.

If so, doesn't it mean that WinAPI functions are "serialised"? (Are the WinAPI functions serialised?) I use a couple of WinAPI function calls in each frame; I even measured the time for some of them and they were quick. I don't remember exactly which, but something like GetDC and similar, and they were all quick (microseconds, maybe 100 microseconds at most, on that scale).


Edited by fir, 13 June 2014 - 09:16 AM.


#13 SeanMiddleditch   Members   -  Reputation: 4776


Posted 13 June 2014 - 01:05 PM

But I use tens of these 'is initialised' flags in every program and I see no slowdown with them. Does the C++ standard mean that if I use any static variable it will guard it all with locks? Hell no, I hope.


C++11 changed the required behavior here. Some compilers support much of C++11 but not this feature. Other compilers have compile options to turn it off.

Instead of guessing what the compiler is doing, _look at the assembly output_. I can't stress this enough. Real engineers delve into how the boxes they build off of are constructed.

Consider:
 
#include <stdlib.h>

int foo() {
  static int bar = rand();
  return bar;
}
On GCC 4.9 with full optimizations, this produces:
 
foo():
	cmp	BYTE PTR guard variable for foo()::bar[rip], 0
	je	.L2
	mov	eax, DWORD PTR foo()::bar[rip]
	ret
.L2:
	sub	rsp, 24
	mov	edi, OFFSET FLAT:guard variable for foo()::bar
	call	__cxa_guard_acquire
	test	eax, eax
	jne	.L4
	mov	eax, DWORD PTR foo()::bar[rip]
	add	rsp, 24
	ret
.L4:
	call	rand
	mov	edi, OFFSET FLAT:guard variable for foo()::bar
	mov	DWORD PTR [rsp+12], eax
	mov	DWORD PTR foo()::bar[rip], eax
	call	__cxa_guard_release
	mov	eax, DWORD PTR [rsp+12]
	add	rsp, 24
	ret
It won't take a lock every single time, but it does check a global boolean. The gist is something like:
 
if not initialized
  lock
  if not initialized
    set initial value
    initialized = true
  end if
  unlock
end if
C++11 only requires that function-scope static initialization is thread-safe, so different compilers or different runtimes may implement this less efficiently.

Note that this only applies to initialization of function-local static variables (to non-zero values). The following bit of code can have the lock optimized away with no non-standard effects:
 
#include <stdlib.h>

bool foo() {
  static bool bar = false;
  if (!bar)
    bar = rand() == 0;
  return bar;
}
Compiles to:
 
foo():
	movzx	eax, BYTE PTR foo()::bar[rip]
	test	al, al
	je	.L7
	ret
.L7:
	sub	rsp, 8
	call	rand
	test	eax, eax
	sete	al
	mov	BYTE PTR foo()::bar[rip], al
	add	rsp, 8
	ret
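By contrast, statics with constant initializers need no guard at all: the value is baked into the binary's data segment at compile/link time (constant or zero initialization), so neither a flag check nor `__cxa_guard_*` calls are generated. A minimal illustration (names hypothetical):

```cpp
static int answer = 42;  // constant initialization: stored in the data segment,
                         // no runtime guard or flag check is generated
static int zeroed;       // zero-initialized at load time, likewise guard-free

int get_answer() { return answer + zeroed; }
```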

Edited by SeanMiddleditch, 13 June 2014 - 01:10 PM.


#14 Bregma   Crossbones+   -  Reputation: 4848


Posted 13 June 2014 - 01:10 PM

Anyway, this is a pitfall/trap for me: something implicitly slowing down and bloating my program.
 
There should be a large text: * * * WARNING: POSSIBLE CODE SLOWDOWN (reason here) * * *

 
 Yes, it sort of goes against the C++ philosophy of "pay only for what you use."  Could be argued, however, that you're using function-local static variables so you're paying the price.  That argument is getting kind of sketchy, though, because it can be countered with "but I'm not using multiple threads, so why should I pay the price?"

Beware of letting a committee near anything, even for a minute.
 

I never heard the word 'serialization' used in that sense (serialisation usually meant saving some kind of data to disk), though this meaning is quite usable.

Yes, I've run into that before. A lot of people use 'serialization' to mean streaming data, a synonym for 'marshalling'. I understand Java used that in its docs and it took off from there. Perhaps it originated from the act of sending data over a serial port (RS-232C) although we always used the term 'transmit' for that (and 'write to disk' for saving to disk, maybe 'save in text format' to be more explicit).

I'm using 'serialization' in its original meaning: enforce the serial operation of something that could potentially be performed in parallel or in simultaneous order. The usage predates the Java language and so do I. I apologize for the confusion. If anyone can suggest a better term, I'm open to suggestions.
Stephen M. Webb
Professional Free Software Developer

#15 Andy Gainey   Members   -  Reputation: 1987


Posted 13 June 2014 - 01:58 PM

I'm using 'serialization' in its original meaning: enforce the serial operation of something that could potentially be performed in parallel or in simultaneous order. The usage predates the Java language and so do I. I apologize for the confusion. If anyone can suggest a better term, I'm open to suggestions.

"Synchronization" is the term I am most familiar with for that, along with the associated notions of synchronous and asynchronous execution.



"We should have a great fewer disputes in the world if words were taken for what they are, the signs of our ideas only, and not for things themselves." - John Locke

#16 swiftcoder   Senior Moderators   -  Reputation: 9761


Posted 13 June 2014 - 02:09 PM

"Synchronization" is the term I am most familiar with for that, along with the associated notions of synchronous and asynchronous execution.

To my ear, serialisation implies one-at-a-time execution, while synchronisation does not exclude batches of N-at-a-time.

Tristam MacDonald - Software Engineer @Amazon - [swiftcoding]


#17 Ravyne   Crossbones+   -  Reputation: 6972


Posted 13 June 2014 - 02:15 PM

 

No. It did it so that if you are initializing function-local statics in a multi-threaded environment, it will have defined results.

It certainly does not affect namespace-level static initialization, nor does it imply anything about the introduction of a threaded environment during application initialization.

I did not understand that (partly due to my weak English).

So, will this maybe work?

 

int main()
{
    static int one = f1();    // potentially referencing and updating other data
    static int two = f2();
    static int three = f3();
    static int four = f4();

    printf("\n done.");
}

 

 

No, the so-called "magic statics" in C++11 don't cause a new thread to be spawned; they just wrap the initialization in a mutex or similar so that other threads that simultaneously call the same function don't clobber each other.

 

And in any case, the code in f1, f2, f3, and f4 would still need to be written in a thread-safe way, so that they aren't clobbering each other.

 

No free lunch here.



#18 fir   Members   -  Reputation: -448


Posted 13 June 2014 - 04:07 PM

Yes, I've run into that before. A lot of people use 'serialization' to mean streaming data, a synonym for 'marshalling'. I understand Java used that in its docs and it took off from there. Perhaps it originated from the act of sending data over a serial port (RS-232C) although we always used the term 'transmit' for that (and 'write to disk' for saving to disk, maybe 'save in text format' to be more explicit).


I'm using 'serialization' in its original meaning: enforce the serial operation of something that could potentially be performed in parallel or in simultaneous order. The usage predates the Java language and so do I. I apologize for the confusion. If anyone can suggest a better term, I'm open to suggestions.

 

 

Serialisation (for making possibly colliding calls serial) is quite a good term. Serial and parallel are somewhat orthogonal terms and it fits nicely; IMO it should be used more.



#19 Bacterius   Crossbones+   -  Reputation: 8315


Posted 13 June 2014 - 04:54 PM


Serial and parallel are somewhat orthogonal terms and it fits nicely; IMO it should be used more.

 

Actually they are opposite terms (antonyms of each other). Orthogonal would mean they are unrelated (in some sense, that their meanings go in completely different directions).

 

There's also e.g. "sequential" and "concurrent" though some people may use them with subtly different meanings, especially the latter, depending on the task that is actually being done. Anyway it's pretty clear from context what is meant in general.


The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#20 Hodgman   Moderators   -  Reputation: 28591


Posted 13 June 2014 - 06:52 PM

On GCC 4.9 with full optimizations, this produces: 

foo():
	cmp	BYTE PTR guard variable for foo()::bar[rip], 0
	je	.L2
	mov	eax, DWORD PTR foo()::bar[rip]
	ret
.L2:
	sub	rsp, 24
	mov	edi, OFFSET FLAT:guard variable for foo()::bar
	call	__cxa_guard_acquire
	test	eax, eax
	jne	.L4
	mov	eax, DWORD PTR foo()::bar[rip]
	add	rsp, 24
	ret
.L4:
	call	rand
	mov	edi, OFFSET FLAT:guard variable for foo()::bar
	mov	DWORD PTR [rsp+12], eax
	mov	DWORD PTR foo()::bar[rip], eax
	call	__cxa_guard_release
	mov	eax, DWORD PTR [rsp+12]
	add	rsp, 24
	ret
It won't take a lock every single time, but it does check a global boolean. The gist is something like: 
if not initialized
  lock
  if not initialized
    set initial value
    initialized = true
  end if
  unlock
end if

I don't mean to second guess the GCC authors here, but isn't that the "double checked locking" anti-pattern?

What if the CPU reorders the first two reads, as it is allowed to do...? [edit] my mistake - x86 isn't allowed to reorder reads with respect to each other [/edit]

second:
	cmp	BYTE PTR guard variable for foo()::bar[rip], 0
	je	.L2
First:
	mov	eax, DWORD PTR

Edited by Hodgman, 14 June 2014 - 02:52 AM.




