# Fundamental Pointer Problems - C++


## Recommended Posts

I'm new to C++ so I'm not sure if what I'm doing is the flat out wrong way of doing this or not. I made a header file that contains all of the code that generates a map for my game. Here are the first few lines...
[[MapGen.h]]

```cpp
void MapGen()
{
    int MapSize = 0;
    int * P_MapSize = &MapSize;
    int MapSize_ResourceVar = 0;
```

In my main chunk of code for the game, I have this (.cpp file):

```cpp
#include "MapGen.h"
...
void Population_Start()
{
    MapGen();
    int Lands = 0;
    int Population = 0;
    Lands = &MapSize / 4;
    Population = Lands;  // grr... define lands in MapGen(). Need pointers to work :)
    int Gold = 100;
    int * P_Gold = &Gold;
}
```
Why is the compiler telling me that &MapSize is undefined? As you can probably guess, it does the same thing for &Gold in other functions etc.

##### Share on other sites
Because the variables are local to the functions you are writing. For example, when you call MapGen() it creates all the variables, does whatever it does with them, and then when the function is done they are all destroyed.

##### Share on other sites
MapSize is a local variable of the function MapGen(), so it cannot be seen outside of that function (in fact, when the function exits the variable no longer exists and is recreated when the function is called again). Same goes for Gold - it can only be seen inside of Population_Start().

Also, writing:
```cpp
Lands = &MapSize / 4;
```

is wrong - you are dividing an address by 4, which doesn't really make sense. I think you meant to leave out the '&'.
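One way to restructure the original code, sketched below (the concrete size 64 is a placeholder and the actual map-generation details are omitted), is to have MapGen() return the value instead of expecting callers to see its local variables:

```cpp
// Minimal sketch: return the value the caller needs rather than
// exposing MapGen()'s local variable. Names follow the original post;
// 64 is a placeholder for whatever the real generator computes.
int MapGen()
{
    int MapSize = 64;        // compute the map size here
    return MapSize;          // a copy is handed back to the caller
}

int Population_Start()
{
    int MapSize = MapGen();  // the caller now owns its own copy
    int Lands = MapSize / 4; // note: no '&' -- we divide the value
    int Population = Lands;
    return Population;
}
```

Returning by value hands the caller its own copy, so no pointer into MapGen()'s dead stack frame is ever needed.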

One other thing - you should declare and initialize your variables at the same time, so instead of writing:

```cpp
int a;
// Somewhere later
a = 5;
```

you would write:

```cpp
int a = 5;
```

This will reduce the chance of the variable being used before it is initialized and will cut down on the code size (which is always nice [smile]). Alongside that, you should declare your variables when you need them and not at the start of a function. Besides helping prevent the above, it makes the code easier to read because the variable appears near the place it is used.

##### Share on other sites
Thanks for the tips!

Ahh... just checked out the scope of variables at cplusplus.com. I thought that I could treat anything I defined as a pointer as a global variable--big mistake (didn't know they would be deleted when the function ended).

Thanks again.

##### Share on other sites
Remember that space for local variables is allocated on the stack. Once a function exits, the memory where its local variables were stored will probably be overwritten by something else on the next function call.

So you could get a pointer to a local variable, but the memory that the pointer points to won't contain valid data after the function exits and another function is called.

Try the code below to see what I mean:
```cpp
#include <iostream>
using namespace std;

/* this function returns a pointer to its local variable */
int *set_val(int val)
{
    int i = val;
    cout << "i = " << i << endl;
    return &i;
}

int main(void)
{
    int *foo, bar;
    foo = set_val(42);
    bar = *foo;
    cout << "bar = " << bar << endl;
    /* previous call to cout wrote over the memory foo points to */
    bar = *foo;
    cout << "bar = " << bar << endl;
    return 0;
}
```

On my system, this prints out:
```
i = 42
bar = 42
bar = 134519616
```

##### Share on other sites
Quote:
 Original post by nibbuler
Ahh... just checked out the scope of variables at cplusplus.com. I thought that I could treat anything I defined as a pointer as a global variable--big mistake (didn't know they would be deleted when the function ended).

I'm guessing you are probably getting confused with pointers and allocated memory.

```cpp
void f()
{
    int *i = new int[200];
}
```

After f() exits, the memory allocated still exists, but the pointer i is gone, so you have no way of accessing the memory or freeing it. This is a memory leak.

```cpp
int *f()
{
    int *i = new int[200];
    return i;
}

void g()
{
    int *ptr = f();
    // do stuff
    delete [] ptr;
}
```

Here you are returning a copy of i before it is destroyed, so you can access the memory.

However, all of the above are typical examples of the potential problems with using pointers directly in C++ and why you should almost always prefer to use standard library containers or smart pointers instead.

```cpp
std::vector<int> f()
{
    return std::vector<int>(200);
}

void g()
{
    std::vector<int> v = f();
    // do stuff
}
```

The above is very hard to break, and is unlikely to have any significant performance penalties. Even the apparent additional copy of the vector when being returned is likely to be optimised away by the compiler.

##### Share on other sites
Quote:
 Original post by nibbuler
Thanks for the tips!

Ahh... just checked out the scope of variables at cplusplus.com. I thought that I could treat anything I defined as a pointer as a global variable--big mistake (didn't know they would be deleted when the function ended).

Thanks again.

Careful there. The concept of "deletion" is totally irrelevant here. First off, the 'delete' keyword only applies to things that were dynamically allocated (using 'new'), and then it's the pointed-at thing that actually gets "deleted", not the pointer itself. Second, "lifetime" (how long the data is in memory) is a different concept from "scope" (the region of the code in which a given name is understood as referring to the variable).
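To make the scope/lifetime distinction concrete, here is a small illustrative sketch: a static local variable has function scope (its name is visible only inside the function) but its lifetime is the whole program run:

```cpp
// Scope vs. lifetime: 'counter' is only *visible* inside next_id()
// (function scope), but because it is declared static its *lifetime*
// spans the whole program -- it keeps its value between calls.
int next_id()
{
    static int counter = 0;  // initialized once, lives until program exit
    return ++counter;
}
```

Calling next_id() repeatedly returns 1, 2, 3, ... even though the name `counter` cannot be referred to anywhere else.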

##### Share on other sites
Quote:
 Original post by EasilyConfused
The above is very hard to break, and is unlikely to have any significant performance penalties. Even the apparent additional copy of the vector when being returned is likely to be optimised away by the compiler.

You hope. Unless the function is inlined, I don't see how the compiler can avoid the copy.

Personally for something like that I'd prefer passing a reference or pointer to an existing vector and filling it in the function:

```cpp
void f(std::vector<int>& v)
{
    v.clear();
    v.resize(200);
}

void g()
{
    std::vector<int> v;
    f(v);
    // do stuff
}
```

##### Share on other sites
Quote:
 Original post by Jerax
You hope. Unless the function is inlined, I don't see how the compiler can avoid the copy.

Actually, some compilers can use RVO (return value optimization) or NRVO (named return value optimization) in the absence of inlining to eliminate copies. This will be compiler dependent. For instance, see this article about NRVO in MSVC 2005.
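For reference, NRVO applies to functions that return a single named local by value, as in this illustrative sketch (the function name is invented for the example):

```cpp
#include <vector>

// A shape that allows NRVO: the function returns one named local by
// value from every return path. Compilers that implement NRVO can
// construct 'result' directly in the caller's storage, eliminating
// the copy. (Whether it happens is compiler dependent.)
std::vector<int> make_map(int size)
{
    std::vector<int> result(size);  // the single named return object
    // ... fill in the map here ...
    return result;                  // NRVO candidate
}
```

No source change is required to benefit; the optimization is purely the compiler's choice.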

##### Share on other sites
Quote:
 Original post by Jerax
Personally for something like that I'd prefer passing a reference or pointer to an existing vector and filling it in the function:

Therefore creating an empty vector, calling the clear() method, then resizing it, all of which you can again only hope the compiler will optimise away. Given that your function might take an empty vector, or an existing vector with data that needs destructing prior to the resize, the chances of this being optimised out are far lower than the RVOs and NRVOs which are well documented parts of the standard that allow the compiler to alter program flow (i.e. remove a copy constructor).

In the case of

```cpp
std::vector<int> v = f();
```

the compiler knows that v is empty prior to being filled with the data returned from f(). Having this information puts the compiler in a far better position to optimise.

And interestingly, in the article SiCrane has referred to, MS state that they implement NRVO by the use of a hidden reference parameter to the function, so the under-the-hood code becomes essentially identical to your example, but with the additional information available to the compiler that the returned-to object is not an existing object containing data.

[EDIT] Actually, I've just realised that a lot of what I said above is nonsense, since the return result from f() could equally be being assigned to an existing vector.

Sorry.

I guess this optimisation is of less benefit over Jerax's explicit reference passing than I thought.

[Edited by - EasilyConfused on January 2, 2008 2:56:25 PM]

##### Share on other sites
Quote:
 Original post by EasilyConfused
However, all of the above are typical examples of the potential problems with using pointers directly in C++ and why you should almost always prefer to use standard library containers or smart pointers instead.

```cpp
std::vector<int> f()
{
    return std::vector<int>(200);
}

void g()
{
    std::vector<int> v = f();
    // do stuff
}
```

The above is very hard to break, and is unlikely to have any significant performance penalties. Even the apparent additional copy of the vector when being returned is likely to be optimised away by the compiler.

I just wanted to point out the nonsense in this post for any confused third parties. First of all...never use pointers? I scoff at that remark; they are one of the reasons C and C++ are so powerful. Without pointers there are tons of things you cannot accomplish.

Without an understanding of lower-level ideas such as pointers you are missing out on how the computer actually works, which is why people who only understand higher-level languages like Java can be passed up for jobs.

Always prefer standard library containers or smart pointers instead?

Again no, the standard smart pointer that comes with C++ can be a pain to use and is not adequate for some cases. Also, the STL is painfully slow and unoptimized; it seems to be a myth that it is some sort of super fast uber-optimized library programmed by God himself. It was not written to be fast, it was written to be portable. In any serious programming project you need to avoid these things like the black plague if you care at all about speed.

If you still do not believe me, just write out your own array-based string class; for most of the generic string operations it will be hundreds of times faster than the standard string.

##### Share on other sites
Quote:
 Original post by antiquechrono
First of all...never use pointers? I scoff at that remark; they are one of the reasons C and C++ are so powerful. Without pointers there are tons of things you cannot accomplish.

There are three major uses for pointers:

1) referencing non-local variables. Use references where possible.
2) dynamically allocated data. Use smart pointers.
3) dynamically allocated arrays. Use std::vector.

Most of the things that you "cannot accomplish without pointers" have been implemented for you in the Standard C++ Library or Boost. Remember, this was posted in "For Beginners". Keep that in mind.
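A brief sketch of those three replacements side by side (the names increment, make_gold and make_lands are illustrative; std::shared_ptr is the standardized descendant of the Boost smart pointers discussed in this thread):

```cpp
#include <memory>
#include <vector>

// 1) referencing a non-local variable: a reference instead of a pointer
void increment(int& n) { ++n; }

// 2) dynamically allocated data: a smart pointer instead of new/delete
std::shared_ptr<int> make_gold()
{
    return std::make_shared<int>(100);  // freed automatically
}

// 3) a dynamically allocated array: std::vector instead of new[]
std::vector<int> make_lands(int size)
{
    return std::vector<int>(size);      // freed automatically
}
```

In each case the caller never sees a raw pointer and never writes a matching delete.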

Quote:
 Without an understanding of lower-level ideas such as pointers you are missing out on how the computer actually works, which is why people who only understand higher-level languages like Java can be passed up for jobs.

What? C++ pointers have little to no relation to "how a computer works". C++ pointers are just a little more flexible than Java references, because you can use them to iterate over an array, and because primitive data can be referenced. The idea that C++ pointers are closer to the machine is laughable at best.

In any case, I would value a programmer who only understands C++ pretty much the same as one who only knows Java. A good programmer is usually fluent in many languages.

Quote:
 Again no, the standard smart pointer that comes with C++ can be a pain to use and is not adequate for some cases.

Hence boost.

Quote:
 Also, the STL is painfully slow and unoptimized; it seems to be a myth that it is some sort of super fast uber-optimized library programmed by God himself.

It is fast enough for 99% of the places it is used. Most of the people who think they can write a faster version will not write one that is significantly faster, certainly not sufficiently fast enough to merit the time spent writing it. In addition, there is a higher probability of bugs.

Quote:
 ... programming project you need to avoid these things like the black plague if you care at all about speed.

80/20 rule. Even in a game not every line needs to be super optimised. Likewise, most of the containers you use will not be used frequently enough to merit optimisation.

Quote:
 If you still do not believe me, just write out your own array-based string class; for most of the generic string operations it will be hundreds of times faster than the standard string.

Array based? Do you mean an arbitrary upper limit character array? That is comparing apples to oranges.

##### Share on other sites
Quote:
 Original post by antiquechrono
I just wanted to point out the nonsense in this post for any confused third parties.

Gosh. Thank heaven the confused third parties have you to protect them from my nonsense.

Quote:
 Original post by antiquechrono
First of all...never use pointers? I scoff at that remark; they are one of the reasons C and C++ are so powerful. Without pointers there are tons of things you cannot accomplish.

Don't sensationalise my words. I did not say anything of the sort.

Quote:
 Original post by antiquechrono
Without an understanding of lower-level ideas such as pointers you are missing out on how the computer actually works, which is why people who only understand higher-level languages like Java can be passed up for jobs.

So by your "logic", I guess we should be advising beginners on this forum to learn assembly language before C then?

Quote:
 Original post by antiquechrono
Always prefer standard library containers or smart pointers instead?

Again no, the standard smart pointer that comes with C++ can be a pain to use and is not adequate for some cases. Also, the STL is painfully slow and unoptimized; it seems to be a myth that it is some sort of super fast uber-optimized library programmed by God himself. It was not written to be fast, it was written to be portable. In any serious programming project you need to avoid these things like the black plague if you care at all about speed.

I have nothing to add here to rip-off's comment above.

Quote:
 Original post by antiquechrono
If you still do not believe me, just write out your own array-based string class; for most of the generic string operations it will be hundreds of times faster than the standard string.

I'd be interested to see some profiling code that supports any of these claims. Hundreds of times faster?

##### Share on other sites
Quote:
 Original post by antiquechrono
I just wanted to point out the nonsense in this post for any confused third parties. First of all...never use pointers? I scoff at that remark; they are one of the reasons C and C++ are so powerful. Without pointers there are tons of things you cannot accomplish.

Smart pointers (including boost::shared_ptr, and boost::optional references) and references are enough for 99% of cases. The cases not covered involve mostly pointer arithmetic, string literals and C function interaction, all of which should be cleanly tight-wrapped away from your actual C++ code anyway (usually by converting them to the aforementioned wrappers). Unlike C, programming in C++ is best done without naked pointers.
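As a small illustrative sketch of that style (the Unit type and spawn function are invented for the example; std::shared_ptr is the standardized form of boost::shared_ptr):

```cpp
#include <memory>
#include <string>

// A hypothetical game entity used only to illustrate the
// "no naked pointers" style described above.
struct Unit {
    std::string name;
    int hp;
};

std::shared_ptr<Unit> spawn(const std::string& name)
{
    // The Unit is destroyed automatically when the last shared_ptr
    // referring to it goes away -- no delete, no leak on early return.
    std::shared_ptr<Unit> u = std::make_shared<Unit>();
    u->name = name;
    u->hp = 100;
    return u;
}
```

Ownership travels with the pointer object itself, so no caller ever has to remember whose job the delete is.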

There's also the part about C and C++ being powerful. I find extremely hilarious the idea of calling "powerful" a language without clean first-class functions. Pointers are the reason why C (and, to a lesser extent, C++) are considered low-level unsafe languages. Power comes from semantic expressiveness, and even a simple non-industrial language like Objective Caml can express more nuances of referencing using merely ref and option than C or C++ could with their entire semantic arsenal or references, constness or pointers.

Quote:
 Without an understanding of lower-level ideas such as pointers you are missing out on how the computer actually works, which is why people who only understand higher-level languages like Java can be passed up for jobs.

A computer does not work 'like pointers'. Pointers have no representation of segmented memory or virtual memory layouts, and low-level memory has no concept of rvalue, lvalue, type, stride or span. Someone who lives with the illusion that the C or C++ operational semantics are somehow representative of computer architectures from the last decade is someone who is not going to be hired.

Quote:
 Again no, the standard smart pointer that comes with C++ can be a pain to use and is not adequate for some cases. Also, the STL is painfully slow and unoptimized; it seems to be a myth that it is some sort of super fast uber-optimized library programmed by God himself. It was not written to be fast, it was written to be portable. In any serious programming project you need to avoid these things like the black plague if you care at all about speed.

Now, this is downright stupid. Global replacement of one methodology with another (such as replacing all standard library code with your own) is not optimization, it's voodoo. Optimization consists in writing some working code as quickly as possible (which will involve using the standard library in almost every single case), and only then optimizing your code based on profiling data. Once you have access to profiling data incriminating standard library code for significant performance losses, you can do the local replacement. Reinventing standard library functionality without profiling data will result in almost every single case in spending dozens of man-hours, worsening code quality, and not achieving any observable result on the user side because you made ten times faster (and this is assuming you didn't make it ten times slower) a portion of code which costs you a mere ten milliseconds per day. Nine milliseconds a day might be a 1000% improvement in performance for your code, it's not worth the dozen man-hours, and it's not worth the code worsening.

Quote:
 If you still do not believe me, just write out your own array-based string class; for most of the generic string operations it will be hundreds of times faster than the standard string.

There are two possibilities here:
• Your code does not provide full standard library functionality. This is alright, until you discover that you need said functionality. Besides, the functionality of many elementary constructs (such as vectors) is so simple that missing any part of it makes the result completely useless.
• Your code provides full standard library functionality. This means that the compiler writers could have used that implementation technique to write the standard library. And, you know what? They already have.

Make sure to enable full optimization of your SC++L distribution before profiling or benchmarking it.

[Edited by - ToohrVyk on January 2, 2008 5:20:13 PM]

##### Share on other sites
Quote:
 Original post by EasilyConfused
So by your "logic", I guess we should be advising beginners on this forum to learn assembly language before C then?

Well, you are kind of responsible for starting the thread hijack into code and compiler optimization; I suppose that is a beginners' topic as well?

Quote:
 Original post by EasilyConfused
I'd be interested to see some profiling code that supports any of these claims. Hundreds of times faster?

Like I said, if you are curious then write it and run some tests. I'm just speaking from limited but practical experience. I wrote an RSS feed parser, which is obviously heavily text based. I ran callgrind on it, and most of my code's time, without surprise, was spent manipulating strings, which is the reason I wrote my char-array-based string class, which ran circles around the standard string for my purposes.

Quote:
 Original post by rip-off
Array based? Do you mean an arbitrary upper limit character array? That is comparing apples to oranges.

No, I said an array-based string: a class that has a dynamically allocated array.

Quote:
 Original post by ToohrVyk
A computer does not work 'like pointers'. Pointers have no representation of segmented memory or virtual memory layouts, and low-level memory has no concept of type or stride. Someone who lives with the illusion that the C or C++ operational semantics are somehow representative of computer architectures from the last decade is someone who is not going to be hired.

I'm not sure if you are doing quantum computing or something, but last I checked the most basic functionality of a processor is to fetch instructions from an address and execute them, and in the process more than likely manipulate data which is at another address. It has no concept of a variable, just addresses to data. Now call me old fashioned, but that sounds an awful lot like what a pointer is.

Quote:
 Original post by ToohrVyk
Now, this is downright stupid. Global replacement of one methodology with another (such as replacing all standard library code with your own) is not optimization, it's voodoo. Optimization consists in writing some working code as quickly as possible (which will involve using the standard library in almost every single case), and only then optimizing your code based on profiling data. Once you have access to profiling data incriminating standard library code for significant performance losses, you can do the local replacement. Reinventing standard library functionality without profiling data will result in almost every single case in spending dozens of man-hours, worsening code quality, and not achieving any observable result on the user side because you made ten times faster a portion of code which costs you a mere ten milliseconds per day.

Yes I realize this and the way I said it came out very wrong. You never optimize until the end of a project. And I never meant to suggest that you should just globally toss out the standard libraries. But, then again what is the point in debating how in the world a compiler is going to optimize your code for you?

Quote:
 Original post by ToohrVyk
Smart pointers (including boost::shared_ptr, and boost::optional references) and references are enough for 99% of cases. The cases not covered involve mostly pointer arithmetic, string literals and C function interaction, all of which should be cleanly tight-wrapped away from your actual C++ code anyway. Unlike C, programming in C++ is best done without naked pointers.

This is an interesting idea, but how come I never see any smart pointers used in actual code or examples? My professors certainly have never even mentioned smart pointers before; do you have any resources I can look at discussing this, because I have just been taught that *ptr is the way to go.

Quote:
 Original post by rip-off
Most of the things that you "cannot accomplish without pointers" have been implemented for you in the Standard C++ Library or Boost. Remember, this was posted in "For Beginners". Keep that in mind.

Ever write a custom data structure before?

Quote:
 Original post by rip-off
In any case, I would value a programmer who only understands C++ pretty much the same as one who only knows Java. A good programmer is usually fluent in many languages.

The problem with Java is that there is really nothing inherently hard in the language because everything is abstracted away. It may be easier to learn, but how much do you actually understand if all you know is how to write Java code using the wonderful library where everything is written for you? I'm not bashing Java though; it is a great language.

##### Share on other sites
Quote:
 Original post by antiquechrono
No, I said an array-based string: a class that has a dynamically allocated array.

But, that is basically what std::string is. Care to post your version?

Quote:
 Original post by antiquechrono
This is an interesting idea, but how come I never see any smart pointers used in actual code or examples? My professors certainly have never even mentioned smart pointers before; do you have any resources I can look at discussing this, because I have just been taught that *ptr is the way to go.

I see them every day. My professors are often ignorant of more than the basics of some things, or so it would appear. Bear in mind many of them may never have been in industry and may never have written anything more than trivial programs. Raw pointers are manageable in small quantities, but in a complex program even a small pointer error can result in crashes or memory corruption.

Quote:
Quote:
 Original post by rip-off
Most of the things that you "cannot accomplish without pointers" have been implemented for you in the Standard C++ Library or Boost. Remember, this was posted in "For Beginners". Keep that in mind.

Ever write a custom data structure before?

Of course I have. And I did use pointers. But I would never use a custom data structure I wrote in production code, unless I found some significant weakness in a standard implementation. Guess what: I never have.

Quote:
 The problem with Java is that there is really nothing inherently hard in the language because everything is abstracted away. It may be easier to learn, but how much do you actually understand if all you know is how to write Java code using the wonderful library where everything is written for you? I'm not bashing Java though; it is a great language.

All Java data structures are written in Java. You can implement the equivalent of vectors, maps, trees, linked lists etc in pure Java. It isn't significantly different than using C++. In any case, most of the time I want to write game specific code, not re-write some piece of code that has been done to death by many programmers.

##### Share on other sites
Quote:
Original post by antiquechrono
Quote:
 Original post by ToohrVyk
A computer does not work 'like pointers'. Pointers have no representation of segmented memory or virtual memory layouts, and low-level memory has no concept of type or stride. Someone who lives with the illusion that the C or C++ operational semantics are somehow representative of computer architectures from the last decade is someone who is not going to be hired.

I'm not sure if you are doing quantum computing or something, but last I checked the most basic functionality of a processor is to fetch instructions from an address and execute them, and in the process more than likely manipulate data which is at another address. It has no concept of a variable, just addresses to data. Now call me old fashioned, but that sounds an awful lot like what a pointer is.

And this is exactly why your assumption that pointers are an accurate model of how a computer actually works is harming you. And old fashioned. Actual computer memory models are much, much more complex than that for almost any computer produced since 1978. And probably even earlier than that. It's a point of view that completely ignores real details like virtual memory, TLBs, cache, page tables and other details of computer architecture hidden behind the pointer abstraction that can have a real effect in program performance, and are thus important for a professional programmer to understand. Not to mention this understanding is required to take full advantage of extensions to the traditional memory models, such as the Windows AWE.

##### Share on other sites
Quote:
 Well you are kind of responsible for starting the thread hijack into code and compiler optimization, I suppose that is a beginners topic as well?

Please watch your tone. EasilyConfused's original post concerned a very real, very common warning about the dangers of pointers that has quite little to do with optimization.

Quote:
 I'm just speaking from limited but practical experience. I wrote an RSS feed parser, which is obviously heavily text based. I ran callgrind on it, and most of my code's time, without surprise, was spent manipulating strings, which is the reason I wrote my char-array-based string class, which ran circles around the standard string for my purposes.

This hardly constitutes a rationale for a global policy, however valid it may be for a specific scenario.

Quote:
 I'm not sure if you are doing quantum computing or something, but last I checked the most basic functionality of a processor is to fetch instructions from an address and execute them, and in the process more than likely manipulate data which is at another address. It has no concept of a variable, just addresses to data. Now call me old fashioned, but that sounds an awful lot like what a pointer is.

Not really. The concept you're thinking of is that of "referential semantics," the ability to refer to one thing through some kind of proxy or intermediate (typically lighter-weight) form. A pointer is an example of such semantics, and C++ also has "references" which implement the concept more generally. Before you make the typical argument, the standard does not guarantee that references are "pointers under the hood," and in fact may not be on many occasions.

Furthermore, C++, like C, like Java, like C#, is a language defined in terms of an abstract machine (by the language standard). That machine is very simple for C++, but it's still an idealized machine and it is because of this that (a) C++ has so much undefined behavior, and (b) C++ is so portable (read: implementable on many platforms). It is only by accident that the observed or even specified behavior of that machine happens to match or appear similar to the actual underlying platform's behavior. The general assertion that "C++ is closer to the machine," is frequently a fallacy.

Quote:
 The problem with Java is that there is really nothing inheriently hard in the language because everything is abstracted away. It may be easyer to learn but how much do you acctually understand if all you know is how to write Java code using the wonderful library where everything is written for you? I'm not bashing Java though it is a great language.

You learn how to write programs, and that's what programming is about. In many cases, knowing (or caring) about how the processor or whatever hardware is doing its stuff underneath is either irrelevant or a sign of bad code (you're making assumptions and increasing coupling). It's only in rare situations where it matters, and the situation originally in question in this thread isn't one of them.

Quote:
 This is an interesting idea, but how come I never see any smart pointers used in actual code or examples? My professors certainly have never even mentioned smart pointers before; do you have any resources I can look at discussing this, because I have just been taught that *ptr is the way to go.

Ever written a function? Why? Probably to stop repeating some code, right? Centralized it all in one spot rather than copy-paste it all over.

That's (part of) the reason you use smart pointers. After all, using a pointer (that you dynamically allocate) you have two options: (1) delete the memory when you're done, (2) let the memory leak.

One who frequently elects (2) is a bad programmer -- ignorant or idiotic, or perhaps both. We won't concern ourselves with them. One who frequently picks (1) is a correct programmer; perhaps even a good programmer.

Good programmers wrap oft-repeated code in functions when appropriate. It follows then that good programmers wrap the deletion of their pointers in a function. In C++, which has support for the RAII idiom, we frequently elect to use the destructor of some proxy class to do this to take advantage of RAII and have the delete operation happen for us.
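A stripped-down sketch of such a proxy class (real code should use boost::scoped_ptr or, today, std::unique_ptr; this hand-rolled version exists only to illustrate the idiom):

```cpp
// RAII sketch: the destructor performs the delete, so cleanup is
// written once and runs automatically, even on early return or
// exception. Copying is disallowed so exactly one object owns the
// pointer.
template <typename T>
class ScopedPtr {
public:
    explicit ScopedPtr(T* p) : ptr_(p) {}
    ~ScopedPtr() { delete ptr_; }            // the wrapped-up delete
    T& operator*() const { return *ptr_; }
    T* get() const { return ptr_; }
private:
    T* ptr_;
    ScopedPtr(const ScopedPtr&);             // non-copyable
    ScopedPtr& operator=(const ScopedPtr&);  // non-assignable
};

int use_scoped()
{
    ScopedPtr<int> gold(new int(100));  // no matching delete below
    return *gold;                       // destructor frees it on exit
}
```

The caller of use_scoped() never sees the allocation at all; the proxy's destructor is the single place the delete lives.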

##### Share on other sites
Quote:
Original post by SiCrane
Quote:
Original post by antiquechrono
Quote:
 Original post by ToohrVyk
A computer does not work 'like pointers'. Pointers have no representation of segmented memory or virtual memory layouts, and low-level memory has no concept of type or stride. Someone who lives with the illusion that the C or C++ operational semantics are somehow representative of computer architectures from the last decade is someone who is not going to be hired.

I'm not sure if you are doing quantum computing or something, but last I checked the most basic functionality of a processor is to fetch instructions from an address and execute them, and in the process more than likely manipulate data which is at another address. It has no concept of a variable, just addresses to data. Now call me old fashioned, but that sounds an awful lot like what a pointer is.

And this is exactly why your assumption that pointers are an accurate model of how a computer actually works is harming you. And old fashioned. Actual computer memory models are much, much more complex than that for almost any computer produced since 1978. And probably even earlier than that. It's a point of view that completely ignores real details like virtual memory, TLBs, cache, page tables and other details of computer architecture hidden behind the pointer abstraction that can have a real effect in program performance, and are thus important for a professional programmer to understand. Not to mention this understanding is required to take full advantage of extensions to the traditional memory models, such as the Windows AWE.

Do you have any good links for required reading on subjects like this? This is the sort of stuff I never get into, or never see referenced anywhere. When you guys get going with these low level discussions I always read along to catch the little bits of interesting info that get scattered all over. I found that article on Named Return Value Optimization to be quite interesting actually.

##### Share on other sites
Quote:
Original post by antiquechrono
I'm not sure if you are doing quantum computing or something, but last I checked the most basic functionality of a processor is to fetch instructions from an address and execute them, and in the process more than likely manipulate data which is at another address. It has no concept of a variable, just addresses to data. Now call me old fashioned, but that sounds an awful lot like what a pointer is.

Wrong, on both counts.

First, your description is a gross oversimplification of what the processor does. If this is the kind of 'machine-level' knowledge that can be gathered from using C, I don't really see why we should bother.

Consider a simple C statement such as z[1337]++;. To the C programmer, the program will extract the value at the 1337th position of the buffer pointed to by the integer pointer z, add 1 to that value, and store it back at that position. So far, so good. This is not what happens at the machine level.

First, there's the processor. Today, it's almost guaranteed to be a multi-core one, or perhaps a hyper-threading one. Thus, there's the initial question of which processor the code will be executed on, something which C has nothing to say about.

Then, there's the pipeline. Your typical processor today has between 9 and 30 pipeline stages, which are yet again completely unknown to the C language. Pipeline stages include fetching the instruction, decoding it, fetching memory, processing, waiting, retrieving, outputting, and so on. This is a fundamental point in optimization, because a full pipeline stall in a 30-stage pipeline will divide your performance by 30. In our situation here, using either z or the contents of the buffer nearby the write point will result in read-before-write conflicts and will stall the pipeline for the length of the increment processing.

Then, there's the instructions. Instead of being loaded from disk (where the applications reside), code is first-stage cached in memory and second-stage cached in an on-chip instruction cache. The entire caching process is completely invisible to the C application, which merely sees function pointers and goto labels, yet it is important because of the latency induced by potential cache misses on long jumps.

Then, there's the memory access. Your pointer access may have been successfully aliased by the compiler to a single register representing z[1337], which will need no memory addressing. Or you might be working with register sets in SIMD style, too. Otherwise, chances are that you'll hit the L1 cache, though it is doubtful as the buffer pointed to by z is bigger than many L1 caches. Thus, you might have to fetch the data from the L2 cache. Even assuming that the data was either in the L1 cache or the L2 cache, there's still the possibility that another core on your processor has altered the data, which means that a synchronization protocol between cores might be activated to fetch the data of the other core, just in case. Plus, once the data has been fetched, it might be automatically realigned in case it wasn't in the first place. The C language pointers do not even begin to hint at the depth of all this. And the memory address has not even been decoded yet!

Because, then, you get the actual, uncached, memory access. Your pointer will quite probably be converted, in any recent operating system, to segmented memory (an old relic from the early x86 architectures). The point is that the memory space for every process on your computer is guaranteed to be deterministic and linear by the operating system. That is, each address your program manipulates is an integer number from a minimum value to a maximum value (some of which have not been allocated yet). What happens is that the operating system maps your linear addresses to segments (or, if you prefer, pages) which are handled non-linearly (you have a page index, and then the offset within that page). When you access an address, your processor automatically converts the address based on the page tables the operating system gave it.

If you hit an allocated page that your process is allowed to manipulate, the processor will send read or write instructions through the memory bus, with all the implied latency of such reads. Cache policies will intervene here to determine which L1 and L2 cache lines get filled and which don't, in order to optimize sequential memory access.

If you hit an unallocated page, the operating system is notified through a processor interrupt and will deal with the issue accordingly, usually terminating the infringing process for an access violation or segmentation fault (the terminology depends on the OS culture). This involves suspending your program, flushing the pipeline without damaging too many things, loading the kernel code into the instruction cache for that interrupt handler, elevating to a lower ring, and resuming execution within the kernel's handler. This gets even uglier with multi-cores.

If you hit a guard page, the operating system is also notified, but the reaction will vary. If the page is a lazy-allocation page (that is, you asked for 1GB memory, so the OS gave you 1GB worth of pages but not the memory corresponding to those pages, and will only give you the memory for one of the pages if you actually ask for it), then the operating system interrupt handler will fetch and reserve actual memory, bind it to the segment at the processor level, and resume execution. Other pages include memory-mapped files or devices, at which point the operating system will forward the written data to the appropriate device driver as part of the interrupt; the actual details of how flushing, caching, multi-cores, pipelines and the rest interact with this are too horrible to mention here.

Yet, there's not even the slightest hint in the C language about the existence of pages. Which is not surprising, since C might also be used on page-less, cache-less or pipeline-less architectures. In the end, the result of z[1337]++; is an agonizingly detailed and complex process that goes way beyond simply moving data to and from memory. And almost none of it can be inferred from the C operational semantics.

Second, your interpretation of pointer semantics is also oversimplified, and a bit off. This is due, usually, to books and tutorials which give a simplified idea of what pointers are ("memory addresses" is a very frequent misleading explanation, though it does get the fundamental points across) without mentioning that the concept is actually much more complex.

A pointer-to-X rvalue, where X is an actual first-class type, can be one of three distinct things:

1. The 'null pointer' for the type X, which represents the absence of any value. The null pointer evaluates to false in a boolean context (while all other pointers evaluate to true), and the integer constant zero evaluates to the null pointer in a pointer context.
2. An lvalue of the type X. This is the usual 'points at an object of type X'. The definition of lvalue says everything there is to know here. The lvalue and its corresponding rvalue can be accessed through dereferencing (*ptr).
3. A past-the-end pointer. Unlike the null pointer, past-the-end pointers are many, and they differ from each other under '==' comparison. They cannot be dereferenced.

Then, there's the grouping of lvalues: they are grouped in buffers containing zero or more lvalues. A pointer to an lvalue can be incremented or decremented, changing its rvalue to the previous or next lvalue in the buffer if it exists, and otherwise resulting in either that buffer's past-the-end pointer (if incrementing) or undefined behaviour (if decrementing). Decrementing a past-the-end pointer yields the last lvalue in the associated buffer, or undefined behaviour if the buffer is empty. Such buffers are created every time you allocate data on the stack or heap, with a pointer to the first lvalue being returned in the heap case, or obtained with &var in the stack case.

The matters are further complicated by the notion of memory layout compatibility, which allows one to see a buffer of X lvalues as a buffer of Y lvalues, under certain conditions of alignment, padding and size. These I will not go into here, but they are the fundamental element behind casting structures to a buffer of bytes, or behind unions.

The usual 'pointers are addresses' works fine, as long as you consider an address to be a synonym for an lvalue or a past-the-end pointer, though it does miss a lot of the subtleties described above. And as soon as you get the strange notion that addresses are numbers, which is almost universally inflicted upon beginners by tutorials and books, you're off course. Unlike numbers, pointers can only be compared for order in very specific cases: when they're within the same buffer. Unlike numbers, pointers cannot reliably be converted to and from numbers (though C99 has made some efforts to solve this) and certainly do not respond correctly to arithmetic on numbers. The list of discrepancies goes on. Ultimately, code such as z[1337]++; actually consists of incrementing an lvalue, not accessing a memory address and incrementing the value found there.

Moral for beginners: the C language is its own complex world, quite different from how the machine actually works. Knowledge of C will more often than not confuse you about how the machine works, instead of granting you knowledge of it.

Quote:
This is an interesting idea, but how come I never see any smart pointers used in actual code or examples? My professors certainly have never even mentioned smart pointers before, do you have any resources I can look at discussing this because I have just been taught that *ptr is the way to go.

Most teachers are not competent enough to be in the industry working a well-paid C++ job. This is generally why they're teachers instead. There are, of course, exceptions which teach out of pleasure, not for money.

Every C++ job interview I've been through has asked about, or even tested, my knowledge of the SC++L and sometimes even the Boost library. And France isn't exactly known for its IT prowess.

##### Share on other sites
Since this is turning into a back-and-forth argument I am going to yield to those of greater experience. I really would like resources on the topics discussed, as Mike.Popoloski asked, since it would help me learn about all the things I obviously have no clue about; three years of college has apparently gotten me nowhere. My professor, who actually worked "in industry", directly stated that no one who writes professional code "in industry" uses the STL, that all the companies rewrite it, and he even had us do all our projects without it. So I'm sorry if I offended anyone.

##### Share on other sites
Quote:
Original post by Mike.Popoloski
Do you have any good links for required reading on subjects like this? This is the sort of stuff I never get into, or never see referenced anywhere. When you guys get going with these low level discussions I always read along to catch the little bits of interesting info that get scattered all over. I found that article on Named Return Value Optimization to be quite interesting actually.

Well, a four year degree in computer science would help (or possibly computer engineering); a good degree program will cover computer architecture. Otherwise, pretty much any book by Hennessy and Patterson, such as "Computer Architecture" or "Computer Organization and Design". The most recent editions will obviously be more up to date. The fourth edition of "Computer Architecture" has a pretty interesting section on the Pentium 4, but I don't actually own that edition.

I personally know of no good online sources for this kind of material, but I haven't had the need to look for it.

##### Share on other sites
Quote:
Original post by antiquechrono
My professor who actually worked "in industry" directly stated that no one who writes professional code "in industry" uses the STL and all the companies rewrite it, and even had us do all our projects without it.

Get a new professor. Seriously. Or find a university where the faculty actually know the subjects they teach. What you describe is akin to learning calculus without any notion of standard limits or derivatives, relying instead on Taylor expansions. It's idiotic.

##### Share on other sites
Quote:
Original post by ToohrVyk
Then, there's the pipeline. Your typical processor today has between 9 and 30 pipeline stages, which are yet again completely unknown to the C language. Pipeline stages include fetching the instruction, decoding it, fetching memory, processing, waiting, retrieving, outputting, and so on. This is a fundamental point in optimization, because a full pipeline stall in a 30-stage pipeline will divide your performance by 30. In our situation here, using either z or the contents of the buffer nearby the write point will result in read-before-write conflicts and will stall the pipeline for the length of the increment processing.

Don't you mean read-after-write (modern x86 processors don't suffer from write-after-read)? And the stall is not guaranteed; it depends on whether there are any other instructions to execute and whether or not you get a store-to-load forward.
Also, using z can only conflict if there's a write to z (since z is only read in the above example).

##### Share on other sites
Quote:
Original post by Jerax
Also, using z can only conflict if there's a write to z (since z is only read in the above example).

Post-increment operator?