Fundamental Pointer Problems - C++

Started by
29 comments, last by _the_phantom_ 16 years, 3 months ago
Quote:Original post by EasilyConfused

However, all of the above are typical examples of the potential problems with using pointers directly in C++ and why you should almost always prefer to use standard library containers or smart pointers instead.

std::vector<int> f()
{
    return std::vector<int>(200);
}

void g()
{
    std::vector<int> v = f();
    // do stuff
}


The above is very hard to break, and is unlikely to have any significant performance penalties. Even the apparent additional copy of the vector when being returned is likely to be optimised away by the compiler.
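As a minimal compilable sketch of the pattern being discussed: the vector is built inside f() and returned by value, there is no new[]/delete[] to get wrong, and most compilers elide the apparent copy (RVO/NRVO).

```cpp
#include <vector>

// Return by value: the vector owns its storage, and the copy on
// return is typically elided by the compiler (RVO/NRVO).
std::vector<int> f() {
    return std::vector<int>(200);  // 200 value-initialised ints
}
```

g() then simply writes std::vector<int> v = f(); and the storage is released automatically when v goes out of scope.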


I just wanted to point out the nonsense in this post for any confused third parties. First of all...never use pointers? I scoff at that remark; pointers are one of the reasons C and C++ are so powerful, and without them there are tons of things you cannot accomplish.

Without an understanding of lower-level ideas such as pointers you are missing out on how the computer actually works, which is why people who only understand the higher-level languages like Java can be passed up for jobs.

Always prefer standard library containers or smart pointers instead?

Again no, the standard smart pointer that comes with C++ can be a pain to use and is not adequate for some cases. Also, the STL is painfully slow and unoptimized; it seems to be a myth that it is some sort of super-fast, uber-optimized library programmed by God himself. It was not written to be fast, it was written to be portable. In any serious programming project you need to avoid these things like the black plague if you care at all about speed.

If you still do not believe me just write out your own array based string class, for most of the generic string operations it will be hundreds of times faster than the standard string.
Quote:Original post by antiquechrono
First of all...never use pointers? I scoff at that remark; pointers are one of the reasons C and C++ are so powerful, and without them there are tons of things you cannot accomplish.


There are three major uses for pointers:

1) referencing non-local variables. Use references where possible.
2) dynamically allocated data. Use smart pointers.
3) dynamically allocated arrays. Use std::vector.

Most of the things that you "cannot accomplish without pointers" have been implemented for you in the Standard C++ Library or Boost. Remember, this was posted in "For Beginners". Keep that in mind.
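A minimal sketch of those three replacements (shown here with C++11's std::shared_ptr; at the time of this thread the equivalent was boost::shared_ptr, and the function names are illustrative):

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// 1) Referencing a non-local variable: prefer a reference to a pointer.
void increment(int& value) { ++value; }

// 2) Dynamically allocated data: a smart pointer releases it automatically.
std::shared_ptr<int> make_counter() { return std::make_shared<int>(0); }

// 3) A dynamically allocated array: std::vector owns and frees its buffer.
std::vector<int> make_buffer(std::size_t n) { return std::vector<int>(n, 0); }
```

None of these call sites ever sees a raw new or delete, which is the point: the ownership bookkeeping lives in one place.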

Quote:
Without an understanding of lower-level ideas such as pointers you are missing out on how the computer actually works, which is why people who only understand the higher-level languages like Java can be passed up for jobs.


What? C++ pointers have little to no relation to "how a computer works". C++ pointers are just a little more flexible than Java references, because you can use them to iterate over an array, and because primitive data can be referenced. The idea that C++ pointers are closer to the machine is laughable at best.

In any case, I would value a programmer who only understands C++ pretty much the same as one who only knows Java. A good programmer is usually fluent in many languages.

Quote:
Again no, the standard smart pointer that comes with C++ can be a pain to use and is not adequate for some cases.


Hence boost.

Quote:
Also, the STL is painfully slow and unoptimized, it seems to be a myth that it is some sort of super fast uber optimized library programmed by God himself.


It is fast enough for 99% of the places it is used. Most of the people who think they can write a faster version will not write one that is significantly faster, and certainly not enough faster to merit the time spent writing it. In addition, there is a higher probability of bugs.

Quote:
... programming project you need to avoid these things like the black plague if you care at all about speed.


80/20 rule. Even in a game not every line needs to be super optimised. Likewise, most of the containers you use will not be used frequently enough to merit optimisation.

Quote:
If you still do not believe me just write out your own array based string class, for most of the generic string operations it will be hundreds of times faster than the standard string.


Array based? Do you mean an arbitrary upper limit character array? That is comparing apples to oranges.
Quote:Original post by antiquechrono
I just wanted to point out the nonsense in this post for any confused third parties.


Gosh. Thank heaven the confused third parties have you to protect them from my nonsense.

Quote:Original post by antiquechrono
First of all...never use pointers? I scoff at that remark; pointers are one of the reasons C and C++ are so powerful, and without them there are tons of things you cannot accomplish.


Don't sensationalise my words. I did not say anything of the sort.

Quote:Original post by antiquechrono
Without an understanding of lower-level ideas such as pointers you are missing out on how the computer actually works, which is why people who only understand the higher-level languages like Java can be passed up for jobs.


So by your "logic", I guess we should be advising beginners on this forum to learn assembly language before C then?

Quote:Original post by antiquechrono
Always prefer standard library containers or smart pointers instead?

Again no, the standard smart pointer that comes with C++ can be a pain to use and is not adequate for some cases. Also, the STL is painfully slow and unoptimized; it seems to be a myth that it is some sort of super-fast, uber-optimized library programmed by God himself. It was not written to be fast, it was written to be portable. In any serious programming project you need to avoid these things like the black plague if you care at all about speed.


I have nothing to add here to rip-off's comment above.

Quote:Original post by antiquechrono
If you still do not believe me just write out your own array based string class, for most of the generic string operations it will be hundreds of times faster than the standard string.


I'd be interested to see some profiling code that supports any of these claims. Hundreds of times faster?
Quote:Original post by antiquechrono
I just wanted to point out the nonsense in this post for any confused third parties. First of all...never use pointers? I scoff at that remark; pointers are one of the reasons C and C++ are so powerful, and without them there are tons of things you cannot accomplish.


Smart pointers (including boost::shared_ptr, and boost::optional references) and references are enough for 99% of cases. The cases not covered involve mostly pointer arithmetic, string literals and C function interaction, all of which should be cleanly wrapped away from your actual C++ code anyway (usually by converting them to the aforementioned wrappers). Unlike in C, programming in C++ is best done without naked pointers.
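A hedged illustration of wrapping C function interaction away from the rest of the code, using the custom-deleter idiom (shown with C++11's std::shared_ptr and the C stdio API; boost::shared_ptr supports the same idiom, and the function name is illustrative):

```cpp
#include <cstdio>
#include <memory>
#include <string>

// Keep the naked FILE* confined to this one function: the shared_ptr's
// custom deleter guarantees fclose runs exactly once, on every path.
std::shared_ptr<std::FILE> open_file(const std::string& path, const char* mode) {
    std::FILE* raw = std::fopen(path.c_str(), mode);
    if (!raw) {
        return nullptr;  // caller checks for an empty pointer
    }
    return std::shared_ptr<std::FILE>(raw, &std::fclose);
}
```

The rest of the program only ever handles the smart pointer; the raw pointer never escapes.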

There's also the part about C and C++ being powerful. I find it extremely hilarious to call a language without clean first-class functions "powerful". Pointers are the reason why C (and, to a lesser extent, C++) is considered a low-level, unsafe language. Power comes from semantic expressiveness, and even a simple non-industrial language like Objective Caml can express more nuances of referencing using merely ref and option than C or C++ could with their entire semantic arsenal of references, constness and pointers.

Quote:Without an understanding of lower-level ideas such as pointers you are missing out on how the computer actually works, which is why people who only understand the higher-level languages like Java can be passed up for jobs.


A computer does not work 'like pointers'. Pointers have no representation of segmented memory or virtual memory layouts, and low-level memory has no concept of rvalue, lvalue, type, stride or span. Someone who lives with the illusion that the C or C++ operational semantics are somehow representative of computer architectures from the last decade is someone who is not going to be hired.

Quote:Again no, the standard smart pointer that comes with C++ can be a pain to use and is not adequate for some cases. Also, the STL is painfully slow and unoptimized; it seems to be a myth that it is some sort of super-fast, uber-optimized library programmed by God himself. It was not written to be fast, it was written to be portable. In any serious programming project you need to avoid these things like the black plague if you care at all about speed.


Now, this is downright stupid. Global replacement of one methodology with another (such as replacing all standard library code with your own) is not optimization, it's voodoo. Optimization consists of writing some working code as quickly as possible (which will involve using the standard library in almost every single case), and only then optimizing your code based on profiling data. Once you have access to profiling data incriminating standard library code for significant performance losses, you can do the local replacement. Reinventing standard library functionality without profiling data will, in almost every single case, result in spending dozens of man-hours, worsening code quality, and not achieving any observable result on the user side, because you made ten times faster (and this is assuming you didn't make it ten times slower) a portion of code which costs you a mere ten milliseconds per day. Saving nine milliseconds a day might be a 900% performance improvement for that code, but it's not worth the dozen man-hours, and it's not worth the worsened code.

Quote:If you still do not believe me just write out your own array based string class, for most of the generic string operations it will be hundreds of times faster than the standard string.


There are two possibilities here:
  • Your code does not provide full standard library functionality. This is alright, until you discover that you need said functionality. Besides, the functionality of many elementary constructs (such as vectors) is so simple that missing any part of it makes the result completely useless.
  • Your code provides full standard library functionality. This means that the compiler writers could have used that implementation technique to write the standard library. And, you know what? They already have.


Make sure to enable full optimization of your SC++L distribution before profiling or benchmarking it.

[Edited by - ToohrVyk on January 2, 2008 5:20:13 PM]
Quote:Original post by EasilyConfused
So by your "logic", I guess we should be advising beginners on this forum to learn assembly language before C then?


Well, you are kind of responsible for starting the thread hijack into code and compiler optimization; I suppose that is a beginners' topic as well?

Quote:Original post by EasilyConfused
I'd be interested to see some profiling code that supports any of these claims. Hundreds of times faster?


Like I said, if you are curious then write it and run some tests. I'm just speaking from limited but practical experience. I wrote an RSS feed parser, which is obviously heavily text based. I ran callgrind on it, and unsurprisingly most of its time was spent manipulating strings, which is why I wrote my char-array-based string class, which ran circles around the standard string for my purposes.

Quote:Original post by rip-off
Array based? Do you mean an arbitrary upper limit character array? That is comparing apples to oranges.


No, I said an array based string. A class that has a dynamically allocated array.

Quote:Original post by ToohrVyk
A computer does not work 'like pointers'. Pointers have no representation of segmented memory or virtual memory layouts, and low-level memory has no concept of type or stride. Someone who lives with the illusion that the C or C++ operational semantics are somehow representative of computer architectures from the last decade is someone who is not going to be hired.


I'm not sure if you are doing quantum computing or something, but last I checked the most basic functionality of a processor is to fetch instructions from an address and execute them, and in the process more than likely manipulate data at another address. It has no concept of a variable, just addresses to data. Now call me old-fashioned, but that sounds an awful lot like what a pointer is.

Quote:Original post by ToohrVyk
Now, this is downright stupid. Global replacement of one methodology with another (such as replacing all standard library code with your own) is not optimization, it's voodoo. Optimization consists in writing some working code as quickly as possible (which will involve using the standard library in almost every single case), and only then optimizing your code based on profiling data. Once you have access to profiling data incriminating standard library code for significant performance losses, you can do the local replacement. Reinventing standard library functionality without profiling data will result in almost every single case in spending dozens of man-hours, worsening code quality, and not achieving any observable result on the user side because you made ten times faster a portion of code which costs you a mere ten milliseconds per day.


Yes, I realize this, and the way I said it came out very wrong. You never optimize until the end of a project, and I never meant to suggest that you should just globally toss out the standard libraries. But then again, what is the point in debating how in the world a compiler is going to optimize your code for you?

Quote:Original post by ToohrVyk
Smart pointers (including boost::shared_ptr, and boost::optional references) and references are enough for 99% of cases. The cases not covered involve mostly pointer arithmetic, string literals and C function interaction, all of which should be cleanly wrapped away from your actual C++ code anyway. Unlike in C, programming in C++ is best done without naked pointers.


This is an interesting idea, but how come I never see any smart pointers used in actual code or examples? My professors certainly have never even mentioned smart pointers before; do you have any resources I can look at discussing this, because I have just been taught that *ptr is the way to go.

Quote:Original post by rip-off
Most of the things that you "cannot accomplish without pointers" have been implemented for you in the Standard C++ Library or Boost. Remember, this was posted in "For Beginners". Keep that in mind.


Ever write a custom data structure before?

Quote:Original post by rip-off
In any case, I would value a programmer who only understands C++ pretty much the same as one who only knows Java. A good programmer is usually fluent in many languages.


The problem with Java is that there is really nothing inherently hard in the language because everything is abstracted away. It may be easier to learn, but how much do you actually understand if all you know is how to write Java code using the wonderful library where everything is written for you? I'm not bashing Java, though; it is a great language.
Quote:Original post by antiquechrono
No, I said an array based string. A class that has a dynamically allocated array.


But, that is basically what std::string is. Care to post your version?


Quote:Original post by ToohrVyk
This is an interesting idea, but how come I never see any smart pointers used in actual code or examples? My professors certainly have never even mentioned smart pointers before; do you have any resources I can look at discussing this, because I have just been taught that *ptr is the way to go.


I see them every day. My professors are often ignorant of more than the basics of some things, or so it would appear. Bear in mind many of them may never have been in industry and may never have written anything more than trivial programs. Raw pointers are manageable in small quantities, but in a complex program even a small pointer error can result in crashes or memory corruption.

Quote:
Quote:Original post by rip-off
Most of the things that you "cannot accomplish without pointers" have been implemented for you in the Standard C++ Library or Boost. Remember, this was posted in "For Beginners". Keep that in mind.


Ever write a custom data structure before?


Of course I have. And I did use pointers. But I would never use a custom data structure I wrote in production code, unless I found some significant weakness in a standard implementation. Guess what: I never have.

Quote:
The problem with Java is that there is really nothing inherently hard in the language because everything is abstracted away. It may be easier to learn, but how much do you actually understand if all you know is how to write Java code using the wonderful library where everything is written for you? I'm not bashing Java, though; it is a great language.


All Java data structures are written in Java. You can implement the equivalent of vectors, maps, trees, linked lists etc in pure Java. It isn't significantly different than using C++. In any case, most of the time I want to write game specific code, not re-write some piece of code that has been done to death by many programmers.
Quote:Original post by antiquechrono
Quote:Original post by ToohrVyk
A computer does not work 'like pointers'. Pointers have no representation of segmented memory or virtual memory layouts, and low-level memory has no concept of type or stride. Someone who lives with the illusion that the C or C++ operational semantics are somehow representative of computer architectures from the last decade is someone who is not going to be hired.


I'm not sure if you are doing quantum computing or something, but last I checked the most basic functionality of a processor is to fetch instructions from an address and execute them, and in the process more than likely manipulate data at another address. It has no concept of a variable, just addresses to data. Now call me old-fashioned, but that sounds an awful lot like what a pointer is.


And this is exactly why your assumption that pointers are an accurate model of how a computer actually works is harming you. And old fashioned. Actual computer memory models are much, much more complex than that for almost any computer produced since 1978. And probably even earlier than that. It's a point of view that completely ignores real details like virtual memory, TLBs, cache, page tables and other details of computer architecture hidden behind the pointer abstraction that can have a real effect in program performance, and are thus important for a professional programmer to understand. Not to mention this understanding is required to take full advantage of extensions to the traditional memory models, such as the Windows AWE.
Quote:
Well you are kind of responsible for starting the thread hijack into code and compiler optimization, I suppose that is a beginners topic as well?

Please watch your tone. EasilyConfused's original post concerned a very real, very common warning about the dangers of pointers that has quite little to do with optimization.

Quote:
I'm just speaking from limited but practical experience. I wrote an RSS feed parser which is obviously heavily text based. I ran callgrind on it and most of my code without surprise was spent manipulating strings thus the reason I wrote my char array based string class which ran circles around the standard string for my purposes.

This hardly constitutes a rationale for a global procedure, however valid it may be for a specific scenario.

Quote:
I'm not sure if you are doing quantum computing or something, but last I checked the most basic functionality of a processor is to fetch instructions from an address and execute them, and in the process more than likely manipulate data at another address. It has no concept of a variable, just addresses to data. Now call me old-fashioned, but that sounds an awful lot like what a pointer is.

Not really. The concept you're thinking of is that of "referential semantics," the ability to refer to one thing through some kind of proxy or intermediate (typically lighter-weight) form. A pointer is an example of such semantics, and C++ also has "references" which implement the concept more generally. Before you make the typical argument, the standard does not guarantee that references are "pointers under the hood," and in fact may not be on many occasions.

Furthermore, C++, like C, like Java, like C#, is a language defined in terms of an abstract machine (by the language standard). That machine is very simple for C++, but it's still an idealized machine and it is because of this that (a) C++ has so much undefined behavior, and (b) C++ is so portable (read: implementable on many platforms). It is only by accident that the observed or even specified behavior of that machine happens to match or appear similar to the actual underlying platform's behavior. The general assertion that "C++ is closer to the machine," is frequently a fallacy.

Quote:
The problem with Java is that there is really nothing inherently hard in the language because everything is abstracted away. It may be easier to learn, but how much do you actually understand if all you know is how to write Java code using the wonderful library where everything is written for you? I'm not bashing Java, though; it is a great language.

You learn how to write programs, and that's what programming is about. In many cases, knowing (or caring) about how the processor or whatever hardware is doing its stuff underneath is either irrelevant or a sign of bad code (you're making assumptions and increasing coupling). It's only in rare situations that it matters, and the situation originally in question in this thread isn't one of them.

Quote:
This is an interesting idea, but how come I never see any smart pointers used in actual code or examples? My professors certainly have never even mentioned smart pointers before; do you have any resources I can look at discussing this, because I have just been taught that *ptr is the way to go.

Ever written a function? Why? Probably to stop repeating some code, right? Centralized it all in one spot rather than copy-paste it all over.

That's (part of) the reason you use smart pointers. After all, using a pointer (that you dynamically allocate) you have two options: (1) delete the memory when you're done, (2) let the memory leak.

One who frequently elects (2) is a bad programmer -- ignorant or idiotic, or perhaps both. We won't concern ourselves with them. One who frequently picks (1) is a correct programmer; perhaps even a good programmer.

Good programmers wrap oft-repeated code in functions when appropriate. It follows then that good programmers wrap the deletion of their pointers in a function. In C++, which has support for the RAII idiom, we frequently elect to use the destructor of some proxy class to do this to take advantage of RAII and have the delete operation happen for us.
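A minimal sketch of such a proxy class, essentially what boost::scoped_ptr provides (the class and member names here are illustrative):

```cpp
// A bare-bones RAII proxy: the destructor centralises the delete,
// exactly the way a function centralises any other repeated code.
template <typename T>
class scoped_ptr {
public:
    explicit scoped_ptr(T* p) : ptr_(p) {}
    ~scoped_ptr() { delete ptr_; }           // option (1), performed for us
    T& operator*() const { return *ptr_; }
    T* operator->() const { return ptr_; }
private:
    T* ptr_;
    scoped_ptr(const scoped_ptr&);           // non-copyable: exactly one owner
    scoped_ptr& operator=(const scoped_ptr&);
};
```

When a scoped_ptr goes out of scope, whether normally or via an exception, its destructor fires and the delete happens, with no code needed at the call site.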
Quote:Original post by SiCrane
Quote:Original post by antiquechrono
Quote:Original post by ToohrVyk
A computer does not work 'like pointers'. Pointers have no representation of segmented memory or virtual memory layouts, and low-level memory has no concept of type or stride. Someone who lives with the illusion that the C or C++ operational semantics are somehow representative of computer architectures from the last decade is someone who is not going to be hired.


I'm not sure if you are doing quantum computing or something, but last I checked the most basic functionality of a processor is to fetch instructions from an address and execute them, and in the process more than likely manipulate data at another address. It has no concept of a variable, just addresses to data. Now call me old-fashioned, but that sounds an awful lot like what a pointer is.


And this is exactly why your assumption that pointers are an accurate model of how a computer actually works is harming you. And old fashioned. Actual computer memory models are much, much more complex than that for almost any computer produced since 1978. And probably even earlier than that. It's a point of view that completely ignores real details like virtual memory, TLBs, cache, page tables and other details of computer architecture hidden behind the pointer abstraction that can have a real effect in program performance, and are thus important for a professional programmer to understand. Not to mention this understanding is required to take full advantage of extensions to the traditional memory models, such as the Windows AWE.


Do you have any good links for required reading on subjects like this? This is the sort of stuff I never get into, or never see referenced anywhere. When you guys get going with these low level discussions I always read along to catch the little bits of interesting info that get scattered all over. I found that article on Named Return Value Optimization to be quite interesting actually.
Quote:Original post by antiquechrono
I'm not sure if you are doing quantum computing or something, but last I checked the most basic functionality of a processor is to fetch instructions from an address and execute them, and in the process more than likely manipulate data at another address. It has no concept of a variable, just addresses to data. Now call me old-fashioned, but that sounds an awful lot like what a pointer is.


Wrong, on both counts.

First, your description is a gross oversimplification of what the processor does. If this is the kind of 'machine-level' knowledge that can be gathered from using C, I don't really see why we should bother.

Consider a simple C statement such as z[1337]++;. To the C programmer, the program will extract the value at the 1337th position of the buffer pointed to by the integer pointer z, add 1 to that value, and store it back at that position. So far, so good. This is not what happens at the machine level.

First, there's the processor. Today, it's almost guaranteed to be a multi-core one, or perhaps a hyper-threading one. Thus, there's the initial question of which processor the code will be executed on, something which C has nothing to say about.

Then, there's the pipeline. Your typical processor today has between 9 and 30 pipeline stages, which are yet again completely unknown to the C language. Pipeline stages include fetching the instruction, decoding it, fetching memory, processing, waiting, retrieving, outputting, and so on. This is a fundamental point in optimization, because a full pipeline stall in a 30-stage pipeline will divide your performance by 30. In our situation here, using either z or the contents of the buffer nearby the write point will result in read-before-write conflicts and will stall the pipeline for the length of the increment processing.

Then, there's the instructions. Instead of being loaded from disk (where the applications reside), code is first-stage cached in memory and second-stage cached in an on-chip instruction cache. The entire caching process is completely invisible to the C application, which merely sees function pointers and goto labels, yet it is important because of the latency induced by potential cache misses on long jumps.

Then, there's the memory access. Your pointer access may have been successfully aliased by the compiler to a single register representing z[1337], which will need no memory addressing. Or you might be working with register sets in SIMD style, too. Otherwise, chances are that you'll hit the L1 cache, though it is doubtful as the buffer pointed to by z is bigger than many L1 caches. Thus, you might have to fetch the data from the L2 cache. Even assuming that the data was either in the L1 cache or the L2 cache, there's still the possibility that another core on your processor has altered the data, which means that a synchronization protocol between cores might be activated to fetch the data of the other core, just in case. Plus, once the data has been fetched, it might be automatically realigned in case it wasn't in the first place. The C language pointers do not even begin to hint at the depth of all this. And the memory address has not even been decoded yet!

Because, then, you get the actual, uncached, memory access. Your pointer will quite probably be converted, in any recent operating system, to segmented memory (an old relic from the early x86 architectures). The point is that the memory space for every process on your computer is guaranteed to be deterministic and linear by the operating system. That is, each address your program manipulates is an integer number from a minimum value to a maximum value (some of which have not been allocated yet). What happens is that the operating system maps your linear addresses to segments (or, if you prefer, pages) which are handled non-linearly (you have a page index, and then the offset within that page). When you access an address, your processor automatically converts the address based on the instructions it was given by the operating system.

If you hit an allocated page that your process is allowed to manipulate, the processor will send read or write instructions through the memory bus, with all the implied latency of such reads. Cache policies will intervene here to determine which L1 and L2 cache lines get filled and which don't, in order to optimize sequential memory access.

If you hit an unallocated page, the operating system is notified through a processor interrupt and will deal with the issue accordingly, usually terminating the infringing process for an access violation or segmentation fault (the terminology depends on the OS culture). This involves suspending your program, flushing the pipeline without damaging too many things, loading the kernel code into the instruction cache for that interrupt handler, elevating to a lower ring, and resuming execution within the kernel's handler. This gets even uglier with multi-cores.

If you hit a guard page, the operating system is also notified, but the reaction will vary. If the page is a lazy-allocation page (that is, you asked for 1GB memory, so the OS gave you 1GB worth of pages but not the memory corresponding to those pages, and will only give you the memory for one of the pages if you actually ask for it), then the operating system interrupt handler will fetch and reserve actual memory, bind it to the segment at the processor level, and resume interaction. Other pages include memory-mapped files or devices, at which point the operating system will forward the written data to the appropriate device driver as part of the interrupt—the actual details of how flushing, caching, multi-cores, pipelines and the rest actually interact with this are too horrible to mention here.

Yet, there's not even the slightest hint in the C language about the existence of pages. Which is not surprising, since C might also be used on page-less, cache-less or pipeline-less architectures. In the end, the result of z[1337]++; is an agonizingly detailed and complex process that goes way beyond simply reading data to and from memory. And almost none of it can be inferred from the C operational semantics.

Second, your interpretation of pointer semantics is also oversimplified, and a bit off. This is due, usually, to books and tutorials which give a simplified idea of what pointers are ("memory addresses" is a very frequent misleading explanation, though it does get the fundamental points across) without mentioning that the concept is actually much more complex.

A pointer-to-X rvalue, where X is an actual first-class type, can be one of three distinct things:
  • The 'null pointer' for the type X, which represents the absence of any value. The null pointer evaluates to false in a boolean context (while all other pointers evaluate to true), and the integer constant zero evaluates to the null pointer in a pointer context.
  • An lvalue of the type X. This is the usual 'points at an object of type X'. The definition of lvalue says everything there is to know here. The lvalue and its corresponding rvalue can be accessed through dereferencing (*ptr).
  • A past-the-end pointer. Unlike the null pointer, past-the-end pointers are many, and they differ from each other through '==' comparison. They cannot be dereferenced.

Then, there's the grouping of lvalues: they are grouped in buffers containing zero or more lvalues. A pointer to an lvalue can be incremented or decremented, changing its rvalue to the previous or next lvalue in the buffer if it exists, otherwise resulting in either that buffer's past-the-end pointer (if incrementing) or in undefined behaviour (if decrementing). Decrementing a past-the-end pointer yields the last lvalue in the associated buffer, or undefined behaviour if the buffer is empty. Such buffers are created every time you allocate data on the stack or heap, with pointers to the first lvalue being returned in the latter case, or obtained with &var in the former.

Matters are further complicated by the notion of memory layout compatibility, which allows one to see a buffer of X lvalues as a buffer of Y lvalues, under certain conditions of alignment, padding and size. These I will not go into here, but they are the fundamental element behind casting structures to a buffer of bytes, or behind unions.

The usual 'pointers are addresses' works fine, as long as you consider an address to be a synonym for an lvalue or past-the-end pointer, though it does miss a lot of the subtleties described above. And as soon as you get the strange notion that addresses are numbers, which is almost universally inflicted upon beginners by tutorials and books, you're off course. Unlike numbers, pointers can only be compared for order in very specific cases: when they're within the same buffer. Unlike numbers, pointers cannot reliably be converted to and from numbers (though C99 has made some efforts to solve this) and certainly do not respond correctly to arithmetic on numbers. The list of discrepancies goes on. Ultimately, code such as z[1337]++; actually consists of incrementing an lvalue, not accessing a memory address and incrementing the value found there.
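These rules are visible in the canonical iteration idiom, sketched below: the only pointer operations used are increments within one buffer and comparisons against that buffer's past-the-end pointer, which is itself never dereferenced.

```cpp
// Sum a buffer using only well-defined pointer operations.
// 'last' is the past-the-end pointer: compared against, never dereferenced.
int sum(const int* first, const int* last) {
    int total = 0;
    for (const int* it = first; it != last; ++it) {
        total += *it;  // *it is the lvalue the pointer designates
    }
    return total;
}
```

Calling sum(buf, buf + n) on an array of n ints stays entirely within the legal pointer arithmetic described above; comparing or subtracting pointers into two different buffers would not.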

The moral for beginners: the C language is its own complex world that's quite different from how the machine actually works. Knowledge of C will more often than not confuse you about how the machine works, instead of granting you knowledge of it.

Quote:This is an interesting idea, but how come I never see any smart pointers used in actual code or examples? My professors certainly have never even mentioned smart pointers before; do you have any resources I can look at discussing this, because I have just been taught that *ptr is the way to go.


Most teachers are not competent enough to be in the industry working a well-paid C++ job. This is generally why they're teachers instead. There are, of course, exceptions which teach out of pleasure, not for money.

All the C++ job interviews I've passed have asked about, or even tested, my knowledge of the SC++L and sometimes even the Boost library. And France isn't exactly known for its IT prowess.

This topic is closed to new replies.
