
## C or C++ or C#


### #21Hodgman  Moderators

Posted 28 August 2007 - 05:46 PM

@Nathan Baum
I don't like your tone mate, can you keep it civil eh?

Quote:
Original post by Nathan Baum
Quote:
Original post by Hodgman
C is *closer* to machine level than higher level languages like C# (i.e. it has raw pointers, can be freely mixed with assembly code, etc...).

No it can't. Certain implementations of C offer inline assembly. How "freely" the two can be mixed is highly variable. And C# has pointers, and at least one C# compiler supports inline MSIL assembly. So, you're wrong all around, basically.

Are you really trying to argue that C isn't less abstracted than C#? That's all I was saying... Again, can't see the woods for the trees...

Generally speaking, if you have a "certain implementation of C", like the MSVS version that is in wide use, you can include inline assembly in your C files. Also, all pointers in C are "raw" (as opposed to C# which includes "managed" pointers).
If both of those general statements are true, then how am I "wrong all round"?

You're not even arguing against my point (that C is less abstracted) - you're trying to find specific cases where my statements are also true for C# (in effect, trying to argue that C# is not more abstracted than C).

Quote:
Original post by Nathan Baum
Quote:
 e.g. #1 Learning how to deal with strings correctly in C will teach you a lot more about "how the computer works as a machine" compared to using a C++/C#/Java/etc string class.

No, it'll teach you about how C-style strings work. Given that they work the same on all implementations of C, even when running on machines with wildly differing architectures, that should be a clue that string handling doesn't reveal the secrets of "how the computer works as a machine".

Again, you're not arguing against my point - I didn't say that it would "reveal the secrets", I was trying to illustrate that using lower level constructs teaches you more about the machine than using higher level constructs.
Do you dispute this point, or are you just being argumentative without reason?

C-style strings involve manually dealing with pointers/arrays/null-termination, which are all low-level concepts. In contrast, C++/C#/Java strings involve using a simple high-level API.
Therefore learning to use C-style strings will require more low-level knowledge, which in turn reveals a thinner abstraction to "how the computer works as a machine".

Quote:
Original post by Nathan Baum
Quote:
 e.g. #2 You can write a program in C that accepts an arbitrary hexadecimal value from the user and tries to read 17 bytes from that memory address (which may well cause a seg-fault etc...). Higher languages such as C#/Java won't let you do silly low-level things like this because they are *more abstracted*.

C# will let you do silly things like that, not that it matters. The ability to crash your program by probing random memory addresses does not make a language "closer to the metal". BASIC allowed you to probe random memory addresses, yet it was, in many ways, "higher level" than C.

Ok, I'm not that experienced with C# so scratch it from my quote. My point still stands though - AFAIK, Java's version of 'pointers' ('references') can't be assigned to arbitrary memory addresses - the language's abstraction of memory is "more abstract" than C's abstraction of memory.

### #22Julian90  Members

Posted 28 August 2007 - 07:42 PM

Quote:
 Ok, I'm not that experienced with C# so scratch it from my quote. My point still stands though - AFAIK, Java's version of 'pointers' ('references') can't be assigned to arbitrary memory addresses - the language's abstraction of memory is "more abstract" than C's abstraction of memory.

Well, since "C the language" doesn't define the behavior of doing such things, the only difference is that in one case the compiler checks it and issues an error message, and in the other it doesn't. In fact, since it's undefined, a smart enough C compiler could issue an error message at compile time if it wanted to (although I don't know of any that do).

### #23Hodgman  Moderators

Posted 28 August 2007 - 07:58 PM

Quote:
Original post by Julian90
***nitpicking***

That's not the point (and this hypothetical smart compiler could only catch that error in specific cases).
C pointers are numbers, Java pointers are numbers in bubble-wrap.
Hence, to learn low-level computing concepts, C is probably more appropriate than Java.

### #24MaulingMonkey  Members

Posted 28 August 2007 - 08:26 PM

Quote:
Original post by Hodgman
@Nathan Baum
I don't like your tone mate, can you keep it civil eh?

I'm confused. Where has he been anything but civil? Normally we're a bit abrasively blunt, but I'm not even seeing anything I'd qualify as that in his posting (to my surprise!)

Quote:
 Are you really trying to argue that C isn't less abstracted than C#?

C attempts to be a complete abstraction of assembly, just like C# is of bytecode (assembly or otherwise). The fact that it's such a leaky one in its attempt isn't in its favor -- it's just leaky enough to be a pain in the ass without actually uncovering anything at all useful whatsoever. That takes skill... or in this case, legacy.

Quote:
 Generally speaking, if you have a "certain implementation of C", like the MSVS version that is in wide use, you can include inline assembly in your C files. Also, all pointers in C are "raw" (as opposed to C# which includes "managed" pointers).

Actually, C# managed references are going to use the exact same patterns as C++ ones, the C# environment just knows that it's allowed to take advantage of that information too for GC. C# can also use C++ style "raw" pointers with the use of the unsafe keyword. Unlike C++, it teaches a fundamental computing truth: Full manual all the time is a bad idea ™ because programmers are fallible humans.

Quote:
Quote:
Original post by Nathan Baum
Quote:
 e.g. #1 Learning how to deal with strings correctly in C will teach you a lot more about "how the computer works as a machine" compared to using a C++/C#/Java/etc string class.

No, it'll teach you about how C-style strings work. Given that they work the same on all implementations of C, even when running on machines with wildly differing architectures, that should be a clue that string handling doesn't reveal the secrets of "how the computer works as a machine".

Again, you're not arguing against my point - I didn't say that it would "reveal the secrets", I was trying to illustrate that using lower level constructs teaches you more about the machine than using higher level constructs.

No, it teaches you more about just those "low level" concepts. Are those concepts useful to programming, or actually teach you anything about how computers work? No. They'll show you a sliver of how they worked on a 286, but modern hardware merely emulates that model for backwards compatibility.

Hell, even x86 is a bit of a sham now. Modern CPUs merely decode it into their own internal microformats for actual execution.

Quote:
 C-style strings involve manually dealing with pointers/arrays/null-termination, which are all low-level concepts. In contrast, C++/C#/Java strings involve using a simple high-level API.

Which will give you more time to read a book on how computers actually work, with information actually relevant to modern programming, some of which Nathan mentioned -- such as knowing the effects of the various cache layers on your code.

Or, for example, reading why null termination is horribly inefficient. Note that C's strlen/strcpy/strcat all abstract away the reason behind this just as much as those higher level APIs would if they were as broken as C is in their choice of preferred string format.

Quote:
Quote:
Original post by Nathan Baum
Quote:
 e.g. #2 You can write a program in C that accepts an arbitrary hexadecimal value from the user and tries to read 17 bytes from that memory address (which may well cause a seg-fault etc...). Higher languages such as C#/Java won't let you do silly low-level things like this because they are *more abstracted*.

C# will let you do silly things like that, not that it matters. The ability to crash your program by probing random memory addresses does not make a language "closer to the metal". BASIC allowed you to probe random memory addresses, yet it was, in many ways, "higher level" than C.

Ok, I'm not that experienced with C# so scratch it from my quote. My point still stands though - AFAIK, Java's version of 'pointers' ('references') can't be assigned to arbitrary memory addresses - the language's abstraction of memory is "more abstract" than C's abstraction of memory.

I thought we were talking about C#, not Java. I mean, it's just in the freakin' thread title. Yes, there are HLLs that will forcibly abstract things even higher, but we generally don't have much love for Java around here.

Forcibly deabstracting things like C does, however, is also the wrong answer. With flexibility comes the ability to do whatever you want whenever it's appropriate. Telling a beginner to use a language that's unsafe by default in its very design, before mastering even the most basic concepts of programming -- even the simplest things such as the flow of execution -- is not appropriate.

And, given a choice between forcibly abstracting or deabstracting, for a beginner, I'd go with forcibly abstracting. Abstraction is fundamental to good programming -- especially defensive programming. Deabstraction is only fundamental to http://worsethanfailure.com/'s existence.

### #25Crypter  Members

Posted 28 August 2007 - 08:58 PM

Quote:

Quote:

Quote:
 e.g. #1 Learning how to deal with strings correctly in C will teach you a lot more about "how the computer works as a machine" compared to using a C++/C#/Java/etc string class.

No, it'll teach you about how C-style strings work. Given that they work the same on all implementations of C, even when running on machines with wildly differing architectures, that should be a clue that string handling doesn't reveal the secrets of "how the computer works as a machine".

Again, you're not arguing against my point - I didn't say that it would "reveal the secrets", I was trying to illustrate that using lower level constructs teaches you more about the machine than using higher level constructs.

They are C concepts, not low level concepts. Strings don't need any null termination. Pointers are used a lot in low level programming, but only for dereferencing addresses and address arithmetic.

C is an abstraction of assembly language.

assembly language is an abstraction of machine language.

machine language is an abstraction of microcode.

microcode is either stored in EEPROM (the BIOS, for example) or hardwired in.

With that done, I think this is getting a bit off-topic... [wink]

### #26Buster2000  Members

Posted 28 August 2007 - 09:14 PM

Isn't this getting a bit too far off the original topic? The OP wanted to know which language to start learning between C, C++ or C#.
He didn't ask which was closer to the operating system or which would teach the most about computer architecture.
C# is good enough for a beginner. If later down the line you want to change to C or C++ then you'll be able to pick up those languages very quickly.
Most things that you can do in C you can do in C#, apart from the really, really low level stuff -- which, as operating systems become more and more abstracted away from hardware, you are less likely to need.

### #27ToohrVyk  Members

Posted 28 August 2007 - 10:03 PM

OP: to learn about coding, learn all three. And learn them correctly.

My ideal sequence of learning would be to start with a simple functional language such as Caml Light to familiarize yourself with elementary algorithms without all the clutter of real-world industry languages. Once you learn how to do things in the small, tight and controlled environment of a very simple language, you can move on to learn more detailed concepts. The next step would be object-oriented development, as part of C# (Java is fine, but I find it more frustrating due to the lack of first-class functions). Then, learning C, Scheme and Prolog for the sheer cultural importance. And finally, once you have the maturity, learning C++—you need to be a lawyer to understand how it works.

Quote:
Original post by Hodgman
C pointers are numbers, Java pointers are numbers in bubble-wrap.
Hence, to learn low-level computing concepts, C is probably more appropriate than Java.

In the C language, Pointers. Are. Not. Numbers. This is possibly the greatest misconception about the C memory model that could be imagined.

A C pointer is either null, or a combination of a buffer B, an offset O (with the constraint 0 <= O <= sizeof(B)), and a type T. Notice that pointers do not necessarily contain a number (a null pointer contains no numbers at all), and that non-null pointers also reference the buffer within which they exist, in addition to their offset within that buffer. So the ANSI sayeth.

This is not merely a theoretical rambling. In real life, on a segmented memory model (such as the one in Joe User's 2007 x86 processor), pointers are represented as a 48-bit Segment + Offset combination, stored separately into an offset register (32-bit) and a segment register (16-bit). If you consider C pointers to be just numbers (which they aren't), then you are actually hiding low-level concepts under the rug, instead of learning about them.

In short, some hardware uses numbers to represent memory addresses (the GeForce 8800 uses uint32_t for this), some hardware uses a segmented memory model, and most operating systems add virtual memory, paging, swaps, memory-mapped devices and files on top of everything to further muddle everything.

This, at least to me, shows that both the C memory model and the Java memory model are both woefully irrepresentative of how the low-level memory models (yes, there are several, at least on the x86) actually work. If anything, the C memory model is a weak approximation of the x86 segmented mode, and the Java memory model is a weak approximation of the x86 protected mode, which hardly classifies either Java or C as being more representative of lower-level concepts than the other.

Ironically, I could even argue that Java tells us more about how typical memory models work than C ever could. This is because almost every modern operating system in existence provides its users with virtual memory. This means that the system allocates memory pages when the program needs them, and may choose to garbage-collect them when the program doesn't need them anymore, or may opt to move them around in actual physical memory -- for instance, swapping them to disk to increase the speed of other processes, or even saving them to disk for hibernation and restoring them later on. All of this "memory moves around and is garbage-collected" mumbo-jumbo is neatly and cleanly abstracted away by both the Java and the C memory models, which both say "here's a pointer to something, you can always access that something through this pointer". However, by making garbage collection of unused memory, and movement of memory buffers during garbage collection, an explicit part of the Java way of life, Java gives a little insight into how virtual memory works -- something that C could never have any hope of explaining.

### #28SLaYaH  Members

Posted 29 August 2007 - 04:55 AM

Sorry everyone, I didn't mean for this to happen. I did read through all your posts and learned about a few things. Thank you everyone for posting; I do believe I have received my answer through a few various posts.

I'll be sure to post again at GD with another problem, probably after I have progressed a bit. Thanks again.

### #29PaulCesar  Members

Posted 29 August 2007 - 05:18 AM

Since this thread is already derailed beyond any imaginable chance of getting back on target, I have a simple question...

Many of you who have contributed to derailing this thread are the same people who consistently discuss the dilemma of people derailing threads. wt?

- Ricky

### #30Promit  Senior Moderators

Posted 29 August 2007 - 08:09 AM

Quote:
Original post by PaulCesar
Since this thread is already derailed beyond any imaginable chance of getting back on target, I have a simple question... Many of you who have contributed to derailing this thread are the same people who consistently discuss the dilemma of people derailing threads. wt?
- Ricky
For what it's worth, I don't consider this thread derailed at all.

### #31GeneDefekt  Members

Posted 29 August 2007 - 08:21 AM

As an all around programmer learn them all.

As for which is best to use:

There are no silver bullets in programming. Each problem you encounter will always have a "best" solution. After learning all of them you will be able to choose which language to use for what.

So before you choose a language from your tool kit you should ask, "What exactly is it that I want?".

How much control over memory do you want? How large is the project? Will there be extensions to said project in the future? All of these questions and more should go into deciding which language you wish to use.

So in short learn them all for the sake of more tools. After doing this you will probably know which tool will best fit your problem.

Chris

### #32Ravyne  Members

Posted 29 August 2007 - 11:58 AM

It appears I've sparked a bit of controversy. In hindsight, I regret the phrase about "learning how the computer works as a machine." In reality, I meant that C exposes a lower-level, logical model (not physical, as the word machine would imply) of the machine.

As has been noted, everything above the microcode layer is an abstraction, technically speaking. So I think the question is really this: At what level of abstraction do you find the optimal position for a beginner to enter into the world of programming?

For people of the bottom-up mindset, I believe that position to be C (or another small, procedural or even functional language) for 95% of them, with assembly picking up the remainder. The top-down crowd would be more at home with C# or Java or even C++ (though again, I don't recommend it as a first language).

Why do I believe that C is the optimal beginner language for so many people? Because it introduces programming concepts (storage, conditionals, loops, etc.) at a sufficiently high level to be friendly, but doesn't introduce a lot of high-level concepts of its own in the way that Java, C# or even C++ does. Null-terminated strings are the only abstraction that comes to mind (and I do agree that null termination was a bad design choice in the long term, probably driven by memory constraints at the time). At the same time, it exposes the low-level workings of the computer (again, as a logical model, not the physical machine) and facilitates creating your own higher-level abstractions.

C was, after all, explicitly designed as "a portable assembly language" -- the lowest convenient abstraction above disparate architectures. And while I also believe that a modern reinterpretation of C might benefit by including concepts such as multiple processors and the memory hierarchy, I'm glad C itself does not -- whether by design or glorious accident -- because it could not have served its position as least common denominator if it had.

throw table_exception("(ノ ゜Д゜)ノ ︵ ┻━┻");

### #33MaulingMonkey  Members

Posted 29 August 2007 - 02:34 PM

Quote:
Original post by ravyne2001
It appears I've sparked a bit of controversy. In hindsight, I regret the phrase about "learning how the computer works as a machine." In reality, I meant that C exposes a lower-level, logical model (not physical, as the word machine would imply) of the machine.

Given the extremely complex set of failure cases in C's model, I'd argue that higher level languages (by virtue of having sane, defined behavior for such things) actually expose a simpler, saner, lower-level logical model than C does!

In fact, this is the exact reason I don't recommend C! I think a beginner should be able to give their full, undivided attention to learning programming, based on a simple, sane, dependable logical level, rather than learn the wealth of complexities involved with that genuine-imitation-real-fake-computer-"logic"-programming-model that layers atop that concept to implement it.

I'd argue one should take it bottom-up from the logical model, rather than the hacked up legacy once-upon-a-time-physical* model, in my opinion. And C's model is anything but logical. C's model is all about statistical analysis of mapping failure cases to probable cause, to such an extent that it's really closer related to lawyers and the court system in general than it is to mathematicians.

Modern logic, modern hardware, or a legacy model that's a terrible substitute for either. That's how I see the choices.

(*I remember a time when we used near and far pointers, and when compilers paid heed to the register keyword.)

### #34Hodgman  Moderators

Posted 29 August 2007 - 04:38 PM

This is getting out of hand; let's return to the origin of this off-topic argument:
Quote:
Original post by ravyne2001
On the other hand, C is a great language to learn how the computer works as a machine, because it provides no high-level abstractions of its own and essentially frees the programmer to do whatever they wish (hardware/OS restrictions aside). You can, for example, implement features in C that are commonly found in OO languages like C++ or C#, such as polymorphism. C can give you a window into the workings of things that you might take for granted in another language. Another plus is that C is a much smaller language than, for example, C++ (in both the number of keywords and the scope of its built-in features) so you can get your head around it more quickly.

Ravyne's statement that "C is a great language to learn how the computer works as a machine" is the genesis of this off-topic debate.

This statement can be interpreted a few ways, such as:
A) Learning C will teach you how a computer works as a machine.
B) If you are learning how a computer works as a machine, then C will aid you in your studies.

Anyone with a degree in computing will know that statement (A) is false, and seeing as Ravyne isn't a newb, it's pretty safe to assume that he intended this statement to be interpreted as something closer to (B), especially when reading the surrounding context of the statement.
However, cshowie assumed that Ravyne actually meant (A) and decided to start a debate about whether this misinterpretation of Ravyne's argument is false or not.

Any debate about statement (A) is useless, because noone actually made that statement. Debate based on this controversial statement should be aimed at whether C is a useful aid for students of computer science, not whether C is a useful teacher of computer science!

Nathan Baum then joined the debate, backing up cshowie on his irrelevant argument against the non-existent statement (A). As I already mentioned, these additions to the debate are useless, because there is only one side to this debate (cshowie's view VS cshowie's misinterpretation of ravyne's view).

Baum's comments aimed at me were not actually against statement (B), they were against statement (A). Seeing as I was putting forth an argument in support of statement (B) (that C is lower level than C#, therefore suitable for implementing lower level concepts), and that we all already know that statement (A) is not only wrong, but irrelevant, I found his comments to be of a condescending tone (Just as if during an argument about how to best slice a pizza, your opponent started explaining to you how to tie your shoes as if you were a child).

Ok? So if you're arguing that learning C will not teach you computer science, then you can stop now, because noone has or is supporting that statement.

Quote:
Original post by Crypter
They are C concepts, not low level concepts. Strings don't need any null termination. Pointers are used a lot in low level programming, but only for dereferencing addresses and address arithmetic.

Arrays, pointers and null-termination are not just C concepts. The "null terminated string" may be used by C, but null termination is just a general computing concept that any student will learn at some point.
Therefore if you are a student learning about the general concept of null termination, or arrays, or pointers, then C will be a useful tool to aid you in your studies of these concepts.

Quote:
Original post by Crypter
*list of facts*

Noone is disputing this.

Quote:
Original post by MaulingMonkey
Quote:
Quote:
Original post by Nathan Baum
Quote:
 e.g. #1 Learning how to deal with strings correctly in C will teach you a lot more about "how the computer works as a machine" compared to using a C++/C#/Java/etc string class.

No, it'll teach you about how C-style strings work. Given that they work the same on all implementations of C, even when running on machines with wildly differing architectures, that should be a clue that string handling doesn't reveal the secrets of "how the computer works as a machine".

Again, you're not arguing against my point - I didn't say that it would "reveal the secrets", I was trying to illustrate that using lower level constructs teaches you more about the machine than using higher level constructs.

No, it teaches you more about just those "low level" concepts. Are those concepts useful to programming, or actually teach you anything about how computers work? No. They'll show you a sliver of how they worked on a 286, but modern hardware merely emulates that model for backwards compatibility.

Hell, even x86 is a bit of a sham now. Modern CPUs merely decode it into their own internal microformats for actual execution.

Noone ever said that learning C will teach you how a computer actually works.
What I was trying to say is that if you are learning how a computer actually works, then a lower level language, like C, is a more appropriate choice for practising these concepts than a higher level language.

Computers still use arrays, computers still use pointers, computers still use null termination. If you're a computer scientist you will know these basic concepts, to know them you must learn them, to learn them you must use them. An appropriate way to use them is to learn C.

Quote:
Original post by MaulingMonkey
Quote:
Quote:
Original post by Nathan Baum
Quote:
 e.g. #2 You can write a program in C that accepts an arbitrary hexadecimal value from the user and tries to read 17 bytes from that memory address (which may well cause a seg-fault etc...). Higher languages such as C#/Java won't let you do silly low-level things like this because they are *more abstracted*.

C# will let you do silly things like that, not that it matters. The ability to crash your program by probing random memory addresses does not make a language "closer to the metal". BASIC allowed you to probe random memory addresses, yet it was, in many ways, "higher level" than C.

Ok, I'm not that experienced with C# so scratch it from my quote. My point still stands though - AFAIK, Java's version of 'pointers' ('references') can't be assigned to arbitrary memory addresses - the language's abstraction of memory is "more abstract" than C's abstraction of memory.

I thought we were talking about C#, not Java. I mean, it's just in the freakin' thread title. Yes, there are HLLs that will forcibly abstract things even higher, but we generally don't have much love for Java around here.

In the context of that quote, I was talking about HLLs in general (such as Java/C#) in an attempt to illustrate that they are more abstracted than lower level languages. It's a fairly simple concept (that higher level == more abstract), but for some reason argumentative people keep trying to disprove it...

Quote:
Original post by MaulingMonkey
Forcibly deabstracting things like C does, however, is also the wrong answer. With flexibility comes the ability to do whatever you want whenever it's appropriate. Telling a beginner to use a language that's unsafe by default in its very design, before mastering even the most basic concepts of programming -- even the simplest things such as the flow of execution -- is not appropriate.

You're arguing against points that haven't been made! I didn't recommend C to beginners, I just supported ravyne's view that low-level languages are ok to use when learning low-level concepts.

Quote:
Original post by ToohrVyk
Lots of info on pointers

Sorry ToohrVyk, when I compared C pointers to numbers and HLL pointers to numbers in bubble wrap, I didn't intend for you to read that far into it. What I wanted you to focus on was the bubble-wrap part of the analogy (garbage collectors, etc...).

### #35Promit  Senior Moderators

Posted 29 August 2007 - 04:51 PM

I know that you're from Australia, but do you realize that "no one" is, in fact, two words?

### #36MaulingMonkey  Members

Posted 29 August 2007 - 06:59 PM

Quote:
Original post by Hodgman
Quote:
Original post by MaulingMonkey
No, it teaches you more about just those "low level" concepts. Are those concepts useful to programming, or actually teach you anything about how computers work? No. They'll show you a sliver of how they worked on a 286, but modern hardware merely emulates that model for backwards compatibility.
Hell, even x86 is a bit of a sham now. Modern CPUs merely decode it into their own internal microformats for actual execution.

Noone ever said that learning C will teach you how a computer actually works.

Quote:
Original post by Hodgman, again, earlier, emphasis mine.
C is *closer* to machine level than higher level languages like C# (i.e. it has raw pointers, can be freely mixed with assembly code, etc...). Everyone (including ravyne) knows C isn't machine level, but it's C L O S E R.
e.g. #1 Learning how to deal with strings correctly in C will teach you a lot more about "how the computer works as a machine" compared to using a C++/C#/Java/etc string class.

Okay, so maybe you didn't mean it like that. Hell, ravyne already addressed this point and moved on, although still leaving things horribly muddled (seemingly, to me,) which I've tried to address. My basic point is this:

A) Closer to the physical machine
B) Closer to the logic
C) Closer to the virtual machine

Pick one. Okay, I hear you, you're not trying to pick both choice A and choice B. You really mean choice C, not choice A. That's still two, distinct, incompatible choices. I've already elaborated on the difference between B and C, and which one the language C is.

Quote:
 Computers still use arrays, computers still use pointers, computers still use null termination. If you're a computer scientist you will know these basic concepts, to know them you must learn them, to learn them you must use them. An appropriate way to use them is to learn C.

C# still uses arrays, pointers, and yes, null termination if you need to support that ungodly legacy hack of a string format. You can do it, just like you could in C. If you're a computer scientist you will know such basic concepts such as garbage collection. To know them you must learn them, to learn them you must use them. An appropriate way to use them is to learn C#.

Quote:
Quote:
Original post by MaulingMonkey
I thought we were talking about C#, not Java. I mean, it's just in the freakin' thread title. Yes, there are HLLs that will forcibly abstract things even higher, we generally don't have much love for Java around here.

In the context of that quote, I was talking about HLLs in general (such as Java/C#) in an attempt to illustrate that they are more abstracted than lower level languages. It's a fairly simple concept (that higher level == more abstract), but for some reason argumentative people keep trying to disprove it...

The fairly simple concept that I'm trying to explain to you is that "totally lacking anything higher level whatsoever" isn't a necessary prerequisite to having low level access. C# is a prime counterexample, showing this isn't the case at all. It's a fairly simple concept (that you can have the best of both worlds at the same time, sometimes), but for some reason argumentative people keep trying to disprove it... as if low level were only the realm of C.

Quote:
Quote:
 Original post by MaulingMonkey
Forcibly deabstracting things like C does, however, is also the wrong answer. With flexibility comes the ability to do whatever you want whenever it's appropriate. Telling a beginner to use a language that's unsafe by default in its very design, before mastering even the most basic concepts of programming -- even the simplest things such as the flow of execution -- is not appropriate.

You're arguing against points that haven't been made!

I'm elaborating further on the cons of your suggestion -- espousing C's supposed general pros in a For Beginners thread whose author is clearly language hunting amounts to an implicit suggestion, whether you meant it that way or not. Not that I was talking only to you -- others here have quite clearly and explicitly suggested C, and in the original post the author made it quite clear that he is, in fact, looking for the pros and cons of the various suggestions, which makes my comments clearly relevant to this thread.

Quote:
 Original post by ToohrVyk
Lots of info on pointers

Sorry ToohrVyk, when I compared C pointers to numbers and HLL pointers to numbers in bubble wrap, I didn't intend you to read that far into it. What I wanted you to focus on was the bubble-wrap part of the analogy (such as garbage collectors, etc...).

1959:   GC
1978:   8086 (segmented 16-bit)
1985:   386  (32-bit)
Now:    CPU & GPU:  Still distinct address spaces.
        At home:    Virtualized memory still bubble-wrapping over cache, RAM, pagefile, mapped files.
        At work:    Unified addressing across multiple nodes of a cluster still extremely uncommon (although not unheard of).
        GC:         Still operating on the same principle of unreferenced data being collectible garbage.

Active memory as a single integer-addressed whole is the bubble-wrap. Garbage collection, however, is clearly a fundamental concept to programming logic. Not one every project needs to use (at least as an automated process), but one nevertheless.

### #37 ToohrVyk  Members

Posted 29 August 2007 - 07:07 PM

Quote:
 Original post by Hodgman
Ok? So if you're arguing that learning C will not teach you computer science, then you can stop now, because no one has supported or is supporting that statement.

Most of the points made in favour of C in this thread focused on the "features" of C, such as null-terminated strings, pointers, and similar concepts. Most of the counter-points explained that none of these concepts actually provided any relevant information about how the computer works as a machine. These features may well teach you about some implementation details of some high-level languages (though most HLLs use some tricks that cannot be explained using C concepts, such as tail recursion).

Quote:
 Arrays, pointers and null-termination are not just C concepts. The "null terminated string" may be used by C, but null termination is just a general computing concept that any student will learn at some point.

I would argue that null-termination is a narrow and rarely used feature (outside of C) that deserves less focus than the more important concepts of type-safety, polymorphism, or concurrency; that C++ iterators are a much better way of learning about pointers; and that arrays are an almost universal feature of programming languages.

Therefore if you are a student learning about the general concept of null termination, or arrays, or pointers, then C will be a useful tool to aid you in your studies of these concepts.

Quote:
 Original post by Crypter
*list of facts*

No one is disputing this.

Quote:
 e.g. #1 Learning how to deal with strings correctly in C will teach you a lot more about "how the computer works as a machine" compared to using a C++/C#/Java/etc string class.

Quote:
 No one ever said that learning C will teach you how a computer actually works.

Quote:
 What I was trying to say is that if you are learning how a computer actually works, then a lower level language, like C, is a more appropriate choice for practising these concepts than a higher level language.

This, we know. However, you have so far only provided a list of C features that do not actually teach, or help you learn, anything useful about the inner workings of the computer. You have mentioned C-strings and pointers, for example, none of which have any relevance or bearing on the way the computer works.

The correct statement, with which everyone would agree, is that C teaches you how low-level programming works. But both high-level programming and low-level programming have become completely disconnected from the actual way the computer works.

Quote:
 Computers still use arrays, computers still use pointers, computers still use null termination.

Actually, programs use these. Computers have no notion of array, pointer or null termination: these are merely programming language concepts, not hardware concepts.

Quote:
 In the context of that quote, I was talking about HLLs in general (such as Java/C#) in an attempt to illustrate that they are more abstracted than lower level languages. It's a fairly simple concept (that higher level == more abstract), but for some reason argumentative people keep trying to disprove it...

Actually, everyone agrees with this. What we don't agree with is that "lower level == closer to the machine", which isn't the case. See below.

Quote:
 Sorry ToohrVyk, when I said compared C pointers to numbers and HLL pointers to numbers in bubble wrap, I didn't intend you to read that far into it. What I wanted you to focus on was the bubble-wrap part of the analogy (such as garbage collectors, etc...).

So, let's focus on the bubble-wrap part. Java provides garbage collection while C doesn't. Therefore, the abstraction of the Java memory model is actually closer to the machine than the abstraction of the C memory model, by virtue of the machine using a garbage collection scheme to allocate, deallocate and move objects around. C pointers are in fact implemented as Java references behind-the-scenes, but C programmers never know it. I could even go so far as to say that a compacting garbage collector like that of Java or C# teaches important things about the importance of cache coherency, a subject the C language sidesteps entirely, as previously noted in this thread.

In short:
• The C language has a very simplistic view of the world.
• Higher-level languages have a more complex view of the world.
• The computer, as a machine, is extremely complex.

Some of the complex concepts seen in higher-level languages are bound to represent and explain the hardware-level concepts.

### #38 Tape_Worm  Members

Posted 29 August 2007 - 07:12 PM

Quote:
 Original post by SLaYaH
I am stuck trying to decide. Originally I was only deciding between C and C++, but now there's C# and I'm not sure what to do. If at all possible I'd like to know the pros and cons of each, and which you would suggest for learning to code as an ALL AROUND programmer, not just for gaming, though gaming may be part of it. Thanks in advance.

Oh god, not another one of these threads? Do we not learn, people?

Here are some tips: search the site for C# vs. C++, C# vs. Java, C# vs. Godzilla and so on... there's a billion of these threads and they all end in tears. Promit did a nice catalogue of the threads a while back.

My bit of advice since you seem to be new:
Pick one, try it; don't like it? Move on until you find something you're comfortable with. Believe me, there's plenty of information about each language floating around, and enough people here proficient in all 3 to give you advice should you need it. In all honesty, it doesn't hurt to know as many languages as you can; it'll give you a broader perspective. At the end of the day you should eventually try to learn all 3, and many more besides, if you have the time and patience for it. Just don't bite off more than you can chew.

Good luck.

### #39 Hodgman  Moderators

Posted 29 August 2007 - 07:29 PM

Quote:
 Original post by Hodgman
No one ever said that learning C will teach you how a computer actually works...
e.g. #1 Learning how to deal with strings correctly in C will teach you a lot more about "how the computer works as a machine" compared to using a C++/C#/Java/etc string class.

This isn't a contradiction...

By "how the computer works as a machine" I meant low level computing concepts in general (as I've already said, multiple times - stop it with the straw men already please!), using the example of all the crap you have to do manually in C compared to a language with proper string support.

In one approach you just use a few function calls; in the other you have to manually apply basic computing concepts. The manual approach therefore requires more knowledge about how computers work at a lower level.

Quote:
 I'm elaborating further on the cons of your suggestion

That's ok, I just wanted to point out that I never actually suggested that a beginner should choose C over something else (you implied that I did), just that it can be a useful learning tool.

Quote:
 Active memory as a single integer-addressed whole is **the** bubble-wrap. Garbage collection, however, is clearly a fundamental concept to programming logic. Not one every project needs to use (at least as an automated process), but one nevertheless.

It may be bubble-wrap, but it wasn't THE bubble wrap I was talking about.
It's already been said many times that C is an abstraction (hence a C pointer is going to have bubble-wrap compared to what's really happening). I guess I should have explicitly said that *managed* pointers have *more* bubble wrap than *raw* pointers.

Quote:
 The fairly simple concept that I'm trying to explain to you is that "totally lacking anything higher level whatsoever" isn't a necessary prerequisite to having low level access.

Ok, I've never disagreed with that.
There's nothing wrong with making your own string class in C# or your own linked list in Java - you can still learn these low-level concepts in an HLL. As the above quoted example shows, I was saying that doing them in C (i.e. manually) will teach you more about these low-level concepts than using a string API.
If you're doing them manually in C and manually in C# then there's not much difference (the main one being that you don't have to do it manually in C#).

Quote:
 C# still uses arrays, pointers, and yes, null termination if you need to support that ungodly legacy hack of a string format. You can do it, just like you could in C. If you're a computer scientist you will know such basic concepts such as garbage collection. To know them you must learn them, to learn them you must use them. An appropriate way to use them is to learn C#.

Yeah, I agree.

So, seeing as C# can do all the (unsafe) things C can do, you could use it to learn all of these old C techniques. However, I (personally) still think it's appropriate to learn them using C for the same reason that you don't conduct survival training in a hotel resort - you do it in the desert.
(On that note, it's probably important for programming students to use some assembly code at some point too, to understand even lower-level concepts, like how a function call is implemented -- but don't take this as a suggestion that you should build apps in ASM over C#!!!)

Quote:
 Original post by ToohrVyk
Most of the points made in favour of C in this thread focused on the "features" of C, such as null-terminated strings, pointers, and similar concepts.

As I've already said, I pointed out these "features" as part of an example of the amount of manual work required in C compared to a language with a good standard library.
I.e. the important part is that you do it manually, not that you use a particular "feature".

Quote:
 Original post by ToohrVyk
You have so far only provided a list of C features that do not actually teach, or help learning, anything useful about the inner workings of the computer. You have mentioned C-strings and pointers, for example, none of which have any relevance or bearing on the way the computer works.

Again, I've already said that I wasn't talking about the "inner workings of the computer", but the "low level workings of the computer".

Quote:
 Original post by ToohrVyk
Java provides garbage collection while C doesn't. Therefore, the abstraction of the Java memory model is actually closer to the machine than the abstraction of the C memory model, by virtue of the machine using a garbage collection scheme to allocate, deallocate and move objects around. C pointers are in fact implemented as Java[-style] references behind-the-scenes, but C programmers never know it. I could even go as far as saying that a compacting garbage collector like that of Java or C# teaches important things about the importance of cache coherency, a subject totally eluded by the C language, as previously noted in this thread.

Now this is a counter to the argument about LLLs being closer to the machine than HLLs.
However, my point about doing things manually being a better learning activity than using an API (and C being a good language for this exercise) still stands.

[Edited by - Hodgman on August 30, 2007 2:29:24 AM]

### #40 Promit  Senior Moderators

Posted 30 August 2007 - 10:10 AM

I just want to point out that null-terminated strings are a C concept. They have nothing to do with the low-level workings, or inner workings, or any of that bullshit. Most languages don't even use null-terminated strings; C++, Java, C#, and Pascal/Delphi all store explicit lengths.
