# How many of you use C for game programming?

108 replies to this topic

### Poll: How many of you use C for game programming? (152 members have cast votes)

#### Do you use C?

1. Yes (30 votes [19.74%])
2. No (80 votes [52.63%])
3. Sometimes (42 votes [27.63%])

### #81 phresnel  Members - Reputation: 949

Posted 03 February 2011 - 06:34 AM

> > Well frankly I don't have problems managing my own memory.
>
> You just haven't worked on a big enough application. The underlying issues are still there, you just haven't hit them yet.

> Oh trust me I have. My own personal project is quite big.

My own main project is not so big (just 60,000 lines, roughly), but there's a lot of data-sharing. Keeping track of all the sharing would be near-impossible with raw pointers. Sub-components may insert new objects into parent objects, and vice versa; whole trees are passed into external components, data structures are extended, et cetera. Without shared_ptrs, I would really have a hard time keeping it all correct while doing all those mutations. Which code in my codebase depends on properly managing specific objects? Which parts are forbidden to release objects? Et cetera. And if this paragraph is slightly chaotic, then that's by intention. Of course, a lot is possible (see e.g. the Linux kernel), but the more you automate, the more time you have for actual features your users are interested in.

> and I don't have problems because I actually know how to control it

But your users possibly don't. They won't help you as RAII would in exceptional cases.
The number of flow-permutations in even a small project of <10 KLoC is usually big enough already to make good testing a nightmare, and automating the right deletion moment even for exceptional cases helps a zillion tons.

### #82 Garnier  Members - Reputation: 102

Posted 03 February 2011 - 07:37 AM

> Like I said, the only thing keeping me from jumping to C# is that my gui framework is not being maintained, because pretty much 99% are using the C++ one. My gui library handles pretty much all the memory for me; that's why I don't really ever have to deal with memory leaks, and I'm happy with it. The other guis are just not available cross-platform, and that's the development I do. But let me be clear: I don't care if others think Linux or Mac doesn't have a high market share; the people I target do. If C# offered a cross-platform api as good as Qt I'd jump, but until then I'm sticking with C++ no matter how much better C# gets.

Yeah, that makes sense. If your programs are not for the Windows market, you can't just use whatever you want. That's not something that I can argue with. There is Mono, but I've never used it, so I don't know how it would compare.

### #83 Fiddler  Members - Reputation: 856

Posted 03 February 2011 - 03:15 PM

> > If you've never spent time hunting down a memory leak, that's great, Seabourne. I imagine most of the rest of us have.
>
> I haven't had to in languages with automatic memory management, and regardless of the other pros and cons, I consider that a benefit. I'm sure there are times when writing your own code to manage the memory is important, but I haven't encountered them yet myself. C# allows you to manage it yourself if you want to, and I imagine Java does as well, so I don't worry about encountering such a situation. Now, if Microsoft bothered writing an up-to-date API that was much better, and dropped that ugly, poor implementation of Hungarian notation, it would be easier to use.

I believe this is what .NET is.
It works for me, and I enjoy it a lot better than I did Win32 and MFC. Win32 was kind of fun when I first used it because of the mystery of it; it felt like I was using what real programmers used, but eventually I discovered I could make the same gui program five times faster, and with very little pain, in .NET.

> Like I said, the only thing keeping me from jumping to C# is that my gui framework is not being maintained, because pretty much 99% are using the C++ one. [...] If C# offered a cross-platform api as good as Qt I'd jump, but until then I'm sticking with C++ no matter how much better C# gets.

qt4dotnet looks pretty solid.

> Yeah, that makes sense. If your programs are not for the Windows market, you can't just use whatever you want. That's not something that I can argue with. There is Mono, but I've never used it, so I don't know how it would compare.

Mono works fine and, in some cases, even better than the .Net runtime. As long as you avoid Windows-only technologies (WPF, DirectX), it is great.

[OpenTK: C# OpenGL 4.4, OpenGL ES 3.0 and OpenAL 1.1. Now with Linux/KMS support!]

### #84 Seaßourne  Members - Reputation: 120

Posted 03 February 2011 - 10:04 PM

> qt4dotnet looks pretty solid.

Doesn't seem to be up to date; it's 2 versions behind.

> Mono works fine and, in some cases, even better than the .Net runtime. As long as you avoid Windows-only technologies (WPF, DirectX), it is great.

Yeah, I have used Mono and it is quite nice. Though sadly their documentation is half completed and the IDE isn't all that great =\

### #85 Aardvajk  Crossbones+ - Reputation: 9208

Posted 05 February 2011 - 02:11 PM

Yeah, yeah, yeah: Microsoft, the evil empire, releasing software for free just so we all get infected, then using their market position to release a technology that allows for cross-platform development, and then creating MSIL purely to try to make life more difficult for programmers. Then they had the audacity to release all their pro tools for free, while only expecting people to pay for the team features, given that if you are in a team you probably already have funding. Come on. Kill these threads.

I'm personally very grateful that I can use VS for free, and if I really wanted to develop cross-platform then the open nature of MSIL would probably seem my best bet. (Just because I can't say Ubuntu doesn't make my opinion worthless.)

### #86 MrDaaark  Members - Reputation: 3555

Posted 09 February 2011 - 07:59 AM

I should make a video on youtube that starts with the quote "Trust Me, I can manage my own memory!", followed by a non-stop montage of memory leaks, blue screens, general protection faults, games that were released in a state where they could only ever crash to desktop, buffer overflow exploits (the entire C standard library needed to be deprecated and replaced with safer alternative functions to remedy this), air traffic control failures, road sign failures, etc, etc, to this music.

### #87 davepermen  Members - Reputation: 1042

Posted 11 March 2011 - 11:59 AM

c# here. because i CAN MANAGE MY OWN MEMORY, because i CAN write my own containers, because i can write my own file handlers for stuff like xml, because i can write my own database-like interface and stuff. but i don't WANT to. oh, and strings, too. i want to get my stuff going. and my stuff is an end product, not a library or tool for other programmers.
oh, and, because i know that i'm not perfect, i know i would make errors with all of the above listed stuff. and those might be hard to trace. memory leaks have been one of the biggest issues in the computer industry for years: a source of bugs, crashes, hacks, viruses, etc. knowing that i will not be able to 100% prevent them, i prefer to have a compiler/language setup that i know will.

If that's not the help you're after then you're going to have to explain the problem better than what you have. -- joanusdmentia

davepermen.net | My Music on Bandcamp and on Soundcloud

### #88 Sudi  Members - Reputation: 761

Posted 13 March 2011 - 03:01 PM

> Almost everything you can do in C++, you can do in C, but it just requires a bit more effort.

Yeah, that's the point: most of the stuff is actually easier to achieve with C++. And I really don't get why a C program is easier to maintain.

### #89 mhagain  Crossbones+ - Reputation: 9953

Posted 13 March 2011 - 03:02 PM

I've used both C and C++, but prefer to restrict myself to the C-like subset of C++ with maybe a few C++-specific features here and there. I don't subscribe to the thinking that "if you're not using STL/new and delete/etc you're not using C++", and I've seen enough C++ code that focuses more on designing pretty class structures than on actually getting the job done. One reason I can think of to prefer C is that it compiles faster.

It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.

### #90 ChaosEngine  Crossbones+ - Reputation: 3323

Posted 14 March 2011 - 09:17 PM

> I've used both C and C++, but prefer to restrict myself to the C-like subset of C++ with maybe a few C++-specific features here and there. I don't subscribe to the thinking that "if you're not using STL/new and delete/etc you're not using C++", and I've seen enough C++ code that focuses more on designing pretty class structures than on actually getting the job done. One reason I can think of to prefer C is that it compiles faster.

So what do you use for collections? How do you manage memory? If you're focused on "getting the job done", why not use a proven set of tested tools? While C++ compile times are a pain, I would argue that they are less painful than the alternative. That said, I wouldn't use C or C++ if I had the choice.

if you think programming is like sex, you probably haven't done much of either. -- capn_midnight

### #91 Hodgman  Moderators - Reputation: 42232

Posted 14 March 2011 - 10:11 PM

> So what do you use for collections? How do you manage memory? If you're focused on "getting the job done", why not use a proven set of tested tools?

I'm trying to "get a job done" at the moment, which involves the very strict requirement that no globals be used. C++'s new/delete (and C's malloc/free) unfortunately are (usually) globals -- they provide access to a global memory allocator object. This makes them unusable to me; it also makes large parts of the STL unusable. The easiest way to get the job done is to simply get on with it and KISS (if passing strings around as char* turns out to be the simplest option in a certain situation, then so be it). Massaging the STL to work in this environment would be possible, but it's a whole bag of complexity I don't need. I am using C++, just very differently than it "should" be used.
The result of this may be "ugly" code, but it's often very shallow in depth (easy to read, explore and understand), doesn't require chasing abstractions, is more easily documented, and has extremely minimal dependencies on other parts of the code base. It's still well engineered, it's just different. As usual, there are no absolute answers, just circumstances.

> I should make a video on youtube that starts with the quote "Trust Me, I can manage my own memory!", followed by a non-stop montage of memory leaks, blue screens, general protection faults, games that were released in a state where they could only ever crash to desktop, buffer overflow exploits, air traffic control failures, road sign failures, etc, etc.

To provide a totally balanced counter-point, you should also make a video on youtube that starts with the quote "Trust Me, I can manage your memory!", followed by the Java loading icon animating for 10 minutes.

### #92 agottem  Members - Reputation: 87

Posted 14 March 2011 - 11:21 PM

> So what do you use for collections? How do you manage memory? If you're focused on "getting the job done", why not use a proven set of tested tools? While C++ compile times are a pain, I would argue that they are less painful than the alternative. That said, I wouldn't use C or C++ if I had the choice.

If you find that your biggest time sink is memory management and collections, you have bigger problems.
Besides, in my experience, the STL has its own issues.

1. Say you are trying to optimize some critical execution paths. You find some areas where an std::vector is created, populated, and passed around to a few methods for manipulation before being handed off to a library. For whatever reason, the vector would be more efficiently allocated on the stack, as certain characteristics are known. Great, you just need to write a custom allocator and you're all set, right? Not quite... unfortunately, the newly defined std::vector type that makes use of the custom allocator will no longer be compatible with the type required by the library call.

2. STL collections are not thread safe. Usually this is acceptable, as in the worst-case scenario collections can be made thread safe via some sort of synchronization. However, this may not be optimal. Consider a linked list. In the optimal case, the mutex only needs to be acquired for a very brief moment -- when assigning the next/prev/head pointers. The allocation and initialization of the node can be done outside of the lock. Using an std::list, this isn't possible. The mutex needs to be wrapped around the call to list.push_back (or whatever method you use), which means you are locking needlessly over the duration of the memory allocation and initialization of the type. In some cases this is acceptable; in others, it's not.

3. The rules for iterators are a pain. For example, when are iterators invalidated for a linked list? What about a deque? A map? A vector? It's very difficult to keep track of, and I have encountered way too many bugs as a result. This also makes it very tricky to just 'change' the container type.

4. Using the STL adds lots and lots of line noise to your source code. You either need to riddle your code with typedefs, or live with never-ending std::map<std::string, std::pair<std::vector<foo>, int> > lines. Yes, I hear C++0x introduces 'auto' to 'fix' that, but that's just a whole other can of worms. C++0x has its own problems. And still, 'auto' won't save you from the 20-line template compiler errors.

5. While not necessarily a flaw in the STL itself, I've found that using the STL encourages a 'lazy' atmosphere. I can't count the number of times I've seen code such as:

```cpp
std::map<> m;
if(m.find(foo) != m.end())
    use(m[foo]);
```

(two searches of the map where one would do), or looping and doing collection.push_back() without reserving the length (again, sometimes acceptable, sometimes not).

6. The STL (or maybe this belongs more in the C++ critiques) has spawned an entire cult of syntax optimizers. With C++, very subtle syntax style can negatively affect performance. Take this rather basic example:

```cpp
for(std::vector<>::iterator it= v.begin(); it != v.end(); it++){} // Where's the performance problem?
```

C++ *is* a terrible language, and the STL doesn't save it. C++ requires loads and loads of context just to fully understand simple snippets of code. You need to examine methods, determine if those methods are being overloaded, make sure those '+' signs aren't overloaded, decide how member variables are shared between class hierarchies (multiple virtual inheritance, anyone?), figure out if any exceptions might be thrown that yank you out of the code path you think is being executed, etc, etc.

### #93 Hodgman  Moderators - Reputation: 42232

Posted 15 March 2011 - 12:03 AM

> in my experience, the STL has its own issues.

Those are some pretty weak issues.

1. No, you refactor the consumers to take begin/end pointers instead of a container. KISS.

2. A thread-safe container would be an even worse choice for a general-purpose container. If you're using a general-purpose container as an inter-thread communication tool, then you obviously don't care about performance.

3. How often do you want to just change a map to a list to a vector without expecting to have to refactor your design?
That's like renaming a PNG to WAV and expecting to hear a picture. If I have a pointer to an item in a custom array or a custom linked list, what are the invalidation rules for that pointer? This is the exact same problem (iterators/pointers into any container have the invalidation problem)! If every container API has this problem, then at least the STL is kind enough to make the rules very clear-cut and easy to remember.

4. The occasional one-line typedef isn't "lots and lots" of noise in my book. That's a pretty subjective, style-based call.

5. You've seen retarded STL usage, therefore the STL causes retardation? I've seen bad programmers write retarded code with pretty much every API I've had to use.

6. Again, this is something that separates a junior C++ programmer from an experienced one. Not an STL issue. Plus an optimising compiler can fix this mistake in simple cases.

> C++ *is* a terrible language, and the STL doesn't save it. C++ requires loads and loads of context just to fully understand simple snippets of code. [...]

That's why every C++ project usually decides on a sub-set of the language that they're going to use. I've never seen a design incorporating multiple virtual inheritance ever actually get chosen over something simpler. A lot of projects outright ban virtual and multiple inheritance. Also, I've never seen exceptions used in a game project. Ever. You can simply choose not to take that level of complexity on board, and it's actually recommended practice (by the console compiler authors) that you do not enable exception handling.

As for the context argument -- you can make the same case against any language. If I have a C function called add, do I have to go and make sure that it actually adds things? Overloading operators to do non-obvious things is simply bad code -- I can write bad code in any language (e.g. a Java class FileOpener that actually deletes files). This isn't an anti-C++ argument; it's an anti-bad-code argument.

C++ is just like democracy. It's the worst systems engineering language, except all the others that have been tried. Also, yes, C/C++ are systems languages. If you've got junior programmers breaking your game with their bad code, then you should've seen it coming and written your gameplay in Lua/etc while leaving C/C++ at the bottom of the system stack.

### #94 agottem  Members - Reputation: 87

Posted 15 March 2011 - 09:43 AM

> 1. No, you refactor the consumers to take begin/end pointers instead of a container. KISS.

You've assumed you have control over the library, and you *can* just refactor. If you're the sole developer, this may be a reasonable solution. Often it's not.

> 2. A thread-safe container would be an even worse choice for a general-purpose container. If you're using a general-purpose container as an inter-thread communication tool, then you obviously don't care about performance.

I never claimed this wasn't the case. I was interested in the cases where you do need a thread-safe container. If you've ever dealt with bottlenecks due to contention, you'd understand that there are times when holding the lock while initializing or allocating memory is completely unacceptable. You'd end up needing to roll your own container anyway just to fix the contention problems.

> 3. How often do you want to just change a map to a list to a vector without expecting to have to refactor your design? That's like renaming a PNG to WAV and expecting to hear a picture. If I have a pointer to an item in a custom array or a custom linked list, what are the invalidation rules for that pointer?
> This is the exact same problem (iterators/pointers into any container have the invalidation problem)! If every container API has this problem, then at least the STL is kind enough to make the rules very clear-cut and easy to remember.

I agree it's not often you just want to change your container type. I only brought it up because I've heard people claim it to be an advantage of C++. My intention with this point was to highlight that the rules for iterators and containers are vast. It's very easy to make assumptions and make mistakes. Just look at the rules for iterators with regard to std::vector: insertion will invalidate all iterators, unless those iterators are prior to the point of insertion. However! If the std::vector::capacity() isn't sufficient, they are all invalidated. Then you move on to std::deque, and discover there's an entirely new set of rules for iterators. It's a lot of rules to remember, and very easy to make mistakes. I've seen it, even from good programmers.

> 4. The occasional one-line typedef isn't "lots and lots" of noise in my book. That's a pretty subjective, style-based call.

It doesn't sound like you've spent a lot of time working with large code bases. Have you ever had the pleasure of chasing down a trail of typedefs just to determine the type used to define a map? Sure, with C you might have to chase down a few types once in a while, but with C++... the chase is much longer.

> 5. You've seen retarded STL usage, therefore the STL causes retardation? I've seen bad programmers write retarded code with pretty much every API I've had to use.

I thought I was clear when I said: "While not necessarily a flaw in the STL itself". My primary claim was that the STL encourages a 'lazy' atmosphere. Again, it's not just bad programmers who do it. When modifying code, it's very easy to jump into a file and find an already existing conditional that reads:

```cpp
if(m.find(foo) != m.end()) { /* big chunk of code */ }
```

A developer tracks down a bug in the big chunk of code, and makes a change similar to what I described. Obviously you can point out the error, but often it goes unnoticed simply by accident. Again, have you worked in large teams on large projects with unfamiliar code bases before?

> 6. Again, this is something that separates a junior C++ programmer from an experienced one. Not an STL issue. Plus an optimising compiler can fix this mistake in simple cases.

A compiler can absolutely NOT fix that mistake. Not even the best compiler can say 'i++ should be ++i'. These are two different function calls. So now, with C++, you are stuck paying attention to insane details that no developer should have to care about.

> That's why every C++ project usually decides on a sub-set of the language that they're going to use. [...] You can simply choose not to take that level of complexity on board, and it's actually recommended practice (by the console compiler authors) that you do not enable exception handling.

Yes, you can agree on a subset... but as the project continues, team members quit and new people come on board, and it becomes harder and harder to enforce the specific subset you agreed on. And I think you missed my point. My point wasn't that virtual multiple inheritance should be avoided; it's that you need extra context to determine that multiple virtual inheritance wasn't used. With C, you can often take a look at a single snippet of code (say a patch file) and give a pretty good estimate of what it's doing. You can easily find the function calls, and can likely give a good estimate of its runtime. With C++, this is absolutely not the case. Often, significantly more context is needed (header files to determine what's virtual, what's overloaded, where you're inheriting from, etc). Actually, I believe this is accepted common knowledge.

> As for the context argument -- you can make the same case against any language. If I have a C function called add, do I have to go and make sure that it actually adds things? Overloading operators to do non-obvious things is simply bad code -- I can write bad code in any language (e.g. a Java class FileOpener that actually deletes files). This isn't an anti-C++ argument; it's an anti-bad-code argument.

Yes, I agree: any language can have problems. But I think you're crazy to deny that C++ tends to require more context than C. I feel there's already a good thread on this subject, involving Linus Torvalds and a few other prominent developers who can discuss the finer points better than me: http://www.realworld...110549&roomid=2

One simple example Linus gives: when you communicate in fragments (think "patches"), it's always better to see "sctp_connect()" than to see just "connect()" where some unseen context is what makes the compiler know that it is in the sctp module. C++ is hard for a compiler to parse, and even harder for a human being.

> C++ is just like democracy. It's the worst systems engineering language, except all the others that have been tried. Also, yes, C/C++ are systems languages.

Funny, I've worked on device drivers for embedded devices. Many device drivers, actually. They were all in C. No one had any desire to transition to C++. C was wonderful because it is absolutely straightforward and simple. The simplicity of C is a strength of the language, not a weakness.

### #95 rip-off  Moderators - Reputation: 9922

Posted 15 March 2011 - 04:53 PM

> A compiler can absolutely NOT fix that mistake.
> Not even the best compiler can say 'i++ should be ++i'. These are two different function calls.

They don't need to change the function call. These are templates; the compiler will probably inline either function and figure out what to do from there. Sample program:

```cpp
#include <cstdio>    // (added for printf)
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v;
    int i;
    while(std::cin >> i)
    {
        v.push_back(i);
    }

    // Print post
    for(std::vector<int>::iterator it = v.begin(); it != v.end(); it++)
    {
        printf("%d\n", *it);
    }

    // Print pre
    for(std::vector<int>::iterator it = v.begin(); it != v.end(); ++it)
    {
        printf("%d\n", *it);
    }
}
```

Let's examine the compiler asm output (MSVC 2010, Release mode):

```asm
; 13   : 	// Print post
; 14   : 	for(std::vector<int>::iterator it = v.begin(); it != v.end(); it++) {

	mov	ebx, edi
	cmp	edi, esi
	je	SHORT $LN4@main
$LL148@main:

; 15   : 		printf("%d\n", *it);

	mov	ecx, DWORD PTR [ebx]
	push	ecx
	push	OFFSET $SG-31
	call	_printf
	add	ebx, 4
	add	esp, 8
	cmp	ebx, esi
	jne	SHORT $LL148@main
$LN4@main:

; 16   : 	}
; 17   :
; 18   : 	// Print pre
; 19   : 	for(std::vector<int>::iterator it= v.begin(); it != v.end(); ++it) {

	mov	ebx, edi
	cmp	edi, esi
	je	SHORT $LN1@main
	npad	3
$LL178@main:

; 20   : 		printf("%d\n", *it);

	mov	edx, DWORD PTR [ebx]
	push	edx
	push	OFFSET $SG-32
	call	_printf
	add	ebx, 4
	add	esp, 8
	cmp	ebx, esi
	jne	SHORT $LL178@main
$LN1@main:

; 21   : 	}
```

I'm not spotting this performance problem you're talking about...

### #96 Hodgman  Moderators - Reputation: 42232

Posted 15 March 2011 - 05:50 PM

Note that I'm not attacking C here - I was just pointing out that your issues with the STL were all pretty weak. Only the first one was actually a problem, and it was demonstrated through a spurious example.

> You've assumed you have control over the library, and you *can* just refactor. If you're the sole developer, this may be a reasonable solution. Often it's not.

Sorry, I misread your statement as if you were already refactoring the library to take a new type of vector.
That containers shouldn't be passed around like this is arguably a valid issue with the STL, but because of it, a library that requires a specific kind of container to be passed in is probably badly designed (note how STL algorithms all operate on iterators/pointers, not containers). This means your example would never happen in the real world, as the library wouldn't take a vector as an argument.

> If you've ever dealt with bottlenecks due to contention, you'd understand that there are times when holding the lock while initializing / allocating memory is completely unacceptable. You'd end up needing to roll-your-own container anyways just to fix the contention problems.

Yeah, I have dealt with this problem, and the solution was to use a wait-free structure instead of one that requires mutual exclusion at all. If you're using a mutex in performance-critical code, you're doing it wrong.

> It doesn't sound like you've spent a lot of time working with large code bases. Have you ever had the pleasure of chasing down a trail of typedefs just to determine the type used to define a map? Sure, with C you might have to chase down a few types once in a while, but with C++... the chase is much longer.

I've worked on engines like Unreal and Gamebryo, if they count as "large code bases", and no I don't find the over-abuse of typedefs to be an everyday problem caused by C++.
Sounds like you've never worked on a large, well-run C++ project using decent tools?

A developer tracks down a bug in the big chunk of code, and makes a change similar to what I described. Obviously you can point out the error, but often it goes unnoticed simply by accident. Again, have you worked in large teams on large projects with unfamiliar code bases before?

Yes, and bad code like this occurred in C projects, C++ projects, Lua projects, etc... This isn't specific to C++, or to the STL.

A compiler can absolutely NOT fix that mistake.

I was going to post an assembly dump to demonstrate this, but I've been beaten to it. Sounds like you need to upgrade your compiler.

My point wasn't that virtual multiple inheritance should be avoided, it's that you need extra context to determine that virtual multiple inheritance wasn't used.

Well as I said before, on a specific project with specific guidelines, you can assume it wasn't used, because it would've failed a code review if it was used.

With C, you can often take a look at a single snippet of code (say a patch file) and give a pretty good estimate of what it's doing. You can easily find the function calls, and can likely give a good estimate of its runtime. With C++, this is absolutely not the case.

Get a modern IDE, then it is the case.

I think you're crazy to deny that C++ tends to require more context than C.

I'm not denying it, I'm denying that it poses any kind of problem or hindrance (unless you're coding in the 80's).

C++ is just like democracy. It's the worst systems engineering language, except for all the others that have been tried.

Funny, I've worked on device drivers for embedded devices. Many device drivers, actually. They were all in C. No one had any desire to transition to C++. C was wonderful because it is absolutely straightforward and simple. The simplicity of C is a strength of the language, not a weakness.

Your C code would likely be accepted by a C++ compiler though ;D So it's a C++ project with a restrictive coding guideline (yes, j/k).
Again I'm not attacking C - see my post above where I'm advocating the use of char* instead of std::string etc...(in some circumstances)
Yes, simplicity can be a very good thing.

### #97agottem  Members   -  Reputation: 87

Posted 15 March 2011 - 07:25 PM

They don't need to change the function call. These are templates; the compiler will probably inline either function and figure out what to do from there.

I was going to post an assembly dump to demonstrate this, but I've been beaten to it. Sounds like you need to upgrade your compiler.

I think you're both confused. A compiler is never allowed to change 'i++' to '++i'. It may be that when the definitions of the post-increment/pre-increment operators are available, the compiler can more aggressively optimize the functions through inlining and unused-variable elimination. However, this doesn't change the fact that in one case, the compiler is optimizing the call to ++i, and in the other case, the compiler is optimizing the call to i++. In the case of a vector, the methods for either implementation are simple enough that the compiler can optimize them both to the point of identical assembly.

As the iterators become more complicated for the compiler to analyze, or if the definition is not available to the compiler, you may see less optimal code due to using post/pre increment inappropriately.

My point remains: the compiler can never substitute ++i for i++. C++ forces you to pay attention to silly, subtle syntax.
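The distinction both posters are arguing about comes from how the two operators are defined on a class type. A minimal sketch (the `Counter` type here is hypothetical, not from either post) shows why they are genuinely different operations:

```cpp
// Hypothetical class type illustrating why ++i and i++ differ.
struct Counter {
    int value = 0;

    // Pre-increment: modify in place, return a reference. No copy.
    Counter& operator++() {
        ++value;
        return *this;
    }

    // Post-increment: must copy the old state so it can be returned,
    // then increment. That copy is what the optimizer tries to elide
    // when the result goes unused.
    Counter operator++(int) {
        Counter old = *this;
        ++value;
        return old;
    }
};
```

When the result is discarded, a compiler that can see both definitions may generate identical code for either call, but the source-level semantics remain distinct.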

### #98Hodgman  Moderators   -  Reputation: 42232

Posted 15 March 2011 - 08:44 PM

in one case, the compiler is optimizing the call to ++i, and in the other case, the compiler is optimizing the call to i++. In the case of a vector, the methods for either implementation are simple enough that the compiler can optimize them both to the point of identical assembly.

Exactly

My point remains, the compiler can never substitute ++i for i++. C++ forces you to pay attention to silly, subtle syntax.

But it just did! It took code that was requesting "i++" to be performed (increment and return previous), but the compiler decided that a simple increment (with no return) would work just as well. In other words "an optimising compiler can fix this mistake in simple cases".

It's not silly syntax - they're two completely different operations. If you were to implement STL's vectors/iterators in C, you'd get something like:
struct Iterator {...};
struct Vector {...};
Vector vec;
/* equivalent of ++it: increment in place, yield the new value */
for( Iterator it = Begin(vec), end = End(vec); !IteratorsEqual(it,end); IncrementIterator_ReturnNewValue(&it) )
{
}
/* equivalent of it++: copy the old value so it can be returned, then increment */
for( Iterator it = Begin(vec), end = End(vec); !IteratorsEqual(it,end); IncrementIterator_ReturnPreviousValue(&it) )
{
}
Of course in reality you'd just use a raw-pointer instead of some abstract iterator data type, in which case p++ or ++p will optimise to the same assembly in both C and C++.... Often in C++ you'd use a pointer instead of an actual iterator class too.

### #99zerothrillz  Members   -  Reputation: 152

Posted 15 March 2011 - 09:41 PM

I should make a video on youtube that starts with the quote "Trust me, I can manage my own memory!", followed by a non-stop montage of memory leaks, blue screens, general protection faults, games that were released in a state where they could only ever crash to desktop, buffer overflow exploits (the entire C standard library needed to be deprecated and replaced with safer alternative functions to remedy this), air traffic control failures, road sign failures, etc, etc, to this music.

I think that should be the theme song to this thread.

Though, not because I think it lacks valuable information or valid arguments.

### #100agottem  Members   -  Reputation: 87

Posted 15 March 2011 - 10:25 PM

But it just did! It took code that was requesting "i++" to be performed (increment and return previous), but the compiler decided that a simple increment (with no return) would work just as well.

Yes, the compiler performed an i++, not a ++i. This was precisely my point; you seem to have missed it.

In other words "an optimising compiler can fix this mistake in simple cases".

I don't think you've been following the conversation. I already said this much.

What you seem to have missed is that there are cases where even an optimizing compiler cannot turn i++ into the equivalent of ++i. For example, if the definition of i++ isn't visible (e.g., not defined in a header file), the compiler won't be able to optimize out the unused return value.

Alternatively, if the definition of i++ *is* complex, the compiler may be unable to inline appropriately.

Alternatively again, if the compiler is set up to optimize for size, and not for speed, it may turn those inline methods into function calls. In that case, the compiler won't inline and optimize the call to i++, and again, you'll be paying the overhead of using a post-increment instead of a pre-increment.

Of course in reality you'd just use a raw-pointer instead of some abstract iterator data type, in which case p++ or ++p will optimise to the same assembly

Yes, in C, p++ and ++p turn into the same assembly because you are working with a primitive type. You are guaranteed the compiler will always have 'the function definition' for ++p or p++ when working with a primitive type. Iterators and overloaded increment operators are hardly primitive types, and you certainly can't guarantee the compiler will have access to the definition in order to optimize it.

Again, a C++ developer is forced to pay attention to small syntax nonsense that a C developer can ignore.
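The raw-pointer case both sides agree on can be checked directly: for a primitive type, the compiler always has the "definition" of the increment, so the two spellings behave identically in a loop header. (Function names here are hypothetical, written as C-style code that is also valid C++.)

```cpp
// For a primitive type such as a raw pointer, p++ and ++p in a loop
// header produce the same result, and the compiler can always see the
// increment, so there is no hidden cost in either spelling.
int sum_post(const int* p, const int* end) {
    int total = 0;
    for (; p != end; p++)   /* post-increment, result discarded */
        total += *p;
    return total;
}

int sum_pre(const int* p, const int* end) {
    int total = 0;
    for (; p != end; ++p)   /* pre-increment, identical behavior */
        total += *p;
    return total;
}
```

The disagreement in the thread is only about class-type iterators, where the post-increment's unused return value may survive if the compiler can't inline the operator.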
