Improve on Java or prepare for C++?

7 comments, last by EmeryBerger 10 years, 12 months ago

I'll cut to the chase. It's our summer vacation, and in 2 months our school will start; one of the subjects is C++, which I am eager to learn. I also recently finished a 10-day summer program that teaches Java.

The main question is: should I keep developing my skills in Java, or start studying C++?

I already have some programming experience: I learned HTML, CSS and JavaScript, and I recently made a game (Pong) with ActionScript (I didn't really focus on ActionScript itself but more on the game).

Another issue is that I want to start studying game development, so I don't know which language, Java or C++, will be more suitable for me. I know it will take several years to make a good game, but I will commit myself to studying programming.

I really want to make a good decision here, guys. Thanks in advance!


Learn both; sometimes you learn a concept in one language that transfers easily to another. In the end, what matters is the ability to create efficient algorithms quickly.

For making your own game, the language to use depends. If you're making an indie game yourself, it may be worth using a managed language such as Java (or C#); otherwise, if you're after performance, C++ is usually the answer.

Java/C# are usually used for productivity: you don't have to worry about things like memory management in C#/Java.

C++ is usually used when performance is more important.

Thank you for the advice

I had doubts about doing both because it might overwhelm me and slow the learning progress in both.

But since you put it that way, maybe studying both will be more fun!

What do you mean by "performance" exactly?

Managed languages like C#/Java have to convert MSIL/Java bytecode (which is what the C#/Java compilers create) into machine code at runtime. This can slow down the initial startup of the application (using something like NGEN can overcome this, though). Not only that, but both languages use a garbage collector to pick up and delete unused objects as well, which also uses processing time.

C++ does not do this; it is always compiled to machine code ahead of time, and no garbage collector exists in C++. C++ is just lower-level in general compared to C#/Java and lets you control smaller details for specific optimizations.
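
To make that concrete, here's a minimal C++ sketch (the Texture class is hypothetical, purely for illustration) of the deterministic, programmer-controlled cleanup you get without a garbage collector:

#include <iostream>
#include <memory>

// Hypothetical resource type, just for illustration.
struct Texture {
    Texture()  { std::cout << "Texture loaded\n"; }
    ~Texture() { std::cout << "Texture freed\n"; }   // runs at a predictable point
};

int main() {
    {
        // RAII: the unique_ptr frees the Texture the moment this scope ends,
        // not "whenever a collector gets around to it".
        std::unique_ptr<Texture> tex = std::make_unique<Texture>();
        // ... use tex ...
    }   // "Texture freed" prints here, deterministically

    // You can also manage memory entirely by hand if you need finer control:
    Texture* raw = new Texture();
    delete raw;   // the programmer decides exactly when this happens
    return 0;
}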

You don't have to learn both at the SAME time; if you think you're doing well with Java then keep using it. I'm just saying there's nothing wrong with learning both :)

C++ is usually used when performance is more important.

...
Not only that, but both languages use a garbage collector to pick up and delete unused objects as well, which also uses processing time.

When I read about why C++ is faster than some higher-level language X, the answer I always hear is garbage collectors. I don't know why this is such a common misconception. While I do agree with you in general, I want to look a bit closer at the details:

Garbage collectors do not make a program slower; quite the contrary. Whether you use a GC or not, you still have the same amount of memory to clean up. The biggest difference is that C++ cleans it up bit by bit, every time an object isn't referenced anymore, while a GC does it in one fell swoop.

Garbage collection has its downsides. Like you said, when it kicks in it tends to stop the whole program for a bit until it's finished, which isn't very nice. It also needs more memory: the Java GC can throw an OutOfMemoryError while a third of its heap is technically free.

Where a GC is better, however, is throughput: a program tends to spend less time cleaning up its memory with a GC than with reference counting. It finds cyclic dependencies on its own, throughput tends to scale with the amount of memory you give it, and object allocation is faster than in C++. A modern garbage collector shines when a program has many objects that become garbage quickly and a handful of objects that stay alive for the whole runtime of the program.
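
To illustrate the cycle point with a minimal (hypothetical) C++ sketch: with reference counting, two objects that point at each other are never freed, whereas a tracing GC like Java's would reclaim them once nothing else can reach them.

#include <iostream>
#include <memory>

struct Node {
    std::shared_ptr<Node> other;                  // strong reference
    ~Node() { std::cout << "Node destroyed\n"; }
};

int main() {
    {
        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->other = b;
        b->other = a;                             // cycle: a and b keep each other alive
    }
    // Nothing was printed above: the reference counts never reach zero, so both
    // Nodes leak. A tracing collector (Java/C#) would reclaim them, since they are
    // unreachable from any root. In C++ you break the cycle yourself, e.g. by
    // making one of the references a std::weak_ptr.
    std::cout << "end of main\n";
    return 0;
}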

So no, a garbage collector does not negatively impact performance in general. It all comes down to the task at hand, and that's where I agree with you again. I wouldn't write a high-performance math library in Java, nor would I implement a high-performance streaming service in C++.

Edit: also, with today's runtime optimization in the JVM/CLR, you don't necessarily get a performance advantage just by switching to C++. You can be quite a bit faster, but you have to know what can be optimized and how.

Setting fire to these damn cows one entry at a time!

I don't recommend sticking with Java. For basically anything you might choose it for, there is a better language choice. All programming experience helps, so it's great you have put 10 days in Java, but in the big picture ("what are you going to work with for the next few years?") 10 days is nothing. Switching to a more productive language would pay off almost immediately. Depending on the stuff you want to do, some reasonable choices would be Python, C# or Scala.

The other choice would be to get a head start in C++. I usually recommend strongly against C or C++ for beginners, but since you are going on a C++ course anyway, this is a special case. Don't worry about things like "am I wasting time on learning stuff now that I'd learn on the course" because no one learns C++ in a semester anyway. The more you learn now, the more you can absorb from the course.

No matter what you do, don't split your time between two languages (at least before the C++ course begins). Stick with one. With any language, you'll be initially spending time learning the quirks of the language and the tools. This stuff is superficial. You could "learn" 20 languages and still not be much of a programmer. You develop deep, fundamental programming skill by working on larger projects and trying to actually get something done.

For the record, I like Java and its whole toolchain (from the very good (free!) IDEs around to the JVM itself).

But it seems that you haven't experimented with a non-managed language yet. You should. It helps a lot to understand the "automagical" stuff the GC/VM does in all managed languages (you can better spot different kinds of casts, boxing/unboxing, what is a reference and what is a value, etc.), and it also makes you more conscious about how you handle your memory (which is ultimately good no matter what language you code in).

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

Thank you for the advice
I had doubts about doing both because it might overwhelm me and slow the learning progress in both.
But since you put it that way, maybe studying both will be more fun!

What do you mean by "performance" exactly?

Languages don't really affect performance by themselves (with some exceptions); their implementations (compilers/VMs) do.

Lower-level languages give you more control over exactly how things are done, so more of the responsibility for writing efficient/secure/whatever code moves from the compiler/VM to the programmer. In cases where the compiler is unable to do a good job, a good programmer can get better performance by using a lower-level language.

Compilers are improving rapidly, though, and hardware is becoming more complex. When I started out we wrote all performance-critical routines in x86 assembly, C++ was considered a slow high-level language, and Java didn't exist.

Today C++ is considered low-level; almost no one uses assembly directly, since it only confuses the compiler, and it is becoming really difficult to do a better job than the compiler anyway.

JIT compilers are already outperforming ahead-of-time compilers in some cases (Java on Oracle's server JVM outperforms both MSVC and GCC, the two most popular C++ compilers on the PC, if you make heavy use of dynamic dispatch, for example). The performance differences between ahead-of-time and JIT compilers are very small today, and on the PC, where you do not know what hardware the user will have, it is quite likely that the ahead-of-time compilers used by, for example, C++ will result in worse performance.

(Java has a few performance problems on the PC (x86) due to accuracy requirements in the language that cannot be solved unless either the language or the platform changes, but C# doesn't have any such problems.)

Today the biggest performance disadvantage you get with C# or Java is that you cannot write SIMD (SSE, etc.) code yourself (Mono has limited SIMD support that is a bit of a work in progress, but .NET has nothing yet), and the VMs still do a fairly poor job of generating it for you. (Strictly speaking you cannot write SIMD code in standard C++ either, but since C++ allows compilers to add their own extensions to the language you can still do it; although such code is not portable between compilers, it gets the job done.)
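
As a rough sketch of what that compiler-extension SIMD looks like in C++ (the function name and the alignment assumptions are mine, just for illustration), the SSE intrinsics that MSVC, GCC and Clang all expose let you process four floats per instruction:

#include <xmmintrin.h>   // SSE intrinsics: a compiler extension, not part of standard C++

// Adds two float arrays four elements at a time.
// Assumes n is a multiple of 4 and the pointers are 16-byte aligned.
void add_arrays(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_load_ps(a + i);    // load 4 floats from a
        __m128 vb = _mm_load_ps(b + i);    // load 4 floats from b
        __m128 vr = _mm_add_ps(va, vb);    // 4 additions in a single instruction
        _mm_store_ps(out + i, vr);         // store 4 results
    }
}
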
[size="1"]I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!

The key difference between Java and C++ is not the speed of the generated code. It really is garbage collection. Garbage collection is a space-time tradeoff.

TL;DR - Garbage collectors need lots of RAM to run fast, but given enough RAM, they are as fast as malloc/free (BUT NOT FASTER).

The more space you are willing to devote to the garbage-collected heap, the faster it will run (since it will collect garbage less frequently). Given about 5x as much memory as you'd need with a C++ program, you will get about the same performance. Of course, if you need that memory for other reasons (e.g., caching data from disk or the network, or large-scale in-memory computations), then that extra space comes at an extra cost. And if it triggers paging, then obviously that is disastrous. Here's the paper and an excerpt from a Lambda the Ultimate discussion of it; I wrote the paper with my student Matthew Hertz for OOPSLA 2005 (paper and presentation attached).

Quantifying the performance of garbage collection vs. explicit memory management

Abstract:

Garbage collection yields numerous software engineering benefits, but its quantitative impact on performance remains elusive. One can compare the cost of conservative garbage collection to explicit memory management in C/C++ programs by linking in an appropriate collector. This kind of direct comparison is not possible for languages designed for garbage collection (e.g., Java), because programs in these languages naturally do not contain calls to free. Thus, the actual gap between the time and space performance of explicit memory management and precise, copying garbage collection remains unknown.

We introduce a novel experimental methodology that lets us quantify the performance of precise garbage collection versus explicit memory management. Our system allows us to treat unaltered Java programs as if they used explicit memory management by relying on oracles to insert calls to free. These oracles are generated from profile information gathered in earlier application runs. By executing inside an architecturally-detailed simulator, this "oracular" memory manager eliminates the effects of consulting an oracle while measuring the costs of calling malloc and free. We evaluate two different oracles: a liveness-based oracle that aggressively frees objects immediately after their last use, and a reachability-based oracle that conservatively frees objects just after they are last reachable. These oracles span the range of possible placement of explicit deallocation calls. We compare explicit memory management to both copying and non-copying garbage collectors across a range of benchmarks using the oracular memory manager, and present real (non-simulated) runs that lend further validity to our results. These results quantify the time-space tradeoff of garbage collection: with five times as much memory, an Appel-style generational collector with a non-copying mature space matches the performance of reachability-based explicit memory management. With only three times as much memory, the collector runs on average 17% slower than explicit memory management. However, with only twice as much memory, garbage collection degrades performance by nearly 70%. When physical memory is scarce, paging causes garbage collection to run an order of magnitude slower than explicit memory management.

....

Quantifying the Performance of Garbage Collection vs. Explicit Memory Management, by Matthew Hertz and Emery D. Berger:

Overall, generational collectors can add up to 50% space overhead and 5-10% runtime overheads if we ignore virtual memory. Very reasonable given the software engineering benefits. However, factoring in virtual memory with its attendant faulting paints a very different picture in section 5.2:

Garbage collection yields numerous software engineering benefits, but its quantitative impact on performance remains elusive. One can measure the cost of conservative garbage collection relative to explicit memory management in C/C++ programs by linking in an appropriate collector. This kind of direct comparison is not possible for languages designed for garbage collection (e.g., Java), because programs in these languages naturally do not contain calls to free. Thus, the actual gap between the time and space performance of explicit memory management and precise, copying garbage collection remains unknown.

We take the first steps towards quantifying the performance of precise garbage collection versus explicit memory management. We present a novel experimental methodology that lets us treat unaltered Java programs as if they used explicit memory management. Our system generates exact object reachability information by processing heap traces with the Merlin algorithm [34, 35]. It then re-executes the program, invoking free on objects just before they become unreachable. Since this point is the latest that a programmer could explicitly free objects, our approach conservatively approximates explicit memory management. By executing inside an architecturally-detailed simulator, this “oracular” memory manager eliminates the effects of trace processing while measuring the costs of calling malloc and free.

We compare explicit memory management to both copying and non-copying garbage collectors across a range of benchmarks, and include real (non-simulated) runs that validate our results. These results quantify the time-space tradeoff of garbage collection: with five times as much memory, an Appel-style generational garbage collector with a non-copying mature space matches the performance of explicit memory management. With only three times as much memory, it runs on average 17% slower than explicit memory management. However, with only twice as much memory, garbage collection degrades performance by nearly 70%. When physical memory is scarce, paging causes garbage collection to run an order of magnitude slower than explicit memory management.

These graphs show that, for reasonable ranges of available memory (but not enough to hold the entire application), both explicit memory managers substantially outperform all of the garbage collectors. For instance, pseudoJBB running with 63MB of available memory and the Lea allocator completes in 25 seconds. With the same amount of available memory and using GenMS, it takes more than ten times longer to complete (255 seconds). We see similar trends across the benchmark suite. The most pronounced case is javac: at 36MB with the Lea allocator, total execution time is 14 seconds, while with GenMS, total execution time is 211 seconds, over a 15-fold increase.

The culprit here is garbage collection activity, which visits far more pages than the application itself [60].

