Go for video game programming?

18 comments, last by larsbutler 9 years, 11 months ago

AFAIK the problem isn't one of "garbage collected language" vs "non garbage collected language". The problem is that Go's particular implementation isn't the fastest around. There isn't a single garbage collection method that all GC'd languages use; they're all different and likely target different use cases.

For example: http://benchmarksgame.alioth.debian.org/

If you compare Go's performance with other garbage collected languages (C# on Mono, F# on Mono, Java, Scala, etc) in those benchmarks, you get pretty different results. Against the Mono based languages it's mostly 50/50, whereas against the JVM based languages it compares unfavorably in most cases.

So even among GC'd languages you can find wildly different results, just as you wouldn't expect identical results among compiled languages.

I'm not saying to take those numbers as the holy grail of programming language comparisons, but just to show that it isn't just a decision between GC and no GC.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator


Rust (talking about bad names, what were they thinking? :P) also seems to lack the commercial wow factor.. the slogans are mostly about safety here and there.

But that's exactly what developers *should* want.

Perhaps.. but the reality is that what programmers want is a language that doesn't get in the way of what they are doing.. and Go is sublime in this... it's actually funny to see the horrified reactions of language elitists and purists when confronted with the current success of Go :P

We'll see what happens with Rust.. I haven't done any serious coding with it yet and I am not planning to until it reaches v1.0 and features IDE support offering productivity aids like good IntelliSense (code completion), project navigation and refactoring.

We'll have to see how the rules and limitations on object ownership and all the other mechanisms to improve safety will impact day-to-day productivity and "flow".. at the moment, the "periodic table of pointer types" in Rust is not too promising :P

Stefano Casillo
TWITTER: [twitter]KunosStefano[/twitter]
AssettoCorsa - netKar PRO - Kunos Simulazioni

If you compare Go's performance with other garbage collected languages (C# on Mono, F# on Mono, Java, Scala, etc) in those benchmarks, you get pretty different results. Against the Mono based languages it's mostly 50/50, whereas against the JVM based languages it compares unfavorably in most cases.

True, but the good news is that Go is, relatively speaking, a very young language, and it's closing that gap quickly with every new release now that the focus is pretty much on performance. The JVM has had decades of manpower dedicated to achieving its current level of performance.

Stefano Casillo
TWITTER: [twitter]KunosStefano[/twitter]
AssettoCorsa - netKar PRO - Kunos Simulazioni

The problem with GC in games isn't really performance, particularly for the indie crowd who are using those types of languages. At least, not directly. GC is plenty fast. What's lacking is control. The GC based languages are horrified that the application might want to exercise any kind of control or hinting about what to GC and when. I don't want random allocations to block and trigger GC. I want to be able to dispatch a bunch of GPU calls, then tell the runtime "hey, you've got 3ms to do as much incremental GC as you can". I want to be able to control the balance of GC time and convergence, and force full GC when required. I'd love to be able to tag objects explicitly with lifetime hints.

And frankly, I need to be able to rely on a consistent implementation underneath with known characteristics. I get why the language designers don't want this, but it's a huge practical problem for games. We NEED to be able to exercise real-time guarantees.
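For context, here is a minimal sketch (assuming the standard Go toolchain, since Go is the language under discussion) of the control Go's runtime actually exposes: debug.SetGCPercent() to tune how eagerly collections trigger, and runtime.GC() to force a full, blocking collection. Notably absent is anything like the "you've got 3ms" budget described above.

```go
package main

import (
	"runtime"
	"runtime/debug"
)

// frame stands in for one iteration of a game loop:
// simulation update, then submitting GPU work.
func frame() {}

func main() {
	// Raise the heap-growth target so collections fire less often
	// (same effect as the GOGC environment variable).
	debug.SetGCPercent(400)

	for i := 0; i < 600; i++ {
		frame()

		// The closest thing to "collect while we're idle" is a full,
		// blocking collection; there is no way to ask the runtime for
		// "at most 3ms of incremental work".
		if i%60 == 0 {
			runtime.GC()
		}
	}
}
```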

SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.

I want to be able to dispatch a bunch of GPU calls, then tell the runtime "hey, you've got 3ms to do as much incremental GC as you can".

ya I am always amazed that this has never made it into any language as far as I know.. that's EXACTLY what performance critical applications need! it really looks like the perfect trade off between the convenience of GC and performance. Treat the GC exactly like any other task (hopefully doable concurrently) that an app has to accomplish.

But, hey, what do I know? I make games, not compilers; I think (and hope) that there is a solid technical reason for not doing it this way.
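To make the idea concrete, here is a sketch of what such an API could look like in a Go-style frame loop. collectWithBudget is purely hypothetical (no mainstream runtime exposed such a call at the time); it exists only so the loop can illustrate handing the collector whatever is left of the frame budget.

```go
package main

import "time"

// collectWithBudget is the hypothetical call being asked for above:
// "do as much incremental GC work as you can within this budget."
// The empty body is a stand-in; a real runtime would slice its
// mark/sweep work to fit the deadline.
func collectWithBudget(budget time.Duration) { _ = budget }

// frame stands in for simulation plus dispatching GPU calls.
func frame() {}

func main() {
	const target = 16 * time.Millisecond // ~60 fps frame budget

	for i := 0; i < 600; i++ {
		start := time.Now()
		frame()

		// Whatever time is left after game work becomes an explicit
		// GC budget, so collection is scheduled like any other task.
		if remaining := target - time.Since(start); remaining > 0 {
			collectWithBudget(remaining)
		}
	}
}
```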

Stefano Casillo
TWITTER: [twitter]KunosStefano[/twitter]
AssettoCorsa - netKar PRO - Kunos Simulazioni

What's lacking is control. The GC based languages are horrified that the application might want to exercise any kind of control or hinting about what to GC and when.

And it's not just about 'when' or 'how long', it's also about 'where'.

Cache coherency is a big issue, and if you can't control the layout of your memory allocations, you have your work cut out for you. Try allocating a contiguous array of 4x4 matrices in Java, and you'll rapidly start tearing your hair out...
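A short Go sketch (Go being the thread's topic) of the contrast being described: a slice of struct values is one contiguous block, whereas a Java Matrix4f[] is an array of references to separately allocated objects, which is why Java code often falls back to a hand-indexed flat float[].

```go
package main

import "fmt"

// Mat4 is a plain value type, so a []Mat4 is a single contiguous
// allocation of 64 bytes per matrix, which is what you want for
// cache-friendly iteration over bone/transform palettes.
type Mat4 [16]float32

func main() {
	bones := make([]Mat4, 256) // 256 matrices, laid out back to back

	// Walking the slice touches memory linearly.
	for i := range bones {
		bones[i][0], bones[i][5], bones[i][10], bones[i][15] = 1, 1, 1, 1 // identity
	}

	fmt.Printf("%d matrices in %d contiguous bytes\n", len(bones), len(bones)*16*4)
}
```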

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]


My core problem with D in this respect is that it *is* just a C++ clone with some (but far from all) rough edges rounded off.

Sure it makes template metaprogramming simpler than C++, but it doesn't really add/enable a whole lot else. No native support for high concurrency, no attempt at a new memory model, no attempt to simplify or improve the basic programming model, and so forth...

All variables are thread-local by default; shared and immutable variables are the building blocks for concurrency and functional programming; library support for concurrency in the form of std.concurrency and std.parallelism; ranges (supported both in the language and in the library); a growing subset of the language can be executed at compile time; compile-time and runtime introspection; template constraints; pluggable, component-based algorithms... I don't know when you last looked at it, but the D I know has a bit more to it than C++. Though it seems C++11 and C++14 have narrowed the gap, and I expect that to continue with future versions of C++.

The problem with GC in games isn't really performance, particularly for the indie crowd who are using those types of languages. At least, not directly. GC is plenty fast. What's lacking is control. The GC based languages are horrified that the application might want to exercise any kind of control or hinting about what to GC and when. I don't want random allocations to block and trigger GC. I want to be able to dispatch a bunch of GPU calls, then tell the runtime "hey, you've got 3ms to do as much incremental GC as you can". I want to be able to control the balance of GC time and convergence, and force full GC when required. I'd love to be able to tag objects explicitly with lifetime hints.

And frankly, I need to be able to rely on a consistent implementation underneath with known characteristics. I get why the language designers don't want this, but it's a huge practical problem for games. We NEED to be able to exercise real-time guarantees.

This is spot on. It also makes me wonder if there is any research into timesliced GCs. I'm not entirely sure the productivity cost makes GCs even worth using at that point, but it does address the major sore point of GCs in general, games or otherwise -- poor performance characteristics for real-time (hard or soft) applications.

This is spot on. It also makes me wonder if there is any research into timesliced GCs. I'm not entirely sure the productivity cost makes GCs even worth using at that point, but it does address the major sore point of GCs in general, games or otherwise -- poor performance characteristics for real-time (hard or soft) applications.

Real-time GCs are really not all that uncommon; they just aren't very widely used outside of fields requiring hard real-time performance.

For example, see this article by IBM re Java.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]


There's nothing intrinsically wrong with it, but I personally don't like Go's hardline stance against generics

"This remains an open issue." is not a hardline stance. See http://golang.org/doc/faq#generics

