Unity C# .Net Open Source


Quote :

 

"Today, Microsoft is making the core parts of its .Net framework open-source, and cross-platform on Windows, Mac OS X, and Linux. Microsoft is also committing to adding Android and iOS support in the upcoming Visual Studio 2015 — in fact, there’s already an Android emulator in Visual Studio 2015 Preview, and iOS support will be added soon. Furthermore, Microsoft is releasing a new version of Visual Studio — “Community 2013? — that is free and full-featured.This is a bold move that will attempt to cement .Net, C#, and Visual Studio as the dominant development platform across Windows, Linux, Android, iOS, and Mac "

 

 

Is this going to change the world of programming? I mean, will it hurt Java, Ruby, Xcode, or the Linux development world much in the long run? For the first time, C# and Visual Studio will be able to do everything on every platform. What does that mean for the rest, like NetBeans, Eclipse, and other older tools? A lot of experienced programmers believe Visual Studio is the greatest IDE and C# one of the best languages out there, so how can this help Microsoft and new programmers? Could it make C# more popular on the server side of things against Java?

 

Thanks for your opinions and views on this, they are very much appreciated. I am French; thanks for the advice, and sorry for the English.

 

 

Some sources:

 

http://www.extremetech.com/computing/194099-microsoft-makes-net-open-source-finally-embraces-ios-android-and-linux

 

http://visualstudiomagazine.com/Home.aspx

 

 

Mederick


I don't like Mono, but I might try C# once it's fully integrated into Ubuntu. If its speed is comparable to C++ and it doesn't cause install hassle for users, I might use it; otherwise I'll simply stick with C++. I'm curious whether Unity3D will move to Microsoft's .NET/C# implementation, since they haven't updated to the latest Mono.


I don't like Mono, but I might try C# once it's fully integrated into Ubuntu. If its speed is comparable to C++ and it doesn't cause install hassle for users, I might use it; otherwise I'll simply stick with C++. I'm curious whether Unity3D will move to Microsoft's .NET/C# implementation, since they haven't updated to the latest Mono.

 

C# is, generally speaking, only slower on first run, because the runtime is JIT-compiling the application down to native instructions for the environment it's running on.

 

Poor cache awareness in the application code, however, might hurt performance more; but then again, if you don't pay attention to this in C++ you will see a similar slowdown.
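As a rough illustration of paying that first-run cost up front, something like the following can force the JIT to compile a type's methods at startup. JitWarmup and PrepareType are just illustrative names; the only real API used is RuntimeHelpers.PrepareMethod.

using System;
using System.Reflection;
using System.Runtime.CompilerServices;

static class JitWarmup
{
    // Ask the runtime to JIT-compile every concrete, non-generic method on a type now,
    // so the cost is paid during startup instead of on first use in the frame loop.
    public static void PrepareType(Type type)
    {
        const BindingFlags all = BindingFlags.Public | BindingFlags.NonPublic |
                                 BindingFlags.Instance | BindingFlags.Static |
                                 BindingFlags.DeclaredOnly;
        foreach (MethodInfo method in type.GetMethods(all))
        {
            if (method.IsAbstract || method.ContainsGenericParameters)
                continue; // nothing to compile, or needs concrete type arguments first
            RuntimeHelpers.PrepareMethod(method.MethodHandle);
        }
    }
}

// e.g. JitWarmup.PrepareType(typeof(PhysicsWorld)); // PhysicsWorld is a made-up "hot" type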
 


I'm curious whether Unity3D will move to Microsoft's .NET/C# implementation, since they haven't updated to the latest Mono.

 

Doubtful. They have their own branch of Mono, and it has not been synced with the main branch for years; as a result it is not compatible with up-to-date versions of MS .NET or Mono. If there were an easy fix to get away from the 'stop the world' garbage collector in their version, they would have updated a long time ago... (note: I have not looked at Unity 5, so I might be out of date)

 

Not a big fan of Java or C#, but let's hope this means we are one step closer to ditching security issues caused by Java, or that it forces Oracle to step up and fix their broken platform/patching schedule.


I love C#.

 

Just for the assemblies you can get hold of and plug in: want a web browser in your application? Ten minutes of work. Want a Delaunay triangulation system? Another ten minutes.

 

For tools and general applications it's awesome. You can really get things done quickly.

 

Also, the ability to compile code on the fly is very useful: you don't need a separate scripting system. Write your scripts in C# and compile them in at runtime.
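For example, compiling a script string into an in-memory assembly via the classic CodeDOM route looks roughly like this (Roslyn is the newer alternative); the typeName/methodName arguments are whatever your own script convention dictates.

using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

static class ScriptCompiler
{
    // Compile C# source at runtime into an in-memory assembly and run a static,
    // parameterless method on it.
    public static object Run(string source, string typeName, string methodName)
    {
        using (var provider = new CSharpCodeProvider())
        {
            var options = new CompilerParameters
            {
                GenerateInMemory = true,     // keep the compiled assembly off disk
                GenerateExecutable = false
            };
            options.ReferencedAssemblies.Add("System.dll");

            CompilerResults results = provider.CompileAssemblyFromSource(options, source);
            if (results.Errors.HasErrors)
                throw new InvalidOperationException(results.Errors[0].ErrorText);

            Type scriptType = results.CompiledAssembly.GetType(typeName);
            return scriptType.GetMethod(methodName).Invoke(null, null);
        }
    }
}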

 

The garbage collector can be a pain in the ass, so watching what you use as temporary objects is important.

 

However, all in all, it's a really good language to work in.



Is this going to change the world of programming? I mean, will it hurt Java, Ruby, Xcode, or the Linux development world much in the long run?

I'm a new programmer, so take my words with a fistful of salt. I believe this will likely not hurt Java or C++, as Java is already used in many different jobs, and C++ is just much more powerful than C#. It'd be a hassle to convert an entire company from Java to C#, getting software, training, etc.

I do believe, however, that smaller languages will likely become even smaller until they do something that makes them better than C#/Java.

Or they will go extinct.

Either way.

Your move, Microsoft.


I love C#.
 
Just for the assemblies you can get hold of and plug in: want a web browser in your application? Ten minutes of work. Want a Delaunay triangulation system? Another ten minutes.
 
For tools and general applications it's awesome. You can really get things done quickly.
 
Also, the ability to compile code on the fly is very useful: you don't need a separate scripting system. Write your scripts in C# and compile them in at runtime.
 
The garbage collector can be a pain in the ass, so watching what you use as temporary objects is important.
 
However, all in all, it's a really good language to work in.


Agreed, assemblies just work(TM), and NuGet makes it even better: installing a new dependency is, literally, one click away. C# is a very fun, enjoyable language to work with. Even though it has its quirks and isn't necessarily suitable for every task, it does a great job of being flexible enough for most purposes while remaining efficient enough in terms of both program performance and developer resources (and, importantly, it gives you most of the tools needed to improve performance in critical code, for instance manual struct layouts and reducing pressure on the GC by using short-lived value types on the stack; of course, you actually need to [be able to] use them to benefit from them). It's not the one true language, but it strikes a very good balance IMHO.
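As a small illustration of those two points (a manually laid out struct, and a short-lived value type that never touches the managed heap); Particle is just a made-up example type.

using System.Runtime.InteropServices;

// Explicit layout pins each field to a byte offset, which matters when matching
// a native struct or packing data tightly.
[StructLayout(LayoutKind.Explicit, Size = 16)]
struct Particle
{
    [FieldOffset(0)]  public float X;
    [FieldOffset(4)]  public float Y;
    [FieldOffset(8)]  public float Z;
    [FieldOffset(12)] public float Age;
}

static class ParticleStep
{
    // A short-lived value type lives on the stack (or in registers) and is gone
    // when the method returns, so it creates no work for the garbage collector.
    public static float AgeOneFrame(float dt)
    {
        Particle p = new Particle { Age = 0f };
        p.Age += dt;
        return p.Age;
    }
}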

I feel like I should point out that this is not limited to C#. MS is open-sourcing the .NET libraries which work with any .NET compatible language (C++/CLI, C#, VB, F#, variants on Python and other languages).

What this is basically going to do is give an "official" distribution of .NET on non-MS platforms that open-source people can freely include in their distributions (and I believe Mono is the actual port, so they're not going to be "killed" by this, just "legitimized" if that's the right word...)

Even better is that MS is open-sourcing the runtime which includes the JITer, GC, and other bits that were "black boxed" before, allowing the community to improve/port them.

As to C#, that was already freely usable since it was made (it's an ISO standard) and MS open-sourced their C# and VB compilers earlier this year - Roslyn.

Java is too entrenched to go anywhere, but this may make .NET far more popular than it has been in the open-source and mobile communities. Edited by SmkViper


I haven't got the time yet to take a look at what's been open-sourced, but the advantage of Mono over .NET is, and has always been, that it can be embedded in your app. No messy installations, just copy-paste a lot of files into your game directory. A lot of people underestimate installers/uninstallers :)

 

I'm curious what will follow. Making VS free for "non-enterprise users" (in the Connect() videos they said small companies and indie devs are OK) is an awesome move. If they invest in clang for windows apps (mainly for the clang tools), it would be really awesome.


I haven't got the time yet to take a look at what's been open-sourced, but the advantage of Mono over .NET is, and has always been, that it can be embedded in your app. No messy installations, just copy-paste a lot of files into your game directory. A lot of people underestimate installers/uninstallers :)
 
I'm curious what will follow. Making VS free for "non-enterprise users" (in the Connect() videos they said small companies and indie devs are OK) is an awesome move. If they invest in clang for windows apps (mainly for the clang tools), it would be really awesome.


VS is free for dev teams of five people or fewer, educational institutions, and open source projects, as Visual Studio Community Edition - this is equivalent to the "Pro" version of VS.

They are already making Clang bindings for Android/iOS, but I don't see any reason for them to use Clang for Windows since they'd rather use their own compiler. Clang does not (yet) support the necessary extensions to compile with Windows and related headers and libraries, though there are teams working on it. I believe C++ Builder uses their own Clang port for 64-bit Windows but they don't have one for 32-bit yet.

And copying Mono into your app just wastes the user's storage space/download cap. I'd (personally) rather just install a shared library once and re-use it rather than having unique copies in each program I have installed. (Though I do see the value in a drag-and-drop "install" process, but that's more of an OS X thing.) Edited by SmkViper

 

I don't like Mono, but I might try C# once it's fully integrated into Ubuntu. If its speed is comparable to C++ and it doesn't cause install hassle for users, I might use it; otherwise I'll simply stick with C++. I'm curious whether Unity3D will move to Microsoft's .NET/C# implementation, since they haven't updated to the latest Mono.

 

Unity3D has already started using C# as its scripting language, and in conjunction with MS has developed a Visual Studio plugin that will interface with it for writing your scripts while in Unity.


It's not going to take many existing Java jobs away from places where Java is already in place; the only reason such a migration might make sense is if a company was having trouble finding Java people (or had an abundance of C# people) in their local area. For new jobs where neither Java nor C# is already in place, C# will be more attractive now -- to be perfectly blunt, C# is a better language than Java, full stop. The only advantage Java has really had is that it had been more open and had gotten a head start, especially on non-Microsoft platforms. Through Mono, C# has already been an option in many places, but people are wary of Mono for fear of it not being "official" or for fear of Microsoft one day coming after them. Those concerns are now moot.

 

The core of .NET is open, but not everything, so you won't see total compatibility for every .NET desktop application overnight. What you will see, eventually, is that the open source core will be pulled into, or drawn from by, projects like Mono or Unity. As a result, those projects will have an easier time maintaining parity with language features, and will have more time to work on the things that aren't part of the open-source core. The runtime, and effectively the languages, are all part of that core though -- I think it's just parts of the platform libraries that aren't open yet.

 


Poor cache awareness in the application code, however, might hurt performance more; but then again, if you don't pay attention to this in C++ you will see a similar slowdown.

 

It's true, but the design of managed languages and the CLR gives you less control over very precise behaviors of memory use. Cache-aware C# runs better than non-cache-aware C#, but will likely never run as well as cache-aware C or C++, and it still lacks truly deterministic resource reclamation, which is also a hindrance to performance-tuned C#.
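To make the cache-awareness point concrete in C#: a contiguous array of structs can be walked linearly, while an array of class references chases pointers all over the managed heap. The types here are made up purely for illustration.

struct PositionStruct { public float X, Y, Z; }
class  PositionClass  { public float X, Y, Z; }

static class CacheExample
{
    // One contiguous block of floats: a linear pass streams nicely through the cache.
    public static float SumStructs(PositionStruct[] positions)
    {
        float total = 0f;
        for (int i = 0; i < positions.Length; i++)
            total += positions[i].X;
        return total;
    }

    // An array of references: each element can live anywhere on the heap,
    // so the same pass may take a cache miss on every element.
    public static float SumClasses(PositionClass[] positions)
    {
        float total = 0f;
        for (int i = 0; i < positions.Length; i++)
            total += positions[i].X;
        return total;
    }
}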

 


Unity3D has already started using C# as its scripting language, and in conjunction with MS has developed a Visual Studio plugin that will interface with it for writing your scripts while in Unity.

 

Actually, Microsoft bought a company called SyntaxTree, which had already made and sold a plugin called UnityVS. Those folks are now working as part of Microsoft, together with the Visual Studio team, to offer a better product. On top of that, the product, now called Visual Studio Tools for Unity, has been made free, and there's now VS2013 Community, a free version of Visual Studio that supports such plugins. VS Community and VSTU are part of a general trend of making tools more accessible.


Microsoft bought a company called SyntaxTree, which had already made and sold a plugin called UnityVS

 
I love UnityVS. When I got the email from Microsoft that announced they had acquired UnityVS and would be contacting the purchasers with additional information I was very concerned.
 
Was it going to be Microsoft embrace/extend/extinguish, or a case of promoting it to world class software? 
 
Sadly the jury is still out on this.


While the jury might be out, I'd be less concerned about this outcome now than a few years ago; opening up a free VS with plugin support, combined with acquiring the plugin itself, would seem to indicate a desire to support Unity on their platforms (and perhaps tempt people toward non-Unity tools going forward) so that people keep using those platforms - provide a good experience on Windows for Unity development and people will keep using and releasing on Windows.
(On a related note VS2015 is being specifically tested with UE4 to improve the development experience there too - so it seems MS are keen to support engine users via their tools.)

Same goes for the Android debugging support - with that one announcement my love for MS just went up 10,000 fold because frankly trying to do anything on Android from Windows right now remains a complete cluster-fuck and I have no faith that Google will do anything useful about it any time soon.

I've been saying this for a while, but recent actions have made it clearer: this isn't the old MS. There is a large shift happening, from giving things away for free to admitting other OSes exist beyond Windows, so I suspect a lot of the old habits are going to die too.


If you follow a few simple practices (and sometimes a few complex practices) this ["no way to get away from the 'stop the world' garbage collector"] is actually an amazing feature of the languages.

The GC may be amazing, but why is barring you from having any control an amazing feature? Wouldn't it be nice if you could choose to opt in to specifying the sizes of the different heaps, hinting at good times to run the different phases, specifying runtime limits, providing your own background threads instead of automatically getting them, etc? Would it harm anything to allow devs to opt in to that stuff? Do the amazing features require the GC to disallow these kinds of hints?
 

With modern versions of both Java and C# ... On rare occasions  [when GC runs at the wrong time, it consumes] on the order of 1/10,000 of your frame time.

16.667ms / 10000 = 1.7 microseconds
Having seen GCs eat up anywhere from 1-8ms per frame in the past (when running on a background low-priority thread), claims of 1µs worst-case GC times sound pretty unbelievable -- the dozen cache misses involved in a fairly minimal GC cycle would alone cost that much time!
 
I know C# has come a long way, but claims of magic on that scale are justifiably going to be met with some skepticism.
Combine that skepticism with the huge cost involved in converting an engine over to use a GC as its core memory management system, and you've still got a lot of resistance to accepting them.
Also, it's often impossible to do an apples-to-apples comparison, because the semantics used by the initial allocation strategies and the final GC strategy end up being completely different, making it hard to do a valid real-world head-to-head too...
 

while your program has some spare time on any processor (which is quite often)

Whether it's quite often or not entirely depends on the game. If you're CPU bound, then the processor might never be idle. In that case, instead of releasing your per-frame allocations every frame, they'll build up until some magical threshold out of your control is triggered, causing a frame-time hitch as the GC finally runs in that odd frame.

Also when a thread goes idle, the system knows that it's now safe to run the GC... but the system cannot possibly know how long it will be idle for. The programmer does know that information though! The programmer may know that the thread will idle for 1 microsecond at frame-schedule point A, but then for 1 millisecond at point B.
The system sees both of those checkpoints as equal "idle" events and so starts doing a GC pass at point A. The programmer sees them as having completely different impacts on the frame's critical path (and thus frame-time) and can explicitly choose which one is best, potentially decreasing their critical path.
 

In C++ ... collection (calling delete or free) takes place immediately ... this universally means that GC runs at the worst possible time, it runs when the system is under load.

I assume here we're just dealing with the cost in updating the allocator's management structures -- e.g. merging the allocation back into the global heap / the cost of the C free function / etc?

In most engines I've used recently, when a thread is about to idle, it first checks in with the job/task system to see if there's any useful work for it to do instead of idling. It would be fairly simple to have free push the pointer into a thread-local pending list, which kicks a job to actually free that list of pointers once some threshold is reached.
I might give it a go :D Something like this for a quick attempt, I guess.
 
However, the cost of freeing an allocation in a C++ engine is completely different to the (amortized) cost of freeing an allocation with a GC.
There's no standard practice for handling memory allocation in C++ -- the 'standard' might be something like shared_ptr, etc... but I've rarely seen that typical approach make its way into game engines.
The whole time I've been working on console games (PS2->PS4), we've used stack allocators and pools as the front-line allocation solutions.

Instead of having one stack (the call stack) with a lifetime of the current program scope, you make a whole bunch of them with different lifetimes. Instead of having the one scope, defined by the program counter, you make a whole bunch of custom scopes for each stack to manage the sub-lifetimes within them. You can then use RAII to tie those sub-lifetimes into the lifetimes of other objects (which might eventually lead back to a regular call-stack lifetime).
Allocating an object from a stack is equivalent to incrementing a pointer -- basically free! Allocating N objects is the exact same cost.
Allocating an object from a pool is just about as free -- popping an item from the front of a linked list. Allocating N objects is (N * almost_free).
Freeing any number of objects from a stack is free, it's just overwriting the cursor pointer with an earlier value.
Freeing an object from a pool is just pushing it to the front of the linked list.
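A minimal C# sketch of the stack-allocator idea (handing out slices of one pre-reserved block, rewinding a cursor to free); the class and names are illustrative, not from any particular engine.

using System;

sealed class StackAllocator
{
    private readonly byte[] _block;
    private int _cursor;

    public StackAllocator(int capacityBytes)
    {
        _block = new byte[capacityBytes];   // one big pre-reserved block
    }

    // Remember the current cursor so everything allocated after it can be freed at once.
    public int Marker { get { return _cursor; } }

    // Allocation is just bumping the cursor forward.
    public ArraySegment<byte> Allocate(int sizeBytes)
    {
        if (_cursor + sizeBytes > _block.Length)
            throw new OutOfMemoryException("stack allocator exhausted");
        var segment = new ArraySegment<byte>(_block, _cursor, sizeBytes);
        _cursor += sizeBytes;
        return segment;
    }

    // Freeing any number of allocations is a single assignment: rewind the cursor.
    public void FreeToMarker(int marker)
    {
        _cursor = marker;
    }
}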

 

 

Also, while we're talking about these kinds of job systems -- the thread-pool threads are very often 'going idle' but then popping work from the job queue instead of sleeping. It's pretty ridiculous to claim that these jobs are free because they're running on an otherwise 'idle' thread. Some games I've seen recently have a huge percentage of their processing workload inside these kinds of jobs. It's still vitally important to know how many ms each of these 'free' jobs is taking.
 

In the roughly 11 major engines I have worked with zero of them displaced the heap processing to a low priority process.

The low priority thread is there to automatically decide a good 'idle' time for the task to run. The engines I've worked with recently usually have a fixed pool of normal priority threads, but which can pop jobs of different priorities from a central scheduler. The other option is the programmer can explicitly schedule the ideal point in the frame for this work to occur.

I find it hard to believe that most professional engines aren't doing this at least in some form...?
e.g.
When managing allocations of GPU-RAM, you can't free them as soon as the CPU orphans them, because the GPU might still be reading that data due to it being a frame or more behind -- the standard solution I've seen is to push these pointers into a queue to be released in N frames' time, when it's guaranteed that the GPU is finished with them.
At the start of each CPU-frame, it bulk releases a list of GPU-RAM allocations from N frames earlier.
Bulk-releasing GPU-RAM allocations is especially nice, because GPU-RAM heaps usually have a very compact structure (instead of keeping their book-keeping data in scattered headers before each actual allocation, like many CPU-RAM heaps do) which can potentially fit entirely into L1.
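A rough sketch of that N-frames-later release queue; the Action<IntPtr> callback stands in for whatever the engine's real GPU-heap free call would be.

using System;
using System.Collections.Generic;

sealed class DeferredReleaseQueue
{
    private readonly List<IntPtr>[] _pending;   // one bucket per in-flight frame
    private readonly Action<IntPtr> _release;   // the actual GPU-heap free, supplied by the caller
    private int _frameIndex;

    public DeferredReleaseQueue(int framesInFlight, Action<IntPtr> release)
    {
        _pending = new List<IntPtr>[framesInFlight];
        for (int i = 0; i < framesInFlight; i++)
            _pending[i] = new List<IntPtr>();
        _release = release;
    }

    // Called when the CPU orphans an allocation the GPU may still be reading.
    public void Retire(IntPtr allocation)
    {
        _pending[_frameIndex].Add(allocation);
    }

    // Called once at the start of each CPU frame: bulk-release the bucket queued
    // framesInFlight frames ago, then reuse that bucket for this frame.
    public void BeginFrame()
    {
        _frameIndex = (_frameIndex + 1) % _pending.Length;
        List<IntPtr> ready = _pending[_frameIndex];
        for (int i = 0; i < ready.Count; i++)
            _release(ready[i]);
        ready.Clear();
    }
}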
 
Also, when using smaller, local memory allocators instead of global malloc/free everywhere, you've got thread safety to deal with. Instead of the slow/general-purpose solution of making your allocators all thread-safe (lock-free / surrounded by a mutex / etc), you'll often use a similar strategy to the above, where you batch up 'dead' resources (potentially using wait-free queues across many threads) and then free them in bulk on the thread that owns the allocator.
e.g. a job that's running on an SPU might output a list of Entity handles that can be released. That output buffer forms an input to another job that actually performs the updates on the allocator's internal structures to release those Entities.
 
One engine I used recently implemented something similar to the Actor model, allowing typical bullshit style C++ OOP code to run concurrently (and 100% deterministically) across any number of threads. This used typical reference counting (strong and weak pointers) but in a wait-free fashion for performance (instead of atomic counters, an array of counters equal in size to the thread pool size). Whenever a ref-counter was decremented, the object was pushed into a "potentially garbage" list. Later in the frame schedule where it was provable that the Actors weren't being touched, a series of jobs would run that would aggregate the ref counters and find Actors who had actually been decremented to zero references, and then push them into another queue for actual deletion.
 
Lastly, even if you just drop in something like tcmalloc to replace the default malloc/free, it does similar work internally, where pointers are cached in small thread-local queues before eventually being merged back into the global heap in batches.
 

When enough objects are ready to move to a different generation of the GC (in Mono the generations are 'Nursery', 'Major Heap', in Java they are "Young Collection" and "Old Space Collection") the threads referencing the memory are paused, a small chunk of memory is migrated from one location to another transparently to the application, and the threads are resumed.

Isn't it nicer to just put the data in the right place to begin with?
It's fairly normal in my experience to pre-create a bunch of specialized allocators for different purposes and lifetimes. Objects that persist throughout a whole level are allocated from one source, objects in one zone of the level from another, objects existing for the life of a function from another (the call-stack), objects for the life of a frame from another, etc...
Often, we would allocate large blocks of memory that correspond to geographical regions within the game world itself, and then create a stack allocator that uses that large block for storing objects with the same lifespan as that region. If short-lived objects exist within the region, you can create a long-lived pool of those short-lived objects within the stack (within the one large block).
When the region is no longer required, that entire huge multi-MB block is returned to a pool in one single free operation, which takes a few CPU cycles (pushing a single pointer into a linked list). Even if this work occurs immediately as you say is a weakness of most C++ schemes, that's still basically free, vs the cost of tracking the thousands of objects within that region with a GC...
 

On extremely rare occasions (typically caused by bad/prohibited/buggy practices) it will unexpectedly run when the system is under load, exactly like C++ except not under your control.

So no - the above C++ allocation schemes don't sound exactly like a GC at all :P

Edited by Hodgman


:)

 

C++ memory handling is a completely different subject. You could write a garbage collector for your C++ project, but I don't advise it.

 

We typically have four different memory managers in our C++ engines, all for different situations, and it's up to the coders to manage their use themselves. Yes, it does mean I have to yell at people when they do something stupid. Yes, it does mean that a single bad check-in can break the entire game, but it's still preferable to a garbage collector.

 

Also, I have seen cases where the C# garbage collector can hang for several seconds, and in one case it actually crashed the game, but in all these cases it was bad programming that caused the issue, not a fault in C#.

 

Anyway, going back to the OP's original question:

 

1) Will it change programming?

   Of course not. We will still be sitting in front of a monitor, typing on a keyboard and swearing at a compiler; .NET won't change that.

 

2) Will it hurt Java

  Yes, and so it should. Java is a pile of ....<<3 megabytes of expletives removed by filter>> Several years of my life have been wasted writing Java VMs, so I know how it works and I wish it had been strangled at birth.

 

3) Eclipse et al.

   Will continue. People like what they know and are hard to persuade to change. Hell, I am sure there are people who still miss Netscape. I still have a copy of Vi on my machines. Nothing Microsoft does will change that.


The GC may be amazing, but why is barring you from having any control an amazing feature? Wouldn't it be nice if you could choose to opt in to specifying the sizes of the different heaps, hinting at good times to run the different phases, specifying runtime limits, providing your own background threads instead of automatically getting them, etc? Would it harm anything to allow devs to opt in to that stuff? Do the amazing features require the GC to disallow these kinds of hints?

 

You can do a lot of that now. You've always been able to hint that a new collection should be run (which you might want to do right after loading a new level, say). Newer versions of .NET let you disable the GC and turn it back on again for sections of your code, so you could leave it disabled for your main loop and then flip it back on again during level load. There are different levels to this feature as well; you can set it to *never* run, or set it to "low latency", where it almost never runs unless you get critically close to running out of memory. You can also manually compact the LOH, letting you choose good times to reduce fragmentation.
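For reference, the knobs mentioned above look roughly like this in recent .NET versions (GCSettings, the LOH compaction mode, and the no-GC region are real APIs; the surrounding helper class is just for illustration):

using System;
using System.Runtime;

static class GcControl
{
    // Hint a full collection at a convenient time, e.g. right after loading a level,
    // and compact the large object heap while we're at it.
    public static void CollectAfterLoad()
    {
        GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect();
        GC.WaitForPendingFinalizers();
    }

    // Discourage blocking collections while the main loop is running.
    public static void EnterMainLoopMode()
    {
        GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;
    }

    // .NET 4.6+: forbid collections entirely for a critical section, provided it
    // allocates no more than the requested budget.
    public static void RunWithoutGc(Action criticalSection, long budgetBytes)
    {
        if (!GC.TryStartNoGCRegion(budgetBytes))
        {
            criticalSection();   // couldn't reserve the budget; just run normally
            return;
        }
        try
        {
            criticalSection();
        }
        finally
        {
            // The region may already have ended if the budget was exceeded.
            if (GCSettings.LatencyMode == GCLatencyMode.NoGCRegion)
                GC.EndNoGCRegion();
        }
    }
}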

 

If you want even more control, like taking full control of thread scheduling of the GC or setting size limits, you can host the CLR, similar to how Unity works. There are a crazy amount of knobs to tweak there.

 

Of course, the simplest advice that avoids all of this is what it has always been in both the managed and native worlds: during the main loop of your game, don't heap allocate. Not necessarily easy, but simple to understand. It's certainly easier to do in C++, but also doable in C# (in fact, it was almost a hard requirement that you do that for Xbox Arcade XNA games, since the Xbox's GC was pretty crappy). Unlike in some other managed languages that will remain unnamed, the .NET CLR supports value types, so you can with just a bit of effort cut down heavily on the amount of garbage you're generating.

 

For the times you absolutely need heap allocations but really need to avoid the managed heap, you can always just *allocate native memory* anyway! There's nothing stopping you from malloc-ing some native memory blocks and doing your work there. I do this pretty commonly in my own projects for certain classes of memory where I need explicit control over lifetime or alignment.
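A minimal sketch of that last point, grabbing a block the GC never sees (requires compiling with /unsafe for the pointer access; the helper is illustrative):

using System;
using System.Runtime.InteropServices;

static class NativeBlockExample
{
    public static void Run()
    {
        const int size = 1024 * 1024;
        IntPtr block = Marshal.AllocHGlobal(size);   // unmanaged memory: invisible to the GC
        try
        {
            unsafe
            {
                byte* p = (byte*)block;
                for (int i = 0; i < size; i++)
                    p[i] = 0;                        // never moved, never collected
            }
        }
        finally
        {
            Marshal.FreeHGlobal(block);              // lifetime is entirely under our control
        }
    }
}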

Edited by Mike.Popoloski

One thing that hasn't been considered in all this GC discussion is that the GC only knows about memory.

That's it.

The C++ "RAII" idiom actually can clean up any resource immediately. And this is incredibly important. It is, in fact, so important that .NET has the IDisposable idiom to perform RAII-like tasks in its GC world.

Sure, I may not mind if my 6-core desktop with 32gb of ram leaves a few data structures around and cleans them up with a background thread later, but I do want it to not keep a file open forever because the GC has decided that it doesn't need to run yet.

And that doesn't even get into the issues that GC in general has on memory-limited devices like phones, tablets, and consoles. In fact, there are special versions of .NET that run on some of these devices with a non-generational GC, because they can't afford all the extra memory a generational GC requires. And that kind of GC is most certainly not "invisible" to your program.

Well-written GCs (like .NET's) are great for memory on non-memory constrained systems. But they never mean you don't have to care about memory. And they don't do anything for you with non-memory resources.


Indeed, you still have to play nice with the GC. For example, do not generate tons of medium-lifetime objects. Short-lived objects are more or less free, and large allocations are also more or less free (from a processing perspective), but medium-lifetime objects tend to trigger the more expensive collections, which notably affect framerate etc. And those do take their time, so you need to avoid getting there.

 

Use value types (aka structs) where appropriate. A giant array of structs is still only a single GC reference, unless they have reference members of course. They are also more cache-friendly.

 

You can use pools in C# as well. A lot of people seem to forget about this. Granted, they are a bit messier to use since you have to preallocate the actual objects you will reuse, but still.
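Something like this minimal preallocated pool sketch, for instance (purely illustrative):

using System.Collections.Generic;

// Objects are created once up front and recycled, so steady-state gameplay
// produces no new garbage for the collector.
sealed class Pool<T> where T : class, new()
{
    private readonly Stack<T> _free = new Stack<T>();

    public Pool(int capacity)
    {
        for (int i = 0; i < capacity; i++)
            _free.Push(new T());
    }

    public T Rent()
    {
        return _free.Count > 0 ? _free.Pop() : new T();  // falls back to allocating if exhausted
    }

    public void Return(T item)
    {
        _free.Push(item);
    }
}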

 

Be aware of which language features cause memory allocations (aka garbage) and avoid using those in tight situations: foreach, yield, params/variable-argument methods, string concatenations, ... If you have ReSharper (you should), there's a plugin which highlights heap allocations.
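A couple of concrete examples of the hidden allocations being talked about (exact behaviour depends on the runtime version, so treat this as illustrative):

using System.Collections.Generic;

static class HiddenAllocations
{
    // Allocates: enumerating through the interface boxes List<T>'s struct enumerator.
    public static int SumViaInterface(IEnumerable<int> values)
    {
        int total = 0;
        foreach (int v in values) total += v;
        return total;
    }

    // No allocation: foreach over the concrete List<T> uses its value-type enumerator.
    public static int SumViaList(List<int> values)
    {
        int total = 0;
        foreach (int v in values) total += v;
        return total;
    }

    // Allocates: every call to a params method builds a fresh array for its arguments,
    // e.g. SumParams(1, 2, 3) compiles to SumParams(new int[] { 1, 2, 3 }).
    public static int SumParams(params int[] values)
    {
        int total = 0;
        for (int i = 0; i < values.Length; i++) total += values[i];
        return total;
    }
}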


Was it going to be Microsoft embrace/extend/extinguish, or a case of promoting it to world class software?



Sadly the jury is still out on this.


It's a bit of a shame that worries about the Old Microsoft follow the New Microsoft so doggedly, but your skepticism is understandable.

Speaking generally, and from the inside, I can assure you that the new leaf really is genuine. If you consider the corpus of all the announcements made last week, and the sheer effort or expenditure that it took to make them happen, I'd hope that'd go a long way towards convincing you of the new strategy.


One thing that hasn't been considered in all this GC discussion is that the GC only knows about memory.

That's it.

The C++ "RAII" idiom actually can clean up any resource immediately. And this is incredibly important. It is, in fact, so important that .NET has the IDisposable idiom to perform RAII-like tasks in its GC world.

Yep, and that means you get scoped RAII the same as C++, so I don't really see why that's an issue?
 

And they don't do anything for you with non-memory resources.


Not entirely true. Consider the following class

public class Resource<T> : IDisposable
    where T : class, new()
{
    private readonly string _id;
    private T _resource;

    public Resource(string id)
    {
        _id = id;
        _resource = new T();
        Console.WriteLine("{0} acquired", _id);
    }

    public void Dispose()
    {
        if (_resource != null)
        {
            _resource = null;
            Console.WriteLine("{0} released", _id);
        }
    }
        
    ~Resource()
    {
        Dispose();
    }
}

In this case, the resource can either be manually scoped (i.e. using) or if we're not concerned about when the Resource is released, we can let the GC take care of it

new Resource<object>("global");
using (new Resource<object>("local"))
{
               
}

// output 
global acquired
local acquired
local released
global released


One thing that hasn't been considered in all this GC discussion is that the GC only knows about memory.

That's it.

The C++ "RAII" idiom actually can clean up any resource immediately. And this is incredibly important. It is, in fact, so important that .NET has the IDisposable idiom to perform RAII-like tasks in its GC world.

Yep, and that means you get scoped RAII the same as C++, so I don't really see why that's an issue?


Because I have to write a huge amount of boilerplate code, and write it perfectly, knowing whether other "garbage collected" pointers I'm using have been cleaned up or not, just to implement something I get for "free" in C++.

I have to actively work against the GC because the GC is not designed for this.
 

And they don't do anything for you with non-memory resources.


Not entirely true. Consider the following class

public class Resource<T> : IDisposable
    where T : class, new()
{
    private readonly string _id;
    private T _resource;

    public Resource(string id)
    {
        _id = id;
        _resource = new T();
        Console.WriteLine("{0} acquired", _id);
    }

    public void Dispose()
    {
        if (_resource != null)
        {
            _resource = null;
            Console.WriteLine("{0} released", _id);
        }
    }
        
    ~Resource()
    {
        Dispose();
    }
}
In this case, the resource can either be manually scoped (i.e. using) or if we're not concerned about when the Resource is released, we can let the GC take care of it

new Resource<object>("global");
using (new Resource<object>("local"))
{
               
}

// output 
global acquired
local acquired
local released
global released

Again, the GC only handles memory. It has no sense of urgency with how needed another resource is because it can only see memory pressure, not active file system requests (for example). Also, you have actually implemented it incorrectly. You do not handle cleanup of managed resources, and you do not tell the GC to suppress the finalizer when it is disposed (reference).

The sheer number of people asking how to implement IDisposable properly show how much of a hack it is.

I'm not saying you should avoid C# because of this. But it is a major downside to the way it (and other GC languages) are designed that languages like C++ just handle far better.

Oh, and I almost forgot. The IDisposable pattern requires your user to be aware that you use it, and to handle it correctly on their end. Or even better - they have to figure out how to implement the disposable pattern if they store an instance of your class in their own. Both things that, again, C++ gives you automatically with its stack and RAII wrappers for heap objects. Edited by SmkViper
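For anyone reading along, the standard dispose pattern that reply refers to looks roughly like this; ResourceHolder and its fields are just placeholders.

using System;

public class ResourceHolder : IDisposable
{
    private IntPtr _nativeHandle;        // stands in for some unmanaged resource
    private IDisposable _managedChild;   // stands in for an owned managed resource
    private bool _disposed;

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);       // the finalizer no longer needs to run
    }

    protected virtual void Dispose(bool disposing)
    {
        if (_disposed) return;

        if (disposing)
        {
            // Only safe to touch other managed objects when called from Dispose(),
            // not from the finalizer.
            if (_managedChild != null) _managedChild.Dispose();
        }

        // Unmanaged cleanup happens on both paths.
        ReleaseNativeHandle(_nativeHandle);
        _nativeHandle = IntPtr.Zero;
        _disposed = true;
    }

    ~ResourceHolder()
    {
        Dispose(false);
    }

    private static void ReleaseNativeHandle(IntPtr handle)
    {
        // free the native resource here
    }
}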


I agree with a hell of a lot of what you have said in this thread, but I think we are missing the point a little.

 

Yes, you have to think about the GC when you write your code, and it's really important, but if you screw up, your game runs slowly because the GC is running all the time.

 

If you screw up in C++, the game crashes.

 

Now, I personally prefer the game to crash - it MAKES you fix the problem - but for a newb starting out, the game running slowly is probably a better idea.

 

I spend a hell of a lot of my working week finding memory issues. They usually come down to some idiot grabbing a reference to something and failing to release it at the right time. Simple fix when I do find it. 

 

Hell of a job to find it.

 

One line of code missing in several million LOC's...... whimper.

 

I also have to deal with several completely different memory managers: fast ones, safe ones, ones that defrag themselves, ones that were written in 1991, even one called a Two-Level Segregated Fit memory allocator. :(

 

GC is looking rather nice at the moment.

