

valderman

Member Since 02 Aug 2002
Offline Last Active Feb 27 2014 07:19 AM

Posts I've Made

In Topic: Help choosing tools for development using C and Haskell

02 November 2011 - 12:26 PM

You're going to want to use GHC for Haskell, most likely in the form of the Haskell Platform; Hugs is simply too slow (and, AFAIK, it's just a REPL and doesn't come with a compiler).
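Since the topic is mixing C and Haskell: for what it's worth, here's a minimal sketch (my own, not from the thread) of the kind of C interop GHC supports out of the box via the FFI. The module name and the choice of sin() are just placeholders.

{-# LANGUAGE ForeignFunctionInterface #-}
-- Minimal sketch: calling a C function from Haskell with GHC's FFI.
module Main where

import Foreign.C.Types (CDouble)

-- Import sin() from the C math library.
foreign import ccall "math.h sin"
  c_sin :: CDouble -> CDouble

main :: IO ()
main = print (c_sin 1.0)

-- Build and run with GHC:
--   ghc --make Main.hs && ./Main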

In Topic: Starting to hate Google...

02 November 2011 - 12:17 PM

I was referring to the common implementation of web workers, which, under the definition of no shared mutable state, usually requires the data to be duplicated so that it isn't changed on the original thread while the new thread works with it. The alternative is to use a lock statement and lock the resource while it's being used, so that only one thread may work with it at a time, without any overhead. It's usually much more flexible and has higher performance when working with large data sets.

"Higher performance" doesn't belong in the same sentence as "locks" and "shared state" if we're talking parallellism.

In regard to passing references in a message to a thread, I'm not sure what your question is. I never said they're mutually exclusive. How did you imagine working with a reference in a concurrent thread? Say you don't have locks and you just access it; then it's no different from what I described the more flexible system as allowing.

You seem to imply that if threads can have no shared mutable state, then data needs to be copied somewhere to avoid it being modified; in other words, that having a reference to a piece of data confers the ability to modify it. I'm trying to figure out if that's actually the case, or if I've misunderstood your point.

That's an extremely naive view of parallel processing. When used correctly, the speedup is rather impressive, especially on dual- and quad-core systems. Using it only as an abstraction is kind of missing the big picture, especially when those threads are mapped to kernel threads, as in most languages.

Concurrency doesn't imply parallelism, just as parallelism doesn't necessarily imply concurrency.
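To make that distinction concrete, here's a minimal sketch of parallelism without any programmer-visible concurrency, assuming GHC and the parallel package (Control.Parallel); sumHalves is just an illustrative name.

import Control.Parallel (par, pseq)

-- The two halves of the sum are evaluated in parallel, but there are
-- no threads, locks or shared mutable state visible to the programmer:
-- `par` is just an annotation on a pure computation.
sumHalves :: [Int] -> Int
sumHalves xs = a `par` (b `pseq` (a + b))
  where
    (ys, zs) = splitAt (length xs `div` 2) xs
    a        = sum ys
    b        = sum zs

main :: IO ()
main = print (sumHalves [1 .. 1000000])

-- Compile with:  ghc -O2 -threaded Main.hs
-- Run with:      ./Main +RTS -N2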

In Topic: Starting to hate Google...

02 November 2011 - 03:26 AM



Regarding concurrency:

An isolate is a unit of concurrency. It has its own memory and its own thread of control. Isolates communicate by message passing (10.14.4). No state is ever shared between isolates.

They're just taking the stupid idea of web workers, which was a useless feature, and throwing it into their language instead of implementing actual language locks and mutices.

I can't agree with you there. The locks & mutices threading model you are talking about is vastly inferior to Erlang-style concurrency (i.e. message passing, no shared mutable state).

Dart's Isolate is likely the result of the work they have done on coroutines in Google's Go - they quite closely resemble one another, although isolates appear to be a little more general (they can spawn OS threads as well as coroutines).
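For illustration, this is roughly what Erlang-style message passing looks like in Haskell; a minimal sketch of my own using Control.Concurrent.Chan, not anything from the Dart spec or Go.

import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)

main :: IO ()
main = do
  requests <- newChan
  replies  <- newChan

  -- The worker owns no shared mutable state; it only receives and
  -- sends messages on the two channels.
  _ <- forkIO $ do
    n <- readChan requests
    writeChan replies (n * 2 :: Int)

  writeChan requests 21
  answer <- readChan replies
  print answer   -- 42

The worker's entire interface to the rest of the program is its channels, which is exactly the property the isolate model is after.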

Seems like a good idea in theory until you actually do it. Pretend you have a set of data you want to work on with multiple threads, like for multithreaded pathfinding or image processing. Suddenly you're forced to copy data. (In web workers' case you can't copy more than 1 MB of data.) It doesn't really matter though, since the time wasted copying is so expensive that you've effectively negated any performance increase. That's not even counting copying the data back. The alternative is to just pass in a reference and go "hey, you two threads work on half the data each and get back to me when you're done". No copying overhead. You'll notice that in that situation neither thread even had to touch the other's data at the same time, so no lock was needed.

Care to clarify why you believe references and message passing are mutually exclusive?
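To be concrete, here's a minimal sketch of the exact scenario you describe, assuming GHC's Control.Concurrent (the variable names are just placeholders): each worker gets a reference to half of an immutable data set, nothing is copied, and no locks are taken.

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  let dat      = [1 .. 1000000] :: [Int]
      (lo, hi) = splitAt (length dat `div` 2) dat
  done1 <- newEmptyMVar
  done2 <- newEmptyMVar
  -- Each worker is handed a reference to its (immutable) half; nothing
  -- is duplicated and no locks are needed, because neither worker can
  -- mutate what the other one is reading. ($! forces the sum to be
  -- computed in the worker thread rather than lazily in main.)
  _ <- forkIO (putMVar done1 $! sum lo)
  _ <- forkIO (putMVar done2 $! sum hi)
  s1 <- takeMVar done1
  s2 <- takeMVar done2
  print (s1 + s2)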

The only time it makes sense is when you're passing off a few values or a small array for heavy processing. Then again, you lock out the bigger uses of threads that way. Locks and mutexes allow everything, without restrictions.

Concurrency is an abstraction, not an optimization; the basic bronze age concurrency primitives are quite obviously useless in this regard, when compared to Erlang-style message passing.

In Topic: So who is still around these days?

05 October 2011 - 05:19 PM

I've been here for, like, forever, but I doubt I've ever made much of an impression. (Except perhaps as "annoying kid who posts too much" - after being away for several years I still have more than 1.3 posts per day!)

In Topic: SSD price trends

06 September 2011 - 07:41 AM

their oversized music collection that they don't even listen to

Noise. Power consumption. Form. And even access times: scanning the ID3 tags of 5000 files is quite slow on standard HDs.

Yeah, those are valid concerns. However, the reason quoted for that storage solution is HOLY SHIT THIS IS SO FAST I CAN MAKE FIVE COPIES OF THIS ALBUM IN LIKE NO TIME AT ALL rather than the others. Of course, if that's what you want to do with your money then that's an equally valid reason. Doesn't seem very reasonable to me though.
