Rebooted

Members
  • Content count: 815
  • Joined
  • Last visited

Community Reputation: 612 Good

About Rebooted
  • Rank: Advanced Member
  1. My point wasn't that quantum mechanics is weird, it was that I don't see how a deterministic theory of wavefunction collapse can reproduce the predictions made by quantum mechanics. And I don't see how FTL communication and the associated paradoxes could be ruled out either. However, I do think it is possible, even likely, that a (probabilistic) collapse theory could be found. What's the most obvious difference between a macroscopic object like a person and a microscopic object like an electron? A significant gravitational field. What's the only force we don't have a quantum theory of? Gravity. It doesn't seem too unlikely to me that a quantum theory of gravity would explain the conditions that wavefunction collapse occurs under. Penrose has put forward a reasonable theory based upon this idea. Weinberg talks about the possibility here. On the other hand, if no theory of wavefunction collapse is found and in fact evidence is found that entanglement occurs for macroscopic objects, then you would be led to the conclusion that collapse doesn't happen and to something like the many worlds theory.
  2. Quote:Original post by Daerax
    A proper answer is here:
    Quote:Can quantum nonlocality be used for faster-than-light or backward-in-time communication? Perhaps, for example, a message could be telegraphed from one measurement site of the EPR experiment to the other through a judicious choice of which measurement was performed. The simple answer to this question is "No!" Eberhard has used the standard formalism of quantum mechanics to prove a theorem demonstrating the impossibility of such nonlocal superluminal communication [11,12]. Briefly, the quantum operators characterizing the separated measurements always commute, no matter which measurement is chosen, so non-local information transfer is impossible. Nature's superluminal telegraph cannot be diverted to mundane human purposes.

    If I understand this argument correctly, it goes like this. I have two separated entangled quantum subsystems, A and B. When I measure the state of A, the entire system collapses, either to |0>A|1>B or to |1>A|0>B. There is a probability amplitude for each of these outcomes. If the first outcome occurs and I find 0 at A, then when you measure B you will find 1, with 100% certainty. But you, at B, cannot know that the result of your measurement has already been fixed by my measurement at A, unless I send this information to you in some other, slower way. You cannot tell the difference between a result I have already fixed and a purely random outcome. In fact, relativistically, to another observer the measurement at B is the one that occurred first and fixed the result at A to be 0. Because the measurements commute, this all works out nicely. However, if you remove the probabilities and replace them with a deterministic theory of collapse, this argument seems to fall apart: the measurements seem to become non-commuting. (I've sketched the standard setup below.)
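    A minimal sketch, in standard notation (mine, not the quoted article's), of the setup being described: the entangled pair and the Born probabilities that B sees on its own.

    \[
      |\psi\rangle \;=\; \alpha\,|0\rangle_A |1\rangle_B \;+\; \beta\,|1\rangle_A |0\rangle_B,
      \qquad |\alpha|^2 + |\beta|^2 = 1 .
    \]
    \[
      P(0 \text{ at } A) = |\alpha|^2, \qquad
      P(1 \text{ at } B \mid 0 \text{ at } A) = 1, \qquad
      P(1 \text{ at } B) = |\alpha|^2 .
    \]

    B's marginal statistics are the same whether or not A has already measured, which is what the commuting-operators argument formalises; a deterministic collapse rule would have to reproduce exactly these marginals to keep that property.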
  3. Can someone explain something to me? Under the Bohm interpretation, or the explanation based upon entropy which Daerax was discussing (for what it's worth, Roger Penrose talks about a similar idea in Shadows of The Mind), or any non-local hidden variable theory, how is it that FTL transfer of information is ruled out? If measurements are not probabilistic, and so their results can be known, doesn't this mean the non-local effects in quantum mechanics can be used to transfer information faster than the speed of light?
  4. prefix lists in Haskell

    Quote:Original post by DevFred
    I'm sure there's something like that in the standard prelude, only better. How is it called?

    take. If you know the type of the function you're looking for, which in this case is Int -> [a] -> [a], then you can search for it with Hoogle (sketch below).
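    A minimal sketch of the point, assuming GHCi; the hand-rolled prefix is a hypothetical name used for comparison, not a Prelude function:

    -- 'take' from the Prelude returns the first n elements of a list.
    -- ghci> take 3 [1,2,3,4,5]
    -- [1,2,3]

    -- The same thing written by hand:
    prefix :: Int -> [a] -> [a]
    prefix n _ | n <= 0 = []
    prefix _ []         = []
    prefix n (x:xs)     = x : prefix (n - 1) xs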
  5. Higher Kinded Types, Sorts and Birds

    Kinds serve the same purpose as types, but at the level of types rather than at the level of terms. Without kinds, or equivalently with only one kind (the kind of proper types, usually called *), you would have trouble writing, say, a generic monad interface. The type of return in Haskell is (Monad m) => a -> m a, where m has kind * -> *. You need to be able to parametrise over not just proper types, but type constructors like Maybe, List and IO (see the sketch below).
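    A minimal sketch of why the kind * -> * matters, using a made-up MyMonad class for illustration rather than the real Monad:

    {-# LANGUAGE KindSignatures #-}

    -- The class parameter m is a type constructor of kind * -> *,
    -- not a proper type of kind *.
    class MyMonad (m :: * -> *) where
      ret  :: a -> m a
      bind :: m a -> (a -> m b) -> m b

    -- Maybe has kind * -> *, so it can be an instance:
    instance MyMonad Maybe where
      ret             = Just
      bind Nothing  _ = Nothing
      bind (Just x) k = k x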
  6. Quote:Original post by Bregma
    The right thing to do is to use a refcounted pointer, not auto_ptr, with a standard library container.

    Or use a boost pointer container.
  7. Quote:Original post by Dmytry
    There is no point, unless you got a solution that is really as good for everything as real overloading.

    It is; that was the whole point of my post. The only difference is that when you want the specific case of an "overloaded" function rather than the generic case, you have to instantiate the type. The solution is semantically identical and completely equal up to local syntactic transformations. I'm specifically not talking about global restructurings of a program to obtain a Turing-equivalent one; I also listed a set of features you can't obtain by local syntactic transformation. I'm not really defending ML as such. I just wanted to point out how "overloading" is done, since modules are a key part of the language, and to get more details on the problems people have with the language, since I'm quite interested in how it can be improved.
  8. Quote:Original post by Hk
    Well, lets start with my favorite language, so I am hated right away. This language is C. Yeah. I love C, as C is pretty much functional programming without all the jumping through hoops due to theoretical obfuscations and missed simple things. (Granted, C has VERY dark spots, however, usually, you just need a small set of consistent rules and things work.)

    C is a functional language without higher-order functions, exceptions/continuations, parametric polymorphism (generics), type abstraction, a sound semantics, or type inference, and with no way of expressing these things. So it's not really much of a functional language. The things you want - while loops, for example - can be expressed in functional languages. That means it's a simple local translation; it's just a matter of syntax - map code A to code B (OCaml has special syntax for while anyway). You can't do this for the above features which are lacking from C. Instead you need to globally reorganise the program; it's not just a local mapping (and in some cases even a global reorganisation won't do). As a result, programs are longer and less modular. There's something lacking semantically, not just syntactically, from C.

    Quote:There are algorithms that want to be written with recursion, but there are many other things that just want a little while-loop (or, for. iterators. something like that.) churning through data and saying "5" after that.

    I agree completely. But functional languages do allow both styles. Anyway, I agree with Simian Man that you should check out OCaml. It has parametric polymorphism/generics, loop syntax, structs/records and all the stuff I mentioned above.

    Getting round OCaml's lack of overloading: Haskell has overloading (with type inference) through type classes. OCaml's modules are equivalent to type classes in some ways. Look up functors, which are basically parametrised interfaces (modules) which you instantiate at specific types to get specific implementations. Type classes are a restricted version of functors which don't require you to instantiate them explicitly.

    Quote:Original post by Sneftel
    Modules. This is a little more difficult to get right, but OCaML already did so there's no excuse. Thumbs down.

    Yeah, Haskell's module system is a bit weak compared to ML's. What exactly do you find lacking, though?

    Quote:Can't use type classes for runtime polymorphism. In a lazy language, this should be eminently possible. Not allowing it smacks of language theory fascism. Not everyone loves algebraic types all the time, particularly when it comes time to make a nontrivial application. Give us our damn virtual function binding.

    I don't get this. Type classes are usually implemented using dynamic dictionary passing and look-up. That's runtime polymorphism, is it not (see the sketch below)? Can you give an example of something type classes can't do that you want them to be able to?

    [Edited by - Rebooted on June 18, 2008 1:44:54 AM]
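    A minimal sketch of the dictionary-passing reading of type classes; MyEq and EqDict are made-up names for illustration, not GHC internals:

    -- The class...
    class MyEq a where
      eq :: a -> a -> Bool

    -- ...corresponds to an explicit record of methods (a "dictionary"):
    data EqDict a = EqDict { eqMethod :: a -> a -> Bool }

    -- A constrained function...
    member :: MyEq a => a -> [a] -> Bool
    member x = any (eq x)

    -- ...corresponds to one that receives the dictionary at run time,
    -- which is why class methods do give you runtime polymorphism:
    memberD :: EqDict a -> a -> [a] -> Bool
    memberD d x = any (eqMethod d x)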
  9. A C-like scripting language?

    Quote:Original post by SnotBob
    I think ToorhVyk has a problem with the universal type not being there out of the box, which I agree is an issue, because otherwise there isn't one agreed upon way of doing it.

    That is an issue. So ideally you want a standardised Uni type of numerical constants, strings, tuples, functions and the like, and then a standardised library of functions for working with Uni values, like the ones I wrote in my last post. Other than that, I don't think there's much of a problem. You could use a macro system to define syntactic sugar for the type, similar to the [.., ..] syntax for building lists (the macro would automatically map '1' to 'I 1', etc.), but I don't really think it's too much trouble to work without special syntax. Let's see how painless it is in Haskell, once I've overloaded some infix functions.

    instance Num Uni where
      x + y = plus $ x $ y
      -- ...

    *Main> :t I 1 + I 3
    I 1 + I 3 :: Uni
    *Main> I 1 + I 3
    I 4
    *Main> :t not $ True
    not $ True :: Uni
    *Main> not $ True
    False
    *Main> :t I 3 + False
    I 3 + False :: Uni
    *Main> I 3 + False
    *** Exception
    *Main> :t Fun (\x -> x)
    Fun (\x -> x) :: Uni
    *Main> :t Fun (\x -> I 3 + False)
    Fun (\x -> I 3 + False) :: Uni
    *Main> Fun (\x -> x + I 3 + I 2) $ I 17
    I 22
    *Main> :t [I 1, True, Fun (\x -> x)]
    [I 1, True, Fun (\x -> x)] :: [Uni]

    Personally, I don't have a problem with this syntax. Fun (\x -> ...) doesn't look too different from ML function syntax, fn x => ...; I don't mind writing f $ x instead of f x or I 1 instead of 1; and once you overload infix operators, I don't mind writing untyped arithmetic expressions either. Notice that the addition of 3 and False only signals an error at execution time, and that the "heterogeneous" list is supposedly something statically typed languages can't do.

    Edit: I just realised you can remove the need for the 'I' syntax by implementing the Num type class method fromInteger: fromInteger i = I i (the full instance is sketched below this post). Now you can just say:

    *Main> :t 3 + False
    3 + False :: Uni
    *Main> :t [1, True, Fun (\x -> x)]
    [1, True, Fun (\x -> x)] :: [Uni]

    And the compiler knows from the context that you mean the Uni 3, not the Int 3 or the Float 3. After this there is nearly no syntactic complexity left.

    [Edited by - Rebooted on June 5, 2008 9:51:35 PM]
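    A minimal sketch of the full instance hinted at by the "-- ..." above, assuming the Uni type and plus defined in the post below; the methods not needed for the demo are left as errors, fromInteger converts through fromInteger because I takes an Int, and older GHCs will also want Eq and Show instances for a Num instance:

    instance Num Uni where
      x + y         = plus $ x $ y
      fromInteger i = I (fromInteger i)
      (*)           = error "(*) not defined for Uni in this sketch"
      abs           = error "abs not defined for Uni in this sketch"
      signum        = error "signum not defined for Uni in this sketch"
      negate        = error "negate not defined for Uni in this sketch"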
  10. A C-like scripting language?

    Quote:Original post by ToohrVyk
    How do you call this "not writing an interpreter or virtual machine" ?

    I don't understand: no interpreter or virtual machine is written. There is just a direct mapping of untyped programs from language A to fully equivalent typed programs of language B. If you mean the syntactic sugar, then I don't see how a purely syntactic mapping added for convenience counts as a virtual machine. It's not as if it's hard to program without special syntactic sugar for the type anyway; we do it in ML for most types we define:

    import Prelude hiding (($), True, False, not)

    data Uni = True | False | I Int | Fun (Uni -> Uni)

    (Fun f) $ x = f x

    not :: Uni
    not = Fun (\b -> case b of
                       True  -> False
                       False -> True)

    inc :: Uni
    inc = Fun (\(I i) -> I (i + 1))

    plus :: Uni
    plus = Fun (\(I x) -> Fun (\(I y) -> I (x + y)))

    *Main> not $ True
    False
    *Main> inc $ I 3
    I 4
    *Main> plus $ I 12 $ I 7
    I 19
    *Main> plus $ I 12 $ True
    *** Exception

    [Edited by - Rebooted on June 5, 2008 12:30:14 PM]
  11. A C-like scripting language?

    I'm not sure functional programming is going to become mainstream soon either. Maybe, though; C# is getting more functional programming support and F# is becoming an official Visual Studio language.

    Quote:Original post by ToohrVyk
    This is the classic Turing tarpit argument. Yes, since OCaml and PHP are both turing-complete, you can trivially write in both languages a virtual machine for a simple type-less language. The real question is whether this fits within the language's design and typical usage patterns.

    The argument doesn't have much to do with Turing-equivalence. I'm not talking about writing an interpreter or virtual machine for an untyped language in a typed language. I'm talking about the relative expressive power of languages, in the vein of On the Expressive Power of Programming Languages. You said "sure, the absence of types means your language is more expressive than about every single other language except Java and C#", presumably because Java and C# have an Object type. I'm just saying that holds for ML too; you just have to write the Object type, the type of all untyped values, first. It's the typed language, therefore, that is more expressive, because it can do everything the untyped language can and more.

    Quote:In practice, nobody writes OCaml type-less code by hand

    I've written code like that before. All you need to make it practical is syntactic sugar for creating values of type Uni, so you can say something like '1' and get the value representing the number 1 of type Uni, and something like 'true' to get the value representing the boolean true of type Uni. This would be easy to add using Camlp4.

    Quote:In fact, some type-less idioms are so powerful that the OCaml developers made heavy efforts to incorporate them into the type system (for instance, format strings for use by the Printf module)—I could write a printf-like function in PHP with ease (although it involved defining my own format specifiers) yet could not do the same in a type-safe manner in OCaml.

    Well, you're comparing two different functions, an untyped printf and a typed printf. Untyped printf (of type Uni) should be just as easy to write in OCaml as in PHP (well, if OCaml had variable arguments). Typed printf like the one in the Printf module, which statically checks its arguments, takes much more effort, but it's not writable at all in PHP! (There's a sketch of the idea below.)
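    A minimal sketch, in Haskell since that's what the other posts use, of how a statically checked, variadic printf-like function can be typed; this is the standard type-class trick rather than the real Printf module, and MyPrintf/spr are made-up names:

    {-# LANGUAGE FlexibleInstances #-}

    class MyPrintf r where
      spr :: String -> r

    -- Base case: no more arguments, return the accumulated string.
    instance MyPrintf String where
      spr acc = acc

    -- Step case: consume one more showable argument and keep going;
    -- the result type decides how many arguments are accepted.
    instance (Show a, MyPrintf r) => MyPrintf (a -> r) where
      spr acc x = spr (acc ++ show x)

    -- ghci> spr "coords: " (1 :: Int) (2 :: Int) :: String
    -- "coords: 12"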
  12. A C-like scripting language?

    The ML code above uses brackets for precedence, like in C, and for tuples. The reason brackets are needed around -20 and -15 is so that the '-' is parsed as part of the number, not as an application of the subtraction operator.

    Personally, I find ML code much, much easier to understand than C. The code isn't short because of the syntax of the language - short abbreviations and the like can only get you so far. It's short because of the semantics of the language - it has features like first-class functions, which mean you can express algorithms much more directly. On top of this, the language has a sound semantics - code cannot crash, and execution is completely defined by the language definition: it can't go off into undefined territory - and a rich type system with excellent support for data abstraction, which C noticeably lacks. A competent programmer should be able to pick up a new syntax easily. Syntax is trivial, unless it was actually designed to be incomprehensible like Brainfuck's. You should judge languages by their semantics, not their syntax. And for a scripting language you should jump at the chance to use a language with more expressive semantics than C, whether it's Lua, Python, ML, Scheme or whatever.

    Quote:Original post by ToohrVyk
    While I do consider myself a good developer, the number of errors that can happen with PHP is staggering—sure, the absence of types means your language is more expressive than about every single other language except Java and C# (and even then, object can only get you so far), but this usually means that you can express original and unusual errors in that language.

    I think you're wrong here, but this is an obvious viewpoint. A language with types is more expressive than an untyped language. Every ML has a LISP as a sub-language. Say we have a typeless LISP-like language with integers, booleans and functions. You can view this language as untyped, or equivalently as unityped. The ML has not just one but many types, one of which is equivalent to the unitype of the LISP language:

    data Univ = I Int | True | False | F (Univ -> Univ)

    If you program using terms of only this type, then you can do everything you can in the LISP language. The LISP language can only express un(i)typed programs, but the ML language can express both un(i)typed and (many-)typed programs. It's strictly more expressive, and strictly less restrictive. Every LISP program can be directly translated to an ML program with the same semantics, but the reverse isn't true (I'm talking about two hypothetical ML-like and LISP-like languages that are completely the same except for typing; see the sketch below). It's not that OCaml is more restrictive; it's that it enables you to use typeful techniques for data abstraction and the like. If you want to, you can program as you would in a typeless/dynamically typed language, just as you can program without first-class functions if you want to.

    I believe the school of thought that develops languages like ML is basically that more expressive languages are better than less expressive ones, because programs in more expressive languages can be shorter, more modular and easier to understand. A language with first-class functions is more expressive than one without, a language with first-class continuations is more expressive than one without, and a language with a more powerful type system is more expressive than one with a less powerful type system.
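    A minimal sketch of the translation claim, written in Haskell to match the other posts rather than ML; B replaces the True/False constructors to avoid clashing with the Prelude, and the program is my own illustration, not from the thread:

    data Univ = I Int | B Bool | F (Univ -> Univ)

    -- Application at the single "unitype"; a wrong shape is a run-time
    -- error, exactly as in the untyped language.
    app :: Univ -> Univ -> Univ
    app (F f) x = f x
    app _     _ = error "applied a non-function"

    -- An untyped-style factorial: every value involved has type Univ.
    fact :: Univ
    fact = F (\u -> case u of
                      I 0 -> I 1
                      I n -> case app fact (I (n - 1)) of
                               I m -> I (n * m)
                               _   -> error "run-time type error"
                      _   -> error "run-time type error")

    -- ghci> app fact (I 5)   evaluates to   I 120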
  13. Applicability of Haskell

    Quote:Original post by Phil Scott
    There are plenty of benefits in realising IO in monads, so again, I would not call them a hack.

    I wasn't saying I think IO monads are a hack. I actually said I like the IO monad for input/output, but I don't think mutable state should be bundled into the IO monad. State is a completely internal effect that exists to make programs easier to write, not to have some external effect on the world. It should be handled some other way, I think, but that other way could still preserve referential transparency, like GHC's ST monad.

    What I was saying is that there are two kinds of monads: library monads written in Haskell code, like State, Either and Maybe, which are organised monadically so that we can operate on all monads generically; and primitive language features like file IO and true mutable state, which would otherwise break referential transparency, organised monadically so that they are RT and so that effect information is encoded in the type. Then I said you can't expect State to compensate for the lack of true mutable state, or the CPS monad for the lack of first-class continuations. If you want to do state-chaining then State is great, and if you want to do continuation-passing style then Cont is great. But if you want to use true mutable state or real first-class continuations you are out of luck. It's a fact that you can't express true mutable state in terms of higher-order functions. If you want true state it has to be primitive in the language, as it is in imperative languages or in GHC with its IORefs and ST monad. Or you need some other primitive feature you can express mutable state in terms of, like Erlang-style message ports or delimited continuations.

    Finally, I said that you definitely want your language to be as expressive as possible, but whether you want true mutable state to be referentially transparent (as in GHC) or not (as in ML) is not so clear cut. There are advantages and disadvantages to each approach.

    I think calling one language "functional" and another "imperative" is probably wrong. You can talk about programming in functional and imperative styles, using certain idioms, but a programming language should be able to express all these styles. The key thing for a language is the set of semantic features it has - does it have higher-order functions? First-class continuations? Mutable state? Types? What properties do programs have - referential transparency? Decidable type inference? Etc.

    Quote:Original post by SamLowry
    Also, it seems Haskell does offer a way to localize state, i.e. not having those ugly "monad infections" going up your abstraction ladder (slide 46).

    Yes, it does. That's the ST monad I mentioned. You can 'run' stateful code written in the ST monad to "escape" from it while still preserving referential transparency, so you can implement imperative algorithms that are externally seen as pure functions (see the sketch below). This is much better than bundling state in the IO monad.
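    A minimal sketch of the ST point, assuming GHC's Control.Monad.ST and Data.STRef; a toy example of my own, not from the thread:

    import Control.Monad.ST (runST)
    import Data.STRef (newSTRef, readSTRef, modifySTRef)

    -- Imperative code with a real mutable reference, wrapped up as a
    -- pure function by runST.
    sumST :: [Int] -> Int
    sumST xs = runST $ do
      acc <- newSTRef 0                          -- a genuine mutable cell
      mapM_ (\x -> modifySTRef acc (+ x)) xs     -- imperative-style loop
      readSTRef acc                              -- the value escapes; the state doesn't

    -- ghci> sumST [1..10]
    -- 55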
  14. Applicability of Haskell

    Haskell is used extensively in a few places. Barclays Bank, Galois, Bluespec, Credit Suisse and Linspire come to mind.

    Quote:Although there is stuff being done with Software transactional memory, Haskell does not have built in concurrency.

    Haskell 98 doesn't have support for concurrency/parallelism, but GHC Haskell does. You have Control.Concurrent with forkIO, channels and MVars; you have STM; you have Control.Parallel with par and Control.Parallel.Strategies; and you have Nested Data Parallelism. (There's a small forkIO/MVar sketch at the end of this post.)

    Quote:If I wanted glue haskell to c (c-functions in haskell, haskellfunctions in c), what would be the way?

    Haskell has a nice foreign function interface (FFI).

    Superpig: I might have some style advice on your code later. The first thing that jumps out at me is that you're using type classes very strangely. Type classes are useful for defining interfaces like Monad, Monoid, Functor, etc. They aren't intended to be used like classes in OOP. Although I use type classes like Num all the time, I very rarely need to write my own: code tends to be written in terms of ADTs.

    Daerax:
    Quote:This is because although Haskell is a beautiful language, it really is useless in the real world. It is like an ornate sword only useful among nobility but when it comes to hacking rats, a dull, much more homely more attuned for dealing death type of sword would be preferable.

    I don't agree with this at all. Personally I'd reach for Haskell before I reached for any other language for most apps. And it has proven to be a language well suited to developing lots of stuff: web apps, window managers, compilers, interpreters, "glue code", financial applications, critical applications...

    I do agree with your criticisms of the language, though. Haskell is certainly not perfect. For example, I like the IO monad for performing IO and external effects, but not for internal/control effects like state. Internal/control effects like mutable state, exceptions and continuations exist to increase the expressiveness of the language. A language with mutable state is provably more expressive than a language without mutable state (assuming the languages are the same in all other respects). That means in general you can't take a program in the language with state and translate it to the language without state by making only local transformations - you have to globally reorganise the program. In terms of monads, that means you have to lift the whole program over the State monad. Monads are a hackish way to simulate features they can't express, only simulate locally, like continuations or mutable state. They're very nice when they can fully express something, though - like parser combinators, the Maybe monad for the possibility of null, or the Either monad for failure.

    (GHC) Haskell does have real mutable state, but it bundles it in the IO monad, which seems less than perfect. This isn't simulating state like the State monad; it's true mutable state given a monadic interface to make it referentially transparent. A better way to do it might be to have a separate Control monad that makes control effects RT and allows you to 'run' it and escape from it if doing so would preserve RT. GHC does have something like this - the ST monad.

    You definitely want your language to be as expressive as possible, but I think whether you want it to be referentially transparent is a software engineering matter with trade-offs. RT really is nice - the best explanations I've seen are those by Don Stewart, one of the authors of the new Haskell book someone mentioned. See his blog posts. On the other hand, it does have drawbacks - this is a good discussion of them.

    Quote:Its true that most of a game is amenable to functional programming since complex data types (representing stuff) are always being manipulated but a game also maintains state, something which Haskell gets ugly at quick. I think it would make more sense if most of a game were written in haskell - no effects - with another language acting as glue.

    To me, this seems the perfect case for writing the whole thing in Haskell, then. If you can write the main code in Haskell with glue in another language, why not use the IO monad or ST monad as that glue language: they are essentially DSLs for writing imperative code, with mutable state.

    Quote:Truly for safety I think Eiffel's manner of contracts is more sensible where even a theorem prover could be used for some static time verification.

    You can't really hope for contracts to be verified statically. Dependent types would be what you want, but even they are pretty impractical. Personally I like type systems for the purposes of abstraction, not correctness as such, so I'm not really interested in dependent types.

    I'll reply to your journal comment too; I haven't had much time over the last week.

    [Edited by - Rebooted on April 26, 2008 3:08:25 AM]
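    A minimal sketch of the GHC concurrency primitives mentioned above, forkIO plus an MVar used as a one-shot channel; this is my own toy example, not from the thread:

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

    main :: IO ()
    main = do
      box <- newEmptyMVar
      _   <- forkIO (putMVar box "hello from another thread")
      msg <- takeMVar box        -- blocks until the forked thread has written
      putStrLn msg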
  15. Python.

    Quote:Original post by Edward Ropple Python, being a Turing-complete language, is as good at anything as any other Turing-complete language. Whether you like its quirks (and there are many) are up to you. It's one of those love-it-or-hate-it languages.Even at a theoretical level, that's not true. Anyone who has studied PLT in any depth knows there are more important things that can be said about language semantics and the relative expressive power of languages than turing completeness (and these things are not a matter of taste, they are fact). And at the practical level you also have to consider things like the quality of the language implementations, the quality of the libraries, etc. And yes, Python is a good choice as a starting language. Much better than C or C++, at least.