I Hate Type Inference.

Started by Luca
10 comments, last by Nypyren 13 years, 7 months ago
Really, I love type inference; what I hate is that I've grown so used to it that I can't stand working in languages that don't have it, which includes some of the languages I have to use for uni!
Did you have a question, or were you looking for any sort of feedback here?

If you just want to vent, you might consider services such as Twitter or a Facebook status.

- Jason Astle-Adams

So... you expect a hug or something?
Sounds like a personal problem...

That said, I tend to actively dislike type inference. Makes code harder to grok, makes it a little easier to break at runtime, for what? 20 seconds less typing... once.

It's not the worst thing in the world; it works better in some languages than others, and it's fine in certain cases (e.g. var foo = new Foo();). But mostly, bleh.
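To make that concrete, here's a minimal C# sketch of the readability trade-off (LoadScores and the dictionary shape are hypothetical, just for illustration):

using System.Collections.Generic;

class Example
{
    // Hypothetical loader; stands in for any non-obvious call site.
    static Dictionary<string, List<int>> LoadScores()
    {
        return new Dictionary<string, List<int>>();
    }

    static void Main()
    {
        // Inference costs nothing when the right-hand side names the type anyway:
        var scores = new Dictionary<string, List<int>>();

        // It's harder to grok when the type hides behind a call site; a reader
        // has to chase down LoadScores() (or hover in an IDE) to learn what
        // 'loaded' actually is.
        var loaded = LoadScores();
    }
}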
I feel your pain, Luca. The redundancy in Java kills me now that I know how much better it can be done (C#).
Quote:Original post by Telastyn
That said, I tend to actively dislike type inference. Makes code harder to grok, makes it a little easier to break at runtime, for what? 20 seconds less typing... once.


Harder to grok perhaps, but the strong, static type inference systems of Haskell or Objective Caml are extremely hard to break at runtime.
Quote:Original post by Simian Man
Quote:Original post by Telastyn
That said, I tend to actively dislike type inference. Makes code harder to grok, makes it a little easier to break at runtime, for what? 20 seconds less typing... once.


Harder to grok perhaps, but the strong, static type inference systems of Haskell or Objective Caml are extremely hard to break at runtime.


To be clear, I was talking about writing code (or changing/maintaining code) where everything compiles nicely (the type inference resolves to something that still 'works') but does not do what the author intended. Still hard to do (and harder to do in Haskell than in, say, C#), but easier than in non-inferred scenarios.
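A hedged C# sketch of what I mean (the names are made up); the inferred code compiles and runs, it just quietly isn't the snapshot the author had in mind:

using System;
using System.Collections.Generic;
using System.Linq;

class Example
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3 };

        // The author intends a snapshot of the even values, but Where()
        // returns a lazy IEnumerable<int>, and inference accepts it silently.
        var evens = numbers.Where(n => n % 2 == 0);

        numbers.Add(4);

        // The 'snapshot' now reflects the later mutation: prints "2, 4".
        Console.WriteLine(string.Join(", ", evens));

        // Writing the intended type would have refused to compile without
        // a .ToList(), which is exactly the materialization that was missing:
        List<int> snapshot = numbers.Where(n => n % 2 == 0).ToList();
    }
}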

jbadams: I apologise for the esoteric post, but yes, I was looking for a discussion, not just venting. :P

I'm not sure what you mean by resolving to something that still 'works'. I've very rarely had a problem caused by type inference; it normally boils down to me writing 1 instead of 1.0 when I want it to infer a float rather than an int.

(I use haXe, which is like type-inferred Java/AS3 mixed with enums that work much like Haskell-style data types.)
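For what it's worth, here is roughly that gotcha in C# (I can't speak for every haXe detail, so treat this as an analogue):

class Example
{
    static void Main()
    {
        // '1' is an integer literal, so inference picks int:
        var speed = 1;
        // speed = 1.5;   // compile error: a double doesn't fit in an int

        // '1.0' is a floating-point literal, so inference picks double:
        var speed2 = 1.0;
        speed2 = 1.5;     // fine; this is what the author wanted
    }
}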
Quote:Original post by Telastyn
Quote:Original post by Simian Man
Quote:Original post by Telastyn
That said, I tend to actively dislike type inference. Makes code harder to grok, makes it a little easier to break at runtime, for what? 20 seconds less typing... once.


Harder to grok perhaps, but the strong, static type inference systems of Haskell or Objective Caml are extremely hard to break at runtime.


To be clear, I was talking about writing code (or changing/maintaining code) where everything compiles nicely (the type inference resolves to something that still 'works') but does not do what the author intended. Still hard to do (and harder to do in Haskell than in, say, C#), but easier than in non-inferred scenarios.


I knew what you meant, but that is much, much rarer in any ML-based type system than in a C-based, explicitly typed system. Pointers or references pointing to things you're not expecting are practically impossible in OCaml, for example, while they are perhaps the biggest source of runtime errors in C-based languages.
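For anyone who hasn't worked in an ML, a small C# sketch of the failure mode being contrasted (illustrative only): the lookup below compiles, hands back a reference to nothing, and blows up at runtime, whereas OCaml's equivalent lookup returns a string option, and the compiler rejects any use of the value that doesn't first handle None.

using System;
using System.Collections.Generic;

class Example
{
    static void Main()
    {
        var config = new Dictionary<string, string>();

        // TryGetValue leaves 'value' as null when the key is absent;
        // the type system raises no objection to dereferencing it.
        string value;
        config.TryGetValue("missing-key", out value);
        Console.WriteLine(value.Length); // NullReferenceException at runtime
    }
}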
My $0.02:

Explicit types may have been nice in the pre-IDE days, as they made code more self-documenting. Today, I'd much rather mouse over a variable and let the compiler tell me its inferred type if I need to know, rather than be forced to read and type it everywhere.

Even C# is doing inferred types these days.
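(For readers who haven't seen it: C# 3.0's var is optional where the type is nameable, and required for anonymous types, which have no name you could write out. The query shape below is made up:)

using System.Linq;

class Example
{
    static void Main()
    {
        int[] numbers = { 1, 2, 3, 4 };

        // Optional inference: this could equally be declared as int[].
        var evens = numbers.Where(n => n % 2 == 0).ToArray();

        // Required inference: the elements are an anonymous type, so
        // 'var' is the only way to declare the result.
        var labelled = numbers.Select(n => new { Value = n, IsEven = n % 2 == 0 });
    }
}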
Anthony Umfer

