lgc_ustc

Type checking in a dynamically typed scripting language.


Hello, all. I am designing a dynamically typed scripting language these days. The language does not have typed variables; instead, types accompany values, as in Lua. But such a language has a disadvantage regarding type conversion, for example, the arithmetic multiplication of a string representing a number by a floating-point number. Usually this case is handled by converting the string into a number and then performing the multiplication. However, under certain conditions such a conversion is undesired or unexpected. This is a disadvantage of such "prototyping languages" criticized by some people.

I think we can let the compiler do the type checking at compile time, instead of letting this be found by the virtual machine at run time, which then performs the unexpected conversion or, in the worse case, throws an exception and stops running. Say we still let variables have types, but they are not specified using declarations like "int a;" in C; instead, a variable's type is set to the type of the value the first time it is assigned. For example, given "a = 1.05;", we know a's type is float. Once a variable's type is determined, it cannot be changed any more; if a subsequent assignment gives a value whose type differs from the destination variable's type, the compiler can detect this and emit a warning. Continuing the example above, a later assignment like "a = 5;" is fine. But the assignment a = "hello world"; is different, since this time a string-typed value is assigned to the float-typed variable. We can thus let the compiler perform such checks to ensure that no unexpected type conversions occur, while also making the syntax more concise, since variable declarations are not needed.

What do you guys say? Thanks for any response.
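The first-assignment rule described above can be sketched as follows. This is a minimal illustration in Python, not part of the language being designed; the function names and the decision to treat int and float as one numeric type (so that "a = 1.05" followed by "a = 5" passes, as in the example) are assumptions of the sketch.

```python
def value_type(v):
    """Map a value to the type the checker tracks."""
    if isinstance(v, bool):
        return bool
    if isinstance(v, (int, float)):
        return float          # ints are acceptable where floats are expected
    return type(v)

def check_assignments(assignments):
    """assignments: (name, value) pairs in program order.
    The first assignment fixes a variable's type; later assignments
    with a different type produce a warning instead of a conversion."""
    fixed = {}
    warnings = []
    for name, v in assignments:
        t = value_type(v)
        if name not in fixed:
            fixed[name] = t                   # first assignment fixes the type
        elif fixed[name] is not t:            # subsequent mismatch: warn
            warnings.append(f"warning: '{name}' was {fixed[name].__name__}, "
                            f"assigned {t.__name__}")
    return warnings

# The scenario from the post: a = 1.05; a = 5; a = "hello world"
print(check_assignments([("a", 1.05), ("a", 5), ("a", "hello world")]))
```

Only the third assignment triggers a warning; the int-to-float case is silently accepted.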

I seem to recall that old versions of BASIC would sometimes change the type of a variable in unpredictable ways, but I don't ever recall this happening with Lua or Python. I think it's rare these days for a language to treat a string that contains digits as if it were an integer or a floating-point number, because of the ambiguity you mention. So why not just eliminate such conversions up front?

I don't think a hybrid approach is worth the effort anyway, given the number of possibilities. How will you handle this:

a = 1
if (someRuntimeCondition) then b = 1.0 else b = "1.0"
a = b + a
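The difficulty above can be made concrete with a tiny sketch (the `join` helper and the `t1|t2` union notation are illustrative, not from any real checker): when the two branches bind the same name to different types, the best a purely static pass can record is "either".

```python
def join(t1, t2):
    """Merge the types a name may have after the two arms of an if/else."""
    return t1 if t1 == t2 else f"{t1}|{t2}"

# if (someRuntimeCondition) then b = 1.0 else b = "1.0"
b_type = join("float", "str")
print(b_type)  # -> float|str: the checker cannot commit to either type
```

At that point the checker must either reject `a = b + a`, insert a runtime check anyway, or support union types.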

Quote:
Original post by Kylotan

I seem to recall that old versions of BASIC would sometimes change the type of the variable in unpredictable ways, but I don't ever recall this happening with Lua or Python.

Actually, Lua does have a habit of transparently converting between numbers and strings. It's not unpredictable, but it is surprising at times.



It's only ever surprising, though, if you're in the habit of giving a damn about the types of variables. And that situation is really the exception, rather than the rule.

Depends what you mean by 'variable' really.

Consider OCaml (strong, static typing). This expression:

let a = 4 in
let b = 5 in
let a = 10 in
a + b

is legal, but this one is too:

let a = 4 in
let b = "Hello, " in
let a = "World!" in
b ^ a

(^ is the string concatenation operator).

Note that = here is not an assignment but a binding of a value to a name. The type of each variable is inferred at compile time. You might want to try this style instead of mutable variables (of course, you'll need to research how all this works and how you'd compile it).

The keyword the OP would want to look up is type inference.

Depending on how complex your language is, type inference can be quite simple to implement, so I'd recommend it if you're going for a statically typed language.
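A toy flavor of that inference, sketched in Python over expression trees shaped like the OCaml snippets above (the tuple encoding and operator names are assumptions of the sketch): '+' requires two numbers, '^' requires two strings, and every expression's type falls out bottom-up with no declarations anywhere.

```python
def infer(expr):
    """Infer 'num' or 'str' for a literal or a ("+"/"^", left, right) node."""
    if isinstance(expr, (int, float)):
        return "num"
    if isinstance(expr, str):
        return "str"
    op, left, right = expr
    lt, rt = infer(left), infer(right)
    want = "num" if op == "+" else "str"   # '^' is string concatenation
    if lt == rt == want:
        return want
    raise TypeError(f"{op} expects two {want}s, got {lt} and {rt}")

print(infer(("+", 10, 5)))                # -> num, like `let a = 10 in a + b`
print(infer(("^", "Hello, ", "World!")))  # -> str, like `b ^ a`
```

Mixing the two, e.g. `infer(("+", 4, "oops"))`, raises a TypeError before anything runs.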


Quote:

"doing it at compile time" means abandoning dynamic typing.


Yes and no. In dynamically typed languages there is much ground to be gained by doing compile time type analysis, consider the performance implications of:

a = 4
b = 15

d = a + b

Now, the compiler can determine that a and b are integers in the addition, so it can omit the type checks/conversions that might otherwise be needed. Of course, that is a toy example, but the approach does scale. It can also help to detect errors, say:

a = 4
b = "Hello, world!"

d = a + b

The compiler could detect that "Hello, world!" can't be converted to a valid integer and warn about this (presuming the + operator isn't stupidly overloaded to do string concatenation ;) ).
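The analysis described in the two examples above might be sketched like this. The toy program encoding is an assumption: statements are ("assign", name, expr), and expressions are numeric/string literals, ("var", name) references, or ("+", left, right) sums. The pass propagates each name's inferred type and flags a non-numeric '+' before any code runs.

```python
def analyze(program):
    """Walk a list of ("assign", name, expr) statements, inferring types."""
    env, warnings = {}, []

    def type_of(expr):
        if isinstance(expr, (int, float)):
            return "num"
        if isinstance(expr, str):
            return "str"
        tag = expr[0]
        if tag == "var":
            return env[expr[1]]
        if tag == "+":
            lt, rt = type_of(expr[1]), type_of(expr[2])
            if lt == rt == "num":
                return "num"   # both operands statically numeric: the VM
                               # could skip its runtime check/conversion here
            warnings.append(f"cannot add {lt} and {rt}")
            return "error"
        raise ValueError(f"unknown node {tag}")

    for _, name, expr in program:
        env[name] = type_of(expr)
    return warnings

# The failing example from the post: a = 4; b = "Hello, world!"; d = a + b
prog = [("assign", "a", 4),
        ("assign", "b", "Hello, world!"),
        ("assign", "d", ("+", ("var", "a"), ("var", "b")))]
print(analyze(prog))  # -> ['cannot add num and str']
```

On the first example from the post (a = 4; b = 15; d = a + b) the same pass returns no warnings and proves the addition numeric.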

Hi, JuNC:

The OCaml example you cited means exactly what I wanted to say: the type of a variable (name) is determined by the value it is first bound to, and the compiler checks subsequent bindings to make sure the variable's type stays consistent. The distinction between assignment and binding is also clearer to me now.

Maybe an option can be provided to the programmer to check for potential logic errors caused by a variable inconsistently changing type. If the user selects the highest warning level during compilation, this check is enabled and generates warning messages, helping the user debug; otherwise it keeps silent.
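A sketch of that option (the function name, the event triples, and the two-level warning scale are all hypothetical): type-change warnings are produced only when compiling at the highest warning level, and suppressed otherwise.

```python
HIGHEST_WARN_LEVEL = 2  # hypothetical top of the warning-level scale

def type_change_warnings(events, warn_level):
    """events: (name, old_type, new_type) triples recorded during compilation.
    Returns warnings only when the user asked for the highest level."""
    if warn_level < HIGHEST_WARN_LEVEL:
        return []                             # option off: keep silent
    return [f"warning: '{name}' changes type from {old} to {new}"
            for name, old, new in events if old != new]

events = [("a", "float", "str")]
print(type_change_warnings(events, warn_level=2))  # one warning
print(type_change_warnings(events, warn_level=0))  # []
```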

