
# A pragmatic language for the future

Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

610 replies to this topic

### #21 Rebooted  Members  -  Reputation: 612

Posted 28 March 2006 - 11:07 AM

I need to make clear that where constraints are used, the system is statically typed; all verification is done at compile time where possible. It's actually impossible for OddNumber to contain an even number: the compiler will choke if you attempt to assign an even literal to it, or the explicit cast required to assign a variable of another type to it will fail at runtime.
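A runtime analogue of that failing explicit cast can be sketched in Python (purely illustrative; `OddNumber` here is a stand-in class, not proposed Foo syntax):

```python
# Hypothetical sketch: a constrained integer type whose validator runs on
# construction. A real Foo compiler would also reject invalid literals
# statically; Python can only model the runtime half (the failing cast).
class OddNumber(int):
    def __new__(cls, value):
        if value % 2 == 0:
            raise TypeError(f"{value} is not odd")  # the explicit cast fails
        return super().__new__(cls, value)

x = OddNumber(3)      # fine: 3 passes the constraint
try:
    OddNumber(4)      # "casting" an even value fails at runtime
except TypeError as e:
    print(e)          # 4 is not odd
```

The static part (rejecting `OddNumber x = 4` at compile time) is the piece only the compiler can provide.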

Falling Sky:
This language is mostly statically typed, so I don't think Parrot would be a good fit. I think it's important that we are able to target .NET, too.

How did they get IronPython's dynamic typing to work with .NET?


### #22 ApochPiQ  Moderators  -  Reputation: 10412

Posted 28 March 2006 - 11:30 AM

Quote:
Original post by CoffeeMug
Quote:
Original post by ApochPiQ
Static typing as a general default

I strongly disagree. I want inferred typing where possible by default. When inference fails, the most I can handle is a warning. I want to be able to place type constraints and have the type checker generate warnings. I don't want to spend my time making the compiler happy for the sake of making the compiler happy.

How do you propose implementing type inference when there are nontrivial semantic types defined by the user? I don't see these things being reconcilable. In any case I think the ability to define type munging operations makes type inference redundant for the most part.

Frankly, it has nothing to do with "making the compiler happy for the sake of making the compiler happy." It has to do with contractual obligations, consistent obedience of type semantics, and eliminating undefined behavior by clamping things to clearly denoted boundaries. The idea is to make it hard to write invalid code, not make it hard to write valid code.

### #23 Azh321  Members  -  Reputation: 569

Posted 28 March 2006 - 11:40 AM

Parrot also has numbers, strings, and floats. But yes, I had .NET in mind also. It's possible to do dynamic types in it...look at Boo's duck typing. Seems pretty complex to implement it, though.

### #24 Nathan Baum  Members  -  Reputation: 1027

Posted 28 March 2006 - 11:48 AM

Quote:
Original post by ApochPiQ
If I can't be shown a simple example that clearly demonstrates the advantage of some feature, I don't want it.

This is a dangerous requirement. Many very useful language features are only seen to be useful with complex examples.

Think of the myriad arguments regarding "goto". Nefarious goto abolitionists demand simple examples where goto simply must be used, knowing full well that any simple example can be transformed into something equally simple that doesn't use goto.

Similar arguments apply to implicit typing -- in a simple example, adding explicit types isn't a hassle; labelled break and continue -- you can either use goto, or add a little more logic; lexical closures -- you can just use boost::lambda.

Closures are also a good example where a useful language feature is primarily useful in concert with other language features. Closures are really only syntax sugar in languages without garbage collection and a rich set of higher-order functions in the standard library. But then, without closures, having a rich set of higher-order functions in the standard library is easily argued against. And some people will always argue against garbage collection.
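That synergy is easy to see in any garbage-collected language with higher-order functions; a minimal Python illustration (function names are mine, not from the thread):

```python
# A closure capturing local state, handed to a higher-order function.
# Without GC, deciding who frees the captured environment of `threshold`
# is the caller's problem; with GC, the combination is effortless.
def above(threshold):
    return lambda x: x > threshold   # closes over `threshold`

print(list(filter(above(10), [3, 14, 15, 9, 26])))  # [14, 15, 26]
```

The closure outlives the call to `above`, which is exactly the case that is awkward without garbage collection.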
Quote:
Codify semantics of a system on both abstract and technical/concrete levels
Attempt to unify a compiler's total knowledge of a system with the programmer's total knowledge of that system

What do these mean?

Everything else is faultless.

Of particular note is the idea of building high-level abstraction upon low-level abstraction but, effectively, 'within the same language'. One obvious advantage of this is that you can mix-and-match high-level abstractions, rather than having to separate your program into, let's say, separate SQL, XSLT, PHP and Perl components. Microsoft are doing something similar with Comega, bringing together high-level abstractions from different domains, except their abstractions are fiat, rather than being built from lower-level abstractions (so you couldn't, for example, adjust the XML reader if you have a special need to do so in your particular project).

Another point is that you can also mix-and-match the level of abstraction itself: have C-level access to the graphics hardware when you need it, but not have to sacrifice Perl/Python/Ruby-level access to everything else where you do that. Pyrex is an example of this approach: it (partially) unifies Python and C -- it wraps Python's syntax over C, and allows 'C' functions to transparently refer to 'Python' variables and functions as though they were C variables and functions, and vice versa.

As regards typing, another interesting observation is that in most languages an object always has the same type. These 'validating types', or predicate types as they are more usually known, are interesting because with them the type of a mutable object may change over the course of its lifetime.

Microsoft's Vault language has a similar system where it statically tracks the possible states an object could be in over the course of its lifetime. For example, a file might be closed or open. When closed, certain functions are not permitted upon it. Vault not only flags an error at run time when a closed file is written to (for example), but can also detect when a possibly-closed file is written to at compile time.

There are not just safety gains to be had, either. An OOP OpenGL wrapper's textures could be in the state of "bound" or "unbound". Rather than flagging an error if the wrapper attempts to set the scaling mode of an unbound texture, however, it could just bind it. The particular advantage of this is that when the compiler can prove that a given texture is still bound at compile time, it can avoid even checking if the texture is bound at run time.

```
class Texture
{
  state bound;
  ...
  void border_color= (Color c) [bound]
  {
    glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, c.toArray.toPointer);
  }
  ...
};

// [bound] attribute ensures the caller will check if the texture is bound where
// this is necessary.
void do_something_with_texture (Texture [bound] texture)
{
  texture.border_color = #ff0; // Compiler knows texture is still bound -- check elided.
  texture.scale = SMOOTH;      // Compiler knows texture is still bound -- check elided.
}

void texture_mangler (Texture texture)
{
  do_something_with_texture(texture); // Runtime check.
  texture.priority = 1;               // No check.
  texture.wrap.s = CLAMP;             // No check.
}
```

What does this buy you? It buys you efficiency -- because the checks are elided when the compiler can prove they are unnecessary, your program spends less time performing calculations which are irrelevant. It buys you safety -- because you aren't responsible for checking that the texture is bound, you won't ever accidentally apply an operation to the wrong texture.

It's also an example of features working in concert: you can get efficiency without this feature, and you can get safety without this feature. But the only way to get efficiency is to sacrifice safety by manually ensuring you don't bind textures unnecessarily, whilst the only way to get safety is to sacrifice efficiency by always binding textures even if it might be unnecessary.

```
datatype oddnumber = integer where oddnumber % 2 == 1;
datatype evennumber = integer and not oddnumber;

oddnumber x = 3;
evennumber y = 2;
y = x; // barf, evennumber is explicitly not oddnumber
```

Quote:
Original post by ZQJ
This problem got me thinking that a function is often more defined by what it does than what it operates on, so perhaps the concept of member functions is unnecessary.

I'm of two minds.

Member functions are certainly technically unnecessary. However, they do seem to aid discoverability and recall: I find it easier to remember methods associated with classes than module-level functions which accept a class instance as an argument. This may reveal something deep and fundamental about the human brain, or it could just be that I've been afflicted with too much Visual Basic and Java.

IDEs typically also make it easier to discover methods: type the object's name, press dot, and you get a list of the methods you can apply to that object. I've never seen that happening for unbound functions -- type the object's name, press nothing, and you get a list of the functions you could have put before the object's name.
Quote:
 Of course if functions were separate to classes they would need a separate access scheme to prevent internal functions being called on classes. This would also allow multiple instead of single dispatch but might require all virtual functions to be listed in the class definition so the compiler knows what to look for when building the vtable. I think LISPs CLOS works something like this but I've never used it so I'm not sure.

Nope. I'm not sure what a "virtual function" would be when there are no member functions, but Common Lisp doesn't require the declaration of any functions when declaring a class, although it provides syntax for conveniently declaring accessors at the same time.
Quote:
 4) Operators: LISP doesn't suffer any kind of precedence issues because the order is totally explicit. However I think this leads to a pretty verbose language and I'd prefer it if operators had precedence as in C/C++ because I think it makes formulae clearer.

Lisp isn't really that verbose:
```
      Terminal velocity                  Lorentz factor
C     sqrt((2 * m * g) / (Cd * p * A))   1 / sqrt(1 - pow(u / c, 2))
Lisp  (sqrt (/ (* 2 m g) (* Cd p A)))    (/ (sqrt (- 1 (expt (/ u c) 2))))
```

For the most part, its verbosity comes from the long (in some cases insanely long) names for things in the standard library.

For me, the primary fault with Lisp's standard math syntax isn't the verbosity, but the fact that it's harder to compare your equation with the one in the physics book, because they don't look anything alike.
Quote:
6) Garbage collection - I'm not an expert on how to implement garbage collection efficiently, and as far as I remember a lot of the sources I've read have been conflicting. Anyway it seems to me that in a lot of cases ownership of objects is pretty obvious and therefore garbage collection is superfluous, so I think there should be a mechanism for restricting garbage collection to where it's needed.

I think that object ownership is rarely terribly obvious. Perhaps the reason your experience differs is that you're used to programming with languages where object ownership has to be perfectly clear if you are to have a hope of managing your memory.

Whilst disabling garbage collection in particular parts of the program is not unreasonable, it should be noted that simply not reporting a given object to the garbage collector, thereby "disabling" GC for that object, is not usually reasonable.

A manually managed object could not, in general, contain references to automatically managed objects, because the garbage collector would never see those references and could destroy the automatically managed object, even though it can still be accessed.

[Edited by - Nathan Baum on March 28, 2006 6:48:20 PM]

### #25 Roboguy  Members  -  Reputation: 794

Posted 28 March 2006 - 11:48 AM

Quote:
Original post by Falling Sky
You guys realize its going to be really... "fun" implementing dynamic types in a compiled language? OR will this be interpreted? In my opinion I think we should use Parrot. Its the VM being made as a universal interpreter for dynamic languages and is being made for Perl 6's primary target.

It wouldn't be that difficult. Just make a super-type from which every other type is derived.

Quote:
Original post by CoffeeMug
Quote:
Original post by ApochPiQ
Static typing as a general default

I strongly disagree. I want inferred typing where possible by default. When inference fails, the most I can handle is a warning. I want to be able to place type constraints and have the type checker generate warnings. I don't want to spend my time making the compiler happy for the sake of making the compiler happy.

What does that have to do with whether it's statically typed or not?

### #26 ZQJ  Members  -  Reputation: 496

Posted 28 March 2006 - 11:49 AM

In the static vs. inferred type argument, here's something worth considering: matrix inversion. The inversion algorithm (at least the version I usually use) involves knowing the absolute value of elements of the matrix (this isn't really necessary but there's no reason it shouldn't be allowed). For complex numbers obviously the absolute value is a different type from the original value, so in order to write a generic inversion function for matrices it must be possible to infer types.
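The type change ZQJ describes is easy to see concretely; in Python, for instance, the absolute value of a complex number is a real (Python is used purely as illustration here):

```python
# For a complex matrix, pivot selection compares |a[i][j]|, whose type
# (a real) differs from the element type (complex). A generic inversion
# routine therefore needs the intermediate type to be inferred rather
# than pinned to the element type.
z = 3 + 4j
m = abs(z)                                  # 5.0 -- a float, not a complex
print(type(z).__name__, type(m).__name__)   # complex float
```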

Also, I think the Number type requires a lot of operators to be defined. For the most generic programming we want to require the fewest possible things of our types; therefore the DoubleNumber function should only require that x can be multiplied by a number. The trouble with this seems to be that the function itself could rapidly become the shortest way of defining the requirements on a type, which seems like a bad idea.

ApochPiQ's idea of inferring the return type from what it's assigned to seems like a good one; unfortunately I think it would also require the compiler to check the code of the called function, which could make compiling difficult.

### #27 ZQJ  Members  -  Reputation: 496

Posted 28 March 2006 - 12:01 PM

Quote:
Original post by Nathan Baum
Nope. I'm not sure what a "virtual function" would be when there are no member functions, but Common Lisp doesn't require the declaration of any functions when declaring a class, although it provides syntax for conveniently declaring accessors at the same time.

That's easy - it's a function which is selected at runtime based on the types of one or more of its arguments. However, now that I think of it, that's a silly thing to talk about at this point, because it seems that static and dynamic dispatch should appear the same.

Quote:
 I think that object ownership is rarely terribly obvious. Perhaps the reason your experience differs is that you're used to programming with languages where object ownership has to be perfectly clear if you are to have a hope of managing your memory.

Well, to take a simple example: a string usually consists of an object containing a pointer to an array of characters and possibly an integer giving the length of that array. Nothing outside of the string object should ever point to that character array, and therefore it should be safe to say that they are both destroyed at the same time and there's no need to garbage collect the character array itself. I've never had much trouble with memory ownership, but as you say, perhaps that's my style: I've got C on the brain.

### #28 Azh321  Members  -  Reputation: 569

Posted 28 March 2006 - 12:09 PM

Quote:
Original post by Roboguy
Quote:
Original post by Falling Sky
You guys realize its going to be really... "fun" implementing dynamic types in a compiled language? OR will this be interpreted? In my opinion I think we should use Parrot. Its the VM being made as a universal interpreter for dynamic languages and is being made for Perl 6's primary target.

It wouldn't be that difficult. Just make a super-type from which every other type is derived.

Ummm wow...so you're saying just to make a super type in asm...well, you do that. Wait...that's another fun fun fun thing, implementing OOP in asm! You've really got to show me that ;) j/k

---

Apoch and I have been talking. We think the best approach is to have a compiler that takes Foo and spits out an intermediate language/bytecode. The intermediate lang is simply a bridge between Foo and other langs like asm. That way it would be easier to target asm, .NET, etc.

### #29 Roboguy  Members  -  Reputation: 794

Posted 28 March 2006 - 12:11 PM

Quote:
Original post by Falling Sky
Quote:
Original post by Roboguy
Quote:
Original post by Falling Sky
You guys realize its going to be really... "fun" implementing dynamic types in a compiled language? OR will this be interpreted? In my opinion I think we should use Parrot. Its the VM being made as a universal interpreter for dynamic languages and is being made for Perl 6's primary target.

It wouldn't be that difficult. Just make a super-type from which every other type is derived.

Ummm wow....so your saying just to make a super type in asm...well, you do that. Wait...thats another fun fun fun thing, implementing OOP in asm! Youve really got to show me that ;) j/k

---

Apoch and I have been talking. We think the best approach is to have a compiler that takes Foo and spits out an intermediate language/bytecode. The intermediate lang is simply a bridge between Foo and other langs like asm. That way it would be easier to target asm, .net, ect ect.

Assembly and assembly-like languages are rarely statically typed, so I don't see how that would be an issue. The type checking is done by the compiler before it's compiled into the intermediate language.

### #30 Azh321  Members  -  Reputation: 569

Posted 28 March 2006 - 12:16 PM

I'm talking about if you had dynamic types.
If you did say:
myVar = input()
myVar = 1000

The compiler doesn't know what type is first put into myVar. Variants are not easy to implement in asm.

### #31 Roboguy  Members  -  Reputation: 794

Posted 28 March 2006 - 12:18 PM

Quote:
Original post by Falling Sky
Im talking about if you had dynamic types.
If you did say:
myVar = input()
myVar = 1000
The compiler doesnt know what type is first put into myVar. Variants are not easy to implement in asm

Um, why would this be an issue if it's dynamically typed?

### #32 Azh321  Members  -  Reputation: 569

Posted 28 March 2006 - 12:22 PM

It's been an issue in all the compilers I've ever written: which operands to use, how to handle + 5 if it's a string or even a user-defined type?

### #33 Azh321  Members  -  Reputation: 569

Posted 28 March 2006 - 12:23 PM

I forgot to mention: one of the main reasons it would be better to have an intermediate language is so people can just write compilers to take the intermediate code to the target code and not worry about error checking the Foo code.

### #34 Nathan Baum  Members  -  Reputation: 1027

Posted 28 March 2006 - 12:44 PM

Quote:
Original post by ApochPiQ
I think there might be some room for type inference in Foo, but I'd prefer to avoid it. I'd really like to see the notion of data types enter the semantic realm, where I define abstract data types myself in each application (like milliseconds, pixels, etc.) instead of just working in vague stuff like float and int. Since each user-defined semantic type can have quirky behavior, it probably will become impossible to do genuine type inference.

For example, consider the oddnumber concept that I've mentioned before. If I need to infer the type of the literal 7, what do I do? If we allow a duck-typing model, we'll have to run the validator of every semantic type in the program's knowledge space in order to try to infer the type. And what if we get multiple positives? The oddnumber validator will say that 7 is definitely a valid oddnumber, but so will the primenumber validator. Type inference doesn't work here. I'm not entirely sure yet, but I suspect that with implicit munging facilities definable by the programmer, type inference won't even really be needed.

A language without type inference is incredibly annoying.

I intensely dislike having to do "std::vector<std::pair<int, std::string> >::iterator i = foo.begin();", and there's no reason for it when the compiler obviously knows that's the type that foo.begin() returns.

When types are neither inferred nor explicit, you save on typing, but then you don't get to know about type mismatches until they actually cause an error at run time.

The fact that type inference is not perfect is not a good reason not to include type inference; it's just a good reason to allow for explicit typing where necessary.

With my favourite programming language, the equivalent of "foo + foo.length" generates a warning, because trying to add foo implies a number, but trying to take its length implies a sequence, and numbers and sequences don't overlap.
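That kind of conflict detection can be modelled with a toy checker; the two-kind rule set below is invented purely to illustrate the idea of conservative inference (warn only on provable conflicts):

```python
# Conservative inference sketch: record how a variable is used and warn
# only when its uses provably conflict (number vs. sequence operations).
NUMBER_OPS   = {"+", "*", "-"}
SEQUENCE_OPS = {"length", "index", "slice"}

def check(uses):
    kinds = set()
    if uses & NUMBER_OPS:
        kinds.add("number")
    if uses & SEQUENCE_OPS:
        kinds.add("sequence")
    if len(kinds) > 1:
        return "warning: used as both number and sequence"
    return "ok"

print(check({"+", "length"}))  # the `foo + foo.length` case: warning
print(check({"+", "*"}))       # consistent uses: ok
```

A real inferencer would unify constraints across expressions, but the accept/warn boundary works the same way: nothing is rejected unless the uses cannot possibly be reconciled.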
Quote:
 In fact, DoubleNumber() will never return a valid oddnumber, but that is not verifiable at compile time.

Not necessarily. "oddnumber" and "evennumber" could extend the definitions of the arithmetic operations to provide further information about what they return. (Of course, DoubleNumber could return a valid oddnumber, since 1.5 is a Number, and 1.5 doubled is 3.)

Of course, if you've explicitly defined it as returning Number, then it would be improper for the compiler to add an evennumber annotation to that: you might have some real function which doubles its argument but later decide it should triple it.

In either case, it's obviously the Right Thing not to verify that DoubleNumber returns an evennumber. The onus is on the caller to ensure that the return value is of the appropriate type (maximum static safety being granted by using either polymorphic dispatch or a switch-on-type).
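The operator-extension idea above can be sketched as simple parity tables over integer operands (illustrative only; a real compiler would attach these facts to the extended arithmetic operators, and the 1.5 caveat is exactly why a plain Number result stays untagged):

```python
# Parity propagation for integer arithmetic:
# odd + odd = even, odd + even = odd, odd * odd = odd, anything * even = even.
def add_parity(a, b):
    return "even" if a == b else "odd"

def mul_parity(a, b):
    return "odd" if a == b == "odd" else "even"

print(add_parity("odd", "odd"))    # even
print(add_parity("odd", "even"))   # odd
print(mul_parity("odd", "odd"))    # odd
print(mul_parity("odd", "even"))   # even
```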
Quote:
This of course raises a few important questions:
Can users define concepts? I'd like this ability, but I don't know how it would be done yet.

I don't think I know what you mean by concept. I assumed that you were talking about the 'concept of an odd number', but obviously users can define that kind of concept: you've given an example syntax for how they might do it.
Quote:
 What about a true-generic "Thing" that encompasses numbers, strings, etc?

I've yet to see a good argument against a common type.
Quote:
 How does this mesh with OO concepts?

Obviously it depends upon what you mean by OO.
Quote:
Some final assorted thoughts:
Operators
Implicit precedence is good, IMHO. We're used to it, and it allows us to avoid lots of nested parentheses [grin] Mainly, though, my concerns in the area of operators lies in the notion of defining them. Defining custom operators (as words, not magic symbols) would make things very, very handy, especially if operators are allowed to also do type conversions. For instance, I could define a DotProduct and CrossProduct set of operators, and get highly self-descriptive code. Yeah, there is potential for abuse and obfuscation here as well, but as with C++'s operator overloading I think the risk is justified. Defining custom operators (and their precedence hierarchy) is important to defining domain-specific abstraction layers.

Haskell uses backticks to allow arbitrary functions to be used as binary operators:
```
dot (a, b, c) (x, y, z) = a * x + b * y + c * z
foo = (1, 0, 0) `dot` (0, 1, 0)
```

The backticks make it quite clear why the word is there, and also means that nobody has to do any magic when actually declaring the function.
Quote:
Syntax
BEGIN and END are gross. I like the curly-brace-block style, but I think it's important to keep a healthy balance between magic shapes/symbols and readable text.

Readable text like "begin" and "end"? [wink]
Quote:
 C++ has too many magic squiggles, for instance; stuff like "= 0" for pure virtual function specification is just gross.

I don't see that being a fault of the fact that it isn't a word.

If pure virtual functions had to be followed by the keyword "while", that wouldn't be any better. The problem is obviously that the syntax looks like the function is being initialised to zero, but initialisation doesn't even mean anything for functions.

(Yes, you could argue that initialising a function to null is obviously saying that the function explicitly doesn't have a definition, but can you honestly say that was obvious before you were told what it did?)
Quote:
 Some symbols are needed for expression - parentheses, brackets, block braces, operators, etc. But reliance on magic squigglies can be taken too far, and I'd like to avoid it. Part of what gives me a headache any time I try to read Lisp code is the mass of squigglies.

But there are only three 'squigglies' in typical Lisp code.
Quote:
 Brevity and clearness are good, but when communication of purpose becomes important, let's prefer words to squigglies. I never want to see a comma operator in Foo.

*blink* No "foo(10, 20, 30)" allowed? But I thought you wanted "implicit precedence" because it was what we were used to? (Never mind that you said we should "dispense with convention and tradition when they are at odds with getting results" [wink])
Quote:
Garbage Collection
Definitely has to be a way to denote stuff that is garbage collected, as well as individual blobs of data and sections of code which are not. I think GC should be the default preference as much as possible. Manual memory allocation/control is going to be important for concreteness, but it should be harder to use than the GC model, to help underscore the fact that it is also much harder to get right.

No. It's quite plain that it should be as easy to use as reasonably possible. Given that manual memory management is difficult enough as it is, I don't see a good reason to make it even more difficult just to rub it in.
Quote:
Metaprogramming
I really like the notion of running code at compile-time. I think C++ templates are half a step in the right direction; they got the Turing completeness, but fail to be really practically useful for a lot of things. I'd like to see the compile-time metaprogramming capabilities be basically identical to runtime - i.e. my compiled metaprogramming is a Foo program that operates on itself. I firmly believe that code should be self-conscious: it should know that it's code, and it should know what to do about it.

Quote:
Original post by ApochPiQ
Quote:
Original post by CoffeeMug
Quote:
Original post by ApochPiQ
Static typing as a general default

I strongly disagree. I want inferred typing where possible by default. When inference fails, the most I can handle is a warning. I want to be able to place type constraints and have the type checker generate warnings. I don't want to spend my time making the compiler happy for the sake of making the compiler happy.

How do you propose implementing type inference when there are nontrivial semantic types defined by the user? I don't see these things being reconcilable. In any case I think the ability to define type munging operations makes type inference redundant for the most part.

The easiest way to implement type inference in cases where types cannot be inferred is for the compiler to hold up its hands and say "I can't infer a precise type for this expression". Note that if you have a common type, any expression can be inferred to be of that type, so it isn't as if type inference would make it impossible to write non-trivial programs. The only potential problem is when you don't allow a common type and refuse to accept programs where the type of some expressions cannot be inferred.
Quote:
 Frankly, it has nothing to do with "making the compiler happy for the sake of making the compiler happy." It has to do with contractual obligations, consistent obedience of type semantics, and eliminating undefined behavior by clamping things to clearly denoted boundaries. The idea is to make it hard to write invalid code, not make it hard to write valid code.

What invalid code is harder to write when you don't have type inference? What valid code is harder to write when you do have type inference?

If type inference is 'conservative' -- only rejecting those constructs which it can prove invalid, rather than those constructs which it can't prove valid -- and you can add explicit type annotations if you want to (which is true for every mainstream type-inferred language I'm aware of), then I can't see valid code being any harder with type inference.

### #35 Nathan Baum  Members  -  Reputation: 1027

Posted 28 March 2006 - 12:52 PM

Quote:
Original post by Roboguy
What does that have to do with whether it's statically typed or not?

Terminology misuse. Plenty of people understand "static typing" to mean "types are explicit in the source", rather than just "types are known statically".

### #36 Nathan Baum  Members  -  Reputation: 1027

Posted 28 March 2006 - 12:53 PM

Quote:
Original post by Falling Sky
Its been in issue in all the compilers ive ever written, the operands to use, how to handle + 5 if its a string or even a user defined type?

Call a generic addition function if you don't know. Sheesh. Anyway:
Quote:
Original post by ApochPiQ
Favor programmer productivity and efficiency, even if it makes implementation of language features difficult

### #37 Azh321  Members  -  Reputation: 569

Posted 28 March 2006 - 05:14 PM

Quote:
Original post by Nathan Baum
Quote:
Original post by Falling Sky
Its been in issue in all the compilers ive ever written, the operands to use, how to handle + 5 if its a string or even a user defined type?

Call a generic addition function if you don't know. Sheesh. Anyway:
Quote:
Original post by ApochPiQ
Favor programmer productivity and efficiency, even if it makes implementation of language features difficult

OMG assembly has generic functions? NO WAY...

Please don't reply unless you have a helpful answer.
The second part of your post is helpful. I can agree with that to an extent. It's hard to follow through with, though -- very hard.

### #38 Roboguy  Members  -  Reputation: 794

Posted 28 March 2006 - 05:20 PM

Quote:
Original post by Falling Sky
Quote:
Original post by Nathan Baum
Quote:
Original post by Falling Sky
Its been in issue in all the compilers ive ever written, the operands to use, how to handle + 5 if its a string or even a user defined type?

Call a generic addition function if you don't know. Sheesh. Anyway:
Quote:
Original post by ApochPiQ
Favor programmer productivity and efficiency, even if it makes implementation of language features difficult

OMG assembly has generic functions? NO WAY...

Please dont reply unless you have a helpful answer.

You could just store the type along with the data and check the type in the functions which perform operations on the data ("generic functions"). How else would you do it?
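That tagged-value scheme is essentially what most dynamic-language runtimes compile down to; here is a Python model of the machine-level representation (names invented for illustration):

```python
# Model of a "variant": every value carries a type tag alongside its data;
# generic functions dispatch on the tags at run time, just as compiled
# code would branch on a tag word stored next to the payload.
def make(tag, data):
    return (tag, data)

def generic_add(a, b):
    ta, va = a
    tb, vb = b
    if ta == tb == "int":
        return make("int", va + vb)      # integer addition
    if ta == tb == "str":
        return make("str", va + vb)      # string concatenation
    raise TypeError(f"cannot add {ta} and {tb}")

print(generic_add(make("int", 5), make("int", 7)))      # ('int', 12)
print(generic_add(make("str", "a"), make("str", "b")))  # ('str', 'ab')
```

In assembly the tuple becomes a tag word plus a payload, and the `if` chain becomes a jump table, but the structure is the same.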

### #39 Azh321  Members  -  Reputation: 569

Posted 28 March 2006 - 06:50 PM

I've been thinking of that, but how well will that work...I can't see anything, but for some reason I'm thinking there might be a prob when it comes to getting input...but I don't see anything wrong...I'll try it out and tell you how it works ;) thanks

### #40 ZQJ  Members  -  Reputation: 496

Posted 28 March 2006 - 10:26 PM

To Falling Sky: handling structures and virtual functions in ASM isn't that hard (and you can implement the + operator via a double-dispatch function). Agreed, writing the assembler code is nasty because it involves lots of offsets you need to keep track of, but that's why we have compilers to do that sort of thing instead of doing it by hand. Anyway, I think about the easiest way to build a compiler is to write a GCC frontend - it's hardly a trivial exercise, but it's a lot simpler than doing it all yourself.
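The double-dispatch `+` mentioned above can be modelled in a few lines of Python (class names are illustrative; in assembly each `def` becomes a vtable slot):

```python
# Double dispatch: the first virtual call selects on the left operand's
# type, the second on the right operand's type, so the addition routine
# that finally runs is specialised on both argument types.
class Value:
    def add(self, other):        # first dispatch: on type(self)
        return other.radd(self)  # second dispatch: on type(other)

class Int(Value):
    def __init__(self, n): self.n = n
    def radd(self, left): return Int(left.n + self.n)

class Str(Value):
    def __init__(self, s): self.s = s
    def radd(self, left): return Str(left.s + self.s)

print(Int(2).add(Int(3)).n)      # 5
print(Str("a").add(Str("b")).s)  # ab
```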

