I took a look at the script builder add-on and saw that you store the includes first and only process them after the full file has been parsed.
This has a negative consequence: it breaks sequential pre-processor commands.
Is it done this way for design reasons, or is the order in which script sections are added to a module important?
Maybe I'm on the wrong track entirely. Does AngelScript even care about where things appear in the script? Does the order matter?
Secondly, I'm trying to get the exact token type that asCTokenizer::GetToken returns.
Is there a way to get the eTokenType value without hacking?
The order of the script sections doesn't matter to AngelScript. The only thing that matters is that each script section added to AngelScript contains only fully declared entities, i.e. you cannot declare part of a class or function in one section and the rest in another section.
Sequential pre-processing commands were not on my mind when I implemented the script builder add-on. To support them, the includes would probably have to be processed as they are encountered, which would likely make the add-on more complex.
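To illustrate the idea, here is a minimal sketch of processing includes as they are encountered rather than deferring them. This is not the actual CScriptBuilder code; the Preprocess function and the in-memory script map are hypothetical stand-ins for reading sections from disk, and only the quoted #include form is handled:

```cpp
#include <map>
#include <set>
#include <sstream>
#include <string>

// Hypothetical sketch: expand #include directives in the order they appear,
// recursing into each included script immediately. Scripts are looked up in
// an in-memory map standing in for the file system.
std::string Preprocess(const std::string &name,
                       const std::map<std::string, std::string> &scripts,
                       std::set<std::string> &included)
{
    if( !included.insert(name).second )
        return "";                       // already included once, skip to avoid cycles

    std::ostringstream out;
    std::istringstream in(scripts.at(name));
    std::string line;
    while( std::getline(in, line) )
    {
        if( line.compare(0, 10, "#include \"") == 0 )
        {
            // Extract the path between the quotes and expand it in place,
            // so later pre-processor commands see the included code first.
            std::string path = line.substr(10, line.find('"', 10) - 10);
            out << Preprocess(path, scripts, included);
        }
        else
            out << line << "\n";
    }
    return out.str();
}
```

Because each include is expanded at the point where it occurs, any pre-processor command that follows it already sees the included code, which is what sequential semantics require.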
The token definitions used internally by AngelScript are not exposed to the application, so there is no way to get them without modifying the library. The ParseToken() method could quite easily be modified to return the token definition. What exactly do you want this for?
I want this to allow people to use #include &lt;path&gt;. TC_VALUE only seems to allow " and ', so I thought of modifying it. The tokenizer does not seem to return TC_VALUE when there are no quotes. Of course I can work around this, but I first wanted to know whether there is another possibility.
Thanks and good work btw ;)
By the way, I've got a question about the coding style. Is there a reason for using a non-object-oriented style in some cases?
Example: engine->DiscardModule(const char*). I intuitively tried to use module->Discard() and then engine->DiscardModule(module).
Is the reason easier binding to other programming languages, such as C? Or are there other reasons?
Edited by thewavelength, 22 September 2012 - 08:55 AM.
The &lt;path&gt; will be parsed as 3 separate tokens; ParseToken will return TC_KEYWORD, TC_IDENTIFIER, and then TC_KEYWORD.
In your case I would change the script builder to recognize the first TC_KEYWORD, verify that it is &lt;, and then manually search for the end of the path in the script, rather than using ParseToken(). There is no need to change the library to return the internal token definitions; it wouldn't really give you any benefit in this case.
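The manual search suggested above could look something like this. The ParseAnglePath helper is hypothetical, not part of the add-on; it assumes the caller has already consumed the #include keyword and knows the position of the token that follows it:

```cpp
#include <string>

// Hypothetical helper: given the script text and the position of the token
// following #include, recognize an angle-bracket path by scanning for the
// matching '>' by hand instead of relying on the tokenizer.
// Returns true and fills 'path' if the text at 'pos' starts with '<...>'.
bool ParseAnglePath(const std::string &script, size_t pos, std::string &path)
{
    if( pos >= script.size() || script[pos] != '<' )
        return false;                    // not an angle-bracket include

    size_t end = script.find('>', pos + 1);
    if( end == std::string::npos )
        return false;                    // unterminated path

    path = script.substr(pos + 1, end - pos - 1);
    return true;
}
```

Since the path is extracted as a raw substring, characters like '/' or '.' that the tokenizer would split into separate tokens are kept intact.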
There is no hidden motive behind the coding style. It's just a leftover from the origins of the library more than 9 years ago, when the module interface didn't exist and everything was accessed directly through the engine interface. I just haven't gotten around to changing it yet.