Hi everyone.
I have a theoretical doubt.
I am implementing a small compiler for a custom language. For internal reasons I have to implement the compiler entirely by hand (no lex, no yacc, etc... yeah, reinventing the wheel...).
Anyway. So far I already have the lexer and the syntax parser.
For the lexer I used a hand-written automaton, which is quite fast.
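Roughly, the idea is something like this (a minimal Python sketch just for illustration, not my real code; the token names are made up):

```python
# Minimal hand-coded lexer: a state machine that walks the input
# character by character. Token names here are invented for the example.
def lex(src):
    tokens = []
    i = 0
    while i < len(src):
        c = src[i]
        if c.isspace():
            i += 1
        elif c.isdigit():                      # state: reading a number
            j = i
            while j < len(src) and src[j].isdigit():
                j += 1
            tokens.append(("NUM", src[i:j]))
            i = j
        elif c.isalpha():                      # state: reading an identifier
            j = i
            while j < len(src) and src[j].isalnum():
                j += 1
            tokens.append(("ID", src[i:j]))
            i = j
        elif c in "+-*/()=":                   # single-character tokens
            tokens.append((c, c))
            i += 1
        else:
            raise SyntaxError(f"unexpected character {c!r}")
    return tokens
```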
For the syntax parser I did the following (which I'm not quite sure is the correct way):
I defined my grammar in a text file.
Then I implemented a "grammar parser" which builds the LR(1) matrix (the action/goto tables) for that grammar. The matrix is saved in another text file.
The compiler then loads all the data from this file (terminals, nonterminals, the matrix).
Then I apply the bottom-up LR(1) parsing algorithm.
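The parse loop itself looks roughly like this (a sketch with a hand-filled table for the toy grammar S -> ( S ) | x; in my setup the tables would instead be loaded from the generated file):

```python
# Table-driven LR parse loop for the toy grammar S -> ( S ) | x.
RULES = [("S", ["(", "S", ")"]), ("S", ["x"])]   # rule index -> (lhs, rhs)

ACTION = {
    (0, "("): ("shift", 2), (0, "x"): ("shift", 3),
    (1, "$"): ("accept", None),
    (2, "("): ("shift", 2), (2, "x"): ("shift", 3),
    (3, ")"): ("reduce", 1), (3, "$"): ("reduce", 1),
    (4, ")"): ("shift", 5),
    (5, ")"): ("reduce", 0), (5, "$"): ("reduce", 0),
}
GOTO = {(0, "S"): 1, (2, "S"): 4}

def parse(tokens):
    tokens = tokens + ["$"]           # end-of-input marker
    stack = [0]                       # stack of LR states
    pos = 0
    while True:
        act = ACTION.get((stack[-1], tokens[pos]))
        if act is None:
            raise SyntaxError(f"unexpected {tokens[pos]!r}")
        kind, arg = act
        if kind == "shift":
            stack.append(arg)
            pos += 1
        elif kind == "reduce":
            lhs, rhs = RULES[arg]
            del stack[len(stack) - len(rhs):]   # pop one state per rhs symbol
            stack.append(GOTO[(stack[-1], lhs)])
        else:                         # accept
            return True
```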
From here on is where I get a bit lost.
I have read that I can generate code while doing the syntax parsing, but some semantic properties cannot be known until I have the full syntax tree.
So I decided to build the tree.
So now the syntax parser builds a tree (which is my first IR).
From there I derived a simplified tree.
By traversing that tree I think I can do the code generation. That's where I have stopped.
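For example, for arithmetic expressions I imagine a post-order walk of the simplified tree emitting stack-machine code, something like this (a sketch; the node shapes and opcodes are made up):

```python
# Code generation by a post-order walk of the tree: emit code for the
# children first, then the operator, which yields stack-machine code.
# Node shapes are hypothetical: ("num", 3) or ("+", left, right).
def gen(node, out):
    kind = node[0]
    if kind == "num":
        out.append(f"PUSH {node[1]}")
    else:                             # binary operator node
        gen(node[1], out)             # left child
        gen(node[2], out)             # right child
        out.append({"+": "ADD", "*": "MUL"}[kind])
    return out
```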
I read about attribute grammars. Although they seem useful, I don't know how they can be used.
The theory says that each grammar rule has some pseudo-code attached (semantic actions) which specifies what happens with the symbols being parsed.
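As far as I understand, for synthesized attributes this could be evaluated during the LR reduce step itself, by keeping a value stack parallel to the state stack, something like this (a sketch; the grammar E -> E + T | T, T -> num and the actions are hypothetical):

```python
# S-attributed grammar sketch: one semantic action per rule, evaluated
# at reduce time. A value stack runs parallel to the LR state stack;
# each action consumes the children's attributes and pushes the parent's.
ACTIONS = {
    0: lambda e, _plus, t: e + t,   # E -> E + T : E.val = E1.val + T.val
    1: lambda t: t,                 # E -> T     : E.val = T.val
    2: lambda n: int(n),            # T -> num   : T.val = int(num.lexeme)
}

def reduce_rule(rule, rhs_len, value_stack):
    # Pop the attribute values of the rule's right-hand side...
    args = value_stack[len(value_stack) - rhs_len:]
    del value_stack[len(value_stack) - rhs_len:]
    # ...and push the synthesized value of the left-hand side.
    value_stack.append(ACTIONS[rule](*args))
```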
My idea and question is this:
Do I now have to create a new parser which reads not only the grammar but also its attributes, and then... what?
This feels like implementing a compiler in order to implement a compiler...
So, how are attribute grammars actually implemented or used in a compiler?
I know it may sound dumb, but I haven't found a book, article, or tutorial that actually explains how to implement attribute grammars.