Archived

This topic is now archived and is closed to further replies.

stodge

Design patterns/OO design for file input

Recommended Posts

I'm looking for some good resources on which design patterns to use for designing file input for my framework. Mostly I need to read (and later write) simple XML files. I'm trying to understand how to implement a general OO file I/O framework to keep code re-use at a maximum. Any suggestions? Thanks

Edit: I want it to be generic and flexible, so that I can plug in the ability at a later date to read/write Zip files, binary files, proprietary format data files etc...

[edited by - stodge on January 17, 2003 6:02:19 PM]

I'm not sure how well documented a pattern this is, but I integrated some parsing techniques studied in my parsing/compiler course.

One of the difficult things to do is to strip white space padding from the file being read. This often results in many if statements, and I found that when writing different file loaders I ended up reproducing this code. So what I have done is create a parser that reads in characters and converts them into tokens (identifiers, floats, keywords, strings, '{', '}', etc.). I can use this to screen for keywords and discard white space easily. This also has the desired effect of code reuse: instead of reproducing this code each time I write a loader, I just use the same tokenizer class for all formats. I then take these tokens and pass them to my file loader (for a specific format). This class parses the tokens and instantiates the objects that I need. This way you can abstract a lot of stuff away from the loader (after parsing into tokens you don't have to worry about white space padding, and tokens are all identified by their kind) and write many loaders for different formats.

file --> LexicalParser --> tokens --> FileLoader --> objects

The first class (LexicalParser) converts characters into tokens. The second class takes a set of tokens, parses it, and loads in the data (I guess it's a modified syntactic transducer, but it spits out objects and not code). With this approach I could write different file loaders for different formats. As well, my LexicalParser was written so that I could load in different table sets that would tokenize different types of tokens.
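To make the pipeline concrete, here is a minimal sketch of the two-stage approach in Python. The class names echo the post (LexicalParser, a format-specific loader), but the token kinds, the regex table, the `MeshLoader` class, and the `vertex { ... }` example format are all my own illustrative assumptions, not anything from the original poster's code:

```python
import re

# Hypothetical token table; each entry is (kind, regex). Swapping in a
# different table lets the same LexicalParser serve different formats.
TOKEN_SPEC = [
    ("FLOAT",  r"-?\d+\.\d+"),
    ("INT",    r"-?\d+"),
    ("STRING", r'"[^"]*"'),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("LBRACE", r"\{"),
    ("RBRACE", r"\}"),
    ("SKIP",   r"\s+"),   # white space is recognized once, here, and dropped
]

class LexicalParser:
    """Stage 1: converts raw characters into (kind, value) tokens."""
    def __init__(self, spec=TOKEN_SPEC):
        self.pattern = re.compile("|".join(f"(?P<{k}>{p})" for k, p in spec))

    def tokenize(self, text):
        for m in self.pattern.finditer(text):
            if m.lastgroup != "SKIP":   # padding never reaches the loaders
                yield (m.lastgroup, m.group())

class MeshLoader:
    """Stage 2: a format-specific loader that turns tokens into objects.
    Assumes a toy format like:  vertex { 1.0 2.0 3.0 }"""
    def load(self, tokens):
        objects, toks, i = [], list(tokens), 0
        while i < len(toks):
            kind, value = toks[i]
            if kind == "IDENT":
                name, fields = value, []
                i += 1
                if i < len(toks) and toks[i][0] == "LBRACE":
                    i += 1
                    while toks[i][0] != "RBRACE":
                        fields.append(float(toks[i][1]))
                        i += 1
                    i += 1  # consume RBRACE
                objects.append((name, fields))
            else:
                i += 1
        return objects

lexer = LexicalParser()
loader = MeshLoader()
objs = loader.load(lexer.tokenize("vertex { 1.0 2.0   3.0 }"))
# objs is [("vertex", [1.0, 2.0, 3.0])] regardless of the padding
```

The point of the split is exactly what the post describes: `MeshLoader` never sees white space or raw characters, only kinds and values, so a second loader for another format can reuse `LexicalParser` unchanged.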


Additionally, I use a state-pattern based approach with finite state machines to do the lexical parsing.
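For readers unfamiliar with that technique, here is a toy sketch of a finite-state-machine tokenizer. The states (START, NUMBER, IDENT) and transitions are my own simplified assumptions (it only handles white-space-separated tokens), not the poster's actual state tables:

```python
# Minimal FSM tokenizer: the current state decides, character by character,
# whether to keep accumulating or to emit the buffered token.
def fsm_tokenize(text):
    tokens, state, buf = [], "START", ""
    for ch in text + " ":            # trailing space flushes the final token
        if state == "START":
            if ch.isspace():
                continue             # discard padding between tokens
            elif ch.isdigit():
                state, buf = "NUMBER", ch
            else:
                state, buf = "IDENT", ch
        elif state == "NUMBER":
            if ch.isdigit() or ch == ".":
                buf += ch
            else:
                tokens.append(("NUMBER", buf))
                state, buf = "START", ""
        elif state == "IDENT":
            if ch.isalnum() or ch == "_":
                buf += ch
            else:
                tokens.append(("IDENT", buf))
                state, buf = "START", ""
    return tokens
```

In a full state-pattern implementation each state would be its own object with a table-driven transition function, which is what makes it easy to swap tables to recognize different keyword or token sets.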

I'm not sure if this is the best approach, but it works quite well for me, and I can easily create loaders for different formats or modify the LexicalParser tables to accommodate different keywords or token types.



[edited by - _walrus on January 17, 2003 12:11:37 AM]
