The program I'm currently being paid to work on uses database stuff via ADO components; TADO components in fact, as we're using Borland Developer Studio 2006 to develop this application. Keep in mind that before I started there four weeks ago I hadn't touched this program at all.
Having been given a crash course from my boss on how to hook datasets, data sources and connections together, I went on my merry way, developing and converting everything from file-based to database-based (and removing various memory leaks and other insane bits of coding as I went).
This all worked very well until I got to one particular section, where I needed to create a new entry in a database table and then associate entries in another table with the automatically generated id of that new entry. Oh, and this all had to be undo-able in case the user wants to cancel working on it and discard all the changes. Oh, and the same form acts as an editor for both new and old entries, which also need to be undo-able.
Anyway, last week I spent far, far too many hours performing voodoo: writing to the db, keeping track of when my table row was new, and all that lark, and I could add, delete, delete-all and edit things to my heart's content.
Around Sunday I realised that rolling back was impossible under the system I was using... :|
Yesterday I started working on a new piece of code voodoo to deal with rollbacks and the like; I then got sidetracked by my boss onto something else, so that never got done.
Good job too, as today I found out about 'cached datasets'.
The long and the short of it is:
- you pull a data set into memory
- you mess with it there
- if you want to hold on to the changes you commit it back to the db
- if you don't you just cancel the update
Net result: I recoded a load of stuff this afternoon, and the resulting system does more and is cleaner.
I still had to perform some id fix-ups, as the new row in the first table doesn't get an id until it's committed back to the db; however, that's a minor problem.
So, last week's work was effectively a complete waste of my time, huzzah!
At some point I've got to try and convince my boss that our method of doing the final rendering is sane.
Basically we have an output page with certain bits of text which need replacing with details; the initial solution was an RTF file, a search and replace, and job done. However, that means dealing with RTF files.
The new system uses HTML, and my solution is to use client-side ADO to connect to the database, extract the details, and then render the page using script to insert the right thing where required.
My boss has reservations about this, the main one being 'what if it breaks?'.
Well, I can see three possible ways it could break:
1: the database dies; frankly, if that happens the whole app is dead anyway.
2: it tries to render from an entry which doesn't exist, which should never happen as we tell it the id directly.
This way we get the best of both worlds [smile]
At some point I dare say the question of file locations is going to come up; I just hope we don't end up in a disagreement over the correct way to do things. Frankly, this program is going to require an installer anyway, so there's no good reason not to have the program put its files in the 'correct' locations with regard to user access permissions. However, my boss is old skool and I can see this being an issue.
In other news, the state of OpenVG on the PC disappoints me.
For those who don't know what OpenVG is: it's an API, styled after OpenGL 2.x, which draws vector graphics. The spec is doable in both hardware and software; however, what software there is is very much aimed at the mobile market, which isn't much use for those of us wanting to use it with OpenGL on the PC.
To that end we have three choices:
- A commercial implementation, the cost of which isn't advertised (which never bodes well)
- An open-source version which is totally tied to Qt (and thus Qt's annoying licence)
- An LGPL version which appears not to have been updated since July, when it was first submitted to SVN at SourceForge, and isn't complete.
The final version might be usable, although LGPL makes me shudder, but it's still disappointing. The Qt one shouldn't have been tied to the Qt library; sure, make it usable with Qt, but at the same time a library which could be extracted would have been nice.
I'd write my own if I had the time, but I don't.
In fact, really I should be working on plans for GTL 3.0 and Bonsai, both of which are penciled in for tomorrow, when I wake up and don't feel as tired as I do now.
Penciled-in GTL features are:
- Dropping the need for Boost::IOStreams
- Introduction of async file loading
- Introduction of async network loading (so from http sources), most probably via libcurl.
- Same interface as now (with additions to support async where required).
I'd also like to add a utility library to the current GTL implementation (and by extension the new one) to do things like image scaling (and maybe, maybe, direct upload to OpenGL textures, although I'm loath to do anything that API-specific). Decompression might also move out, and maybe a DXT compression lib so that non-DXT files can be compressed. Oh, and mipmap generation might be nice. However, these are secondary goals and would, as noted, be in the form of a utility library and not part of the main API; overloading the main API is something I want to avoid.
Async file loading brings an interesting problem forward, as it might require threading, and I'm not sure how to handle that without introducing additional dependencies. I could use Boost, as that would keep it cross-platform, but that would mean replacing one compiled lib (IOStreams) with another (Threads). OpenMP is another idea; however, that would cut out VS.NET support, as OpenMP was only introduced in VS2005.
Maybe the alternative is to make it configurable... we'll see how the design pans out I think.
I think I'm done for now, a few more hours awake then sleep and plans to crack on with.