Feedback - Benchmarking XML Reading

4 comments, last by dmatter 10 years, 7 months ago

Guys and Girls,

Looking for some feedback on an XML reader that I've thrown together. Simply put, it reads the whole file in and, using vectors, recreates the structure of the XML file in memory, which can then be searched...

I am using a COLLADA file as a benchmark, specifically the larger of the two files in the rar at http://www.wazim.com/Downloads/AstroBoy_Walk.rar; for clarity, I'm using astroBoy_walk_Max.DAE.

The file is 1.5 MB decompressed and comprises ~14,000 lines of XML-structured data.

Currently, based on my basic benchmarking, I'm reading the file and creating the structure in 9.5 seconds on average. I feel this is too slow, especially if I plan to read in a number of these files. To be fair, though, this is a marked improvement over my first effort, which took 45 seconds on average to read the same file.
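
A coarse wall-clock measurement around the whole read-and-parse call is enough at this granularity; a sketch, with ParseFile as a hypothetical stand-in for the reader below:

#include <chrono>
#include <iostream>
#include <string>

int ParseFile(const std::string& contents); // hypothetical stand-in for the reader below

void Benchmark(const std::string& contents){
    auto start = std::chrono::steady_clock::now();
    ParseFile(contents);
    auto end = std::chrono::steady_clock::now();
    std::cout << std::chrono::duration<double>(end - start).count() << " seconds\n";
}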


int PopulateNode(const std::string& fC){
    std::string::size_type lStart = 0, lEnd = 0;
    int tagsOpen = 0;
    std::vector<XMLNODE*> nodeStack;
    nodeStack.push_back(this);

    // Walk the buffer one line at a time.
    while(((lStart = fC.find_first_not_of(" ", lStart)) != std::string::npos) &&
          ((lEnd = fC.find('\n', lStart)) != std::string::npos)){
        std::string curLine = fC.substr(lStart, lEnd - lStart);

        if (curLine.at(0) != '<'){
            // No open tag found, but there is data on the line...
            // treat it as element data of the current node.
            nodeStack.back()->elements.push_back(XMLELEMENT("value", curLine));
        }else{
            switch (curLine.at(1)){
            case '/':
                {
                    // Close tag: step back up to the parent node.
                    nodeStack.pop_back();
                    tagsOpen--;
                    break;
                }
            case '?':
                {
                    // XML declaration / processing instruction: skip it.
                    break;
                }
            default:
                {
                    // Open tag: pull out the tag name and any attributes.
                    std::string::size_type bSBoundS = 0;
                    std::string::size_type bSBoundE = curLine.find('>', bSBoundS);

                    std::vector<std::string> tempStrVec =
                        SubStrDelim(curLine.substr(bSBoundS + 1, bSBoundE - bSBoundS - 1), ' ');

                    std::string closeTag = std::string("</").append(tempStrVec.at(0));
                    std::string selfClose = "/>";

                    // The top of the stack is pre-pushed empty; if it already
                    // has a type, descend into a fresh child first.
                    if(!nodeStack.back()->nodeType.empty()){
                        nodeStack.back()->childNodes.push_back(XMLNODE());
                        nodeStack.push_back(&nodeStack.back()->childNodes.back());
                    }

                    nodeStack.back()->nodeType = tempStrVec.at(0);

                    // Any remaining tokens are attributes.
                    for(std::size_t i = 1; i < tempStrVec.size(); ++i){
                        nodeStack.back()->elements.push_back(XMLELEMENT(tempStrVec.at(i)));
                    }

                    std::string::size_type closeTagPos = curLine.find(closeTag);
                    std::string::size_type selfClosePos = curLine.find(selfClose);

                    if(closeTagPos != std::string::npos){
                        // Inline content between the open and close tags.
                        nodeStack.back()->elements.push_back(XMLELEMENT("value",
                            curLine.substr(bSBoundE + 1, closeTagPos - bSBoundE - 1)));
                    }

                    if((closeTagPos == std::string::npos) && (selfClosePos == std::string::npos)){
                        // Tag stays open past this line: descend into a new child.
                        nodeStack.back()->childNodes.push_back(XMLNODE());
                        nodeStack.push_back(&nodeStack.back()->childNodes.back());
                        tagsOpen++;
                    }else{
                        nodeStack.pop_back();
                    }
                    break;
                }
            }
        }

        lStart = lEnd + 1;
    }

    return 0;
}

In this case the input is a char* / std::string from a full file read into memory.
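
The read itself is the usual whole-file slurp, something like this (illustrative, not necessarily the exact code used):

#include <fstream>
#include <iterator>
#include <string>

// Read an entire file into one string in a single go.
std::string LoadWholeFile(const char* path){
    std::ifstream file(path, std::ios::binary);
    return std::string((std::istreambuf_iterator<char>(file)),
                       std::istreambuf_iterator<char>());
}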

I appreciate this might not be the most tidy or error-safe code. I'm working on getting it reading first, then adding in some error trapping; I'm using the errors (if any are generated) to stop me in my tracks, as I'm doing this rapid-style.

To head off questions of "why not use a library out there?": I'm using this as a learning experience, and would like to learn ways of improving my own code rather than using someone else's...

So I'd like to ask for some C&C, pointers, or even just an idea of how long it takes others to read in this same file, to see whether my expectations are unfair.

Thanks.


"I appreciate this might not be the most tidy or error-safe code. I'm working on getting it reading first, then adding in some error trapping; I'm using the errors (if any are generated) to stop me in my tracks, as I'm doing this rapid-style."

You are approaching this backwards.

First make your implementation correct, and come up with test cases to ensure correctness. Then optimize, using the test cases to ensure that you don't compromise correctness.
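
For instance, a tiny assert-based test against a known-good snippet will catch regressions while you optimize; a sketch, assuming the XMLNODE interface from the code above:

#include <cassert>
#include <string>

// Sketch: check the parser reproduces a trivial document's structure.
void TestSimpleDocument(){
    XMLNODE root;
    root.PopulateNode("<scene>\n<node id=\"1\">\n</node>\n</scene>\n");
    assert(root.nodeType == "scene");
    assert(root.childNodes.size() == 1);
    assert(root.childNodes.at(0).nodeType == "node");
}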

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

You are doing a lot of temporary memory allocation and copying during your parsing, and that will seriously degrade performance.

Strings, in particular, are expensive because they require a memory allocation and a copy any time you use substr() or append(), or when you push one into a vector. At one point you create a temporary vector of strings (tempStrVec) and transfer them into an 'elements' vector using push_back (which allocates space and copies all the chars for each string).
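
To make that concrete: instead of substr(), a non-owning reference (an offset and a length into the original buffer; C++17's std::string_view plays the same role) records where the characters live without allocating. An illustrative sketch:

#include <cstddef>
#include <string>

// A non-owning reference into the original buffer: no allocation, no copy.
struct StrRef {
    std::size_t offset;
    std::size_t length;
};

void Contrast(const std::string& fC){
    // Allocates a fresh string and copies 20 characters:
    std::string copy = fC.substr(10, 20);

    // Just records where the characters live (assumes fC has >= 30 chars):
    StrRef ref = { 10, 20 };

    // Only materialize an owning string for data you intend to keep:
    std::string kept = fC.substr(ref.offset, ref.length);
}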

Secondary to that, seeking through the string multiple times on every iteration using the find() methods ramps up the time complexity. find() is implemented as a loop: it has to search character-by-character until it finds (or doesn't find) the characters you are looking for. So you have loops within loops, repeatedly re-scanning the same text.

For speedy file parsing you generally want to load the entire file into a character buffer in one go, then zip through it, only allocating and copying out what you intend to keep at the end, producing little to no temporary structure. You also want either to take it character-by-character (without looking ahead using searches), or to do an initial pass that splits it all up into tokens and then examine it token-by-token, as in the sketch below. Char-by-char is usually faster, but token-by-token can sometimes be easier to implement.
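
A sketch of that token pass over a raw buffer: one scan, no temporary strings, and each token is just a (pointer, length) pair into the buffer (illustrative, not drop-in code):

#include <cctype>
#include <cstddef>
#include <vector>

struct Token { const char* start; std::size_t length; };

// Single pass: split the whole buffer on whitespace and angle brackets,
// keeping '<' and '>' as their own tokens. Nothing is copied.
std::vector<Token> Tokenize(const char* buf, std::size_t size){
    std::vector<Token> tokens;
    std::size_t i = 0;
    while(i < size){
        if(std::isspace(static_cast<unsigned char>(buf[i]))){ ++i; continue; }
        if(buf[i] == '<' || buf[i] == '>'){
            tokens.push_back({buf + i, 1});
            ++i;
            continue;
        }
        std::size_t start = i;
        while(i < size && !std::isspace(static_cast<unsigned char>(buf[i]))
                       && buf[i] != '<' && buf[i] != '>'){
            ++i;
        }
        tokens.push_back({buf + start, i - start});
    }
    return tokens;
}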

@dmatter Thanks for the feedback. I've taken it away, considered it over the last couple of months, and revisited my approach.

The use of tokens, over substr, appeared to have the most impact on the code I had. Char-by-char was more time-consuming to write, and I found it more problematic for generating the strings etc., whereas tokens allowed for a reasonably quick translation from char* to string, char* to float, and so on.
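
The char*-to-float part matters most for COLLADA, where <float_array> elements hold thousands of numbers; std::strtof walks the buffer directly with no intermediate strings. A sketch, assuming the numeric run is followed by a non-numeric character such as '<':

#include <cstdlib>
#include <vector>

// Parse a whitespace-separated run of floats straight out of the buffer.
// 'end' marks where the numeric data stops; because the run is followed
// by a non-numeric character (e.g. '<'), strtof stops there on its own.
std::vector<float> ParseFloats(const char* p, const char* end){
    std::vector<float> values;
    while(p < end){
        char* next = nullptr;
        float v = std::strtof(p, &next);
        if(next == p) break; // no more numbers
        values.push_back(v);
        p = next;
    }
    return values;
}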

Before changing tack completely (I'm now using a version of TinyXML; I know I didn't want to use libraries, but I think I've learnt what I needed), I had the reader down from 9.5 seconds to around 2. Note that this was still pulling in the entire file and generating a structure from it, rather than parsing only up to the points I actually need and discarding the rest.

The TinyXML implementation now pulls the file apart and gets what I need out of it in about 0.2 seconds, depending on the machine.
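
For anyone curious, the loading side only takes a few lines. A minimal sketch with the classic TinyXML API (not my exact code, and TinyXML-2's API differs slightly); library_geometries/geometry are standard COLLADA elements:

#include <cstdio>
#include "tinyxml.h"

// Sketch: open a COLLADA file and walk one library with classic TinyXML.
bool LoadCollada(const char* path){
    TiXmlDocument doc;
    if(!doc.LoadFile(path)) return false;

    TiXmlElement* root = doc.RootElement(); // <COLLADA>
    if(!root) return false;

    // Visit only the parts needed, e.g. the geometry library.
    TiXmlElement* geoms = root->FirstChildElement("library_geometries");
    for(TiXmlElement* g = geoms ? geoms->FirstChildElement("geometry") : nullptr;
        g; g = g->NextSiblingElement("geometry")){
        const char* id = g->Attribute("id");
        std::printf("geometry id=%s\n", id ? id : "(none)");
    }
    return true;
}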

Thanks for the advice.

Writing an XML parser as a learning experience makes sense only if you want to become an XML *expert*, which I presume isn't your purpose. Your code has nothing to do with XML parsing: it tries to parse one specific file under extremely specific assumptions (e.g. tags at line start), with the effect of being dangerous to you (what if the file changes?) and counterproductive towards learning actual XML. So you should really, really use a library to load the COLLADA files you need in a robust way. If you want to do something related and useful, and acquire some modest but valid knowledge of XML, I suggest writing XML files instead (saved-game configurations, tool output, etc.); see the sketch below.
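
For instance, a saved-game file is only a few lines to emit by hand. An illustrative sketch (the element names are invented, and real content would need escaping of &, < and >):

#include <fstream>

// Write a minimal, well-formed XML save file (invented element names).
void WriteSaveGame(const char* path, int level, float health){
    std::ofstream out(path);
    out << "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
    out << "<savegame>\n";
    out << "  <level>" << level << "</level>\n";
    out << "  <health>" << health << "</health>\n";
    out << "</savegame>\n";
}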

Omae Wa Mou Shindeiru

Reading between the lines here a bit: once you progress past the academic goals of writing your own parsers and 3D model loading, you might like to look into using something like AssImp as a robust way to load your mesh and animation resources.

