.obj model files and slow loading times

I've created a basic program as a framework for displaying 3D scenes. The program loads .obj model files and parses them into the data structures needed for rendering (DirectX vertex buffers). I've tested it with various 3D models, and my file loading code appears to be slow; the lag between loading and displaying is noticeable. For a model of around 2,500 triangles, it takes roughly 1.2 seconds to read and parse the file (the file is 235 KB in size and around 7,700 lines long).

The file parsing functions use the fstream library to open and read files. I don't know whether this is an efficient way to read lots of ASCII text, or whether .obj just isn't a good format for complex 3D objects (even though mine are only single vehicles or characters). I can't imagine how long it would take to load a few dozen different models for a scene. Can .obj files still be practical for loading scenery? Is this performance typical for text files thousands of lines long?
1) Try reading the whole file in one go, and parse it once that is done. (235 KB should load from disk very quickly.)
You can use tools like stringstream and boost::lexical_cast to parse your text out as numbers, but try to do as much in place as possible.

2) Text parsing can just be plain slow. Consider a binary format that contains the data in the exact layout you need, so you don't have to parse anything; just read it directly into your structures and go (see the sketch after this list). Be aware of endian issues if you ever target another processor platform.
(DirectX .X files?)
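
A minimal sketch of that binary idea, assuming a trivial homegrown format (a vertex count followed by raw vertex data) rather than an actual .X loader; the Vertex layout here is an illustrative stand-in for the real D3D vertex structure:

#include <fstream>
#include <vector>

struct Vertex            // assumed layout; match your real D3D vertex
{
    float px, py, pz;    // position
    float nx, ny, nz;    // normal
    float u, v;          // texture coordinates
};

std::vector<Vertex> LoadBinaryMesh(const char* path)
{
    std::ifstream file(path, std::ios::binary);

    // Read the vertex count, then the raw vertex data, in two read()
    // calls -- no text parsing at all. The file's endianness must match
    // the machine reading it.
    unsigned int count = 0;
    file.read(reinterpret_cast<char*>(&count), sizeof(count));

    std::vector<Vertex> vertices(count);
    if (count > 0)
        file.read(reinterpret_cast<char*>(&vertices[0]),
                  count * sizeof(Vertex));
    return vertices;
}

Writing the file is the mirror image (write the count, then write() the array), and the usual pipeline is an offline tool that converts each .obj once, so the game only ever loads the binary version.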
Like KulSeran said, load the whole file into a buffer with fstream::read(), then parse that. Loading small sequences of characters can be very slow.
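
A minimal sketch of that approach, assuming the goal is only to pull the "v x y z" position lines out of an .obj file; the single up-front read() is the part that matters:

#include <cstddef>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

std::vector<float> LoadObjPositions(const char* path)
{
    // One seek to find the size, then one read() for the whole file,
    // instead of thousands of small extractions from the stream.
    std::ifstream file(path, std::ios::binary);
    file.seekg(0, std::ios::end);
    std::string buffer(static_cast<std::size_t>(file.tellg()), '\0');
    file.seekg(0, std::ios::beg);
    file.read(&buffer[0], static_cast<std::streamsize>(buffer.size()));

    // All parsing now happens against the in-memory copy.
    std::istringstream in(buffer);
    std::vector<float> positions;
    std::string line;
    while (std::getline(in, line))
    {
        if (line.compare(0, 2, "v ") == 0)   // vertex position line
        {
            std::istringstream fields(line.substr(2));
            float x, y, z;
            if (fields >> x >> y >> z)
            {
                positions.push_back(x);
                positions.push_back(y);
                positions.push_back(z);
            }
        }
    }
    return positions;
}

Normals ("vn"), texture coordinates ("vt") and faces ("f") parse the same way; the point is that all the stream work happens once, against memory.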
After using the read() function to store the file in a buffer, I've been able to cut the time needed to load and parse files by about 25%. However, while tweaking some code to experiment, I realized that most of the overhead came not from parsing the files, but from storing the data in a vector.

Since I am using vectors of D3D vertex structures to fill the vertex buffers, they have to be dynamically resized, and I push all the elements into the vector once the vertex structures are created.

In one of my tests, I commented out the push_back() calls that add the vertices to be used in a vertex buffer. Of course, no models were rendered, but the delay between startup and the first frame of rendering was cut by around 80%. What kind of performance is typical when adding thousands of structures to a vector? (The structure is the size of 8 floats, an int and a DWORD.) Is it expected to be noticeably slow? At this point I'd like to try dynamically allocated arrays in place of vectors if that can improve the speed.
Quote: Original post by JustChris
What kind of performance is typical when adding thousands of structures to a vector? (The structure is the size of 8 floats, an int and a DWORD.) Is it expected to be noticeably slow? At this point I'd like to try dynamically allocated arrays in place of vectors if that can improve the speed.

To add 100 million objects(!!!) to a std::vector, the vector will need to be resized only about 26 times, since its capacity roughly doubles on each reallocation. That means 26 calls to new, which is nothing. Using arrays won't change that.

To eliminate vector resizing altogether, first count the number of elements it should have, then .resize() or .reserve() it.
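
A small sketch of both points, with MyVertex standing in for the structure described above (8 floats, an int and a DWORD). The first loop counts how often push_back actually reallocates; the second uses reserve() to remove even those allocations:

#include <cstddef>
#include <cstdio>
#include <vector>

struct MyVertex { float f[8]; int index; unsigned long color; };

int main()
{
    const std::size_t kCount = 1000000;

    // capacity() only changes when the vector grows its storage,
    // so this counts real reallocations.
    std::vector<MyVertex> grown;
    std::size_t lastCapacity = 0, reallocations = 0;
    for (std::size_t i = 0; i < kCount; ++i)
    {
        grown.push_back(MyVertex());
        if (grown.capacity() != lastCapacity)
        {
            lastCapacity = grown.capacity();
            ++reallocations;
        }
    }
    std::printf("%u reallocations for %u push_backs\n",
                (unsigned)reallocations, (unsigned)kCount);

    // With a pre-pass that counts vertices (e.g. counting "v " lines
    // while scanning the file buffer), one reserve() call makes every
    // subsequent push_back allocation-free.
    std::vector<MyVertex> reserved;
    reserved.reserve(kCount);
    for (std::size_t i = 0; i < kCount; ++i)
        reserved.push_back(MyVertex());
    return 0;
}

On typical implementations the first loop reports a few dozen reallocations, not a million, which is why the calls to new are not the real cost; the element copying each reallocation performs, and debug-build checks like the secure iterators mentioned below, are likelier culprits.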

What happens if you '#define _SECURE_SCL 0' to disable secure iterators? It needs to be defined before any part of the STL is used, so it's probably best to put it into the preprocessor settings (_SECURE_SCL=0).
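
For reference, the source-level version would look like this (assuming an MSVC compiler of that era, where secure/checked iterators are on by default):

// Must come before any standard library include in every translation
// unit -- otherwise some of the STL is compiled with checked iterators
// and some without.
#define _SECURE_SCL 0
#include <vector>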
