
# [C++] Holy cow are string streams slow

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

15 replies to this topic

### #1AvCol  Members

Posted 29 April 2011 - 01:31 AM

I decided to change my parser (not really a parser, more of a line getter and string manipulator) to use string streams for more elegant and readable code. Elegant and readable code is what I got, but the performance hit is massive. Currently it uses getline to get a string and then uses that string to construct a string stream. The string stream is used for various operations like getting the line header / tokens / body / ignoring comments, etc.

But just that one string -> stringstream operation takes 6 seconds(!!) looping through a 40MB Stanford Lucy Wavefront OBJ file. 6 seconds!!!!! And that's on a Q6600 (not the best desktop chip nowadays, but it's not like it's some ancient POS either). By comparison, just getline without the string stream construction takes 2.3 seconds.

Why on earth does this one constructor triple the running time of the code?

The >> operator seems slow as well. The body of the obj loader (which consists of getting a header and tokens and then doing some atoi and atof calls) went from taking 13 seconds to taking 28.

My old parser used manual searching and tokenizing with for loops, but the input stream was still a C++ STL-style one, and for comparison it was able to load Lucy in ~18 seconds total, which isn't fast but is a lot better than the current 48 (12 of which is just sstream construction).

So three questions:

1) Is using old school C style file input recommended over the C++ way?
2) Alternatively, are there some C++ stream speed secrets I am not yet privy to?
3) Is there some way to bypass that slow slow slow slow string stream constructor and getline directly into the string stream? Seriously.

### #2fastcall22  Moderators

Posted 29 April 2011 - 01:53 AM

Unless you're comparing against a build with full optimizations enabled, your claims are unfounded. Do you have full optimizations enabled?

### #3AvCol  Members

Posted 29 April 2011 - 02:17 AM

Unless you're comparing against a build with full optimizations enabled, your claims are unfounded. Do you have full optimizations enabled?

Of course. /Ox in VC++ 10. I also turned off checked iterators and secure SCL (I think I read somewhere that one of these has implications even for release builds).

### #4fastcall22  Moderators

Posted 29 April 2011 - 02:28 AM

You could also try compiling the text file into a binary format for faster parsing.
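That suggestion can be sketched roughly as follows. The `Vec3` struct, function names, and file handling here are illustrative placeholders, not anything from the thread; the point is just that loading a binary cache is a couple of freads instead of a text parse:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };  // hypothetical vertex layout

// Write a count followed by the raw float data. Note this assumes the same
// platform reads the file back (endianness and sizeof(std::size_t) match).
bool SaveBinary( const char* path, const std::vector<Vec3>& verts )
{
    FILE* f = std::fopen( path, "wb" );
    if( !f ) return false;
    std::size_t count = verts.size();
    std::fwrite( &count, sizeof( count ), 1, f );
    std::fwrite( verts.data(), sizeof( Vec3 ), count, f );
    std::fclose( f );
    return true;
}

// Reading it back is one fread into a pre-sized vector; no parsing at all.
std::vector<Vec3> LoadBinary( const char* path )
{
    std::vector<Vec3> verts;
    FILE* f = std::fopen( path, "rb" );
    if( !f ) return verts;
    std::size_t count = 0;
    if( std::fread( &count, sizeof( count ), 1, f ) == 1 )
    {
        verts.resize( count );
        std::fread( verts.data(), sizeof( Vec3 ), count, f );
    }
    std::fclose( f );
    return verts;
}
```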

### #5AvCol  Members

Posted 29 April 2011 - 02:36 AM

You could also try compiling the text file into a binary format for faster parsing.

I could, and I do this with my own internal format (which I export as a raw chunk of indices, vertices, skeletal weights, material strings, etc.). But I am not concerned with that at the moment; I am concerned with why I can't write a fast text file parser using the classes of the C++ standard library.

### #6_swx_  Members

Posted 29 April 2011 - 02:56 AM

Tried running the program through the profiler in VS2010? I would try reusing the same stringstream object if possible to avoid having to reallocate the internal buffer all the time.
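A minimal sketch of the reuse being suggested here (the `tokenize` helper is illustrative, not from the thread): `str()` swaps a new buffer into the existing stringstream, and `clear()` resets the eof/fail bits left over from the previous line, so no stream object is constructed per line.

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Tokenize one line by reusing a caller-owned stringstream instead of
// constructing a fresh one for every line.
std::vector<std::string> tokenize( std::stringstream& stream, const std::string& line )
{
    stream.str( line );   // replace the buffer in the existing stream
    stream.clear();       // reset eofbit/failbit set while reading the last line
    std::vector<std::string> tokens;
    std::string token;
    while( stream >> token )
        tokens.push_back( token );
    return tokens;
}
```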

### #7rip-off  Moderators

Posted 29 April 2011 - 03:01 AM

Can you show us the code?

### #8AvCol  Members

Posted 29 April 2011 - 03:03 AM

Tried running the program through the profiler in VS2010? I would try reusing the same stringstream object if possible to avoid having to reallocate the internal buffer all the time.

You are onto something. Please explain further. My current code is


string line;
getline( input, line );
stream = stringstream( line ); // stream is a member of the parsing class



This is code that takes 6 seconds going through the entire file.

By contrast


string line;
getline( input, line );



takes only 2.3. There must be a better way to get a line into a stream. I feel like such a noob with this.

### #9AvCol  Members

Posted 29 April 2011 - 03:11 AM

Can you show us the code?

Why not.


class Parser
{
public:
Parser( std::wstring file );

virtual void Ignore( const std::string& start, const std::string& end );
virtual void Rewind( void );
virtual void Next( void );
virtual void GetLine( std::string& line );
virtual void GetTokens( std::vector<std::string>& tokens );
virtual void GetBody( std::string& body );
virtual void GetBodyTokens( std::vector<std::string>& bodyTokens );
virtual bool Good( void );

std::stringstream stream;

protected:
void                     TrimLine( std::string& line );

int                      ignoring;
std::vector<std::string> excludeDelims;
std::vector<std::string> includeDelims;
std::ifstream            input;
};

void Parser::Ignore( const std::string& start, const std::string& end )
{
excludeDelims.push_back( start );
includeDelims.push_back( end );
}

void Parser::Rewind( void )
{
input.seekg( 0, ios::beg );
input.clear();

ignoring = -1;
stream = stringstream( "" );
}

void Parser::Next( void )
{
string line;
getline( input, line );

if( !input.good() )
return;

if( line.empty() )
{
Next();
return;
}

TrimLine( line );
if( line.empty() )
{
Next();
return;
}

stream = stringstream( line );
}

void Parser::GetLine( std::string& line )
{
line.assign( stream.str() );
}

void Parser::GetTokens( std::vector<std::string>& tokens )
{
tokens.clear();
stream.clear();
stream.seekg( 0, ios::beg );

string token;

while( stream >> token )
tokens.push_back( token );
}


void Parser::GetBody( std::string& body )
{
body.clear();
stream.clear();
stream.seekg( 0, ios::beg );

body.assign( stream.str() );

size_t i = 0;

// Skip any whitespace at the beginning of the line.
while( i < body.length() && ( body[i] == ' ' || body[i] == '\r' || body[i] == '\t' ) )
i++;

// Skip the first word (the line header).
while( i < body.length() && body[i] != ' ' && body[i] != '\r' && body[i] != '\t' )
i++;

body = body.substr( i, body.length() );
}

void Parser::GetBodyTokens( std::vector<std::string>& bodyTokens )
{
bodyTokens.clear();
stream.clear();
stream.seekg( 0, ios::beg );

string token;

stream >> token;
while( stream >> token )
bodyTokens.push_back( token );
}

bool Parser::Good( void )
{
return input.good();
}

void Parser::TrimLine( string& line )
{
if( ignoring != -1 )
{
size_t incPos = line.find( includeDelims[ignoring] );
if( incPos != string::npos )
{
line = line.substr( incPos, line.length() );
ignoring = -1;
TrimLine( line );
}
else
line.clear();
}
else
{
for( size_t i = 0; i < excludeDelims.size(); i++ )
{
size_t excPos = line.find( excludeDelims[i] );
if( excPos != string::npos )
{
string tail = line.substr( excPos, line.length() );
line = line.substr( 0, excPos );

// If the includeDelim is the end of the line just return the head.
if( includeDelims[i] == "\n" )
return;

ignoring = i;
TrimLine( tail );
line += tail;
return;
}
}
}
}


Here is the obj loader code, although this hasn't changed since my 18 second Lucy benchmark; only the backend posted above has.

shared_ptr<Mesh> ImportImpl::LoadObjMesh( wstring file )
{
shared_ptr<Mesh> mesh = LookupMesh( file );
if( mesh )
return mesh;

mesh = shared_ptr<Mesh>( new Mesh );

wstring path = FindFullPath( file );
ObjParser parser( path );

int numPositions = 0;
int numTexcoords = 0;
int numNormals   = 0;
int numGroups    = 0;
int numFaces     = 0;

parser.Ignore( "#", "\n" );

// Preliminary run through to gather information.
while( parser.Good() )
{
parser.Next();
string line;

parser.GetLine( line );

switch( line[0] )
{
case 'v':
switch( line[1] )
{
case ' ':
numPositions++;
break;
case 't':
numTexcoords++;
break;
case 'n':
numNormals++;
break;
}
break;
case 'f':
numFaces++;
break;
case 'g':
numGroups++;
break;
}
}

if( !numPositions )
throw ExcFailed( L"[ImportImpl::LoadObjMesh] " + file + L" does not contain vertex positions.\n" );
if( numPositions < 0 || numFaces < 0 || numGroups < 0 )
throw ExcFailed( L"[ImportImpl::LoadObjMesh] " + file + L" holds way too much attribute data.\n" );
parser.Rewind();

vector<Position> positions;
vector<Normal> normals;
vector<Texcoord> texcoords;

positions.reserve( numPositions );
normals.reserve( numNormals );
texcoords.reserve( numTexcoords );
mesh->subMeshes.reserve( numGroups );
mesh->triangles.reserve( numFaces );

wstring_convert<std::codecvt_utf8<wchar_t>> converter;
Hash hasher;
forward_list<int> hashGrid[65536];

while( parser.Good() )
{
parser.Next();

vector<string> tokens;

parser.GetBodyTokens( tokens );

if( header == "v" ) // note: the post never shows where `header` is read; the else-if chain below implies this test
{
Position p;

p.x = float( ( atof( tokens[0].c_str() ) ) );
p.y = float( ( atof( tokens[1].c_str() ) ) );
p.z = float( ( atof( tokens[2].c_str() ) ) );

if( tokens.size() == 4 )
p.w = float( ( atof( tokens[3].c_str() ) ) );
else
p.w = 1.0f;

positions.push_back( p );
}
else if( header == "vt"  )
{
Texcoord o;

o.s = float( atof( tokens[0].c_str() ) );
o.t = float( atof( tokens[1].c_str() ) );

texcoords.push_back( o );
}
else if( header == "vn" )
{
Normal n;

n.x = float( atof( tokens[0].c_str() ) );
n.y = float( atof( tokens[1].c_str() ) );
n.z = float( atof( tokens[2].c_str() ) );

normals.push_back( n );
}
else if( header == "f" )
{
vector<Vertex> faceVertices = parser.GetFaceVertices( positions, normals, texcoords );

for( unsigned int i = 0; i < tokens.size() - 2; i++ )
{
Vertex v[3];

v[0] = faceVertices[0];
v[1] = faceVertices[i + 1];
v[2] = faceVertices[i + 2];

// Fill out the vertex indices of the triangle by either pushing vertices into
// the mesh vector, or finding the index of an already existing equivalent.
Triangle tri;
for( int j = 0; j < 3; j++ )
{
unsigned int hash = hasher.GenerateHash16( v[j] );
bool found = false;
int index;

forward_list<int>::iterator it = hashGrid[hash].begin();
while( it != hashGrid[hash].end() )
{
if( mesh->vertices[*it] == v[j] )
{
index = *it;
found = true;
break;
}
it++;
}

if( !found )
{
index = mesh->vertices.size();
mesh->vertices.push_back( v[j] );
hashGrid[hash].push_front( index );
}

// Vertices are even indices in the t array.
tri.t[j * 2] = index;
}

mesh->triangles.push_back( tri );

if( !mesh->subMeshes.empty() )
mesh->subMeshes.back().triangleIndices.push_back( mesh->triangles.size() );
}
}
else if( header == "g"  )
{
mesh->subMeshes.push_back( SubMesh() );
}
else if( header == "usemtl" )
{
wstring mtl = converter.from_bytes( tokens[0] );

mesh->subMeshes.back().materialIndex = mesh->materials.size();
}
}

mesh->FindTriangleNeighbors();

if( normals.empty() )
mesh->FindVertexNormals();

mesh->Trim();

meshCache.push_back( Record<Mesh>( file, mesh ) );
return mesh;
}


Before you mention it, TrimLine adds an overhead of an extra 0.3 seconds on Lucy, and that is an overhead I am willing to pay for something that can strip /* */ and // style comments on the fly. The GetBody function has an assignment that I can probably cut out, but I don't use it in the obj loader (I do in the md5 loader, but I am only profiling the obj loader backend for now). The part of the obj loader that hashes vertices is welding them on the fly, as my internal format requires them that way; that algorithm is not super fast (it incurs at least a 6 second overhead) but fast enough considering what it does.

### #10Gorbstein  Members

Posted 29 April 2011 - 03:32 AM

stream = stringstream( line );

should you not use..

stream << line ;

I haven't read every line of the code but I'm not sure you need to be creating a new stringstream on each read.

### #11AvCol  Members

Posted 29 April 2011 - 03:47 AM

stream = stringstream( line );

should you not use..

stream << line ;

I haven't read every line of the code but I'm not sure you need to be creating a new stringstream on each read.

That one change actually makes things at least one order of magnitude slower; that was the only change I made, and my timing went over 800 seconds so I decided to stop. But thanks for trying to give me some practical advice anyway, it's appreciated.
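For what it's worth, `stream << line` on its own appends to the stream's existing buffer, so the stream accumulates every line ever inserted, which would explain a slowdown that grows with file size. A small sketch (function names are mine, not from the thread) contrasting appending with `str()` replacement:

```cpp
#include <cassert>
#include <cstddef>
#include <sstream>
#include <string>

// operator<< appends: after N lines the buffer holds all N of them.
std::size_t buffered_after_append( int lines )
{
    std::stringstream s;
    for( int i = 0; i < lines; ++i )
        s << std::string( 10, 'x' );   // buffer grows without bound
    return s.str().size();
}

// str()/clear() replaces: the buffer only ever holds the current line.
std::size_t buffered_after_reset( int lines )
{
    std::stringstream s;
    for( int i = 0; i < lines; ++i )
    {
        s.str( std::string( 10, 'x' ) );  // old contents are discarded
        s.clear();
    }
    return s.str().size();
}
```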

### #12dmatter  Members

Posted 29 April 2011 - 04:35 AM

I've found that if you just want to write a straightforward implementation and don't want to spend a lot of time optimising it, then the C-style stream functions are the way to go. There's nothing inherently slow about the C++ streams by design; it's just that the VC++ implementation of them does a lot of extra work (even with the full array of optimisation flags).
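As a sketch of that C-style approach (the struct and function names here are illustrative, not from the thread): `strtof` walks the buffer in place and reports where it stopped, so a vertex line parses with no per-line stream or string allocations.

```cpp
#include <cassert>
#include <cstdlib>

struct Vec3 { float x, y, z; };  // hypothetical vertex type

// Parse an OBJ vertex line like "v 1.0 2.0 3.0" with strtof. Each call
// resumes from where the previous conversion ended. Error handling is
// minimal; this is a sketch, not a robust parser.
bool ParseVertexLine( const char* line, Vec3& out )
{
    if( line[0] != 'v' || line[1] != ' ' )
        return false;
    char* end = nullptr;
    out.x = std::strtof( line + 2, &end );  // strtof skips leading whitespace
    out.y = std::strtof( end, &end );
    out.z = std::strtof( end, &end );
    return end != nullptr;
}
```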

### #13__sprite  Members

Posted 29 April 2011 - 06:24 AM

Is using the trim line function actually faster than just letting the switch statements ignore the unnecessary characters?

While going through the file twice (to find number of verts etc.) may be faster, perhaps keeping the file in memory and using that the second time would be quicker than reading everything off disk twice?

### #14AvCol  Members

Posted 29 April 2011 - 06:42 AM

Is using the trim line function actually faster than just letting the switch statements ignore the unnecessary characters?

While going through the file twice (to find number of verts etc.) may be faster, perhaps keeping the file in memory and using that the second time would be quicker than reading everything off disk twice?

The going through the file twice isn't slowed down by the disk, as I am sure this gets cached by the OS / HDD anyway; it's slowed down most by that string stream constructor. And yeah, it is way faster finding the right number of verts first, because for a large data set (like Lucy), if you don't reserve vector space the whole thing runs almost five times as long (184 seconds).
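The effect of reserving can be demonstrated in isolation. This sketch (names are mine) counts how many times a vector's buffer is reallocated during a run of push_backs, with and without `reserve`:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Count capacity changes while pushing n elements. With reserve(n) up
// front, push_back never has to grow and copy the buffer.
std::size_t reallocations( std::size_t n, bool reserveFirst )
{
    std::vector<int> v;
    if( reserveFirst )
        v.reserve( n );
    std::size_t count = 0;
    std::size_t cap = v.capacity();
    for( std::size_t i = 0; i < n; ++i )
    {
        v.push_back( int( i ) );
        if( v.capacity() != cap )  // the buffer grew (and elements were copied)
        {
            ++count;
            cap = v.capacity();
        }
    }
    return count;
}
```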

No, it is a little slower. A switch statement works well for ignoring lines that start with # as the comment character (like in obj files), but if you have a comment like /* this is a comment */ that can span multiple lines, sit in the middle of a line, or even appear several times within one line, then you need something better, and the parser class is used to parse a few types of text files.

So everyone can see, here is an implementation of my parser class I just created, which loads Lucy in 10 seconds, compared to the 48 it takes with the previously posted one using string streams.


Parser::Parser( wstring file )
{
input.open( file );
ignoring = -1;

if( !input.is_open() )
throw ExcFailed( L"[Parser::Parser] Could not open file " + file + L"\n" );
}

void Parser::Ignore( const std::string& start, const std::string& end )
{
excludeDelims.push_back( start );
includeDelims.push_back( end );
}

void Parser::Rewind( void )
{
input.seekg( 0, ios::beg );
input.clear();

ignoring = -1;
line.clear();
}

void Parser::Next( void )
{
getline( input, line );

if( !input.good() )
return;

if( line.empty() )
{
Next();
return;
}

TrimLine( line );
if( line.empty() )
{
Next();
return;
}
}

void Parser::GetLine( std::string& _line )
{
_line = line;
}

void Parser::GetTokens( std::vector<std::string>& tokens )
{
tokens.clear();
string buff;

size_t from = 0;
while( from < line.length() )
{
GetNextToken( buff, from );
tokens.push_back( buff );
}
}


void Parser::GetBody( std::string& body )
{
body.clear();

size_t i = 0;
// Skip any whitespace at the beginning of the line.
while( i < line.length() && ( line[i] == ' ' || line[i] == '\r' || line[i] == '\t' ) )
i++;

// Skip the first word (the line header).
while( i < line.length() && line[i] != ' ' && line[i] != '\r' && line[i] != '\t' )
i++;

body = line.substr( i, line.length() );
}

void Parser::GetBodyTokens( std::vector<std::string>& bodyTokens )
{
bodyTokens.clear();

string buff;

size_t from = 0;
GetNextToken( buff, from );
while( from < line.length() )
{
GetNextToken( buff, from );
bodyTokens.push_back( buff );
}
}

bool Parser::Good( void )
{
return input.good();
}

void Parser::TrimLine( string& line )
{
if( ignoring != -1 )
{
size_t incPos = line.find( includeDelims[ignoring] );
if( incPos != string::npos )
{
line = line.substr( incPos, line.length() );
ignoring = -1;
TrimLine( line );
}
else
line.clear();
}
else
{
for( size_t i = 0; i < excludeDelims.size(); i++ )
{
size_t excPos = line.find( excludeDelims[i] );
if( excPos != string::npos )
{
string tail = line.substr( excPos, line.length() );
line = line.substr( 0, excPos );

// If the includeDelim is the end of the line just return the head.
if( includeDelims[i] == "\n" )
return;

ignoring = i;
TrimLine( tail );
line += tail;
return;
}
}
}
}

void Parser::GetNextToken( string& container, size_t& from )
{
// Skip leading whitespace.
while( from < line.length() && ( line[from] == ' ' || line[from] == '\t' || line[from] == '\r' ) )
from++;

// Scan to the end of the token, stopping at the end of the line.
size_t to = from;
while( to < line.length() && line[to] != ' ' && line[to] != '\t' && line[to] != '\r' )
to++;

container = line.substr( from, to - from );

from = to;
}


Which is a shame, because I think string streams are a really elegant way of parsing and formatting data, but I don't know how to use them in a way that isn't mega, mega slow.

### #15__sprite  Members

Posted 29 April 2011 - 06:45 AM

A switch statement works well for ignoring lines that start with # as the comment character (like in obj files), but if you have a comment like /* this is a comment */ that can span multiple lines, sit in the middle of a line, or even appear several times within one line, then you need something better.

But that's not a valid comment in a .obj file.

If the stringstream is really that slow, you could try processing the line string using boost string algorithms or something instead (I'm not sure it would be faster, but it's an option to try).

### #16Antheus  Members

Posted 29 April 2011 - 06:51 AM

This is a textbook example of how costly memory allocations are.

There is no decent way around it using standard stream implementations.

One could use a custom allocator, which would go a long way, but very few third-party libraries support such strings, so if you depend on any of them it'll be a problem.

Parsing .obj files is also best done using a standard FSM-based parser, which can work with no overhead or extra allocations beyond the extracted data.
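A sketch of an allocation-free tokenizer in that spirit (names are illustrative): instead of copying substrings, it reports each token as an (offset, length) pair into the caller's buffer, so tokenizing allocates nothing.

```cpp
#include <cassert>
#include <cstddef>

// A token is a view into the original line buffer: no copy is made.
struct Token { std::size_t pos, len; };

// Advance `from` past whitespace, then past one token. Returns false when
// the line is exhausted, so callers can loop until it fails.
bool NextToken( const char* line, std::size_t size, std::size_t& from, Token& out )
{
    while( from < size && ( line[from] == ' ' || line[from] == '\t' || line[from] == '\r' ) )
        ++from;
    if( from >= size )
        return false;
    out.pos = from;
    while( from < size && line[from] != ' ' && line[from] != '\t' && line[from] != '\r' )
        ++from;
    out.len = from - out.pos;
    return true;
}
```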
