About AvCol

  1. [quote] There may still be less RAM used (or at least touched per frame) overall anyway, because it may be the case that most pixels need far less than 32 linked list nodes (although you'd think they'd still have to prepare for the worst case!); [/quote] I was thinking this too, and I share the exact same complaint. For what I intend to use this algorithm for (light volumes, or volumetric shadows if you will), it would look very silly if some pixels suddenly started being lit when they shouldn't be. I can see how OIT would be more forgiving of such missing information, though. You're right about the increased cache coherency during the writing stage too; I hadn't considered that. Here is the direct link to the slides: [url="http://developer.amd.com/gpu_assets/OIT%20and%20Indirect%20Illumination%20using%20DX11%20Linked%20Lists_forweb.ppsx"]http://developer.amd.com/gpu_assets/OIT%20and%20Indirect%20Illumination%20using%20DX11%20Linked%20Lists_forweb.ppsx [/url]You can find more of AMD's stuff on their conferences page: [url="http://developer.amd.com/documentation/presentations/Pages/default.aspx"]http://developer.amd...es/default.aspx[/url]
  2. [quote name='SHilbert' timestamp='1305654079' post='4812006'] I don't know the details of their particular use case, but I would guess they are using a linked list because you can do constant time insertions or deletions anywhere in the list. [/quote] Well, the thing is the GPU has no allocator, so something like deletions is impossible without quickly running out of memory. Insertions are possible, but jumping from node to node on a GPU is far too slow; this is why, when sorting each pixel's list, they copy it over to a temp array first. Their implementation is thus: they have one contiguous block of memory, an HLSL structured buffer, that stores some structure plus an index into that same buffer. They then have a second buffer which simply stores, for each pixel, an index into the first buffer. I will call these the linked list buffer and the pointer buffer. Their allocator, if it can be called that, just points to the next unoccupied space in the linked list buffer (i.e. it is just one atomic counter). Whenever a new fragment is drawn (anywhere on the screen) it needs to be added to the linked list buffer. This is done in three steps:
1) Set the new node's link (the new node is at linked_list_buffer[allocator_counter]) to the value in the pointer buffer for this pixel.
2) Set the pointer buffer for this pixel to the allocator's counter.
3) Increment the allocator counter.
As you can see, their allocator cannot track where free memory is, so deletions would make memory run out very quickly. Besides, as I mentioned earlier, jumping from node to node and modifying links is too slow on the GPU, so they recommend never doing this. What they are using it for is an A-buffer. They could just as easily have declared a fixed 32 floats per pixel, but opted to use a linked-list-like structure instead, and I am wondering why.
Originally I thought it was because they were relying on a linked list using less memory 99% of the time, as not every pixel has 32 triangles drawn on it every frame, but then I found out you have to declare a fixed amount of memory anyway. In fact, you have to reserve more memory to get the same amount of accuracy, because you have to store the link, not to mention that each pixel needs its own start node pointer (just an index into the big pre-allocated buffer). Now I doubt that that's the whole story, because the guys at AMD are probably pretty smart; I am just wondering what they know that I do not. I don't see any advantage to using a GPU linked list.
  3. I was looking up depth-peeling-type algorithms for rendering shadow volumes in fog, and noticed that this is really the same sort of thing you do for OIT. So I had a closer look at AMD's slides from last year's GDC explaining what they did in the Mecha demo. The thing I don't get about it: when you create a buffer resource in DirectX you have to declare a size for the buffer, right? Sure enough, AMD's slides say that the UAV storing the linked list nodes "must be large enough to store all fragments". But isn't the whole point of a linked list that it *doesn't* use a fixed amount of memory? So why use linked lists if they use a fixed amount of memory anyway?
  4. Most articles on water rendering deal with how to rasterize water planes, or water as a simple world-space height field, but I can't find any that focus on rasterizing a mesh of polygons each representing the water surface (think of someone spilling water in space, or those fluid simulation videos). Do any techniques like that exist? I spent most of yesterday on Google Scholar, GameDev and the Real-Time Rendering website / book searching for techniques to rasterize water meshes, but I could not find any. I understand that this could be accomplished through ray tracing or photon mapping, but I am curious as to how it might be done drawing good ol' fashioned triangles.
  5. I've been setting shader resources in many a shader for quite a while, but embarrassingly I've never understood one basic thing. Let's say in HLSL I declare a few textures and a structured buffer and use them in a vertex and pixel shader (this is a hypothetical example, I know it's not realistic): [code]
Texture2D albedoMap;
Texture2D surfaceMap;
Texture2D normalMap;
StructuredBuffer<SomeStruct> someBuffer;

PixelShaderIn VertexShader( VertexShaderIn input )
{
    // use albedoMap, normalMap and someBuffer
}

PixelShaderOut PixelShader( PixelShaderIn input )
{
    // use surfaceMap and someBuffer
}
[/code] Now when I am setting these client side with [code]ImmediateContext->VSSetShaderResources( ?, ?, &SRV )[/code] or [code]ImmediateContext->PSSetShaderResources( ?, ?, &SRV )[/code] how do the start slot and view count correspond to the resources as declared in the shader? For example, if I set the start slot to 0 and the count to 1 with the above HLSL code for the pixel shader, would I be setting albedoMap or surfaceMap, and would it be different for the vertex shader? In other words, what determines which HLSL-side shader variable each client-side input slot corresponds to? Is it the order they are declared? Is it the order they are declared, but only counting the ones a given shader program uses? This is probably an extremely nooby question, but I haven't been able to find the answer on MSDN or elsewhere (perhaps it's just too obvious to even be said, but hell, it confuses me).
  6. [quote name='Emergent' timestamp='1303931471' post='4803675'] I think it makes a little more sense to define them relative to the model frame, not to a particular bone frame, because you'll often want a vertex to have nonzero weights associated to more than one bone. [/quote] That makes a lot of sense. In this situation, it would mean that all skeletons passed to the shader need to be relative to the bind-pose skeleton, i.e. bone_going_into_shader = model_space( bone ) - model_space( bindpose bone ). Do I have this correct? But the way MD5 does it is that each vertex is just an index to a few weights plus a bias. The weights are a position and an index to a bone. So the weight positions get transformed into model space by their respective bones, and then the vertex is finally calculated from those using its bias. This avoids skeleton-skeleton arithmetic, and has always been the way I thought of things.
  7. [quote name='__sprite' timestamp='1304079841' post='4804397'] Is using the trim line function actually faster than just letting the switch statements ignore the unnecessary characters? While going through the file twice (to find number of verts etc.) may be faster, perhaps keeping the file in memory and using that the second time would be quicker than reading everything off disk twice? [/quote] Going through the file twice isn't slowed down by the disk, as I am sure this gets cached by the OS / HDD anyway: it's slowed down most by that string stream constructor. And yes, it is way faster finding the right number of verts first, because for a large data set (like Lucy), if you don't reserve vector space the whole thing runs almost five times as long (184 seconds). As for TrimLine: no, it is a little slower. A switch statement works well for ignoring lines that start with # as the comment character (like in obj files), but if you have a comment like /* this is a comment */ that can span multiple lines, or sit in the middle of a line, or even a line /* blah blah blah */ full of /* blah blah blah */ such comments, then you need something better, and the parser class is used to parse a few types of text files. So everyone can see, here is an implementation of my parser class I just created, which loads Lucy in 10 seconds compared to the 48 it takes with the previously posted one using string streams.
[code]
Parser::Parser( wstring file )
{
    input.open( file );
    ignoring = -1;
    if( !input.is_open() )
        throw ExcFailed( L"[Parser::Parser] Could not open file " + file + L"\n" );
}

void Parser::Ignore( const std::string& start, const std::string& end )
{
    excludeDelims.push_back( start );
    includeDelims.push_back( end );
}

void Parser::Rewind( void )
{
    // clear() must come before seekg(): once the stream has hit eof,
    // the seek fails until the state flags are reset.
    input.clear();
    input.seekg( 0, ios::beg );
    ignoring = -1;
    line.clear();
}

void Parser::Next( void )
{
    getline( input, line );
    if( !input.good() )
        return;
    if( line.empty() ) { Next(); return; }
    TrimLine( line );
    if( line.empty() ) { Next(); return; }
}

void Parser::GetLine( std::string& _line )
{
    _line = line;
}

void Parser::GetTokens( std::vector<std::string>& tokens )
{
    tokens.clear();
    string buff;
    size_t from = 0;
    while( from < line.length() )
    {
        GetNextToken( buff, from );
        if( !buff.empty() ) // a line ending in whitespace yields one empty token
            tokens.push_back( buff );
    }
}

void Parser::GetHeader( std::string& header )
{
    header.clear();
    size_t from = 0;
    GetNextToken( header, from );
}

void Parser::GetBody( std::string& body )
{
    body.clear();
    size_t i = 0;

    // Skip any whitespace at the beginning of the line. The bounds check
    // must come first, and the character tests must be OR'd, not AND'd.
    while( i < line.length() && ( line[i] == ' ' || line[i] == '\r' || line[i] == '\t' ) )
        i++;

    // Skip the first word.
    while( i < line.length() && line[i] != ' ' && line[i] != '\r' && line[i] != '\t' )
        i++;

    body = line.substr( i );
}

void Parser::GetBodyTokens( std::vector<std::string>& bodyTokens )
{
    bodyTokens.clear();
    string buff;
    size_t from = 0;
    GetNextToken( buff, from ); // discard the header
    while( from < line.length() )
    {
        GetNextToken( buff, from );
        if( !buff.empty() )
            bodyTokens.push_back( buff );
    }
}

bool Parser::Good( void )
{
    return input.good();
}

void Parser::TrimLine( string& line )
{
    if( ignoring != -1 )
    {
        size_t incPos = line.find( includeDelims[ignoring] );
        if( incPos != string::npos )
        {
            line = line.substr( incPos );
            ignoring = -1;
            TrimLine( line );
        }
        else
            line.clear();
    }
    else
    {
        for( size_t i = 0; i < excludeDelims.size(); i++ )
        {
            size_t excPos = line.find( excludeDelims[i] );
            if( excPos != string::npos )
            {
                string tail = line.substr( excPos );
                line = line.substr( 0, excPos );

                // If the includeDelim is the end of the line just return the head.
                if( includeDelims[i] == "\n" )
                    return;

                ignoring = i;
                TrimLine( tail );
                line += tail;
                return;
            }
        }
    }
}

void Parser::GetNextToken( string& container, size_t& from )
{
    // Skip leading whitespace; 'to' starts at 'from' so a line ending in
    // whitespace cannot index past the end.
    while( from < line.length() && ( line[from] == ' ' || line[from] == '\t' || line[from] == '\r' ) )
        from++;
    size_t to = from;
    while( to < line.length() && line[to] != ' ' && line[to] != '\t' && line[to] != '\r' )
        to++;
    container = line.substr( from, to - from );
    from = to;
}
[/code] Which is a shame, because I think string streams are a really elegant way of parsing and formatting data, but I don't know how to use them in a way that isn't mega mega slow.
  8. [quote name='Gorbstein' timestamp='1304069522' post='4804366'] In your next function: [code]stream = stringstream( line );[/code] should you not use.. [code] stream << line ;[/code] I haven't read every line of the code but I'm not sure you need to be creating a new stringstream on each read. [/quote] That one change actually makes things at least one order of magnitude slower; it was the only change I made, and my timing went over 800 seconds, so I decided to stop. But thanks for trying to give me some practical advice anyway, it's appreciated.
  9. [quote name='rip-off' timestamp='1304067719' post='4804356'] Can you show us the code? [/quote] Why not. [code]
class Parser
{
public:
    Parser( std::wstring file );

    virtual void Ignore( const std::string& start, const std::string& end );
    virtual void Rewind( void );
    virtual void Next( void );
    virtual void GetLine( std::string& line );
    virtual void GetTokens( std::vector<std::string>& tokens );
    virtual void GetHeader( std::string& header );
    virtual void GetBody( std::string& body );
    virtual void GetBodyTokens( std::vector<std::string>& bodyTokens );
    virtual bool Good( void );

    std::stringstream stream;

protected:
    void TrimLine( std::string& line );

    int ignoring;
    std::vector<std::string> excludeDelims;
    std::vector<std::string> includeDelims;
    std::ifstream input;
};
[/code] [code]
void Parser::Ignore( const std::string& start, const std::string& end )
{
    excludeDelims.push_back( start );
    includeDelims.push_back( end );
}

void Parser::Rewind( void )
{
    // clear() before seekg(), or the seek fails once the stream has hit eof.
    input.clear();
    input.seekg( 0, ios::beg );
    ignoring = -1;
    stream = stringstream( "" );
}

void Parser::Next( void )
{
    string line;
    getline( input, line );
    if( !input.good() )
        return;
    if( line.empty() ) { Next(); return; }
    TrimLine( line );
    if( line.empty() ) { Next(); return; }
    stream = stringstream( line );
}

void Parser::GetLine( std::string& line )
{
    line.assign( stream.str() );
}

void Parser::GetTokens( std::vector<std::string>& tokens )
{
    tokens.clear();
    stream.clear();
    stream.seekg( 0, ios::beg );
    string token;
    while( stream >> token )
        tokens.push_back( token );
}

void Parser::GetHeader( std::string& header )
{
    header.clear();
    stream.clear();
    stream.seekg( 0, ios::beg );
    stream >> header;
}

void Parser::GetBody( std::string& body )
{
    body.clear();
    stream.clear();
    stream.seekg( 0, ios::beg );
    body.assign( stream.str() );
    size_t i = 0;

    // Skip any whitespace at the beginning of the line. The bounds check
    // must come first, and the character tests must be OR'd, not AND'd.
    while( i < body.length() && ( body[i] == ' ' || body[i] == '\r' || body[i] == '\t' ) )
        i++;

    // Skip the first word.
    while( i < body.length() && body[i] != ' ' && body[i] != '\r' && body[i] != '\t' )
        i++;

    body = body.substr( i );
}

void Parser::GetBodyTokens( std::vector<std::string>& bodyTokens )
{
    bodyTokens.clear();
    stream.clear();
    stream.seekg( 0, ios::beg );
    string token;
    stream >> token; // discard the header
    while( stream >> token )
        bodyTokens.push_back( token );
}

bool Parser::Good( void )
{
    return input.good();
}

void Parser::TrimLine( string& line )
{
    if( ignoring != -1 )
    {
        size_t incPos = line.find( includeDelims[ignoring] );
        if( incPos != string::npos )
        {
            line = line.substr( incPos );
            ignoring = -1;
            TrimLine( line );
        }
        else
            line.clear();
    }
    else
    {
        for( size_t i = 0; i < excludeDelims.size(); i++ )
        {
            size_t excPos = line.find( excludeDelims[i] );
            if( excPos != string::npos )
            {
                string tail = line.substr( excPos );
                line = line.substr( 0, excPos );

                // If the includeDelim is the end of the line just return the head.
                if( includeDelims[i] == "\n" )
                    return;

                ignoring = i;
                TrimLine( tail );
                line += tail;
                return;
            }
        }
    }
}
[/code] Here is the obj loader code, although this hasn't changed since my 18-second Lucy benchmark; only the above-posted backend has. [code]
shared_ptr<Mesh> ImportImpl::LoadObjMesh( wstring file )
{
    shared_ptr<Mesh> mesh = LookupMesh( file );
    if( mesh )
        return mesh;

    mesh = shared_ptr<Mesh>( new Mesh );
    wstring path = FindFullPath( file );
    ObjParser parser( path );

    int numPositions = 0;
    int numTexcoords = 0;
    int numNormals = 0;
    int numGroups = 0;
    int numFaces = 0;

    parser.Ignore( "#", "\n" );

    // Preliminary run through to gather information.
    while( parser.Good() )
    {
        parser.Next();
        string line;
        parser.GetLine( line );
        switch( line[0] )
        {
        case 'v':
            switch( line[1] )
            {
            case ' ': numPositions++; break;
            case 't': numTexcoords++; break;
            case 'n': numNormals++; break;
            }
            break;
        case 'f': numFaces++; break;
        case 'g': numGroups++; break;
        }
    }

    if( !numPositions )
        throw ExcFailed( L"[ImportImpl::LoadObjMesh] " + file + L" does not contain vertex positions.\n" );
    if( numPositions < 0 || numFaces < 0 || numGroups < 0 )
        throw ExcFailed( L"[ImportImpl::LoadObjMesh] " + file + L" holds way too much attribute data.\n" );

    parser.Rewind();

    vector<Position> positions;
    vector<Normal> normals;
    vector<Texcoord> texcoords;
    positions.reserve( numPositions );
    normals.reserve( numNormals );
    texcoords.reserve( numTexcoords );
    mesh->subMeshes.reserve( numGroups );
    mesh->triangles.reserve( numFaces );

    wstring_convert<std::codecvt_utf8<wchar_t>> converter;
    Hash hasher;
    forward_list<int> hashGrid[65536];

    while( parser.Good() )
    {
        parser.Next();
        string header;
        vector<string> tokens;
        parser.GetHeader( header );
        parser.GetBodyTokens( tokens );

        if( header == "v" )
        {
            Position p;
            p.x = float( atof( tokens[0].c_str() ) );
            p.y = float( atof( tokens[1].c_str() ) );
            p.z = float( atof( tokens[2].c_str() ) );
            if( tokens.size() == 4 )
                p.w = float( atof( tokens[3].c_str() ) );
            else
                p.w = 1.0f;
            positions.push_back( p );
        }
        else if( header == "vt" )
        {
            Texcoord o;
            o.s = float( atof( tokens[0].c_str() ) );
            o.t = float( atof( tokens[1].c_str() ) );
            texcoords.push_back( o );
        }
        else if( header == "vn" )
        {
            Normal n;
            n.x = float( atof( tokens[0].c_str() ) );
            n.y = float( atof( tokens[1].c_str() ) );
            n.z = float( atof( tokens[2].c_str() ) );
            normals.push_back( n );
        }
        else if( header == "f" )
        {
            vector<Vertex> faceVertices = parser.GetFaceVertices( positions, normals, texcoords );
            for( unsigned int i = 0; i < tokens.size() - 2; i++ )
            {
                Vertex v[3];
                v[0] = faceVertices[0];
                v[1] = faceVertices[i + 1];
                v[2] = faceVertices[i + 2];

                // Fill out the vertex indices of the triangle by either
                // pushing vertices into the mesh vector, or finding the
                // index of an already existing equivalent.
                Triangle tri;
                for( int j = 0; j < 3; j++ )
                {
                    unsigned int hash = hasher.GenerateHash16( v[j] );
                    bool found = false;
                    int index;
                    forward_list<int>::iterator it = hashGrid[hash].begin();
                    while( it != hashGrid[hash].end() )
                    {
                        if( mesh->vertices[*it] == v[j] )
                        {
                            index = *it;
                            found = true;
                            break;
                        }
                        it++;
                    }
                    if( !found )
                    {
                        index = mesh->vertices.size();
                        mesh->vertices.push_back( v[j] );
                        hashGrid[hash].push_front( index );
                    }
                    // Vertices are even indices in the t array.
                    tri.t[j * 2] = index;
                }
                mesh->triangles.push_back( tri );
                // Record the index of the triangle just pushed.
                if( !mesh->subMeshes.empty() )
                    mesh->subMeshes.back().triangleIndices.push_back( mesh->triangles.size() - 1 );
            }
        }
        else if( header == "g" )
        {
            mesh->subMeshes.push_back( SubMesh() );
        }
        else if( header == "usemtl" )
        {
            wstring mtl = converter.from_bytes( tokens[0] );
            mesh->subMeshes.back().materialIndex = mesh->materials.size();
            mesh->materials.push_back( LoadMtlMaterial( mtl ) );
        }
    }

    mesh->FindTriangleNeighbors();
    if( normals.empty() )
        mesh->FindVertexNormals();
    mesh->Trim();

    meshCache.push_back( Record<Mesh>( file, mesh ) );
    return mesh;
}
[/code] Before you mention it, TrimLine has an overhead of an extra 0.3 seconds on Lucy, and that is an overhead I am willing to pay for something that can strip /* */ and // style comments on the fly. The GetBody function has an assignment I could probably cut, but I don't use it in the obj loader (I do in the md5 loader, but I am just looking at the obj loader for profiling the backend for now). The part of the obj loader that hashes vertices welds them on the fly, as my internal format requires them that way: that algorithm is not super fast (it incurs at least a 6-second overhead), but fast enough considering what it does.
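The on-the-fly welding step described above (find an equal existing vertex or append a new one) can also be expressed with a std::unordered_map instead of a hand-rolled 16-bit hash grid. This is a hedged alternative sketch, not the loader's actual code; Vertex here is a stand-in for the mesh's real vertex type.

```cpp
#include <cstring>
#include <unordered_map>
#include <vector>

// A stand-in vertex type; welding only merges exact duplicates.
struct Vertex {
    float x, y, z;
    bool operator==(const Vertex& o) const {
        return x == o.x && y == o.y && z == o.z;
    }
};

// FNV-1a over the raw bytes -- fine here because Vertex is three floats
// with no padding, so equal vertices have equal byte patterns.
struct VertexHash {
    size_t operator()(const Vertex& v) const {
        size_t h = 1469598103934665603ull;
        unsigned char bytes[sizeof(Vertex)];
        std::memcpy(bytes, &v, sizeof(Vertex));
        for (unsigned char b : bytes) { h ^= b; h *= 1099511628211ull; }
        return h;
    }
};

struct Welder {
    std::vector<Vertex> vertices;                       // the welded pool
    std::unordered_map<Vertex, int, VertexHash> index;  // vertex -> slot

    // Return the index of an equal existing vertex, or append a new one.
    int weld(const Vertex& v) {
        auto it = index.find(v);
        if (it != index.end())
            return it->second;
        int i = (int)vertices.size();
        vertices.push_back(v);
        index.emplace(v, i);
        return i;
    }
};
```

The trade-off versus the fixed 65536-bucket grid is memory and rehashing cost against far fewer collisions on large meshes.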
  10. [quote name='_swx_' timestamp='1304067364' post='4804353'] Tried running the program through the profiler in VS2010? I would try reusing the same stringstream object if possible to avoid having to reallocate the internal buffer all the time. [/quote] You are onto something; please explain further. My current code is [code]
string line;
getline( input, line );
stream = stringstream( line ); // member of the class doing this code
[/code] This code takes 6 seconds going through the entire file. By contrast, [code]
string line;
getline( input, line );
[/code] takes only 2. There must be a better way to get a line into a stream. I feel like such a noob with this.
  11. [quote name='Hodgman' timestamp='1304036769' post='4804217'] 2) Writing exception-safe code in C++ is hard. Sticking to RAII makes it sound easy, but there's still all sorts of pitfalls, like objects being left in an inconsistent state ([i]some members being written to prior to a throw, but others that were going to be written after[/i]), double-throwing ([i]your raii classes themselves aren't allowed to trigger errors during resource deallocation[/i]), etc... It's very easy to create very subtle bugs if your whole team aren't C++ experts (which is usually the case). I've known guys who've been writing games in C++ for a decade who still aren't familiar with many of the language's quirks and subtleties.
3) Performance. It's not a big deal on a modern PC, and not [i]as much[/i] of a big deal on modern consoles [i]as it used to be[/i], but most C++ exception implementations cause horrendous amounts of code bloat (large executables). In console environments, large executables can in fact be a real problem for performance. On older consoles, it definitely was a huge problem, so this ties into the 'tradition' problem as well.
4) Style. The elegance argument is highly subjective, and attempting to debate it will only incite a holy war. Some people would argue that manual error codes are more readable, maintainable, etc... especially since C++'s [u][url="http://stackoverflow.com/questions/88573/should-i-use-an-exception-specifier-in-c"]exception specifiers[/url][/u] are retarded, which makes reasoning about whether code will throw or not quite a pain when compared to newer languages. Even outside of games, some big players like Google even shun C++ exceptions.
5) Multi-core engines. Standard C++ exceptions aren't clonable, which means passing them across thread boundaries is a pain. In a job-based architecture, the point of invocation and the point of execution aren't actually connected by a call-stack, so handling errors by unwinding the stack does [/quote] I see that I am walking on thin ice.
2) For me and my code, the rule is: constructors, copy constructors, assignment operators and destructors are not allowed to throw, just as they can't return an error code (other than by reference, I suppose). That one rule gets rid of every exception-based pitfall.
3) I agree, this is a good concern.
4) OK, I am not going to push anything, but to me err = someFunction(); if( err == ERROR_FOR_SOMEONE_ELSE_TO_HANDLE ) return err; *is* unwinding the call stack, isn't it?
5) Interesting, definitely a cause for future concern for me. My current multithreading is: Thread 1: read input and queue messages as fast as possible. Thread 2: game state, advance itself as fast as possible. Thread 3: read game state, draw everything, play all sounds as fast as possible. Thread 4: nothing for now. It might be a naive architecture, but it's responsive and clean.
  12. [quote name='fastcall22' timestamp='1304065702' post='4804349'] You could also try compiling the text file into a binary format for faster parsing. [/quote] I could, and I do this with my own internal format (which I export as a raw chunk of indices, vertices, skeletal weights, material strings, etc.). But I am not concerned with that at the moment; I am concerned with why I can't write a fast text file parser using the classes of the C++ standard library.
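The "compile to binary" idea suggested above boils down to dumping the parsed arrays with a count header and reading them back in two reads. A minimal sketch, with a made-up file layout and names (a real exporter would also version the header and handle endianness):

```cpp
#include <cstdint>
#include <fstream>
#include <vector>

// Write a float array as: [uint32 count][count * float].
bool writeBlob(const char* path, const std::vector<float>& data) {
    std::ofstream out(path, std::ios::binary);
    if (!out) return false;
    uint32_t n = (uint32_t)data.size();
    out.write(reinterpret_cast<const char*>(&n), sizeof(n));
    out.write(reinterpret_cast<const char*>(data.data()),
              n * sizeof(float));
    return out.good();
}

// Read it back: one read for the count, one for the payload --
// no per-line tokenizing or atof() at load time.
std::vector<float> readBlob(const char* path) {
    std::vector<float> data;
    std::ifstream in(path, std::ios::binary);
    if (!in) return data;
    uint32_t n = 0;
    in.read(reinterpret_cast<char*>(&n), sizeof(n));
    data.resize(n);
    in.read(reinterpret_cast<char*>(data.data()), n * sizeof(float));
    return data;
}
```

This is why a binary cache beats any text parser: the load cost is two reads and a resize, independent of how the text backend is written.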
  13. [quote name='fastcall22' timestamp='1304063621' post='4804342'] Unless you're comparing against a build with full optimizations enabled, your claims are unfounded. Do you have full optimizations enabled? [/quote] Of course: /Ox in VC++ 10. I also turned off checked iterators and secure SCL (I think I read somewhere that one of these has implications even for release builds).
  14. I decided to change my parser (not really a parser, more of a line getter and string manipulator) to using string streams, for more elegant and readable code. Elegant and readable code is what I got, but the performance hit is massive. Currently it uses getline to get a string and then uses that string to construct a string stream. The string stream is used for various operations like getting the line header / tokens / body, ignoring comments, etc. But just that one string -> stringstream operation takes 6 seconds(!!) looping through a 40MB Stanford Lucy wavefront obj file. 6 seconds!!!!! And that's on a Q6600 (not the best desktop chip nowadays, but it's not like it's some ancient POS either). By comparison, just getline without the string stream construction takes 2.3 seconds. Why on earth does this one constructor triple the time of the code? The >> operator seems slow as well: the body of the obj loader (which consists of getting a header and tokens and then doing some atois and atofs) went from taking 13 seconds to taking 28. My old parser used manual searching and tokenizing with for loops, but the input stream was still a C++ STL-style one, and for comparison it was able to load Lucy in ~18 seconds total, which isn't fast, but it's a lot better than the current 48 (12 of which is just sstream construction). So, three questions:
1) Is using old-school C-style file input recommended over the C++ way?
2) Alternatively, are there some C++ stream speed secrets I am not yet privy to?
3) Is there some way to bypass that slow, slow string stream constructor and getline directly into the string stream?
Seriously.
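On question 3 above: one pattern that avoids constructing a fresh stringstream per line is reusing a single object via str() and clear(). Whether it actually beats per-line construction depends on the standard library implementation; this sketch just shows the reuse pattern itself.

```cpp
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// Tokenize every line of 'input' with ONE istringstream, instead of
// constructing a new one per getline.
std::vector<std::string> tokenize(std::istream& input) {
    std::vector<std::string> tokens;
    std::string line, token;
    std::istringstream stream; // constructed once, reused for every line
    while (std::getline(input, line)) {
        stream.clear();  // drop the eof/fail bits left by the last line
        stream.str(line); // replace the buffer contents in place
        while (stream >> token)
            tokens.push_back(token);
    }
    return tokens;
}
```

The clear() call is the step people usually forget: after the previous line is exhausted the stream is in an eof state, and str() alone does not reset it.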
  15. [quote name='Hodgman' timestamp='1303978246' post='4803887'] Ok. If an exception isn't handled, and does make it all the way up to main, then it is fatal -- the entire program has been irreversibly unwound by that point. It is common to put a "catch all" up in main to catch and log these, but there's no returning to the throw-location to recover. You're right to question "[i]how do you return control flow back to where it was?[/i]", because you can't - this doesn't make sense. I'm guessing you've mixed up two different "I heard's" about exceptions and logging. BTW, in C++ games programming, exceptions are almost universally shunned. They're much more feasible in other languages like C# or Java though. [/quote] Ok, I thought as much, and this is how I had interpreted exceptions before speed-reading what I did and confusing myself. Quite relieving, actually; I was worried that some basic concept was going right over my head. Why are they universally shunned, though? They are more elegant than returning error codes up the stack, and if you stick to RAII principles they should clean up after themselves too. I use them mainly for things like initialization and loading files. Initialization failure is usually fatal, while loading files usually allows whatever asked for the file to substitute a dummy file if, let's say, it wasn't found, or there was some data missing, and so on.