
chairthrower

Member
  • Content Count: 262
  • Joined
  • Last visited

Community Reputation: 440 Neutral

About chairthrower

  • Rank: Member
  1. chairthrower

    Do MP3s degrade over time?

    The 'degradation effect' has been observed many times and is well documented, so do not be too worried. It is the variety of exploding mp3s that is currently not well understood and a subject of active inquiry. You can probably imagine that these are actually quite dangerous, since to date no one has been able to develop a reliable 'tell-tale' that would serve as an indication that the mp3 is about to undergo rapid and spontaneous decomposition. Exploding mp3s will normally take out a platter of the HD, and due to the high spin rate of modern drives it is entirely possible for the platter ceramic to breach the aluminium enclosure, sending out a sort of spray of platter-shrapnel soup throughout the room/office etc. Obviously this can be quite unfriendly to any warm-bodied humans who might also be caught in the path [orthogonal to the axis of drive rotation]. Although a better understanding of these events and their triggers would be desirable (the last I heard was that new research is pending), they are reasonably rare, and therefore deemed not a cause for excessive alarm.
  2. Quote: Original post by Dim_Yimma_H
     Quote: Original post by chairthrower
     streams shouldn't allocate disproportionately to their work, as they maintain an internal buffer. stringstream seems faster for me than sprintf for a not-too-contrived example, even considering conversion to std::string, perhaps due to better inlining.
     stringstream elapsed 970
     sprintf/stack elapsed 1100

     That's interesting, though I should mention that I got the following in a VC++ 2005 release build with standard settings:
     stringstream elapsed 21593
     sprintf/stack elapsed 4782
     I ran it on a Pentium 4 2.8 GHz CPU; not sure yet where the difference comes from, maybe I should look closer at the code. What kind of demon computer did you run it on? =)

     Sigh. Well, I guess I just succeeded in disproving my own argument if it fails to be fast across various machines/OSes/compilers, and I should have tested various specs. Sorry for misleading; fwiw I tend to write small programs like this on a recent Intel laptop, posix OS, gnu toolchain.
  3. streams shouldn't allocate disproportionately to their work, as they maintain an internal buffer. stringstream seems faster for me than sprintf for a not-too-contrived example, even considering conversion to std::string, perhaps due to better inlining.

     stringstream elapsed 970
     sprintf/stack elapsed 1100
     (in ms)

     #include <sstream>
     #include <iostream>
     #include <cstdio>    // sprintf
     #include <cstdlib>
     #include <ctime>

     static clock_t sw_start;
     static void start_timer() { sw_start = clock(); }
     static double elapsed_time()
     {
         clock_t stop = clock();
         return double(stop - sw_start) * 1000.0 / CLOCKS_PER_SEC;
     }

     int main()
     {
         unsigned ret = 0;
         unsigned n = 10000;
         unsigned n2 = 1000;

         start_timer();
         for( unsigned k = 0; k < n; ++k)
         {
             std::stringstream os;
             for( unsigned i = 0; i < n2; ++i)
             {
                 os << i;
                 os << "foo";
             }
             std::string s = os.str();
             //std::cout << s << std::endl;
             ret += s[ rand() % s.size() ];
         }
         std::cout << "stringstream elapsed " << elapsed_time() << std::endl;

         start_timer();
         for( unsigned k = 0; k < n; ++k)
         {
             char buf[ 1000000];
             char *p = buf;
             for( unsigned i = 0; i < n2; ++i)
             {
                 p += sprintf( p, "%d", i);
                 p += sprintf( p, "%s", "foo");
             }
             *p = 0;
             //std::cout << buf << std::endl;
             ret += buf[ rand() % ( p - buf ) ]; // edit
         }
         std::cout << "sprintf/stack elapsed " << elapsed_time() << std::endl;
         return ret;
     }
  4. chairthrower

    [win32] Edit boxes

    'Subclass' the control by overriding the window procedure (WndProc) of the control, something like this:

        originalProc = (WNDPROC) SetWindowLong( hWnd, GWL_WNDPROC, (LONG) Context::WndProc );

    You can then catch the WM_KEYDOWN message. Make sure to call the original handler, to let the control do its thing:

        return CallWindowProc( originalProc, hWnd, Msg, wParam, lParam );

    On closing of the window you should probably catch WM_NCDESTROY and restore the original proc:

        SetWindowLong( hWnd, GWL_WNDPROC, (LONG) originalProc );

    A fuller sketch putting these pieces together is below.
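    A minimal self-contained sketch of the above, assuming a hypothetical edit control handle hEdit. It uses SetWindowLongPtr/GWLP_WNDPROC, the pointer-safe variants of SetWindowLong/GWL_WNDPROC, which behave the same way here:

        #include <windows.h>

        static WNDPROC originalProc = NULL;

        static LRESULT CALLBACK EditSubclassProc( HWND hWnd, UINT Msg, WPARAM wParam, LPARAM lParam )
        {
            switch( Msg )
            {
            case WM_KEYDOWN:
                // inspect wParam here, e.g. react to VK_RETURN
                break;

            case WM_NCDESTROY:
                // the window is going away - put the original proc back
                SetWindowLongPtr( hWnd, GWLP_WNDPROC, (LONG_PTR) originalProc );
                break;
            }
            // let the edit control do its normal work
            return CallWindowProc( originalProc, hWnd, Msg, wParam, lParam );
        }

        // installation, after the edit control has been created:
        //   originalProc = (WNDPROC) SetWindowLongPtr( hEdit, GWLP_WNDPROC, (LONG_PTR) EditSubclassProc );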
  5. chairthrower

    C++ Logger Tutorial

    Maybe the gamedev.net log tutorial? Simple STL Logging System
  6. unique() indicates that there is only one reference, or use_count() will give you a count of outstanding references. Using this information would allow you to periodically sweep the holding collection and erase entries whose resource has already been deallocated. Alternatively, for your collection use weak_ptrs but pass out shared_ptrs, and then in the destructor (or custom deleter) of the resource fire an event (i.e. drop()) which the collection could hook in order to erase it automatically. Alternatively use a boost intrusive_ptr (weak_ptrs don't work with it) and manually adjust the ref count to account for the additional reference maintained by the collection (e.g. decrement), then do the periodic sweep and delete. A sketch of the weak_ptr variant is below.
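     A minimal sketch of the weak_ptr variant described above, using the periodic sweep rather than the custom deleter. std::shared_ptr/std::weak_ptr mirror the boost interfaces; the names ResourceCache and Resource are illustrative, not from the original post:

        #include <map>
        #include <memory>
        #include <string>

        struct Resource { /* ... */ };

        class ResourceCache
        {
            // the cache keeps only weak references, so it never extends lifetimes
            std::map< std::string, std::weak_ptr< Resource> > entries;

        public:
            std::shared_ptr< Resource> get( const std::string &key)
            {
                // hand back the existing resource if it is still alive
                if( std::shared_ptr< Resource> r = entries[ key].lock())
                    return r;
                // otherwise (re)create it and remember a weak reference
                std::shared_ptr< Resource> r( new Resource);
                entries[ key] = r;
                return r;
            }

            // periodic sweep: erase entries whose resource has been deallocated
            void sweep()
            {
                std::map< std::string, std::weak_ptr< Resource> >::iterator it = entries.begin();
                while( it != entries.end())
                {
                    if( it->second.expired())
                        entries.erase( it++);
                    else
                        ++it;
                }
            }
        };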
  7. Quote: I don't want to copy ANY objects. I'd much more prefer to have one copy of each and share that one copy among all the parts of the program which need to access it. Actually, that's why I moved from vector to ptr_vector in the first place ;)

     Given pointers in a vector, i.e. vector< A *>, when an element is removed from the vector (erase() or clear() or destruction of the vector) the pointer is removed but the pointed-to object will still persist. This is bad since it can easily lead to memory leaks if you don't also arrange to delete the pointed-to object as well as removing the pointer from the container. But it is good from the point of view that another pointer or reference to the object somewhere else in the program can continue to use the object. In general a vector (when holding pointers) does not take responsibility for lifetime management of the underlying objects.

     ptr_vector is quite different in that it claims ownership of the objects pointed to by the pointers it holds. This means that removing an element will remove the pointer *and* free the pointed-to object. However, because it claims ownership over the underlying object, it is bad if you wish to make use of that object somewhere else in the program, since another pointer will now be left 'dangling', i.e. pointing at a location or object that has been freed.

     If you want to share the same objects freely without having to worry (too much) about which part of the code is responsible for managing lifetimes, or because one part of the program considered in isolation simply cannot know what the lifetimes of a set of objects are, then investigate shared_ptr. A shared pointer will maintain a count of the outstanding references to the object and then do the work of deleting the object when there are no pointers left outstanding. They work well with vectors too, so it is possible to do vector< shared_ptr< A> >. It is a lot of typing - perhaps the price of c++.

     In general, if you can manage items by always copying (and avoid pointers) then this is good, since it is simple and the c++ way. If the object is complex, involves heavy setup or uses polymorphism, for example, then you will probably need to work with pointers or references. If you can arrange for a clear lifetime then use a vector (and your own delete) or ptr_vector - to my mind they are reasonably equivalent. If you can't do this, and you need to share out references to your objects and are unsure about lifetimes, then consider shared_ptr. A short sketch contrasting the three styles is below.
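     A short sketch contrasting the three ownership styles discussed above (std::shared_ptr standing in for boost::shared_ptr; the struct A is illustrative):

        #include <vector>
        #include <memory>
        #include <boost/ptr_container/ptr_vector.hpp>

        struct A { int value; };

        int main()
        {
            // raw pointers: the vector does not manage lifetime - we must delete,
            // but other pointers to the object stay usable until we do
            std::vector< A *> raw;
            raw.push_back( new A);
            delete raw.back();      // our responsibility, or the object leaks
            raw.pop_back();

            // ptr_vector: owns its elements; erase()/clear()/destruction frees them
            boost::ptr_vector< A> owned;
            owned.push_back( new A);   // freed automatically when 'owned' dies

            // shared_ptr: shared ownership; freed when the last reference goes away
            std::vector< std::shared_ptr< A> > shared;
            shared.push_back( std::make_shared< A>());
            std::shared_ptr< A> alias = shared[ 0];   // usable elsewhere in the program
            shared.clear();   // 'alias' still keeps the object alive

            return 0;
        }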
  8. chairthrower

    how come game industry is mostly old men with beards

    Because at the end of the day, when it's time to kick back and relax, it makes the transition to pirate that much easier. Aye, a mighty day, figh'n t' path-finding, 'though I struck the VC a fearsome blow after dinner. So c'mere, me proud beauty, and smartly with the rum.
  9. chairthrower

    std::map, pool allocators

    Ok, I had to clarify this again for myself, with a restricted example comparing the speed of the boost pool allocator with std::allocator (recent g++ -O3). clock_t and uint64_t (C99?) may not be portable, unfortunately. dummy just contains the summed contents of the raw pointer values and is returned from main() to force evaluation and avoid the compiler optimising too much away. The boost fast pool allocator allocates 1M ints and then frees them all in one hit; std::allocator allocates 1M ints and frees each one immediately after allocating, as it goes. Each test runs 100 times. The results are about a 3:1 improvement in speed for pooled over the std allocator:

    fast_pool_allocator: 1290ms
    std::allocator: 3830ms

    The test is *heavily* orientated in favour of std::allocator and probably could not even be considered a representative example. This is due to the immediate free after allocation, and the presumption that std::allocator will just reallocate what it just freed. Additionally, pool_alloc with 1M x 4 bytes might be going outside L2 cache. It would be possible to change the behaviour to a more representative use case, but I believe it shows that pooling is not a given order-of-magnitude type win. Other factors such as standardization of code, contingent operations (tree rotations in std::set etc.) and cache should be considered.

    #include <iostream>
    #include <memory>     // std::allocator
    #include <ctime>
    #include <stdint.h>   // uint64_t (C99)
    #include <boost/pool/pool_alloc.hpp>

    static clock_t m_sw_start;
    static void start_timer() { m_sw_start = clock(); }
    static double elapsed_time()
    {
        clock_t stop = clock();
        return double(stop - m_sw_start) * 1000.0 / CLOCKS_PER_SEC;
    }

    int main()
    {
        unsigned n = 100;
        uint64_t dummy = 0;

        start_timer();
        for( unsigned k = 0; k < n; ++k)
        {
            typedef boost::fast_pool_allocator< int> alloc_type;
            alloc_type pa;
            for( unsigned i = 0; i < 1000000; ++i)
            {
                int *pi = pa.allocate( 1);
                dummy += (uint64_t)pi;
            }
            boost::singleton_pool< boost::fast_pool_allocator_tag, sizeof( int)>::release_memory();
        }
        std::cout << "fast_pool_allocator " << elapsed_time() << "ms" << std::endl;

        start_timer();
        for( unsigned k = 0; k < n; ++k)
        {
            typedef std::allocator< int> alloc_type;
            alloc_type pa;
            for( unsigned i = 0; i < 1000000; ++i)
            {
                int *pi = pa.allocate( 1);
                dummy += (uint64_t)pi;
                pa.deallocate( pi, 1);
            }
        }
        std::cout << "std::allocator " << elapsed_time() << "ms" << std::endl;
        return dummy;
    }

    Edit: this is a much better example for std::allocator, which for gcc still gives roughly the same 3:1 improvement:

    for( unsigned i = 0; i < 10000; ++i)
    {
        const unsigned nn = 100;
        int *pi[ nn];
        for( unsigned j = 0; j < nn; ++j)
            pi[ j] = pa.allocate( 1);

        func( pi, nn); // do-nothing function defined in an external file guarantees
                       // filling of pi etc.

        for( unsigned j = 0; j < nn; ++j)
        {
            dummy += (uint64_t)pi[ j];
            pa.deallocate( pi[ j], 1);
        }
    }

    [Edited by - chairthrower on May 21, 2009 4:33:58 PM]
  10. chairthrower

    std::map, pool allocators

    Quote: Quote: Quote: Original post by chairthrower
    I suspect the reason that my performance gains with pooling were marginal is because a standard allocator's job is really very simple.

    When using a custom allocator, the run-time improvement of allocations should be between factor 5 and 50. Perhaps your use of structures performs a negligible number of allocations, so they don't represent a bottleneck. The std::allocator is the absolute worst possible thing to use for small allocations (maps, linked lists).

    I think (I have notes, but they're jumbled with source that is also mixed with lots of other tests) that I used my own and a boost pool allocator for an adapted interval tree based on STLport's rb-tree, as a sweepline container, compiled with a modern g++ with optimization turned up. My best recollection is about a 2x performance improvement running the sweep while avoiding any numeric edge-intersection calcs. I believe this structure was probably quite heavy to manipulate - e.g. tree rebalancings (the additional interval requirement meant it was a couple of times slower than a std::set or std::map structure by comparison). So perhaps this extra work tended to predominate in my timings, and you would see higher ratios working with, say, a simple linked list.

    Something that did play on my mind was the fact that only approx log n edges would need to be contained at any point in the sweep - so I believed there could be an issue ensuring everything relevant was kept packed tightly in cache, which I decided could be better guaranteed by using std::allocator (freeing my edge structures as it went) - but it is possible that using the pooling meant things were spilling to main memory or even L2, and that this would play on timings. My test case was map geometry with approx 100k vertices - but I freely acknowledge that I do not really have the experience to make any judgement about this, and know little about what type of cache behaviour I could expect. In any case this is always going to be anecdotal, so perhaps I should have refrained from making any comment about my specific relative performance tests. It should be relatively easy to develop and test an allocator for a specific problem.

    I agree that pool allocators would seem better adapted to lists, sets and maps, and should probably be avoided for things like vectors that allocate relatively infrequently.

    Edit: More context - since my algorithms were dominated by my initial log n sort on edge.min_y and also by the log n behaviour of the sweepline container, I judged the improvement to be not justified in my case.

    [Edited by - chairthrower on May 21, 2009 12:35:33 PM]
  11. chairthrower

    std::map, pool allocators

    I implemented some computational geometry algorithms and experimented a bit with pool allocation. My structures involved things like interval trees, hashtables and lots of linked lists. I experimented with my own allocator<> using pooling behind the scenes as a drop-in replacement for the stl-managed stuff and obtained a modest improvement, but didn't find the results compelling enough to continue using it over the far more standard std::allocator<>. My experiments with boost pool obtained similar performance results. One thing to be aware of is that the boost stuff doesn't (so far as I know) have a drop-in stl-compatible allocator that also supports granular control over deallocation - instead it's a sort of singleton global cleanup thing that frees up everything at one time, which is a bit of a compromise.

    I suspect the reason that my performance gains with pooling were marginal is that a standard allocator's job is really very simple. It knows the size of the thing to be allocated and destroyed, so it can probably just maintain a free list embedded in relatively large contiguous blocks. On reallocation there is no trying to match up an appropriately sized block from the store like malloc() or new - instead it ought to be a pretty much guaranteed O(1) operation that is also going to get mostly inlined. One thing I suspect is that there may be cache implications involved with pooling, whereas more deterministic memory handling would reuse reclaimed memory, which might be over-allocated with pooling. This of course is more in the realm of speculation on my part, and would vary wildly with use anyway.

    I would agree that an allocator<> is the right point at which to experiment - at least so that it can be reused with the stl, and to ease comparative testing with a minimum of effort. A sketch of plugging a pool allocator into stl containers this way is below.
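    A minimal sketch of the drop-in usage described above, using boost::fast_pool_allocator; the typedef names (pooled_map, pooled_list) are illustrative:

        #include <map>
        #include <list>
        #include <functional>
        #include <boost/pool/pool_alloc.hpp>

        // the containers behave exactly as before - only the source of the
        // node memory differs
        typedef std::map< int, int, std::less< int>,
                          boost::fast_pool_allocator< std::pair< const int, int> > > pooled_map;
        typedef std::list< int, boost::fast_pool_allocator< int> > pooled_list;

        int main()
        {
            pooled_map m;
            for( int i = 0; i < 1000; ++i)
                m[ i] = i * i;

            pooled_list l( 100, 0);

            // as noted above there is no granular deallocation - memory is
            // handed back in one hit through the singleton pool, e.g.
            //   boost::singleton_pool< boost::fast_pool_allocator_tag,
            //                          sizeof( int)>::release_memory();
            return int( m.size() + l.size());
        }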
  12. Would two feathers move together with the same acceleration (profile) as two cannonballs? - I liked chemistry more than physics, so I don't know. My guess: faster than my current refactor - about 2 weeks.
  13. chairthrower

    Loading PNG Files

    This is what I currently use; it's rough as guts and just wraps the libpng calls to give a c++ function. Limited or no error handling, and only a subset of pixel types - it basically only works. My Renderable class is just a std::vector of agg rgba32 pixels. You will want to change this to your own representation, but you can see how the pixels are set in the code. Also remove the timer stuff, which I was using because I was loading some big textures.

    header.h

    #ifndef LIBPNG_H
    #define LIBPNG_H

    #include <string>
    #include <boost/shared_ptr.hpp>

    struct Renderable;

    boost::shared_ptr< Renderable> read_png_file( const std::string & file_name );

    #endif

    source.cpp

    // g++ main.cpp libpng.a -lm -lz
    #include <iostream>
    #include <cassert>
    #include <vector>
    #include <string>
    #include <stdexcept>
    //#include <unistd.h>
    #include <stdlib.h>
    #include <stdio.h>
    #include <string.h>
    #include <stdarg.h>

    #include "libpng.h" // our interface
    #include "renderable.h"
    #include "timer.h"

    #define PNG_DEBUG 3
    #include <png.h>

    void abort_(const char * s, ...)
    {
        va_list args;
        va_start(args, s);
        vfprintf(stderr, s, args);
        fprintf(stderr, "\n");
        va_end(args);
        abort();
    }

    boost::shared_ptr< Renderable> read_png_file( const std::string &filename)
    //void read_png_file( const char *filename, Renderable &buffer)
    {
        start_timer();

        boost::shared_ptr< Renderable> buffer( new Renderable);

        png_structp png_ptr;
        png_infop info_ptr;
        int number_of_passes;
        char header[8]; // 8 is the maximum size that can be checked

        /* open file and test for it being a png */
        FILE *fp = fopen( filename.c_str(), "rb");
        if (!fp)
        {
            // abort_("[read_png_file] File %s could not be opened for reading", filename);
            throw std::runtime_error( "could not open file");
        }
        fread(header, 1, 8, fp);
        if (png_sig_cmp( (png_byte *)header, 0, 8))
            abort_("[read_png_file] File %s is not recognized as a PNG file", filename.c_str());

        /* initialize stuff */
        png_ptr = png_create_read_struct( PNG_LIBPNG_VER_STRING, NULL, NULL, NULL);
        if (!png_ptr)
            abort_("[read_png_file] png_create_read_struct failed");

        info_ptr = png_create_info_struct(png_ptr);
        if (!info_ptr)
            abort_("[read_png_file] png_create_info_struct failed");

        if (setjmp(png_jmpbuf(png_ptr)))
            abort_("[read_png_file] Error during init_io");

        // we need gamma handling of color space somewhere ??
        png_init_io(png_ptr, fp);
        png_set_sig_bytes(png_ptr, 8);
        png_read_info(png_ptr, info_ptr);

        number_of_passes = png_set_interlace_handling(png_ptr);
        png_read_update_info(png_ptr, info_ptr);

        // bloody hell what a messy interface - does it always have an alpha or not?
        // we'd like to initialize the io with a pure memory buffer; the thing is
        // not alpha channelled - this is not critical
        assert( sizeof( png_bytep) == 8);

        /* read file */
        if (setjmp(png_jmpbuf(png_ptr)))
            abort_("[read_png_file] Error during read_image");

    #if 0
        // we have the bitdepth ... ie 4 or 8 - but how do we classify -- ugghhh
        std::cout << "------------------------" << "\n";
        std::cout << "width/height " << info_ptr->width << ", " << info_ptr->height << "\n";
        std::cout << "color_type " << (unsigned)info_ptr->color_type << "\n";
        std::cout << "bit_depth " << (unsigned)info_ptr->bit_depth << "\n";
        std::cout << "rowbytes " << info_ptr->rowbytes << std::endl;
    #endif

        // what type of structure are we going to use ??
        std::vector< png_byte > buf( info_ptr->height * info_ptr->rowbytes);
        std::vector< png_byte*> rows( info_ptr->height);
        for( unsigned y = 0; y < rows.size(); ++y)
            rows[ y] = & buf[ y * info_ptr->rowbytes];

        png_read_image( png_ptr, &rows[ 0]);
        fclose(fp);

        buffer->resize( info_ptr->width, info_ptr->height );

        for( unsigned y = 0; y < rows.size(); ++y)
        {
            png_byte *row = rows[ y];
            unsigned x = 0;
            if( info_ptr->color_type == PNG_COLOR_TYPE_RGB && info_ptr->bit_depth == 8)
            {
                for( unsigned i = 0; i < info_ptr->width * 3; i += 3)
                {
                    assert( i + 2 < info_ptr->rowbytes);
                    // potentially need to adjust bit depth to our 0xff range
                    unsigned char r = row[ i];
                    unsigned char g = row[ i + 1];
                    unsigned char b = row[ i + 2];
                    unsigned char a = 0xff;
                    buffer->rbase.copy_pixel( x, y, agg::rgba8( r, g, b, a));
                    ++x;
                }
            }
            else if( info_ptr->color_type == PNG_COLOR_TYPE_GRAY && info_ptr->bit_depth == 8)
            {
                for( unsigned i = 0; i < info_ptr->width; ++i)
                {
                    // potentially need to adjust bit depth to our 0xff range
                    unsigned char r = row[ i];
                    unsigned char a = 0xff;
                    // there's not enough range ?
                    // r = (r / 2) + 0x7f;
                    buffer->rbase.copy_pixel( x, y, agg::rgba8( r, r, r, a));
                    ++x;
                }
            }
            else
                assert( 0);
        }

        //std::cout << "file '" << filename << "' load time " << elapsed_time() << "ms\n";
        return buffer;
    }
  14. Humbug, so it was a bad analogy - I'll dispense with any attempt at explanation and instead just link to some more: high school bad analogies
  15. >> bad hair day? Just write down your numbers and be done with it.

      Alternatively, it was a great insight into the current technical expectations of engine developer candidates, the scale and scope of modern engines, trends in maximising capabilities against technical challenges, and how technical development is both driven and impeded by the bottom line, with an added personal reflection. Given the OP's reformulated question about 'trying to gain a better understanding of the technical side of game development', it was informative in a way that a statement about the length of a piece of string is not.