
Thunder Sky

  • Member
  • Content Count: 49
  • Community Reputation: 144 Neutral
  • Rank: Member
  1. I am trying to take advantage of the power of Unix shells on Windows. I wanted my solution to be native (i.e. no Cygwin) and to integrate well with Windows (C:\foo\bar style paths, calling Unix commands from batch scripts, etc.). I currently use zsh-nt/unxutils/gnuwin32/others and it is working pretty well. When building a custom WinXP CD I have a long chain of scripts calling other scripts, and I want to indent the output to make it easier to follow what is going on. It is not working as well as I would like. An example is in order:

> cat a.sh
echo "${IND}a in"
( export IND="$IND "; sh b.sh )
echo "${IND}a out"

> cat b.sh
echo "${IND}b in"
( export IND="$IND "; sh c.sh )
echo "${IND}b out"

> cat c.sh
echo "${IND}c in"
echo "${IND}c out"

> sh a.sh
a in
 b in
 c in
 c out
 b out
a out

The parentheses create a subshell, and export then sets the environment variable within that subshell. As you can see, the indentation works from a->b but not from b->c, even though a and b are almost identical. It looks like "export IND" is not updating the variable if it already exists. Would that be a correct assumption? Does anyone have a solution to this? Thanks in advance.
  2. Thunder Sky

    boost::thread_specific_object?

    Quote: Original post by Hodgman
    For non-static thread-locals, you've got to do it through these APIs, but a template class can do all the work for you, so you can just write e.g. thread_local<int> m_Variable and the right API calls will be done behind the scenes to allocate your variable.

Provided 'thread_local<int>' has the characteristics of an 'int' and not an 'int*', this is exactly what I am looking for, but as far as I can see this isn't possible because of conversion operators returning by value, right? If not, then back to my original question: where can I find a library that gives me such a template class?

EDIT: I think I got the answers to my questions; I'm "stuck" with boost::thread_specific_ptr, but it can be used with non-static data. Thank you all.
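The EDIT above says boost::thread_specific_ptr can be used with non-static data. Here is a minimal sketch of what that looks like, assuming Boost.Thread is available; the names Widget, counter_ and work are made up for illustration. Each Widget instance owns its own thread_specific_ptr, so every (object, thread) pair gets a separate int, created lazily on first access.

#include <boost/bind.hpp>
#include <boost/thread/thread.hpp>
#include <boost/thread/tss.hpp>
#include <iostream>

class Widget
{
public:
    // Returns this thread's copy of the counter, creating it on first access.
    int& counter()
    {
        if (!counter_.get())
            counter_.reset(new int(0));
        return *counter_;
    }

private:
    boost::thread_specific_ptr<int> counter_; // non-static member: one TLS slot per Widget instance
};

void work(Widget* w)
{
    for (int i = 0; i < 1000; ++i)
        ++w->counter();                       // touches only the calling thread's copy
}

int main()
{
    Widget w;
    boost::thread t1(boost::bind(&work, &w));
    boost::thread t2(boost::bind(&work, &w));
    t1.join();
    t2.join();
    std::cout << w.counter() << std::endl;    // prints 0: main's copy was never incremented
}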
  3. Thunder Sky

    boost::thread_specific_object?

    Thank you for the replies.

    Quote: Original post by MaulingMonkey
        Quote: Original post by Thunder Sky
        but why not standardize the above abstraction and relieve the user of an extra dereference?
    Because you can't, 99% of the time. Consider:
    thread_specific_object<myclass> o;
    o.myclassmethod();
    This cannot be made to work as C++ does not allow the overloading of 'operator.'

However, ((myclass)o).myclassmethod() would work, right? I was hoping that since 'operator.' isn't defined for thread_specific_object<myclass>, the compiler would implicitly try to convert it (via defined conversion operators) to a type that has it defined, i.e. handle the rather ugly cast above for us. I don't have Boost installed yet, so I can't check whether that's the case, I'm afraid.

    Quote: Original post by MaulingMonkey
    The extra character involved with the occasional '*' or the occasional '->' instead of '.' isn't worth worrying about

Agreed, though if you have a lot of existing code, replacing 'myclass o' with 'thread_local_object<myclass> o' would magically add thread-local functionality without requiring any other change.

    Quote: Original post by MaulingMonkey
        Quote: Original post by Thunder Sky
        Also, is there a good way to implement thread local (non-static) member variables?
    Surely you could just use boost::thread_specific_ptr again?

Again, since I can't test for myself yet, I'm not sure. I assumed that since the __thread compiler extension requires the data to be static, Boost did as well. How else would it implement the functionality? Still, Boost has surprised me before. If this is the case, I'll have to look into how the std::vector solution would fit my situation.
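On the specific question of whether the compiler will apply a user-defined conversion when it sees o.myclassmethod(): it will not, because member access never triggers implicit conversions; the conversion has to be requested explicitly, and a cast to myclass by value would operate on a temporary copy. A tiny sketch of this, with made-up names following the discussion:

#include <iostream>

struct myclass
{
    void myclassmethod() { std::cout << "called" << std::endl; }
};

struct wrapper                                 // stand-in for thread_specific_object<myclass>
{
    myclass obj;
    operator myclass&() { return obj; }        // conversion operator returning a reference
};

int main()
{
    wrapper o;
    // o.myclassmethod();                      // error: wrapper has no such member, and no
                                               // conversion is attempted for '.'
    static_cast<myclass&>(o).myclassmethod();  // OK: conversion requested explicitly
    ((myclass&)o).myclassmethod();             // OK: same thing, C-style
    // ((myclass)o).myclassmethod();           // compiles, but calls the method on a copy of obj
    myclass& m = o;                            // OK: the conversion applies in an initialization
    m.myclassmethod();
}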
  4. Hi, I was wondering why Boost doesn't define something like this:

template <typename T>
class thread_specific_object
{
public:
    thread_specific_object() { ptr.reset(new T()); }
    operator T&() { return *ptr; }

private:
    static boost::thread_specific_ptr<T> ptr;
};

Not only Boost, but every other library/API/compiler extension I have looked at always makes the user access thread-local data through pointers. I realize pointers may have to be used behind the scenes, but why not standardize the above abstraction and relieve the user of an extra dereference? Anyway, I guess there's a logical explanation, please enlighten me :)

Also, is there a good way to implement thread-local (non-static) member variables? I've heard arguments that this would never be necessary, but none of them convinced me, and I have a situation where I believe they would certainly be helpful.

Regards
Rob
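For comparison, a sketch of the pointer-style compromise discussed in the replies above: since 'operator.' cannot be overloaded, the wrapper exposes operator* and operator-> instead, and hides the reset(new T()) behind lazy initialization. This is not Boost's API, just an illustration; thread_local_object and get() are invented names, and it assumes boost::thread_specific_ptr works as a non-static member (as concluded elsewhere in this thread).

#include <boost/thread/tss.hpp>

template <typename T>
class thread_local_object
{
public:
    T& get()
    {
        if (!ptr.get())
            ptr.reset(new T());   // default-construct this thread's copy on first access
        return *ptr;
    }

    T& operator*()  { return get(); }
    T* operator->() { return &get(); }

private:
    boost::thread_specific_ptr<T> ptr;   // non-static: one slot per wrapper instance
};

// Usage sketch:
//
//   thread_local_object<int> counter;
//   *counter += 1;               // each thread increments its own int
//
//   struct Foo { void bar() {} };
//   thread_local_object<Foo> foo;
//   foo->bar();                  // '->' instead of '.', as suggested in the replies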
  5. Thunder Sky

    Where to obtain optimal drivers?

    Well, thanks, I guess it won't get much clearer than that. I'll download both and do some benchmarking.

EDIT:
    Quote: I T I S N O T I M P O R T A N T
What I mean is that there's clearly a difference. I just wanted to know whether that difference is nothing more than motherboard manufacturers adding bloat software to their packages, or whether they actually do something useful (something like linking the storage drivers with the audio drivers to obtain better performance when playing sound files).

Cheers

[Edited by - Thunder Sky on April 13, 2009 9:43:32 AM]
  6. Thunder Sky

    Where to obtain optimal drivers?

    I kind of feel I am not making myself understood here. Basically, the only thing I want to know is what to do with my motherboard: will downloading the chipset/LAN/audio drivers individually from their respective manufacturers result in worse performance than downloading the "mobo pack" from (in my case) ASUS? My CDs are long gone, so I'm going to have to download the drivers off the net anyhow.
  7. Thunder Sky

    Where to obtain optimal drivers?

    Thank you, owl, for a very quick reply. I am aware that it mostly doesn't matter what driver version you have, but during this reformat/reinstall I have become rather picky and would like to get it right. Therefore: what do you mean by "the only drivers that are worth updating are video drivers"? Is that "worth" as in "worth the time and effort to get the chipset/component drivers from their respective manufacturers' (cryptic) websites" (which I really don't mind), or "worth the risk of it not working as well, or at all" (which I don't want!)? Thanks again.
  8. Thunder Sky

    Where to obtain optimal drivers?

    Hi, I have made a fresh Windows XP install and need to install my drivers, starting with the motherboard drivers. Is it best to obtain the drivers from the motherboard manufacturer or from the individual chipset/component manufacturers? The chipset/part manufacturers often have more up-to-date (higher version number) drivers, but I have heard the motherboard manufacturer might modify the drivers to optimize them for the particular board. Maybe chipset drivers are OK to download from their respective manufacturer, while one should stick with the motherboard manufacturer's audio/LAN drivers? Driver packages are often a bit vague about what they include. How likely is it that USB 2.0 will work without downloading extra drivers for it?

While we're at it, what about graphics cards: ATI/nVidia versus Gigabyte/Sapphire/etc.?

I have spent some time slimming down my Windows install, creating scripts for installing software, setting up different hardware profiles, and so on, so I would like to get the most out of my hardware. Any insight would be highly appreciated.
  9. Thunder Sky

    IDE Performance

    Prior to formatting the hard drive of an older machine I wanted to store my data on another computer. I thought pushing the data over a network cable would be too slow for my taste, so I removed the drive from my old computer and stuck it into my newer one. I connected the drive to my IDE cable (which already had a DVD burner attached) and booted up. Performance was terrible. I thought that maybe it was a master/slave problem. The drives have no markings telling me about the jumper settings and the manuals are long gone, so I simply disconnected the DVD burner. Performance was better, but as soon as I started copying the files my mouse started to lag. I have no idea what's happening, but surely this indicates something isn't right.

HD Tune 2.55 reports that my transfer rate is 1.4 MB/s. It also reports the drive's temperature as 50C/120F, which is about 10-20C/15-30F higher than my other drives. I'm having a hard time imagining this being the cause, though. I tried updating my drivers via My Computer -> Manage -> Hardware, but it couldn't find any newer drivers.

I have an ASUS P5B SE motherboard. The only options available in the BIOS IDE settings menu are:

SATA Configuration: Disabled/Compatible/Enhanced
Configure SATA as: IDE/AHCI
IDE Detect Time Out: 0/5/10/15/20/25/30/35
JMB363 RAID Controller: Enabled/Disabled
JMB363 Mode Select: RAID/IDE/AHCI

I downloaded the manual for the motherboard, but strangely it didn't explain in any more detail what these do. Any help would be appreciated.
  10. Thank you for the replies, MortusMaximus and Captain P. Unfortunately you seem to have misunderstood the situation I'm in. I am creating an underlying tracking system; how the user wishes to create objects that use that system is beyond my control. Therefore I have to "complicate it with arrays of objects", since I want the system to be transparent and to deal with all the different situations. Also, as stated in the OP, I'm doing this as a learning exercise, so Boost is not an option just yet (until I expand to what will become a bigger framework).

The statement that "only the first object is treated as reference-counted" is not entirely correct. All objects are tracked with their own RefCount; the first element is special only in that it represents the entire array in the garbage collector. Since it's the first element that gets delete[]ed, and it is therefore responsible for the lives of all the elements, it has to keep track of them.

On a side note, in MortusMaximus' example a simple sizeof(T) would do, since there we know the type T of the objects we want to create. That is not the case for us, since we only have control over, and can execute code in, the base class. So I'm still stuck, I'm afraid.

EDIT - I am willing to change my design if anyone has a better idea of how to do this. The system should not impose any restrictions on how the objects can be used, though.

EDIT2 - I just noticed that the size variable in MortusMaximus' example would be 1, i.e. one object would fit between the two addresses. This is how pointer arithmetic works, as far as I know. To get the size in bytes you would have to reinterpret_cast to byte* and then do the subtraction.

Cheers

[Edited by - Thunder Sky on November 15, 2008 11:24:05 AM]
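To illustrate the point in EDIT2: subtracting two T* pointers yields a count of T elements, while subtracting two char* pointers yields a count of bytes. A small sketch with a made-up Tracked type:

#include <cstddef>
#include <iostream>

struct Tracked { double payload[4]; };   // hypothetical tracked object

int main()
{
    Tracked arr[2];
    Tracked* a = &arr[0];
    Tracked* b = &arr[1];

    std::ptrdiff_t elements = b - a;                        // 1: one whole Tracked fits in between
    std::ptrdiff_t bytes    = reinterpret_cast<char*>(b)
                            - reinterpret_cast<char*>(a);   // sizeof(Tracked) bytes

    std::cout << "elements: " << elements
              << ", bytes: "  << bytes
              << ", sizeof(Tracked): " << sizeof(Tracked) << std::endl;
}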
  11. That would indeed work, and I have considered it. The only caveat with this approach is that it is legal (and quite possible, if the number of elements to allocate is determined at runtime) for the user to allocate an array of size 1. If the size of the array is 1, the MM_HEAP_ARRAY on top of the StaticStack would have to be popped in the first element's constructor. At that point there is no way (that I know of; this is the real problem, really) to tell whether there are more elements after the first. Thanks very much for the fast response though!

PS
    Quote: I think new allocates everything contiguously, but I'm not 100% on my pointer arithmetic...
It does, so the pointer arithmetic is OK.
  12. Hi, I'm creating a simple object tracking system based on the Enginuity articles, primarily for learning purposes. It uses reference counting and comes with a simple garbage collector. When an object's RefCount reaches zero the object is considered dead and might potentially be deleted at any time. If the RefCount is above zero the object is considered alive. The garbage collector works with a dead list and a live list. Pretty basic.

I've got everything working except for arrays. When allocating an array on the heap, the address of the first element (the address obtained in an overloaded operator new[]) is added to the dead list. It is my intention that every object in an array of tracked objects should hold the address of the first object in the array. Whenever an object in the array is alive, it increases the RefCount of the first object in the array by one. This makes sure that the first object's RefCount (the one checked by the garbage collector) isn't zero while any of the other objects in the array are being used.

I use a static variable to hold what type of object is currently being constructed. The constructor then assigns this value to a non-static member variable. Since tracked objects may contain other tracked objects within themselves, this static variable is actually a stack of variables.

Here is my problem: when do I pop MM_HEAP_ARRAY(_TRAIL) off the stack? That is, when do I know that the entire array has been constructed?

My first idea was to try to retrieve x in an expression "new Object[x]" inside the overloaded operator new[]. operator new[] would then push the value onto the static stack. This would be the easiest approach, but to my knowledge it isn't possible (at least without non-portable hacks). Since we do know the size of the allocated block of memory, I then tried to test for

static_cast<byte*>(this) + sizeof(*this) == static_cast<byte*>(start_address) + mem_size

This isn't good, since I use a TrackedObject base class that others derive from, and sizeof(*this) returns the size of the base class, not the size of the actual object in the array.

For those interested, here is another thread where I tried to tackle the problem a bit differently. It didn't work out. Thanks for any replies.
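The sizeof(*this) problem mentioned above comes from sizeof being evaluated at compile time from the static type: inside a base-class member function it always reports the base size, never the most-derived one. A small sketch with made-up names:

#include <cstddef>
#include <iostream>

struct TrackedObject
{
    virtual ~TrackedObject() {}

    // sizeof(*this) is resolved at compile time from the static type,
    // which inside this function is always TrackedObject.
    std::size_t reportedSize() const { return sizeof(*this); }
};

struct BigObject : TrackedObject
{
    char extra[100];
};

int main()
{
    BigObject obj;
    std::cout << "sizeof(BigObject):  " << sizeof(BigObject)  << std::endl   // includes 'extra'
              << "obj.reportedSize(): " << obj.reportedSize() << std::endl;  // sizeof(TrackedObject) only
}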
  13. Thunder Sky

    Casting an address to the type of this

    OK, I think I oversimplified the situation I'm in. I didn't want to bore you with the details, but here goes: as I vaguely stated in the last sentence of the OP, I'm writing an object tracker/collector using basic reference counting. It is designed to be completely transparent and should therefore work with all kinds of objects (stack, heap, arrays). All memory-managed objects derive from an IMMObject class that handles it all. It's the arrays that are giving me problems. I have this:

class IMMObject
{
public:
    struct MMArrayInfoStruct
    {
        void* start;         // filled in by IMMObject::operator new[]
        unsigned long size;  // filled in by IMMObject::operator new[]
        unsigned long alive; // number of objects in the array that are not ready for deletion (RefCount > 0)

        bool InRange(void* ptr)
        {
            return start <= ptr && ptr < static_cast<char*>(start) + size;
        }
    };

    // shared registry of all tracked array allocations (needs a definition in a .cpp file)
    static std::list<MMArrayInfoStruct> MMArrayInfo;

    MMArrayInfoStruct* MMFindArrayInfo()
    {
        for (std::list<MMArrayInfoStruct>::iterator it = MMArrayInfo.begin(); it != MMArrayInfo.end(); ++it)
            if (it->InRange(this))
                return &(*it);
        return NULL;
    }

    void* operator new[](size_t size)
    {
        MMMemoryTypeMessenger = MM_HEAP_ARRAY;
        void* ptr = ::operator new[](size);
        std::cout << "IMMObject::operator new[] " << ptr << std::endl;

        MMArrayInfoStruct info;
        info.start = ptr;
        info.size = size;
        info.alive = 0;
        MMArrayInfo.push_back(info);
        return ptr;
    }
};

Now, in IMMObject::MMAddRef() I do (amongst other things) this:

void IMMObject::MMAddRef()
{
    if (MMRefCount == 0)
    {
        if (MMMyMemoryType == MM_HEAP_ARRAY || MMMyMemoryType == MM_HEAP_ARRAY_TRAIL)
        {
            MMArrayInfoStruct* info = MMFindArrayInfo();
            assert(info);
            if (info->alive == 0)
            {
                MMDeadList.Remove(FIRST_ELEMENT); // THIS IS WHAT I'M HAVING PROBLEMS WITH
                MMLiveList.Add(FIRST_ELEMENT);
            }
            ++(info->alive);
        }
    }
    ++MMRefCount;
}

Only deletable objects are ever put in MMDeadList; that means MM_HEAP_ARRAY_TRAIL objects are not. The dead/alive state of the first instance in the array is meant to represent the state of the whole array (since delete[]ing the first instance deletes the whole array). I think that is enough to give you an understanding of why I'm doing things the way I am; if something is unclear, just ask me to be more specific and I'll post the code.

    Quote:
        Quote: A quick question #2: My (top level) derived class' constructor reports that (this) points to a location 4 bytes after &array[0].
    This is impossible if the array is defined as an array of derived class instances.

You are correct, I meant to say: my (top level) derived class' constructor reports that (this) points to a location 4 bytes after the address returned by MyClass::operator new[] (which gets the address from IMMObject::operator new[], which in turn gets its address from ::operator new[]).

EDIT - this means that, in a sense, we're not delete[]ing the same address new[] gave us... weird.

I assume that these 4 bytes are used by the compiler to store information about the nature of the array. What I don't understand is why the compiler doesn't choose to omit those bytes from the address returned by ::operator new[]. The programmer should not have to be aware of compiler-specific things such as this. (I must say I haven't read the standard, so I'm not sure, but I think the compiler is free to store this kind of information anywhere it likes.)

On a related note, all my operator new[] functions have the return type void*. This is then converted implicitly to MyClass* in a statement like MyClass* a = new MyClass[10];. I thought that implicit conversions from void* to any other pointer type weren't legal. I've had trouble converting from void* using dynamic_cast and maybe even static_cast (can't remember).

Thank you very much for your replies.

EDIT - just a small reflection on the return of new[]
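A minimal sketch of where those 4 bytes probably come from. When the element type has a non-trivial destructor, most compilers ask operator new[] for slightly more memory than n * sizeof(T), store the element count (the "array cookie") at the front of the block, and hand the program a pointer offset past it; delete[] later reads the count back from the cookie. The exact size and layout of the cookie are implementation-defined, so the numbers printed below are compiler-specific; the type name Cookie is made up.

#include <cstddef>
#include <iostream>
#include <new>

struct Cookie
{
    int data;
    ~Cookie() {}   // a non-trivial destructor forces the compiler to remember the element count

    static void* operator new[](std::size_t size)
    {
        std::cout << "operator new[] asked for " << size << " bytes" << std::endl;
        return ::operator new[](size);
    }
    static void operator delete[](void* p)
    {
        ::operator delete[](p);
    }
};

int main()
{
    std::cout << "10 * sizeof(Cookie) = " << 10 * sizeof(Cookie) << " bytes" << std::endl;
    Cookie* p = new Cookie[10];   // typically requests a few extra bytes for the cookie
    std::cout << "address handed to the program: " << p << std::endl;
    delete[] p;                   // the runtime recovers the count from the cookie
}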
  14. Hi, I'm having a problem referencing the first instance of an array from an (inherited) member function of another instance in the same array. In the specific method I know:

1) the address of the current instance, i.e. "this"
2) the address of the first instance in the array (the value returned by ::operator new)
3) even though this method resides in a base class that has been derived from, the compiler should be able to figure out the run-time type of the object
4) 3 should be able to give us the correct sizeof(*this)

The above four points should be all that's needed to get a TypeOfThis& reference to the first element. Actually, while I wrote this I came up with the rather ugly:

(*(this - (reinterpret_cast<char*>(const_cast<IMMObject*>(this)) - reinterpret_cast<char*>(arrayInfo.start)) / sizeof(*this)))

This (for the lazy) casts both this (of type IMMObject*) and the address of the start of the array (of type void*) to char*. Then it subtracts them to get the distance in bytes between the two (this is safe: sizeof(char) is defined to be 1). It then divides that value by sizeof(*this) to get the number of objects between the current *this object and the first object in the array. Last, it subtracts that value from (this) to get the final destination.

Well... what's to say... I don't like it. What bugs me is that I have the correct address from the start (arrayInfo.start); I just want to cast it to the right pointer type. Do you guys know of any better way to do it?

A quick question #2: My (top level) derived class' constructor reports that (this) points to a location 4 bytes after &array[0]. This in turn means that my arrayInfo.start isn't correct (in the given context; I want it to point to an object of the most derived type). This is true even for release builds. How do I fix this?

Thank you very much for any replies; my quick and simple object tracker has become a monster.
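One tidier variant of the expression above, sketched under the assumption that each most-derived class reports its own size through a virtual function; that hook (objectSize()) and the helper names are inventions for illustration, not code from the post. Walking back whole elements from this lands on the IMMObject subobject of element 0, which avoids relying on sizeof(*this):

#include <cstddef>

class IMMObject
{
public:
    virtual ~IMMObject() {}

    // Each most-derived class overrides this with sizeof(itself), sidestepping
    // the fact that sizeof(*this) in the base class yields the base size.
    virtual std::size_t objectSize() const = 0;

protected:
    // blockStart is assumed to point at the first element of the array
    // (i.e. already adjusted past any array cookie; see question #2).
    IMMObject* firstInArray(void* blockStart)
    {
        char* self  = reinterpret_cast<char*>(this);
        char* start = static_cast<char*>(blockStart);

        // number of whole elements between the array start and *this
        std::size_t index = static_cast<std::size_t>(self - start) / objectSize();

        // step back that many elements: this is element 0's IMMObject subobject
        return reinterpret_cast<IMMObject*>(self - index * objectSize());
    }
};

class Particle : public IMMObject   // hypothetical tracked type
{
public:
    virtual std::size_t objectSize() const { return sizeof(Particle); }

private:
    float position[3];
};

The caveats of the original expression still apply: the elements must be contiguous and all of one most-derived type, and blockStart must point at element 0 rather than at the raw block returned by ::operator new[].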
  15. I'm in no way qualified to answer this, but it seems like your preprocessor is only making one pass while the others make at least two. Since the "kk" in your result is generated "on the fly" by "k ## k", the preprocessor won't recognize it as a macro on its first pass and hence won't evaluate it. Since "kk(1)" evaluates to "1", the only thing you have to do to get the same result as the other preprocessors is to evaluate it again (i.e. make another pass).