kRogue

Member · Advanced Member · Content count: 527 · Community reputation: 100 Neutral
  1. kRogue

    Pros and cons of singleton classes

I am starting to think this looks like a contest of trolls, but here we go, just so you know:

Quote: That isn't an error condition. If it is then it's an artificially created one to "prove" a point you've already settled on. The question of 'which one do we use' comes down to context. To go back to my early example of a front end GUI and a game level the usage is quite clear; the front end system can use its own asset cache while the game level has its own.

BS, unbelievable BS. Why do you want to have two separate resource managers, especially if they are managing the same type of data? Give me a real use case and a reason why you want them strongly separated.

Quote: Lazy systems also require threading (I assume you don't plan to stop your game loop in order to load something), which means that you now have to carry the baggage of thread safety around with you. Granted, instance passing doesn't help with this, however it doesn't make a compelling case for a singleton either.

No shi*t, Sherlock, of course it would be threaded; just how stupid do you think people are nowadays? At any rate, the point I am making here is that you want to know which resource manager to register with, and here a singleton gives you exactly that.

Quote: However, there is no 'error' condition here, nor have you made a case for global visibility of all the caches. The audio sub-system has no reason to know of the texture cache, for example.

If you were not such an a** you would have noticed that I also stated to have a resource manager for each data type (mesh, texture, music, sample, whatever).

Quote: While automagic registering might seem like a good idea the problems involved are often more trouble than they are worth.

This would have been insightful if you had actually named where that trouble comes from. Oh wait, I know! It is when you insist on having multiple managers and no clear rule about which one to register with.

Quote: The key point is none of these things require a singleton and I would argue that it is only design laziness which has made it seem like it would be a good idea; your 'ewww' and 'irritating' comments would indicate that train of thought.

Right, now call me names too; well, I did just that in this post, so it is OK, I guess. If you go to the message passing that you are masturbating about, what receives those messages? Giggles: one particular fixed object. How different, really, is an implicit assumption that you are always referring to the same object from a singleton? Oh wait! I know! The singleton actively checks that you are referring to the same object all the time by making sure there is only one. Now, if you decide to have multiple managers for a common type and purpose, then you need to address: 1) can a fixed resource be registered with both? If not, can resources be exchanged between managers? What rules do we need to implement to make this work seamlessly? Blah.

Now for your coding example: it is actually more or less what I did (but on a per-type basis). By having a function that always returns the same manager, you essentially have a singleton; it is not enforced by the manager's implementation, but the usage is exactly that. Well, except that your code leaks, as it does not look like the manager ever gets freed. If there is more than one manager that can be used, you need to add more code and logic to decide which one... hmm... more code to get the same thing done; I guess you get paid by the line of code or by the hour.
The only possible way I can defend you is if one does all of the following:

1) Do not make the implementation of the resource manager a singleton (that is good: less work).

2) Either all resources are loaded directly by the resource manager, and the manager registers the object, or the objects need to register themselves, in which case the manager needs to be passed at construction. The first option is definitely better; the second is messy and I would not do it. Similarly, resources can either only be deleted by the manager, or each resource needs to know which manager it is on so it can unregister itself.

The only thing one possibly gets out of this is the ability to test one's manager code. You do not get any new functionality, because the ability to have multiple managers for one type means that you need to classify resources (and what happens when a resource is used by two different areas? Then you need more code and logic to share a resource across multiple managers. Oh wait, that means more salary, because you do more work to achieve the same thing another method does with less work!).

Lastly, some name calling: if the system you worked with was soooooooooo horrible, why was it used? Why weren't its shortcomings addressed if you spent such a loooooooong time dealing with it? The answer I come up with right now, reading your idiocy, is: you did complain, but your complaints were overruled (either you were too junior, you were using it the wrong way, or it would have taken too many hours to do it your way, and to the bosses more hours is not a good idea).

And lastly, how freaking hard would it have been to add the following to the manager: clear_all_but(some list of stuff)? That is what you wanted, and that the manager was a singleton was not the real cause of your troubles. Oh, but wait, you wanted to use the class in a way it was not supposed to be used, so you wanted to hack around it by creating another instance and moving resources from the old instance to the new one. Hmm... sounds like you missed the _real_ point of your experience there. Since clear_all_but() is only a few lines, a sketch is below.
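A minimal sketch of that clear_all_but(), assuming a manager that maps resource names to handles; ResourceManager, Resource and m_resources are names I am making up for illustration, not code from either of our engines:

#include <map>
#include <set>
#include <string>

class Resource
{
public:
  virtual ~Resource(void) {} // frees the underlying data (GL texture, mesh, ...)
};

class ResourceManager
{
public:
  // drop every cached resource whose name is NOT on keep_list, so
  // going level -> front end does not flush the front end's assets
  void
  clear_all_but(const std::set<std::string> &keep_list)
  {
    std::map<std::string, Resource*>::iterator iter, next;
    for (iter = m_resources.begin(); iter != m_resources.end(); iter = next)
      {
        next = iter;
        ++next;
        if (keep_list.count(iter->first) == 0)
          {
            delete iter->second;      // free the resource itself
            m_resources.erase(iter);  // and forget about it
          }
      }
  }

private:
  std::map<std::string, Resource*> m_resources;
};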
  2. kRogue

    Pros and cons of singleton classes

Quote: I find it amusing that you agree with me and yet have implemented one of the worst uses of a singleton you could have... seriously, I read it and I chuckled. There is no good reason for a resource manager to be a singleton, doing so is a wonderful design flaw and is going to lead to lots of problems. I've worked on a game where the engine had a singleton resource manager; I spent a lot of time swearing about this because the design meant it was impossible to load a level, keep the front end textures in memory and then flush the level without dropping all the shaders, textures and model data for the front end. This meant going from level -> front end had a lovely long loading time attached for no good reason. (Console game, we had plenty of resources spare to allow this pattern.) If I was doing a system right now then I'd have multiple resource managers, certainly on a PC build where it is less of a problem, so that I could drop and flush things with no problems. Even if I only had one I'd never ever consider enforcing it at the class level; having more than one wouldn't be an error and there is no need for global access to it, so that's the two reasons for a singleton to exist gone.

Sighs. It sounds like your singleton resource manager was an epic fail of design, then. Let's see what we want from a resource manager:

1) the ability to transparently get a resource (texture, mesh, etc.) from a resource identifier;
2) the ability to delete the resource and, in doing so, remove it from the manager too.

Now, 1) means we need to access "the" manager from somewhere in order to get the resource. There are several typical ways to do it:

1) There is only one, i.e. a singleton. You may have a manager per data type, but the point is that there is an implicit rule for how to reference the appropriate manager object. I chose to have one manager object behind a family of static member functions for each type of resource. In theory, nothing in the code of the managers themselves actually enforced the singleton part, but it would be an error to have two texture managers, because then comes the issue: which one do you use? 99 times out of 100, the singleton pattern is exactly this: we want only one choice, so choose it and use it.

2) You pass the manager along into anything that might load data. This makes lazy loading (i.e. load on reference, rather than LOADing everything at level start) much more irritating to do.

3) You set up some global pointer (eww) which points to the manager you are using.

I elected to do 1): the manager class can only act on managed classes; at construction they "register" themselves with the manager (the easiest key, which I use, is the virtual filename of the resource), and at destruction they unregister from it (if the manager's destructor is active, all add/remove requests are logically ignored). Again, the singleton part is important here: since the classes are registering themselves, they need to know where to register, and that means either a singleton, passing the resource manager at construction, or some global pointer. The second option is particularly irritating, and the third, when mixed with multiple managers for the same resource type, is probably going to cause some odd loading bugs eventually. (A sketch of this self-registration, with the destructor guard, is at the end of this post.)

Now, how would you do an intelligent level loading system? Each level lists the resources it needs; you pass that list to the manager, and it then deletes everything that is not on the list. Voila: you do not free a resource and then immediately load it again. Not exactly brain surgery.
Moreover, you can easily choose which resources must be ready *immediately* at level start and have those loaded up front, and load the rest lazily while playing the level; this works great for me. You can get fancier too: you can create heuristics based on how often levels change, level ordering, and the size of each resource, leaving some resources resident if they are going to be accessed soon, if the level being loaded will end soon, or if the resource is so small there is not a lot gained from freeing it. [Edited by - kRogue on February 1, 2010 3:44:21 PM]
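Here is that sketch: a minimal, hypothetical TextureManager showing self-registration at construction, unregistration at destruction, and the "ignore add/remove while the manager's destructor is active" rule. All the names are made up for illustration:

#include <map>
#include <string>

class ManagedResource;

class TextureManager
{
public:
  // the one manager object for this resource type, created on first use
  static TextureManager&
  get(void)
  {
    static TextureManager R;
    return R;
  }

  void
  register_resource(const std::string &name, ManagedResource *p)
  {
    if (!m_in_destructor)
      m_resources[name] = p;
  }

  void
  unregister_resource(const std::string &name)
  {
    if (!m_in_destructor)
      m_resources.erase(name);
  }

  ~TextureManager(void)
  {
    m_in_destructor = true; // resources deleted below will try to
                            // unregister; logically ignore those requests
    // delete all remaining resources here ...
  }

private:
  TextureManager(void) : m_in_destructor(false) {}

  bool m_in_destructor;
  std::map<std::string, ManagedResource*> m_resources;
};

class ManagedResource
{
public:
  // registers itself under its virtual filename at construction
  explicit ManagedResource(const std::string &name) : m_name(name)
  {
    TextureManager::get().register_resource(m_name, this);
  }

  // unregisters itself at destruction
  virtual ~ManagedResource(void)
  {
    TextureManager::get().unregister_resource(m_name);
  }

private:
  std::string m_name;
};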
  3. kRogue

    Pros and cons of singleton classes

I concur with Phantom:

Quote: Reason to use a singleton: It is an ERROR for more than one to exist AND you REQUIRE GLOBAL access.

And once you have a singleton, you need to tread carefully around its construction, etc. Generally speaking, do not do this:

static mySingletonType mysingletonobject;

but do this:

mySingletonType&
getSingleton(void)
{
  static mySingletonType R;
  return R;
}

But be aware that you have no fine control over when R's destructor is called. One use I have for singletons is a resource manager: you need to get data but don't know if it has been loaded or not, so the resource manager does it for you... but even that case has evils. Something has to delete the data from the manager, and what happens when some of the data are objects that, say, create GL textures on their construction? Naturally their destructors then delete the GL textures, but then you need to make sure that the GL context still exists; worse, if we get into the hairy details, you need to make sure that the context is current in the thread where the stuff gets deleted... As often as not, the control freak in us deals with that issue in our libraries by creating Init() and DeInit() routines... Just tread carefully.
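To make the Init()/DeInit() idea concrete, here is a minimal sketch with a hypothetical TextureCache; the whole point is that DeInit() runs at a moment you choose, while the GL context is still alive and current, instead of at static-destruction time:

class TextureCache
{
public:
  static void
  Init(void)
  {
    if (!m_instance)
      m_instance = new TextureCache();
  }

  // call while the GL context is still alive and current in this thread,
  // so the glDeleteTextures calls in the destructor are legal
  static void
  DeInit(void)
  {
    delete m_instance;
    m_instance = 0;
  }

  static TextureCache&
  get(void)
  {
    // assumes Init() was called first
    return *m_instance;
  }

private:
  TextureCache(void) {}
  ~TextureCache(void) { /* glDeleteTextures on everything cached */ }

  static TextureCache *m_instance;
};

TextureCache* TextureCache::m_instance = 0;

Typical usage: call TextureCache::Init() right after creating the GL context and TextureCache::DeInit() right before destroying it.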
4. The reply about the business case is the best advice I have read so far: if you are coding for a company, then the company (reasonably) expects that whatever it costs them to make the code nicer will be less than what they get out of it. That is business. The estimates of the costs and of the value of the benefits are where opinions enter, so it really depends on the situation.

Myself, I have rewritten large portions of a server handed to me to develop. I did that because the original code made it excruciatingly difficult to add features, and features were to be added. The main convenience I had was that, since I was new and knew nothing of the platform when I started, no one was expecting anything. After that first month, I checked in the changes with everything working, using less CPU (this was a phone), and with the ability to add features to the server faster than the components requesting them (i.e., when one server wanted feature X, I implemented feature X faster than the asker). Everyone was happy. But here is a very hard question: what if I had been wrong? What if the new architecture was really not any better? That code was under 20,000 lines of C (not much), but what if it had been hundreds of thousands of lines? No way would I then "redo" or refactor the code.

If you are working on an open source project, the situation is very different... many open source projects have a habit of refactoring a great deal; whether this is good or bad depends on your opinion. Additionally, when we work on a personal project we often shoot for "beautiful code", changing our public APIs often, but when you work in a team the situation is totally different. If you are working on something that is used by other components and you do a "big cleanup" of the existing code, you had better guarantee that the new code interacts the same way. Sometimes, horrifyingly enough, fixing a bug in one component can cause bugs in other components, because they relied on that behavior. So be very wary of "cleaning up" a large volume of code when you are new to a company, or for that matter to any project.
  5. kRogue

    OpenGL 3.2 Shader help

Mipmapping disabled: check. Texture uniform set to the same texture unit the texture is bound to: check. Just out of curiosity: what hardware, driver and OS?

One desperate check: I see lots of indexing at [gBitmapLibrary.size]; is it safe to assume that the array has at least gBitmapLibrary.size + 1 elements?

But chances are the guilty party is here:

glGenBuffers(1, &gBitmapLibrary.vArrays[gBitmapLibrary.size].tboId);
glBindBuffer(GL_ARRAY_BUFFER, gBitmapLibrary.vArrays[gBitmapLibrary.size].tboId);
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat) * 4 * 2, TexCoords, GL_STATIC_DRAW);
location = glGetAttribLocation(fontProgram, "in_TexCoord");
glVertexAttribPointer(location, 2, GL_FLOAT, GL_FALSE, 0, 0);

You omitted the enable call:

glEnableVertexAttribArray(location);

Lastly, why are you using gl_Vertex? Nowadays the general rule of thumb is to shun all of the GLSL attributes, varyings and uniforms that come from the fixed function pipeline.
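For example, a user-defined attribute in place of gl_Vertex looks like this; a sketch only, where in_Position, modelviewProjection and positionBufferId are names I am assuming, not anything from your code:

#version 150
uniform mat4 modelviewProjection;
in vec4 in_Position;   // replaces gl_Vertex
in vec2 in_TexCoord;
out vec2 tex_coord;
void main(void)
{
  tex_coord = in_TexCoord;
  gl_Position = modelviewProjection * in_Position;
}

and on the C side, the same pattern as for in_TexCoord, with the enable call included this time:

GLint location = glGetAttribLocation(fontProgram, "in_Position");
glBindBuffer(GL_ARRAY_BUFFER, positionBufferId);
glVertexAttribPointer(location, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(location);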
  6. kRogue

    FBO problems fixed =)

A couple of things that I guess would help you. Firstly, make sure that the active texture unit is texture unit 0 and that the active program is 0 too; additionally, set glColor to 1.0 in all channels so the texture modulates correctly. Also, there is really no point in using -1 for the z value of the vertices of that quad: you have disabled depth testing and the projection matrix maps z-values -100 <= z <= 100, so just make the z-coordinates zero anyway:

//... snip
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glPopAttrib();

//NEW STUFF!
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glUseProgram(0);

//Post Processing Manager Needed, draw fullscreen quad for now.
//Prep the screen for drawing a full quad
glPushAttrib(GL_LIGHTING_BIT);
glDisable(GL_LIGHTING);
glDisable(GL_DEPTH_TEST);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0, 1, 0, 1, -100, 100);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glBindTexture(GL_TEXTURE_2D, Lighting->result1);
glBegin(GL_QUADS);
//NEW:
glColor4f(1, 1, 1, 1);
glTexCoord2f(0, 0); glVertex3f(0, 0, 0);
glTexCoord2f(1, 0); glVertex3f(1, 0, 0);
glTexCoord2f(1, 1); glVertex3f(1, 1, 0);
glTexCoord2f(0, 1); glVertex3f(0, 1, 0);
glEnd();
  7. kRogue

    which distro for nVidia&OpenGL?

One comment: as a general rule of thumb, the drivers shipped by a distro (for example Ubuntu's restricted drivers) are significantly older than what is available from the nVidia site, and, oddly enough, installing the drivers from the nVidia site is more robust. I have found that although the distro authors _SHOULD_ know how to package the files, they often mess it up, and ironically nVidia's installer is LESS likely to mess up the system.
  8. kRogue

    VBO 32-Byte Alignment

Comments:

1. I have found that 16-bit indices (GL_ELEMENT_ARRAY_BUFFER) need to be 16-bit aligned on nVidia hardware.
2. I have found that 32-bit indices (GL_ELEMENT_ARRAY_BUFFER) need to be 32-bit aligned on nVidia hardware.
3. For bindless graphics (GL_NV_shader_buffer_load) the alignment rules are a touch trickier, but 16-_byte_ alignment is guaranteed to work.
4. I once wrote to nVidia asking about alignment and got this:

Quote: > Also on the subject of alignment, is there a performance advantage in making vertex attribute data 16-byte aligned?

Probably not, particularly if the data is interleaved. It's actually pretty rare to be bottlenecked by vertex attribute fetch (at least if your buffers are in vidmem); I think your indices would have to be pretty scattered.
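If you want to play it safe when packing several blocks of data into one buffer object, just round each offset up before placing the next block; a tiny helper (align_up is a name I am using here, not anything from GL):

#include <cstddef>

// round offset up to the next multiple of alignment; alignment must be
// a power of two (2 for 16-bit indices, 4 for 32-bit, 16 for bindless)
inline std::size_t
align_up(std::size_t offset, std::size_t alignment)
{
  return (offset + alignment - 1) & ~(alignment - 1);
}

// e.g. placing 32-bit indices after an odd-sized vertex blob:
// std::size_t index_offset = align_up(vertex_bytes, 4);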
  9. kRogue

    Should I learn DirectX 9, 10, or 11?

Quote: Or maybe you should not. And I have been poking around with GL since 1.1. As I already wrote once, unless you need to support Apple, there's no real reason to learn GL.

I am going to call BS on this one. Here goes. Firstly, if you are an indie developer not aiming to sell 500,000+ units, then the sales from non-Windows platforms will be disproportionately higher; for evidence, take a look at World of Goo. Secondly, by learning GL you will get a better notion of what is going on in 3D hardware, in particular what is just convention and what is not. I am not saying don't learn DX10, but ditching GL is a really BAD idea. Compounding this is that if you want to make something for the embedded market, then GLES and GLES2 are really your only choices for 3D graphics. If you have the time, I would recommend:

1. GL3 core and DX10 together; you will find that what the APIs expose mirrors each other strongly.
2. GLES and GLES2 afterwards, as they both share lots of concepts with desktop GL.

As for DX11, I'd seriously wait on it.
  10. kRogue

    [Solved] FTGL problems

If you are already using SDL, you can go for SDL_ttf, at http://www.libsdl.org/projects/SDL_ttf/, which has pre-made .libs and .dll dependencies. SDL_ttf is not precisely ideal at times: every time you "draw" text it creates an SDL_Surface, from which you then need to update a GL texture. In all brutal honesty, it is not really that nasty to work with freetype2 directly and create your own glyph cache as a GL texture.
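The SDL_Surface-to-GL-texture step looks roughly like this; a sketch with error checking omitted, and beware that the channel order of the blended surface may need GL_BGRA instead of GL_RGBA depending on your platform:

#include <SDL.h>
#include <SDL_ttf.h>
#include <GL/gl.h>

GLuint
text_to_texture(TTF_Font *font, const char *text)
{
  SDL_Color white = { 255, 255, 255, 0 };
  SDL_Surface *surf = TTF_RenderUTF8_Blended(font, text, white);

  GLuint tex = 0;
  glGenTextures(1, &tex);
  glBindTexture(GL_TEXTURE_2D, tex);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

  // assumes surf->pitch == surf->w * 4; otherwise copy row by row
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
               surf->w, surf->h, 0,
               GL_RGBA, GL_UNSIGNED_BYTE, surf->pixels);

  SDL_FreeSurface(surf); // the GL texture now holds its own copy
  return tex;
}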
11. Not to be an arse, but using GLSL to make the GPU do computations for you is the really hard, painful way to do it; take a look at CUDA (if you are using nVidia) or OpenCL (more generic)... though in all honesty, only nVidia really has a good implementation of compute on the GPU.
12. The "Apps Hungarian" notation talked about in that article is very useful, especially when you are dealing with multiple coordinate systems, different units, etc. The "Systems Hungarian" notation is a piece of junk (and I have always shunned it). Also, that blog has this:

Quote: Before we go, there’s one more thing I promised to do, which is to bash exceptions one more time. The last time I did that I got in a lot of trouble. In an off-the-cuff remark on the Joel on Software homepage, I wrote that I don’t like exceptions because they are, effectively, an invisible goto, which, I reasoned, is even worse than a goto you can see. Of course millions of people jumped down my throat. The only person in the world who leapt to my defense was, of course, Raymond Chen, who is, by the way, the best programmer in the world, so that has to say something, right?

Here’s the thing with exceptions, in the context of this article. Your eyes learn to see wrong things, as long as there is something to see, and this prevents bugs. In order to make code really, really robust, when you code-review it, you need to have coding conventions that allow collocation. In other words, the more information about what code is doing is located right in front of your eyes, the better a job you’ll do at finding the mistakes. When you have code that says

dosomething();
cleanup();

... your eyes tell you, what’s wrong with that? We always clean up! But the possibility that dosomething might throw an exception means that cleanup might not get called. And that’s easily fixable, using finally or whatnot, but that’s not my point: my point is that the only way to know that cleanup is definitely called is to investigate the entire call tree of dosomething to see if there’s anything in there, anywhere, which can throw an exception, and that’s ok, and there are things like checked exceptions to make it less painful, but the real point is that exceptions eliminate collocation. You have to look somewhere else to answer a question of whether code is doing the right thing, so you’re not able to take advantage of your eye’s built-in ability to learn to see wrong code, because there’s nothing to see.

Now, when I’m writing a dinky script to gather up a bunch of data and print it once a day, heck yeah, exceptions are great. I like nothing more than to ignore all possible wrong things that can happen and just wrap up the whole damn program in a big ol’ try/catch that emails me if anything ever goes wrong. Exceptions are fine for quick-and-dirty code, for scripts, and for code that is neither mission critical nor life-sustaining. But if you’re writing an operating system, or a nuclear power plant, or the software to control a high speed circular saw used in open heart surgery, exceptions are extremely dangerous. I know people will assume that I’m a lame programmer for failing to understand exceptions properly and failing to understand all the ways they can improve my life if only I was willing to let exceptions into my heart, but, too bad. The way to write really reliable code is to try to use simple tools that take into account typical human frailty, not complex tools with hidden side effects and leaky abstractions that assume an infallible programmer.

This statement, although no longer on the same thread topic, I agree with wholeheartedly. The other bit (which I did not bother quoting) which I hate in C++: non-mathematical operator overloading.
In my (not so humble) opinion, overloading mathematical operators to do non-mathematical things is a horrible idea. Worse, even with mathematical things one can get into serious idiocy as well:

//the type T has an associated multiply.
template<typename T, unsigned int N>
array<T,N>
operator*(const array<T,N> &vector, const T &scalar)
{
  array<T,N> R;
  for (unsigned int i = 0; i < N; ++i)
    R[i] = vector[i] * scalar;
  return R;
}

template<typename T, unsigned int N>
array<T,N>
operator*(const T &scalar, const array<T,N> &vector)
{
  array<T,N> R;
  for (unsigned int i = 0; i < N; ++i)
    R[i] = scalar * vector[i];
  return R;
}

but with that comes potential idiocy:

T foo, bar;
array<T,3> joe, sally;

//how many multiplies of T are in this call?
//(one for foo*bar, then one per element: 4 total)
sally = foo*bar*joe;

//how many multiplies are in this call?
//(one per element, twice: 6 total)
//worse, if the type T has a non-commutative multiply,
//this gives a different answer than the above
sally = joe*foo*bar;

//just to be safe, you have to do this to multiply on the left:
sally = (foo*bar)*joe;

//and to minimize the multiplies:
sally = joe*(foo*bar);

Some UI libraries have matrix classes with operator* overloaded, where the matrix maps a big structure and the operation is expensive; as such, there is a world of difference between

Region = Mat1*(Mat2*RegionStart);
Region = (Mat1*Mat2)*RegionStart;

Sighs.
  13. kRogue

    Border of a Mesh

One little note: if your mesh is texture mapped, there is a chance that there are vertices with the same position but different texture coordinates, so you need to adapt the above to handle the case where two different vertex IDs refer to the same geometric position.
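A minimal sketch of that adaptation: weld vertex IDs by position before walking edges, so all vertices at the same position share one canonical ID. The names (vec3, weld_by_position) are made up for illustration:

#include <map>
#include <vector>

struct vec3 { float x, y, z; };

// strict weak ordering so positions can key a std::map; exact compare
// shown here; quantize first if your exporter emits slightly different
// floats for the "same" position
struct position_less
{
  bool operator()(const vec3 &a, const vec3 &b) const
  {
    if (a.x != b.x) return a.x < b.x;
    if (a.y != b.y) return a.y < b.y;
    return a.z < b.z;
  }
};

// returns, for each vertex ID, a canonical ID shared by all vertices
// at the same geometric position (texture coordinates ignored)
std::vector<unsigned int>
weld_by_position(const std::vector<vec3> &positions)
{
  std::map<vec3, unsigned int, position_less> first_seen;
  std::vector<unsigned int> canonical(positions.size());

  for (unsigned int i = 0; i < positions.size(); ++i)
    {
      std::map<vec3, unsigned int, position_less>::iterator
        iter = first_seen.find(positions[i]);
      if (iter == first_seen.end())
        {
          first_seen[positions[i]] = i; // first vertex at this position
          canonical[i] = i;
        }
      else
        {
          canonical[i] = iter->second; // reuse the earlier vertex's ID
        }
    }
  return canonical;
}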
14. Signals _might_ be a good idea for events that trigger things to happen; http://libsigc.sourceforge.net/ is a signal library with minimal overhead (it is mostly syntactic sugar unless you get into the marshalling stuff).
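Basic usage against libsigc++ 2.x looks like this; a sketch, where the pickup event is just an invented example:

#include <iostream>
#include <sigc++/sigc++.h>

// a hypothetical event handler: fired when the player picks up an item
void on_pickup(int item_id)
{
  std::cout << "picked up item " << item_id << "\n";
}

int main(void)
{
  sigc::signal<void, int> pickup_signal;
  pickup_signal.connect(sigc::ptr_fun(&on_pickup));
  pickup_signal.emit(42); // calls every connected slot
  return 0;
}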
15. I have found SDL to be pretty sweet 9 times out of 10. But, always a but: I have found that the same pad, on the same computer, will be exposed differently by SDL under Linux versus under Windows. However, this is not exactly SDL's fault: it is just a wrapper over some other layer; under Windows it is DirectX, and under Linux it is the joystick interface (reading from /dev/input/js*), and for console-to-USB adapters I have seen oddball things (like extra axes and such). What really smells bad is that there is this standard called HID, and the pads are supposed to follow it, so how on earth does Linux get an extra (non-existent) axis that Windows does not?!

In my experience, SDL models a fixed pad (like a PS2 look-alike or a PS2-USB adapter) with many axes as one joystick with many axes, but depending on the OS, the D-pad gets mapped to an axis or a hat. For PS2-USB adapters the mapping is often close to a random function, and highly OS dependent (one adapter I have is, I think, exposed as one pad under one OS but as two pads under another). Sighs.

I totally agree with the others' advice here: implement a remapping, a la controller settings, in your application (not taking into account anything exotic like motion-sensitive stuff) and present each direction of each axis as configurable. Some games that I thought did this really well were Descent 3, Descent: FreeSpace 2 and X-Men Legends. Another good example of a good mapping UI and methodology is console emulators. I *hate* it when games are hard-wired to a few controllers (Devil May Cry 3 for PC was quite guilty). Especially with those PS2-USB adapters, the axis and button mappings are often just oddball, yet those old PS/PS2 controllers are absolutely wonderful for play. Also, it is a good idea to use the joystick name, not the number, when saving the mapping, as the number depends on the order the joysticks were plugged in; a sketch of that is below.
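A minimal sketch of keying saved mappings by name, against the SDL 1.2 joystick API; the Mapping type and saved_mappings store are hypothetical:

#include <map>
#include <string>
#include <SDL.h>

// axis/button/hat assignments chosen by the user in the remapping UI
struct Mapping { /* ... */ };

std::map<std::string, Mapping> saved_mappings;

// requires SDL_Init(SDL_INIT_JOYSTICK) to have been called already
SDL_Joystick*
open_pad_with_mapping(int device_index, Mapping *out_mapping)
{
  // SDL 1.2: the name is available before opening the device
  const char *cname = SDL_JoystickName(device_index);
  std::string name = cname ? cname : "unknown";

  SDL_Joystick *pad = SDL_JoystickOpen(device_index);

  std::map<std::string, Mapping>::iterator iter = saved_mappings.find(name);
  if (iter != saved_mappings.end())
    *out_mapping = iter->second; // reuse the user's saved layout
  // else: fall back to the in-game remapping UI for this pad

  return pad;
}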