About boogyman19946

  • Rank
    Advanced Member

Personal Information

  • Interests
  1. boogyman19946

    Stupid Things I've Done

    This one in particular I didn't do, but it was in the existing codebase of an image-processing library that I'm still working on right now. My task is to take this library and get it to execute on CUDA, keeping code changes to a bare minimum, which is understandable because some of these algorithms are supremely complicated; Haralick features alone could be a two-semester senior project.

    I managed to get one of the algorithms to run on CUDA and process the images. It runs through fine but fails to produce the correct values. It doesn't crash; it's just that every value that comes out of it is 0. I could already see this was going to be a pain in the ass because I hadn't modified the algorithm in any way. Stepping through the debugger with two instances of the code in parallel (the CPU version, which worked fine, and the GPU version, which gave me 0s for all the values), I finally found the culprit:

    size2 = size2++;

    Looking at it, I wondered how this would even be executed. I assumed that first the value of size2 would be assigned to itself and then incremented, which I guess is what happened on the CPU. Alternatively, even if the ++ were evaluated first, size2 + 1 would still overwrite the old value of size2. Either of those would have made sense to me; the GPU, however, found an even better way. I haven't disassembled the code because, quite frankly, I didn't care to see what really happened, but I'm guessing that behind the scenes it cached the value of size2, incremented size2, and then evaluated the assignment, overwriting the incremented value with the original (and since size2 is initialized to 0, it stays 0 forever). As it turns out, assigning to a variable you're post-incrementing in the same expression is undefined behavior in C++ anyway, so both compilers are technically off the hook.

    Either way, the assignment is entirely redundant and serves only to confuse.

    Okay, while I'm here, maybe I'll rant a little bit (>.>).
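A minimal illustration of the point above (the variable name is kept from the post; the fix is simply to drop the assignment):

```cpp
#include <cassert>

// The original line was "size2 = size2++;", which both assigns to and
// post-increments size2 in one expression. Before C++17 that was undefined
// behavior, so the CPU and GPU compilers were each entitled to produce
// different results. (Since C++17 the right-hand side is sequenced before
// the assignment, which pins down exactly the "always 0" result the GPU gave.)
// The redundant assignment should simply be dropped:
int next_size(int size2) {
    size2++;  // plain increment: well-defined on every compiler
    return size2;
}
```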
    Of course this isn't the only silliness in these algorithms. Sometimes I wonder if people just got bored while coding and brute-forced whatever made the code work. For instance, take this function:

```cpp
double imgmoments(pix_data *pixels, int width, int height, int x, int y)
{
    double *xcoords, sum;
    xcoords = new double[width*height];
    int row, col;
    /* Generate a matrix with the x coordinates of each pixel. */
    for (row = 0; row < height; row++)
        for (col = 0; col < width; col++)
            xcoords[row*width+col] = pow((double)(col+1), (double)x);
    sum = 0;
    /* Generate a matrix with the y coordinates of each pixel. */
    for (col = 0; col < width; col++)
        for (row = 0; row < height; row++) {
            if (y != 0) {
                if (x == 0)
                    xcoords[row*width+col] = pow((double)(row+1), (double)y);
                else
                    xcoords[row*width+col] = pow((double)(col+1), (double)y)*xcoords[row*width+col];
            }
            sum += xcoords[row*width+col]*get_pixel(pixels, width, height, col, row, 0).intensity;
        }
    delete xcoords;
    return(sum);
}
```

    At first glance, this isn't necessarily a bad solution. If the second loop referred to multiple values calculated in the first loop on a single iteration, you might not want to recalculate the pow function too many times, so you could look at it, assume the programmer knew what he or she was doing and was just caching values to avoid redundant calculations, and leave it alone. My task, however, is to move the code to CUDA because the algorithms are slow as balls, so MY ultimate goal is actually optimization. Allocating memory in code running on the GPU is kind of detrimental in that regard, so of course I get skeptical when I see a function that appears to be executed for every pixel allocating a dynamic array of doubles the size of the whole image. Looking through the second loop, I tracked down the references to xcoords; there are four.
    The array might have been worth keeping around if the algorithm addressed multiple different values of xcoords on a single iteration (even then it probably wouldn't be), but every access of xcoords happens at [row*width + col]... literally the current coordinate, every time. To make sure I wasn't losing my mind, I copied the codebase, modified the function to replace the whole dynamic-array shtick with a single local variable in the inner loop, and ran both the old and new functions on an image to make sure the moments came out the same, which they did.

    The whole codebase was kind of silly like that. I've encountered gotos, functions that spanned over 300 lines, and sometimes both gotos and big-ass functions at once.

    I've fought with the code structure since I started this project. The ImageMatrix class that represents loaded images contains over 20 methods, and whoever wrote it has never heard of the term "encapsulation", because all of the class's member variables are public. I've ended up arguing with my professor multiple times about this, because his position is that "code structure is not important to it running fast or on CUDA." What's actually a waste of my time, and of anyone else's who worked on this trash, is that when I need to rearrange the code he wrote (or let other students write; it had multiple authors), I have to take every noodle of the spaghetti and separate it from the rest before I can even see how it's going to run on the GPU.

    Okay, rant over XD. Nevertheless, once everything is actually up and running, the project isn't too bad. I'm still doing a bunch of copy-pasta, but the code is genuinely important in the fields that use it.
    These algorithms provide a lot of useful information about images, and given the volume of data we get for processing (some of those really big telescopes take a monstrous number of images of the cosmos every night), parallelizing them is a nice step forward.
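The simplification described above might look like this. This is a hand-written reconstruction, not the actual codebase: pix_data and the pixel accessor are stand-ins for the library's real types, and the quoted logic (including its odd col+1 in the y branch) is preserved as-is so the results match the original:

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for the library's pixel type (assumption, not the real API).
struct pix_data { double intensity; };

static double intensity_at(const pix_data* pixels, int width, int col, int row) {
    return pixels[row * width + col].intensity;
}

// imgmoments without the width*height scratch array: every access in the
// original happened at [row*width + col], so one local value per pixel
// suffices. The loop is also row-major now, which is the cache-friendly order.
double imgmoments(const pix_data* pixels, int width, int height, int x, int y) {
    double sum = 0.0;
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            double v = std::pow((double)(col + 1), (double)x);
            if (y != 0) {
                if (x == 0)
                    v = std::pow((double)(row + 1), (double)y);
                else
                    v = std::pow((double)(col + 1), (double)y) * v;
            }
            sum += v * intensity_at(pixels, width, col, row);
        }
    }
    return sum;
}
```

Summation order changes relative to the original's column-major second loop, so the results can differ in the last bits of floating-point rounding, but nothing more.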
  2. boogyman19946

    Variable range rope physics.

    You know, I can't actually remember if we did bother with that. I think I still have the code on a branch somewhere so I might just go back and test it to see, but I think we'll roll with this setup from here forward.
  3. boogyman19946

    Variable range rope physics.

    So we were never able to fix the bug with the rope using hinge joints, which is unfortunate because we liked the rope's flimsiness when it was made out of discrete components. What we ended up doing is using a static image for the rope and stretching it as the player changes the tongue's length. It actually looks moderately decent, though it still requires some work; the most important thing is that it's stable. I also tried drawing a curved rope based on the player's velocity, but that just made the tongue look like a PVC pipe.
  4. boogyman19946

    Variable range rope physics.

    That's the issue I'm facing. Under static conditions, the rope works ok. If a player just hangs off of it and nothing changes about the situation, Unity will be able to resolve the simulation correctly. The issue arises when the player's rope/tongue either attaches to a moving platform or gets pulled by some other object like the floating platform in the game prototype above. Unfortunately, that's the main method of locomotion in my game and I'm having a really hard time finding a solution to it.
  5. boogyman19946

    Variable range rope physics.

    Not sure if I'm in the right place to ask this, but here goes.

    I'm having issues beating Unity's joints into submission. I've looked around but can't come up with a good solution, and I'm trying to avoid implementing my own joints (if I even can).

    My intent is to have the player swing on a rope. Using a single joint like a distance joint looks and feels really crappy, so I tried making the rope out of multiple joints, like a lot of folks suggest. This actually looks good when it works, but it only works in very particular circumstances. The latest iteration is here:

    http://kjarosz.github.io/Hookshot/Hookshot

    (Can't play in Chrome; needs Unity Web Player; controls are regular WASD + space + mouse.)

    When you attach yourself to the floating platform, the bug is immediately obvious: Unity can't seem to calculate the necessary forces correctly, and the joints spaz out.

    I've switched out the joints for all the other available ones, but they all have similar issues. A distance/spring joint just stretches out like bubble gum. The same thing happens when you shorten the "rope length": I remove joints and let Unity figure out that the player needs to be lifted up appropriately, but the same bug occurs.

    The build in the link tries to fake it a bit, but the result is really crappy. Under the hood there is essentially a single distance joint connecting the player to the anchor point, which maintains the player's position, while the visible rope is made out of a multitude of jointed objects that track the player's position through a script.

    Has anyone dealt with this issue? Is there any reasonable way to solve it? At one point I thought about implementing my own joints, but I don't know how viable a solution that is. Any ideas?
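For what it's worth, when the built-in joint solver blows up, a common fallback is to simulate the rope yourself with Verlet integration plus iterated distance constraints (the classic "Advanced Character Physics" approach). A minimal 2D sketch in plain C++, not Unity-specific; all numbers and names here are made up for illustration:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct P { double x, y, px, py; };  // current and previous position

// One rope: particles at fixed spacing, particle 0 pinned to the anchor.
void step(std::vector<P>& pts, double dt, double spacing, int relaxIters) {
    // Verlet integration: velocity is implicit in (pos - prevPos).
    for (std::size_t i = 1; i < pts.size(); ++i) {
        P& p = pts[i];
        double vx = p.x - p.px, vy = p.y - p.py;
        p.px = p.x; p.py = p.y;
        p.x += vx;
        p.y += vy + 9.81 * dt * dt;  // gravity (y grows downward here)
    }
    // Relax the distance constraints; more iterations = stiffer rope.
    for (int it = 0; it < relaxIters; ++it) {
        for (std::size_t i = 1; i < pts.size(); ++i) {
            P& a = pts[i - 1];
            P& b = pts[i];
            double dx = b.x - a.x, dy = b.y - a.y;
            double d = std::sqrt(dx * dx + dy * dy);
            if (d < 1e-9) continue;
            double diff = (d - spacing) / d;
            if (i == 1) {  // particle 0 is pinned: correct b only
                b.x -= dx * diff;
                b.y -= dy * diff;
            } else {       // split the correction between a and b
                a.x += 0.5 * dx * diff; a.y += 0.5 * dy * diff;
                b.x -= 0.5 * dx * diff; b.y -= 0.5 * dy * diff;
            }
        }
    }
}
```

Because constraints are solved by local relaxation rather than a global force solver, moving the anchor (or shortening the rope by dropping particles) just changes the pinned endpoint each frame, and the rope follows without the force-calculation blow-ups described above.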
  6. boogyman19946

    Singletons and Game Dev

    Heh, that reminds me of the Lugaru source: http://hg.icculus.org/icculus/lugaru/file/97b303e79826/Source/GameTick.cpp#l7276 I guess it really comes down to a "works or not" result.

    "Global variables create hidden dependencies, which destroys your capacity to analyse access patterns and control their scheduling. Hidden dependencies are also evil for countless other reasons too."

    I've been feeling this pain ever since I started working at my current job, on a web application whose client is written in Javascript. The way it behaves is really unpredictable. A lot of functions have really nasty side effects, and of course no dependencies are made obvious through the function interfaces, because all data is stored in a global variable called "locals" (an oxymoron of the highest order). At least I can grep the source code to find instances of its use, but that's still not a very good "technique", merely a means for brute-force debugging.
  7. boogyman19946

    Singletons and Game Dev

    "Meh. Dig through Android code sometime. You have to control every allocation to prevent the garbage collector from murdering you, so static float buffers abound."

    "You can make it thread-safe that way, yes. But you can't use it in parallel. Which is kind of the point of having multi-core machines in the first place."

    I hear Android is supposed to have a "smart" way of managing memory, but I've never really dug into it. Kind of interesting to find out that even in simple cases like this it can be that troublesome.

    I've got to really face-palm here. I very rarely need to work with concurrently executing code, so I somehow failed to appreciate the difference between mere "multithreading" and actual parallel execution. That makes the problem a lot clearer.

    "That was my first thought, but then I saw it's Java. You can't allocate arrays on the stack in Java, right? I've done this when using XNA/C# before, to avoid any memory allocations in the game loop... as long as I knew that section of code was not going to be called from multiple threads, or re-entered on the same thread. That being said, if they really wanted they could just explicitly declare 16 float variables in each method where this buffer was needed."

    Well, you do have to use the "new" keyword when allocating one, and on top of that an array in Java is a full object. As far as I'm aware, there's no way to declare member arrays of variable size without using the heap even in C++. If they somehow managed to do that, I'd sure like to find out how.

    > Does anyone know how to get rid of this block?
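Since the buffer in question is a fixed-size float[16], the C++ side of this is straightforward: a fixed-size scratch buffer doesn't need to be static at all, because it can live on the caller's stack per call. A sketch of the distinction (the "work" here is a made-up stand-in, not libGDX code):

```cpp
#include <array>

// The libGDX-style version: one shared static scratch buffer.
// Two threads running this at once would trample each other's data.
static float sharedTmp[16];

float sumWithSharedScratch(const float (&m)[16]) {
    for (int i = 0; i < 16; ++i) sharedTmp[i] = m[i] * 2.0f;  // racy
    float s = 0.0f;
    for (float v : sharedTmp) s += v;
    return s;
}

// The reentrant version: the same 16 floats, but on the stack.
// The size is known at compile time, so no heap allocation is involved,
// and every call (and every thread) gets its own copy.
float sumWithLocalScratch(const float (&m)[16]) {
    std::array<float, 16> tmp{};  // per-call scratch, thread-safe
    for (int i = 0; i < 16; ++i) tmp[i] = m[i] * 2.0f;
    float s = 0.0f;
    for (float v : tmp) s += v;
    return s;
}
```

What C++ genuinely cannot do is a *runtime-sized* array member without the heap; a fixed 16-element scratch is not that case.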
  8. boogyman19946

    Singletons and Game Dev

    I've considered the multi-threading argument. I agree that having no global state can make code easier to work with, which alone is worth the effort of eliminating it; however, can't actual, Design Pattern-style singletons implement synchronization correctly?

    As far as I know, and I'm by no means a Design Pattern guru, the Singleton pattern just provides a single instance of an object and global access to it, and the typical implementation with a static getInstance() method pretty much prevents anyone from modifying the variable referencing the global object. Since we're dealing with an object reference, can't we encapsulate synchronization inside the object's implementation, for instance by using Java's "synchronized" modifier on its non-static methods? And if we instead implement that same object as something other than a Singleton and pass it around to different threads, aren't we facing the same synchronization issues anyway?

    "libGDX? You're using libGDX as a reference of some sort? Take a look: https://github.com/libgdx/libgdx/blob/master/gdx/src/com/badlogic/gdx/math/Matrix4.java#L73 That's exactly what it looks like: a static float[16]. What's it used for? Oh, nothing too important, just holding effin' intermediary results in math functions. What does that mean? That you can't possibly invert a matrix on two separate threads, because it will fuck something up. Read that again: you can't use libGDX's math functions on more than one thread. And you know what? It's not the only static used that way in libGDX. I'll just leave you with that bit of information. EDIT: More on topic -> Eff singletons. All of them. I'd simply copy-paste swiftcoder's words here."

    Ok, how is that not flagged as a bug? That's total bulls***. Is there a reason not to use the stack? I don't think "performance" is even applicable to this situation.
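On the "can a singleton synchronize itself?" question: yes, both the accessor and the instance's own state can be made thread-safe. In C++11, for example, a function-local static is guaranteed to be initialized thread-safely, and the instance can guard its mutable state with its own lock, which is the same idea as Java's synchronized instance methods. A generic sketch, not tied to any particular library:

```cpp
#include <mutex>

class Counter {
public:
    // C++11 guarantees thread-safe initialization of function-local statics,
    // so getInstance() needs no explicit locking (the "Meyers singleton").
    static Counter& getInstance() {
        static Counter instance;
        return instance;
    }

    // The instance synchronizes its own mutable state.
    int increment() {
        std::lock_guard<std::mutex> lock(mutex_);
        return ++count_;
    }

private:
    Counter() = default;
    Counter(const Counter&) = delete;
    Counter& operator=(const Counter&) = delete;
    std::mutex mutex_;
    int count_ = 0;
};
```

This answers the safety half of the question, but note it concedes the other half of the thread's point: serializing every call through one lock is exactly what prevents the work from running in parallel. Safety and scalability are different problems.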
  9. I heard a long while ago that some interviewers would ask candidates to name a design pattern that's not the Singleton, the implication being that, of all the design patterns there are, a person who knows only the Singleton must have no idea about good coding practices. As tongue-in-cheek as that is, I may have bought into the mantra myself. I guess the logic is: if globals are evil and singletons are globally accessible, then surely singletons must be evil.

    Of course, there are genuinely good reasons not to declare global state carelessly:

    + It introduces a degree of unpredictability into the code; not knowing when and where state mutates makes debugging ridiculous.
    + It tends to make code tightly coupled.
    + Non-trivial dependencies become less obvious, as globals are almost never passed as function arguments.

    Plenty of spaghetti can be had when globals roam rampant through a codebase. I've worked on applications like that before (mostly in Javascript), where data structures just magically get populated with data between function calls. If I can avoid making needless globals, I do, but I understand some uses: maybe you need a logger, or maybe you have an Android app and you'd like a single socket through which all your Activities can talk to a server without jumping through hoops and opening multiple sockets.

    However, I've looked at some game libraries and engines (most prominently libGDX and Unity) and noticed they have no shame about exposing a lot of data structures globally. For instance, Unity's infrastructure lets me control pretty much every aspect of the game from any script I want: I can find any object, modify any of its components, access all the input state, and so on. libGDX is very similar; I can access any of its modules from wherever I want.

    Somehow, I don't mind any of that, and I don't feel like it makes my code any more complicated; quite the contrary. How would my code look if I had to pass all that state around through function arguments and the like? Wouldn't that make it needlessly verbose? Sure, my code is tightly coupled to the engine, but if I'm really vehement about that, I can write wrappers to interface with the third-party stuff and keep it separate from my own code. It's not the end of the world. Unity also provides tools for making dependencies obvious, even when they aren't.

    I don't know, maybe it's a special circumstance. I had this discussion in a class named "Comparative Programming Languages" [sic], which really should have been called "Comparing Programming Languages", and I laid out my own arguments for why global state causes more harm than good and is considered bad practice. Do you think games might be an exception? (I somehow feel like this topic might attract a flame war, so... pretty please, no flaming? Thanks!)
  10. boogyman19946

    Best comment ever

    Here's something that resides in production code I'm working on. The function takes a dump of a SQL database and saves it to a file:

```python
def take_dump(self): # haha... I'm 8 years old
```
  11. boogyman19946

    Strange Corners of C

    I'm pretty sure I've seen a good chunk of these on the forums at one point or another, but they're still pretty fun, and it's cool to see them all in one place. My favorite is Duff's Device:   http://blog.robertelder.org/weird-c-syntax/
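For anyone who hasn't seen it, Duff's Device interleaves a switch with a do-while so the loop body is unrolled eight times while a count that isn't a multiple of 8 is handled by jumping into the middle of the first pass. A memcpy-style variant (the original wrote to a single memory-mapped register instead of incrementing the destination):

```cpp
#include <cassert>

// Duff's Device: copy 'count' ints from 'from' to 'to', unrolled 8x.
// The case labels jump into the middle of the do-while so the remainder
// (count % 8) is copied on the first pass. Assumes count > 0.
void duff_copy(int* to, const int* from, int count) {
    int n = (count + 7) / 8;
    switch (count % 8) {
    case 0: do { *to++ = *from++;
    case 7:      *to++ = *from++;
    case 6:      *to++ = *from++;
    case 5:      *to++ = *from++;
    case 4:      *to++ = *from++;
    case 3:      *to++ = *from++;
    case 2:      *to++ = *from++;
    case 1:      *to++ = *from++;
            } while (--n > 0);
    }
}
```

It is legal C and C++ precisely because case labels are just labels, and a switch is allowed to jump to one even inside a nested loop.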
  12. boogyman19946

    99 bottles Challenge thread 2

    My first attempt was in Python, at 254 characters:

```python
for i in range(99,-1,-1):
 a="bottle"
 s=lambda k,n:(n+"o more"if k<1 else str(k))+" "+(a+""if k==1 else a+"s")
 o="of beer"
 e=o+" on the wall"
 print s(i,"N"),e+",",s(i,"n"),o+".\n"+("Take one down and pass it around, "+s(i-1,"n")if i>0 else "Go to the store and get some more, "+s(99,"")),e+".\n"
```

    It's alright. I like how the lambda wraps some of the switching into one (kind of) concise function, but I wish it used fewer characters. I also wish Python had a ? operator. Damn you, Python, for trying to be expressive!

    I then thought I'd have a hand at the challenge in a functional language. Enter Haskell:

```haskell
z=concat
a=" bottle"
e=a++"s"
y="o more"
b 0="n"++y++e
b 1="1"++a
b n=show n++e
c=" of beer"
d=c++" on the wall"
p=".\n"
q n=z[d,", ",b n,c,p]
f 0=z["N",y,e,q 0,"Go to the store and buy some more, ",b 99,d,p]
f n=z[b n,q n,"Take one down and pass it around, ",b $ n-1,d,p,"\n",f(n-1)]
main = putStrLn(f 99)
```

    That's 253 non-whitespace characters. I suck at Haskell; it took me forever just to set up the main function.

    I started writing another version in Java, but I reached 200 characters just setting up a bare-bones program and would probably have gone well into the 300s with it.

    EDIT: I actually think I might have unknowingly stolen that Haskell version from someone in the old thread a while ago. My version uses fewer characters, though :D
  13. boogyman19946

    how to program A LOT of skills/spells?

    You could employ a Prototype pattern: http://gameprogrammingpatterns.com/prototype.html   If you have a lot of if/else statements or a large switch, you should probably start thinking about how to alleviate all those checks.    In addition, if you haven't read through some of those design patterns, I suggest you do. They're pretty awesome and they can make your code a lot more flexible.
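The core of the Prototype pattern is just a clone() method plus a registry of pre-configured instances, so adding a new spell becomes data rather than another branch in a switch. A generic sketch (names like Spell and SpellBook are made up for illustration, not from the linked book):

```cpp
#include <memory>
#include <string>
#include <unordered_map>
#include <utility>

// Prototype pattern: each spell knows how to clone itself, so a registry of
// configured prototypes replaces a giant switch over spell IDs.
struct Spell {
    std::string name;
    int damage;
    Spell(std::string n, int d) : name(std::move(n)), damage(d) {}
    virtual ~Spell() = default;
    virtual std::unique_ptr<Spell> clone() const {
        return std::make_unique<Spell>(*this);
    }
};

class SpellBook {
public:
    void registerPrototype(std::unique_ptr<Spell> proto) {
        std::string key = proto->name;
        protos_[key] = std::move(proto);
    }
    // Returns a fresh copy of the registered prototype, or null if unknown.
    std::unique_ptr<Spell> create(const std::string& name) const {
        auto it = protos_.find(name);
        return it == protos_.end() ? nullptr : it->second->clone();
    }
private:
    std::unordered_map<std::string, std::unique_ptr<Spell>> protos_;
};
```

New spell types then subclass Spell (overriding clone and whatever behavior differs) and get registered once at load time, instead of growing the if/else chain.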
  14. boogyman19946

    Spot the bug quiz.

    "That isn't my point. You are correct that learning C or C++ does not magically make you aware of how to optimise cache usage. But the sad truth is that learning Java pretty much guarantees that you will not be aware, as the language doesn't have the necessary mechanisms to do anything about it."

    Oh yeah, I'm aware; that's why I removed the quote, as it wasn't really on-point.
  15. boogyman19946

    Spot the bug quiz.

    Writing code in C++ does not automatically mean the coder is even aware of what cache coherency is. If the intent is to teach people about making effective use of the CPU cache, then it is imperative to actually teach them that. Throwing them into a pit with a low-level language doesn't accomplish anything and can arguably make things worse (just check out that Lugaru code I linked to in the first post, written entirely in C++ I believe).

    A lot of people naively think that absorbing the contents of a CS degree makes them a decent programmer. As far as I'm concerned, a student coming out of school with that mindset does himself a great disservice. At my university, CS students start with Intro to C and move on to Comp Sci, which is taught in C++. Neither of those classes ever mentioned the cache, or the inner workings of virtual functions.

    Of course, if you're using a high-level language like Python, it's rather unlikely you'll hear anything about those concepts, whereas with C++ you might at least come across them one day.

    Then again, I don't even know how to write a Makefile, let alone an efficient one.

    EDIT: Removed quote. I don't think it was necessary.
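A concrete example of the kind of thing those courses skip: the same sum over a 2D image can differ severalfold in speed depending purely on traversal order, because C and C++ store arrays row-major and reward sequential access. Both functions below compute the identical value; only the memory-access pattern differs:

```cpp
#include <vector>

// Row-major storage: element (row, col) lives at img[row*width + col].
// Iterating rows in the outer loop walks memory sequentially, so the
// hardware prefetcher and cache lines work in our favor.
double sumRowMajor(const std::vector<double>& img, int width, int height) {
    double s = 0.0;
    for (int row = 0; row < height; ++row)       // sequential: cache-friendly
        for (int col = 0; col < width; ++col)
            s += img[row * width + col];
    return s;
}

// Swapping the loops strides by 'width' doubles per access; for a large
// image each access can land in a different cache line.
double sumColMajor(const std::vector<double>& img, int width, int height) {
    double s = 0.0;
    for (int col = 0; col < width; ++col)        // strided: cache-hostile
        for (int row = 0; row < height; ++row)
            s += img[row * width + col];
    return s;
}
```

Notably, the second loop of the imgmoments function quoted in my first post iterates column-major over a row-major image, so it is exactly this pessimization.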

Important Information

By using GameDev.net, you agree to our community Guidelines, Terms of Use, and Privacy Policy.
