JohnBSmall

Members

  • Rank: Contributor
  • Content count: 2258
  • Community reputation: 881 (Good)
  1. Noggs: Right, the workers could get references to the original data, but then the overseer has to ensure that it doesn't change that data while a worker is still processing it, which may be easy in some places and hard in others. The details can be fiddled around with though, I'm sure.

     Antheus: I'm not quite clear on the relationship between process() and push(); process() updates the state of the Blob from the inside, and push() updates it from the outside? So, is it just that some concrete Blob types will have push()-type methods and others will have process()-type methods? I don't see when you'd have both...?

     [I'm off for the night now, btw] John B
  2. Hmmm... that might work. Presumably you mean the overseer thread has the definitive copy of the data, and packs copies (of parts) of that data into task objects, so the tasks themselves don't even touch the definitive data? That's certainly possible for optimisation (which involves building a copy of the data into a large sparse matrix form anyway), and should work ok for several of the core vision tasks like feature matching, new feature detection, pose estimation and so on. Rendering could still be slightly tricky since the visualisation is likely to touch most of the data, but I think that could be dealt with with some care.

     Sounds like at its core, this is just an inversion of control over what I had in mind: where my original plan had each task doing its thing under its own control and requiring effective encapsulation and management in the data structure to ensure correct and efficient concurrent access, this makes the access management the 'active' part and the data processing work is done by dumb servers. Sounds like that inversion is a useful trick to remember. I shall have to think about its wider effects a bit -- more suggestions or comments are welcome. John B
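     As a rough sketch of what that overseer/worker split might look like -- an illustration only, assuming C++11 threads, with all names (Task, Overseer, worker_loop) invented for the example:

        #include <condition_variable>
        #include <mutex>
        #include <queue>
        #include <vector>

        // A task carries its own copy of the data it needs, so workers
        // never touch the overseer's definitive data structure.
        struct Task {
            std::vector<double> snapshot;   // copied-out slice of the data
            std::vector<double> result;     // filled in by the worker
        };

        class Overseer {
        public:
            // Called from the overseer thread only: copy part of the
            // definitive data into a task and hand it to the workers.
            void submit(Task t) {
                { std::lock_guard<std::mutex> lock(m_); queue_.push(std::move(t)); }
                cv_.notify_one();
            }

            // Workers are "dumb servers": pop a task, process the copy,
            // and never reach back into shared state.
            void worker_loop() {
                for (;;) {
                    Task t;
                    {
                        std::unique_lock<std::mutex> lock(m_);
                        cv_.wait(lock, [this] { return !queue_.empty(); });
                        t = std::move(queue_.front());
                        queue_.pop();
                    }
                    for (double& v : t.snapshot) v *= 2.0;  // stand-in for real work
                    t.result = std::move(t.snapshot);
                    // ...post t.result back to the overseer via another queue...
                }
            }

        private:
            std::mutex m_;
            std::condition_variable cv_;
            std::queue<Task> queue_;
        };
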
  3. FAST euclidean distance

    What are you using the SOM for? You may be able to do better with some alternative method. For example, if you're using it for dimensionality reduction, maybe you can do principal component analysis, or perhaps reduce the dimensionality partially with PCA (say, dropping it to 32 dimensions) and then run the SOM on that reduced data. Or if you have a recent-ish graphics card, you can go mad and pump the whole thing up with CUDA.

    Incidentally, kd trees can be used in high dimensional spaces, with appropriate adjustments to the search method. See: http://www.cs.ubc.ca/~mariusm/index.php/FLANN/FLANN
    There are other nearest neighbour search structures too (cover trees, for example). But I take your point about modifying the tree possibly being a pain, although they may still be a win overall. John B
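    To make the partial-PCA idea concrete, here is a minimal sketch (assuming the principal components have already been computed elsewhere, e.g. from an eigen-decomposition of the covariance matrix; the function names are invented for the example). Note also that the square root can be skipped entirely when distances are only being compared against each other:

        #include <cstddef>
        #include <vector>

        // Project a high-dimensional sample onto k precomputed principal
        // components (stored row-major, k rows of 'dim' values each). The SOM
        // or nearest-neighbour search then runs on the k-dimensional result.
        std::vector<float> project(const std::vector<float>& sample,
                                   const std::vector<float>& components,
                                   std::size_t dim, std::size_t k) {
            std::vector<float> reduced(k, 0.0f);
            for (std::size_t c = 0; c < k; ++c)
                for (std::size_t d = 0; d < dim; ++d)
                    reduced[c] += components[c * dim + d] * sample[d];
            return reduced;
        }

        // Squared Euclidean distance in the reduced space; no sqrt needed
        // when only comparing distances.
        float squared_distance(const std::vector<float>& a,
                               const std::vector<float>& b) {
            float sum = 0.0f;
            for (std::size_t i = 0; i < a.size(); ++i) {
                float diff = a[i] - b[i];
                sum += diff * diff;
            }
            return sum;
        }
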
  4. Polish vs Innovation, whats the balance?

    I think the question should really be: Polish or Scope. Spending time and money on existing stuff increases polish; spending time and money on new stuff increases scope. Innovation changes the cost of polish, because it requires more experimenting and testing to find out what really works. However, mild innovation only increases the cost of polish mildly.

    My opinion, given my own reactions to games I've played, is that you should almost always sacrifice scope for polish. One of the best games I've ever played is Geometry Wars: Retro Evolved. It's a game that could (with the benefit of the existing game to copy game mechanics and design from) be implemented in a month or less by an experienced programmer (with a little help from a musician perhaps). But it's extremely addictive, because it's polished. I've played (or tried to play) several other games on Steam that are in a similar vein (that is, fairly simple gameplay, very simple graphics, typically fairly abstract), and none of them even come close -- their controls aren't quite slick enough, their graphics have irritating aesthetic problems, the gameplay is just on the wrong side of being frustrating, etc, etc. Geometry Wars goes pretty much to the extreme of minimal scope but great polish. It's not absolutely perfect, but it's pretty damn close.

    As for the original question of innovation vs. polish, I don't think you should get hung up on innovation. Minor innovation (working within an existing genre and just adding one new idea, or mixing ideas from three other games in that genre) is pretty much as good as major innovation (trying to come up with "entirely new" game mechanics). Arguably it's better, since the people playing your game will find it a lot easier to get into. I would say that minor innovation is almost inevitable unless you really are deliberately copying just one other existing game, and it shouldn't be too risky.

    But do polish over scope. Play your game a lot. Get other people to play it. Find out all the little annoyances. Tweak the controls to respond perfectly. Fix that tiny graphical glitch caused by incorrect alpha blending in your text. Make sure the game works fullscreen and windowed. Make sure it doesn't crash if the player unplugs the gamepad. Make sure it pauses when you tab away from it and doesn't hog CPU while it's paused. Make sure that when the player dies, they always know *what killed them*. Lots of tiny things, but you should get them all right before you add another type of enemy or another level.

    If the game is great but it ends up being too short, you can price it appropriately and sell a level pack later -- if it's great, then people will be excited about the level pack and they'll buy it. If the game is long but mediocre, people won't even get past your free demo.

    Also, Kylotan is right: It's very dangerous to try to go after too large a market. Identify your target market segment as well as you can and *ignore* people who are outside it. When your target market segment is happy, *then* you can afford to go after more people. John B
  5. I'm working on a computer vision application and I'm trying to put the central data structure together. This is not totally trivial, because the data structure must be designed for concurrent access. My expected use pattern has a lot in common with requirements in games:

     - A UI thread that renders the current state of the data.
     - A tracking thread that updates a camera pose estimate in real time (and needs read-only access to the data state; could be analogous to running AI in another thread).
     - A mapping thread that performs structural updates to the data (analogous to a thread running main game logic).
     - An optimisation thread that runs an expensive non-linear optimisation process on the data to refine it (analogous to running physics simulation in another thread, except that unlike game physics it probably won't be running 'at framerate'; in terms of access to the data, it needs to update values in bulk, but doesn't need to change the 'structure' of the data, i.e. it's not adding/removing known entities or their relationships).

     The three routes that I can think of for designing the management of this are:

     1- A complex ad-hoc design direct from the best guess I can come up with for my requirements.
     2- A simple design using a single multi-reader-single-writer lock on one shared copy of the data, and more care in the other parts of the design (e.g. thread communication and buffering updates) to try to reduce contention on the lock.
     3- First create a generic in-memory object database library, with snapshotting capability and probably object-level locking, and then use that.

     Of the three, the first is quite undesirable (for fairly obvious reasons); I'm put off the second because I'm concerned that I won't be able to effectively reduce lock contention without a large mess of hacks (putting it close to the first in terms of complexity and debugging time); and the third puts me off because it feels like overkill.

     So, I'm looking for tips from anyone who has tackled this kind of design problem before. Alternatively, recommendations for existing in-memory object database libraries that I could use would be welcome (object databases exist, but I haven't found one that is free and for C++). John B
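     For what route 2 might look like in miniature -- just a sketch, using C++17's std::shared_mutex (boost::shared_mutex would look much the same), with the type names (MapData, Landmark) invented for the example:

        #include <shared_mutex>
        #include <vector>

        struct Landmark { double x, y, z; };

        // One definitive copy of the data, guarded by a multi-reader /
        // single-writer lock.
        class MapData {
        public:
            // Readers (UI, tracking) take a shared lock and can run concurrently.
            std::vector<Landmark> snapshot() const {
                std::shared_lock<std::shared_mutex> lock(mutex_);
                return landmarks_;              // copy out while holding the lock
            }

            // Writers (mapping, optimisation) take an exclusive lock. Buffering
            // updates and applying them in one short critical section keeps
            // contention down.
            void apply_updates(const std::vector<Landmark>& updated) {
                std::unique_lock<std::shared_mutex> lock(mutex_);
                landmarks_ = updated;
            }

        private:
            mutable std::shared_mutex mutex_;
            std::vector<Landmark> landmarks_;
        };
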
  6. [C++] How's my (lockless) driving?

    Quote: Original post by ApochPiQ
        For instance, you still haven't even considered the generated assembly from your code, so you still don't have the first clue about what locations might become preemption problems.

    I have to say this worries me. Proper use of intrinsics to give memory barriers and locked instructions and whatever else you need should be enough. If it isn't, then you shouldn't just be examining the assembly code, you should be writing the relevant bits of assembly code yourself. Otherwise you could change the compiler settings and suddenly have a bug that wasn't there before, which sounds to me like a disaster waiting to happen. John B
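    A minimal sketch of the kind of thing meant by "proper use of intrinsics": publishing data with explicit acquire/release semantics rather than relying on whatever the compiler happens to emit. This uses C++11's std::atomic, which post-dates this thread; at the time the equivalent would be platform intrinsics such as interlocked operations and memory barriers. The names (Shared, producer, consumer) are invented for the example.

        #include <atomic>

        struct Shared {
            int payload = 0;
            std::atomic<bool> ready{false};
        };

        void producer(Shared& s) {
            s.payload = 42;                                  // ordinary write
            s.ready.store(true, std::memory_order_release);  // release: 'payload' is
                                                             // visible before 'ready'
        }

        bool consumer(const Shared& s, int& out) {
            if (s.ready.load(std::memory_order_acquire)) {   // acquire pairs with the
                out = s.payload;                             // release store above
                return true;
            }
            return false;
        }
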
  7. Lotsa Text, External?

    The difficult thing is not storing large amounts of text in memory, which is absolutely fine unless you're dealing with extremely large files (which is unlikely for text files). The difficult bits are:

    - Can you load the file fast enough (loading a large file into memory may take long enough for the user to notice it not being 'instant')?
    - Can you make modifications to that data efficiently?
    - (And possibly: can you support undo and redo efficiently?)

    Loading files into memory fast enough doesn't become a serious problem until they get pretty big either, so really it's just a question of finding a data structure that supports efficient updates anywhere in the file. Luckily, there are very good and well-known answers to those problems. See: Data Structures for Text Editors, and, perhaps easier, the series of tutorials about 'neatpad' on catch22.net (specifically Part 17: Editing Text).

    Edit: If you really need to support very large files, then you absolutely need to learn about memory mapped files. They're very easy to understand and use in themselves, and will save you a lot of headaches. Having said that, for very large files you will still need to be careful with your design and think before you start coding, particularly if you want to be able to 'insert' data in the middle of a very large file (which is difficult to do efficiently no matter what you do -- better to use a file format that doesn't require it if you possibly can). John B
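    Since memory mapped files come up above, here is a minimal POSIX sketch (Windows has the equivalent CreateFileMapping / MapViewOfFile API); the function name map_file is invented for the example.

        #include <fcntl.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>
        #include <cstddef>

        // Map a whole file read-only into memory. The OS pages data in on
        // demand, so even a very large file "opens" instantly.
        const char* map_file(const char* path, std::size_t& length_out) {
            int fd = open(path, O_RDONLY);
            if (fd == -1) return nullptr;

            struct stat sb;
            if (fstat(fd, &sb) == -1) { close(fd); return nullptr; }
            length_out = static_cast<std::size_t>(sb.st_size);

            void* addr = mmap(nullptr, length_out, PROT_READ, MAP_PRIVATE, fd, 0);
            close(fd);   // the mapping remains valid after closing the descriptor
            return addr == MAP_FAILED ? nullptr : static_cast<const char*>(addr);
        }

        // Read through the returned pointer as if the whole file were already in
        // memory; release it with munmap(ptr, length) when finished.
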
  8. [C++] How's my (lockless) driving?

    This may or may not matter to you, but it is perhaps worth pointing out that since C++ doesn't really have a well-defined memory model, issues of thread safety are architecture dependent (especially when writing lock-free code). For example, you say that "32-bit reads/writes on 32-bit boundaries are atomic": this is true for x86, but do you know if it's true for PowerPC or ARM? (I don't know the answer myself; the point is that it's worth being sure if the code might end up running on them.)

    I don't have enough experience with lock-free programming to critique your code myself, but Charles Bloom has some fairly extensive and very useful information on his blog, which you may want to look at if you haven't already: this post should be a good place to start. John B
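    To make the portability point concrete -- a sketch only, using C++11's std::atomic, which post-dates this thread: wrapping the value turns the x86-specific "aligned 32-bit accesses are atomic" assumption into a guarantee the compiler has to honour on every target, and with relaxed ordering the generated code on x86 is typically still a plain aligned mov.

        #include <atomic>
        #include <cstdint>

        // Atomicity of these accesses is guaranteed by the language on x86,
        // PowerPC, ARM, ..., rather than assumed from the architecture manual.
        std::atomic<std::int32_t> shared_value{0};

        void writer(std::int32_t v) {
            shared_value.store(v, std::memory_order_relaxed);
        }

        std::int32_t reader() {
            return shared_value.load(std::memory_order_relaxed);
        }
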
  9. Yes, that will work, because shared_ptr provides a conversion to bool (see the 'conversions' section of the documentation). The documentation also specifies that the post-conditions of the nullary constructor (the one that will be called in your example) are that get() and use_count() both return 0. John B
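     For concreteness, the sort of test being described looks like this (Foo is just a stand-in type; std::shared_ptr behaves the same way as boost::shared_ptr here):

        #include <boost/shared_ptr.hpp>

        struct Foo { void bar() {} };

        void example() {
            boost::shared_ptr<Foo> p;   // default-constructed: get() and use_count() return 0
            if (p) {                    // uses shared_ptr's conversion to bool: false here
                p->bar();
            }
            p.reset(new Foo);           // now p owns a Foo...
            if (p) {                    // ...so this test succeeds
                p->bar();
            }
        }
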
  10. Does anyone remember this "game"?

    Starting from Core War on wikipedia, you can find various other programming games. I don't know which one you're referring to though... John B
  11. new to blender

    It's also worth pointing out that Blender itself runs and works fine without any Python installation. The only restriction is that some scripts rely on being able to access Python libraries that aren't bundled with Blender; to run those scripts you do need a full Python install of the appropriate version. John B
  12. Climbing Game (comments please?)

    Quote: Original post by WavyVirus
        Check Google Video for "vertigo", a climbing game in development which is somewhat similar sounding but has limbs "snap" to points on the rock face. One thing they implement which you might consider including is fatigue/stamina for each limb.

    Thanks for this. I couldn't find any videos, but I found a few forum posts - it looks very interesting. Quite a similar concept, although probably more realistic, whereas mine is more arcade-like. The screenshots look pretty good. The worry I'd have with stamina per-limb is that it increases the complexity of controlling your character even more. Possibly good for a more challenging advanced mode though. John B
  13. Climbing Game (comments please?)

    Quote: Original post by kru
        I think it sounds very neat and original. I don't know of anything similar. I think similar gameplay oriented more toward exploration could be interesting as well. I imagine lateral movement could be made to be just as interesting as constant upward movement.

    Thank you for the feedback, I'm glad you like it :-). What do you think would work to reward the player in exploration-based gameplay? Do you think finding/collecting items would be enough? Or perhaps just the challenge of fluid movement and finding routes? I think often for exploration you need something that's difficult to find so that the player feels good when they do find it, but I'm not sure how locations could be hidden in this game. I suppose if it was extended into a more full 3D environment there might be more opportunity for hiding things. John B
  14. Permutation with repetition

    Quote: Original post by alvaro
        Quote: Original post by JohnBSmall
            Edit: Of course, you could generalize it to arbitrary bases to support different set sizes though (e.g., if the set size happened to be 8, you'd be outputting numbers in octal).
        ...which is exactly the second solution I proposed.

    Indeed it is. John B
  15. Permutation with repetition

    Quote: Original post by yahastu
        Ok, some people have been suggesting some very complicated solutions to your problem. Let me just say this.... ... In other words, the solution is given by simply starting with 0 and adding 1 on each iteration..

    Only true when the set you're dealing with is {0,1}, which isn't always the case (according to the original post, the set size is an input parameter).

    Edit: Of course, you could generalize it to arbitrary bases to support different set sizes though (e.g., if the set size happened to be 8, you'd be outputting numbers in octal). John B
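    A minimal sketch of that base-n counting idea: each counter value from 0 to set_size^length - 1 is written out in base set_size, and its digits form one permutation with repetition. The function name is invented for the example.

        #include <iostream>
        #include <vector>

        // Enumerate every length-'length' sequence drawn (with repetition) from a
        // set of 'set_size' symbols by treating a counter as a base-'set_size' number.
        void permutations_with_repetition(unsigned set_size, unsigned length) {
            unsigned long long total = 1;
            for (unsigned i = 0; i < length; ++i) total *= set_size;

            for (unsigned long long counter = 0; counter < total; ++counter) {
                std::vector<unsigned> digits(length);
                unsigned long long n = counter;
                for (unsigned pos = 0; pos < length; ++pos) {   // least significant digit first
                    digits[pos] = static_cast<unsigned>(n % set_size);
                    n /= set_size;
                }
                for (unsigned pos = length; pos-- > 0; )        // print most significant first
                    std::cout << digits[pos] << ' ';
                std::cout << '\n';
            }
        }

        int main() {
            permutations_with_repetition(2, 3);   // the {0,1} case: 0 0 0, 0 0 1, ..., 1 1 1
        }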