
About JuNC

  1. construct fluid surface, and splash

    Not sure I totally understand the question. If you just want a mesh for rendering, you could try: Screen Space Meshes
  2. In terms of understanding Bayesian approaches, there is an excellent book which you can buy and read most of online: Probability Theory: The Logic of Science by E. T. Jaynes. There are even application chapters on image reconstruction (and, by inference, image recognition). You may also find Geometric Algebra a useful avenue to study (e.g. try here); there have been some interesting works using GA for camera parameter estimation and the like, which may have relevance to image recognition (plus the profound insights GA gives :). Wikipedia is usually a good place to start for references and links, since you've already mentioned most of the search terms you'll need :)
  3. Brainstorm: 2D Physics Puzzles

    Hmm:
    - Ninja rope/grapple swinging
    - See-saw balancing
    - Rolling objects down hills, timed to clog up some gears
    - See-saw jumping, or aiming objects at targets
    - Using blowers, as in blowing objects around
    - Filling up liquid tanks to overflow into others (or some sort of tip-bucket)
    - Using objects to fill up the tanks (e.g. put a load of bricks in, causing overflow)
    - Crane operation (to move heavy blocks)
    - Skateboarding (2D half-pipe type jumps or loops)
    - Setting up some crazy TIM-like contraption to push a cat through a flap or something
    - Laying platforms for moving over lava or similar
    - Low/high gravity, or gravity in different directions

    What kind of setting? Might be more inspiration with a bit more detail.
  4. AI Helper - FPS context.

    There is certainly something to be said for having sidekicks, although it's not novel (HL2 makes great use of some of this). Having the companion be smarter and able to do more would (depending on game) be pretty good, but I imagine that is where Valve and others are headed anyway - it's just that AI technology isn't there yet. I suspect we'll see a big leap at some point as the tech and processing power converge to allow a greater depth of interaction. I also think this would work better in a more free roaming environment (think STALKER) where AI that thinks on its feet would be very useful and make things less lonely (although then some of the magic of STALKER would be lost perhaps ;).
  5. Quote: Original post by issch
    Visage: Coincidentally, I've been playing with a programming language idea for the past year and a half now, where chunks of code are arranged in a directed graph. It would be theoretically possible to analyze the graph at compile time and determine which chunks of code can be run in parallel. The entire language is designed around this, so (again theoretically, since I have yet to actually work on this aspect) one could analyze the nodes at different levels (since the nodes can be composed of a sub-graph of nodes, e.g. a function call can be further decomposed). The compiler could then determine which nodes can be run in parallel on these levels (large module-like nodes could be run on separate computers perhaps? Functions in separate threads? Primitive code chunks, which can run in parallel, could be arranged for maximum pipeline throughput?). This would all be completely transparent to the programmer - the programmer creates a program by creating nodes and linking dependent nodes. The code is a directed graph of operations. It is irrelevant to the programmer if the operations are primitive instructions, complex constructs, functions, large modules or whatever. Having said that, I don't know very much about graph theory/analysis, so I'm not sure how difficult this would be.

    This is the main reason purely functional languages are usually hyped as 'easily parallelizable'. Many of these languages (e.g. Clean) use a 'graph reduction semantics' which is almost like what you've described here. Unfortunately, analysis for efficient parallelism is *much* harder than it sounds - unless the parallelism is fine-grained enough, you don't wind up gaining anything due to waiting. Pi or Join calculus based languages (Erlang isn't a direct implementation but is very close) make things much easier by making the parallelism explicit (but not necessarily easier to program!).
    Quote: Original post by visage
    My intention was not to necessarily provide yet another abstraction layer that removed 'power' from the programmer. Rather than removing 'power', I see it as a way of providing more freedom. If a programmer isn't as bogged down by the concepts of critical sections, monitors, mutexes, semaphores, shared memory, message passing, distribution, scaling, fail-safe protection, and all the other issues that come along with concurrency and distributed processing, they would be free to write more code. In my opinion, Erlang has taken the largest step towards what I believe a 'concurrent paradigm' should be - but it still isn't entirely there for me. I think Erlang sneaks around the majority of the issues because it is a functional language, and therefore does not have to deal with volatile values. Its message passing paradigm works, but I don't believe it is the best possible solution - and it certainly won't work in languages that are not functional. I think we need to take it a step further.

    I don't know how closely you've looked at Erlang, but its being purely functional is completely orthogonal to its being concurrent. Each process could easily be a procedural implementation with mutable variables. The important thing about Erlang is its *lack of shared state*. Message passing (especially async) seems to be underestimated a little here, especially when you consider distribution and not just parallelism. It is a beautiful and unified means of computation (again, look closely at the pi and Join calculi - EVERYTHING is a message :).
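The compile-time graph analysis issch describes can be sketched as topological layering of a dependency DAG: at each step, every node whose dependencies are already satisfied forms a level, and all nodes in a level could in principle run concurrently. A minimal sketch in Python (the pipeline node names are invented for illustration):

```python
def parallel_levels(deps):
    """Group nodes of a dependency DAG into levels; every node in a
    level has all its dependencies in earlier levels, so the nodes of
    one level could in principle run in parallel."""
    remaining = {n: set(d) for n, d in deps.items()}  # node -> unmet deps
    levels = []
    while remaining:
        ready = [n for n, d in remaining.items() if not d]
        if not ready:
            raise ValueError("cycle detected")
        levels.append(sorted(ready))
        for n in ready:
            del remaining[n]
        for d in remaining.values():  # mark these nodes as satisfied
            d.difference_update(ready)
    return levels

# Hypothetical pipeline: load feeds parse and checksum, which feed merge.
deps = {
    "load": set(),
    "parse": {"load"},
    "checksum": {"load"},
    "merge": {"parse", "checksum"},
}
print(parallel_levels(deps))
# [['load'], ['checksum', 'parse'], ['merge']]
```

As the reply notes, this only tells you what *may* run in parallel; whether doing so pays off depends on the granularity of the nodes versus the scheduling and communication overhead.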
  6. Averaged normals only work for 'smoothing groups' - that is, surfaces that should be smooth and have no seams. Since the edges of the cube are actually seams, the calculation breaks down. There are multiple strategies for handling this:
    i) Duplicate vertices (one set for each smooth mesh)
    ii) Multiple normals per vertex (probably a bad idea)
    iii) Normal maps
    And probably others I can't think of right now :)
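Normal averaging itself can be sketched as: accumulate each face's unnormalized cross-product normal (which conveniently weights by face area) into its vertices, then normalize. A generic sketch, not tied to any particular engine; per strategy i), vertices lying on a seam would have to be duplicated before calling it:

```python
import math

def averaged_normals(vertices, faces):
    """Average face normals into smooth per-vertex normals.
    vertices: list of (x, y, z); faces: list of (i, j, k) index triples.
    Vertices shared across a seam (e.g. a cube edge) must be duplicated
    beforehand, or the average smooths across the crease."""
    normals = [[0.0, 0.0, 0.0] for _ in vertices]
    for i, j, k in faces:
        ax, ay, az = vertices[i]
        bx, by, bz = vertices[j]
        cx, cy, cz = vertices[k]
        ux, uy, uz = bx - ax, by - ay, bz - az
        vx, vy, vz = cx - ax, cy - ay, cz - az
        # Cross product: unnormalized face normal, so larger faces weigh more.
        nx, ny, nz = uy*vz - uz*vy, uz*vx - ux*vz, ux*vy - uy*vx
        for idx in (i, j, k):
            normals[idx][0] += nx
            normals[idx][1] += ny
            normals[idx][2] += nz
    out = []
    for nx, ny, nz in normals:
        length = math.sqrt(nx*nx + ny*ny + nz*nz) or 1.0
        out.append((nx/length, ny/length, nz/length))
    return out

# Single triangle in the xy-plane: every vertex normal points up +z.
print(averaged_normals([(0,0,0), (1,0,0), (0,1,0)], [(0,1,2)]))
```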
  7. Hydrogen War

    I guess the post H-war game might be more interesting - oh.. wait.. (Fallout?...) Seriously though, there are some interesting multiplayer (diplomatic) aspects to MAD, but probably not for most casual gamers who will just click the damn 'NUKE' button :) Defcon is pretty cool.
  8. Avoiding reload cheating

    Surely that defeats the deletion on loading? Or the player could set up an auto-move-autosave type program external to the game (which will then be easily available on the net). I guess the game could save encrypted versions or something. Seems like a lot of effort for what is, IMO, a non-problem.

    Much of the enjoyment I get from games is experimenting with the mechanics, and for that I need save game abuse :) I honestly don't see why save game 'abuse' is a problem - it's the player's game, let them play how they want. Fair enough if you want to provide a hardcore mode where saving is only allowed at certain designer-chosen points, but why limit yourself to hardcore gamers when casual gamers (like myself for many games, I'll admit) make up so much more of the market? Without save-at-any-point I simply won't bother unless the game is aimed at my hardcore style (FPS virtual instadeath - think Vietcong).

    Rattenhirn provides a very good answer that is feasible nowadays, and I think likely to become more so as broadband prices drop: host your game servers.
  9. Avoiding reload cheating

    Quote: Original post by Kest
    Implement a save-on-exit system, where the save is deleted when loaded. Then allow the player to choose between "save anytime" or "save only on exit" when a new game (campaign) is started, and don't allow changing that option during the entire campaign. Trust me, this is the ultimate solution. It will save everyone that would normally get burned by the availability of save-reload cheating. It allows them to cut it off from themselves, and anyone who can afford to will.

    What happens when the game crashes? :)
  10. You should also look closely at the π Calculus and (the related) Join Calculus.
  11. Unity Pacman AI info

    I quite like the Antiobjects idea: Wikipedia, Collaborative Diffusion: Programming Antiobjects (paper).
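The core of the antiobjects/collaborative diffusion idea is that the *goal* (Pacman, from the ghosts' point of view) writes a 'scent' value into the level grid, the grid diffuses it, and each pursuer simply hill-climbs, so cooperative chasing emerges without any pathfinding in the agents themselves. A toy sketch (the grid layout, constants, and function names are my own, not from the paper):

```python
def diffuse(grid, walls, goal, value=1000.0, iterations=50):
    """One scent layer of collaborative diffusion: the goal cell holds a
    fixed value while every other open cell repeatedly becomes the
    average of its open 4-neighbours. grid is a list of rows; cells are
    addressed as (x, y); walls is a set of blocked (x, y) cells."""
    h, w = len(grid), len(grid[0])
    for _ in range(iterations):
        nxt = [row[:] for row in grid]
        for y in range(h):
            for x in range(w):
                if (x, y) == goal:
                    nxt[y][x] = value          # goal constantly emits scent
                elif (x, y) not in walls:
                    nbrs = [grid[y+dy][x+dx]
                            for dx, dy in ((1,0), (-1,0), (0,1), (0,-1))
                            if 0 <= x+dx < w and 0 <= y+dy < h
                            and (x+dx, y+dy) not in walls]
                    nxt[y][x] = sum(nbrs) / len(nbrs) if nbrs else 0.0
        grid = nxt
    return grid

def step_uphill(grid, pos):
    """Move a pursuer one cell toward the strongest scent."""
    x, y = pos
    h, w = len(grid), len(grid[0])
    nbrs = [(x+dx, y+dy) for dx, dy in ((1,0), (-1,0), (0,1), (0,-1))
            if 0 <= x+dx < w and 0 <= y+dy < h]
    return max(nbrs, key=lambda p: grid[p[1]][p[0]])

# 5x5 open room, goal at the right edge: a ghost at (0, 2) steps right.
scent = diffuse([[0.0]*5 for _ in range(5)], walls=set(), goal=(4, 2))
print(step_uphill(scent, (0, 2)))
```

The nice trick from the paper is that walls (scent value stuck at zero) make the scent flow around obstacles, so the hill-climbing agents route around them for free.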
  12. You mean you had code blow up when you tried to do this? Could you provide an example? I have noticed that OpenGL didn't like it when I created very small textures (2x2) as a static array; it seemed to be some sort of alignment problem. I assumed this was just driver issues (and it's never affected my 'real' code).
  13. REYES Rendering

    Pixar provide lots of interesting papers here, including an original on the Reyes architecture (it's the same as the one linked from Wikipedia above, though). The basic premise is that rather than directly rasterizing each primitive (bezier patches, spheres, triangles, etc.), you perform a recursive subdivision. At each stage you either 'slice' (decompose a primitive into more primitives) or 'dice' (decompose a primitive into < 1 pixel sized quads). Each of these diced quads is then Gouraud shaded. It's a very nice, simple and effective system for working with massive scenes (larger than core, or with complex displacement mapping) but, as mentioned above, not suitable for realtime rendering (yet!). It's also been suggested that a similar algorithm can be useful for ray tracing, where you lazily slice and dice as rays intersect various parts of the scene, and instead of directly rendering the quads (or triangles) you insert them into geometry caches, compute new acceleration structures and do the ray trace there. Might be an interesting avenue if you were after offline rendering algorithms.
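The slice/dice recursion can be illustrated with a toy that operates only on screen-space bounds (a real Reyes renderer splits actual parametric primitives and dices them into shaded micropolygon grids; the names, sizes, and rates here are purely illustrative):

```python
def slice_and_dice(bound, max_size=1.0, dice_rate=4):
    """Toy Reyes-style recursion on a screen-space bound (x0, y0, x1, y1),
    in pixel units. If the bound is small enough, 'dice' it into a
    dice_rate x dice_rate grid of sub-pixel quads; otherwise 'slice' it
    in half along its longer axis and recurse. Returns total quad count."""
    x0, y0, x1, y1 = bound
    w, h = x1 - x0, y1 - y0
    if max(w, h) <= max_size * dice_rate:
        return dice_rate * dice_rate          # dice: each quad <= max_size
    if w >= h:                                # slice along the longer axis
        mid = (x0 + x1) / 2
        return (slice_and_dice((x0, y0, mid, y1), max_size, dice_rate) +
                slice_and_dice((mid, y0, x1, y1), max_size, dice_rate))
    mid = (y0 + y1) / 2
    return (slice_and_dice((x0, y0, x1, mid), max_size, dice_rate) +
            slice_and_dice((x0, mid, x1, y1), max_size, dice_rate))

# An 8x8 pixel bound splits into four 4x4 pieces, each diced into 16 quads.
print(slice_and_dice((0.0, 0.0, 8.0, 8.0)))
# 64
```

In the real pipeline each diced quad would then be displaced and shaded before sampling, which is why slicing first matters: it keeps the working set of any one dice small, and that is what makes larger-than-core scenes practical.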
  14. I don't see why stack frames have anything to do with this, as long as you allocate in the initialized data section (i.e. outside any functions). But yes, GIMP or any paint package that supports export to C will do.
  15. Shadows for scares

    BioShock uses shadows for scares in quite a few places and it works really well. There are several other brilliant uses of light/dark in that game too. It works extremely well partly because it isn't overdone and is appropriately fitted into the level/story structure. FEAR is also a good example - although the gameplay itself gets a bit dull, the story and scare aspect is excellent throughout. (No spoilers in my post, you'll have to play the games if you haven't ;)