Madoc

Members
  • Content count

    16
  • Joined

  • Last visited

Community Reputation

146 Neutral

About Madoc

  • Rank
    Member
  1. Hello again, pretty busy these days! I'll try to say something, but going into any depth here would require a lot of time.
     • How many rigid bodies can it currently support (on an average target user machine)?
     Honestly, we haven't really stress tested it; it's more than enough for our purposes. In practice, the most demanding thing is probably keeping the simulation fluid (no jittering and popping) and stable even when objects are put under impossible stress. Of course, most of the time objects are resting and cost nothing. What is still very demanding is cloth collision detection (you can see this on the cloak here: www.youtube.com/watch?v=OPrOonmaG00). That's still going to need some clever optimisation.
     • When it comes to importing the meshes to the physics system, are they converted to a low-poly version and fit with convex hulls?
     Umm... to be honest, this is something I'd rather not reveal while we're still so early in development with the game. We'd like to be able to use it first. I hope you understand.
     • Any details you can share about the physics+animation combination would be cool.
     Well, applying some forces to an articulated body is not a big deal, but all you get is a body twitching awkwardly on the ground. Going from there to something that behaves like a character, that can maintain and recover balance, swing heavy weapons without falling over, get up from having fallen, run over rough terrain and do all sorts of other things, is rather challenging. This needed a lot of procedural behaviours and basically a mountain of hacks, and the way it all interacts is the stuff of nightmares. A lot of the body is controlled almost purely procedurally, and getting it to behave well in all circumstances has threatened to drive me insane. It's still a bit clumsy, but I hope to improve it further, also with more predictive behaviours, which have been somewhat neglected so far. I'm also looking to include more hand-crafted solutions (e.g. for breaking or preventing a fall), which from the little I've picked up looks to be closer to how Euphoria works. For some reason I still don't fully understand, I can't get traditional IK solvers to interact well with the system. I've tried this several times and just failed to produce anything that wasn't a bit twitchy (the people who made Euphoria are probably laughing at me right now). I have something that sort of does the job, but it works in mysterious ways.
  2. As a member of the gamedev forums I thought I'd step in here and drop a line. L. Spiro, you seem to just be describing some constraint solving method. It's not what I use, but even so, this would amount to a couple of lines of code; our character animation system is currently shy of 15,000 lines of code. I don't want to bash anyone or anything, but it doesn't seem right to mislead people into thinking that what I've done here can be reduced to something so simple. It's the result of a really huge effort, lots of testing and tweaking, and it's still a work in progress. I usually get things done very quickly; this has taken up inordinate amounts of my time. Cheers, Madoc
  3. Ugh. Right, well... Let me try and reply to some of that. Andy, I understood what you meant from your post earlier today. I see the merit in the method, but it just wouldn't work for our models; the depth complexity would cause problems. The only way we could use something similar is by rendering from the inside out, and that has the problems I described earlier. Anyway, as for the precision, we're not really doing AO, I mentioned that before. Accessibility shading is closer to what we are doing, but even that doesn't quite fit. There's a lot of stuff you can do with this information. You can set your FoV to 90 degrees, but our sample hemisphere is half a sphere and covers 180 degrees, hence the requirement for something more like a cubemap.
     I finished implementing kd-trees. The tree itself gave surprisingly little improvement over the octree approach (it should be a fairly decent SAH implementation; a rough sketch of the SAH split cost appears after these posts), but the "neat tricks" mentioned seem very worthwhile. I implemented just one and got a ~250% speed increase in the raycasts. These kd-trees have some nice implicit properties.
  4. What we're doing is quite different. Strictly speaking, it's not even AO, and certainly not the kind of AO you see so much of these days. We need to sample a complete hemisphere, and the only way (I can think of) to do that is with half a cube map, as I mentioned above. We also need very high levels of multisampling. The occlusion is calculated (with MS) for every pixel of very large maps, and the models are several million polygons. If you stick to the requirements I gave, some of our maps would take over 300 billion renders of several million polys. It's the specific number of "rays" in Melody that leads me to believe that it uses actual rays; you can't achieve an arbitrary number of well-distributed samples with an image, but of course they could just be lying about the number... Hope that clarifies things a little.
  5. That was basically in my reply to _Lopez. But for what we do, we'd need to render 75 images per texel, and without some expensive additional work the results would be incorrectly biased. You also have the problem of the near clipping plane (which you can eliminate, but not without causing more problems). This has to be precision work. Edit: I estimate the GPU rendering path as above would take about a week as opposed to a couple of hours. Also, NVidia's Melody allows you to choose a *specific number* of "rays", which suggests rays are indeed what they are using. kd-trees look nice. I haven't found any decent literature, but playing with the idea I can see some really neat tricks to speed things up. I'll definitely try it tomorrow. [Edited by - Madoc on November 1, 2007 7:53:42 PM]
  6. Thanks for the replies. kd-trees are something I have been considering, but I haven't looked at them in detail; I just know the basic concept. If they really are that much more efficient, then I'll definitely give them a shot. The intersection tests use precomputed data and are heavily optimised. There are two reasons for the high number of rays: one is that multisampling is used, and the other is that the surfaces are extremely complex and the fine details are important. We don't get visually acceptable results with fewer than about 1k rays per texel. We use 4k for production, but that's fine as we can just leave some machine grinding away at it once the model is ready for it. I'm also not so sure about Melody being ever so quick anymore. I just tried it again with one of our models and 200 rays and it took well over an hour. I'm sure I've seen it go much faster; it's hard to guess what affects its performance, though (it's also a bit of a pain to use...). _Lopez, I have heard of such methods, and I suppose rendering depth in half a cube map might work pretty well, but I also see a lot of problems and added complexity that I'm not prepared to deal with.
  7. I'm looking to optimise ray casting for ambient occlusion purposes. For a high quality render, we are casting near 4k rays per texel on the destination texture. This gets a bit slow, but even with fewer rays for "preview" purposes it's quite slow. We are currently using a well optimised octree to accelerate the ray casts (a generic sketch of this kind of per-texel bake appears after these posts). To be honest, I wouldn't be expecting or looking for *much* better performance if it weren't that NVidia's Melody seems to handle large numbers of rays for ambient occlusion considerably quicker. Does anyone know what they might be doing? I have considered a number of potential optimisations, but I don't think any would be very general purpose or effective. I would still stick to ray casting and not something more approximate, but a limited quality or accuracy sacrifice is fine for a preview mode.
  8. Apologies if you misunderstood me, my post was nothing more than a little sarcasm aimed at ATI.
  9. You MUST have introduced some serious bugs in your code, I don't see how else the ATI card could be working properly.
  10. You can derive the bitangent from the tangent and normal in the shader. I don't think there is any way to get by without computing a tangent, except in very special cases. Take a look at Eric Lengyel's code. In your shader, do B = (N cross T) * T.w, or something like: XPD bitangent, normal, tangent; MUL bitangent, bitangent, tangent.w; (a CPU-side version is sketched after these posts). I believe there are no issues with this method; it works fine for me with some pretty jumbled UV mapping. In a recent post I claimed problems in trying to split the vertices where tangents were discontinuous, but that is actually trivially solved; some of the data I was using in my splitting stage was corrupt/undefined!
  11. I need to calculate tangents for a mesh with adjacent faces having flipped AND mirrored UVs (or rotated 180 deg). So far I've only dealt with splitting vertices where face UVs have a different winding order, which is very simple. I'm using Eric's method for tangents. I've tried splitting vertices from faces with considerably different tangents and bitangents, but this proved surprisingly messy and unreliable. I haven't had any problems with badly behaved tangents besides this case of 180 deg rotation, but it makes sense that any case of discontinuous tangents should be handled. Are there any standard or ideal methods to detect and repair these conditions? (A handedness-based detection sketch appears after these posts.)
  12. Hello, I've been using stencil shadows for all dynamic shadowing for years now and I haven't seriously looked into shadow maps. Now I'm moving to the great outdoors and I seriously need an image-space approach. I've done a fair bit of research and I think I have a reasonable overview of all the more popular methods, but before I get into those I'd like to run some ideas of my own by someone with experience in shadow maps. One of the things I most dislike about shadow maps is the shimmering aliasing artifacts given by view-dependent methods. I am looking into directional lights, i.e. the sun, only (stencil shadows for other lights). My terrain is split into a square grid and each cell can be rendered independently with its own state set. Exploiting this, I would consider rendering a view-independent shadow map per cell. The advantages I see in this approach are:
      1) View-independent, hence aliasing artifacts don't move with camera transforms.
      2) View-independent, hence no need to re-render shadow maps for additional renderings in the same time-slice (e.g. water reflections).
      3) Per-cell updates mean I can update distant shadow maps (very large portions of the scene) with considerably lower frequency.
      The only downside I see is the relatively poor distribution of samples in view space for a first-person or similar view. This could result in poor resolution close up and minification artifacts further away. Also, I am presuming that I can transform my shadow map projection so as to obtain an isotropic distribution of shadow map samples onto my square terrain cell (a sketch of fitting such a per-cell projection appears after these posts). Not sure how yet, but I presume this would be fairly easy? So, can someone with experience in shadow maps suggest how serious these issues might be and help me figure out if this approach could be worthwhile? Thanks
  13. Oooh, Dave Eberly himself! Didn't know where to find your great resources since Magic Software disappeared. I see you've basically just changed the name. Right, clipping a polyhedron against every (negated) face plane of another in turn (given that the latter is convex) would indeed give me the intersection I require (a sketch of the per-face clipping appears after these posts). I at least have an alternative solution if the method I suggested doesn't prove worthwhile. Any idea if that separating-plane intersection test from the Gamasutra article is valid? Thanks
  14. Yeah, that much I understand. The problem is, at this point it becomes important that all redundant planes are eliminated (for performance reasons), and I will also require the additional features (vertices and edges). Until this point (component polyhedra), the other features are not important but can be provided at minimal cost. I considered that it may be most efficient to eliminate the redundant planes in a first step by determining whether they intersect the other polyhedron, then perform the union and finally derive the missing features. The first step becomes a matter of performing a number of polygon-polyhedron tests (a quick straddle-test filter for this is sketched after these posts). The polyhedra are simple, averaging 6 to 10 faces. I would appreciate any recommendation as to what method may prove most efficient here. I have considered V-Clip, but something more brute force may suffice. If the method from the Gamasutra article is valid, I may consider using a form of that. The second step of deriving edge and vertex features from the planes is where I've encountered precision issues in the past. Hopefully I can overcome these problems, otherwise I may have to look at a method that preserves existing features. I don't like the sound of that.
  15. Hmm... Believe it or not, I've always completely ignored BSP; I use portals, octrees and BV hierarchies and have never even read up on BSP (maybe it's time I did). I can't imagine how it might help me. I suppose the reason I proposed the above method is that I've found it useful in a number of circumstances. Most of the volumes I work with in this context do not have any explicit features beyond plane equations (though they could be defined), and being able to derive additional features when necessary is quite attractive. I am basically assessing the feasibility of a revised implementation of this, and I am still not certain what all the requirements are; at this point I am mostly trying to determine what is available to me. So, excluding the existence of well-known solutions, if I were to take the approach I suggest, can you confirm that it is theoretically correct and speculate as to whether the precision issues are surmountable? I am afraid to make mistakes in this planning stage as I can't do any actual testing for quite some time yet. What are you proposing by "testing every edge vs face"? Thanks
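
Sketches referenced in the posts above. All of these are generic illustrations written for this page, not code from the engine discussed in the threads; every type, function and constant name is made up for the example.

Post 3 mentions an SAH-based kd-tree build. This is a minimal sketch of the surface area heuristic cost used to rank candidate split planes; the traversal and intersection costs are illustrative placeholders.

      #include <cstddef>

      // Surface area heuristic (SAH): the expected cost of a split is the traversal
      // cost plus, for each child, the probability of a ray entering it (proportional
      // to its surface area) times the number of primitives it would have to test.
      float sahSplitCost(float parentArea,
                         float leftArea, std::size_t leftCount,
                         float rightArea, std::size_t rightCount,
                         float traversalCost = 1.0f,
                         float intersectCost = 1.5f)
      {
          float pLeft  = leftArea  / parentArea;   // hit probability of the left child
          float pRight = rightArea / parentArea;   // hit probability of the right child
          return traversalCost + intersectCost * (pLeft * leftCount + pRight * rightCount);
      }
      // A builder evaluates this cost for candidate planes and keeps the cheapest one,
      // falling back to a leaf when no split beats intersectCost * primitiveCount.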
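
Posts 6 and 7 discuss casting on the order of 1k to 4k well-distributed rays per texel. As a point of reference only (the posts make clear the actual technique is closer to accessibility shading than plain AO), here is a generic per-texel occlusion bake with cosine-weighted hemisphere sampling; Vec3, rand01 and the castRay callback are stand-ins, not the engine's API.

      #include <cmath>
      #include <cstdlib>

      struct Vec3 { float x, y, z; };

      static Vec3 cross(const Vec3& a, const Vec3& b) {
          return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
      }
      static Vec3 normalize(const Vec3& v) {
          float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
          return { v.x / len, v.y / len, v.z / len };
      }

      // Uniform random in [0,1); production code would use stratified or
      // low-discrepancy samples to get a better distribution per texel.
      static float rand01() { return (float)std::rand() / ((float)RAND_MAX + 1.0f); }

      // Cosine-weighted direction on the hemisphere around +Z (Malley's method).
      static Vec3 cosineSampleHemisphere() {
          float u1 = rand01(), u2 = rand01();
          float r = std::sqrt(u1);
          float phi = 2.0f * 3.14159265f * u2;
          return { r * std::cos(phi), r * std::sin(phi), std::sqrt(1.0f - u1) };
      }

      // Rotate a z-up sample into the frame whose z axis is the surface normal.
      static Vec3 toNormalFrame(const Vec3& local, const Vec3& n) {
          Vec3 helper = std::fabs(n.x) > 0.9f ? Vec3{ 0, 1, 0 } : Vec3{ 1, 0, 0 };
          Vec3 t = normalize(cross(helper, n));
          Vec3 b = cross(n, t);
          return { local.x * t.x + local.y * b.x + local.z * n.x,
                   local.x * t.y + local.y * b.y + local.z * n.y,
                   local.x * t.z + local.y * b.z + local.z * n.z };
      }

      // Fraction of rays that escape the scene; castRay stands in for whatever
      // octree / kd-tree query the engine actually provides.
      float bakeTexelOcclusion(const Vec3& position, const Vec3& normal, int rayCount,
                               bool (*castRay)(const Vec3& origin, const Vec3& dir))
      {
          int unoccluded = 0;
          for (int i = 0; i < rayCount; ++i) {
              Vec3 dir = toNormalFrame(cosineSampleHemisphere(), normal);
              if (!castRay(position, dir))
                  ++unoccluded;
          }
          return (float)unoccluded / (float)rayCount;
      }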
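
The snippet in post 10 reconstructs the bitangent in the shader from the normal, the tangent and the handedness stored in tangent.w. The same operation on the CPU, spelled out with a throwaway vector type:

      struct Vec3 { float x, y, z; };
      struct Vec4 { float x, y, z, w; };   // xyz = tangent, w = handedness (+1 or -1)

      // B = (N cross T) * T.w, i.e. the shader's XPD + MUL pair.
      Vec3 computeBitangent(const Vec3& n, const Vec4& t)
      {
          Vec3 b = { n.y * t.z - n.z * t.y,     // cross(N, T)
                     n.z * t.x - n.x * t.z,
                     n.x * t.y - n.y * t.x };
          b.x *= t.w;                            // flip where the UV mapping is mirrored
          b.y *= t.w;
          b.z *= t.w;
          return b;
      }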
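
For post 11, one common way to detect the mirrored or rotated cases is to look at the sign of the UV triangle's area: faces whose sign differs imply opposite-handed tangent frames, and vertices shared across that boundary are the ones that need splitting. A small sketch, with illustrative names:

      struct Vec2 { float u, v; };

      // Handedness of the tangent frame implied by a triangle's UVs:
      // +1 for a normally oriented mapping, -1 where it is mirrored.
      int uvHandedness(const Vec2& uv0, const Vec2& uv1, const Vec2& uv2)
      {
          float du1 = uv1.u - uv0.u, dv1 = uv1.v - uv0.v;
          float du2 = uv2.u - uv0.u, dv2 = uv2.v - uv0.v;
          float signedArea = du1 * dv2 - du2 * dv1;   // 2x the signed UV area
          return signedArea < 0.0f ? -1 : 1;
      }
      // During import: if two faces sharing a vertex return different signs,
      // duplicate that vertex so each copy can carry its own tangent/bitangent.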
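
Regarding post 12, fitting a fixed orthographic projection to each terrain cell is straightforward once the cell's corners are projected into a light-space basis; whether the resulting sample distribution is acceptable is exactly the open question in that post. A sketch under the assumption of a simple bounding box per cell, with all names made up for the example:

      #include <algorithm>
      #include <cmath>

      struct Vec3 { float x, y, z; };

      static Vec3 cross(const Vec3& a, const Vec3& b) {
          return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
      }
      static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
      static Vec3 normalize(const Vec3& v) {
          float len = std::sqrt(dot(v, v));
          return { v.x / len, v.y / len, v.z / len };
      }

      struct OrthoBounds { float minX, maxX, minY, maxY, minZ, maxZ; };

      // Project the eight corners of a terrain cell's bounding box into a basis
      // aligned with the directional light and take the extents. Because neither
      // the cell nor the sun direction changes with the camera, the resulting
      // projection is view-independent, which is what keeps the sample pattern
      // from shimmering.
      OrthoBounds fitCellLightOrtho(const Vec3 corners[8], const Vec3& lightDir)
      {
          Vec3 dir = normalize(lightDir);
          Vec3 helper = std::fabs(dir.y) > 0.99f ? Vec3{ 1, 0, 0 } : Vec3{ 0, 1, 0 };
          Vec3 right = normalize(cross(helper, dir));
          Vec3 up = cross(dir, right);

          OrthoBounds b = { 1e30f, -1e30f, 1e30f, -1e30f, 1e30f, -1e30f };
          for (int i = 0; i < 8; ++i) {
              float x = dot(corners[i], right);
              float y = dot(corners[i], up);
              float z = dot(corners[i], dir);
              b.minX = std::min(b.minX, x); b.maxX = std::max(b.maxX, x);
              b.minY = std::min(b.minY, y); b.maxY = std::max(b.maxY, y);
              b.minZ = std::min(b.minZ, z); b.maxZ = std::max(b.maxZ, z);
          }
          return b;   // feed these extents into the usual orthographic projection matrix
      }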
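
Posts 13 and 14 talk about intersecting convex polyhedra by clipping faces against the other body's planes, and about cheaply rejecting planes that cannot contribute. The two routines below sketch the standard building blocks (Sutherland-Hodgman style clipping and a vertex straddle test); they are generic, not the thread's implementation, and the epsilon is a placeholder.

      #include <cmath>
      #include <cstddef>
      #include <vector>

      struct Vec3  { float x, y, z; };
      struct Plane { Vec3 n; float d; };   // points p with dot(n, p) + d <= 0 are inside

      static float signedDistance(const Plane& pl, const Vec3& p) {
          return pl.n.x * p.x + pl.n.y * p.y + pl.n.z * p.z + pl.d;
      }

      // Clip one convex face against one plane, keeping the part on the negative side.
      // Applying this to every face of A against every face plane of B (and vice
      // versa, then capping the openings) yields the boundary of the intersection.
      std::vector<Vec3> clipFaceAgainstPlane(const std::vector<Vec3>& face, const Plane& pl)
      {
          std::vector<Vec3> out;
          const std::size_t n = face.size();
          for (std::size_t i = 0; i < n; ++i) {
              const Vec3& a = face[i];
              const Vec3& b = face[(i + 1) % n];
              float da = signedDistance(pl, a);
              float db = signedDistance(pl, b);
              if (da <= 0.0f) out.push_back(a);          // keep vertices on the inside
              if (da * db < 0.0f) {                      // the edge crosses the plane
                  float t = da / (da - db);
                  out.push_back({ a.x + t * (b.x - a.x),
                                  a.y + t * (b.y - a.y),
                                  a.z + t * (b.z - a.z) });
              }
          }
          return out;
      }

      enum class Side { Front, Back, Straddling };

      // Quick classification of a vertex cloud against a plane. If every vertex of
      // the other polyhedron sits strictly on one side of a face's plane, that face
      // cannot intersect it, which is a first filter for discarding redundant planes
      // before doing the full polygon-vs-polyhedron test.
      Side classifyVerticesAgainstPlane(const std::vector<Vec3>& verts, const Plane& pl,
                                        float epsilon = 1e-5f)
      {
          bool front = false, back = false;
          for (const Vec3& v : verts) {
              float d = signedDistance(pl, v);
              if (d >  epsilon) front = true;
              if (d < -epsilon) back  = true;
              if (front && back) return Side::Straddling;
          }
          return front ? Side::Front : Side::Back;
      }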