About BeastX

  1. This isn't a standard cube map. My cube faces are all on a single texture that's subdivided into an NxM grid. For each light, I render depth/shadow data into a viewport/cell on the grid instead of a unique FBO per face. This saves FBO changes and texture units, and lets me render multiple lights and shadows per pass. CLAMP_TO_EDGE won't work with the subdivided texture because not all cell edges lie on the texture's border. Clamping UVs in the fragment shader so they don't cross cell boundaries behaves as I'd expect, but doesn't fix the outline.
  2. For my generic forward renderer, to reduce passes, I've implemented spot and directional shadow mapping using a texture atlas that's subdivided to support several lights. I'm using that same texture to implement VSDCTs for point lights. This looks correct except that I have edges, borders, and outlines around each sampled face edge. I've tried clamping my UVs, which shifts the shadow and just moves this "border". I've tried increasing the shadow view's FOV beyond 90, which only has a noticeable effect past 130 degrees, but that is excessive and destroys depth map precision. I'd compare this to the seam issues with DPSMs, but I have several seams instead of one. Any suggestions would be greatly appreciated.
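A minimal sketch of the cell-clamping idea described above, assuming a 2048x2048 atlas of 512x512 cells with the six faces stored in consecutive cells, row-major (the sizes, the face-to-cell layout, the face projection convention, and the half-texel inset are all my assumptions, not the poster's actual shader):

```python
CELL_W, CELL_H = 512, 512        # pixels per atlas cell (assumed)
ATLAS_W, ATLAS_H = 2048, 2048    # whole atlas (assumed)

def face_and_uv(d):
    """Pick the dominant axis of direction d = (x, y, z) and project to [0,1]^2,
    as a virtual-cube-map lookup would."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        face = 0 if x > 0 else 1
        u, v, ma = (-z if x > 0 else z), -y, ax
    elif ay >= az:
        face = 2 if y > 0 else 3
        u, v, ma = x, (z if y > 0 else -z), ay
    else:
        face = 4 if z > 0 else 5
        u, v, ma = (x if z > 0 else -x), -y, az
    return face, (u / ma + 1.0) * 0.5, (v / ma + 1.0) * 0.5

def atlas_uv(face, u, v, cells_per_row=4):
    """Clamp the face-local UV half a texel inside the cell (so bilinear taps
    can't reach the neighboring cell), then offset into the atlas."""
    half_u, half_v = 0.5 / CELL_W, 0.5 / CELL_H
    u = min(max(u, half_u), 1.0 - half_u)
    v = min(max(v, half_v), 1.0 - half_v)
    cx, cy = face % cells_per_row, face // cells_per_row
    return ((cx + u) * CELL_W / ATLAS_W, (cy + v) * CELL_H / ATLAS_H)
```

The half-texel inset keeps a single bilinear fetch inside the cell; wider PCF kernels would need a proportionally larger inset or a gutter of duplicated edge texels around each cell.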
  3. [4E6] Conclusion

    Quote: Original post by Numsgil
      Quote: Original post by BeastX
        You're provided with an extensive list of free resources in the contest forum, as well as the web, and any other resources you can find, make, or purchase.
      Why would you purchase assets for a little game competition like this? I looked through the asset links when the competition first started. I saw nothing particularly relevant (i.e. ponies or crystals) or usable (there were some links to tilesets that were something like 8x8, which is, well, unusable :))
    It's never just a contest! It's a blip on the radar, hosted by a globally viewed site, and resume filler. While a GDNet+ membership is free money, its dollar value doesn't compare to free hardware and software worth hundreds or thousands of dollars, as was the case with 4e4. 4e4 had prizes and visibility that represented a decent return on investment for winners.
  4. [4E6] Conclusion

    You're provided with an extensive list of free resources in the contest forum, as well as the web, and any other resources you can find, make, or purchase. I'll say the same thing about art that I do about the elements: as long as it comes together and is consistently stylized, you should be able to make a decent game. There are several great games that use hand-drawn pen, pencil, crayon, and other media to create a simple style. I can't find the pen-and-paper Space Invaders. Great stuff!
  5. [4E6] Conclusion

    I still say elements don't matter. If the contest had remained earth, fire, wind, and water, would you just make a game with those elements as terrain, or would you shoot for something like Avatar or Captain Planet? You decided which elements were good or bad based on your basic interpretation that tangible elements would simply be props. I say that you limited yourself by not making your potatoes sentient, cannibalistic creatures; your platforms red, ruby elevator shoes; your physics the subject that an antagonizing, OCD professor studied; and so on.
  6. [4E6] Conclusion

    I'm sure they'll be judged soon enough. Then again, it's never soon enough for the participants. As much as people complained about the elements and contest, I'd not be in a rush to judge either.

    I have mixed feelings about the contest, but I love the random elements and free-form nature. I never have time, but I've wanted to enter every year since Ninjas/Pirates/Robots/Aliens. That was the best year for high-tech, varied, complete entries. Then again, it had a theme everyone could appreciate and a great set of prizes as incentive. The elements should never really matter; any creative person should be able to adapt them to their needs. This year, I wanted to do a dark, comical game inspired by Rainbow Brite. Last year, I wanted to make a game inspired by Jem and the Holograms, with metal and rock, set in Europe, fighting the evils of economy-eating American Idols and country music. Comedy and imagination are so underrated in games. Every cartoon and movie from the 80s and 90s works as amazing fuel for thought. People go overboard with their lofty ideas for such a short-term project.

    Still, it is a little unbalanced that 2D Flash and Game Maker artists can pummel programmers who can make the tech but not the content. Fewer programmers will enter, leaving pretty, complete, but low-tech submissions. Since animators and artists don't waste time on likely-to-fail projects, I've considered paying for content that goes beyond my ability or schedule. To invest that seriously, I'd have to have either ulterior motives or a decent ROI. Sure, legal freeware, shareware, or fully licensed software is obviously required for development. Why would anyone (like nVidia, AMD, Intel, Adobe, Autodesk, etc.) sponsor a competition whose entries didn't reflect the potential of their products? Make bigger, better games and get bigger, better prizes. (I'd buy that for a dollar!)

    For future contests, I'd recommend the following:
    - Extend the contest to a full year, or 6 months, and have them back to back.
    - Announce new elements the day the previous contest ends.
    - Announce and commit to submission dates up front.
    - Announce and commit to judging dates up front.
    - Announce base prizes up front.
    - Set a higher bar for minimum game requirements to balance entries.

    [Edited by - BeastX on April 29, 2008 1:10:02 PM]
  7. I successfully got Ogre and MD5 mesh and animation data exporting from MAX, rendering and animating with the same code. Collada and MS3D will come next.
  8. After I got OGRE XML working, I went back to MD5 to attempt to convert it to a usable format (my normal mesh format) for standard skinning.
    1) The first problem was caused by not resetting my model's/skin's transform after scaling it down in MAX. That really only affected Ogre XML because the MD5 exporter ignored scale altogether.
    2) After the data was correct, I made sure the MD5 still worked using the MD5 format's assumed method of rendering. That requires building the actual mesh for the current frame on the CPU and passing it to the normal, static mesh rendering pipeline. Some people managed to do it GPU-side, but it wouldn't leave room for other effects. The advantages of building an intermediate mesh are that it doesn't have to be skinned per draw call, your bone count per vertex is limitless, and it can be used for other things like triangle soup collision. It's essentially the same as vertex tweening with less data. Normal GPU and CPU skinning are much cheaper, don't require custom code, and don't require dealing with so much data. GPU skinning is cheapest in all cases, whether by API or shader. That's good enough reason to convert.
    3) Build the base mesh as normal, transforming and combining weights by their joints to make the mesh's vertices.
    4) The joints for the MD5Mesh base skeleton (bind pose) are already transformed by their parents. They need to be concatenated with the inverse of their parent joint to match the rest of the data. This is if your system builds the inverse bind matrices instead of taking them directly.
    5) Create the inverse bind matrices per bone from the step above. Only transform by these when providing matrices to the renderer.
    6) Normalizing the weight values was the correct solution to chopping off too many weights per vertex.
    7) There was a minor vertex twitch in one spot after removing some weights, so limiting the bone count per vertex to 4 in MAX on the skin modifier worked best.
    Then it was all henshin a go-go, baby.
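The weight-trimming fix from steps 6-7 can be sketched like this (the (joint, weight) pair layout and the helper name are my assumptions; the point is that the surviving four influences are renormalized to sum to one rather than used as-is):

```python
def top4_weights(weights):
    """Keep the four largest (joint_index, weight) influences for a vertex
    and renormalize them so they sum to 1.0, avoiding the shrunken/twitchy
    vertices you get when trimmed weight is simply discarded."""
    kept = sorted(weights, key=lambda jw: jw[1], reverse=True)[:4]
    total = sum(w for _, w in kept)
    return [(j, w / total) for j, w in kept]
```

Doing the same trim on the DCC side (limiting the skin modifier to 4 bones per vertex in MAX) gives the artist control over which influences survive, which is why it behaved best in practice.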
  9. I got everything working perfectly. It took a while for my brain to consider, and some searching to confirm, that Ogre stores skeletal animation keyframes as deltas from the base pose instead of the actual pose for that frame. I just converted the deltas to whole transforms and fed those to my animation system. When drawing just the skeleton, it also wasn't registering (what I get for staying up late) that I should only transform by the base pose's inverse for rendering purposes. After that, and some minor tweaks, it all worked as far as Ogre XML (which is only meant to be an intermediate format to get things working) was concerned.
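The delta-to-whole conversion can be illustrated like this (quaternions as (x, y, z, w); the bind-times-delta composition order is my assumption about the exporter's convention, not something taken from the Ogre docs):

```python
def quat_mul(a, b):
    """Hamilton product of two quaternions stored as (x, y, z, w)."""
    ax, ay, az, aw = a
    bx, by, bz, bw = b
    return (aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw,
            aw * bw - ax * bx - ay * by - az * bz)

def delta_to_absolute(bind_t, bind_q, key_t, key_q):
    """Compose a keyframe stored relative to the bind pose into a whole
    local-space pose: add the translation delta, multiply the rotation delta."""
    t = tuple(b + d for b, d in zip(bind_t, key_t))
    q = quat_mul(bind_q, key_q)
    return t, q
```

With this, an identity keyframe reproduces the bind pose exactly, which is a handy sanity check before feeding the converted poses to a generic animation system.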
  10. Suitable 3D Format

    They all have limitations. MD5 is good but requires a unique mesh format and software skinning, AFAIK. OGRE XML is good and straightforward, even though I'm having issues applying animations to it in my own code. ASE supports vertex tweening animation. Others recommend COLLADA, which does EVERYTHING; but because it does so much, there's even more to wade through. X text files are usable if you can parse them.
  11. Everything works fine now regarding bone count. However, the md5 format requires me to have a custom mesh class. I'd like to convert the md5 mesh and anim to work with my normal mesh class, which uses D3D indexed vertex blending. I've attempted to create the bind pose mesh (the default pose from the md5mesh file), invert each bone's local transform, and plug that into my animation system. What I get is an animated skin that somewhat resembles a roadkill version of my model. I know this conversion/combination is possible since others have written md5 importers. Can someone who's successfully done this offer assistance, please?
  12. If not Ogre, what other text based formats are readily available with exporters from Max that provide all the data I need for meshes, skeletons, and animations?
  13. I've written a generic animation system. I'm using MD5 and Ogre XML formats for test data. I can get MD5 meshes and animations to load, play, and render perfectly if I do my own software skinning. I'm having issues using the same animation code with Ogre's XML formats. The base mesh renders correctly, but when animating it warps and stretches like Plastic Man, Mr. Fantastic, and Elastigirl/Mrs. Incredible. I've read that it could be necessary for me to apply an inverse transform matrix from my base skeleton pose. That makes complete sense because the base mesh would already be transformed by the base pose. The questions (for Ogre's export specifically): If the lack of an inverse transform is my problem, where should I compute it and where should it be applied? Are the base skeleton's bones already transformed by their parents directly out of the file? Do I apply the inverse for every bone in the hierarchy or just the current bone? [Edited by - BeastX on July 13, 2006 10:27:04 AM]
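For reference, here is the generic inverse-bind construction (not Ogre's specific export convention, which is what the question is really about): each bone's skinning matrix is its animated global transform composed with the inverse of that bone's global bind transform, with globals accumulated parent-first. The matrix helpers and the parents-before-children ordering are assumptions of this sketch:

```python
def mat_mul(a, b):
    """4x4 row-major matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def rigid_inverse(m):
    """Invert a rigid transform (rotation + translation, no scale):
    inverse = (R^T | -R^T t)."""
    r = [[m[j][i] for j in range(3)] for i in range(3)]
    t = [-sum(r[i][j] * m[j][3] for j in range(3)) for i in range(3)]
    return [r[0] + [t[0]], r[1] + [t[1]], r[2] + [t[2]], [0, 0, 0, 1]]

def globals_from_locals(locals_, parents):
    """Accumulate local transforms down the hierarchy.
    Assumes parents[i] < i, with -1 for the root."""
    out = []
    for i, local in enumerate(locals_):
        p = parents[i]
        out.append(local if p < 0 else mat_mul(out[p], local))
    return out

def skinning_matrices(bind_locals, anim_locals, parents):
    """Per-bone matrix = animated global * inverse(bind global); this maps
    bind-pose vertices into the animated pose."""
    bind_g = globals_from_locals(bind_locals, parents)
    anim_g = globals_from_locals(anim_locals, parents)
    return [mat_mul(a, rigid_inverse(b)) for a, b in zip(anim_g, bind_g)]
```

A useful test for any importer: feeding the bind pose back in as the animation must yield identity skinning matrices, i.e. the mesh must not move. If it warps instead, the inverse is being applied at the wrong level (per-local instead of per-global, or vice versa).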
  14. I'll try that. I'd tried trimming the smaller weights and normalizing the remaining four as if they were a vector. That failed :) Do you still average in the position of the ignored weights?
  15. I have a generic skinning and animation system that I hope can support a few formats. Currently, I'm adding support for MD5. It was easy enough to get MD5 meshes, skeletons, and animations loading and playing. My skeleton proxy renders and animates correctly, but my mesh has a few anomalies. Because it's a generic system, I can't follow the standard MD5 soft-skinning approach. I tried using the highest-priority weights to compute untransformed, weighted vertex positions for a base mesh. This actually kind of worked, but it's not as perfect as the usual soft-skinning approach because I'm limiting the number of weights. Has anyone else encountered this, or does anyone have suggestions? [Edited by - BeastX on July 13, 2006 11:49:26 AM]