
BlackSheep

Member
  • Content count

    492

Community Reputation

100 Neutral

About BlackSheep

  • Rank
    Member
  1. Radiosity Visibility Preprocessing

    Re: slow visibility calculations. I saw an enormous increase in speed simply from preprocessing a per-polygon dot-product visibility test. For every polygon or triangle in the scene, store a flag for each other polygon that faces away from it (and therefore has a form factor of zero). For each patch in the scene, keep a reference to the polygon/triangle it belongs to. That way, when you do all the patch-to-patch checks, you can discard huge piles of patches simply by reading through the poly-sees-poly data. Very easy to code, and it gives huge improvements - a rough sketch is below.
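    A minimal sketch of that preprocessing step, assuming each polygon stores a plane (the struct and function names here are just illustrative, not from any particular engine):

        // Sketch only: precompute a polygon-sees-polygon table so the
        // patch-to-patch loop can skip whole polygons. A polygon j is
        // definitely invisible from polygon i (form factor zero) if every
        // vertex of j lies behind or on i's plane, or vice versa.
        #include <vector>
        #include <cstddef>

        struct Plane   { float nx, ny, nz, d; };                      // n.x*x + n.y*y + n.z*z + d = 0
        struct Polygon { Plane plane; std::vector<float> vertsXYZ; }; // flat x,y,z triples

        static bool allVertsBehind(const Polygon& from, const Polygon& target)
        {
            for (std::size_t v = 0; v + 2 < target.vertsXYZ.size(); v += 3)
            {
                const float dist = from.plane.nx * target.vertsXYZ[v]
                                 + from.plane.ny * target.vertsXYZ[v + 1]
                                 + from.plane.nz * target.vertsXYZ[v + 2]
                                 + from.plane.d;
                if (dist > 0.0f)          // at least one vertex in front: cannot reject
                    return false;
            }
            return true;
        }

        // canSee[i][j] == false means every patch on polygon i may skip every patch on polygon j.
        std::vector<std::vector<bool>> buildPolyVisibility(const std::vector<Polygon>& polys)
        {
            std::vector<std::vector<bool>> canSee(polys.size(), std::vector<bool>(polys.size(), true));
            for (std::size_t i = 0; i < polys.size(); ++i)
                for (std::size_t j = 0; j < polys.size(); ++j)
                    if (i != j && (allVertsBehind(polys[i], polys[j]) || allVertsBehind(polys[j], polys[i])))
                        canSee[i][j] = false;
            return canSee;
        }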
  2. triangle - AABB intersection

    Quote: Original post by Lode
        Quote: Original post by BlackSheep
            OTTOMH - simply checking whether each vertex of the triangle is inside the AABB will do it. Quite a lot of point-plane distance tests, though. Maybe there's a cleverer way than just brute-forcing it.
        Sometimes all vertices of the triangle are outside the AABB and still a part of the triangle may go through the AABB; I'd like to include those cases too.

    If you use the hyperplanes of the AABB rather than the absolute rectangular boundaries, you'll catch those too - see the sketch below.
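    A minimal sketch of that plane-based rejection. Note it is conservative: a triangle that only comes close to a corner can still be reported as overlapping; an exact answer needs a full separating-axis test (box axes, triangle normal and the nine edge cross-products).

        // Sketch only: reject the triangle if all three vertices lie outside
        // any one of the AABB's six face planes; otherwise report a possible overlap.
        struct AABB { float min[3], max[3]; };

        // tri: three vertices, each {x, y, z}
        bool triangleMayOverlapAABB(const float tri[3][3], const AABB& box)
        {
            for (int axis = 0; axis < 3; ++axis)
            {
                bool allBelow = true, allAbove = true;
                for (int v = 0; v < 3; ++v)
                {
                    if (tri[v][axis] >= box.min[axis]) allBelow = false;
                    if (tri[v][axis] <= box.max[axis]) allAbove = false;
                }
                if (allBelow || allAbove)   // separated by one face plane: no intersection
                    return false;
            }
            return true;                    // potentially overlapping (may be a false positive)
        }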
  3. triangle - AABB intersection

    OTTOMH - simply checking whether each vertex of the triangle is inside the AABB will do it. Quite a lot of point-plane distance tests, though. Maybe there's a cleverer way than just brute-forcing it (quick sketch below).
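    A quick sketch of that brute-force version (it deliberately misses triangles whose vertices are all outside the box but which still pass through it - the follow-up above deals with those):

        // Sketch only: accept the triangle only if every vertex lies inside the box.
        bool allTriangleVertsInsideAABB(const float tri[3][3], const float boxMin[3], const float boxMax[3])
        {
            for (int v = 0; v < 3; ++v)
                for (int axis = 0; axis < 3; ++axis)
                    if (tri[v][axis] < boxMin[axis] || tri[v][axis] > boxMax[axis])
                        return false;
            return true;
        }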
  4. Radiosity in Practice

    I could be wrong, but doesn't Half-life 2 use precomputed radiosity lightmaps, with normal mapping overlaid for extra detail?
  5. Radiosity in Practice

    VladR: Following you OK there! Can you just clarify - when you say 400 passes per second, do you mean shooting the 400 highest-energy patches? I think this is not enough for acceptable real-time performance unless the patch density is very low: even at 20 fps you'd only be able to shoot 20 patches per frame, which isn't going to be enough unless there are some amazingly bright patches to work with. Maybe I am still missing something?

    Pre-calculating X positions of the light for moving lights is a good idea, but it quickly falls over with multiple lights - even with two lights you'd need 10,000 sets of data to cover every combination in which the cast light from each interacts with the other (X positions per light means X^2 combinations for two lights), unless you manually work out the minimum necessary lighting positions for all lights. The rough numbers are sketched below. Dammit, this is making me want to dust off my old radiosity processor, but I've got finals to study for!
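    A tiny sketch of the arithmetic behind both of those claims (the 100-positions-per-light figure is a made-up example, not something from the thread):

        // Sketch only: patches shot per frame at a target frame rate, and how
        // precomputed light positions multiply up with several moving lights.
        #include <cstdio>
        #include <cmath>

        int main()
        {
            const double passesPerSecond   = 400.0;   // VladR's measured figure
            const double targetFps         = 20.0;
            const double patchesPerFrame   = passesPerSecond / targetFps;                // 20 shooters per frame

            const double positionsPerLight = 100.0;   // hypothetical sampling of a light's path
            const int    movingLights      = 2;
            const double precomputedSets   = std::pow(positionsPerLight, movingLights);  // 10,000 data sets

            std::printf("patches shot per frame: %.0f\n", patchesPerFrame);
            std::printf("precomputed data sets:  %.0f\n", precomputedSets);
            return 0;
        }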
  6. Radiosity in Practice

    Quote: Original post by David Hart
        Screenshot 1
        Screenshot 2

    Nice work - looks like a heck of a time saving doing that much interpolation. Your red-green shot looks strange, though. Is your light part of the ceiling? The pic makes it look like a light suspended from the ceiling. Also, is the ceiling really that dark after 200 passes? Maybe a slight reduction in patch count would be helpful :)
  7. Radiosity in Practice

    Quote: Original post by David Hart
        Well, not really. Form factors between two stationary objects are constant. But if we start talking about moving objects, just moving a light means recalculating all the form factors between each patch on the light and all other patches. And then I could start moving an object in front of the light, and there are even more form factors to calculate...

    Exactly. VladR mentioned precomputing static walls only, without reference to moving objects - and moving objects would indeed require complete recalculation of form factors. However, for static geometry there is no restriction on the position, intensity or colour of the lights in the scene. Even the number of lights is relatively easy to play with, as only the first pass (the initial shooting of the lights' energy) takes longer; there may just be a corresponding increase in the number of passes required to reach an acceptable solution. A rough sketch of why the lights stay free is below.
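    A minimal sketch of a progressive-refinement shooting loop, assuming the form factors have already been computed and stored (names are illustrative): the geometry-only form factor table never mentions the lights - only the initial unshot energy does, so moving or recolouring a light just means re-seeding the unshot values and re-running the passes.

        // Sketch only: Cohen-style progressive refinement over precomputed,
        // purely geometric form factors F(i->j).
        #include <vector>
        #include <cstddef>

        struct Patch
        {
            float reflectance[3];
            float radiosity[3];   // accumulated result
            float unshot[3];      // energy still to be shot; the lights seed this
            float area;
        };

        void solveRadiosity(std::vector<Patch>& patches,
                            const std::vector<std::vector<float>>& formFactor,  // formFactor[i][j] = F(i->j)
                            int maxPasses)
        {
            for (int pass = 0; pass < maxPasses; ++pass)
            {
                // Pick the shooter with the most unshot energy (weighted by area).
                std::size_t shooter = 0;
                float best = -1.0f;
                for (std::size_t i = 0; i < patches.size(); ++i)
                {
                    const float e = (patches[i].unshot[0] + patches[i].unshot[1] + patches[i].unshot[2]) * patches[i].area;
                    if (e > best) { best = e; shooter = i; }
                }

                // Distribute its energy to every receiver (reciprocity gives the area ratio).
                for (std::size_t j = 0; j < patches.size(); ++j)
                {
                    if (j == shooter) continue;
                    const float F = formFactor[shooter][j] * patches[shooter].area / patches[j].area;
                    for (int c = 0; c < 3; ++c)
                    {
                        const float dB = patches[j].reflectance[c] * patches[shooter].unshot[c] * F;
                        patches[j].radiosity[c] += dB;
                        patches[j].unshot[c]    += dB;
                    }
                }
                for (int c = 0; c < 3; ++c) patches[shooter].unshot[c] = 0.0f;
            }
        }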
  8. Radiosity in Practice

    Quote: Original post by VladR
        Well, it should only help you in case you're just blindly testing visibility of every patch with each other patch, right? Such patches (on the same polygon) would get rejected during form-factor calculation anyway (because of the phi_p, phi_r angles). So you would just save 1 (maybe 2) dot products for the price of a condition per patch (possibly hundreds of thousands of condition executions throughout a complete radiosity solution). Otherwise you're testing visibility of a given patch in your code before testing the angles of the normals, which would be inefficient. Personally, I have a text configuration file where you can specify self-shadowing, and if you set it to false for a given object, it automatically shoots only to patches of all other objects. So if I have a wall that has, say, 5000 patches, any shooter from that wall is shooting energy directly (without any checking for each patch) only to all other objects, saving 5000 partial form-factor calculations per pass. Of course, it's good mainly for walls. If you had a more complicated object like a statue, the results would be better with self-shadowing on. But the majority of patches are usually just walls, so why not make use of this?

    I realise that logically a patch-by-patch test should be slower than simple per-polygon testing, or than just letting it drop out of the FF equation and saving the conditionals, but my test scenes were showing savings of up to 30 seconds on a 5-minute scene, so I kept it - every little helps, and that's quite a big help :) This may have been a geometry-dependent saving though; I'll need to test the code with different scenes to reach a more reliable conclusion.

    Quote:
        EDIT2: I had dropped the idea of interpolation when I was starting with radiosity and somehow forgot about it. But now that you reminded me of it, I made some calculations: a scene with 60k patches can be approximated with 4146 patches (each of them acting as the corner of a 4x4 patch quad). Since wall patches in a game are stationary, we can precompute their visibility (2 MB). In 1 second I managed to compute 401 passes (full FF calculation). This means it's possible to do it in real time! Now I don't know how much time it would take to update the texture each frame (or maybe every 10th frame) - I'd have to try it. But the calculations themselves are definitely real-time! Besides, once you reached a certain number of passes (say, 1000), you could just keep the last textures and stop doing any more passes to raise the framerate! So, for stationary lights, real-time radiosity is entirely possible for common scenes. Moving lights (like in Doom 3) couldn't be precalculated, but still, 401 passes per second offers plenty of room for many other in-game activities (AI, navigation, controls). What do you think?

    I'm not sure why you think you could only do static lighting in real time? Form factors are constant and independent of the lighting variables, so it should be possible to do whatever you like to the lights. You don't explicitly mention what data you're storing - visibility only, complete FFs, or more? (A sketch of a packed visibility table like the 2 MB one you mention is below.)
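    A minimal sketch of that precomputed visibility table, packed one bit per patch pair (names are illustrative): 4146 x 4146 bits is roughly 2.1 MB, which matches the 2 MB figure quoted above, and storing only one triangle of the symmetric matrix would halve it again.

        // Sketch only: packed patch-to-patch visibility, one bit per pair.
        #include <vector>
        #include <cstddef>

        class VisibilityMatrix
        {
        public:
            explicit VisibilityMatrix(std::size_t patchCount)
                : count(patchCount), bits((patchCount * patchCount + 7) / 8, 0) {}

            void set(std::size_t i, std::size_t j, bool visible)
            {
                const std::size_t idx = i * count + j;
                if (visible) bits[idx / 8] = static_cast<unsigned char>(bits[idx / 8] |  (1u << (idx % 8)));
                else         bits[idx / 8] = static_cast<unsigned char>(bits[idx / 8] & ~(1u << (idx % 8)));
            }

            bool visible(std::size_t i, std::size_t j) const
            {
                const std::size_t idx = i * count + j;
                return (bits[idx / 8] >> (idx % 8)) & 1u;
            }

        private:
            std::size_t count;
            std::vector<unsigned char> bits;
        };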
  9. Radiosity in Practice

    Quote: Original post by VladR
        Quote: Original post by David Hart
            If only I could store the form factors, it would speed everything up a lot, but with 240'400 patches, that's about 57.79 GB of information!!! Any ideas?
        Well, actually 240400 * 240400 * 4 = 215 GB (you forgot to multiply by sizeof(float)), so obviously this is not an option. But with those 9216 patches, it would still eat about 324 MB. With some compression (arithmetic coder, anyone?), this could be half the size or less, for the price of a few multiplications (and huge memory latency, of course).

    Depending on the geometry of the scene, the form factor matrix can often be very sparse. I found that scenes with lots of tight corridors between connecting rooms could have their form factor matrix represented quite simply in compressed STL vectors, with a matching index stored alongside each form factor (sketched below). Maintaining a similar persistent vector of relative visibility between renderable polygons also has benefits in rendering speed as well as storage requirements. Note that none of the above really helps with the simple cube scene used by the OP, as its polygon visibility matrix is fully populated. I found that a simple check for patches occupying the same polygon (and therefore invisible to each other) can improve calculation speed and storage requirements tremendously. I haven't read the previous threads, so this may be old news :)
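    A minimal sketch of that compressed-vector-plus-index storage (names are illustrative; colour channels and the area/reciprocity terms are left out for brevity): each shooter keeps only its non-zero form factors, paired with the index of the receiving patch, so invisible receivers simply never appear in the row.

        // Sketch only: sparse form factor rows, one per shooter patch.
        #include <vector>
        #include <cstdint>
        #include <cstddef>

        struct FormFactorEntry
        {
            std::uint32_t receiver;   // index of the receiving patch
            float         factor;     // F(shooter -> receiver); zero entries are never stored
        };

        using SparseFormFactors = std::vector<std::vector<FormFactorEntry>>;

        // Shooting one patch's unshot energy using only its sparse row.
        void shoot(std::size_t shooter,
                   const SparseFormFactors& ff,
                   const std::vector<float>& unshot,       // single channel for brevity
                   const std::vector<float>& reflectance,
                   std::vector<float>& radiosity)
        {
            for (const FormFactorEntry& e : ff[shooter])
                radiosity[e.receiver] += reflectance[e.receiver] * unshot[shooter] * e.factor;
        }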
  10. Help me find screenshot of an OLD game

    I vote for Death Rally http://www.dosgames.com/screens/deathrally.gif EDIT: <tabloid news> preview-button introduces craziness in post-editing shocker!</>
  11. Cultural development of games

    Film has been used as a medium for spreading philosophy and ideas for years. Film was intended as entertainment at first as well, just like games are now. Why shouldn't games do it too? Games may be primarily about entertainment, but they are now teaching us things we would never otherwise have learned. How many people knew how to correct a car's skid before they played GT3/4? How many non-CS players know why a flashbang is such a funky piece of kit? The Matrix dumped a philosophy on the masses (albeit a far-fetched and highly stylised one), and it had enough effect that kids have used it as a viable excuse to gun down their classmates - viable meaning that said excuse has been ratified as schizophrenia and paranoia by professionals. Games are just another method of communicating ideas, regardless of content. With enough exposure, someone will pick it up and make something of it. It's all media.
  12. Shared vertices

    I looked back through my code - I have one index array referencing parallel vertex and texcoord arrays etc. Apologies all :$ OK, even if you only have one index array, you still need to pass 24 or 36 indices into the rendering calls, which can be generated from the 2 separate vert / texcoord arrays (a sketch of that setup is below). It still saves memory when there's a lot of mesh data. <devil's advocate> If you want to be really fussy about it, the guy didn't even specify an API (although we can assume that the LaMothe reference implies DirectX). In software rendering, you can do any damn thing you like :)
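    A minimal sketch of the single-index-array setup, written against old-style OpenGL client arrays purely because GL functions come up later in this thread; it is not taken from anyone's actual code, and DirectX would look different.

        // Sketch only: one index array shared by parallel position/texcoord arrays.
        #include <GL/gl.h>

        void drawIndexedMesh(const float*          positions,   // x,y,z per entry
                             const float*          texcoords,   // u,v per entry, parallel to positions
                             const unsigned short* indices,     // e.g. 24 or 36 for the cube discussed
                             int                   indexCount)
        {
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);

            glVertexPointer(3, GL_FLOAT, 0, positions);    // the same index selects...
            glTexCoordPointer(2, GL_FLOAT, 0, texcoords);  // ...both a position and a texcoord

            glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indices);

            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            glDisableClientState(GL_VERTEX_ARRAY);
        }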
  13. Shared vertices

    Quote: Original post by timford
        Your code to accomplish this is hideous because it's nonexistent :) It is impossible to use different indices for different parts of the vertex data. Hopefully that will change one day though, because it'd be nice.

    Nope, my code is hideous. It's full of global variables, poor data structures and botched code - the hallmarks of rushed work intended solely for hobby programming. However, I can assure you that it exists, and is therefore possible. I invite you to read up on such exciting functions as glIndexPointer, glTexCoordPointer, etc.; those are the functions on which my non-existent code is built ;)
  14. Desktop wallpaper thread

    Quote: Original post by Crispy
        Heh - I love it how the Shetland Islands are larger than Ireland... which doesn't exist.

    You're not from round these parts, are you? :D The Shetlands aren't shown. Maybe you're thinking of Iceland, just off Greenland, or perhaps the weird place of no real interest to the north of Norway? I always thought the Civ world was too small. They should have allowed the maps to be bigger, then doubled or tripled it.
  15. Shared vertices

    Sorry, got muddled with my index stuff (reading old code at late hours does that!). What I meant was: have two data arrays:

    1) Vertices - x,y,z * 8 vertices = 24 floats
    2) Texture coords - u,v * up to 24 (4 per face using 6 quads) or 36 (3 per face using 12 triangles) = max 72 floats

    Then two index arrays of 24 or 36 indices (depending on whether you use quads or triangles). One array holds indices into the vertex array as defined above, the other holds indices into the texcoord array as defined above - the layout is sketched below.

    For a one-box scene, the overhead of the index arrays negates the memory saving gained by using them; it would be better to just brute-force the whole thing. Larger scenes with closed meshes quickly benefit, as an index typically consists of 2 shorts (one for the vertex, one for the texcoord) rather than 5 floats (x,y,z,u,v), saving 16 bytes per vertex. My code is hideous, you wouldn't want it. The OpenGL references and MSDN stuff explain it all pretty clearly though.
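    A minimal sketch of that layout, sized for a single cube built from 12 triangles (byte counts assume 4-byte floats and 2-byte indices; the four unique UVs assume every face reuses the same texture coordinates, which is my own assumption rather than something stated above):

        // Sketch only: two data arrays plus two parallel index arrays.
        #include <cstdint>

        struct CubeMesh
        {
            float         positions[8 * 3];   //  96 bytes: 8 shared corner positions
            float         texcoords[4 * 2];   //  32 bytes: 4 unique UVs reused on each face
            std::uint16_t posIndex[36];       //  72 bytes: 12 triangles * 3 corners, into positions
            std::uint16_t uvIndex [36];       //  72 bytes: parallel indices into texcoords
        };
        // Whether this beats the brute-force interleaved layout depends on how
        // much sharing the mesh actually has, as discussed above.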