INVERSED

Members

  • Content count: 296

Community Reputation

172 Neutral

About INVERSED

  • Rank: Member
  1. Quote:Original post by Ingenu
     Quote:Original post by INVERSED
     So, first question, if a graphics card has x texture units, does that become an actual limit to the number of textures you can read from in the shader, or can a card swap textures around so that you can read from, say, 8 textures on a card that only has 4 units?
     No, but the number of texture units in shader mode is different than in fixed-function mode; there are at least as many available, and often many more (like twice as many).

     Ok, that's good to know at least. I wasn't seeing how one would efficiently pull off some of these effects with only four texture units available on some cards.

     Quote:Original post by Ingenu
     -additive shader, in which you add code snippets relevant to the parameters of the geometry (quantity and types of lights...). I favor it, for it's way easier to maintain and extend, but it requires strong design to ensure that everything goes smoothly.

     Ok, so this is the part I was curious about. I was wondering how to put it into code efficiently. The first question is, when compiling a shader for any given object, do you just compile in the maximum number of supported lights and somehow turn off the lights not in use (say, if you're allowed to do 4 lights on an object, but in the current scene only two affect the object)? Further, what's the best way to compile all these code fragments? At the moment I'm using OpenGL and GLSL. Say I have a situation where I have a code fragment that can render lighting, and a code fragment that can texture a model. The lighting fragment will need a unique texture for the shadow map, plus the normal map from the model. The texturing fragment just needs the model's diffuse texture. Say you wanted to render a scene with three lights and the diffuse texture. Is there any system out there that will let me link up these fragments correctly, shuffle around the texture units as needed, and spit out the correct GLSL code? Do I have to write this from scratch? Am I going about this the wrong way? Does this scenario make sense? (A rough sketch of the kind of snippet-gluing I have in mind follows below.)
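     To make the question concrete, here is a minimal, untested sketch of how I imagine gluing GLSL snippets together on the CPU side. All the names (buildFragmentShader, kLightingSnippet, and so on) are made up for illustration, and the light count is baked in with a #define:

         // Concatenate GLSL snippets and bake the light count in with a #define.
         // Error checking and the actual glCreateShader/glCompileShader calls
         // are omitted; this only builds the source string.
         #include <string>
         #include <sstream>

         static const char* kTexturingSnippet =
             "vec3 sampleDiffuse(vec2 uv) { return texture2D(diffuseMap, uv).rgb; }\n";

         static const char* kLightingSnippet =
             "vec3 applyLights(vec3 albedo, vec3 normal) {\n"
             "    vec3 result = vec3(0.0);\n"
             "    for (int i = 0; i < NUM_LIGHTS; ++i)\n"
             "        result += albedo * max(dot(normal, lightDir[i]), 0.0) * lightColor[i];\n"
             "    return result;\n"
             "}\n";

         std::string buildFragmentShader(int numLights)
         {
             std::ostringstream src;
             src << "#define NUM_LIGHTS " << numLights << "\n"
                 << "uniform sampler2D diffuseMap;\n"
                 << "uniform vec3 lightDir[NUM_LIGHTS];\n"
                 << "uniform vec3 lightColor[NUM_LIGHTS];\n"
                 << "varying vec3 vNormal;\n"
                 << "varying vec2 vTexCoord;\n"
                 << kTexturingSnippet
                 << kLightingSnippet
                 << "void main() {\n"
                 << "    gl_FragColor = vec4(applyLights(sampleDiffuse(vTexCoord), normalize(vNormal)), 1.0);\n"
                 << "}\n";
             return src.str();
         }

     Each snippet would then also have to report which samplers it needs, so the caller can assign texture units before compiling.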
  2. Scenegraphs and multiple passes

     My understanding of scene graphs has always been similar to what Ng was saying: scene graphs are spatial/orientation graphs, not render state graphs. So while transform and even camera orientation might be in there, material data would not. In general, the graph is ignorant of how the items actually get rendered. Personally, I use the graph to figure out what world elements are visible, and then those elements generate "renderables" which are placed in a queue. Thus, the renderable encapsulates all render state changes and multipasses and whatnot (see the sketch below). Hope that helps.
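     A minimal sketch of what I mean by a renderable and a queue; the field names and sort-key layout are just illustrative, not a real implementation:

         // The scene graph only decides what is visible; each visible node
         // emits one of these into a queue, and the renderer sorts and draws
         // the queue, which is where multipass and state changes live.
         #include <vector>
         #include <algorithm>

         struct Renderable
         {
             unsigned int sortKey;    // e.g. material id in the high bits, depth in the low bits
             const void*  mesh;       // vertex/index buffers
             const void*  material;   // shaders, textures, render state
             float        worldMatrix[16];   // transform captured from the graph
         };

         static bool bySortKey(const Renderable& a, const Renderable& b)
         {
             return a.sortKey < b.sortKey;
         }

         struct RenderQueue
         {
             std::vector<Renderable> items;

             void submit(const Renderable& r) { items.push_back(r); }

             void flush()
             {
                 // Sort so state changes are minimized, then draw and clear.
                 std::sort(items.begin(), items.end(), bySortKey);
                 // for each item: bind material, set transform, draw mesh
                 items.clear();
             }
         };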
  3. So... I'm one part confused, and one part looking for suggestions. I have a basic rendering engine that can do all kinds of wonderful multipass material/effect-shader/offscreen-rendering type things, but it's not very robust yet, and I'm not sure what needs I should expect to meet with it.

     So, first question: if a graphics card has x texture units, does that become an actual limit to the number of textures you can read from in the shader, or can a card swap textures around so that you can read from, say, 8 textures on a card that only has 4 units?

     Second question: in today's typical setup, where a renderer may have to render a model multiple times (say once for each light/shadow map, plus a number of fancy extra effects), what is the best way to set things up so that you don't end up writing special-case shaders for everything, and so that you can take advantage of multiple texture units and, say, do all of your lighting in one pass and all of your effects in another pass?

     I hope the question isn't too general, but I'm not completely sure how much my system has to be prepared to support. I've just read posts where people mention rendering systems processing 20 lights on a model, or doing x number of shadowed lights, so I'm wondering what's a realistic target to shoot for. Let's take kind of a general-case situation, though: there is a model, with three lights that are done with normal mapping and shadow mapping, maybe some other effect like parallax mapping, and, just for fun, we'll throw in one more pass for a special effect, and possibly a simple vertex lighting pass for 5 or 6 less important lights. Does anyone have a good way to manage all these passes so that you take full advantage of the hardware's capabilities without having to write 100 different shaders? I've seen techniques mentioned before that generate shaders on the fly; has anyone tried out something like that?
  4. tangentspace view vector?

     Actually, it's spelled voilà... it's French. Not to be confused with viola, which is like a large violin. Neither is to be confused with the topic, which I seem to be off.
  5. Shadowmaps for a point light

     Hey, as an aside to what you asked earlier, I believe you can fake a point light with six spotlights; that's what they're doing in the Oblivion engine. Has anyone else read that article in the GPU Gems 2 book? Also, I didn't see it mentioned here, but can't you also pack your six cube faces into a 2D texture? That way you get the full speed of a depth compare without having to use a cubemap, though I think you end up wasting some texture memory. (A rough sketch of the atlas lookup is below.)
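     A rough, untested sketch of looking the faces up when they are packed into a 3x2 region of one 2D texture: pick the dominant axis of the fragment-to-light direction, project onto that face, then offset into the atlas. The face layout and per-face orientation here are arbitrary; they just have to match however the faces were rendered.

         #include <cmath>

         // dir = (x, y, z), not necessarily normalized.
         void cubeDirToAtlasUV(float x, float y, float z, float& u, float& v)
         {
             float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
             int   face;       // 0..5, laid out as two rows of three faces
             float fu, fv;     // face-local coordinates in [-1, 1]

             if (ax >= ay && ax >= az)      { face = (x > 0.0f) ? 0 : 1; fu = z / ax; fv = y / ax; }
             else if (ay >= ax && ay >= az) { face = (y > 0.0f) ? 2 : 3; fu = x / ay; fv = z / ay; }
             else                           { face = (z > 0.0f) ? 4 : 5; fu = x / az; fv = y / az; }

             // Remap [-1,1] to [0,1] within the face, then place the face in the 3x2 grid.
             float s = fu * 0.5f + 0.5f;
             float t = fv * 0.5f + 0.5f;
             u = (face % 3 + s) / 3.0f;
             v = (face / 3 + t) / 2.0f;
         }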
  6. Do I need shaders?

     Hmmm... your question is a little unclear, because there are a lot of things you can do with shaders that you cannot do with 3D animation (and things that have nothing to do with 3D animation). Since you are referring to animation, I will assume you mean vertex shaders. In that case, yes, generally anything you can do in a VS you can do on the CPU. The graphics card, however, is specialized to do what it does well, and thus will probably do it much, much faster (hence moving animation into the vertex shader). As for pixel shaders, there's no CPU equivalent (short of reading back the frame and processing it on the CPU very, very slowly), and they allow us to do wonderful things like reflection, refraction, bloom filters, glows, and all the other pretty eye candy that everyone craves so much. Hope that clarifies things.
  7. What is filtering?

     From Wikipedia:
     http://en.wikipedia.org/wiki/Bilinear_filtering
     http://en.wikipedia.org/wiki/Trilinear_filtering
     So it will apply any time the texture is shrunk or enlarged, and since 2D is done through the 3D pipeline, it too gets filtered. At one point in life 2D and 3D were separate; nowadays they are not. In OpenGL it's just a flag when creating the texture (see the example below); I would imagine it is similar in D3D. I'm not sure about anisotropic filtering, though; doesn't that require an extension?
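     For reference, a minimal example of the OpenGL flags in question; the anisotropic part assumes the GL_EXT_texture_filter_anisotropic extension is present, so check for it first:

         #include <GL/gl.h>
         #include <GL/glext.h>

         void setFiltering(GLuint texture)
         {
             glBindTexture(GL_TEXTURE_2D, texture);

             // Bilinear for magnification, trilinear (mipmapped) for minification.
             glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
             glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

             // Anisotropic filtering, only if the extension is available.
             GLfloat maxAniso = 1.0f;
             glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
             glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);
         }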
  8. Don't you have to keep more than one instance of the geometry anyway? You have to have a copy of the verts in the bind pose, and then transform from the bind pose to the new bone position. Mayhaps it's possible to interpolate the difference in transformation from one bone pose to another, but I've never seen that. Further, it seems like a good idea to have a system for caching the bind pose anyway, and sharing it among multiple instances of the same model (general instancing, not hardware instancing or whatever). For example, if you have three monkey monsters (because everyone loves monkeys), and they were not animated, you wouldn't want to load the data three times. Ok, so I guess it would have made for a better example if I picked something non-animated, like a tree, but you get the point. If it's an animated model, you use one bind pose for all instances, and each instance keeps a local copy representing its current state (see the sketch below). Seems efficient enough to me. As for quats versus mats, doesn't the average model only have 20-30ish bones, if that? It seems like the savings in computation from converting quats to mats before transforming the verts would be worth the extra space. That said, I use quats myself because of SLERP; I thought that was the primary reason for using quats.
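     A rough sketch of what I mean by sharing the bind pose between instances; the type names are just illustrative:

         #include <vector>

         struct Bone { float rotation[4]; float translation[3]; };  // quat + position

         // Loaded once and shared by every instance of the model.
         struct SkinnedMeshData
         {
             std::vector<float> bindPoseVertices;   // vertex positions in the bind pose
             std::vector<Bone>  bindPose;           // one entry per bone
             // indices, skin weights, textures, ...
         };

         // One per monkey on screen: only the current pose is duplicated.
         struct SkinnedMeshInstance
         {
             const SkinnedMeshData* shared;
             std::vector<Bone>      currentPose;    // updated by the animation system
         };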
  9. grass rendering

     I used the method described in GPU Gems 2 (which is slightly different from the one in GPU Gems 1). In that method, you just draw a bunch of screen-facing transparent billboards. You then multiply the alpha value of the pixel by a value pulled from a grayscale texture. This creates a dissolve sort of effect so that you do not have to sort the blades by distance (see the shader sketch below). There is also a suggestion for lighting the blades using the normal of the ground below them, which I have not implemented yet. Also, one can put more than one grass image on the grass texture (like a texture atlas) and use that to add some variety to the blades. Grass seems to be pretty easy, and when it comes down to it, like most things, it's about the quality of the texture. Here are the results of my implementation. I have not yet tied it into my quadtree, so at the moment I'm just throwing a buffer full of 200,000 quads at the video card. I do my screen aligning in the vertex shader.
     http://img.photobucket.com/albums/v453/SeraphicArtist/RadiantGrass02.jpg
     http://img.photobucket.com/albums/v453/SeraphicArtist/RadiantGrass01.jpg
     Hope that helps.
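     A rough sketch of the dissolve-alpha part, written as a GLSL fragment shader in a C string (untested; grassMap, noiseMap, and the varyings are made-up names):

         // Multiplying the grass alpha by a grayscale "dissolve" value gives a
         // dithered fade-out so the blades do not need to be depth sorted.
         static const char* kGrassFragmentShader =
             "uniform sampler2D grassMap;   // grass blade image (or atlas)\n"
             "uniform sampler2D noiseMap;   // grayscale dissolve pattern\n"
             "varying vec2 vTexCoord;\n"
             "varying vec2 vNoiseCoord;\n"
             "void main() {\n"
             "    vec4  grass    = texture2D(grassMap, vTexCoord);\n"
             "    float dissolve = texture2D(noiseMap, vNoiseCoord).r;\n"
             "    gl_FragColor = vec4(grass.rgb, grass.a * dissolve);\n"
             "    if (gl_FragColor.a < 0.05) discard;   // skip nearly invisible pixels\n"
             "}\n";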
  10. Nearest lights in outdoor scene.

     I'm not finished with the lighting system in my engine, so this is highly theoretical. It seems like there are a lot of factors that would affect which solution is best. First, how many objects are we talking about, and how many lights? Second, are you applying the light list to the entire frustum, to each quad/oct node, or are you building a separate light list for each object? Personally, I just keep my lights in a list, and for each object (or quad, or frustum) I build a light list based on certain heuristics (chromaticity, intensity, distance); a sketch of that is below. I like this because each object then gets lit by whatever lights affect it most, and if you don't have that many lights in your scene, it's not that bad. I suppose the other way to do it would be to start with all lights in the same node as the object, and then work your way out until you fill your list. This would probably require that any given node have pointers to its siblings and parent. The thing I don't like so much about that is that lighting is not necessarily a spatial problem. For instance, say your lighting system only allows one or two lights per object. Your character has two or three glowing fireflies around him. In the near distance there is a bright red magical explosion, and in the sky there is the moon. A spatial system that stops after finding the first few lights would only light the character with the fireflies, where the explosion and the moon should have more weight. That's why I prefer to traverse the entire list. What I think would make for an interesting system, however, would be some kind of occlusion query, so that you don't bother turning on a light if it's on the other side of a hill or covered by forest. Has anyone tried anything like this?
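     A sketch of the heuristic light selection I described; the scoring function and weights are made up for illustration (it only weights intensity over squared distance, and leaves chromaticity out):

         #include <vector>
         #include <algorithm>
         #include <utility>

         struct Light { float pos[3]; float color[3]; float intensity; };

         static float scoreLight(const Light& l, const float objPos[3])
         {
             float dx = l.pos[0] - objPos[0];
             float dy = l.pos[1] - objPos[1];
             float dz = l.pos[2] - objPos[2];
             float distSq = dx * dx + dy * dy + dz * dz + 1.0f;  // +1 avoids divide by zero
             return l.intensity / distSq;   // brighter and closer lights score higher
         }

         static bool betterLight(const std::pair<float, const Light*>& a,
                                 const std::pair<float, const Light*>& b)
         {
             return a.first > b.first;      // highest score first
         }

         std::vector<const Light*> pickLights(const std::vector<Light>& all,
                                              const float objPos[3], size_t maxLights)
         {
             std::vector<std::pair<float, const Light*> > scored;
             for (size_t i = 0; i < all.size(); ++i)
                 scored.push_back(std::make_pair(scoreLight(all[i], objPos), &all[i]));

             std::sort(scored.begin(), scored.end(), betterLight);

             std::vector<const Light*> result;
             for (size_t i = 0; i < scored.size() && i < maxLights; ++i)
                 result.push_back(scored[i].second);
             return result;
         }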
  11. I haven't read through all the replies here, so excuse me if I repeat any info, but a while ago I implemented a generic vertex and index buffer for my engine. The way they work is as follows. First you determine a format, similar to how D3D does it, where you choose what data is represented in the vertex via bit flags. For instance, that would look something like: #define MYFORMAT (VF_POSITION | VF_COLOR | VF_NORMAL) Then, when you create the buffer, you determine its size as well. The vertex buffer is extended with an OpenGL or D3D implementation, so upon creation the buffer will attempt to allocate video memory based on another flag you choose. Based on the components you require, the correct amount of memory is allocated in video and/or system memory (more on that later). The buffer can be allocated in one of three ways: dynamic, static, or local. Local allocation allocates in system memory, static is in video memory, and dynamic allocates in both video and local memory, so the memory can be accessed quickly by the CPU if you plan to read or change it frequently.

     So, that takes care of the allocation of the buffer; now for the access. First off, all buffer access happens between a lock and unlock call; this is where the memory pointer is fetched from, or copied to the video card if needed. My buffer supports two access methods. The first is a GetSafePointer method, where the user passes in the vertex format and the number of verts desired, and the app only sends back a pointer if what you're asking for is within the bounds of the buffer. This is the better, faster way of doing things. The other is a fill method, something like fill( VF_COMPONENT, const CVector3 &data, unsigned int index ), which you can imagine is slower, but is neat because the app doesn't have to know what's in the buffer to fill it. If you try to fill a component that's not in the buffer, it ignores it. The final part of the system is deciding what components go into any given buffer. This is determined by the material being applied to the geometry: if, for instance, lighting isn't applied to the model, then normals will not be present in the buffer. A rough sketch of the interface is below. Hope that helps some.
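     A rough, declaration-only sketch of the interface described above (the OpenGL/D3D allocation and lock/unlock plumbing are omitted, and all names are illustrative):

         #include <cstddef>

         enum VertexFlags
         {
             VF_POSITION = 1 << 0,   // 3 floats
             VF_NORMAL   = 1 << 1,   // 3 floats
             VF_COLOR    = 1 << 2,   // 4 bytes
             VF_TEXCOORD = 1 << 3    // 2 floats
         };

         enum BufferUsage { BUF_STATIC, BUF_DYNAMIC, BUF_LOCAL };

         class VertexBuffer
         {
         public:
             VertexBuffer(unsigned int format, size_t vertexCount, BufferUsage usage);

             // All access happens between Lock and Unlock; Unlock uploads the
             // data to the card for static/dynamic buffers.
             bool  Lock();
             void  Unlock();

             // Fast path: returns NULL unless the requested format and count
             // fit within the locked buffer.
             void* GetSafePointer(unsigned int format, size_t vertexCount);

             // Slow but format-agnostic path: silently ignored if the
             // component is absent from the buffer.
             void  Fill(unsigned int component, const float* data, size_t index);

         private:
             unsigned int m_format;
             size_t       m_vertexCount;
             BufferUsage  m_usage;
         };

         // Usage, mirroring the example in the post:
         //   #define MYFORMAT (VF_POSITION | VF_COLOR | VF_NORMAL)
         //   VertexBuffer vb(MYFORMAT, 1024, BUF_STATIC);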
  12. Speeding up bone animation.

     Quote:Original post by skow
     This sadly means up to 3 sine and 1 arccos operations are being done per vertex per bone linked to that vertex.
     That seems a little suspect. If you traverse the bone hierarchy once, do all your slerping and whatnot there, and then store your results, it should be more efficient (i.e., 3 sines and 1 arccos per bone only). That's how I do it in my code; a sketch is below. Also, note that if you move the transforming into a shader and you do any kind of multipass shading, you will end up retransforming those vertices each time. So, if you have a depth fill pass, followed by a material pass, a lighting pass, and possibly a couple of renders for shadow maps, I start to wonder if the cost of redoing those transforms becomes prohibitive. I haven't done the research to back that up, though; can anyone else comment on that?
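     A sketch of doing the expensive quaternion math once per bone per frame instead of once per vertex; slerp and quatToMatrix are assumed to exist elsewhere, and the names are illustrative:

         #include <vector>

         struct Quat { float x, y, z, w; };
         struct Mat4 { float m[16]; };

         Quat slerp(const Quat& a, const Quat& b, float t);   // the 3 sines + 1 arccos live here
         Mat4 quatToMatrix(const Quat& q);                    // no trig at all

         void updatePose(const std::vector<Quat>& frameA,
                         const std::vector<Quat>& frameB,
                         float t,
                         std::vector<Mat4>& outBoneMatrices)
         {
             outBoneMatrices.resize(frameA.size());
             for (size_t bone = 0; bone < frameA.size(); ++bone)
             {
                 // The trig cost is paid here, once per bone...
                 Quat q = slerp(frameA[bone], frameB[bone], t);
                 outBoneMatrices[bone] = quatToMatrix(q);
             }
             // ...and the per-vertex work is reduced to weighted matrix
             // transforms, which every render pass can reuse.
         }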
  13. Heh, good question. I'm doing all of my dev on a 6800GT, and I would like to stay at or above 60. I figure F.E.A.R. looks good on my machine, and if my demo stays above 60 most of the time, it should too. I wouldn't mind dipping a bit if the quality trade-off was worth it. Here's how I see it: I'm pretty much targeting "next-gen" type hardware, because I'm either going to use this as a demo for prospective employers, or, if we ever did release it as a product, it would take a while before we finished anyway.
  14. how does std::list work?

     In that example, a copy. If you want pointers you would have to do something like:
         cObject myObject;
         std::list<cObject*> myList;      // <- notice the *
         myList.push_back(&myObject);     // <- notice the &
     That's a bad example, because saving pointers to things in local scope is dangerous, but you get the point, right? I would imagine it's for that very reason that the STL stores a copy instead of a reference. (A fuller example is below.)
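     A small self-contained example of the copy-versus-pointer difference (cObject is just a made-up struct holding an int):

         #include <list>
         #include <iostream>

         struct cObject { int value; };

         int main()
         {
             cObject myObject;
             myObject.value = 1;

             std::list<cObject>  byValue;     // stores copies
             std::list<cObject*> byPointer;   // stores pointers to the original

             byValue.push_back(myObject);
             byPointer.push_back(&myObject);

             myObject.value = 2;              // change the original afterwards

             std::cout << byValue.front().value    << "\n";  // prints 1 (the stored copy)
             std::cout << byPointer.front()->value << "\n";  // prints 2 (follows the pointer)
             return 0;
         }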
  15. So, I have been working on a demo recently, and I have been thinking a lot about how best to light it. The environment I'm going for is an open outdoor environment. I have a quadtree-based terrain that I want to fill with lots of trees, grass, and other thingies. I want the demo to be able to do day/night transitions, so I want most of my lighting to be dynamic. So, here's my theory on light management/rendering techniques.

     Step 1: Identify the most important lights in the scene, namely the sun, and perhaps one or two other lights close to the camera. These lights will be important enough to cast shadows and whatnot.

     Step 2: Identify the most important lights for the individual objects. So, if a group of trees has a campfire near it, those trees will put that light in their list.

     Step 3: Use spherical harmonics for global illumination from the rest of the lights contributing to the scene (a rough sketch of this accumulation is below). ShaderX3 had an article on spherical harmonics to reduce the number of calculations needed to render a scene. While the technique isn't particularly accurate, it seems like it would give a nice GI appearance to the scene.

     So, has anyone else tried the SH approach I'm referring to, and did it work well for you? Also, I'm wondering if I'm missing anything. I'd like to do everything in HDR, but should I attempt to bring in any other fun technologies like real-time ambient occlusion or some kind of dynamic specular cube mapping? Any other thoughts in general on how to light a scene like this? Thanks much for the input.
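     A rough sketch of what I mean by step 3: treat each minor light as roughly directional from the object's point of view and accumulate it into four linear SH coefficients per color channel. The cosine-lobe convolution and other normalization details are glossed over; this only shows the accumulation structure, and the names are illustrative:

         struct SH4 { float r[4], g[4], b[4]; };   // band 0 + band 1, per channel

         // dir must be normalized and point from the object toward the light.
         void addLightToSH(SH4& sh, const float dir[3], const float color[3], float intensity)
         {
             // The first four real spherical harmonic basis functions evaluated
             // in the light's direction.
             const float basis[4] = {
                 0.282095f,              // Y(0, 0)
                 0.488603f * dir[1],     // Y(1,-1) ~ y
                 0.488603f * dir[2],     // Y(1, 0) ~ z
                 0.488603f * dir[0]      // Y(1, 1) ~ x
             };
             for (int i = 0; i < 4; ++i)
             {
                 sh.r[i] += color[0] * intensity * basis[i];
                 sh.g[i] += color[1] * intensity * basis[i];
                 sh.b[i] += color[2] * intensity * basis[i];
             }
         }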