Community Reputation

122 Neutral

About gatisk

  1. Hi all. Is it recommended to use a physics engine (PhysX) for player movement, jumping, and falling in platformer/FPS/third-person games? Or should I program player movement, jumping, and walking on slopes/stairs myself, and use the physics engine only when something blows up or the player falls dead as a ragdoll? If a physics engine is used for player movement, what are the principles of doing that? Should I apply forces to the player object and just run the simulation, or is there some other way? Thanks in advance. G.
  2. Thanks for the answer. So as I understand it, we render only the solid geometry's depth values into a separate buffer. After that we render everything normally, including the transparent objects, and create a blurred texture from the result. Finally we blend the sharp and blurred versions together using the depth information. But doesn't that mean semitransparent objects get blurred exactly as much as the solid objects behind them (because each pixel stores only a single depth)? I don't think that will look correct.
  3. Hello. I have a basic understanding of the real-time depth-of-field effect. I know I should render my geometry, output depth-blur information into the alpha channel, and finally blend between the sharp image and a downsampled version using those values. So far so good, but what do I do with semitransparent (alpha-blended) objects - smoke trails, particles, etc.? First, I am already using the alpha channel for blending, so I can't store depth-blur values there. Second, how do I blur these objects? They are semitransparent, which means background objects are visible through them. Any ideas?
  4. Per-object shadow maps.

    Thanks for the answer. But how do I render these (per-object) projections? My idea was: 0. Render everything without shadows. 1. Define a projection frustum for each shadow. 2. For each shadow, find the level geometry (and other objects) that fits inside that frustum and render that chunk of geometry with the shadow texture projected onto it. Is that correct?
  5. Hi. I was wondering about rendering multiple lights with multiple passes (when the number of visible lights varies). My first idea was: 1. Find the list of visible objects and level geometry (with octree and frustum culling). 2. Render everything with only the ambient light. 3. Find the list of point lights affecting the frustum (lights whose influence sphere intersects the frustum). 4. For each light, find the visible objects and geometry (from the list generated in step 1) and render them with additive blending, passing that light to the shader. Is this correct? Is there a better solution? How was this done in Doom 3 or Far Cry? I first considered rendering multiple lights in a single advanced shader pass, but then the light count would be hardcoded for each object, even when fewer (or more) lights actually affect it.
  6. Hello. Can someone explain some details about per-object shadow-map rendering? As I understand it, this technique is used by the Unreal 2 engine as well as the new HL2 engine (play the CS: Source beta), where all actors cast shadows onto the ground and other static geometry (no self-shadowing). As far as I understand this simple technique, first I need to render each object into a buffer texture (the model without textures - only a black shape), and then project these newly generated shadow textures onto the receiving geometry (terrain or whatever shadow receiver I need). Is this correct? If it is, how is that projection rendered on large level geometry, for example a big terrain? I hope you understand my question.
  7. What is the correct way to find the affecting lights for large surfaces and large objects when a light's (omni light's) influence sphere is smaller than the object's dimensions (terrain nodes, larger static meshes, larger indoor geometry such as walls)? If my shader supports 3 lights per object, how do I pick the 3 correct lights from all the others? Simply finding the lights nearest to the center of the object won't work correctly. :( What algorithms are used for such tasks in different game engines? Please help.
  8. Hi. Is it possible to use a variable number of passes for light rendering when the number of visible lights changes? For example, my shader supports a maximum of 3 lights, but if only 1 light is really affecting a surface at some moment, it would be a waste to calculate 2 more lights that definitely contribute nothing. Also, if I have a bigger object (a bus, for example :)), how do I find the lights affecting it? My first idea was to take the lights nearest to the object's center, but that would be incorrect for larger objects. Maybe use the distance to the object's bounding box instead (or test whether a light's influence sphere intersects the object's BBox) - wouldn't that be too expensive? And what about large surfaces in indoor levels and terrains?
  9. Choosing lights?

    Directional light is useful for sunlight and is infinite - it lights the whole scene, so it is wise to have only one directional light and use it for every object. Actually I have a related question: how do I choose lights for big geometry, for example terrain? My first idea was to search for the nearest lights for each quadtree node, but if there are many lights close together, only the ones nearest to the node's center get rendered, and there can also be lighting errors on the borders of adjacent nodes (each node uses a different set of lights, but the lights overlap, so there is a visible seam between nodes). Calculating lights per triangle would be too expensive, and quadtree rendering wouldn't be possible then. How is this done for the large indoor surfaces in Doom 3 or Far Cry?
  10. A few days ago I downloaded NVIDIA's FX Composer (a very nice app for FX creation, in my opinion). My question is about semantics. FX Composer uses the set of known semantics from the Microsoft specification (it automatically binds known semantics to color values, light positions, matrices, etc.), but the problem is that it uses the same semantic for different things - for example, the semantic "Diffuse" is used for the diffuse texture, the material color, and the light color (that's how it is supposed to be according to the MS specification). So how do I use such an .fx file in my program? How can I tell which variable is which when they share the same semantic (i.e., whether a variable is the light color or the material color)? I know I can use my own semantics, but then FX Composer gives warnings and automatic binding isn't possible, and I also wanted my app to stay compatible with the MS specification.