Everything posted by shultays

  1. Maybe you can do this in shaders. How are you choosing which polygons will be semi-transparent? Is it the ones closer to the camera, or are they picked at random?
  2. Is there a good book or article about game physics? I am mostly interested in how collisions between two box-shaped objects are detected and, more importantly, resolved.
  3. shultays

    Java pointer or similar

    From the Sun website: "Reference data type parameters, such as objects, are also passed into methods by value. This means that when the method returns, the passed-in reference still references the same object as before. However, the values of the object's fields can be changed in the method, if they have the proper access level."
  4. shultays

    Java pointer or similar

    Java always passes by value: it sends a copy of the reference, which refers to the actual object. It is like this in C++:

        void test(Object *b) { ... }
        Object *a = new Object();
        test(a);

    In this case a copy of the pointer a is passed to the test function. It is the same in Java, except you don't put * before references; again a copy of the pointer (reference) is passed to the function, and if test assigns something to b it does not affect a.
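    To make the C++ analogy concrete, here is a minimal sketch (the names are only illustrative) showing that reassigning the copied pointer inside the function does not affect the caller's pointer, while mutating the pointed-to object does, which mirrors how Java references behave:

        #include <iostream>

        struct Object { int field = 0; };

        // 'b' is a copy of the caller's pointer.
        void test(Object *b) {
            b->field = 42;    // visible to the caller: both pointers refer to the same object
            b = new Object(); // only rebinds the local copy; the caller's pointer is unchanged
            b->field = 7;     // affects the new object only (leaked here for brevity)
        }

        int main() {
            Object *a = new Object();
            test(a);
            std::cout << a->field << "\n";  // prints 42, not 7
            delete a;
        }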
  5. shultays

    Deferred shading

    For directional lights you render a full-screen quad. For each pixel the pixel shader calculates the angle between the pixel's normal and the light direction and uses that value to compute the final color of the pixel.

    For point lights you can also use full-screen quads, but it is less efficient. A better approach is to use a light volume (for example a sphere centered at the light position with a radius equal to the light range). This is similar to shadow volumes, if you know about them: it basically finds which part of the screen is illuminated by the light. There are three cases when rendering the light volume: if the front faces of the volume pass the depth test while the back faces fail, that pixel is affected by the light; if both sides fail the depth test, the light volume is completely hidden behind an object and does not illuminate anything; and if both sides pass the depth test, the light is floating in the air in front of everything and again does not affect the scene.

    You need stencil buffers for this. First render the back faces of the light volume and set the stencil buffer to some specific value wherever the depth test fails (make sure depth writes are disabled); nothing is written to the color buffer in this pass. In the second pass render the front faces of the light volume, but this time enable the stencil test and draw only where the stencil buffer equals the value set in the previous pass and the depth test passes. In this second pass you render the light's contribution to the color buffer.
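    A minimal Direct3D 9 sketch of those two stencil passes, assuming you have an IDirect3DDevice9 pointer, a stencil reference value of 0x01, and a placeholder helper that draws the light's bounding mesh (none of these come from the post above):

        #include <d3d9.h>

        // Two-pass stencil masking for a point-light volume, roughly as described above.
        // drawLightVolume() stands in for rendering the light's bounding mesh.
        void renderLightWithStencilMask(IDirect3DDevice9 *device, void (*drawLightVolume)())
        {
            // Pass 1: back faces only; mark the stencil wherever the depth test fails.
            device->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);
            device->SetRenderState(D3DRS_COLORWRITEENABLE, 0);                 // no color output yet
            device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);                // keep back faces
            device->SetRenderState(D3DRS_STENCILENABLE, TRUE);
            device->SetRenderState(D3DRS_STENCILREF, 0x01);
            device->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_ALWAYS);
            device->SetRenderState(D3DRS_STENCILZFAIL, D3DSTENCILOP_REPLACE);  // depth fail -> write 0x01
            device->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_KEEP);
            drawLightVolume();

            // Pass 2: front faces only; shade where stencil == 0x01 and the depth test passes.
            device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);               // keep front faces
            device->SetRenderState(D3DRS_COLORWRITEENABLE, 0x0F);              // color writes back on
            device->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_EQUAL);
            device->SetRenderState(D3DRS_STENCILZFAIL, D3DSTENCILOP_KEEP);
            drawLightVolume();                                                  // the lighting shader is bound here

            device->SetRenderState(D3DRS_STENCILENABLE, FALSE);
            device->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
        }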
  6. I read somewhere that the standard pipeline is itself implemented as a shader program on most modern GPUs, so using it does not bring any performance gain (provided, of course, that you write your shaders properly).
  7. shultays

    Why in C++?

    I always imagine something like that when I see a thread like this. I am not implying you are a troll, it's just that the idea itself is funny :D
  8. shultays

    Looking for artist name ideas

    vuvuzela? :P
  9. shultays

    Z-Fighting

    You can render the second triangle using only the stencil test: render the object and set the stencil bits to 1 while rendering; afterwards render the picked triangle with the depth test disabled and the stencil test enabled, so it is drawn only where the stencil buffer is 1. Not sure if this is a good idea though.
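    A rough Direct3D 9 sketch of what I mean, assuming a device pointer and placeholder draw helpers (these names are not from the thread):

        #include <d3d9.h>

        // Highlight a picked triangle without z-fighting: mark the object's pixels in the
        // stencil buffer, then draw the triangle on top with the depth test disabled.
        void drawHighlightedTriangle(IDirect3DDevice9 *device,
                                     void (*drawObject)(), void (*drawPickedTriangle)())
        {
            device->SetRenderState(D3DRS_STENCILENABLE, TRUE);
            device->SetRenderState(D3DRS_STENCILREF, 1);
            device->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_ALWAYS);
            device->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_REPLACE);  // write 1 where the object is drawn
            drawObject();

            device->SetRenderState(D3DRS_ZENABLE, FALSE);                     // ignore depth for the overlay
            device->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_EQUAL);          // draw only where stencil == 1
            device->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_KEEP);
            drawPickedTriangle();

            device->SetRenderState(D3DRS_ZENABLE, TRUE);
            device->SetRenderState(D3DRS_STENCILENABLE, FALSE);
        }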
  10. shultays

    Tiled Terrain Rendering

    This is probably a bad idea, but I think it would work; I am not that experienced with shaders and their performance. Let's say you have 4 textures; give each of them an id: 0, 1, 2, 3. Each vertex needs two components for storing texture ids and one for the weight distribution between those two textures. Within a triangle, every vertex must have the same texture ids, but the weights can differ. You pass these to the pixel shader; since the ids are identical across the triangle they are effectively not interpolated, but the weight is. In the pixel shader you do something like this:

        if (id1 < 0.5)      color1 = fetch from texture1;
        else if (id1 < 1.5) color1 = fetch from texture2;
        else if (id1 < 2.5) color1 = fetch from texture3;
        else                color1 = fetch from texture4;

    The same goes for id2 and color2. Once you have done a fetch for both textures, you interpolate the two colors: return lerp(color1, color2, weight). Again, I never tried this, and I am not sure how well it will work or whether it will work at all.
  11. If you are using AS2 you can use something like _root["name"] if the movie clip is at the root. You can address other containers like _root.container["name"], or simply this["name"] if you are working on the same layer.
  12. shultays

    Thank you guys!

    I made a game for my graduation project and my grade is AA :D. It would have been very hard without all your help (your answers and the articles on this site). Seriously, this site is a great resource for people interested in game programming. Here is a video from my game; the fps was a bit low because of the recording and I had to turn off shadow volumes. It should still be considered in the alpha stage I guess, there is still much to do. I hope I won't lose interest soon and will finish it.
  13. shultays

    stencil buffer.. i dont get it

    You are right about the first example: render the cutting shape into the stencil buffer, setting every covered value to something specific while rendering. After that, render the second shape with the stencil test set to not-equal, so it will not be drawn where the cutting shape is. Finding outlines would be much harder, and I am not sure it is related to stencil buffers; you can search for cel-shaded animation in Ogre. Here is the Wikipedia article: http://en.wikipedia.org/wiki/Cel-shaded_animation. If Ogre supports rendering objects as wireframes with thick lines, you can easily implement outlines using the first method.
  14. shultays

    stencil buffer.. i dont get it

    1 - The stencil mask determines which bits of the stencil buffer are read during the stencil test. It is basically an AND operation applied before the buffer value is compared: (stencil value) & (stencil mask). For example, if you only need the right-most bit of the stencil buffer and the other bits are used for other tasks, you can use a stencil mask of 0x01 so the other bits do not interfere with your stencil test: if the right-most bit is 1 the result is 1, otherwise it is 0, and the other bits have no influence.

    2 - The stencil write mask is the same idea, but applied while writing to the stencil buffer. You specify which bits are writable with the write mask, and all other bits are preserved. For example, if the previous value in the buffer is 0xFF and you want to clear the right-most bit, you can set the write mask to 0x01 and write the value 0x00; only the right-most bit is altered and the buffer value becomes 0xFE.

    3 - You can alter the value in the buffer while rendering an object by using the stencil operations; for example, the StencilPass operation is executed when the stencil test passes. Say you want to render a mirror in the scene. First render the scene normally, then render the mirror with the stencil function set to always, the stencil pass operation set to replace, and the stencil reference set to 0x01. The stencil test always passes, so rendering the mirror replaces the stencil buffer value with 0x01 while everything outside the mirror remains 0x00. After that, render the mirrored scene with stencil test = equal and stencil ref = 0x01, and change the stencil operation back to keep (or set the stencil write mask to 0x00) to avoid altering the stencil buffer. This time the scene is rendered only where the stencil buffer is 0x01, which is exactly the mirror's area (see the sketch after point 5).

    4 - It compares the reference value to the buffer value; with the greater function, the test passes when reference > buffer.

    5 - There are three events for which you can specify a stencil operation: stencil pass, stencil fail, and stencil z-fail. The stencil pass operation is triggered when both the stencil test and the z test pass; the stencil fail operation is triggered when the stencil test fails; and the stencil z-fail operation is triggered when the stencil test passes but the z test fails. The stencil buffer is not a feature of Ogre: it is a feature of the graphics hardware, and graphics libraries simply set render states to use it.
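    Here is a minimal Direct3D 9 sketch of the mirror example from point 3, assuming a device pointer, a stencil buffer cleared to 0x00 beforehand, and placeholder drawScene / drawMirror / drawMirroredScene helpers (all of these names are mine, not from the post):

        #include <d3d9.h>

        void renderSceneWithMirror(IDirect3DDevice9 *device,
                                   void (*drawScene)(), void (*drawMirror)(), void (*drawMirroredScene)())
        {
            drawScene();                                                      // normal scene first

            // Mark the mirror's pixels with 0x01 in the stencil buffer.
            device->SetRenderState(D3DRS_STENCILENABLE, TRUE);
            device->SetRenderState(D3DRS_STENCILREF, 0x01);
            device->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_ALWAYS);         // test always passes
            device->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_REPLACE);  // mirror pixels get 0x01
            drawMirror();

            // Draw the mirrored scene only where the stencil buffer equals 0x01.
            device->SetRenderState(D3DRS_STENCILFUNC, D3DCMP_EQUAL);
            device->SetRenderState(D3DRS_STENCILPASS, D3DSTENCILOP_KEEP);     // stop altering the buffer
            drawMirroredScene();

            device->SetRenderState(D3DRS_STENCILENABLE, FALSE);
        }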
  15. shultays

    Particle System Help

    Quote: Original post by X Abstract X
    "I just assumed that 2 trig function calls per particle would be really slow in a case where something like 10 000 particles are spawned all at the same time. Maybe I'm underestimating the power of a CPU though, I really don't know, that's why I'm asking."

    Quote: Original post by dmatter
    "The way I handled this before was, when adding a particle, to just repeatedly generate positions for it in a rectangular region until the squared-distance between the particle and the center is less than or equal to the squared-radius of the sphere. Generally this requires only one or two attempts."

    That sounds really interesting, I'm probably going to try it.

    I don't think trig functions would be that slow. I took a numerical analysis course 1-2 years ago; they can be calculated as the sum of a series, and I think it can be done even faster with a precalculated lookup table and approximating from it, although I have no idea how they are actually implemented. Does anyone know?

    Try both, but I don't think you will get any noticeable performance boost; it might even be slower. The rejection approach misses the circle only 1 - pi/4, about 21% of the time, and I guess dmatter's method would give a more uniform result: if you choose a random angle and a random radius directly, the points tend to cluster near the center.
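    For illustration, a small C++ sketch of the two spawning approaches discussed above, for a 2D disc (the function and variable names are mine, not from the thread). Note that the polar version needs a square root on the radius if you want a uniform distribution; otherwise it clusters toward the center as mentioned:

        #include <cmath>
        #include <cstdlib>

        struct Vec2 { float x, y; };

        static float frand() { return std::rand() / (float)RAND_MAX; }   // uniform in [0, 1]

        // dmatter's approach: pick points in the bounding square, reject those outside the disc.
        Vec2 spawnRejection(float radius) {
            for (;;) {
                float x = (frand() * 2.0f - 1.0f) * radius;
                float y = (frand() * 2.0f - 1.0f) * radius;
                if (x * x + y * y <= radius * radius)   // squared distance vs squared radius
                    return { x, y };
            }
        }

        // trig approach: random angle plus random radius.
        // Taking sqrt of the uniform sample keeps the density uniform over the disc.
        Vec2 spawnPolar(float radius) {
            float angle = frand() * 6.2831853f;          // 2 * pi
            float r = radius * std::sqrt(frand());
            return { r * std::cos(angle), r * std::sin(angle) };
        }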
  16. shultays

    Particle System Help

    Why should it be inefficient?
  17. shultays

    Generate 2D map in real time

    You can move objects when the player moves away from them and they are off screen. Let's say your window size is 100x100 pixels. Imagine a 200x200 pixel square centered on the player; if an object moves out of this square, you recreate it at a random spot inside the square again (but make sure that spot is off screen). Only 1/4 of this imaginary area is visible at any time, since only the player position +- 50 pixels is on screen. So if an object is more than 100 pixels away from the player horizontally or vertically, move it to another off-screen spot on the map. The drawback is that if the player leaves an area and comes back to it later, he will notice that the area is different now.
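    A small C++ sketch of that recycling check, using the 100x100 window and 200x200 square from above (the struct and the random helper are only illustrative):

        #include <cmath>
        #include <cstdlib>

        struct Object { float x, y; };

        static float frand(float lo, float hi) {                      // uniform in [lo, hi]
            return lo + (hi - lo) * (std::rand() / (float)RAND_MAX);
        }

        // Window is 100x100 (player +/- 50 px visible); the imaginary square is 200x200 (player +/- 100 px).
        void recycleObject(Object &obj, float playerX, float playerY)
        {
            const float halfWindow = 50.0f;
            const float halfSquare = 100.0f;

            bool leftSquare = std::fabs(obj.x - playerX) > halfSquare ||
                              std::fabs(obj.y - playerY) > halfSquare;
            if (!leftSquare)
                return;

            // Pick random spots inside the square until one is off screen.
            do {
                obj.x = playerX + frand(-halfSquare, halfSquare);
                obj.y = playerY + frand(-halfSquare, halfSquare);
            } while (std::fabs(obj.x - playerX) <= halfWindow &&
                     std::fabs(obj.y - playerY) <= halfWindow);
        }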
  18. I thought it wasn't, but I have doubts now. Is this possible? If it is, how efficient is it?
  19. Quote: Original post by Evil Steve
    "1. You only process 1 message per frame, you need to process all messages, then render."

    Oh, thank you. I had a tearing problem; I am not sure why, but this solved it.
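    For anyone finding this later, here is a minimal sketch of the "process all pending messages, then render" loop in plain Win32 terms (the render callback is a placeholder, not the actual code from my project):

        #include <windows.h>

        // Main loop: drain every pending message, then render one frame.
        void runMessageLoop(void (*render)())
        {
            MSG msg;
            bool running = true;
            while (running) {
                while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
                    if (msg.message == WM_QUIT) {
                        running = false;
                        break;
                    }
                    TranslateMessage(&msg);
                    DispatchMessage(&msg);
                }
                render();   // per-frame drawing goes here
            }
        }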
  20. I am working on deferred lighting, but performance is very poor with many lights. For example, I also implemented the shadow volume algorithm; the whole scene has a shadow mesh with ~45000 triangles. With 10 lights, it runs at about 30 fps if shadowing is on and at 15-20 fps if it is off. I first thought the shadow volumes were causing the problem, but they are not :S The game runs fine with 1 light, at about 70-90 fps. Why does a post-process effect cause so much strain? I knew texture fetching was expensive, but not that much :S The light post-process does 4 texture fetches: 3 for the color, normal and position G-buffers, and 1 more to fetch the previous light results. Here is my code; I hope the problem is in my code rather than in the post-process itself.

    My light shader:

        float4 LightPS(in V2P IN) : COLOR0
        {
            IN.TexCoord.x += dx/2;
            IN.TexCoord.y += dy/2;

            float4 Color  = tex2D( ColorSampler0, IN.TexCoord );
            float4 Normal = tex2D( ColorSampler1, IN.TexCoord );
            float4 Pos    = tex2D( ColorSampler2, IN.TexCoord );

            Normal = (Normal - 0.5) * 2;

            float d = dot( Normal.xyz, normalize(lightCoor - Pos.xyz) );
            d *= saturate( (lightRange - distance(lightCoor, Pos.xyz)) / lightFallOff );

            Color.x *= lightColor.x * d;
            Color.y *= lightColor.y * d;
            Color.z *= lightColor.z * d;

            return Color + tex2D( ColorSampler4, IN.TexCoord );
        }

    and the part of the rendering code responsible for lights and shadows:

        postProcessingEffect->SetTexture("ColorTex0", QuadTexture[0]);   // color buffer
        postProcessingEffect->SetTexture("ColorTex1", QuadTexture[1]);   // normal buffer
        postProcessingEffect->SetTexture("ColorTex2", QuadTexture[2]);   // position buffer (D3DFMT_A16B16G16R16F)
        postProcessingEffect->SetTexture("ColorTex4", QuadTexture[4]);   // light buffer, all lights are accumulated here

        ClearTexture(QuadTexture[4], 0xFF000000);
        Tools::D3DDevice->SetRenderTarget(0, RenderSurface[4]);

        for(int i = 0; i < lightNum; i++){
            if(hasShadow[i]){
                Tools::D3DDevice->Clear( 0, NULL, D3DCLEAR_STENCIL, 0x00000000, 1.0f, 0 );
                for(int j = 0; j < ShadowVolume::shadowSize; j++){
                    ShadowVolume::allShadows[j]->drawShadowVolume(lightPos[i]);
                }
            }

            postProcessingEffect->SetFloatArray("lightCoor",  lightPos[i],    3);
            postProcessingEffect->SetFloatArray("lightColor", lightColors[i], 3);
            postProcessingEffect->SetFloat("lightRange",   lightRanges[i]);
            postProcessingEffect->SetFloat("lightFallOff", lightFallOffs[i]);

            if(hasShadow[i]){
                postProcessingEffect->SetTechnique( "Light" );
            }else{
                postProcessingEffect->SetTechnique( "LightWithoutShadow" );
            }
            renderQuad();
        }

    Can you notice anything else that could cause a problem?
  21. shultays

    Blur looks wrong

    Maybe it is your aspect ratio? Are you sure your back buffer and window size have the same aspect ratio? Your fonts also look a little wide to me.
  22. Actually, with the last optimization performance is great :D As long as I don't put lights on top of each other the fps is playable. Here is a screenshot with 32 lights, though only two of them cast shadows.
  23. Quote: Original post by talhadad
    "The house seems like squidward's house from spongebob..."

    lol, yes it does. It is from a custom Quake map, here.

    Quote: Original post by Ashaman73
    "Well, I think that your implementation is quite ineffective. For each light you do a full-screen pass, this is 4 texture fetches which sums up to 40 texture fetches for 10 lights. One solution, already mentioned, is to add culling per stencil buffer or just render the light volumns, but if your lights lit large areas of your screen I would suggest an other approach. Do the lighting of all of your lights in a single pass. This leaves 4 texture fetches for 10 or 50 lights. Set your light data as parameters and run a little loop in your shader. This way I render up to 50 lights. You could still try to reduce the lighting by stencil masks."

    I am now using a cube as the light volume (maybe I should change it to a sphere, but it probably won't make much difference) and I use the stencil test to find out whether the light is floating in the air or completely behind an object. I first tried the two-sided stencil test, but since I am already using the lower bits of the stencil buffer for shadowing I can't use INCR and DECR as the stencil operations, so that does not work. Now it renders the cube twice. First it renders the front faces of the cube; if the depth test fails, there is an object in front of the light, and it replaces the stencil value with 0x10 (the mask is also 0x10: the right 4 bits are reserved for shadows and the left 3 are unused for now). After that it renders the back faces of the cube with zfunc = greater, stencilfunc = equal and stencilref = 0x00, which means a pixel is rendered only if the back face of the light cube is behind an object while the front face is not. The pixel shader of this second pass computes the lighting.

    It is also down to 3 texture fetches per rendered pixel now: I am using DestBlend = SrcBlend = One, which adds the lights together without fetching the old value (see the blend-state sketch at the end of this post). It can also clip a pixel before doing all the texture fetches; for example, if the distance between the pixel position and the light position is greater than the light's range, it does not fetch the normal and color textures.

    Quote: Original post by MJP (replying to Ashaman73 above)
    "If you want to do that, you can bin your lights according to screen-space tiles. Then for each tile you only render the lights that intersect."

    Using a single pass for all lights will not work for me because of the shadow volumes; they require a separate stencil buffer for each light. But I am planning to use it for lights without shadows. So, any other optimizations? Thanks again, guys.
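    For reference, a minimal sketch of the additive blending setup mentioned above. These are standard Direct3D 9 render states; whether you set them from C++ as below or inside the .fx technique is up to you:

        // Add each light's contribution into the light buffer instead of fetching the old value in the shader.
        Tools::D3DDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
        Tools::D3DDevice->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_ONE);
        Tools::D3DDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);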
  24. Thanks again, I think I get what you meant with the stencil testing. It is pretty much like the shadow volume algorithm, except that here the volume defines the region the light can illuminate. Thanks for all your help, it works better now. Here is my scene with 10 lights, 3 of them casting shadows; the fps is around 30, still not as good as I hoped, but at least it is better now. I will check your article about reconstructing world positions from the depth buffer, but if 8 bits are not enough it will not help me reduce texture fetches. Still, a 32-bit buffer should be more efficient than using A16B16G16R16F.
  25. Quote: Original post by MJP
    "For the lighting pass of your deferred renderer, your GPU performance will most likely be directly tied to the number of pixels shaded. Any optimizations you can use to reduce that number are likely to help a lot. These include:
    - Rendering light bounding volumes instead of full-screen quads
    - Using the depth buffer to cull pixels that don't need to be lit
    - Using the stencil buffer to cull pixels that don't need to be lit
    Aside from that, 4 texture fetches is pretty heavy. Are you not using hardware blending to sum the results of your lighting pass? You may also want to look into packing your G-Buffer more tightly, by storing depth and reconstructing position from that rather than explicitly storing view-space or world-space position. It's not only better for performance, but unless you use 32-bit floats for storing position the precision will be better as well."

    Please tell me if I got this right.

    "Rendering light bounding volumes instead of full-screen quads": so instead of rendering a screen quad I render a cube for each light, centered on the light position and with side lengths equal to the light range. If a light is off screen it is not rendered at all.

    "Using the depth buffer to cull pixels that don't need to be lit": this is a side benefit of rendering a cube with depth testing, right? If the cube is behind an object it will not be rendered. Or does this mean something else?

    "Using the stencil buffer to cull pixels that don't need to be lit": sorry, I didn't get this one. I am doing a stencil test to avoid rendering on areas shadowed by the shadow volumes, but I guess that is not it.

    I never thought about using the depth value to reconstruct the world position, but are 8 bits enough for that? I am already storing the depth of each pixel for another algorithm in the normal buffer's 4th component. I will certainly try this.

    Can you explain how I can use hardware blending to sum the light results? Currently the lights are added together in the pixel shader (return Color + tex2D( ColorSampler4, IN.TexCoord );), where ColorSampler4 samples the render target being written to.