Guoshima

Members
  • Content count: 191
  • Joined

  • Last visited

Community Reputation

204 Neutral

About Guoshima

  • Rank: Member
  1. Hello, I have a few questions about using early-Z and double-speed depth-only rendering. At the beginning of the frame, we render all visible objects to a depth texture, storing the linear depth from the camera. This texture is later used for other effects such as shadows, volumetric fog, water, ... During this pass I would like to take advantage of double-speed depth-only rendering, and later, while shading the scene, use the depth-pass data for early-Z culling. But I have some problems doing so. To get double-speed rendering I'm not allowed to write to the color buffer, and I may not use alpha testing or texkill. This causes problems with alpha-tested geometry such as grass and trees, though it would still be possible if I simply don't render any objects with alpha properties in this pass. Also, I have to create a depth render target for this, but I presume I can't bind the depth data in that offscreen buffer as the depth/stencil buffer while rendering to my main back buffer - and that's exactly how I would like to use the offscreen depth data for early-Z. I presume someone must have had these issues before. Any suggestions? Regards, Kenzo
  2. Hi all, I'm doing some tests on texture splatting for terrains using a texture atlas, which stores the different ground types. The rendering goes fine, but the mipmapping gives me a headache. Let me explain my situation: each ground type is tiled across the terrain (wrap address mode, scaled and biased in the pixel shader to sample the atlas correctly and support texture filtering), but seams are visible between the tiles (only when mipmapping is enabled, though). I think I understand why this happens: at the border of each "wrap" the texture coordinates jump from 0.99999f to 0.00001f, which is a huge derivative, so the lowest mipmap is sampled for that texel. When I give the lowest mipmap in my texture an explicit color (like red), you can clearly see that it is the lowest mipmap that peeks through between the tiles. I read the article in ShaderX3 on using texture atlases; there the author calculates the ddx and ddy of the texture coordinates and passes them to the HLSL "tex2D" instruction to help sample the correct mipmap. This indeed solves the mipmap problem, but it really hurts the framerate (almost 1/3 less on an nVidia 7800 GT card). Is there any reason why this is so slow? Perhaps someone here has experience using texture atlases and knows another way of solving the mipmap problem? Thanks a lot in advance!
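The scale-and-bias step described in the post can be sketched on the CPU as plain math. This is a minimal illustration, not the poster's actual shader; the names (atlasUV, tilesPerSide) and the square tile layout are assumptions. The key point is that the wrap (the frac/floor step) is what makes the hardware-computed derivatives explode at tile borders, which is why passing the ddx/ddy of the *continuous* terrain UV to tex2D (tex2Dgrad) fixes the seams.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical atlas layout: tilesPerSide x tilesPerSide ground-type
// tiles packed in one square atlas texture.
struct Float2 { float x, y; };

// Wrap the terrain UV into [0,1) (emulating wrap addressing), then scale
// and bias it into the sub-rectangle of tile (tileX, tileY) in the atlas.
Float2 atlasUV(Float2 terrainUV, int tileX, int tileY, int tilesPerSide) {
    float scale = 1.0f / tilesPerSide;
    float wrappedU = terrainUV.x - std::floor(terrainUV.x); // jumps 0.9999 -> 0.0001 at borders
    float wrappedV = terrainUV.y - std::floor(terrainUV.y); // -> huge screen-space derivative
    return { (tileX + wrappedU) * scale, (tileY + wrappedV) * scale };
}
```

Because the hardware derives the mip level from the post-wrap coordinates, the discontinuity at the tile border selects the smallest mip for that pixel quad; supplying ddx/ddy of the unwrapped terrain UV (scaled by 1/tilesPerSide) gives the sampler a smooth gradient instead.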
  3. Hello, we would like to use saved .acv files from the Photoshop curve editor to build a transform that can be applied in a post-process pass to modify the current scene. Artists know how to use this tool, and I presume you can do almost anything with it. I thought I had already seen this somewhere in a book or on the internet, but I can't remember where. If someone has a better suggestion for adding a global color-correction post-process effect, which supports desaturation, night vision, brightness, hue, ... always welcome of course! Regards, Kenzo
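A sketch of reading the .acv file itself, assuming the layout documented in Adobe's file-format specification: everything is a big-endian 16-bit integer - a version word, a curve count, and per curve a point count followed by (output, input) pairs in 0..255. One caveat for the "matrix" idea in the post: a matrix can only express linear color transforms, while Photoshop curves are generally nonlinear, so baking each curve into a 256-entry LUT texture is the more faithful route.

```cpp
#include <cassert>
#include <cstdint>
#include <cstddef>
#include <utility>
#include <vector>

// Read one big-endian 16-bit value.
static uint16_t readBE(const uint8_t* p) { return uint16_t((p[0] << 8) | p[1]); }

using Curve = std::vector<std::pair<int, int>>; // (input, output) anchor points, 0..255

// Parse an .acv byte stream into a list of curves (no error handling in this sketch).
std::vector<Curve> parseAcv(const std::vector<uint8_t>& data) {
    size_t pos = 0;
    auto next = [&] { uint16_t v = readBE(&data[pos]); pos += 2; return v; };
    next();                            // version word, ignored here
    uint16_t curveCount = next();      // number of curves in the file
    std::vector<Curve> curves;
    for (int c = 0; c < curveCount; ++c) {
        uint16_t points = next();      // anchor points in this curve (2..19)
        Curve curve;
        for (int i = 0; i < points; ++i) {
            int output = next();       // the file stores output first, then input
            int input  = next();
            curve.push_back({input, output});
        }
        curves.push_back(curve);
    }
    return curves;
}
```

From the anchor points you would interpolate (Photoshop uses a smooth spline through them) to fill a per-channel 256-entry LUT, which the post-process shader then samples.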
  4. Wang Tiles

    Thanks for the link, but the result I get from the app doesn't seem correct. Or do I have to set some special values? I tried it with a brick texture and the quilted texture is completely wrong. Regards, Kenzo
  5. Wang Tiles

    Hello, Wang tiles seem to be the perfect solution to make texture splatting look much better. But I can't find any demo anywhere that generates a Wang-tiled texture from an input texture. There should be one somewhere, I suppose, since the idea has been around for decades. I can of course write it myself, but it'll take longer for a simple test... So, does anyone have an idea where I might find a demo or sample that uses Wang tiles, or creates them? (Or a plugin for Photoshop or something.) Regards, Kenzo
  6. Terrain Texturing

    I was able to start the app on another PC. From far away it looks really nice, but up close it's again not of high quality. Any other suggestions? Regards, Kenzo
  7. Hello, I would like to add a new way of texturing our terrain, but I can't really find a good solution out there. I have already tried a few things, but none of them were good enough. The goal is simple: fast to render, good quality near the camera, and easy for an artist to modify. One big texture with some sort of MegaTexture streaming solution might be interesting, but the problems are bandwidth and reading from DVD-ROM drives on consoles. And it's not so easy to implement a fast version that is editable by an artist, I think. The only other solution I know of is some form of texture splatting, but the simple variants I tried weren't good enough. Does anyone have a good working suggestion for this? In ShaderX4 I found an article on procedural textures (7.3), but the demo doesn't work for some reason. Has anyone already tried to implement it? Regards, Kenzo
  8. Instancing and Lighting?

    Quote: "Use a hybrid renderer :-) ... normals laid out in screen space like in deferred rendering" If you do this, I presume you also use MRTs to render, and you lose hardware AA anyway, right? Quote: "Keep in mind that, if you've got a bunch of objects or characters on-screen, most of them aren't going to need full-fidelity rendering. Using ambient plus one dual-hemisphere light per object will probably be plenty for most instances." True, but rendering the terrain 50 times is no good solution either :) (for the terrain, simple lighting won't be enough). Something in between has to be found, and it's harder to do because you have to take care of more things than with default deferred rendering. But in the end, if you have a good working system, it will probably be faster.
  9. grass shadows

    Taking pictures of grass works fine; then modify them in Photoshop. The problem with the GPU Gems article is that it assumes static lights - otherwise you have to recompute the shadows every time the light changes. If you have an SPU left you might be able to do this, but otherwise... :) The PDF does look very nice, but I wonder how many triangles you have to draw to make a normal piece of terrain look realistic. A few 100K, I presume. And the vertex and pixel shaders are not cheap either, I think (with wind movement and heightmaps, that is). Regards, Kenzo
  10. Instancing and Lighting?

    This is very correct, and that's another reason why deferred shading is so handy: you can instance all your geometry without having to think about the lighting phase yet. This works great for small objects, trees, ... I asked this same question to ATI, and the only thing they had to say was to divide the world into patches and instance locally only; all lights that influence a patch are added to a list processed by the shader. Not a handy solution, if you ask me. This is the reason why I still use deferred rendering for my outdoor environments, even though I actually want to get rid of deferred shading because of the hardware problems I have with nVidia (nVidia is very slow when rendering to 4 floating-point render targets at a big resolution compared to ATI - our outdoor environments run almost 70% faster on ATI than on nVidia). How else can you render an outdoor night scene with more than 50 dynamic lights? Regards, Kenzo
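The per-patch light list ATI suggested can be sketched as a uniform grid: each light, treated as a bounding sphere, is registered with every patch its sphere overlaps, and the shader for a patch then loops over only that list. This is a hypothetical illustration of the idea, not code from any vendor; all names (Light, buildPatchLightLists, worldSize) are made up.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// A point light projected onto the ground plane: position (x, z) and radius.
struct Light { float x, z, radius; };

// Split a square world of side worldSize into patchesPerSide^2 patches and
// return, for each patch, the indices of the lights whose sphere touches it.
std::vector<std::vector<int>> buildPatchLightLists(
        const std::vector<Light>& lights,
        float worldSize, int patchesPerSide) {
    float patchSize = worldSize / patchesPerSide;
    std::vector<std::vector<int>> lists(patchesPerSide * patchesPerSide);
    for (int i = 0; i < (int)lights.size(); ++i) {
        const Light& l = lights[i];
        // Range of patch cells covered by the light's bounding square.
        int x0 = std::max(0, (int)std::floor((l.x - l.radius) / patchSize));
        int x1 = std::min(patchesPerSide - 1, (int)std::floor((l.x + l.radius) / patchSize));
        int z0 = std::max(0, (int)std::floor((l.z - l.radius) / patchSize));
        int z1 = std::min(patchesPerSide - 1, (int)std::floor((l.z + l.radius) / patchSize));
        for (int z = z0; z <= z1; ++z)
            for (int x = x0; x <= x1; ++x)
                lists[z * patchesPerSide + x].push_back(i);
    }
    return lists;
}
```

The drawback the post complains about is visible here: instancing now has to be done per patch, and a light spanning several patches is processed once per patch it touches.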
  11. grass shadows

    For good quality you have to use some form of CSM, and only in your nearest shadow map do you need to render the grass - you won't see it in the rest anyway. Up close I render my grass as instanced geometry, and a little further away I render billboarded patches. You don't see the transition, and you get the impression the whole field is filled with realistic grass geometry. Shadows on and from grass are very expensive, because the large amount of overdraw eats a lot of fillrate. It might be better to fake it with precomputed shadow maps or something you can tile. This technique works fine for tree shadows in a forest, so I presume it must also work for grass patches. Good luck, Kenzo
  12. For spotlights I use VSM and rather small shadow maps which I then blur (normally a spotlight shouldn't cover too much world space). The shadows are far from hard and you might have some light bleeding, but the overall result looks really soft. For characters and other objects that need to cast highly detailed shadows you need another solution anyway.
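The VSM test behind this (and the source of the light bleeding mentioned) is Chebyshev's inequality: the blurred map stores the first two depth moments E[d] and E[d^2], and the receiver's visibility is bounded by variance / (variance + (t - mean)^2). A minimal sketch of that reconstruction, with the variance floor as an assumed tuning parameter:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// m1 = E[d], m2 = E[d^2] sampled from the blurred VSM; t = receiver depth.
// Returns an upper bound on the fraction of the filter region that is lit.
float vsmVisibility(float m1, float m2, float t, float minVariance = 1e-4f) {
    if (t <= m1) return 1.0f;                      // receiver in front of mean occluder: lit
    float variance = std::max(m2 - m1 * m1, minVariance); // clamp to avoid numeric issues
    float d = t - m1;
    return variance / (variance + d * d);          // Chebyshev upper bound p_max
}
```

Light bleeding appears exactly because this is only an upper bound: where occluders at very different depths overlap, the variance is large and the bound stays well above the true visibility.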
  13. point light soft shadows

    You can use a VSDCT (virtual shadow depth cube texture) for this. You can find more information in ShaderX3 (in an article based on one from ShaderX2). It basically unfolds a cubemap into a 2D texture and uses a small indirection cubemap to transform the 3D texcoord into a 2D texcoord in your shader. For the 2D texture you can then use hardware shadow maps. Alternatively, storing VSM values in your cube texture should also give you filtering, if your hardware supports it. Regards, Kenzo
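What the indirection does can be sketched in plain code: pick the dominant axis of the lookup direction (the cube face), project onto that face, then offset into that face's tile in the unfolded 2D texture. The 3x2 face packing and the face orientations below are illustrative assumptions; in the real technique this mapping is baked into a small indirection cubemap so the pixel shader only pays for one extra cubemap fetch.

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Map a (non-zero) direction vector to a coordinate in an unfolded cube
// texture whose six faces are packed in a 3x2 grid.
Vec2 cubeDirTo2D(float x, float y, float z) {
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    int face; float u, v, ma;
    if (ax >= ay && ax >= az) {         // +X / -X face
        face = x > 0 ? 0 : 1; ma = ax; u = -z * (x > 0 ? 1.0f : -1.0f); v = -y;
    } else if (ay >= az) {              // +Y / -Y face
        face = y > 0 ? 2 : 3; ma = ay; u = x; v = z * (y > 0 ? 1.0f : -1.0f);
    } else {                            // +Z / -Z face
        face = z > 0 ? 4 : 5; ma = az; u = x * (z > 0 ? 1.0f : -1.0f); v = -y;
    }
    u = (u / ma) * 0.5f + 0.5f;         // face-local coords in [0,1]
    v = (v / ma) * 0.5f + 0.5f;
    int col = face % 3, row = face / 3; // tile position in the 3x2 atlas
    return { (col + u) / 3.0f, (row + v) / 2.0f };
}
```

Once every face lives in one 2D texture, that texture can be a hardware shadow map format, which a real cubemap couldn't be on that generation of hardware.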
  14. Hello, small question: how can I get the maximum amount of video memory available (using DirectX or Win API calls)? It should work on both ATI and nVidia. Regards, Kenzo
  15. I would simply bake my AO into a separate texture, on which you can use DXT1 compression. You can even put RGB values (radiosity maps) in there if you want. If you put your AO in the alpha of your normal map, you can't compress your normal map anymore (at least not if you don't want to lose too much quality or exclude nVidia graphics hardware). You can simply bake AO or radiosity using Max or a similar tool. Regards, Kenzo