chrisATI

Members
  • Content count

    29
Community Reputation

266 Neutral

About chrisATI

  • Rank
    Member

  1. AMD has a tool called [url="http://developer.amd.com/tools/gpu/shader/Pages/default.aspx"]GPU Shader Analyzer[/url] that will take HLSL/GLSL and show you the actual machine instructions generated by the driver's compiler (you select which GPU you want to target). It can also estimate performance for you and analyze the bottlenecks. It's quite useful for answering these kinds of questions because you can change the HLSL dynamically and watch how the generated code changes.
  2. SH Projection

    [quote name='derKai' timestamp='1349186106' post='4986050'] Do you have any idea why this is the case? In this simplest of all tests the error is around 12%. [/quote] Removing my post because I just realized you are using a D3D function, not rolling your own. Nevermind me ;)
  3. Quote:Original post by murderv when i set the texture alpha to 1.0 everything renders just fine. Oh, in that case, maybe you accidentally have alpha blending or alpha testing enabled?
  4. It looks like this is a problem with your render state. Double check that your depth test state is setup correctly.
  5. Sounds like you want to do a scatter operation. You could do this by taking your input image and processing it as if it were a stream of point primitives, so each texel becomes a point primitive. You could then do your transformation in the vertex shader and scatter the points to whatever output addresses you wanted.
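The point-scatter idea above can be sketched on the CPU (a hypothetical illustration of the data flow, not any specific API — on the GPU the per-texel transform would run in the vertex shader):

```python
# Treat each input texel as a point primitive, run a "vertex shader"
# that computes an output address, and write the texel's value there.

def scatter(src, transform, width, height):
    """src: dict {(x, y): value}; transform: (x, y) -> (x', y')."""
    dst = {}
    for (x, y), value in src.items():
        tx, ty = transform(x, y)            # the "vertex shader" step
        if 0 <= tx < width and 0 <= ty < height:
            dst[(tx, ty)] = value           # the scatter write
    return dst

# Example: a horizontal flip expressed as a scatter.
src = {(0, 0): 'a', (1, 0): 'b', (2, 0): 'c'}
flipped = scatter(src, lambda x, y: (2 - x, y), 3, 1)
# flipped == {(2, 0): 'a', (1, 0): 'b', (0, 0): 'c'}
```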
  6. You may find some helpful information on ATI's CubeMapGen webpage: http://www.ati.com/developer/cubemapgen/index.html There's a stand-alone tool as well as a library for integrating the tool into your own application. There's also a link to a presentation that explains it all. You can use this tool to generate cubemaps that you can use for image based lighting. --Chris
  7. Z-fighting with shaders

    Quote:I'm using a 24-bit Z-buffer. What are your near and far planes set to? You can eliminate z-fighting by reducing the distance between the near and far planes... this means pushing the near plane out as far as you can get away with and pulling the far plane in as close as you can get away with. --Chris
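The effect of the near plane can be made concrete with a quick sketch (assuming a standard D3D-style depth mapping d = (f/(f-n)) * (1 - n/z); the scene numbers are illustrative):

```python
def depth_step(z_eye, near, far, bits=24):
    """Smallest eye-space depth difference resolvable at z_eye for a
    perspective depth buffer with d = (f/(f-n)) * (1 - n/z)."""
    lsb = 1.0 / (2 ** bits)                    # one depth-buffer step
    # dd/dz = f*n / ((f-n) * z^2), so dz = lsb / (dd/dz)
    return lsb * (far - near) * z_eye ** 2 / (far * near)

# Precision at 100 units with two different near-plane choices:
coarse = depth_step(100.0, 0.01, 1000.0)   # near plane at 0.01
fine = depth_step(100.0, 1.0, 1000.0)      # near plane pushed out to 1.0
# Pushing the near plane from 0.01 to 1.0 improves precision ~100x here,
# while shrinking the far plane has comparatively little effect.
```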
  8. Ambient Normalmapping

    Quote:Original post by Guoshima are there any good others ways to store precalculated lighting data such as radiosity with nice soft shadows, and use it realtime, and still be able to see normalmapping nicely? One option is to use ambient occlusion and bent normals to sample diffuse environment maps. This requires very little pre-computed data and is very efficient to render. --Chris
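A minimal sketch of the occlusion + bent-normal idea (the environment function here is a toy stand-in for a real prefiltered diffuse cubemap lookup):

```python
# Occlusion scales how much ambient light reaches the point; the bent
# normal (average unoccluded direction) picks where to sample the
# diffuse environment map. `sample_env` is a hypothetical stand-in.

def ambient_light(ao, bent_normal, sample_env):
    r, g, b = sample_env(bent_normal)        # diffuse env-map lookup
    return (ao * r, ao * g, ao * b)          # attenuate by occlusion

# Toy environment: bright sky straight up, dimmer toward the horizon.
def sky(direction):
    up = max(direction[1], 0.0)              # y-up
    return (0.4 + 0.6 * up,) * 3

lit = ambient_light(0.5, (0.0, 1.0, 0.0), sky)
# lit == (0.5, 0.5, 0.5): full sky brightness scaled by 50% occlusion.
```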
  9. Check out this slide deck: http://www.ati.com/developer/gdce/Oat-ScenePostprocessing.pdf Slides 17 through 23 (Kawase's Light Streak Filter) discuss this filter and give a sample HLSL implementation. The trick is getting the growable filter kernel right. --Chris
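The growable kernel can be sketched as follows (the constants — 4 taps per pass, base b = 4, and the power-of-offset attenuation — are my recollection of the pattern; treat them as assumptions and check the slides for the exact values):

```python
# In pass n, take a few taps along the streak direction at offsets that
# grow geometrically between passes, so a handful of passes covers a
# very long streak.

NUM_SAMPLES = 4

def streak_taps(pass_index, attenuation=0.95, b=4):
    """Return (offset, weight) taps for one streak pass (0-based)."""
    spread = b ** pass_index
    return [(spread * s, attenuation ** (spread * s))
            for s in range(NUM_SAMPLES)]

# Pass 0 samples texels 0..3; pass 1 samples 0, 4, 8, 12; pass 2
# samples 0, 16, 32, 48 -- after k passes the streak spans ~4^k texels
# while each pass only ever reads 4 samples.
offsets_pass1 = [o for o, _ in streak_taps(1)]
```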
  10. Real? Or not...

    Quote:Original post by reltham Wouldn't the reflections on the car of itself (the side mirror) be possible if the cube map is generated from inside the car and only rendering the bits of the car that can reflect onto the car when doing the cube map? Perhaps the math to align it properly doesn't work out? Cubemaps only give correct reflections at the point at which they are generated. You would need a unique cubemap for every pixel on the car's body; otherwise you get strange/impossible reflections, like the mirror being reflected by the mirror (if the mirror is on the left face of the cubemap, then parts of the mirror that face left would reflect the mirror... or parts of the hood that face left would reflect the mirror even though the mirror isn't visible from the hood... etc.).
  11. Real? Or not...

    Quote:Original post by Anonymous Poster Quote:Original post by ace_lovegrove These games are beginning to look like pixel perfect Pixar animations. I will be truely stunned when things dont look so perfect. Like dirt marks smears etc ace Not even close. The movie stuff looks significantly better. Go to a siggraph sometime and you'll realize just how advanced the Renderman stuff is. One area where the offline folks have a definite advantage is with anti-aliasing. They can use gigantic textures and render with 32xFSAA, etc, etc. This makes a huge difference for image quality. --Chris
  12. Real? Or not...

    Look at this shot: http://www.gamespot.com/xbox360/action/2daystovegas/screens.html?page=17 The car reflects the sideview mirror. A cubemap can't make the sideview mirror reflect on the body of the car. The reflections were done with raytracing. --Chris
  13. Real? Or not...

    Some of these are probably not real-time renders. If you look closely at the reflections on the cars, you'll notice that the cars reflect themselves. For example, you can see the side-view mirrors reflected on the body of the car. These aren't planar surfaces, so you can't just reflect the scene about a plane like you would for a water surface, and a cubemap can't give you these kinds of reflections. In fact, you need something like raytraced reflections to get this effect. It's possible that they're doing something complex to achieve it, but my immediate impression is that these are offline renders. --Chris This image demonstrates my point: http://www.gamespot.com/xbox360/action/2daystovegas/screens.html?page=15 [Edited by - chrisATI on August 17, 2005 5:35:25 PM]
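A small sketch of why a cubemap can't produce these self-reflections (standard reflection-vector math; the key point is that surface position never enters the lookup):

```python
# The cubemap lookup key is just the reflected direction
# R = I - 2*(N.I)*N. Two points on the car with the same normal and
# view direction fetch the SAME cubemap texel, even though a real
# mirror-on-body reflection depends on where each point sits.

def reflect(incident, normal):
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

# Same view direction and normal at two *different* surface positions:
r1 = reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))  # point near the mirror
r2 = reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))  # point far from it
# r1 == r2 == (0.0, 0.0, 1.0): identical lookups, so the mirror can't
# appear in one and not the other.
```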
  14. Quote:Original post by hplus0603 Quote:- HDR Is achieveable in 8 bit at a fragment processing cost. Not true. I believe the original poster was referring to the RGBE format, which stores three 8-bit color channels and a shared 8-bit exponent (this is the format stored in .hdr files). Many of the light probes on Paul Debevec's website are provided in this format (http://www.debevec.org/Probes/). If I remember correctly, this gives you 76.8 dB. A shared exponent isn't ideal, but in practice it tends to work fairly well most of the time. One limitation of this format is that RGBE textures can't be filtered on current hardware; you'd need hardware with native support for the format. --Chris
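For reference, a minimal sketch of RGBE packing along the lines of Ward's .hdr format (a simplified illustration; real implementations handle rounding and edge cases more carefully):

```python
import math

def float_to_rgbe(r, g, b):
    """Pack three floats into 8-bit mantissas plus a shared 8-bit exponent."""
    v = max(r, g, b)
    if v < 1e-32:
        return (0, 0, 0, 0)
    mant, exp = math.frexp(v)           # v = mant * 2**exp, mant in [0.5, 1)
    scale = mant * 256.0 / v
    return (int(r * scale), int(g * scale), int(b * scale), exp + 128)

def rgbe_to_float(r, g, b, e):
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - (128 + 8))  # undo the bias and the 8-bit scale
    return (r * f, g * f, b * f)

rgb = rgbe_to_float(*float_to_rgbe(1.0, 0.5, 0.25))
# Round-trips to (1.0, 0.5, 0.25). Channels much dimmer than the
# brightest one lose precision to the shared exponent, which is the
# compromise mentioned above.
```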
  15. OpenGL CG matrices question

    Hello, David Gosselin wrote an article about skinning and tweening in a vertex shader a few years ago for ShaderX. The article is online here: http://www.ati.com/developer/shaderx/ShaderX_CharacterAnimation.pdf All the examples were written for DirectX 8.1 using vertex shader 1.1 assembly language, but there's plenty of explanation so you should be able to port it quite easily. The basic idea is that you store a palette of skinning matrices in shader constant store. Then each vertex gets 4 bone indices and weights. The indices are used to index into constant store to find the 4 bones for that vertex. The weights are used to blend the bones to give you your final composite transform. --Chris
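The blend step described above can be sketched like this (an illustrative CPU-side version; 3x3 matrices are used for brevity, whereas a real skinning palette would use 3x4 or 4x4 transforms that include translation):

```python
# Each vertex carries 4 bone indices and 4 weights; the indexed palette
# matrices are blended into one composite transform, which is then
# applied to the vertex position.

def blend_bones(palette, indices, weights):
    """Weighted sum of the indexed bone matrices."""
    composite = [[0.0] * 3 for _ in range(3)]
    for idx, w in zip(indices, weights):
        bone = palette[idx]
        for r in range(3):
            for c in range(3):
                composite[r][c] += w * bone[r][c]
    return composite

def transform(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
double = [[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 2.0]]
palette = [identity, double, identity, identity]

# Half-weight between identity and a uniform 2x scale -> 1.5x scale.
m = blend_bones(palette, indices=(0, 1, 0, 0), weights=(0.5, 0.5, 0.0, 0.0))
v = transform(m, (1.0, 2.0, 3.0))
# v == (1.5, 3.0, 4.5)
```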