imajor

Member · Content Count: 26 · Community Reputation: 122 Neutral
  1. imajor

    What is XNA?

    Don't expect XNA to be a free 600 MB downloadable package containing a complete graphics engine with cool physics. From what I understand, XNA is only a collection of well-defined APIs. It's your responsibility to create the graphics or physics engine, or if you have money you may purchase UE3 or RenderWare for graphics, or Havok for physics; probably all of them will provide an XNA-compatible API. So what XNA changes is that in the future you may swap the main engine from RenderWare to UE3 without rewriting a single line of code.
  2. Good idea, but maybe the precision of the alpha channel is not sufficient (only 8 bits). Maybe it is enough for depth of field. You can also use a float texture for depth, but AFAIK float render targets are not supported on GeForce FX (GF5) cards; they will work with the Radeon 9xxx and GeForce 6 series.
  3. imajor

    Shaders and Backbuffer

    You may render to the backbuffer, then copy that surface to your own texture and use that texture as a source for the pixel shader (IDirect3DDevice9::GetBackBuffer to obtain the surface, then IDirect3DDevice9::StretchRect to copy it into a render-target texture; UpdateSurface only copies from system-memory surfaces). This is also a waste of time, but at least antialiasing will work. (AFAIK, if you render to a render target, AA is disabled.)
  4. You are doing everything right, I think. Multiple render targets (MRT) are exactly for such cases, when you need more information than just color. Multiple monitors are a really different case of course; you cannot use MRT for that purpose. I think you should use MRT to create the depth texture, because rendering twice would be much slower. First, use the backbuffer as the first render target instead of the 800x600 texture, because AFAIK if you render to a texture, antialiasing is disabled. So check the resolution of your backbuffer and create the depth texture with the same resolution (and of course recreate this texture on resolution change). Later you can access the content of the backbuffer with IDirect3DDevice9::GetBackBuffer and copy it with IDirect3DDevice9::StretchRect (UpdateSurface only copies from system-memory surfaces). I have not tried it yet, but I read that this is the right way. Right now I'm doing the same as you, rendering to a texture and later copying it to the backbuffer with postprocessing effects, but antialiasing is disabled in that case. The two problems seem very strange; I would try a newer driver version (if one exists). For the fog problem, maybe you should turn off hardware fog and apply the fog in the pixel shader. If you get any results, please post them here; they will be useful for me (and hopefully others).
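To make the MRT part concrete, here is a minimal HLSL pixel shader sketch (ps_2_0 style; all names are illustrative, and it assumes the vertex shader passes a normalized view-space depth down in TEXCOORD0):

```hlsl
struct PS_OUTPUT
{
    float4 Color : COLOR0;  // written to render target 0 (the scene color)
    float4 Depth : COLOR1;  // written to render target 1 (the depth texture)
};

PS_OUTPUT main(float4 color : COLOR0, float depth : TEXCOORD0)
{
    PS_OUTPUT o;
    o.Color = color;
    // Depth from the vertex shader, already in [0,1],
    // replicated into the color channels of the second target.
    o.Depth = float4(depth, depth, depth, 1.0f);
    return o;
}
```

On the C++ side, the depth texture's surface would be bound with device->SetRenderTarget(1, depthSurface) before drawing, alongside the backbuffer at index 0.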
  5. imajor

    depth via pixel shader

    No, I don't. Do you have a DX9 card? If so, try to implement the same thing with PS 2.0; as I see it, that is not as hacky as ps1.4.
  6. imajor

    Why restore failed?

    Use the debug runtime and check the debug output in Visual Studio.
  7. When calling D3DXCreateTextureFromFileEx, pass the color of the outside pixels as the ColorKey parameter. The function will automatically set the alpha value to 0 for those texels.
  8. imajor

    pixel shader 2.0 range

    What do you mean by incoming texture samples? Values read from textures using tex2D? The range of those values depends on the texture format you are using. The input color registers are also clamped to [0,1]; if that is not acceptable, use texture-coordinate registers instead, since in ps2.0 and above you can use them for any purpose, not only texture coordinates.
  9. imajor

    Blending w/ Bumpmapping

    If you have to render a transparent surface in multiple passes, let's assume the final color of the surface is D and the partial results are A, B and C. So on the first pass you get A, on the second pass B, and finally C, with D = A + B + C. If you could render in one pass, the final result in the render target would be R*invsrcalpha + D*srcalpha, where R is what was already in the render target. For the first pass, use srcalpha for srcblend and invsrcalpha for destblend; you get R*invsrcalpha + A*srcalpha. For the other passes, use srcalpha for srcblend and one for destblend, so finally you get R*invsrcalpha + A*srcalpha + B*srcalpha + C*srcalpha = R*invsrcalpha + (A+B+C)*srcalpha = R*invsrcalpha + D*srcalpha, which is what you want. (I guess.)
  10. imajor

    Blending w/ Bumpmapping

    For transparent surfaces set srcblend to srcalpha, and set destblend to one.
  11. Since graphics cards have more than one vertex/pixel pipeline and execute shaders in parallel, it is not possible to write to any constant-register space, so your idea is unfeasible. From a vertex shader you can write only the output registers; some of them are passed to the pixel shader, the others are used for special purposes (position, fog, etc.).