HDR lighting + bloom effect

18 comments, last by simonjacoby 14 years, 11 months ago
Quote:Original post by cyberlorddan
One more question... you said that MDX is a dead project. Does that mean it won't be updated anymore, or only that the documentation for it won't be updated? I'm asking because I saw that tessellation was only implemented in DirectX 11, and yet while working on my MDX engine I saw a struct or enumerator or something (it showed up in the IntelliSense list) named Tessellation. It might be a stupid question, but I really don't know much about MDX.


Yes, it won't be updated in any way. No D3D10, no D3D10.1, no D3D11. Not even bug fixes. Like I said, it's not even included in the SDK anymore. The tessellation stuff you're seeing in the documentation was never actually supported by D3D9 hardware, and it's completely different from the programmable tessellation available in D3D11.



Quote:Original post by cyberlorddan
Thanks for the info. I'm slowly implementing HDR in my engine (I've got everything I need for that, thanks for the links).
One more question... you said that MDX is a dead project. Does that mean it won't be updated anymore, or only that the documentation for it won't be updated? I'm asking because I saw that tessellation was only implemented in DirectX 11, and yet while working on my MDX engine I saw a struct or enumerator or something (it showed up in the IntelliSense list) named Tessellation. It might be a stupid question, but I really don't know much about MDX.


I said that I knew everything I needed to (well, I was wrong). I ran into another problem.

First, the format of the texture I render the scene to. If I set it to A16B16G16R16, semi-transparent objects are rendered wrong: instead of blending with the content below them, they blend with the color I set in the Device.Clear() call.

Second, I have no depth buffer. When rendering to the texture (surface) instead of the screen, the depth buffer doesn't work; everything is drawn in the order I submit it, so objects in the back show up in front of others.

Any solutions?

By the way, I should have mentioned that setting the texture format to RGB8 solves the transparency problem, but that's not an HDR format, right?

Have you looked at the HDRLighting sample in the SDK?
Quote:Original post by leet bix
Have you looked at the HDRLighting sample in the SDK?


Yes, but when I try to run it, it gives an error saying the device cannot be initialized properly.
I've been thinking this might be caused by my graphics card being incompatible (it's kind of old: a GeForce 6200; I'll replace it with a 9800 on May 10th, my birthday).
Could this also be because I'm using MDX instead of native DirectX?
I think I'd better post the important parts of my code (I removed the parts that weren't relevant to the problem).

Let me explain first what this code should do (or what's left of it). First I initialize everything I have to. The render() function is called every frame. The problems: objects are rendered in front of others (the depth buffer doesn't work), and the transparency is a mess. You can look at this image to see the results I get: http://i383.photobucket.com/albums/oo272/cyberlorddan/dxprob.png

I also explained some things in the image.


namespace LevelEditor
{
public partial class Main : Form
{
int widthP;
int heightP;

Device motorGrafic;

VertexBuffer vb = null;
IndexBuffer ib = null;

Matrix projection;
Matrix camera;

short[] indices = { 0, 1, 2, 2, 1, 3 };

CustomVertex.PositionNormalTextured[] terrainTriangle;
TerrainPointByPoint[] terrainDetailPBP; // TerrainPointByPoint is a class that holds the terrain data such as height, texture, pathing and so on

Texture rocksTexture;
Texture dirtFinalTexture;

Texture originalRenderedScene;
Surface originalRenderSurface;
Surface bbS;
Surface depthStencilS;

CustomVertex.TransformedTextured[] screenVertices = new CustomVertex.TransformedTextured[6];

Effect effect;

float curWindHei;
float curWindWid;

public Main()
{
InitializeComponent();
//widthP and heightP represent the terrain size; they are initialized here (code removed because it was big and not relevant)

initializeMainEngine();
initializeBuffers();
initializeMeshes();
initializeTextures();

vd = new VertexDeclaration(motorGrafic, velements);

originalRenderedScene = new Texture(motorGrafic, this.Width, this.Height, 1, Usage.RenderTarget, Format.A16B16G16R16, Pool.Default); // note: recreates the render target already created in OnVertexBufferCreate, with a different (non-float) format and size
originalRenderSurface = originalRenderedScene.GetSurfaceLevel(0);

}
void initializeBuffers()
{
vb = new VertexBuffer(typeof(CustomVertex.PositionNormalTextured), 4, motorGrafic, Usage.Dynamic | Usage.WriteOnly, CustomVertex.PositionNormalTextured.Format, Pool.Default);
vb.Created += new EventHandler(this.OnVertexBufferCreate);
OnVertexBufferCreate(vb, null);
ib = new IndexBuffer(typeof(short), indices.Length, motorGrafic, Usage.WriteOnly, Pool.Default);
ib.Created += new EventHandler(this.OnIndexBufferCreate);
}
void initializeMainEngine()
{
PresentParameters paramPrez = new PresentParameters();
paramPrez.SwapEffect = SwapEffect.Discard;
paramPrez.Windowed = true;
paramPrez.MultiSample = MultiSampleType.FourSamples;
paramPrez.AutoDepthStencilFormat = DepthFormat.D16;
paramPrez.EnableAutoDepthStencil = true;
paramPrez.BackBufferFormat = Format.X8R8G8B8;

motorGrafic = new Device(0, DeviceType.Hardware, this.splitContainer1.Panel2, CreateFlags.SoftwareVertexProcessing, paramPrez);
effect = Effect.FromFile(motorGrafic, "defaultEffect.fx", null, ShaderFlags.None, null);
}
void initializeTextures()
{
//removed code
}
void initializeMeshes()
{
//removed code
}
void initializeTerrain()
{
//removed code

}
void OnIndexBufferCreate(object sender, EventArgs e)
{
//removed code
}

void OnVertexBufferCreate(object sender, EventArgs e)
{
VertexBuffer buffer = (VertexBuffer)sender;
originalRenderedScene = new Texture(motorGrafic, this.splitContainer1.Panel2.Width, this.splitContainer1.Panel2.Height, 1, Usage.RenderTarget, Format.A16B16G16R16F, Pool.Default);
originalRenderSurface = originalRenderedScene.GetSurfaceLevel(0);


//some code removed here

}

void generateTerrainData(int whichOne)
{
//code removed here
}

Point GetMouseCoordonates()
{
//code removed here
}

void render()
{

projection = Matrix.PerspectiveFovLH((float)Math.PI / 4, curWindWid / curWindHei, 0.1f, 50.0f);
camera = Matrix.LookAtLH(currentCameraPosition, currentCameraTarget, currentCameraUp);


bbS = motorGrafic.GetBackBuffer(0, 0, BackBufferType.Mono);


motorGrafic.SetRenderTarget(0, originalRenderSurface);

motorGrafic.Indices = ib;
motorGrafic.VertexDeclaration = vd;
motorGrafic.RenderState.SourceBlend = Blend.SourceAlpha;
motorGrafic.RenderState.DestinationBlend = Blend.InvSourceAlpha;
motorGrafic.SetStreamSource(0, vb, 0);
motorGrafic.BeginScene();

motorGrafic.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.CornflowerBlue.ToArgb(), 1, 0);

motorGrafic.RenderState.AlphaBlendEnable = true;
motorGrafic.RenderState.ZBufferEnable = true;

effect.SetValue("xColoredTexture", rocksTexture);
effect.Technique = "Simplest";
effect.Begin(0);
effect.BeginPass(0);
effect.SetValue("xViewProjection", Matrix.Translation(trackBar1.Value, trackBar2.Value, trackBar3.Value) * camera * projection);
effect.SetValue("xRot", Matrix.Translation(trackBar1.Value, trackBar2.Value, trackBar3.Value));
motorGrafic.SetTexture(0, rocksTexture);
motorGrafic.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, 4, 0, 2); // i use this to render the position of my light


////i render here my scene


effect.EndPass();
effect.End();
motorGrafic.EndScene();

motorGrafic.RenderState.Lighting = false;
motorGrafic.SetRenderTarget(0, bbS);
motorGrafic.SetTexture(0, originalRenderedScene);
motorGrafic.Clear(ClearFlags.Target, Color.Red, 1, 0);
motorGrafic.BeginScene();


motorGrafic.VertexFormat = CustomVertex.TransformedTextured.Format;
motorGrafic.RenderState.CullMode = Cull.None;

motorGrafic.DrawUserPrimitives(PrimitiveType.TriangleList, 2, screenVertices);

motorGrafic.EndScene();
motorGrafic.Present();
}

void setCameraPosition(float xCamPozS, float yCamPozS, float zCamPozS)
{

////removed code
}
void setCameraTarget(float xCamTarS, float yCamTarS, float zCamTarS)
{
//removed code
}
}
}


...so how do I fix this?
originalRenderedScene = new Texture(motorGrafic, this.Width, this.Height, 1, Usage.RenderTarget, Format.A16B16G16R16, Pool.Default);
originalRenderSurface = originalRenderedScene.GetSurfaceLevel(0);


Only the latest DirectX 10 compatible graphics cards (NVIDIA G8x) support alpha blending, filtering and multi-sampling on a 16:16:16:16 render target. Graphics cards that support the 10:10:10:2 render-target format support alpha blending and multi-sampling of that format (ATI R5 series). Some DirectX 9 graphics cards that support the 16:16:16:16 format support alpha blending and filtering (NVIDIA G7x); others support alpha blending and multi-sampling but not filtering (ATI R5 series). Alternative color spaces such as HSV, CIE Yxy, L16uv and RGBE do not support alpha blending, so all blending operations still have to happen in RGB space.
An implementation of a high-dynamic-range renderer that renders into 8:8:8:8 render targets might distinguish between opaque and transparent objects. The opaque objects are stored in a buffer that uses the CIE Yxy color model or the L16uv color model to distribute precision over all four channels of that render target. Transparent objects, which need alpha blending, would be stored in another 8:8:8:8 render target in RGB space. Only the opaque objects would therefore receive better color precision.
To give transparent and opaque objects the same color precision, a multiple-render-target setup consisting of two 8:8:8:8 render targets might be used. For each color channel, bits 1-8 would be stored in the first render target and bits 4-12 in the second (an RGB12AA render-target format). This gives a 4-bit overlap that should be good enough for alpha blending.
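A runtime capabilities check can tell you up front whether blending on a floating-point target will work on the installed card, instead of finding out from broken output. Here's a minimal MDX sketch; adapter 0, the X8R8G8B8 display format, and the helper name CanBlendOnFormat are assumptions, not code from this thread:

```csharp
using Microsoft.DirectX.Direct3D;

static bool CanBlendOnFormat(Format renderTargetFormat)
{
    // Can the format be used as a render-target texture at all?
    bool usableAsTarget = Manager.CheckDeviceFormat(
        0,                   // adapter 0 assumed
        DeviceType.Hardware,
        Format.X8R8G8B8,     // display/adapter format assumed
        Usage.RenderTarget,
        ResourceType.Textures,
        renderTargetFormat);

    // Does the card also support post-pixel-shader (alpha) blending on it?
    bool blendable = Manager.CheckDeviceFormat(
        0, DeviceType.Hardware, Format.X8R8G8B8,
        Usage.RenderTarget | Usage.QueryPostPixelShaderBlending,
        ResourceType.Textures,
        renderTargetFormat);

    return usableAsTarget && blendable;
}
```

If CanBlendOnFormat(Format.A16B16G16R16F) returns false on a given card, that would explain exactly the kind of broken transparency described above, and you'd fall back to one of the 8:8:8:8 encodings.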
Quote:Original post by leet bix
originalRenderedScene = new Texture(motorGrafic, this.Width, this.Height, 1, Usage.RenderTarget, Format.A16B16G16R16, Pool.Default);
originalRenderSurface = originalRenderedScene.GetSurfaceLevel(0);


Only the latest DirectX 10 compatible graphics cards (NVIDIA G8x) support alpha blending, filtering and multi-sampling on a 16:16:16:16 render target. Graphics cards that support the 10:10:10:2 render-target format support alpha blending and multi-sampling of that format (ATI R5 series). Some DirectX 9 graphics cards that support the 16:16:16:16 format support alpha blending and filtering (NVIDIA G7x); others support alpha blending and multi-sampling but not filtering (ATI R5 series). Alternative color spaces such as HSV, CIE Yxy, L16uv and RGBE do not support alpha blending, so all blending operations still have to happen in RGB space.
An implementation of a high-dynamic-range renderer that renders into 8:8:8:8 render targets might distinguish between opaque and transparent objects. The opaque objects are stored in a buffer that uses the CIE Yxy color model or the L16uv color model to distribute precision over all four channels of that render target. Transparent objects, which need alpha blending, would be stored in another 8:8:8:8 render target in RGB space. Only the opaque objects would therefore receive better color precision.
To give transparent and opaque objects the same color precision, a multiple-render-target setup consisting of two 8:8:8:8 render targets might be used. For each color channel, bits 1-8 would be stored in the first render target and bits 4-12 in the second (an RGB12AA render-target format). This gives a 4-bit overlap that should be good enough for alpha blending.


Thanks for the answer. So I'll have to wait until I get a new NVIDIA graphics card... I could use that time to move the code to C++.
Anyway, I still can't get the depth buffer to work properly. This isn't a hardware problem, because the HDRFormats sample shows the teapot as it should. Any solution for the depth buffer problem?
I don't think you need newer hardware, just use a different-format render target to store your HDR values.
I don't know what's wrong with the depth buffer, but I don't think you should have to use a floating-point buffer to hold that information; 256 deltas per channel should be plenty.
Quote:Original post by leet bix
I don't think you need newer hardware, just use a different-format render target to store your HDR values.
I don't know what's wrong with the depth buffer, but I don't think you should have to use a floating-point buffer to hold that information; 256 deltas per channel should be plenty.


I 'solved' the problem with the depth buffer. I had multisampling turned on; when I turned it off, the depth buffer worked as it should.
But now a new, probably stupid, question arises: how do I turn multisampling on without 'damaging' the depth buffer?
You need to create a multisampled depth buffer to match your color buffer. The color and depth buffers must always match in multisample modes (and no multisampling is also a multisample mode ;))

/Simon
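In MDX, the fix simonjacoby describes could look roughly like this. It's a sketch only: motorGrafic, originalRenderedScene and the formats come from the code posted earlier, while width/height and the rest are assumptions. Textures can never be multisampled in D3D9, so you render into a multisampled off-screen surface paired with a matching multisampled depth-stencil surface, then resolve into the texture with StretchRectangle:

```csharp
// Create a multisampled color surface and a depth-stencil surface with the
// SAME MultiSampleType -- this is the matching simonjacoby is talking about.
Surface msColor = motorGrafic.CreateRenderTarget(
    width, height, Format.A16B16G16R16F,
    MultiSampleType.FourSamples, 0, false);
Surface msDepth = motorGrafic.CreateDepthStencilSurface(
    width, height, DepthFormat.D16,
    MultiSampleType.FourSamples, 0, true);

// Render the scene into the multisampled pair:
motorGrafic.SetRenderTarget(0, msColor);
motorGrafic.DepthStencilSurface = msDepth;
// ... BeginScene() / draw calls / EndScene() as in render() above ...

// Resolve the multisampled surface into the plain texture that the
// tone-mapping/bloom pass will sample:
Surface dest = originalRenderedScene.GetSurfaceLevel(0);
motorGrafic.StretchRectangle(
    msColor, new Rectangle(0, 0, width, height),
    dest, new Rectangle(0, 0, width, height),
    TextureFilter.None);
```

Before creating the surfaces, it would be worth calling Manager.CheckDeviceMultiSampleType to confirm FourSamples is actually supported for both the color and depth formats on the card, and falling back to MultiSampleType.None if it isn't.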

This topic is closed to new replies.
