cyberlorddan

HDR lighting + bloom effect

19 posts in this topic

Hi! I want to implement HDR lighting in my engine, but I can't find a good tutorial for Managed DirectX that shows how to do it. Basically, I understand what I have to do:
1. Render the color data to a texture (color can exceed 1.0f).
2. Resize the texture (make it smaller).
3. Set the areas that are below a luminance level to black (0).
4. Blur the texture.
5. Scale the texture back to its original size.
6. Render the original texture combined with the texture resulting from step 5.
7. Get the maximum luminance level of the texture resulting from step 6.
8. Divide the texture colors by the maximum luminance.

1st question: are these the right steps to perform HDR rendering?
2nd question(s): if they are, how do I render the color data to a texture? How do I resize the texture? How do I get the luminance of a pixel? How do I blur the texture? How do I combine two textures? How do I get the maximum luminance level of a texture?
3rd question: can all those operations be described in one .fx file? (I mean I don't want to do any of these operations in my application code; I don't want to use the CPU.)
Thanks in advance.
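For reference, the eight steps above can be sketched end to end in plain Python/NumPy. This is purely illustrative of the arithmetic, not MDX or HLSL code; every helper name here is invented for the sketch:

```python
import numpy as np

LUMA = np.array([0.25, 0.5, 0.25])  # toy luma weights that sum to exactly 1.0
                                    # (real code uses Rec. 601/709 coefficients)

def luminance(rgb):
    # Steps 3 and 7 need a scalar brightness per pixel.
    return rgb @ LUMA

def bright_pass(rgb, threshold=1.0):
    # Step 3: keep only pixels whose luminance exceeds the threshold.
    mask = luminance(rgb) > threshold
    return rgb * mask[..., None]

def box_blur(img, radius=1):
    # Step 4: a crude separable box blur (a real bloom uses Gaussian weights).
    out = img.astype(float).copy()
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for d in range(-radius, radius + 1):
            acc += np.roll(out, d, axis=axis)
        out = acc / (2 * radius + 1)
    return out

def tone_map(rgb):
    # Steps 7-8: divide by the maximum luminance so the result lands in [0, 1].
    max_lum = luminance(rgb).max()
    return rgb / max_lum if max_lum > 0 else rgb

# Step 1: an HDR "render" whose values may exceed 1.0
hdr = np.zeros((4, 4, 3))
hdr[1, 1] = [4.0, 4.0, 4.0]          # one very bright pixel

bloom = box_blur(bright_pass(hdr))   # steps 3-4 (down/upsample, steps 2 and 5, omitted)
combined = hdr + bloom               # step 6
ldr = tone_map(combined)             # steps 7-8
print(ldr.max())                     # close to 1.0 after tone mapping
```

On the GPU, steps 2 through 6 map to render-target passes with pixel shaders, and step 7 is usually done by repeatedly downsampling a luminance texture until it is 1x1.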
You can start with the examples in the DirectX SDK. They are a good starting point for doing your own stuff.
Oops, I just saw that you are specifically looking for Managed DirectX... I think it would still be good to check out the C++ samples first.
What you describe is a good starting point. You will have to use render targets with higher precision than 8 bits per channel, and there are lots of small challenges attached to each of the points you mention in 1-8.
How much detail you go into will determine how good your HDR pipeline is in the end.
Gamma correction will also be a major topic to look into.
First of all... why MDX? It's a dead project that's no longer included in the SDK, and it will never be updated again. These days, if you want to use DirectX from a managed language, your only real choices are SlimDX and XNA. The former is a (very good) wrapper of DX9/DX10/DX10.1, while the latter is a wrapper of DX9 combined with other game-related framework utilities (it's also compatible with the Xbox 360 and Zune).

Secondly, I agree with wolf that the SDK samples are a good place to start. In fact, there's a managed port of the HDR Pipeline sample that happens to be the first hit when you search on Google for "HDR sample MDX".

You might also want to check this out; it's a good overview.
Wow... thanks for the fast answers.

OK. So, to answer your... answers :) I'm working on an RTS game, and right now I'm doing the map editor. The reason I'm using MDX is that I don't know how to add buttons, panels, and so on in native C++. (The game application is written in native C++; I've done the start screen and now I have to do the level editor.)

I've looked at the samples that come with the SDK, but I simply don't understand them. I started studying HLSL two days ago (I'm following the www.riemers.net tutorial).

Thanks for the 'long answer' :). It was the kind of answer I was expecting.
But I don't know how to do some of the things you described. First, you said that I have to copy the HDR texture to the back buffer. Isn't there a simpler way of doing this? Something like device.backbuffer = hdrTexture? From what you wrote, I have to create 4 vertices and render them with the HDR texture, right?
Second: I don't know how to scale the texture (create mipmaps).
Third: I didn't understand how to blur the texture :(

I think that's all :)

[Edited by - cyberlorddan on April 26, 2009 3:36:37 PM]
Quote:
Original post by cyberlorddan

OK. So, to answer your... answers :) I'm working on an RTS game, and right now I'm doing the map editor. The reason I'm using MDX is that I don't know how to add buttons, panels, and so on in native C++. (The game application is written in native C++; I've done the start screen and now I have to do the level editor.)



Well, I really don't think you want to maintain one version of your renderer in native DX and another in MDX. That's a disaster waiting to happen.

What you CAN do is generate managed wrappers of your native C++ classes using C++/CLI. This will let you write your editor in C# and still use the same native rendering back-end. However, I'll warn you that although it starts out fairly simple, maintaining the wrappers can turn into a very non-trivial task. If you're not working on a bigger team, it's much simpler to just have everything written in managed code.

Another option is to use a C++ toolkit like Qt or wxWidgets for the UI. Those are generally much easier to work with than the native Windows API.
1. What you generally do is: once you have all of your final images and all that's left is to merge them, you get a copy of the back buffer through the device, set it as the render target, then draw a full-screen quad that combines all of the intermediate images into the final one.

Questions 2 and 3 can be answered if you take the time to look at some code from the SDK or another relevant source. For examples of how to do both, take a look at this article: http://www.gamedev.net/columns/hardcore/hdrrendering/

There is also a more in-depth and complex description of the HDR process, based on the Direct3D 10 API, here on the wiki.
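On the blur step specifically, bloom implementations usually use a separable Gaussian filter. Here is a small sketch (plain Python, just to show the math behind the weights you would upload as shader constants; not actual shader code):

```python
import math

def gaussian_weights(radius, sigma):
    # 1D Gaussian kernel; a separable blur applies it horizontally,
    # then vertically, over the downsampled bright-pass texture.
    w = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    total = sum(w)
    # Normalize so the blur does not brighten or darken the image overall.
    return [x / total for x in w]

weights = gaussian_weights(radius=2, sigma=1.0)
print(len(weights), round(sum(weights), 6))  # 5 1.0
```

Applying the same 1D weights in a horizontal pass and then a vertical pass gives the full 2D Gaussian at far lower cost than a single 2D kernel.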
Thanks for the info. I'm slowly adding HDR to my engine (I've got everything I need for this... thanks for the links :) ).
One more question... You said that MDX is a dead project. Does that mean it won't be updated anymore, or only that its documentation won't be updated? I'm asking because I saw that tessellation was introduced only in DirectX 11, yet somewhere (I can't remember where), while modifying my MDX engine, I saw a struct or enumerator or something named Tesselation show up in the IntelliSense list... :| It might be a stupid question, but I really don't know much about MDX :(
Quote:
Original post by cyberlorddan
One more question... You said that MDX is a dead project. Does that mean it won't be updated anymore, or only that its documentation won't be updated? I'm asking because I saw that tessellation was introduced only in DirectX 11, yet somewhere (I can't remember where), while modifying my MDX engine, I saw a struct or enumerator or something named Tesselation show up in the IntelliSense list... :| It might be a stupid question, but I really don't know much about MDX :(


Yes, it won't be updated in any way. No D3D10, no D3D10.1, no D3D11. Not even bug fixes. Like I said, it's not even included in the SDK anymore. The tessellation stuff you're seeing in the documentation was never actually supported by D3D9 hardware, and it is completely different from the programmable tessellation available in D3D11.



Quote:
Original post by cyberlorddan
Thanks for the info. I'm slowly adding HDR to my engine (I've got everything I need for this... thanks for the links :) ).
One more question... You said that MDX is a dead project. Does that mean it won't be updated anymore, or only that its documentation won't be updated? I'm asking because I saw that tessellation was introduced only in DirectX 11, yet somewhere (I can't remember where), while modifying my MDX engine, I saw a struct or enumerator or something named Tesselation show up in the IntelliSense list... :| It might be a stupid question, but I really don't know much about MDX :(


I said that I knew everything I needed (well, I was wrong). I ran into another problem :(

First of all, there's the format of the texture I render the scene to. If I set it to A16B16G16R16, then semi-transparent objects are rendered wrong (instead of blending with the content below them, they blend with the color I set in the Device.Clear() method).

Second, I have no depth buffer :| When rendering to the texture (surface) instead of the screen, the depth buffer doesn't work. Everything renders in the order I issue the draw calls (meaning that objects in the back are shown in front of others).

Any solutions? :(

By the way, I should have mentioned that setting the texture format to RGB8 solves the transparency problem, but that's not an HDR format, right?

Quote:
Original post by leet bix
Have you looked at the HDRLighting sample in the SDK?


Yes... but when I try to run it, it gives an error saying the device cannot be initialized properly... :(
I've been thinking this might be caused by my graphics card being incompatible (it's kind of old... a GeForce 6200; I'll replace it with a 9800 on May 10th, my birthday :D ).
Could this also be because I'm using MDX instead of native DirectX?
I think I'd better post the important parts of my code (I removed the parts that weren't relevant to the problem):

Let me explain first what this code should do (or what's left of it). First I initialize everything that's needed. The render() function is called every frame. The problems: objects are rendered in front of others (the depth buffer doesn't work), and the transparency is a mess... You can look at this image to see the results I get: http://i383.photobucket.com/albums/oo272/cyberlorddan/dxprob.png

I also explained some things in the image.


namespace LevelEditor
{
public partial class Main : Form
{
int widthP;
int heightP;

Device motorGrafic;

VertexBuffer vb = null;
IndexBuffer ib = null;

Matrix projection;
Matrix camera;

short[] indices = { 0, 1, 2, 2, 1, 3 };

CustomVertex.PositionNormalTextured[] terrainTriangle;
TerrainPointByPoint[] terrainDetailPBP; // TerrainPointByPoint holds the terrain data such as height, texture, pathing, and so on

Texture rocksTexture;
Texture dirtFinalTexture;

Texture originalRenderedScene;
Surface originalRenderSurface;
Surface bbS;
Surface depthStencilS;

CustomVertex.TransformedTextured[] screenVertices = new CustomVertex.TransformedTextured[6];

Effect effect;

float curWindHei;
float curWindWid;

public Main()
{
InitializeComponent();
// widthP and heightP represent the terrain size; they are initialized here (code removed because it was big and not relevant)

initializeMainEngine();
initializeBuffers();
initializeMeshes();
initializeTextures();

vd = new VertexDeclaration(motorGrafic, velements);

originalRenderedScene = new Texture(motorGrafic, this.Width, this.Height, 1, Usage.RenderTarget, Format.A16B16G16R16, Pool.Default);
originalRenderSurface = originalRenderedScene.GetSurfaceLevel(0);

}
void initializeBuffers()
{
vb = new VertexBuffer(typeof(CustomVertex.PositionNormalTextured), 4, motorGrafic, Usage.Dynamic | Usage.WriteOnly, CustomVertex.PositionNormalTextured.Format, Pool.Default);
vb.Created += new EventHandler(this.OnVertexBufferCreate);
OnVertexBufferCreate(vb, null);
ib = new IndexBuffer(typeof(short), indices.Length, motorGrafic, Usage.WriteOnly, Pool.Default);
ib.Created += new EventHandler(this.OnIndexBufferCreate);
}
void initializeMainEngine()
{
PresentParameters paramPrez = new PresentParameters();
paramPrez.SwapEffect = SwapEffect.Discard;
paramPrez.Windowed = true;
paramPrez.MultiSample = MultiSampleType.FourSamples;
paramPrez.AutoDepthStencilFormat = DepthFormat.D16;
paramPrez.EnableAutoDepthStencil = true;
paramPrez.BackBufferFormat = Format.X8R8G8B8;

motorGrafic = new Device(0, DeviceType.Hardware, this.splitContainer1.Panel2, CreateFlags.SoftwareVertexProcessing, paramPrez);
effect = Effect.FromFile(motorGrafic, "defaultEffect.fx", null, ShaderFlags.None, null);
}
void initializeTextures()
{
//removed code
}
void initializeMeshes()
{
//removed code
}
void initializeTerrain()
{
//removed code

}
void OnIndexBufferCreate(object sender, EventArgs e)
{
//removed code
}

void OnVertexBufferCreate(object sender, EventArgs e)
{
VertexBuffer buffer = (VertexBuffer)sender;
originalRenderedScene = new Texture(motorGrafic, this.splitContainer1.Panel2.Width, this.splitContainer1.Panel2.Height, 1, Usage.RenderTarget, Format.A16B16G16R16F, Pool.Default);
originalRenderSurface = originalRenderedScene.GetSurfaceLevel(0);


//some code removed here

}

void generateTerrainData(int whichOne)
{
//code removed here
}

Point GetMouseCoordonates()
{
//code removed here
}

void render()
{
projection = Matrix.PerspectiveFovLH((float)Math.PI / 4, curWindWid / curWindHei, 0.1f, 50.0f);
camera = Matrix.LookAtLH(currentCameraPosition, currentCameraTarget, currentCameraUp);

bbS = motorGrafic.GetBackBuffer(0, 0, BackBufferType.Mono);

motorGrafic.SetRenderTarget(0, originalRenderSurface);

motorGrafic.Indices = ib;
motorGrafic.VertexDeclaration = vd;
motorGrafic.RenderState.SourceBlend = Blend.SourceAlpha;
motorGrafic.RenderState.DestinationBlend = Blend.InvSourceAlpha;
motorGrafic.SetStreamSource(0, vb, 0);
motorGrafic.BeginScene();

motorGrafic.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.CornflowerBlue.ToArgb(), 1, 0);

motorGrafic.RenderState.AlphaBlendEnable = true;
motorGrafic.RenderState.ZBufferEnable = true;

effect.SetValue("xColoredTexture", rocksTexture);
effect.Technique = "Simplest";
effect.Begin(0);
effect.BeginPass(0);
effect.SetValue("xViewProjection", Matrix.Translation(trackBar1.Value, trackBar2.Value, trackBar3.Value) * camera * projection);
effect.SetValue("xRot", Matrix.Translation(trackBar1.Value, trackBar2.Value, trackBar3.Value));
motorGrafic.SetTexture(0, rocksTexture);
motorGrafic.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, 4, 0, 2); // i use this to render the position of my light

// i render here my scene (loop over scene objects removed)

effect.EndPass();
effect.End();
motorGrafic.EndScene();

motorGrafic.RenderState.Lighting = false;
motorGrafic.SetRenderTarget(0, bbS);
motorGrafic.SetTexture(0, originalRenderedScene);
motorGrafic.Clear(ClearFlags.Target, Color.Red, 1, 0);
motorGrafic.BeginScene();


motorGrafic.VertexFormat = CustomVertex.TransformedTextured.Format;
motorGrafic.RenderState.CullMode = Cull.None;

motorGrafic.DrawUserPrimitives(PrimitiveType.TriangleList, 2, screenVertices);

motorGrafic.EndScene();
motorGrafic.Present();
}

void setCameraPosition(float xCamPozS, float yCamPozS, float zCamPozS)
{

////removed code
}
void setCameraTarget(float xCamTarS, float yCamTarS, float zCamTarS)
{
//removed code
}
}
}


...so how do I fix this?

originalRenderedScene = new Texture(motorGrafic, this.Width, this.Height, 1, Usage.RenderTarget, Format.A16B16G16R16, Pool.Default);
originalRenderSurface = originalRenderedScene.GetSurfaceLevel(0);


Only the latest DirectX 10 compatible graphics cards (NVIDIA G8x) support alpha blending, filtering, and multi-sampling on a 16:16:16:16 render target. Graphics cards that support the 10:10:10:2 render target format support alpha blending and multi-sampling of that format (ATI R5 series). Some DirectX 9 graphics cards that support the 16:16:16:16 format support alpha blending and filtering (NVIDIA G7x); others support alpha blending and multi-sampling but not filtering (ATI R5 series). None of the alternative color spaces (HSV, CIE Yxy, L16uv, RGBE) supports alpha blending, so all blending operations still have to happen in RGB space.
An implementation of a high-dynamic-range renderer that renders into 8:8:8:8 render targets might distinguish between opaque and transparent objects. The opaque objects are stored in a buffer that uses the CIE Yxy color model or the L16uv color model to distribute precision over all four channels of the render target. Transparent objects that require alpha blending would be stored in another 8:8:8:8 render target in RGB space. Therefore, only opaque objects would get better color precision.
To give transparent and opaque objects the same color precision, a multiple-render-target setup consisting of two 8:8:8:8 render targets might be used. For each color channel, bits 1-8 would be stored in the first render target and bits 5-12 in the second (an "RGB12AA" render target format). This way there is a 4-bit overlap, which should be good enough for alpha blending.
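The overlapped-bits idea can be sanity-checked with plain integers. This sketch (illustrative Python; in a real renderer the packing would happen in the pixel shader) stores a 12-bit value as the low byte in one target and the top eight bits in the other, leaving a four-bit overlap:

```python
def split12(v):
    # Pack a 12-bit color value into two 8-bit channels with a 4-bit overlap:
    # the first render target gets the low byte, the second the top eight bits.
    assert 0 <= v < 4096
    return v & 0xFF, v >> 4

def merge12(lo, hi):
    # Reconstruct: the high channel supplies the top eight bits,
    # the low channel the bottom four.
    return (hi << 4) | (lo & 0x0F)

for v in (0, 255, 1234, 4095):
    lo, hi = split12(v)
    assert merge12(lo, hi) == v
print("12-bit round-trip ok")
```

Round-tripping every 12-bit value through the two channels loses nothing; the overlap bits are where blending error in one target can be reconciled with the other.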
Quote:
Original post by leet bix

originalRenderedScene = new Texture(motorGrafic, this.Width, this.Height, 1, Usage.RenderTarget, Format.A16B16G16R16, Pool.Default);
originalRenderSurface = originalRenderedScene.GetSurfaceLevel(0);


Only the latest DirectX 10 compatible graphics cards (NVIDIA G8x) support alpha blending, filtering, and multi-sampling on a 16:16:16:16 render target. Graphics cards that support the 10:10:10:2 render target format support alpha blending and multi-sampling of that format (ATI R5 series). Some DirectX 9 graphics cards that support the 16:16:16:16 format support alpha blending and filtering (NVIDIA G7x); others support alpha blending and multi-sampling but not filtering (ATI R5 series). None of the alternative color spaces (HSV, CIE Yxy, L16uv, RGBE) supports alpha blending, so all blending operations still have to happen in RGB space.
An implementation of a high-dynamic-range renderer that renders into 8:8:8:8 render targets might distinguish between opaque and transparent objects. The opaque objects are stored in a buffer that uses the CIE Yxy color model or the L16uv color model to distribute precision over all four channels of the render target. Transparent objects that require alpha blending would be stored in another 8:8:8:8 render target in RGB space. Therefore, only opaque objects would get better color precision.
To give transparent and opaque objects the same color precision, a multiple-render-target setup consisting of two 8:8:8:8 render targets might be used. For each color channel, bits 1-8 would be stored in the first render target and bits 5-12 in the second (an "RGB12AA" render target format). This way there is a 4-bit overlap, which should be good enough for alpha blending.


Thanks for the answer. So I'll have to wait until I get a new NVIDIA graphics card... I could use the time to move the code to C++.
Anyway, I still can't get the depth buffer to work properly. This isn't a hardware problem, because the HDRFormats sample shows the teapot as it should. Any solution for the depth buffer problem?
I don't think you need newer hardware; just use a different render target format to store your HDR values.
I don't know what's wrong with the depth buffer, but I don't think you should have to use a floating-point buffer to hold that information; 256 deltas per channel should be plenty.
Quote:
Original post by leet bix
I don't think you need newer hardware; just use a different render target format to store your HDR values.
I don't know what's wrong with the depth buffer, but I don't think you should have to use a floating-point buffer to hold that information; 256 deltas per channel should be plenty.


I 'solved' the problem with the depth buffer: I had multisampling turned on. When I turned it off, the depth buffer worked as it should.
But now a new, probably stupid, question arises: how do I turn multisampling on without 'damaging' the depth buffer?
You need to create a multisampled depth buffer to match your color buffer. The color and depth buffers must always match in multisample modes (and no multisampling is also a multisample mode ;)).

/Simon