DX11 Technical papers on real time rendering


Hey guys,

 

I have to provide a demonstration and give a seminar as part of a college course this semester. As usual, I chose computer graphics as the domain of my seminar, and decided to implement a rendering technique and show it as part of the demonstration.

 

I have found a couple of rendering techniques interesting, namely SSAO (from Frank Luna's DX11 book) and the Bokeh effect ( https://mynameismjp.wordpress.com/2011/02/28/bokeh/ (thanks MJP!) )

 

BUT, for some reason, they require students to cite a technical paper as part of the seminar report, and it should have at least a smattering of relation to the rendering technique that I'll be implementing. Now I know what I'm supposed to present, but so far I just can't seem to find any good *related* technical papers which I could use. The main reason for that is that they want us to choose papers from IEEE/ACM dated 2012 onwards. With ACM as an option, I can look at SIGGRAPH papers too. BUT if I can find a pre-2012 paper which is still good, exceptions _could_ be made.

 

tl;dr :

 

Could anyone provide me references to technical papers which are at least *somehow* related to SSAO and Bokeh (or algorithms used for Bokeh), from IEEE and/or ACM/SIGGRAPH?

 

EDIT: Just to clarify, this isn't research, just a computer graphics seminar where I implement a rendering technique for demonstration.

Edited by jammm


Wouldn't it make more sense to find one or more interesting papers from those journals and implement them?

Taking existing/available implementations, adding a couple of references, and submitting them as your own research is NOT how you get a degree...


Wouldn't it make more sense to find one or more interesting papers from those journals and implement them?

Taking existing/available implementations, adding a couple of references, and submitting them as your own research is NOT how you get a degree...

It's not really research, just a presentation. As I said, it's just a seminar. Nowhere in the post have I specified that this is research.

Edited by jammm


 

Wouldn't it make more sense to find one or more interesting papers from those journals and implement them?

Taking existing/available implementations, adding a couple of references, and submitting them as your own research is NOT how you get a degree...

It's not really research, just a presentation. As I said, it's just a seminar. Nowhere in the post have I specified that this is research.

 

 

The following advice is for if you're genuinely interested in learning here; if your priority is to pass the seminar as easily as possible, then I guess you can disregard it.

 

The point of these exercises is typically to have you research a topic you're unfamiliar with and show that you're capable of picking up knowledge and understanding what's going on on your own (by crawling through refs, etc.). The post-2012 criterion is probably there specifically because most pre-2012 algorithms have simple tutorials or at least sample projects available that you could copy, or which do a large part of the "understanding" for you and hand it to you in a simpler, more digestible form, at the cost of depth and accuracy.

 

That may not seem like a big thing, but if you're unfamiliar with it, there's quite a difference between crawling papers to understand and implement something, and reading a tutorial and implementing what's presented there. But paper crawling is important if you don't want to wait around for others to explain things to you when new papers come out. Just as an example, most tutorials you find will go fairly easy on the math and pretty much always sweep something under the rug that's extremely important for a detailed understanding of the relevant field, as opposed to just the specific algorithm presented in the tutorial.

 

Also it's kind of lame to implement something like the basic SSAO algorithm that Crytek came up with when lots of improvements in that area have been made since.
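Just to make "basic" concrete, the hemisphere-flavoured version of that idea boils down to something like the sketch below: sample a handful of points around the pixel, reproject them, and count how many land behind the depth buffer. This is a toy version with made-up resource names, not the actual Crytek or Luna code, and it assumes a left-handed view space (+Z into the screen) and a linear view-space depth buffer.

// Toy hemisphere-sampling SSAO sketch (hypothetical names, not Crytek's or Luna's code).
Texture2D    DepthTex   : register(t0);   // linear view-space depth
SamplerState PointClamp : register(s0);

static const uint  KERNEL_SIZE = 16;
static const float RADIUS      = 0.5f;    // sampling radius in view-space units

cbuffer SSAOParams : register(b0)
{
    float4x4 Proj;                        // view -> clip projection
    float4   Kernel[KERNEL_SIZE];         // random offsets inside the unit +Z hemisphere
};

// P, N = view-space position and normal of the pixel being shaded.
float ComputeAO(float3 P, float3 N)
{
    float occlusion = 0.0f;

    for (uint i = 0; i < KERNEL_SIZE; ++i)
    {
        // Flip samples that point into the surface so they stay in the hemisphere around N.
        float3 dir = Kernel[i].xyz;
        if (dot(dir, N) < 0.0f) dir = -dir;

        float3 samplePos = P + dir * RADIUS;

        // Project the sample point back to screen space...
        float4 clip = mul(float4(samplePos, 1.0f), Proj);
        float2 uv   = clip.xy / clip.w * float2(0.5f, -0.5f) + 0.5f;

        // ...and compare against the depth actually stored there.
        float sceneDepth = DepthTex.SampleLevel(PointClamp, uv, 0).r;
        float rangeCheck = saturate(RADIUS / (abs(P.z - sceneDepth) + 1e-4f));
        occlusion += (sceneDepth < samplePos.z - 0.02f) ? rangeCheck : 0.0f;
    }

    return 1.0f - occlusion / KERNEL_SIZE;   // 1 = fully open, 0 = fully occluded
}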

 

But now, some practical help: this thesis from 2013 is a pretty good overview of some relatively recent advances in SSAO (it compares six specific algorithms).


Regarding DOF/Bokeh: you should most definitely check out this presentation from Crytek (from 2013), and possibly also this 2014 presentation about COD: AW. In my opinion, the purely gather-based approaches are more scalable in terms of performance, and can give some really great results. The first presentation also has some good references to earlier papers that you can check out.
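To make "gather-based" concrete: instead of scattering a bokeh sprite per source pixel, each pixel gathers its neighbours within its own circle of confusion and weights them. A rough sketch of that idea follows; the names and the weighting are hypothetical, not taken from either presentation.

// Hypothetical gather-style DOF pass -- a sketch of the idea, not Crytek's or MJP's code.
Texture2D    ColorTex    : register(t0); // scene colour
Texture2D    CoCTex      : register(t1); // per-pixel circle-of-confusion radius, in pixels
SamplerState LinearClamp : register(s0);

static const uint NUM_TAPS = 16;

float4 GatherDOF(float2 uv, float2 texelSize)
{
    float  centerCoC = CoCTex.SampleLevel(LinearClamp, uv, 0).r;
    float4 sum       = float4(ColorTex.SampleLevel(LinearClamp, uv, 0).rgb, 1.0f);

    for (uint i = 0; i < NUM_TAPS; ++i)
    {
        // Golden-angle spiral gives a cheap, reasonably even disk distribution.
        float  angle  = i * 2.39996f;
        float  radius = sqrt((i + 0.5f) / NUM_TAPS);
        float2 offset = radius * float2(cos(angle), sin(angle));

        float2 sampleUV = uv + offset * centerCoC * texelSize;
        float3 c        = ColorTex.SampleLevel(LinearClamp, sampleUV, 0).rgb;
        float  coc      = CoCTex.SampleLevel(LinearClamp, sampleUV, 0).r;

        // Weight a neighbour by whether its own CoC is big enough to reach this pixel,
        // so sharp foreground pixels don't bleed into the blur.
        float w = saturate(coc - radius * centerCoC + 1.0f);
        sum += float4(c * w, w);
    }
    return float4(sum.rgb / sum.a, 1.0f);
}

In practice these passes usually run at reduced resolution with a separate near/far field split, but the loop above is the core of the gather.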


 

 

The point of these exercises is typically to have you research a topic you're unfamiliar with and show that you're capable of picking up knowledge on your own. [...] But now, some practical help: this thesis from 2013 is a pretty good overview of some relatively recent advances in SSAO (it compares six specific algorithms).

 

I agree with you; I do intend to learn from implementing these algorithms, but the reason I'm going for a simple-to-implement algorithm is so that I can first learn how it works, then implement a paper that improves on it. Also, I only have a month to prepare for this, so I have to be realistic about my goals and what I can expect to have done by the end of the month.

 

Thanks for the thesis link! I'm sure I'll have a great time going through it, understanding the algorithms and, if possible, implementing one of them as part of my demonstration!

 

 

 

Regarding DOF/Bokeh: you should most definitely check out this presentation from Crytek (from 2013), and possibly also this 2014 presentation about COD: AW. In my opinion, the purely gather-based approaches are more scalable in terms of performance, and can give some really great results. The first presentation also has some good references to earlier papers that you can check out.

Thanks a lot for the links! That's some great information right there :)

I also found a paper here which talks about an advancement in DOF/Bokeh, offering better performance than normal non-separable filtering. It seems to be cited in the Crytek presentation too. The plan is that I'll learn DOF/Bokeh from your samples, implement them in my engine, and try to implement this paper after that. I hope you don't mind me referencing your approach while implementing DOF/Bokeh in my engine.
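For anyone reading along, the performance argument is basically tap count: an NxN 2D filter needs N*N samples per pixel, while a separable filter does a horizontal pass followed by a vertical pass for roughly 2N. A toy sketch of one such 1D pass (made-up names, not the paper's actual filter):

// Toy 1D blur pass to illustrate separable filtering cost (hypothetical names).
Texture2D    ColorTex    : register(t0);
SamplerState LinearClamp : register(s0);

static const int RADIUS = 8; // a 17-tap 1D kernel instead of a 17x17 2D one

// Horizontal pass; the vertical pass is identical with dir = float2(0, 1).
float4 Blur1D(float2 uv, float2 texelSize, float2 dir)
{
    float4 sum = 0.0f;
    for (int i = -RADIUS; i <= RADIUS; ++i)
        sum += ColorTex.SampleLevel(LinearClamp, uv + dir * i * texelSize, 0);
    return sum / (2 * RADIUS + 1);
}

(A plain box blur like this isn't a bokeh shape by itself; separable bokeh techniques typically build the aperture shape out of several skewed 1D passes, but the cost argument is the same.)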

 

I'll be sure to cite your blog and the paper I found in my report.

Edited by jammm


  • Similar Content

    • By mister345
Hi, can somebody please tell me in clear, simple steps how to debug and step through an HLSL shader file?
      I already did Debug > Start Graphics Debugging > then captured some frames from Visual Studio and
      double clicked on the frame to open it, but no idea where to go from there.
       
I've been searching for hours and there's no information on this, not even on the Microsoft website!
They say to "open the Graphics Pixel History window" but there is no such window!
Then they say, in the "Pipeline Stages", to choose "Start Debugging", but the Start Debugging option is nowhere to be found in the whole interface.
      Also, how do I even open the hlsl file that I want to set a break point in from inside the Graphics Debugger?
       
All I want to do is set a break point in a specific HLSL file, step through it, and see the data, but this is so unbelievably complicated
      and Microsoft's instructions are horrible! Somebody please, please help.
       
       
       

    • By mister345
I finally ported Rastertek's tutorial #42 on soft shadows and blur shading. This tutorial has a ton of really useful effects, and there's no working version anywhere online.
Unfortunately it just draws a black screen. Not sure what's causing it. I'm guessing the camera or ortho matrix transforms are wrong, light directions, or maybe texture resources not being properly initialized. I didn't change any of the variables though, only upgraded all the types and functions from DirectX3DVector3 to XMFLOAT3, and used DirectXTK for texture loading. If anyone is willing to take a look at what might be causing the black screen, maybe something pops out to you; let me know, thanks.
      https://github.com/mister51213/DX11Port_SoftShadows
       
      Also, for reference, here's tutorial #40 which has normal shadows but no blur, which I also ported, and it works perfectly.
      https://github.com/mister51213/DX11Port_ShadowMapping
       
    • By xhcao
Is there a Direct3D 11 API function like glMemoryBarrier in OpenGL? For example, if I bind a texture to a compute shader, the compute shader writes some values to the texture, then I dispatch the compute work, and after that I read the texture contents back on the CPU side. I know that in OpenGL we can call glMemoryBarrier before reading to ensure that all of the texture's contents have been updated by the compute shader.
How do I handle incoherent memory access in Direct3D 11? Thank you.
    • By _Engine_
The Atum engine is a newcomer among game engines. Most game engines focus on rendering techniques in their feature lists. The main task of Atum is to deliver the best toolset; that's why, I hope, Atum will be a good lightweight alternative to Unity for indie games. Atum already has a fully working editor with the ability to playtest the edited scene. All the system code is built around simple ideas and focuses on easy-to-use functionality. That's why the code is minimized as much as possible.
Currently the engine consists of:
- A scene editor with the ability to playtest the edited scene;
- A powerful system for binding properties into the editor;
- A render system based on DX11 but designed to be multi-API, so adding support for another graphics API is planned;
- A controls system based on aliases;
- A font system based on stb_truetype.h;
- Support for PhysX 3.0; there are samples in the repo that use physics;
- Network code for creating a server/client; there is some code in the repo for creating a simple network game.
I plan to use this engine in a multiplayer game, so I will definitely keep evolving it. I also plan to add support for mobile devices. And of course, the main focus is to create a toolset that makes game creation easier.
Link to the source code repo: https://github.com/ENgineE777/Atum
A video of the workflow in the track-based editor can be found at the following link:
       
       

    • By mister345
I made a spotlight that
1. Projects 3D models onto a render target from each light's POV to simulate shadows
2. Cuts a circle out of the square of light that has been projected onto the render target
as a result of the light frustum, then only lights up the pixels inside that circle
(except the shadowed parts of course), so you don't see the square edges of the projected frustum.
       
      After doing an if check to see if the dot product of light direction and light to vertex vector is greater than .95
      to get my initial cutoff, I then multiply the light intensity value inside the resulting circle by the same dot product value,
      which should range between .95 and 1.0.
       
      This should give the light inside that circle a falloff from 100% lit to 0% lit toward the edge of the circle. However,
      there is no falloff. It's just all equally lit inside the circle. Why on earth, I have no idea. If someone could take a gander
      and let me know, please help, thank you so much.
float CalculateSpotLightIntensity(
    float3 LightPos_VertexSpace,
    float3 LightDirection_WS,
    float3 SurfaceNormal_WS)
{
    //float3 lightToVertex = normalize(SurfacePosition - LightPos_VertexSpace);
    float3 lightToVertex_WS = -LightPos_VertexSpace;

    float dotProduct = saturate(dot(normalize(lightToVertex_WS), normalize(LightDirection_WS)));

    // METALLIC EFFECT (deactivate for now)
    float metalEffect = saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace)));

    if (dotProduct > .95 /*&& metalEffect > .55*/)
    {
        return saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace)));
        //return saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace))) * dotProduct;
        //return dotProduct;
    }
    else
    {
        return 0;
    }
}

float4 LightPixelShader(PixelInputType input) : SV_TARGET
{
    float2 projectTexCoord;
    float depthValue;
    float lightDepthValue;
    float4 textureColor;

    // Set the bias value for fixing the floating point precision issues.
    float bias = 0.001f;

    // Set the default output color to the ambient light value for all pixels.
    float4 lightColor = cb_ambientColor;

    /////////////////// NORMAL MAPPING //////////////////
    float4 bumpMap = shaderTextures[4].Sample(SampleType, input.tex);

    // Expand the range of the normal value from (0, +1) to (-1, +1).
    bumpMap = (bumpMap * 2.0f) - 1.0f;

    // Change the COORDINATE BASIS of the normal into the space represented by basis vectors tangent, binormal, and normal!
    float3 bumpNormal = normalize((bumpMap.x * input.tangent) + (bumpMap.y * input.binormal) + (bumpMap.z * input.normal));

    //////////////// LIGHT LOOP ////////////////
    for (int i = 0; i < NUM_LIGHTS; ++i)
    {
        // Calculate the projected texture coordinates.
        projectTexCoord.x =  input.vertex_ProjLightSpace[i].x / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f;
        projectTexCoord.y = -input.vertex_ProjLightSpace[i].y / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f;

        if ((saturate(projectTexCoord.x) == projectTexCoord.x) && (saturate(projectTexCoord.y) == projectTexCoord.y))
        {
            // Sample the shadow map depth value from the depth texture using the sampler at the projected texture coordinate location.
            depthValue = shaderTextures[6 + i].Sample(SampleTypeClamp, projectTexCoord).r;

            // Calculate the depth of the light.
            lightDepthValue = input.vertex_ProjLightSpace[i].z / input.vertex_ProjLightSpace[i].w;

            // Subtract the bias from the lightDepthValue.
            lightDepthValue = lightDepthValue - bias;

            float lightVisibility = shaderTextures[6 + i].SampleCmp(SampleTypeComp, projectTexCoord, lightDepthValue);

            // Compare the depth of the shadow map value and the depth of the light to determine whether to shadow or to light this pixel.
            // If the light is in front of the object then light the pixel, if not then shadow this pixel since an object (occluder) is casting a shadow on it.
            if (lightDepthValue < depthValue)
            {
                // Calculate the amount of light on this pixel.
                float lightIntensity = saturate(dot(bumpNormal, normalize(input.lightPos_LS[i])));

                if (lightIntensity > 0.0f)
                {
                    // Determine the final diffuse color based on the diffuse color and the amount of light intensity.
                    float spotLightIntensity = CalculateSpotLightIntensity(
                        input.lightPos_LS[i], // NOTE - this is NOT NORMALIZED!!!
                        cb_lights[i].lightDirection,
                        bumpNormal /*input.normal*/);

                    lightColor += cb_lights[i].diffuseColor * spotLightIntensity * .18f; // spotlight
                    //lightColor += cb_lights[i].diffuseColor * lightIntensity * .2f;    // square light
                }
            }
        }
    }

    // Saturate the final light color.
    lightColor = saturate(lightColor);
    // lightColor = saturate( CalculateNormalMapIntensity(input, lightColor, cb_lights[0].lightDirection));

    // TEXTURE ANIMATION - Sample pixel color from texture at this texture coordinate location.
    input.tex.x += textureTranslation;

    // BLENDING
    float4 color1 = shaderTextures[0].Sample(SampleTypeWrap, input.tex);
    float4 color2 = shaderTextures[1].Sample(SampleTypeWrap, input.tex);
    float4 alphaValue = shaderTextures[3].Sample(SampleTypeWrap, input.tex);
    textureColor = saturate((alphaValue * color1) + ((1.0f - alphaValue) * color2));

    // Combine the light and texture color.
    float4 finalColor = lightColor * textureColor;

    /////// TRANSPARENCY /////////
    //finalColor.a = 0.2f;

    return finalColor;
}
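For what it's worth, one likely reason for the missing falloff: the value returned inside the if doesn't use dotProduct at all, and even the commented-out variant only scales by a number between 0.95 and 1.0, which is visually indistinguishable from constant. Remapping that range to a full 0-to-1 factor, along these lines (just a sketch against the code above, not necessarily the whole fix), would make the falloff visible:

// Sketch: remap the cone dot product from [cutoff, 1] to [0, 1] so intensity
// actually reaches zero at the circle's edge.
static const float SPOT_CUTOFF = 0.95f;

float SpotFalloff(float dotProduct)
{
    // dotProduct in [0.95, 1.0] only varies by ~5%, which is why the circle looks
    // uniformly lit; stretching it to the full [0, 1] range makes the falloff visible.
    return saturate((dotProduct - SPOT_CUTOFF) / (1.0f - SPOT_CUTOFF));
}

// ...then inside CalculateSpotLightIntensity:
//     return saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace)))
//            * SpotFalloff(dotProduct);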
      Light_vs.hlsl
      Light_ps.hlsl