trevex

OpenGL Deferred Rendering Problem


I am currently working on a small deferred rendering engine for this semester's coursework assignment. The code compiles and runs in Visual Studio without any problems, except that nothing is drawn to the screen, and that's where the problems start.

The deferred rendering system is nothing special; it is basically a first draft of the algorithms presented in "Practical Rendering with DirectX 11" [2011, by Jason Zink, Matt Pettineo, Jack Hoxley]. No optimizations such as attribute packing are used, and the lighting system is simple (no shadows).

The problem seems to be the fullscreen quad (simple, admittedly inefficient fullscreen quads are used for the lighting passes) or something more general. Since I get no errors in the debug log, I tried to use PIX and PerfStudio to get more information on the GPU side. Unfortunately, PIX and PerfStudio fail before the first frame with this error:

Invalid allocation size: 4294967295 bytes

So for whatever reason it seems to allocate a buffer of -1 bytes. Oddly, everything is fine when debugging in Visual Studio, and if I attach a debugger to the PIX process and break when the error happens, I land in a debugger header file.

I just started using DirectX, with prior OpenGL experience, so I hope I did not get something fundamentally wrong. I used the executable output by the compiler in debug mode.

To avoid general logic mistakes, here is roughly what I currently do:

1. SetDepthStencilState (with DepthTest enabled)
2. clear all RenderTargetViews (I was unsure whether to clear the gBuffer, but it is currently being cleared as well) and the DepthStencilBuffer
3. bind gBuffer and DepthStencilBuffer
4. render Geometry
5. disable DepthTest
6. bind Backbuffer
7. render all lights with the associated shader (since I am using the effects framework, the BlendState is set in the shader)
8. render CEGUI (works fine even if rest doesn't output anything)
9. present()

The lights are, as already mentioned, fullscreen quads. The lighting technique simply passes the position through, so the quad's vertices are in the range [-1, 1].

If you need any additional information, let me know.

Thanks, Nik

P.S. Sorry for the bad English...


EDIT:

For further information, here are the vertices and indices of the fullscreen quad:

glm::vec3 vertices[] =
{
    glm::vec3(-1.0f, -1.0f,  1.0f),
    glm::vec3(-1.0f,  1.0f,  1.0f),
    glm::vec3( 1.0f, -1.0f,  1.0f),
    glm::vec3( 1.0f,  1.0f,  1.0f),
};

UINT indices[] = { 0, 3, 2, 2, 0, 1 };
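
In case the buffer setup matters: the quad's buffers are created roughly like this (a simplified sketch; m_QuadVB, m_QuadIB and m_d3dDevice are placeholder names for this post, not the exact engine code):

D3D11_BUFFER_DESC vbd = {};
vbd.Usage = D3D11_USAGE_IMMUTABLE;
vbd.ByteWidth = sizeof(vertices);            // 4 * sizeof(glm::vec3)
vbd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
D3D11_SUBRESOURCE_DATA vinit = { vertices };
HR(m_d3dDevice->CreateBuffer(&vbd, &vinit, &m_QuadVB));

D3D11_BUFFER_DESC ibd = {};
ibd.Usage = D3D11_USAGE_IMMUTABLE;
ibd.ByteWidth = sizeof(indices);             // 6 * sizeof(UINT)
ibd.BindFlags = D3D11_BIND_INDEX_BUFFER;
D3D11_SUBRESOURCE_DATA iinit = { indices };
HR(m_d3dDevice->CreateBuffer(&ibd, &iinit, &m_QuadIB));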


And a rough walkthrough of the code:

// before geometry pass
m_d3dImmediateContext->RSSetState(m_RasterState);
m_d3dImmediateContext->OMSetDepthStencilState(m_GeometryDepthStencilState, 1);
m_d3dImmediateContext->ClearRenderTargetView(m_RenderTargetView, reinterpret_cast<const float*>(&clearColor));
m_d3dImmediateContext->ClearRenderTargetView(m_gBuffer[0], reinterpret_cast<const float*>(&clearColor));
m_d3dImmediateContext->ClearRenderTargetView(m_gBuffer[1], reinterpret_cast<const float*>(&clearColor));
m_d3dImmediateContext->ClearRenderTargetView(m_gBuffer[2], reinterpret_cast<const float*>(&clearColor));
m_d3dImmediateContext->ClearRenderTargetView(m_gBuffer[3], reinterpret_cast<const float*>(&clearColor));
m_d3dImmediateContext->ClearDepthStencilView(m_DepthStencilView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
m_d3dImmediateContext->OMSetRenderTargets(4, m_gBuffer, m_DepthStencilView);

// before lighting pass
m_d3dImmediateContext->OMSetDepthStencilState(m_LightingDepthStencilState, 1);
m_d3dImmediateContext->OMSetRenderTargets(1, &m_RenderTargetView, m_DepthStencilView);
DXLightingShader->enable();

// DXLightingShader::enable (the static_cast is necessary because the engine
// supports both OpenGL and DirectX; this is my dirty way of handling that)
static_cast<SDXRenderInfo*>(g_RenderInfo)->context->IASetInputLayout(m_InputLayout);
static_cast<SDXRenderInfo*>(g_RenderInfo)->context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
m_fxNormalMap->SetResource(m_NormalView);
m_fxDiffuseMap->SetResource(m_DiffuseView);
m_fxSpecularMap->SetResource(m_SpecularView);
m_fxPositionMap->SetResource(m_PositionView);
m_fxCameraPos->SetFloatVector(Camera->getPosition());

// depending on light type, this is how it is drawn
for(UINT p = 0; p < m_DirectionalLightDesc.Passes; ++p)
{
    m_DirectionalLight->GetPassByIndex(p)->Apply(0, static_cast<SDXRenderInfo*>(g_RenderInfo)->context);
    static_cast<SDXRenderInfo*>(g_RenderInfo)->context->DrawIndexed(6, 0, 0);
}

// present function, called after the light passes
DXLightingShader->disable();
CEGUI::System::getSingleton().renderGUI();
HR(m_SwapChain->Present(0, 0));

// DXLightingShader::disable (sets the G-buffer SRVs to NULL so they can be
// rebound as render targets next frame)
m_fxNormalMap->SetResource(NULL);
m_fxDiffuseMap->SetResource(NULL);
m_fxSpecularMap->SetResource(NULL);
m_fxPositionMap->SetResource(NULL);
m_DirectionalLight->GetPassByIndex(0)->Apply(0, static_cast<SDXRenderInfo*>(g_RenderInfo)->context);



If you need more information or details of the shader implementation, let me know.

Quote:
Invalid allocation size: 4294967295 bytes
What makes you think this has anything to do with your deferred rendering setup? It could simply be an uninitialized variable somewhere else in your code. Which header file do you break into?


Thanks for the tip. I am currently in Christmas stress, but I tried to debug the application again.

So I figured out what the problem was, and it was quite simple actually... Visual Studio has a different runtime environment, so PIX wasn't able to find some files.

I am now investigating why it is not rendering; I'll hopefully come back with more information later.


So I started debugging a frame with PIX:

The 4 gBuffer textures are successfully being rendered.

The problem seems to be the fullscreen quad:

[PIX screenshot: viewport output of the fullscreen-quad draw call]

Since the viewport output shows only a white line, the quad seems to get discarded?


Sure, sorry for the delayed reply, Christmas time...

[code]
float4 VSMain(in float3 Position : POSITION) : SV_Position
{
    return float4(Position, 0.0f);
}
[/code]

For whatever reason the z value is always -1.0, even the preVS value...

But this is the vertex data in the associated buffer:

[code]
glm::vec3 vertices[] =
{
    glm::vec3(-1.0f, -1.0f,  1.0f),
    glm::vec3(-1.0f,  1.0f,  1.0f),
    glm::vec3( 1.0f, -1.0f,  1.0f),
    glm::vec3( 1.0f,  1.0f,  1.0f),
};
[/code]

As already stated, I am new to DirectX, so is there anything that can influence how vertex data is interpreted, given that even the preVS value is -1?
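
For reference, the quad's input layout is a single position element, roughly like this (a simplified sketch of what I believe the engine sets up, not the exact code):

[code]
// A wrong Format, AlignedByteOffset, or vertex-buffer stride here would
// reinterpret the vertex data before the shader ever sees it.
D3D11_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
[/code]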

Ok, I just noticed I was setting the last value to 0.0 and not 1.0...
The problem now: I changed the z value of the input vertices to 0.0f, no change; when I instead changed the shader to set the z value to 0.0f, I get some output. It is still not the correctly shaded cube, but that seems to be a problem with my lighting code...

But this would be kind of a dirty fix, so why is the z value being set to -1...?

Ah, you're already a step further. So yes, your new vertex shader is probably fine. You can even do
float4 VSMain(in float4 Position : POSITION) : SV_Position
{
    return Position;
}
since w = 1 will be filled in automatically. This is the rare occasion where the shader signature and the input layout may differ.

But your problem is elsewhere. A wrong preVS value means either your buffer initialization/update or your input layout is off - or you've bound the wrong buffer(s).
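
If it's the bindings, the calls right before the draw should look roughly like this (a sketch with assumed names; context, quadVB and quadIB are placeholders):

// Both buffers must be bound to the input assembler before DrawIndexed,
// with a stride matching the input layout (3 floats per vertex here).
UINT stride = sizeof(float) * 3;
UINT offset = 0;
context->IASetVertexBuffers(0, 1, &quadVB, &stride, &offset);
context->IASetIndexBuffer(quadIB, DXGI_FORMAT_R32_UINT, 0);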

Thanks a lot, that was it: I forgot to set the vertex buffer, and since the vertices of my cube are quite similar, I didn't notice. Can't believe I missed that...

The only thing left now is a bug in my lighting code; some surfaces stay black. The scene currently uses a single directional light for testing!
float3 CalcLighting(in float3 normal, in float3 position, in float3 diffuseAlbedo,
                    in float3 specularAlbedo, in float specularPower, uniform int gLightingMode)
{
    float3 L = 0;
    float attenuation = 1.0f;
    if (gLightingMode == POINTLIGHT || gLightingMode == SPOTLIGHT)
    {
        L = LightPos - position;
        float dist = length(L);
        attenuation = max(0, 1.0f - (dist / LightRange.x));
        L /= dist;
    }
    else if (gLightingMode == DIRECTIONALLIGHT)
    {
        L = -LightDirection;
    }
    if (gLightingMode == SPOTLIGHT)
    {
        float3 L2 = LightDirection;
        float rho = dot(-L, L2);
        attenuation *= saturate((rho - SpotlightAngles.y) / (SpotlightAngles.x - SpotlightAngles.y));
    }
    float nDotL = saturate(dot(normal, L));
    float3 diffuse = nDotL * LightColor * diffuseAlbedo;
    float3 V = CameraPos - position;
    float3 H = normalize(L + V);
    float3 specular = pow(saturate(dot(normal, H)), specularPower) * LightColor * specularAlbedo.xyz * nDotL;
    return (diffuse + specular) * attenuation;
}
The basic algorithm is straight out of the book, so I assumed there would be nothing wrong with it; the only thing I changed is using if-branches and a uniform to embed it in an effects file.

The buffers are filled with data and nothing is missing; these are the values of a black pixel that is supposed to have some color:

[PIX screenshot: G-buffer values for a black pixel]

EDIT:

Lighting shader code is on pastebin (http://pastebin.com/8gbYBLCX), because the code tags currently seem to be buggy, either escaping HTML as well or completely breaking the formatting.


What color do you expect the surface to be? The light vector L (here -LightDirection, unnormalized) is orthogonal to the surface normal, so nDotL equals zero. Since both the diffuse and the specular terms are multiplied by nDotL, the surface reflects no light. If you don't want to fade specular highlights with the light's incidence, you should remove the last '* nDotL'.
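
With that fade removed, the tail of the function would look roughly like this (as a side note, V is usually normalized before building the half vector):

float3 V = normalize(CameraPos - position);
float3 H = normalize(L + V);
float3 specular = pow(saturate(dot(normal, H)), specularPower) * LightColor * specularAlbedo.xyz;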

