phatgreen

Member
  • Content Count: 10
  • Community Reputation: 180 Neutral

About phatgreen

  1. It didn't work like before. It reads from the Proj matrix even though I'm using WorldViewProj. Weird stuff. What were you suspecting?
  2. That was the problem... Thank you so much. All the documentation I've read, and all my previous projects, used 'cb'. Why does 'b' make it work?
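For reference, the likely source of the confusion: in D3D11 HLSL, constant buffers are bound with b# registers, while cb# is how the buffers appear in the compiled shader assembly and in debugger listings. A sketch of the working declarations (the cbuffer and member names are assumed from the screenshots in the posts below):

```hlsl
// Constant buffers bind to b# registers in HLSL source; the shader
// assembly displays them as cb#, which is probably why documentation
// sometimes reads as if 'cb' were the register prefix.
cbuffer cbCamera : register(b0)   // filled from CPU-side slot 0
{
    matrix Proj;
};

cbuffer cbWorld : register(b1)    // filled from CPU-side slot 1
{
    matrix WorldViewProj;
};
```

With 'cb' in the register annotation, the intended per-buffer slot assignment is not applied, which matches the symptom described below of both buffers appearing to alias slot 0.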
  3. I am trying to draw a sphere. The vertex shader simply transforms each vertex from object space to clip space, and the pixel shader just returns a single color. I am using Visual Studio 2015, Shader Model 5.0, and DirectX 11.

     This is the result I am looking for: http://i.imgur.com/SkCnYme.png
     Here is the vertex shader code: http://i.imgur.com/aCKk09p.png

     My issue is that the WorldViewProj variable in constant buffer cb1 acts as if it refers to Proj in cb0 when the code runs. If I set Proj to WorldViewProj's value for that frame, the code works. But if Proj holds what it is supposed to, just the projection matrix, the code does not work; the screen then looks like this: http://i.imgur.com/geyuKg7.png

     If I set Proj to the WorldViewProj values and don't set WorldViewProj at all, the code still runs correctly, even though the HLSL code clearly reads from WorldViewProj, whose values are all zeros (an unset constant buffer reads as zeros; see my GPU debug output linked below). If I don't set Proj to anything, I get a black screen.

     To show this is not a CPU-side issue: the following picture is the CPU-side code that sets everything for the frame. I fill the Proj matrix with 999.0f and the WorldViewProj matrix with 111.0f: http://i.imgur.com/7wW3USh.png

     I ran the GPU debugger and saved the results. Here is the camera constant buffer: http://i.imgur.com/QmnbeVz.png and here is the world constant buffer: http://i.imgur.com/N19PmT6.png So I am setting the data correctly, and I don't understand why this is happening. Thank you for your help.

     EDIT 1: I had quite a bit of discussion about this issue on Reddit: https://www.reddit.com/r/GraphicsProgramming/comments/4x990j/directx11_multiple_constant_buffer_issue/ Please don't suggest a solution that has already been discussed there.

     Here is my constant buffer binding code: http://i.imgur.com/hrZmFI3.png

     For those with Visual Studio 2015, here is a link to the debugger capture file with the correct values set for all the matrices in the constant buffers (when you look at the results of the frame, it clearly uses the projection matrix even though the HLSL code reads the WorldViewProj matrix in cb1, hence the issue): http://www.filedropper.com/report20160811-1655

     For those reluctant to download the file, here is a screenshot of the constant buffer bind info from it: http://i.imgur.com/ucKThg1.png

     Thank you all.
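For context, a minimal sketch of how the CPU-side slot numbers are meant to line up with the shader's register declarations. This is a fragment, not a full program; the buffer names are assumed, and `context` stands for the post's immediate device context:

```
// The first argument of VSSetConstantBuffers is the slot, and it must
// match the register(b#) annotation on the corresponding cbuffer.
ID3D11Buffer* cameraCB = /* created with D3D11_BIND_CONSTANT_BUFFER */;
ID3D11Buffer* worldCB  = /* likewise */;

context->VSSetConstantBuffers(0, 1, &cameraCB); // -> register(b0)
context->VSSetConstantBuffers(1, 1, &worldCB);  // -> register(b1)
```

If the shader-side register annotations are not taking effect (as in this thread), both buffers can end up aliased to the same slot regardless of what the CPU side does, which matches the debugger output described above.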
  4. Fair enough on the view-space point. I think I even call it "eye" space in my code to distinguish it, since the transform differs from world space only in translation, not rotation, so it is not truly view space, as you said. "Eye space" is still a bit of a misnomer; perhaps "camera space" makes the most sense.

     Short of doing all of that, if you simply scale in double precision in the domain shader it would be a step in the right direction.

     Okay, now I've tried doing everything in double precision in the domain shader. Here's the code... I still get the jerking. This is interesting, because now I have no idea what is causing the jerk. I would have thought that if all these calculations are done with doubles, and only the final clip-space value is returned as a float, the jerking would stop, since no significant precision would be lost.

     I did try converting all the mesh vertices from object space to world space on the CPU before sending them to the GPU, but that did not stop the wobbling. That was also very strange, because the vertices close to the camera had full precision going into the GPU, so again I'm uncertain why I had issues.

     I prefer the method you suggested, simply scaling in double precision in the domain shader (which is what I'm trying now, and what I talked about first in this post), over the more CPU-heavy method that I can't even get to work.

     So at this point I am just trying to find why the jerk is still happening. I'm not currently looking for an efficient way; I just want to find the source, as in, which specific multiplication loses the precision when dealing with large meshes. I thought it would be any operation involving the scale matrix, because that is where I store the large radius of the sphere, but as I said above, I've altered the domain shader to do all of those matrix multiplications and vector transformations with doubles.

     One final note: in the picture I linked I use a function I wrote called "mul_4x4d_4x4d" for double matrix multiplication and "mul_4d_4x4d" for vector transformation, because I don't know whether Microsoft's mul() actually supports double precision, so I do it myself just in case. Please correct me if I'm wrong, though.

     EDIT: I just realized this: how do I even know the GPU is actually doing all this in doubles? Does my card have to support it, and if it doesn't, does it just fall back to floats?
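On the EDIT question: double-precision shader operations are an optional feature in D3D11. The device reports them through ID3D11Device::CheckFeatureSupport with D3D11_FEATURE_DOUBLES (the DoublePrecisionFloatShaderOps field of D3D11_FEATURE_DATA_DOUBLES); if the device lacks support, a shader that uses doubles should fail to create rather than silently fall back to floats. As for the helpers, here is a hypothetical reconstruction of the mul_4d_4x4d function named above, written out element by element since not every HLSL intrinsic accepts double operands. It assumes the row-vector convention, i.e. the double equivalent of mul(v, M):

```hlsl
// Hypothetical sketch of a double-precision vector transform
// (the post's mul_4d_4x4d); requires hardware double support.
double4 mul_4d_4x4d(double4 v, double4x4 m)
{
    double4 r;
    r.x = v.x * m._11 + v.y * m._21 + v.z * m._31 + v.w * m._41;
    r.y = v.x * m._12 + v.y * m._22 + v.z * m._32 + v.w * m._42;
    r.z = v.x * m._13 + v.y * m._23 + v.z * m._33 + v.w * m._43;
    r.w = v.x * m._14 + v.y * m._24 + v.z * m._34 + v.w * m._44;
    return r;
}
```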
  5. My vertex data is stored in single precision. The scaling happens when I multiply a vertex in object space by the world matrix, which contains the scale, rotation, and translation of the object; this all occurs on the GPU (in my domain shader, in my case) in single precision.

     I don't think having the camera at the origin qualifies as view space, because the view matrix accounts not only for the camera's position but also for its right, up, and look vectors. When you multiply a vertex in world space by the view matrix, you get a vertex in view space. My world-space origin just happens to be the camera, instead of some arbitrary spot, when I do work on the GPU.

     Allow me to elaborate a bit more on what I am doing. I send the vertex data of the vertices that make up my unit sphere to the GPU. Before I apply the world matrix, I feed each vertex's coordinates to a noise function and offset the vertex based on the result to get height differences.

     What I think I'll try: every frame, on the CPU, convert all the unit-sphere vertices from object space into world space (relative to the usual origin, not the camera) in doubles, then convert them to being relative to the camera in floats, so the ones close to the camera keep good precision. The problem, beyond the performance hit of all those matrix multiplications on the CPU, is that on the GPU I'll eventually need the vertices I submitted in object space so I can offset them correctly, but I've done a little thinking on that and know how I'll manage it.

     I will do some testing and report back here. Thanks for the reply.
  6. I generate a unit sphere that represents a sphere about the size of Earth (around 6.7 million meters) after being multiplied by the scaling matrix (which contains the actual radius of the sphere, so I can reuse the unit sphere for many other things). 1.0f is equal to 1 meter in my project.

     I've recorded a video displaying my problem. When I get close to the sphere, so that I'm moving around near the surface at 1 meter per second or slower, I have these jumping issues.

     There are several things I am already doing:
     - I store all object positions on the CPU as doubles (the camera and the sphere).
     - I treat the camera's position as the origin of the world when it comes to rendering, so I convert every object's world coordinates to being relative to the camera before building its world matrix each frame (this also means my view matrix is built with the camera at (0,0,0)).

     I do not know where my problem is, and I have searched in a lot of different places. I'm converting all the float input data that goes into my shaders to doubles and doing the necessary transformations and calculations with doubles on the GPU, just to see if that gets rid of the issue, but it doesn't.

     I'm pretty certain my issue arises when my shader transforms a vertex from object space (very small numbers, since it is a unit sphere in object space) to world space, at actual planet scale of millions of meters, where precision is taken away from the smaller distances.

     I am stumped.
  7. Create Texture Issue: Okay, thanks, I understand. I was going to move down to a much lower bit count after I got this version working, but it appears not to be worth it.
  8. Hello, I have a small issue. I'm trying to create a texture that will be part of my G-buffer for deferred rendering, which means I need to flag the resource with D3D11_BIND_SHADER_RESOURCE and D3D11_BIND_RENDER_TARGET.

     I am storing normals in this particular resource, and I want to use the DXGI_FORMAT_R32G32B32_FLOAT type or any other type like it (e.g. DXGI_FORMAT_R32G32B32_TYPELESS). When I try to create the resource, it fails. It works if I use DXGI_FORMAT_R32G32B32A32_FLOAT, which is something I don't understand.

     When I use only the D3D11_BIND_SHADER_RESOURCE flag, the texture is created, but of course CreateRenderTargetView fails a few lines later.

     Microsoft's page for D3D11_RENDER_TARGET_VIEW_DESC says in the remarks that DXGI_FORMAT_R32G32B32_FLOAT cannot be used if the view will be used to bind a buffer (vertex, index, constant, or stream-out). However, I am not doing any of that, so I do not understand why it fails. https://msdn.microsoft.com/en-us/library/windows/desktop/ff476201(v=vs.85).aspx

     The code I use is below. Thanks.

     const unsigned int clientWidth = uClient->getAppScreenWidth();
     const unsigned int clientHeight = uClient->getAppScreenHeight();

     releaseGBuffers();

     D3D11_TEXTURE2D_DESC texture2DDesc;
     texture2DDesc.Width = clientWidth;
     texture2DDesc.Height = clientHeight;
     texture2DDesc.MipLevels = 1;
     texture2DDesc.ArraySize = 1;
     texture2DDesc.Format = DXGI_FORMAT_R32G32B32_FLOAT;
     if (msaaEnabled)
     {
         texture2DDesc.SampleDesc.Count = 4;
         texture2DDesc.SampleDesc.Quality = msaaQuality - 1;
     }
     else
     {
         texture2DDesc.SampleDesc.Count = 1;
         texture2DDesc.SampleDesc.Quality = 0;
     }
     texture2DDesc.Usage = D3D11_USAGE_DEFAULT;
     texture2DDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
     texture2DDesc.CPUAccessFlags = 0;
     texture2DDesc.MiscFlags = 0;

     HRESULT hr = d3d11Device->CreateTexture2D(&texture2DDesc, 0, &gBufferOne);
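A likely explanation: in D3D11, render-target and multisample support for the 96-bit R32G32B32 formats is optional, while R32G32B32A32_FLOAT render targets are required of all feature level 11 hardware, which would explain why only the A32 variant succeeds (the 4x MSAA request in the code above makes failure even more likely). A fragment sketching how to query support before creating the texture, reusing the post's d3d11Device:

```
// Render-target and MSAA support for DXGI_FORMAT_R32G32B32_FLOAT is
// optional in D3D11, so query it instead of assuming it.
UINT support = 0;
HRESULT hr = d3d11Device->CheckFormatSupport(DXGI_FORMAT_R32G32B32_FLOAT, &support);
bool canRenderTarget = SUCCEEDED(hr) && (support & D3D11_FORMAT_SUPPORT_RENDER_TARGET);

UINT msaaLevels = 0;
d3d11Device->CheckMultisampleQualityLevels(DXGI_FORMAT_R32G32B32_FLOAT, 4, &msaaLevels);
bool canMsaa4x = (msaaLevels > 0);
```

If the format is unsupported as a render target, a common workaround for a normals G-buffer is a narrower format such as R16G16B16A16_FLOAT, trading the unused alpha channel for guaranteed support.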
  9. Wow, that fixed the lighting! I can't believe it was that simple. I still have the camera issue, the bad framerate, and the slicing; I'm going to go over all my matrix code and see if I can find anything.

     As of right now: 1 light - 110 fps; 10 lights - 26 fps.

     Is binding the G-buffers as resources to the deferredPS supposed to take a lot of time? I lose a bulk of my fps to that as well.
  10. I've attempted to implement deferred rendering in my engine for the first time, using Direct3D 11, HLSL, and C++ (no XNA; my own math lib), and I've got some issues that I am struggling to solve.

      A few things to know: the camera is treated as the origin. The objects and the lights have their coordinates converted from being relative to the center of the world to being relative to the camera's position in the world. The objects being generated are geospheres.

      Issues:
      1. There are bugs in my lighting (I don't even know if they're CPU-side or GPU-side) that I cannot figure out.
      2. My camera is acting very weird. When I pitch the camera, the entire world pitches; when I turn (yaw), the camera yaws correctly and the world doesn't move, as it shouldn't. When I strafe (move along the camera's right vector), it moves as if the right vector has never changed since the application started; that is, the right vector is always parallel to the world's x-axis. Same with the look vector. This camera issue is very strange to me, because I had the camera as the origin in a different project and it works just fine, so I don't think the movement code in my Camera class is at fault.
      3. When I switch to fullscreen there is some sort of slicing on the screen (like something you would see if your swap chain presented too early).
      4. My framerate in general is poor, and I don't understand why. With 1 light it is 108 fps, 2 lights 79 fps, 3 lights 63, and so on.
      5. The objects themselves are jumpy; they jump a pixel or two, consistently, at the same intervals, in unison.

      My lighting is basically acting weird at the poles (they don't appear to be y-axis poles, but some sort of poles in the object's world space). I looked through my method that generates the normals and tangents, and I believe the issue might be that when the normal vector is (0,1,0), my tangents go wacky and become (0,0,0). However, even when I hardcode that (0,1,0) normals should have (1,0,0) tangent vectors, it doesn't solve the issue.

      I do not believe the issue has anything to do with the data being transferred to the GPU, because I debugged the GPU and found no issues there.

      Here are some examples of my lighting issue. The following pictures are lit by a single directional light (light hits all the objects from the same direction):
      Directional Light 1
      Directional Light 2
      Directional Light 3
      Here's one with a point light; similar issues.

      With those pictures in mind, I'd also like to add these screenshots of the shaders used:
      Default Vertex Shader
      Default Pixel Shader
      Deferred Vertex Shader
      Deferred Pixel Shader Part 1
      Deferred Pixel Shader Part 2

      If you need more information or more code, let me know. Thank you for reading and helping.