
About DrColossus

  1. DrColossus

    Radiance Question

    If you look at this image from Wikipedia, which illustrates the inverse-square law, you can visualize why it gets darker the further you move away from a light source. If you hold a piece of paper (for example the patch marked with an 'A') at varying distances, it will be hit by more photons the closer it gets, and vice versa; the ray density increases and decreases.

    Now radiance is a unit defined as watts per steradian per square meter, which describes some amount of energy radiating from an area into a certain solid angle. Looking at the image above, you can imagine that all those rays come from a tiny area on the surface of a light source. The "volume" in which they travel is the solid angle, in this case an infinite pyramid or cone.

    So by moving the piece of paper you're basically capturing various amounts of that radiance; the radiance itself (here, the number of rays) within that solid angle is constant.

    For realtime rendering it's not really feasible (yet) to calculate lighting in terms of areas affecting other areas. Point lights allow us to simplify some things here: we know light goes uniformly outward from a single point, which means we can just apply the inverse-square law and scale the light by the inverse of the squared distance to the light source. Point lights don't really exist in reality, as they're defined to be infinitely small (a single point), which is why this is sometimes a bit hard to wrap your head around.
  2. DrColossus

    Raw Input mouse stutter/jitter

    I have since changed the Raw Input message handling to run on a different thread as a workaround, using an invisible window with blocking GetMessage() calls. Everything is smooth now in all circumstances, though I'd really like to know what was causing this behaviour.
  3. You could just keep different sets of arrays per object type; chances are you want to do different things with them anyway. For example:

        struct Doors
        {
            std::vector<vec4> positions;
        };

        struct Players
        {
            std::vector<vec4> positions;
            std::vector<float> healthValues;
        };

    You can then either write functions that operate on these structs or on the internal data:

        void updatePlayerPositions( Players& players )
        {
            // iterate just over player positions
        }

        void updatePositions( vec4* positions, size_t count )
        {
            // iterate over positions of any type
        }

        updatePlayerPositions( players );
        updatePositions( players.positions.data(), players.positions.size() );
  4. I'm using the Raw Input API for all input events in my own engine, and everything works just fine, as long as I only run the executable itself.

    I also started working on an editor tool suite in WinForms/C#. The editor starts the engine in a separate process and passes its window handle along. The engine then creates a child window and renders everything inside the editor, similar to how the BitSquid tools work (http://bitsquid.blogspot.de/2010/04/our-tool-architecture.html).

    And that's where my problems arise: whenever I hold down a button on my keyboard while the engine is running within the editor window, the mouse input gets very jittery.

    Here are two small 60fps videos (~500KB in size) showing the issue:
    Running just the engine: https://dl.dropboxusercontent.com/u/6556492/Engine_60fps.mp4
    Running the engine inside the editor: https://dl.dropboxusercontent.com/u/6556492/Editor_60fps.mp4

    The jitter starts to appear whenever the Windows key repeat kicks in. I don't have a lot of experience in Win32/WinForms, so I'm currently at a loss about how to fix this.

    Other games seem to exhibit similar (the same?) issues: Battlefield 3 and Counter-Strike: Source. The BF3 fix was to limit "RenderFramesAhead" to 1; forcing that in the driver (Nvidia 320.49, GTX 580) doesn't fix it in my case. The CS:S solution was to use raw input, which I already do.

    Does anybody have any ideas on what might be causing this?
  5. I just figured it out: the engine window needed to be created as a child of the editor window :) You can do that by passing the other application's window handle as hwndParent and the window style WS_CHILD to CreateWindow.
  6. I'm trying to have my C++ DX11 engine render to a window that was created by an editor written in C#.

    I'm using FindWindow to get the HWND:

        HRESULT initWindow( HINSTANCE hInstance, int nCmdShow )
        {
            g_WindowHandle = 0;
            g_WindowHandle = FindWindow( 0, L"Editor" );
            if( g_WindowHandle )
                return S_OK;

            // more code to create own window if editor isn't running
        }

    And it works as expected; I can, for example, resize the window from the engine side without problems.

    I then pass the window handle to my renderer to create the D3D device and swap chain:

        HRESULT RendererDX11::initD3DDevice( const HWND windowHandle )
        {
            UINT createDeviceFlags = 0;
        #ifdef _DEBUG
            createDeviceFlags |= D3D11_CREATE_DEVICE_DEBUG;
        #endif

            D3D_DRIVER_TYPE driverTypes[] = { D3D_DRIVER_TYPE_HARDWARE, D3D_DRIVER_TYPE_REFERENCE };
            UINT numDriverTypes = ARRAYSIZE( driverTypes );

            D3D_FEATURE_LEVEL featureLevels[] = { D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0 };
            UINT numFeatureLevels = ARRAYSIZE( featureLevels );

            DXGI_SWAP_CHAIN_DESC sd;
            ZeroMemory( &sd, sizeof(sd) );
            sd.BufferCount = m_BufferCount;
            sd.BufferDesc.Width = m_DisplayResX;
            sd.BufferDesc.Height = m_DisplayResY;
            sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
            sd.BufferDesc.RefreshRate.Numerator = 0;
            sd.BufferDesc.RefreshRate.Denominator = 0;
            sd.BufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
            sd.BufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
            sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
            sd.Flags = m_Fullscreen ? DXGI_SWAP_CHAIN_FLAG_ALLOW_MODE_SWITCH : 0;
            sd.OutputWindow = windowHandle;
            sd.SampleDesc.Count = 1;
            sd.SampleDesc.Quality = 0;
            sd.Windowed = m_Fullscreen ? FALSE : TRUE;
            sd.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;

            HRESULT hr = S_OK;
            for( UINT driverTypeIndex = 0; driverTypeIndex < numDriverTypes; driverTypeIndex++ )
            {
                m_driverType = driverTypes[driverTypeIndex];
                hr = D3D11CreateDeviceAndSwapChain( // fails when using the editor's window
                    nullptr, m_driverType, nullptr, createDeviceFlags,
                    featureLevels, numFeatureLevels, D3D11_SDK_VERSION,
                    &sd, &m_pSwapChain, &m_pD3Ddevice, &m_featureLevel, &m_pImmediateContext );
                if( SUCCEEDED(hr) )
                    break;
            }

            if( FAILED(hr) )
            {
                SHOW_ERROR( hr ); // 0x887A0001: DXGI_ERROR_INVALID_CALL
                return hr;
            }
            return S_OK;
        }

    And it only works as long as I'm using a window that the engine created itself. I'm getting the DXGI_ERROR_INVALID_CALL error when using the editor's window, and the error description on MSDN doesn't really help.

    I'm pretty much out of ideas now; is there anything I'm missing?
  7. "This is only true for alpha blending. For alpha testing, each pixel is either opaque or clipped. Of course it's possible to enable both, in which case your point is valid." Yes, you're right, I should've mentioned that. It looks like blending is disabled in that PIX screenshot, so that doesn't seem to be the problem.
  8. Do you render the alpha-tested geometry before or after you render the opaque geometry?

    If you render it first, it'll still write depth values for pixels with alpha < 1.0. The terrain would then fail the depth comparison for those pixels and your clear color would become visible.
  9. DrColossus

    Engine Design: Shadow Mapping

    You could use a shadow map atlas to avoid having to manage multiple textures. You then essentially have a single big shadow map and render each light's depth to a part of that texture. Your shader then only needs to access a single sampler, plus UV coordinates for each light. Something along those lines is described here: http://www.john-chapman.net/content.php?id=14
  10. DrColossus

    Directional light cookies

    I just tried your matrix setup myself (using glm and the same values) and it works fine for me, even when moving around. Since it acts weird when you move your camera: how do you set up the inverse view matrix? Try glm::inverse(viewMatrix) if you aren't doing that already, and make sure your position reconstruction from depth is correct. MJP has some good posts about that on his blog: http://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/ If the light view matrix were dependent on your frustum, the projected cookie texture would move around as you move the camera.
  11. DrColossus

    Directional light cookies

    Is the wobbling some kind of delay? If so, check whether you're using the inverse view matrix of the current frame, after you've already updated it. It's kind of hard to guess what's wrong without seeing what's going on.
  12. DrColossus

    Directional light cookies

    You have to manually divide the light-space position by its w value, unless you're using a texture function that does that for you, like textureProj (assuming that you don't account for that already within your scaling somewhere). So without any scaling, your sampling coordinate would be:

        vec4 lightSpacePos = lightMatrix * invView * fragmentPos;
        vec2 projCoords = 0.5 + 0.5 * lightSpacePos.xy / lightSpacePos.w;

    The 0.5 + 0.5 * brings the coordinates from the [-1, 1] range into the [0, 1] range. I hope that fixes it.
  13. Thank you very much for the explanation, that clears things up quite a bit. Is there a better way to calculate the derivatives than sampling four adjacent depth values, as in Humus' Volume Roads sample? http://www.humus.name/index.php?page=3D&ID=84 I guess I'll do that once I implement MSAA and some form of edge detection to only calculate the derivatives where they're needed.
  14. I think I just figured it out: the pixel-sized artifacts are caused by mipmapping. I disabled it and they're gone now. The blocky artifacts might be precision issues due to the high exponent I'm using; they only occur sparingly near the light source.
  15. I just implemented EVSMs yesterday, but I get some artifacts at depth discontinuities in my G-buffer and I'm unsure what's causing them. Some images of how it looks (click to enlarge): the shadow term by itself, and a comparison. I don't get those artifacts with simple PCF filtering or any of my lighting.

    For the EVSM I'm currently rendering linear depth into a single 32-bit float multisample texture, using a custom resolve to calculate the moments and store them in another GL_RGBA32F texture. The resolved EVSM then gets blurred using a separable box filter in two passes. The lighting shader: http://pastebin.com/JReUh3e3 There is also a simple tonemapping and gamma correction shader applied after lighting.

    [edit] Oh, and sometimes (very rarely) I get these weird blocky artifacts. Any ideas on what's causing that and how to get rid of it?