About Armagedon


  1. Armagedon

    Understanding Entity Component System?

    The easiest answer to your performance problem is: run the profiler. It will show you where the bottleneck is. But I'm going to give you a quick tip:
    - For each entity on screen you're issuing a draw call as well as a bunch of glGetUniformLocation calls (which are expensive). What you can do is cache the glGetUniformLocation results per shader and use an unordered_map to look them up.
    - Try reducing the number of glDrawElements and glUniform* calls by either using instancing (easier) or some other AZDO techniques (harder).
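    The caching idea above can be sketched like this. A minimal sketch: the driver query is injected as a function instead of calling glGetUniformLocation directly, purely so the example stays self-contained without a GL context; all names are illustrative.

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>

// Per-shader uniform-location cache. In real code the injected lookup
// would be: glGetUniformLocation(program, name.c_str());
class UniformCache {
public:
    explicit UniformCache(std::function<int(const std::string&)> lookup)
        : lookup_(std::move(lookup)) {}

    // Returns the cached location, querying the driver only on a miss.
    int Location(const std::string& name) {
        auto it = cache_.find(name);
        if (it != cache_.end())
            return it->second;            // cache hit: no driver call
        int loc = lookup_(name);          // cache miss: one expensive query
        cache_.emplace(name, loc);
        return loc;
    }

    // Number of distinct names queried so far (each cost one driver call).
    std::size_t Misses() const { return cache_.size(); }

private:
    std::function<int(const std::string&)> lookup_;
    std::unordered_map<std::string, int> cache_;
};
```

    With one cache per shader program, repeated draws of the same entities stop hitting the driver entirely after the first frame.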
  2. Armagedon

    Shadows on parallax mapped meshes

    If I'm not wrong, what you need is to modify the depth that you're comparing against the shadow map depth by the height returned by your POM algorithm. Basically, instead of: float shadow = currentDepth > closestDepth ? 1.0 : 0.0; you should do: float shadow = currentDepth - pomDepth > closestDepth ? 1.0 : 0.0; I never tested this, but it should work :) What is also important: when building the shadow map, you should write the altered depth to the depth buffer, otherwise you will get incorrect self-shadowing. However, this introduces some performance penalty, as you have to do POM for every shadow map, and outputting custom depth in the pixel shader disables some optimizations on the depth buffer (early-z test? hierarchical z-buffer?). Check whether the (subtle) effect is worth the performance penalty. If you want detailed, shadowed geometry, go for tessellation instead.
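    The two comparisons above, side by side as a plain-scalar sketch (pomDepth is the hypothetical offset returned by the parallax step; 1.0f means "in shadow"):

```cpp
// Plain shadow-map comparison: ignores the parallax offset.
float ShadowPlain(float currentDepth, float closestDepth) {
    return currentDepth > closestDepth ? 1.0f : 0.0f;
}

// Biased comparison: subtract the depth offset returned by POM before
// comparing against the shadow-map depth, so the displaced surface point
// is tested instead of the flat triangle surface.
float ShadowWithPom(float currentDepth, float pomDepth, float closestDepth) {
    return (currentDepth - pomDepth > closestDepth) ? 1.0f : 0.0f;
}
```

    A fragment that the plain test marks as shadowed can become lit once the POM offset pulls it in front of the stored shadow-map depth.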
  3. Hi, if you're zooming in and performance gets worse, then most probably the problem is an overly complex pixel shader and/or overdraw. From the pixel shader you posted, you can make the following optimizations:
     - Reduce the shadow map resolution (a high-resolution shadow map will make you bandwidth bound).
     - Get rid of the following conditional (you can treat conditionals as expensive if you're not sure branch coherence is high): if ((saturate(projectTexCoord.x) == projectTexCoord.x) && (saturate(projectTexCoord.y) == projectTexCoord.y))
     - I'd also calculate the light intensity first, and only do your shadow sampling inside if (lightIntensity > 0.0f).
     - Sort all objects based on distance from the camera and draw the nearest objects first.
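    The last point, sorting draw calls front-to-back so nearer objects fill the depth buffer first and farther fragments fail the early depth test, can be sketched as follows (types and names are illustrative):

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

// Squared distance is enough for ordering: no sqrt needed.
float DistanceSq(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

struct DrawItem { int mesh; Vec3 position; };

// Sort opaque draw items front-to-back relative to the camera,
// reducing overdraw on depth-tested hardware.
void SortFrontToBack(std::vector<DrawItem>& items, const Vec3& camera) {
    std::sort(items.begin(), items.end(),
              [&](const DrawItem& a, const DrawItem& b) {
                  return DistanceSq(a.position, camera) <
                         DistanceSq(b.position, camera);
              });
}
```

    (Transparent objects would need the opposite order, back-to-front, for correct blending.)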
  4. Armagedon

    AMD horrible OpenGL performance

    What graphics card are you using? Some graphics cards may have implemented part of the functionality in software instead of hardware. It would be good to profile it with AMD CodeXL (http://gpuopen.com/compute-product/codexl/) and see where exactly the problem is (I had the same problem with AMD FirePro cards and cubemap sampling).
  5. Hello GameDev, I'm currently working on my master's thesis, which compares various ways to filter shadow maps, and I stumbled upon a problem with Variance Shadow Maps. Is it possible to omit rendering a shadow receiver into the shadow map? From what I've read in the original paper, it's not, due to the possibly high variance between shadow caster and shadow receiver. But it's quite common in games to skip some objects when rendering into the shadow map (for example, terrain that gets its shadows from a raycasting technique, explicit per-object shadow maps, etc.). Is there any kind of "trick" for those kinds of situations? Or is it common to just use plain old shadow maps in this case?
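    For context, the quantity the original VSM paper computes is the Chebyshev upper bound from the two moments stored in the shadow map; the caster/receiver variance mentioned above feeds directly into it, which is why omitting the receiver is problematic. A minimal sketch of that bound (names are illustrative):

```cpp
#include <algorithm>

// Chebyshev upper bound used by Variance Shadow Maps. m1 and m2 are the
// moments E[x] and E[x^2] read from the shadow map, t is the receiver
// depth; the result bounds the probability that the fragment is lit.
float ChebyshevUpperBound(float m1, float m2, float t, float minVariance) {
    if (t <= m1)
        return 1.0f;                       // receiver in front of the mean: fully lit
    float variance = std::max(m2 - m1 * m1, minVariance);
    float d = t - m1;
    return variance / (variance + d * d);  // large variance inflates the bound
}
```

    If the receiver never wrote into the map, the stored moments describe only the caster, the variance term no longer reflects the caster-to-receiver depth spread, and the bound loses its meaning.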
  6. I'm not really sure if dropping support for exceptions in the current era is a good idea. Exceptions make code cleaner (for example, you don't need to pass error codes via return values or arguments), but they need to be used in exceptional situations (a lot of people abuse exceptions and try to hide high-level logic inside a catch block). About performance: on x64 you don't pay any cost when calling a function that may throw; all the cost is moved to the case when the function actually throws and you need to handle the exception. But as I said, this should occur in exceptional situations (e.g. a missing file that should be there, a connection problem, etc.), and at that point you mostly don't care about performance. I can't speak about consoles (I've never worked on one), but I bet it differs between architectures; also, in x86 land you pay a small cost on every function call, as the compiler generates code for stack unwinding.
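    The contrast described above (an error code threading through the return value versus an exception reserved for the exceptional case) in a minimal sketch; the "missing file" is simulated by an empty path and all names are illustrative:

```cpp
#include <stdexcept>
#include <string>

// Error-code style: the failure travels through the return value and an
// out-parameter, and every caller has to remember to check it.
bool LoadConfigByCode(const std::string& path, std::string& out) {
    if (path.empty()) return false;        // simulated "missing file"
    out = "contents of " + path;
    return true;
}

// Exception style: the happy path stays clean; the exceptional case
// (a file that should exist but doesn't) is reported by throwing.
std::string LoadConfigByException(const std::string& path) {
    if (path.empty())
        throw std::runtime_error("config file missing");
    return "contents of " + path;
}
```

    On a zero-cost (table-based) x64 ABI, the second function costs nothing extra on the non-throwing path; the unwind cost is paid only if the throw actually happens.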
  7. IMHO, it should be the whole image, not just the bright parts. In the versions of bloom that appeared before HDR, you had to use some kind of threshold value to extract only the bright parts, but that makes no physical sense. Bloom is the light being blurred by imperfections in the lens (either your eye, or smudges/etc. on the camera lens...), and it's impossible to construct a lens that will let through X number of photons perfectly and then blur all other photons. Natural lighting effects are additive and multiplicative, but thresholding is a subtractive (unnatural) effect. In my HDR pipeline, I just multiply the scene by a small value, such as 1%, instead of thresholding -- e.g. 1% of the light refracts through the smudges, taking blurry paths to the sensor, and 99% takes a direct path. Changing that multiplier changes how smudged or imperfect your lens is. However, after doing this, the end result is similar - you only notice the bloom effect on bright areas :D   That's quite interesting. What puzzles me is: should the result of this multiplication and further blurring be stored in an HDR format (e.g. 16F) and then composed with the HDR result (so as not to lose information), or is it correct to tonemap after the multiplication and store the result in an SDR format (to improve performance)?
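    One common arrangement of the pipeline described in the quote is to keep everything in HDR until a single final tonemap. A minimal scalar sketch under that assumption (the blur is stubbed out, and the Reinhard operator is just an illustrative choice, not from the post):

```cpp
// Stand-in for the bloom blur pass; a real pipeline would do a
// separable Gaussian (or similar) over the scaled HDR buffer.
float Blur(float hdr) { return hdr; }

// Simple Reinhard operator, purely illustrative.
float Tonemap(float c) { return c / (1.0f + c); }

// Order of operations: scale the HDR scene by the small "smudge"
// factor, blur it while still in HDR (e.g. an RGBA16F target), add it
// back to the HDR scene, and tonemap once at the very end.
float ComposeBloom(float hdrScene, float smudgeFactor) {
    float bloom = Blur(hdrScene * smudgeFactor); // stays in HDR
    float combined = hdrScene + bloom;           // additive, still HDR
    return Tonemap(combined);                    // single final tonemap
}
```

    Tonemapping before the compose would make the bloom addition operate on nonlinear SDR values, which is where the information loss the question worries about would come from.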
  8. Hello, I've got strange artifacts on the borders of triangles when rendering the scene to a texture and then applying it to another mesh. As a picture is worth a thousand words, here is a screenshot (better quality here: https://www.dropbox.com/s/qaqptxeojft3ggf/artifacts.png?dl=0): [attachment=34607:artifacts.png]   What I learned from debugging is that the problem occurs when rendering the scene to a texture, but in the main pass the problem doesn't occur. I thought about NaNs in the shader, but even when I'm outputting a solid color value the artifacts occur (less visible, but still). It only happens on the borders of triangles (the artifact layout matches the wireframe).   Do you have any idea what could produce these weird results?
  9. Armagedon

    Direct2D and AMD Firepro GPU

    Forgot to change it back when I tested the software-mode rendering speed. A copy/paste error; it also works correctly with hardware rendering.
  10. Armagedon

    Direct2D and AMD Firepro GPU

    It looks like a weird bug in the AMD driver, but I managed to fix this one: change the DCRenderTarget to an HwndRenderTarget and get the HWND from the DC. Posting a code snippet so that anyone who stumbles on a similar problem may find it helpful:

    void D2DRenderer::CreateRenderTarget(CDC* dc, RECT rect)
    {
        auto hwnd = WindowFromDC(*dc);
        auto props = D2D1::RenderTargetProperties(
            D2D1_RENDER_TARGET_TYPE_HARDWARE,
            D2D1::PixelFormat(
                DXGI_FORMAT_B8G8R8A8_UNORM,
                D2D1_ALPHA_MODE_IGNORE));
        HRESULT hr = _d2dFactory->CreateHwndRenderTarget(
            props,
            D2D1::HwndRenderTargetProperties(
                hwnd,
                D2D1::SizeU(
                    rect.right - rect.left,
                    rect.bottom - rect.top)),
            &_d2dRenderTarget);
    }
  11. Hello, recently I started a project that involves some rendering with GDI and Direct2D. Everything went smoothly until I was forced to change my laptop. On the new GPU (AMD FirePro M4170), drawing with Direct2D causes the graphics not to redraw; it looks like the underlying DC bitmap is not affected. I've tested the code before on an Intel HD 4600, an AMD Radeon HD 6470M, and the WARP device, and all was fine. The DirectX debug layer also didn't yield any warnings or errors. All drivers are up to date.   Code:

    void Init()
    {
        if (D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &_d2dFactory) != S_OK)
        {
            throw std::runtime_error("Failed to create D2D1 factory");
        }
        D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
            D2D1_RENDER_TARGET_TYPE_DEFAULT,
            D2D1::PixelFormat(
                DXGI_FORMAT_B8G8R8A8_UNORM,
                D2D1_ALPHA_MODE_IGNORE),
            0, 0,
            D2D1_RENDER_TARGET_USAGE_NONE,
            D2D1_FEATURE_LEVEL_DEFAULT);
        if (_d2dFactory->CreateDCRenderTarget(&props, &_d2dRenderTarget) != S_OK)
        {
            throw std::runtime_error("Failed to create DC render target");
        }
    }

    int Draw(CDC* dc)
    {
        CRect rectWindow;
        dc->GetWindow()->GetClientRect(&rectWindow);
        RECT clientRect;
        clientRect.left = rectWindow.left;
        clientRect.right = rectWindow.right;
        clientRect.top = rectWindow.top;
        clientRect.bottom = rectWindow.bottom;
        HRESULT hr = _d2dRenderTarget->BindDC(dc->GetSafeHdc(), &clientRect);
        if (hr != S_OK)
        {
            LOG_ERROR("Failed to BindDC()");
            return 0;
        }
        _d2dRenderTarget->BeginDraw();
        _d2dRenderTarget->SetTransform(D2D1::Matrix3x2F::Identity());
        _d2dRenderTarget->Clear(D2D1::ColorF(D2D1::ColorF::White));
        // ...Drawing...
        // ...But it can also be empty...
        hr = _d2dRenderTarget->EndDraw();
        if (hr == D2DERR_RECREATE_TARGET)
        {
            Reinit();
        }
        else if (hr != S_OK)
        {
            LOG_ERROR("Failed to EndDraw()");
            return 0;
        }
        return 1;
    }

    Many thanks for any ideas!
  12. Armagedon

    glLinkProgram crash on Intel

    Removing the sampler2D from the struct and putting it in as an external uniform was the way to remove the crash on Intel hardware, thanks! I wonder if removing the indexing on function variables would improve performance (based on the docs, it somehow would!). I think this is a way to somewhat improve performance on low-end hardware, thanks again!
  13. Armagedon

    glLinkProgram crash on Intel

    According to https://www.opengl.org/wiki/Data_Type_%28GLSL%29#Opaque_types they can be part of a struct in GLSL. But I'll try moving the sampler out of the struct and checking if it fixes the issue.   Both debug and release. As I mentioned previously, it compiles without errors and crashes on the line glLinkProgram(ProgramID);
  14. Armagedon

    glLinkProgram crash on Intel

    But how is it possible that it works correctly on two AMD GPUs (HD 7770, Windows 10; HD 6470, Windows 7)? And both produce correct images? If there were any mismatch between the vertex and pixel shaders, there would be an error on the AMD GPUs, or at least they would produce an incorrect image.
  15. Armagedon

    glLinkProgram crash on Intel

    The problem is that this shader is quite big and consists of several files with many #ifdefs, but here is a link to it: https://www.dropbox.com/sh/9vafm0neagtnavs/AADI3BY8XWV9om-PK3j6p6oma?dl=0.   The snippet that I posted in the first post is the part of the shader that gets compiled, with dead code removed by #define NUM_DIRECTIONAL_SHADOW 1, as all other lighting calculations are removed from the shader code because the other defines are set to: #define NUM_DIRECTIONAL 0 #define NUM_POINT 0 etc.   So I suppose the problematic code is the snippet that I posted.