About Armagedon

  • Rank

Personal Information

  • Role
  • Interests

Recent Profile Visitors

The recent visitors block is disabled and is not being shown to other users.

  1. Hi, if you're zooming in and performance gets worse, then most probably the problem is a too-complex pixel shader and/or overdraw. Given the pixel shader you posted, you can make the following optimizations:
     - Reduce the shadow map resolution (a high-resolution shadow map will make you bandwidth bound).
     - Get rid of the following conditional (treat conditionals as expensive unless you're sure branch coherence is high): if ((saturate(projectTexCoord.x) == projectTexCoord.x) && (saturate(projectTexCoord.y) == projectTexCoord.y))
     - I'd also calculate the light intensity first, and then do your shadow sampling inside if (lightIntensity > 0.0f).
     - Sort all objects by distance from the camera and draw the nearest objects first.
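     The restructuring suggested above can be sketched roughly as follows (HLSL; all names here — lightDir, shadowMap, shadowSampler, bias — are placeholders, not from the original post):

     ```hlsl
     // Compute light intensity first, and only pay for shadow sampling
     // when the surface actually faces the light.
     float lightIntensity = saturate(dot(normal, -lightDir));
     float shadow = 1.0f;

     if (lightIntensity > 0.0f)
     {
         // The saturate() == coord range check can often be avoided by
         // using a border-color sampler instead of branching per pixel.
         shadow = shadowMap.SampleCmpLevelZero(shadowSampler,
                                               projectTexCoord.xy,
                                               projectTexCoord.z - bias);
     }

     float3 color = diffuseColor * lightIntensity * shadow;
     ```

     This is a sketch of the idea, not a drop-in replacement for the poster's shader.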
  2. Armagedon

    AMD horrible OpenGL performance

    What graphics card are you using? Some graphics cards implement part of the functionality in software instead of hardware. It would be good to profile it with AMD CodeXL (http://gpuopen.com/compute-product/codexl/) and see where exactly the problem is (I had the same problem with AMD FirePro cards and cubemap sampling).
  3. Hello GameDev, I'm currently working on my master's thesis, which compares various ways to filter shadow maps, and I've stumbled upon a problem with Variance Shadow Maps. Is it possible to omit rendering a shadow receiver into the shadow map? From what I've read in the original paper, it's not, due to the possibly high variance between shadow caster and shadow receiver. But it's quite common in games to skip some objects when rendering into the shadow map (for example, terrain that gets its shadows from a raycasting technique, explicit per-object shadow maps, etc.). Is there any kind of "trick" for those situations? Or is it common to just fall back to plain old shadow maps in this case?
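     For context, the VSM visibility test in question uses Chebyshev's inequality over the depth moments stored in the map. A sketch (HLSL-style, names are placeholders) shows why the receiver's presence in the map matters:

     ```hlsl
     // moments = (E[z], E[z^2]) sampled from the variance shadow map;
     // receiverDepth is the depth of the fragment being shaded.
     float ChebyshevUpperBound(float2 moments, float receiverDepth)
     {
         if (receiverDepth <= moments.x)
             return 1.0f; // fully lit

         float variance = moments.y - moments.x * moments.x;
         variance = max(variance, 0.0001f); // guard against zero variance

         float d = receiverDepth - moments.x;
         // Upper bound on the probability the fragment is lit. If the
         // receiver was never rendered into the map, the mean depth
         // comes only from the caster, d grows large across filtered
         // regions, and the bound darkens where it shouldn't.
         return variance / (variance + d * d);
     }
     ```

     This is only an illustration of the standard VSM test, not a resolution of the question.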
  4. I'm not really sure that dropping support for exceptions is a good idea in the current era. Exceptions make code cleaner (for example, you don't need to pass error codes via return values or arguments), but they should be used only in exceptional situations (a lot of people abuse exceptions and try to hide high-level logic inside catch blocks). Regarding performance, on x64 you don't pay any cost when calling a function that may throw; all the cost is incurred when the function actually throws and you need to handle the exception. But as I said, that should only happen in exceptional situations (e.g. a missing file that should be there, a connection problem, etc.), and at that point you usually don't care about performance. I can't speak about consoles (I've never worked on one), but I bet it differs between architectures; likewise, in x86 land you pay a small cost for every function call, as the compiler generates code for stack unwinding.
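     A minimal sketch of the "exceptions for exceptional situations" point above — the happy path returns a plain value with no error-code plumbing, and the unwinding cost is paid only when the file is genuinely missing (the file name and helper names here are made up for illustration):

     ```cpp
     #include <cassert>
     #include <fstream>
     #include <sstream>
     #include <stdexcept>
     #include <string>

     // Happy path: just return the value, no error codes threaded through.
     std::string LoadConfig(const std::string& path)
     {
         std::ifstream file(path);
         if (!file)
             throw std::runtime_error("Missing config file: " + path); // exceptional

         std::ostringstream contents;
         contents << file.rdbuf();
         return contents.str();
     }

     // Returns true if loading a missing file threw, i.e. the exception
     // machinery cost was paid only on this rare unwinding path.
     bool MissingFileThrows()
     {
         try {
             LoadConfig("does_not_exist_12345.cfg"); // hypothetical name
             return false;
         } catch (const std::runtime_error&) {
             return true;
         }
     }

     int main()
     {
         assert(MissingFileThrows());
         return 0;
     }
     ```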
  5. IMHO, it should be the whole image, not just the bright parts. The versions of bloom that appeared before HDR had to use some kind of threshold value to extract only the bright parts, but that makes no physical sense. Bloom is light being blurred by imperfections in the lens (either your eye, or smudges etc. on the camera lens), and it's impossible to construct a lens that lets through X number of photons perfectly and then blurs all other photons. Natural lighting effects are additive and multiplicative, but thresholding is a subtractive (unnatural) effect. In my HDR pipeline, I just multiply the scene by a small value, such as 1%, instead of thresholding — e.g. 1% of the light refracts through the smudges, taking blurry paths to the sensor, and 99% takes a direct path. Changing that multiplier changes how smudged or imperfect your lens is. However, after doing this, the end result is similar: you only notice the bloom effect on bright areas :D   That's quite interesting. What puzzles me is: should the result of this multiplication and further blurring be stored in an HDR format (e.g. 16F) and then composited with the HDR result (so as not to lose information), or is it correct to tonemap after the multiplication and store the result in an SDR format (to improve performance)?
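     The multiply-instead-of-threshold extraction described above can be sketched like this (HLSL; texture and parameter names are hypothetical). The extraction happens on the linear HDR scene, which is the arrangement the physical argument implies, since the later blur and composite then still operate on linear radiance:

     ```hlsl
     // Models lens imperfection: a small fraction of light takes a
     // blurry path to the sensor; no threshold is applied.
     static const float bloomAmount = 0.01f; // ~1% scattered, 99% direct

     float3 ExtractBloom(Texture2D<float4> sceneHDR, SamplerState samp, float2 uv)
     {
         // Input is linear HDR (e.g. an RGBA16F render target); the
         // result is blurred and added back before tonemapping.
         return sceneHDR.Sample(samp, uv).rgb * bloomAmount;
     }
     ```

     This illustrates the extraction step only; it doesn't settle the 16F-versus-SDR performance question asked above.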
  6. Hello, I've got strange artifacts on the borders of triangles when rendering the scene to a texture and then applying it to another mesh. As a picture is worth a thousand words, here is a screenshot (better quality here: https://www.dropbox.com/s/qaqptxeojft3ggf/artifacts.png?dl=0): [attachment=34607:artifacts.png]   What I learned from debugging is that the problem occurs when rendering the scene to a texture, but in the main pass it doesn't occur. I thought about NaNs in the shader, but even when I output a solid color value, the artifacts occur (less visible, but still). It only happens on the borders of triangles (the artifact layout matches the wireframe).   Do you have any idea what could produce such weird results?
  7. Armagedon

    Direct2D and AMD Firepro GPU

    Forgot to change it when I tested software-mode rendering speed. A copy/paste error; it also works correctly with Hardware.
  8. Armagedon

    Direct2D and AMD Firepro GPU

    It looks like a weird bug in the AMD driver, but I managed to fix this one: change the DCRenderTarget to an HwndRenderTarget and get the HWND from the DC. Posting a code snippet so that anyone who stumbles on a similar problem may find it helpful:

        void D2DRenderer::CreateRenderTarget(CDC* dc, RECT rect)
        {
            auto hwnd = WindowFromDC(*dc);

            auto props = D2D1::RenderTargetProperties(
                D2D1_RENDER_TARGET_TYPE_HARDWARE,
                D2D1::PixelFormat(
                    DXGI_FORMAT_B8G8R8A8_UNORM,
                    D2D1_ALPHA_MODE_IGNORE));

            HRESULT hr = _d2dFactory->CreateHwndRenderTarget(
                props,
                D2D1::HwndRenderTargetProperties(
                    hwnd,
                    D2D1::SizeU(
                        rect.right - rect.left,
                        rect.bottom - rect.top)),
                &_d2dRenderTarget);
        }
  9. Hello, recently I started a project that involves some rendering with GDI and Direct2D. Everything went swiftly until I was forced to change my laptop. On the new GPU (AMD FirePro M4170), drawing with Direct2D does not redraw the graphics; it looks like the underlying DC bitmap is not affected. I've tested the code before on an Intel HD 4600, an AMD Radeon HD 6470M, and the WARP device, and all was fine. The DirectX debug layer also didn't yield any warnings or errors. All drivers are up to date.

     Code:

        void Init()
        {
            if (D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &_d2dFactory) != S_OK)
            {
                throw std::runtime_error("Failed to create D2D1 factory");
            }

            D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
                D2D1_RENDER_TARGET_TYPE_DEFAULT,
                D2D1::PixelFormat(
                    DXGI_FORMAT_B8G8R8A8_UNORM,
                    D2D1_ALPHA_MODE_IGNORE),
                0, 0,
                D2D1_RENDER_TARGET_USAGE_NONE,
                D2D1_FEATURE_LEVEL_DEFAULT);

            if (_d2dFactory->CreateDCRenderTarget(&props, &_d2dRenderTarget) != S_OK)
            {
                throw std::runtime_error("Failed to create DC Render Target");
            }
        }

        int Draw(CDC* dc)
        {
            CRect rectWindow;
            dc->GetWindow()->GetClientRect(&rectWindow);

            RECT ClientRect;
            ClientRect.left = rectWindow.left;
            ClientRect.right = rectWindow.right;
            ClientRect.top = rectWindow.top;
            ClientRect.bottom = rectWindow.bottom;

            auto returnCode = _d2dRenderTarget->BindDC(dc->GetSafeHdc(), &ClientRect);
            if (returnCode != S_OK)
            {
                LOG_ERROR("Failed to BindDC()");
                return 0;
            }

            _d2dRenderTarget->BeginDraw();
            _d2dRenderTarget->SetTransform(D2D1::Matrix3x2F::Identity());
            _d2dRenderTarget->Clear(D2D1::ColorF(D2D1::ColorF::White));

            //...Drawing...
            //...But it can also be empty...

            auto result = _d2dRenderTarget->EndDraw();
            if (result == D2DERR_RECREATE_TARGET)
            {
                Reinit();
            }
            else if (result != S_OK)
            {
                LOG_ERROR("Failed to EndDraw()");
                return 0;
            }

            return 1;
        }

     Many thanks for any ideas!
  10. Armagedon

    glLinkProgram crash on Intel

    Removing the sampler2D from the struct and putting it in as an external uniform was the way to remove the crash on Intel hardware, thanks! I wonder if removing the indexing on function variables would improve performance (based on the docs, it somehow would!). I think this is a way to improve performance somewhat on low-end hardware — thanks again!
  11. Armagedon

    glLinkProgram crash on Intel

    According to https://www.opengl.org/wiki/Data_Type_%28GLSL%29#Opaque_types they can be part of a struct in GLSL, but I'll try moving the sampler out of the struct and checking whether that fixes the issue.   Both debug and release. As I mentioned previously, it compiles without errors and crashes on the line glLinkProgram(ProgramID);
  12. Armagedon

    glLinkProgram crash on Intel

    But how is it possible that it works correctly on two AMD GPUs (HD 7770 on Windows 10; HD 6470 on Windows 7)? And both produce correct images? If there were any mismatch between the vertex and pixel shader, there would be an error on the AMD GPUs, or at least they would produce an incorrect image.
  13. Armagedon

    glLinkProgram crash on Intel

    The problem is that this shader is quite big and consists of several files with many #ifdefs, but here is a link to it: https://www.dropbox.com/sh/9vafm0neagtnavs/AADI3BY8XWV9om-PK3j6p6oma?dl=0.   The snippet that I posted in the first post is the part of the shader that actually gets compiled, with dead code removed by #define NUM_DIRECTIONAL_SHADOW 1; all other lighting calculations are stripped from the shader code because the other defines are set to: #define NUM_DIRECTIONAL 0, #define NUM_POINT 0, etc.   So I suppose the problematic code is the snippet I posted.
  14. Hello, recently I stumbled on an error with a GLSL shader crashing on an Intel GPU (Bay Trail graphics) in the method glLinkProgram(ProgramID). Tested on Windows 10. The code compiles and works fine on two AMD GPUs.   It's an uber-shader that computes the lighting and texturing of a mesh. It works fine if I specify only lights without shadows; if I add any light that uses a sampler2D with a shadow map, it crashes without any error on the glLinkProgram() line, indicating a driver bug.   The code of the directional shadow light looks like:

        struct LightDirectional
        {
            vec3 Color;
            vec3 Direction;
            sampler2D ShadowTexture;
            vec2 ShadowMapSize;
        };

        #if NUM_DIRECTIONAL_SHADOW > 0
        //On the vertex shader side, there is code that computes and specifies this as 'out'
        in vec4 DirectionalShadowCoord[NUM_DIRECTIONAL_SHADOW];

        uniform LightDirectional DirectionalShadow[NUM_DIRECTIONAL_SHADOW];

        vec3 CalculateDirectionalShadow(int id, vec3 Normal, vec3 DiffuseColor, vec3 CameraVector, vec3 WorldPos)
        {
            float nDotL = dot(Normal, -DirectionalShadow[id].Direction);

            if(nDotL > 0.0)
            {
                float Shadow = CalcShadowTermSoftPCF(DirectionalShadow[id].ShadowTexture, DirectionalShadowCoord[id].xyz, 5, DirectionalShadow[id].ShadowMapSize);
                return CalculateLight(Normal, CameraVector, WorldPos, -DirectionalShadow[id].Direction, DirectionalShadow[id].Color, DiffuseColor, nDotL) * Shadow;
            }

            return vec3(0.0, 0.0, 0.0);
        }
        #endif

        //This method is called from void main() of the mesh shader
        vec3 lighting(vec3 Normal, vec4 DiffuseTextureColor, vec3 CameraVector, vec3 WorldPos)
        {
            vec3 DiffuseAmount = vec3(0.01, 0.01, 0.01);

            //...Other kinds of lights

        #if NUM_DIRECTIONAL_SHADOW > 0
            for(int i = 0; i < NUM_DIRECTIONAL_SHADOW; i++)
            {
                DiffuseAmount += CalculateDirectionalShadow(i, Normal, DiffuseTextureColor.rgb, CameraVector, WorldPos);
            }
        #endif

            //...Other kinds of lights

            return DiffuseAmount;
        }

     Is there any GLSL rule that forbids that kind of texture sampling in a loop? If NUM_DIRECTIONAL_SHADOW is a compile-time define, shouldn't the loop be unrolled by the compiler? (It's a shame there is no [unroll] like in HLSL.) Is there any workaround for this issue?   Best Regards
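     A sketch of the workaround this thread converged on — moving the sampler out of the struct into a plain uniform array (this is a hypothetical restructuring of the declarations above, not the author's exact code):

     ```glsl
     // Some drivers (notably older Intel ones) mishandle opaque types
     // such as sampler2D inside structs, even though the GLSL spec
     // allows them. Keeping the sampler as a separate uniform array,
     // indexed by the same compile-time count, sidesteps the crash.
     struct LightDirectional
     {
         vec3 Color;
         vec3 Direction;
         vec2 ShadowMapSize;
     };

     uniform LightDirectional DirectionalShadow[NUM_DIRECTIONAL_SHADOW];
     uniform sampler2D DirectionalShadowTexture[NUM_DIRECTIONAL_SHADOW];
     ```

     With a compile-time loop bound, the array index stays a constant expression after unrolling, which also keeps the sampler indexing legal on stricter GLSL versions.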
  15. Armagedon

    2G/3G latency for FPS/TPS

    I remember playing Counter-Strike 1.6 over a mobile network (tethered through a mobile phone). I had a ping of around 70 ms on HSPA (signal strength around 2/5). So it's totally doable in big cities with an HSPA+/LTE connection.