fllwr0491

2D: Questions about computing coverage in a shader


I googled around but was unable to find source code or implementation details.
What keywords should I search for on this topic?

Things I would like to know:

A. How do I ensure that partially covered pixels are rasterized?
   Apparently, by expanding each triangle by a pixel or so, the rasterization problem is mostly solved.
   But that produces a triangle list that can no longer share vertices through an index buffer without tons of overlap. Will it incur a large performance penalty?

B. An A-buffer-like bitmask needs a read-modify-write operation.
   How do I ensure proper synchronization in GLSL?
   GLSL seems to only allow 32-bit integer atomics on images. (A minimal sketch of what I mean is shown below this list.)

C. Are there simple ways to estimate coverage on the fly?
   In the case where I am drawing 2D shapes onto an existing target:
   1. A multi-pass buffer of any kind seems like overkill.
   2. Multisampling could cost a lot of memory even though all I need is better coverage.
      Besides, I would have to blit twice if the draw target is not multisampled.
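
For point B, here is a minimal sketch of the kind of fragment pass I have in mind (hypothetical names: a single r32ui image uCoverageMask holding a 32-bit bitmask per pixel, and a uniform uShapeId saying which of up to 32 shapes the current draw call represents). Since GLSL image atomics only work on 32-bit integer formats, imageAtomicOr can do the whole read-modify-write in one atomic step:

#version 430 core
// Minimal sketch for point B: one r32ui image holds a 32-bit coverage
// bitmask per pixel; each draw call ORs its own bit in.
layout(binding = 0, r32ui) coherent uniform uimage2D uCoverageMask;
uniform uint uShapeId; // hypothetical: which of up to 32 shapes is being drawn

out vec4 fragColor;

void main()
{
    ivec2 pixel = ivec2(gl_FragCoord.xy);
    uint bit = 1u << (uShapeId & 31u);
    // imageAtomicOr performs the read-modify-write atomically on the 32-bit
    // texel, so no explicit lock is needed as long as each invocation only
    // ORs its own bit in.
    imageAtomicOr(uCoverageMask, pixel, bit);
    fragColor = vec4(0.0); // color output is unused in this mask-building pass
}

A later resolve pass could then read the mask back and use bitCount() to turn it into a coverage estimate.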
 

Edited by fllwr0491

32 minutes ago, Infinisearch said:

Conservative rasterization is what you're looking for. DX12 and 11.3

Exactly.
But how did people work around this rasterization issue before that feature existed?
BTW, what is the OpenGL/Vulkan equivalent of this mode?


Here are details about Conservative Rasterization using GS: http://wwwcg.in.tum.de/fileadmin/user_upload/Lehrstuehle/Lehrstuhl_XV/Research/Publications/2009/eg_area09_shadows.pdf

Probably the same approach as in the GPU Gems article.

 

Fewer details here: https://www.seas.upenn.edu/~pcozzi/OpenGLInsights/OpenGLInsights-SparseVoxelization.pdf

But they also talk about an atomic lock that may be interesting for your point B.
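
For reference, here is a rough GLSL sketch of the edge-dilation idea those chapters describe (not their exact code). Each clip-space edge is pushed outward by half a pixel and the vertices are recomputed as the intersections of the shifted edges. It assumes counter-clockwise front faces, ignores near-plane clipping, and uses a hypothetical uViewport uniform holding the render-target size in pixels:

#version 430 core
// Rough sketch of GS-based conservative dilation (assumes CCW front faces,
// no near-plane clipping; uViewport = render-target size in pixels).
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

uniform vec2 uViewport;

void main()
{
    vec2 hPixel = vec2(1.0) / uViewport; // half-pixel size in NDC units

    vec4 pos[3] = vec4[3](gl_in[0].gl_Position,
                          gl_in[1].gl_Position,
                          gl_in[2].gl_Position);

    // Edge lines in homogeneous 2D (x, y, w) form: dot(plane[i], p.xyw) == 0.
    vec3 plane[3];
    plane[0] = cross(pos[0].xyw - pos[2].xyw, pos[2].xyw); // edge 2 -> 0
    plane[1] = cross(pos[1].xyw - pos[0].xyw, pos[0].xyw); // edge 0 -> 1
    plane[2] = cross(pos[2].xyw - pos[1].xyw, pos[1].xyw); // edge 1 -> 2

    // Push each edge outward by the pixel semidiagonal (worst-case offset).
    plane[0].z -= dot(hPixel, abs(plane[0].xy));
    plane[1].z -= dot(hPixel, abs(plane[1].xy));
    plane[2].z -= dot(hPixel, abs(plane[2].xy));

    // Dilated vertices are the intersections of adjacent shifted edges,
    // still in homogeneous (x, y, w) form.
    vec3 q[3];
    q[0] = cross(plane[0], plane[1]);
    q[1] = cross(plane[1], plane[2]);
    q[2] = cross(plane[2], plane[0]);

    for (int i = 0; i < 3; ++i)
    {
        // Keep each original vertex's depth; for a half-pixel dilation this
        // approximation is usually good enough.
        float ndcZ = gl_in[i].gl_Position.z / gl_in[i].gl_Position.w;
        gl_Position = vec4(q[i].xy / q[i].z, ndcZ, 1.0);
        EmitVertex();
    }
    EndPrimitive();
}

This is essentially how people worked around it before hardware conservative rasterization existed.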


