  • 04/27/18 03:41 PM

    Navigation Meshes and Pathfinding

    Artificial Intelligence

    Posted By jb-dev

    I recently worked on a path-finding algorithm used to move an AI agent through an organically generated dungeon.

    It's not an easy task, but because I've already worked on Team Fortress 2 maps in the past, I was already familiar with navigation meshes (navmeshes) and their capabilities.

    Why Not Waypoints?

    As described in this paper, waypoint networks were used in video games in the past to save valuable resources. It was an acceptable compromise: level designers already knew where NPCs could and could not go. However, as technology evolved, computers gained more memory that also became faster and cheaper.

    In other words, there was a shift from efficiency to flexibility.

    In a way, navigation meshes are the evolution of waypoint networks because they fulfill the same need but in a different way.

    One of the advantages of using a navigation mesh is that an agent can go anywhere within a cell as long as the cell is convex (that is essentially the definition of convexity: any straight line between two points in the cell stays inside it). It also means that the agent is not limited to a fixed waypoint network, so if the destination is outside the network, the agent can go directly to it instead of going to the nearest point in the network. A single navigation mesh can also be used by many types of agents of different sizes, rather than maintaining a separate waypoint network for each agent size.

    Using a navigation mesh also speeds up graph exploration because, typically, a navigation mesh has fewer nodes than an equivalent waypoint network (that is, a network with enough points to cover the same walkable area).

    The Navigation Mesh Graph

    To summarize, a navigation mesh is a mesh that represents where an NPC can walk. A navigation mesh contains convex polygonal nodes (called cells). Cells are connected to one another through connections defined by an edge shared between them (a portal edge). In a navigation mesh, each cell can also carry information about itself. For example, a cell may be labeled as toxic, so that only units capable of resisting this toxicity can move across it.
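As a rough sketch of this structure (plain Java, no engine dependency; the names NavCell, flags and traversableBy are ours, not from any particular engine), a cell is just a convex polygon plus its portal neighbors and some metadata:

```java
import java.util.*;

// Hypothetical minimal cell type illustrating the graph described above.
final class NavCell {
    final List<double[]> vertices = new ArrayList<>();  // convex polygon outline
    final List<NavCell> neighbors = new ArrayList<>();  // cells sharing a portal edge
    final Set<String> flags = new HashSet<>();          // per-cell info, e.g. "toxic"

    // An agent may enter the cell unless it has a flag the agent cannot resist.
    boolean traversableBy(Set<String> resistances) {
        for (String flag : flags) {
            if (!resistances.contains(flag)) return false;
        }
        return true;
    }
}
```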

    Personally, because of my experience, I view navigation meshes like the ones found in most Source games.

    Nav_edit.jpg

    However, all cells in Source's navigation meshes are rectangular. Our implementation is more flexible because the cells can be irregular polygons (as long as they're convex).

    Navigation Meshes in Practice

    A navigation mesh implementation is actually three algorithms:

    1. A graph navigation algorithm
    2. A string pulling algorithm
    3. And a steering/path-smoothing algorithm

    In our case, we used A*, the Simple Stupid Funnel algorithm, and a traditional steering algorithm that is still in development.

    Finding our cells

    Before doing any graph searches, we need to find two things:

    1. Our starting cell
    2. Our destination cell

    For example, let's use this navigation mesh :

    Our navmesh

    In this navigation mesh, every edge shared between two cells is also a portal edge, which will be used by the string pulling algorithm later on.

    Also, let's use these points as our starting and destination points:

    Our positions

    Where our buddy (let's name him Buddy) stands is our starting point, while the flag represents our destination.

    Because we already have our starting point and our destination point, we just need to check which cell is closest to each point using an octree. Once we know our nearest cells, we must project the starting and destination points onto their respective closest cells.

    In practice, we project both the starting and destination points onto the planes of their respective cells along each cell's normal.
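A minimal sketch of that projection (plain Java, no engine types; vectors here are simple {x, y, z} arrays, which is our own simplification): the point's offset along the cell's unit normal is measured and subtracted.

```java
final class PlaneProjection {
    // Projects point p onto the plane containing planePoint with the given
    // unit normal n: p - n * dot(p - planePoint, n).
    static double[] projectOntoPlane(double[] p, double[] planePoint, double[] n) {
        double d = (p[0] - planePoint[0]) * n[0]
                 + (p[1] - planePoint[1]) * n[1]
                 + (p[2] - planePoint[2]) * n[2];
        return new double[] { p[0] - d * n[0], p[1] - d * n[1], p[2] - d * n[2] };
    }
}
```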

    Before snapping a projected point, we must first know whether the projected point is outside its cell. We do this by comparing the area of the cell with the sum of the areas of the triangles formed by that point and each edge of the cell. If the latter is noticeably larger than the former, the point is outside its cell.
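That area comparison can be sketched in 2D like this (plain Java; the helper names are ours). The triangles formed by the point and each edge tile the cell exactly when the point is inside, so their areas sum to the cell's own area:

```java
final class PointInConvexCell {
    // Unsigned area of triangle (ax,ay)-(bx,by)-(cx,cy).
    static double triArea(double ax, double ay, double bx, double by, double cx, double cy) {
        return Math.abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) * 0.5;
    }

    // Area of a convex polygon given as x/y vertex arrays (fan triangulation).
    static double polyArea(double[] xs, double[] ys) {
        double area = 0;
        for (int i = 1; i + 1 < xs.length; i++) {
            area += triArea(xs[0], ys[0], xs[i], ys[i], xs[i + 1], ys[i + 1]);
        }
        return area;
    }

    // Inside (or on the boundary) when the point-to-edge triangle areas sum
    // to the cell's area; outside when the sum is noticeably larger.
    static boolean contains(double[] xs, double[] ys, double px, double py) {
        double sum = 0;
        for (int i = 0; i < xs.length; i++) {
            int j = (i + 1) % xs.length;
            sum += triArea(px, py, xs[i], ys[i], xs[j], ys[j]);
        }
        return sum <= polyArea(xs, ys) + 1e-9;
    }
}
```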

    The snapping then simply consists of interpolating along the edge of the cell closest to the projected point. In terms of code, we do this:

    // Project pointToProject onto the edge going from start to end.
    Vector3f lineToPoint = pointToProject.subtract(start);
    Vector3f line = end.subtract(start);
    // Parameter along the edge; clamped so the result stays on the segment.
    float t = FastMath.clamp(lineToPoint.dot(line) / line.dot(line), 0f, 1f);
    Vector3f snappedPoint = new Vector3f().interpolateLocal(start, end, t);

    In our example, the starting and destination cells are C1 and C8 respectively:

    Our starting and destination cells

     

    Graph Search Algorithm

    A navigation mesh is essentially a 2D graph of unknown, possibly unbounded, size. In a 3D game, it is common to represent a navigation mesh as a graph of flat polygons that aren't orthogonal to each other.

    There are games that use true 3D navigation meshes, such as games with flying AI, but in our case it's a simple grid.

    For this reason, the use of the A* algorithm is probably the right solution.

    We chose A* because it's the most generic and flexible algorithm. Technically, we still don't know exactly how our navigation mesh will be used, so going with something more generic can have its benefits...

    A* works by assigning a cost and a heuristic to each cell. The cost is the distance traveled from the starting cell to reach that cell, while the heuristic estimates the remaining distance to the destination; the closer a cell is to our destination, the smaller its heuristic. The sum of the two gives the cell's score. This means that the longer a path is, the greater the resulting score, and the more likely it is that this path is not an optimal one.

    We begin the algorithm by traversing the connections to each of the neighboring cells of the current cell until we arrive at the destination cell, doing a sort of exploration / flood fill. Each cell begins with an infinite score but, as we explore the mesh, it's updated according to the information we learn. At the beginning, our starting cell gets a cost of 0 because the agent is already inside it.

    We keep a queue of open cells ordered by their scores, best first. This means that the next cell to use as the current cell is always the best candidate for an optimal path. When a cell is processed, it is removed from that queue and moved into another one that contains the closed cells.

    While continuing to explore, we also keep a reference of the connection used to move from the current cell to its neighbor. This will be useful later.

    We do this until we end up in the destination cell. Then, we "reel" back up to our starting cell, saving each cell we landed on, which gives us an optimal path.

    A* is a very popular algorithm and its pseudocode can easily be found. Even Wikipedia has a version that is easy to understand.
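As an illustration, the search described above can be sketched in plain Java (this is our own simplification, not the article's actual implementation: cells are indices, connections are an adjacency list, and costs are straight-line distances between cell centers):

```java
import java.util.*;

final class AStar {
    // Minimal A* sketch over a cell graph. neighbors[i] lists the cells
    // connected to cell i through a portal edge; centers[i] is cell i's
    // 2D center, used for both the cost and the heuristic.
    static List<Integer> findPath(int start, int goal,
                                  int[][] neighbors, double[][] centers) {
        int n = neighbors.length;
        double[] g = new double[n];               // cost from the start cell
        Arrays.fill(g, Double.POSITIVE_INFINITY);
        g[start] = 0;
        int[] parent = new int[n];                // connection used to reach each cell
        Arrays.fill(parent, -1);
        boolean[] closed = new boolean[n];
        // Open queue ordered by score = cost + heuristic (distance to goal).
        PriorityQueue<Integer> open = new PriorityQueue<>(
            Comparator.comparingDouble(i -> g[i] + dist(centers[i], centers[goal])));
        open.add(start);
        while (!open.isEmpty()) {
            int current = open.poll();
            if (current == goal) break;
            if (closed[current]) continue;        // skip stale queue entries
            closed[current] = true;
            for (int next : neighbors[current]) {
                double cost = g[current] + dist(centers[current], centers[next]);
                if (cost < g[next]) {
                    g[next] = cost;
                    parent[next] = current;
                    open.add(next);
                }
            }
        }
        if (parent[goal] < 0 && goal != start) return List.of(); // unreachable
        // "Reel" back from the goal to the start to recover the path.
        LinkedList<Integer> path = new LinkedList<>();
        for (int c = goal; c != -1; c = parent[c]) path.addFirst(c);
        return path;
    }

    static double dist(double[] a, double[] b) {
        return Math.hypot(a[0] - b[0], a[1] - b[1]);
    }
}
```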

    In our example, we find that this is our path:

    An optimal path

    And here are highlighted (in pink) the traversed connections:

    The used connections

    The String Pulling Algorithm

    String pulling is the next step in the navigation mesh algorithm. Now that we have a queue of cells that describes an optimal path, we have to find a queue of points that an AI agent can travel to. This is where string pulling is needed.

    String pulling is in fact not about actual strings at all: it is rather a metaphor.

    Imagine a cross. Let's say that you wrap a silk thread around this cross and put tension on it. You will find that the thread does not follow the inner corners, but rather goes from outer corner to outer corner of each arm of the cross.

    [stringpull.png]

    This is precisely what we're doing but with a string that goes from one point to another.

    There are many different algorithms that let us do this. We chose the Simple Stupid Funnel algorithm because it's actually...

    ...stupidly simple.

    To put it simply (no pun intended), we create a funnel and check each time whether the next point is inside the funnel or not. The funnel is composed of three points: a central apex, a left point (called the left apex) and a right point (called the right apex). At the beginning, the tested point is on the right side, then we alternate to the left and so on until we reach our destination point (as if we were walking).

    [funnel_explanation.png]

    When a point is inside the funnel, we narrow the funnel on that side and continue the algorithm with the other side.

    If the tested point is outside the funnel, then depending on which side it belongs to, we take the apex from the other side of the funnel and add it to the list of final waypoints.

    The algorithm works correctly most of the time. However, it had a bug that added the last point twice whenever none of the vertices of the last connection before the destination point had been added to the list of final waypoints. We just added an if for the moment, but we may come back later to clean it up.
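The funnel logic above can be sketched in 2D as follows. This is a plain-Java port of the widely circulated Simple Stupid Funnel reference implementation, not our engine code; points are {x, y} arrays, and the portal list duplicates the start as portal 0 and the destination as the last portal:

```java
import java.util.*;

final class Funnel {
    // Twice the signed area of triangle (a, b, c); the sign tells which side
    // of the line a->b the point c lies on.
    static double triarea2(double[] a, double[] b, double[] c) {
        double ax = b[0] - a[0], ay = b[1] - a[1];
        double bx = c[0] - a[0], by = c[1] - a[1];
        return bx * ay - ax * by;
    }

    static boolean eq(double[] a, double[] b) {
        return Math.abs(a[0] - b[0]) < 1e-9 && Math.abs(a[1] - b[1]) < 1e-9;
    }

    // lefts[i] / rights[i] are the left and right vertices of the i-th portal.
    static List<double[]> stringPull(double[][] lefts, double[][] rights) {
        List<double[]> pts = new ArrayList<>();
        double[] apex = lefts[0], left = lefts[0], right = rights[0];
        int apexIndex = 0, leftIndex = 0, rightIndex = 0;
        pts.add(apex);
        for (int i = 1; i < lefts.length; i++) {
            double[] l = lefts[i], r = rights[i];
            // Tighten the right side of the funnel.
            if (triarea2(apex, right, r) <= 0.0) {
                if (eq(apex, right) || triarea2(apex, left, r) > 0.0) {
                    right = r; rightIndex = i;
                } else {
                    // Right crossed over left: the left apex becomes a final
                    // waypoint and the funnel restarts from it.
                    pts.add(left);
                    apex = left; apexIndex = leftIndex;
                    left = apex; right = apex;
                    leftIndex = apexIndex; rightIndex = apexIndex;
                    i = apexIndex;
                    continue;
                }
            }
            // Tighten the left side of the funnel (mirror of the above).
            if (triarea2(apex, left, l) >= 0.0) {
                if (eq(apex, left) || triarea2(apex, right, l) < 0.0) {
                    left = l; leftIndex = i;
                } else {
                    pts.add(right);
                    apex = right; apexIndex = rightIndex;
                    left = apex; right = apex;
                    leftIndex = apexIndex; rightIndex = apexIndex;
                    i = apexIndex;
                    continue;
                }
            }
        }
        // Append the destination only if it isn't already the last waypoint;
        // this guard is the fix for the duplicated-last-point bug.
        double[] end = lefts[lefts.length - 1];
        if (!eq(pts.get(pts.size() - 1), end)) pts.add(end);
        return pts;
    }
}
```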

    In our case, the funnel algorithm gives this path:

    The pulled path

    The Steering Algorithm

    Now that we have a list of waypoints, we could simply run our character from point to point.

    But if there were walls in our geometry, Buddy would run right into a wall corner. He wouldn't be able to reach his destination because he isn't small enough to clear the corners.

    That's the role of the steering algorithm.

    Our algorithm is still in heavy development, but the gist of it is that we check whether the agent's next position is still on the navigation mesh. If it isn't, we change the agent's direction so that it doesn't hit the wall like an idiot. There is also a path curving algorithm, but it's still too early to know if we'll use it at all...
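Since the real steering is still in development, here is only a minimal sketch of that idea: reject a step that would leave the walkable area and try to slide along one axis instead. The Walkable callback is a hypothetical stand-in for a real point-on-navmesh query:

```java
final class Steering {
    // Any point-in-navmesh test; supplied by the navigation mesh in practice.
    interface Walkable { boolean test(double x, double y); }

    // One steering step: if the agent's next position would leave the
    // walkable area, slide along a single axis instead of hitting the wall.
    static double[] step(double x, double y, double dx, double dy, Walkable walkable) {
        if (walkable.test(x + dx, y + dy)) return new double[] { x + dx, y + dy };
        if (walkable.test(x + dx, y)) return new double[] { x + dx, y };
        if (walkable.test(x, y + dy)) return new double[] { x, y + dy };
        return new double[] { x, y }; // fully blocked: stay put
    }
}
```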

    We relied on this good document to program the steering algorithm. It's from 1999, but it's still interesting...

    With the steering algorithm, we make sure that Buddy moves safely to his destination. (Look how proud he is!)

    Buddy is moving

    So, this is the navigation mesh algorithm.


    I must say that, throughout my research, there wasn't much pseudocode or code describing the algorithm as a whole. Only then did we realize that what people call a "navmesh" is actually a collage of algorithms rather than a single monolithic one.

    We also tried to have a cyclic grid with orthogonal cells (i.e. cells on the walls and ceiling), but it seemed that A* wasn't suited to a 3D environment made of flat orthogonal cells. My hypothesis is that we would need 3D cells for this kind of navigation mesh; otherwise the heuristic value of each cell can be misleading, since it depends on the actual 3D distance between the center of a flat cell and the destination point rather than the distance along the surface.

    So we reduced the scope of our navigation meshes, and we were able to move an AI agent through our organic dungeon. Here's a picture:

    [pathfinding1.png]

    Each cyan cube is a final waypoint found by the string pulling, and the blue lines represent collision meshes. Our AI currently still walks into walls, but the steering is still being implemented.

    Edited by khawk





    User Feedback


    Better to add a Minkowski sum. As a result of the string pulling algorithm you have a sequence of corners the character has to pass near. Add to each corner a circle with a radius large enough to bypass the wall, usually the radius of the collision capsule for a human-like character. Then connect the circles with tangent lines. As long as the portals are wide enough to pass through, the character will never walk through the walls.

    Also, if all NPCs have the same safe radius, the Minkowski sum can be accounted for at navmesh building time. To do this, just shift the wall projections inside the nav planes by the NPCs' safe radius along the wall normal.

    Edited by Fulcrum.013
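The commenter's corner-inflation idea can be sketched like this (our own illustration in plain Java, not the commenter's code): each string-pulled corner is pushed away from the wall along the corner's outward bisector by the agent's collision radius.

```java
final class CornerOffset {
    // Pushes a corner waypoint away from the wall by the agent's collision
    // radius, along the outward bisector of the two wall edges meeting there.
    // prev and next are the wall vertices adjacent to corner; all are {x, y}.
    static double[] offsetCorner(double[] prev, double[] corner, double[] next, double radius) {
        double[] d1 = normalize(corner[0] - prev[0], corner[1] - prev[1]);
        double[] d2 = normalize(corner[0] - next[0], corner[1] - next[1]);
        double[] bisector = normalize(d1[0] + d2[0], d1[1] + d2[1]);
        return new double[] { corner[0] + bisector[0] * radius,
                              corner[1] + bisector[1] * radius };
    }

    static double[] normalize(double x, double y) {
        double len = Math.hypot(x, y);
        return len == 0 ? new double[] { 0, 0 } : new double[] { x / len, y / len };
    }
}
```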




  • Advertisement
  • Advertisement
  • What is your GameDev Story?

    In 2019 we are celebrating 20 years of GameDev.net! Share your GameDev Story with us.

    (You must login to your GameDev.net account.)

  • Latest Featured Articles

  • Featured Blogs

  • Advertisement
  • Popular Now

  • Similar Content

    • By isu diss
      How can I find collision point and normal using sat? I read that sat can do that. Please help me?
      AABB.cpp
      int AABB::supportFaceCount() { // there are only three directions for every face of an AABB box. return 3; } XMVECTOR AABB::supportFaceDirection(int i) { // the three axes of an AABB box. along the x, y and z axis. static const XMVECTOR s_aabbAxes[] = { XMVectorSet(1, 0, 0, 0), XMVectorSet(0, 1, 0, 0), XMVectorSet(0, 0, 1, 0) }; return s_aabbAxes[i]; } int AABB::supportEdgeCount() { // there are only three directions for every edges of an AABB box. return 3; } XMVECTOR AABB::supportEdgeDirection(int i) { // every edge go along the x y, or z axis. static const XMVECTOR s_aabbEdges[] = { XMVectorSet(1, 0, 0, 0), XMVectorSet(0, 1, 0, 0), XMVectorSet(0, 0, 1, 0) }; return s_aabbEdges[i]; } void AABB::supportInterval(XMVECTOR direction, float& min, float& max) { XMVECTOR centre = XMVectorSet(Center[0], Center[1], Center[2], 1); // projection of the box centre float p = XMVector3Dot(centre, direction).m128_f32[0]; // projection of the box extents float rx = fabs(direction.m128_f32[0]) * Radius[0]; float ry = fabs(direction.m128_f32[1]) * Radius[1]; float rz = fabs(direction.m128_f32[2]) * Radius[2]; // the projection interval along the direction. float rb = rx + ry + rz; min = p - rb; max = p + rb; } bool ObjectsSeparatedAlongDirection(XMVECTOR& direction, AABB* a, AABB* b) { float mina, maxa; float minb, maxb; a->supportInterval(direction, mina, maxa); b->supportInterval(direction, minb, maxb); return (mina > maxb || minb > maxa); } bool ObjectsIntersected(AABB* a, AABB* b) { // test faces of A for(int i = 0; i < a->supportFaceCount(); i++) { XMVECTOR direction = a->supportFaceDirection(i); if(ObjectsSeparatedAlongDirection(direction, a, b)) return false; } // test faces of B for(int i = 0; i < b->supportFaceCount(); i++) { XMVECTOR direction = b->supportFaceDirection(i); if(ObjectsSeparatedAlongDirection(direction, a, b)) return false; } // test cross product of edges of A against edges of B. 
for(int i = 0; i < a->supportEdgeCount(); i++) { XMVECTOR edge_a = a->supportEdgeDirection(i); for(int j = 0; j < b->supportEdgeCount(); j++) { XMVECTOR edge_b = b->supportEdgeDirection(j); XMVECTOR direction = XMVector3Cross(edge_a, edge_b); if(ObjectsSeparatedAlongDirection(direction, a, b)) return false; } } return true; }  
    • By Alladin
      Got an amount of inspiration and made this game. It's humoristic and satiric, so don't take it too seriously)
      Here you play as a priest and your main task is catching the kids. Steam store page: 
      https://store.steampowered.com/app/915730/Catch_The_Kids_Priest_Simulator_Game/?beta=0
      Gameplay trailer:
      https://www.youtube.com/watch?v=7cRWIyXU1dc&t=0s
      If you have any suggestions, advice or something else, write here)
       
       
    • By acerskyline
      I encountered this problem when releasing D3D12 resources. I didn't use ComPtr so I have to release everything manually. After enabling debug layer, I saw this error:
      D3D12 ERROR: ID3D12CommandQueue::<final-release>: A Command Queue (0x000002EBE2F3C7A0:'Unnamed ID3D12CommandQueue Object') is being final-released while still in use by the GPU.  This is invalid and can lead to application instability. [ EXECUTION ERROR #921: OBJECT_DELETED_WHILE_STILL_IN_USE]
      I wanted to figure out which queue is it so I enabled debug device and used ReportLiveDeviceObjects trying to identify the queue. But it showed the same error. All my queues had names and ReportLiveDeviceObjects worked on other resources. After googling around, I found this page. It was a similar problem and it seemed it had something to do with finishing the unfinished frames.
      Before I made any changes, my clean up code looks like this:
      void Cleanup() { // wait for the gpu to finish all frames for (int i = 0; i < FrameBufferCount; ++i) { frameIndex = i; //fenceValue[i]++; //------------------------------------------> FIRST COMMENTED CODE SNIPPET //commandQueue->Signal(fence[i], fenceValue[i]); //------------> SECOND COMMENTED CODE SNIPPET WaitForPreviousFrame(i); } ////////////////////////////////////////////////////////////////////////////////////////////////////// // FROM HERE ON, CODES HAVE NOTHING TO DO WITH THE QUESTION, THEY ARE HERE FOR THE SAKE OF COMPLETENESS ////////////////////////////////////////////////////////////////////////////////////////////////////// // close the fence event CloseHandle(fenceEvent); // release gpu resources in the scene mRenderer.Release(); mScene.Release(); // imgui stuff ImGui_ImplDX12_Shutdown(); ImGui_ImplWin32_Shutdown(); ImGui::DestroyContext(); SAFE_RELEASE(g_pd3dSrvDescHeap); // direct input stuff DIKeyboard->Unacquire(); DIMouse->Unacquire(); DirectInput->Release(); // other stuff ... } Obviously it didn't work that's why I googled the problem. After seeing what is proposed on that page, I added the first and the second commented code snippet. The problem was immediately solved. The error above completely disappeared. Then I tried to comment the first code snippet out and only use the second code snippet. It also worked. So I am wondering what happened.
      1.Why do I have to signal it? Isn't WaitForPreviousFrame enough?
      2.What does that answer on that page mean? What does operating system have anything to do with this?
      For you information, the WaitForPreviousFrame function looks like this:
      void WaitForPreviousFrame(int frameIndexOverride = -1) { HRESULT hr; // swap the current rtv buffer index so we draw on the correct buffer frameIndex = frameIndexOverride < 0 ? swapChain->GetCurrentBackBufferIndex() : frameIndexOverride; // if the current fence value is still less than "fenceValue", then we know the GPU has not finished executing // the command queue since it has not reached the "commandQueue->Signal(fence, fenceValue)" command if (fence[frameIndex]->GetCompletedValue() < fenceValue[frameIndex]) { // we have the fence create an event which is signaled once the fence's current value is "fenceValue" hr = fence[frameIndex]->SetEventOnCompletion(fenceValue[frameIndex], fenceEvent); if (FAILED(hr)) { Running = false; } // We will wait until the fence has triggered the event that it's current value has reached "fenceValue". once it's value // has reached "fenceValue", we know the command queue has finished executing WaitForSingleObject(fenceEvent, INFINITE); } // increment fenceValue for next frame fenceValue[frameIndex]++; }  
    • By babaliaris
      So, I'm trying to solve this problem for months. I have already opened relative threads for this but now that I learned a lot of stuff  I still seek your help for this one. What I learned so far when dealing with texture Uploading and Pixel Reads:
      1) Make sure how many channels the source image file has in order to configure the glTexImage2D() to read the data correctly.
      2) Make sure that the width * num_OfChannels of the image is multiple of 4, so you won't have problems with the alignment. (OpenGL Common Mistakes, Texture Upload And Pixel Reads)
      3) Forcing any kind of texture (R, RG, RGB) to have exactly 4 channels ALWAYS works (But you waste a lot of memory)!!!
       
       
      Below, I'm going to show you step by step what I tried and what glitches are occurring, NOTICE that even if I'm creating more than one textures I ONLY render the first one a.jpg:
      First check out my texture code. As you can see I'm configuring glTexImage2D() to read pixel data based on how many channels they have (I'm only using textures with 3 and 4 channels) and I already made sure that the width * channels for each image is multiple of 4.
      #include "texture.h" #include "stb_image/stb_image.h" #include "glcall.h" #include "engine_error.h" #include <math.h> Texture::Texture(std::string path, bool trans, int unit) { //Reverse the pixels. stbi_set_flip_vertically_on_load(1); //Try to load the image. unsigned char *data = stbi_load(path.c_str(), &m_width, &m_height, &m_channels, 0); //Debug. float check = (m_width * m_channels) / 4.0f; printf("file: %20s \tchannels: %d, Divisible by 4: %s, width: %d, height: %d, widthXheight: %d\n", path.c_str(), m_channels, check == ceilf(check) ? "yes" : "no", m_width, m_height, m_width * m_height); /* //The length of the pixes row is multiple of 4. if ( check == ceilf(check) ) { GLCall(glPixelStorei(GL_UNPACK_ALIGNMENT, 4)); } //It's NOT!!!! else { GLCall(glPixelStorei(GL_UNPACK_ALIGNMENT, 1)); } */ //Image loaded successfully. if (data) { //Generate the texture and bind it. GLCall(glGenTextures(1, &m_id)); GLCall(glBindTexture(GL_TEXTURE_2D, m_id)); //Not Transparent texture. if (m_channels == 3) { GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, m_width, m_height, 0, GL_RGB, GL_UNSIGNED_BYTE, data)); } //Transparent texture. else if (m_channels == 4) { GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data)); } else { throw EngineError("Unsupported Channels!!!"); } //Texture Filters. GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT)); GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT)); GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST)); GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)); //Generate mipmaps. GLCall(glGenerateMipmap(GL_TEXTURE_2D)); } //Loading Failed. else throw EngineError("The was an error loading image: " + path); //Unbind the texture. GLCall(glBindTexture(GL_TEXTURE_2D, 0)); //Free the image data. 
stbi_image_free(data); } Texture::~Texture() { GLCall(glDeleteTextures(1, &m_id)); } void Texture::Bind(int unit) { GLCall(glActiveTexture(GL_TEXTURE0 + unit)); GLCall(glBindTexture(GL_TEXTURE_2D, m_id)); }  
      Now Check out the Main.cpp File
      #include "Renderer.h" #include "camera.h" Camera *camera; //Handle Key Input. void HandleInput(GLFWwindow *window) { //Exit the application with ESCAPE KEY. if (glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS) glfwSetWindowShouldClose(window, 1); //Move Forward. if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS) camera->Move(true); //Move Backward. if (glfwGetKey(window, GLFW_KEY_S) == GLFW_PRESS) camera->Move(false, true); //Move left. if (glfwGetKey(window, GLFW_KEY_A) == GLFW_PRESS) camera->Move(false, false, true); //Move right. if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS) camera->Move(false, false, false, true); } //Mouse Input. void MouseInput(GLFWwindow *window, double x, double y) { camera->UpdateRotation(x, y); } //Mouse Zoom input. void MouseZoom(GLFWwindow *window, double x, double y) { camera->UpdateZoom(x, y); } int main(void) { GLFWwindow* window; /* Initialize the library */ if (!glfwInit()) return -1; //Use openGL version 3.3 Core Profile. glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE); /* Create a windowed mode window and its OpenGL context */ window = glfwCreateWindow(800, 600, "Hello World", NULL, NULL); if (!window) { glfwTerminate(); return -1; } /* Make the window's context current */ glfwMakeContextCurrent(window); //Initialize GLEW. if (glewInit() != GLEW_OK) { glfwTerminate(); return -1; } //Set Callback functions. glfwSetCursorPosCallback(window, MouseInput); glfwSetScrollCallback(window, MouseZoom); //Disable the cursor. glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED); //Enable Depth Test. GLCall(glEnable(GL_DEPTH_TEST)); //Get the max texture size. 
GLint size; GLCall(glGetIntegerv(GL_MAX_TEXTURE_SIZE, &size)); std::cout << "Texture Max Size: "<< size << std::endl; camera = new Camera(glm::vec3(0.0f, 0.0f, 3.0f)); Renderer *renderer = new Renderer(); Shader *shader = new Shader("Shaders/basic_vertex.glsl", "Shaders/basic_fragment.glsl"); Texture *texture1 = new Texture("Resources/a.jpg", false); Texture *texture2 = new Texture("Resources/container.jpg", false); Texture *texture3 = new Texture("Resources/brick2.jpg", false); Texture *texture4 = new Texture("Resources/brick3.jpg", false); //Forget this texture. //Texture *texture5 = new Texture("Resources/brick4.jpg", false); Texture *texture6 = new Texture("Resources/container2.png", true); /* Loop until the user closes the window */ while (!glfwWindowShouldClose(window)) { //Handle input. HandleInput(window); //Clear the screen. renderer->ClearScreen(0.0f, 0.0f, 0.0f); //Render the cube. renderer->Render(texture1, shader, camera); //Update. renderer->Update(window); } //-------------Clean Up-------------// delete camera; delete renderer; delete shader; //forget about textures for now. //-------------Clean Up-------------// glfwTerminate(); return 0; }  
      I will put the code of the rest classes and the glsl shaders at the end if you want to check them out, but i assure you that they work just fine.
      Now if i run the code below I'm getting this:

       
      Now lets see what happens if I'm loading the textures one by one starting from the first one which is the only one i render.
       
      Attempt 1:
      Texture *texture1 = new Texture("Resources/a.jpg", false); //Texture *texture2 = new Texture("Resources/container.jpg", false); //Texture *texture3 = new Texture("Resources/brick2.jpg", false); //Texture *texture4 = new Texture("Resources/brick3.jpg", false); //Forget this texture. //Texture *texture5 = new Texture("Resources/brick4.jpg", false); //Texture *texture6 = new Texture("Resources/container2.png", true);
       
      Attempt 2:
      Texture *texture1 = new Texture("Resources/a.jpg", false); Texture *texture2 = new Texture("Resources/container.jpg", false); //Texture *texture3 = new Texture("Resources/brick2.jpg", false); //Texture *texture4 = new Texture("Resources/brick3.jpg", false); //Forget this texture. //Texture *texture5 = new Texture("Resources/brick4.jpg", false); //Texture *texture6 = new Texture("Resources/container2.png", true);
       
      Attempt 3:
      Texture *texture1 = new Texture("Resources/a.jpg", false); Texture *texture2 = new Texture("Resources/container.jpg", false); Texture *texture3 = new Texture("Resources/brick2.jpg", false); //Texture *texture4 = new Texture("Resources/brick3.jpg", false); //Forget this texture. //Texture *texture5 = new Texture("Resources/brick4.jpg", false); //Texture *texture6 = new Texture("Resources/container2.png", true);
       
      Attempt 4 (Orange Glitch Appears)
      Texture *texture1 = new Texture("Resources/a.jpg", false); Texture *texture2 = new Texture("Resources/container.jpg", false); Texture *texture3 = new Texture("Resources/brick2.jpg", false); Texture *texture4 = new Texture("Resources/brick3.jpg", false); //Forget this texture. //Texture *texture5 = new Texture("Resources/brick4.jpg", false); //Texture *texture6 = new Texture("Resources/container2.png", true);
       
      Attempt 5 (Grey Glitch Appears)
      Texture *texture1 = new Texture("Resources/a.jpg", false); Texture *texture2 = new Texture("Resources/container.jpg", false); Texture *texture3 = new Texture("Resources/brick2.jpg", false); Texture *texture4 = new Texture("Resources/brick3.jpg", false); //Forget this texture. //Texture *texture5 = new Texture("Resources/brick4.jpg", false); Texture *texture6 = new Texture("Resources/container2.png", true);
       
      If you see it, they only texture which I'm rendering is the first one, so how can the loading of the rest textures affect the rendering, since I'm not using them? (I'm binding the first texture before every draw call, you can check it out in the renderer class). This is so weird I literally can't think anything that causes the problem.
       
      Now check this out. I'm going to run Attempt 5 again but with these changes in the Texture class (I'm going to Force 4 channels no matter what the source file's channels😞
      #include "texture.h" #include "stb_image/stb_image.h" #include "glcall.h" #include "engine_error.h" #include <math.h> Texture::Texture(std::string path, bool trans, int unit) { //Reverse the pixels. stbi_set_flip_vertically_on_load(1); //Try to load the image. unsigned char *data = stbi_load(path.c_str(), &m_width, &m_height, &m_channels, 4); //FORCE 4 CHANNELS. //Debug. float check = (m_width * m_channels) / 4.0f; printf("file: %20s \tchannels: %d, Divisible by 4: %s, width: %d, height: %d, widthXheight: %d\n", path.c_str(), m_channels, check == ceilf(check) ? "yes" : "no", m_width, m_height, m_width * m_height); /* //The length of the pixes row is multiple of 4. if ( check == ceilf(check) ) { GLCall(glPixelStorei(GL_UNPACK_ALIGNMENT, 4)); } //It's NOT!!!! else { GLCall(glPixelStorei(GL_UNPACK_ALIGNMENT, 1)); } */ //Image loaded successfully. if (data) { //Generate the texture and bind it. GLCall(glGenTextures(1, &m_id)); GLCall(glBindTexture(GL_TEXTURE_2D, m_id)); /* //Not Transparent texture. if (m_channels == 3) { GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, m_width, m_height, 0, GL_RGB, GL_UNSIGNED_BYTE, data)); } //Transparent texture. else if (m_channels == 4) { GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data)); } else { throw EngineError("Unsupported Channels!!!"); } */ GLCall(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_width, m_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data)); //Texture Filters. GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT)); GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT)); GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST)); GLCall(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)); //Generate mipmaps. GLCall(glGenerateMipmap(GL_TEXTURE_2D)); } //Loading Failed. else throw EngineError("The was an error loading image: " + path); //Unbind the texture. 
GLCall(glBindTexture(GL_TEXTURE_2D, 0)); //Free the image data. stbi_image_free(data); } Texture::~Texture() { GLCall(glDeleteTextures(1, &m_id)); } void Texture::Bind(int unit) { GLCall(glActiveTexture(GL_TEXTURE0 + unit)); GLCall(glBindTexture(GL_TEXTURE_2D, m_id)); }
      Rendering is what I expected!
      But I still can't understand why this fixes it. In the first version of my texture class, which I don't force 4 channels but instead I'm using the default channels, I'm configuring glTexImage2D the right way based on the source files channels and also I'm SURE that the width * channels of each image file is multiple of 4.But again in the second version of my texture class, which solve's the problem my mind is thinking again that this might be an alignment problem but it's not, I made sure of that.
      So what else can cause such a problem? Does anybody knows the answer?
       
      Below you will find the rest of the code:
      Vertex Shader:
      #version 330 core layout(location = 0) in vec3 aPos; layout(location = 1) in vec3 aNormal; layout(location = 2) in vec2 aTexCoord; uniform mat4 model; uniform mat4 view; uniform mat4 proj; out vec2 TexCoord; void main() { gl_Position = proj * view * model * vec4(aPos, 1.0); TexCoord = aTexCoord; }  
      Fragment Shader:
      #version 330 core layout(location = 0) in vec3 aPos; layout(location = 1) in vec3 aNormal; layout(location = 2) in vec2 aTexCoord; uniform mat4 model; uniform mat4 view; uniform mat4 proj; out vec2 TexCoord; void main() { gl_Position = proj * view * model * vec4(aPos, 1.0); TexCoord = aTexCoord; }  
      Renderer Class:
#include "Renderer.h"

Renderer::Renderer()
{
	//Vertex Data.
	float vertices[] = {
		// positions          // normals           // texture coords
		-0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  0.0f, 0.0f,
		 0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  1.0f, 0.0f,
		 0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  1.0f, 1.0f,
		 0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  1.0f, 1.0f,
		-0.5f,  0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  0.0f, 1.0f,
		-0.5f, -0.5f, -0.5f,  0.0f,  0.0f, -1.0f,  0.0f, 0.0f,

		-0.5f, -0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  0.0f, 0.0f,
		 0.5f, -0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  1.0f, 0.0f,
		 0.5f,  0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  1.0f, 1.0f,
		 0.5f,  0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  1.0f, 1.0f,
		-0.5f,  0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  0.0f, 1.0f,
		-0.5f, -0.5f,  0.5f,  0.0f,  0.0f,  1.0f,  0.0f, 0.0f,

		-0.5f,  0.5f,  0.5f, -1.0f,  0.0f,  0.0f,  1.0f, 0.0f,
		-0.5f,  0.5f, -0.5f, -1.0f,  0.0f,  0.0f,  1.0f, 1.0f,
		-0.5f, -0.5f, -0.5f, -1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
		-0.5f, -0.5f, -0.5f, -1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
		-0.5f, -0.5f,  0.5f, -1.0f,  0.0f,  0.0f,  0.0f, 0.0f,
		-0.5f,  0.5f,  0.5f, -1.0f,  0.0f,  0.0f,  1.0f, 0.0f,

		 0.5f,  0.5f,  0.5f,  1.0f,  0.0f,  0.0f,  1.0f, 0.0f,
		 0.5f,  0.5f, -0.5f,  1.0f,  0.0f,  0.0f,  1.0f, 1.0f,
		 0.5f, -0.5f, -0.5f,  1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
		 0.5f, -0.5f, -0.5f,  1.0f,  0.0f,  0.0f,  0.0f, 1.0f,
		 0.5f, -0.5f,  0.5f,  1.0f,  0.0f,  0.0f,  0.0f, 0.0f,
		 0.5f,  0.5f,  0.5f,  1.0f,  0.0f,  0.0f,  1.0f, 0.0f,

		-0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,  0.0f, 1.0f,
		 0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,  1.0f, 1.0f,
		 0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,  1.0f, 0.0f,
		 0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,  1.0f, 0.0f,
		-0.5f, -0.5f,  0.5f,  0.0f, -1.0f,  0.0f,  0.0f, 0.0f,
		-0.5f, -0.5f, -0.5f,  0.0f, -1.0f,  0.0f,  0.0f, 1.0f,

		-0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,  0.0f, 1.0f,
		 0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,  1.0f, 1.0f,
		 0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,  1.0f, 0.0f,
		 0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,  1.0f, 0.0f,
		-0.5f,  0.5f,  0.5f,  0.0f,  1.0f,  0.0f,  0.0f, 0.0f,
		-0.5f,  0.5f, -0.5f,  0.0f,  1.0f,  0.0f,  0.0f, 1.0f
	};

	//Generate a VAO and a VBO.
	GLCall(glGenVertexArrays(1, &m_VAO));
	GLCall(glGenBuffers(1, &m_VBO));

	//Bind VAO and VBO.
	GLCall(glBindVertexArray(m_VAO));
	GLCall(glBindBuffer(GL_ARRAY_BUFFER, m_VBO));

	//Transfer The Data.
	GLCall(glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW));

	//Positions.
	GLCall(glEnableVertexAttribArray(0));
	GLCall(glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(float) * 8, (void *)0));

	//Normals.
	GLCall(glEnableVertexAttribArray(1));
	GLCall(glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(float) * 8, (void *)12));

	//Texture Coordinates.
	GLCall(glEnableVertexAttribArray(2));
	GLCall(glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(float) * 8, (void *)24));

	//Unbind The Buffers.
	GLCall(glBindVertexArray(0));
	GLCall(glBindBuffer(GL_ARRAY_BUFFER, 0));
}

Renderer::~Renderer()
{
}

void Renderer::ClearScreen(float r, float g, float b)
{
	GLCall(glClearColor(r, g, b, 1.0f));
	GLCall(glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT));
}

void Renderer::Update(GLFWwindow *window)
{
	glfwSwapBuffers(window);
	glfwPollEvents();
}

void Renderer::Render(Texture *texture, Shader *program, Camera *camera)
{
	//Bind VAO.
	GLCall(glBindVertexArray(m_VAO));

	//Bind The Program.
	program->Bind();

	//Set the unit to be used on the shader.
	program->SetUniform1i("diffuse", 0);

	//Bind the texture at unit zero.
	texture->Bind(0);

	glm::mat4 model = glm::mat4(1.0f);
	glm::mat4 view = glm::mat4(1.0f);
	glm::mat4 proj = glm::mat4(1.0f);

	//Get The View Matrix.
	view = camera->GetView();

	//Create The Perspective Projection.
	proj = glm::perspective(glm::radians(camera->m_fov), 800.0f / 600, 0.1f, 100.0f);

	//Set the transformation uniforms.
	program->SetUniformMat4f("model", model);
	program->SetUniformMat4f("view", view);
	program->SetUniformMat4f("proj", proj);

	//Draw Call.
	GLCall(glDrawArrays(GL_TRIANGLES, 0, 36));
}
       
       
      Shader Class:
#include "shader.h"
#include "glcall.h"
#include "engine_error.h"
#include <fstream>
#include <string>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

struct ShaderSource
{
	std::string vertex_src;
	std::string fragment_src;
};

static void ReadSources(std::string filename, bool is_vertex, struct ShaderSource *src)
{
	//Create a file object.
	std::ifstream file;

	//Open the file.
	file.open(filename, std::ios::in);

	//If the file opened successfully, read it.
	if (file.is_open())
	{
		//Size of the file.
		file.seekg(0, std::ios::end);
		std::streampos size = file.tellg();
		file.seekg(0, std::ios::beg);

		//Allocate memory to store the data.
		char *data = (char *)malloc(sizeof(char) * (size + (std::streampos)1));

		//Read the data from the file.
		file.read(data, size);

		//Terminate the string.
		data[file.gcount()] = '\0';

		//Close the file.
		file.close();

		//This was the vertex file.
		if (is_vertex)
			src->vertex_src = (const char *)data;
		//This was the fragment file.
		else
			src->fragment_src = (const char *)data;

		//Release the memory for the data, since it was copied into the ShaderSource structure.
		free(data);
	}
	//Problem opening the file.
	else
		throw EngineError("There was a problem opening the file: " + filename);
}

static unsigned int CompileShader(std::string source, GLenum type)
{
	//__________Local Variables__________//
	int length, success;
	//__________Local Variables__________//

	//Create the shader object.
	GLCall(unsigned int shader = glCreateShader(type));

	//std::string to const C string.
	const char *src = source.c_str();

	//Copy the source code into the shader object.
	GLCall(glShaderSource(shader, 1, &src, NULL));

	//Compile the shader.
	GLCall(glCompileShader(shader));

	//Get the shader info log length.
	GLCall(glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &length));

	//Get the shader compilation status.
	GLCall(glGetShaderiv(shader, GL_COMPILE_STATUS, &success));

	//Compilation Failed.
	if (!success)
	{
		//Error string.
		std::string error;

		//Allocate memory for the info log.
		char *info = (char *)malloc(sizeof(char) * (length + 1));

		//Get the info.
		GLCall(glGetShaderInfoLog(shader, length, NULL, info));

		//Terminate the string.
		info[length] = '\0';

		//Initialize the error message as a vertex compilation error.
		if (type == GL_VERTEX_SHADER)
			error = "Vertex Shader compilation error: ";
		//Initialize the error message as a fragment compilation error.
		else
			error = "Fragment Shader compilation error: ";

		//Add the info log to the message.
		error += info;

		//Free info.
		free(info);

		//Throw a message error.
		throw EngineError(error);
	}

	return shader;
}

static unsigned int CreateProgram(ShaderSource &src)
{
	//__________Local Variables__________//
	int length, success;
	//__________Local Variables__________//

	unsigned int program = glCreateProgram();
	unsigned int vertex_shader = CompileShader(src.vertex_src, GL_VERTEX_SHADER);
	unsigned int fragment_shader = CompileShader(src.fragment_src, GL_FRAGMENT_SHADER);

	GLCall(glAttachShader(program, vertex_shader));
	GLCall(glAttachShader(program, fragment_shader));
	GLCall(glLinkProgram(program));
	GLCall(glValidateProgram(program));

	//Get the program info log length.
	GLCall(glGetProgramiv(program, GL_INFO_LOG_LENGTH, &length));

	//Get the program link status.
	GLCall(glGetProgramiv(program, GL_LINK_STATUS, &success));

	//Linking Failed.
	if (!success)
	{
		//Error string.
		std::string error = "Linking Error: ";

		//Allocate memory for the info log.
		char *info = (char *)malloc(sizeof(char) * (length + 1));

		//Get the info.
		GLCall(glGetProgramInfoLog(program, length, NULL, info));

		//Terminate the string.
		info[length] = '\0';

		//Add the info log to the message.
		error += info;

		//Free info.
		free(info);

		//Throw a message error.
		throw EngineError(error);
	}

	return program;
}

Shader::Shader(std::string vertex_filename, std::string fragment_filename)
{
	//Create a ShaderSource object.
	ShaderSource source;

	//Read the sources.
	ReadSources(vertex_filename, true, &source);
	ReadSources(fragment_filename, false, &source);

	//Create the program.
	m_id = CreateProgram(source);

	//And start using it.
	this->Bind();
}

Shader::~Shader()
{
}

void Shader::Bind()
{
	GLCall(glUseProgram(m_id));
}

void Shader::SetUniform1i(std::string name, int value)
{
	//Bind the shader.
	this->Bind();

	//Get the uniform location.
	GLCall(int location = glGetUniformLocation(m_id, name.c_str()));

	//Set the value.
	GLCall(glUniform1i(location, value));
}

void Shader::SetUniformMat4f(std::string name, glm::mat4 mat)
{
	//Bind the shader.
	this->Bind();

	//Get the uniform location.
	GLCall(int location = glGetUniformLocation(m_id, name.c_str()));

	//Set the mat4.
	GLCall(glUniformMatrix4fv(location, 1, GL_FALSE, glm::value_ptr(mat)));
}

void Shader::SetUniformVec3(std::string name, glm::vec3 vec3)
{
	//Bind the shader.
	this->Bind();

	//Get the uniform location.
	GLCall(int location = glGetUniformLocation(m_id, name.c_str()));

	//Set the Uniform.
	GLCall(glUniform3f(location, vec3.x, vec3.y, vec3.z));
}

void Shader::SetUniform1f(std::string name, float value)
{
	//Bind the shader.
	this->Bind();

	//Get the uniform location.
	GLCall(int location = glGetUniformLocation(m_id, name.c_str()));

	//Set the value.
	GLCall(glUniform1f(location, value));
}
      Camera Class:
#include "camera.h"
#include <glm/gtc/matrix_transform.hpp>
#include <iostream>

Camera::Camera(glm::vec3 cam_pos)
	: m_cameraPos(cam_pos), m_pitch(0), m_yaw(-90), m_FirstTime(true), m_sensitivity(0.1), m_fov(45.0)
{
	//Calculate last x and y.
	m_lastx = 800 / 2.0f;
	m_lasty = 600 / 2.0f;
}

Camera::~Camera()
{
}

void Camera::Move(bool forward, bool backward, bool left, bool right)
{
	//Move Forward.
	if (forward)
		m_cameraPos += m_cameraFront * m_speed;
	//Move Backwards.
	else if (backward)
		m_cameraPos -= m_cameraFront * m_speed;

	//Move Left.
	if (left)
		m_cameraPos += glm::normalize(glm::cross(m_cameraUp, m_cameraFront)) * m_speed;
	//Move Right.
	else if (right)
		m_cameraPos -= glm::normalize(glm::cross(m_cameraUp, m_cameraFront)) * m_speed;
}

glm::mat4 Camera::GetView()
{
	return glm::lookAt(m_cameraPos, m_cameraPos + m_cameraFront, m_cameraUp);
}

void Camera::UpdateRotation(double xpos, double ypos)
{
	//First time, don't do anything.
	if (m_FirstTime)
	{
		m_lastx = xpos;
		m_lasty = ypos;
		m_FirstTime = false;
	}

	//Get the offsets for pitch and yaw.
	float xoffset = (xpos - m_lastx) * m_sensitivity;
	float yoffset = (m_lasty - ypos) * m_sensitivity;

	//Update lastX and lastY.
	m_lastx = xpos;
	m_lasty = ypos;

	//Add them to pitch and yaw.
	m_pitch += yoffset;
	m_yaw += xoffset;

	//Limits for pitch.
	if (m_pitch > 89.0f)
		m_pitch = 89.0f;
	if (m_pitch < -89.0f)
		m_pitch = -89.0f;

	//Calculate the new front vector.
	glm::vec3 front = glm::vec3(1.0f);
	front.x = cos(glm::radians(m_pitch)) * cos(glm::radians(m_yaw));
	front.y = sin(glm::radians(m_pitch));
	front.z = cos(glm::radians(m_pitch)) * sin(glm::radians(m_yaw));

	//Create the camera front direction vector.
	m_cameraFront = front;
}

void Camera::UpdateZoom(double x, double y)
{
	m_fov -= y;
	if (m_fov <= 25)
		m_fov = 25;
	else if (m_fov > 45)
		m_fov = 45;

	std::cout << m_fov << std::endl;
}
    • By Vincent Chase
Hello. I am writing a simple shadow-mapping algorithm with one directional light. I understand the basics but I can't make it work. Can you look at my code snippet and tell me what I'm doing wrong?
      Shadowmap is correct (1024x1024, one directional light looking at (0,0,0) from (0,10,0)).
float3 PositionFromDepth(in float depth, in float2 pixelCoord, float aspectRatio, float4x4 invView, float4x4 invProj)
{
	float2 cpos = (pixelCoord + 0.5f) * aspectRatio;
	cpos *= 2.0f;
	cpos -= 1.0f;
	cpos.y *= -1.0f;

	float4 positionWS = mul(float4(cpos, depth, 1.0f), invProj);
	positionWS /= positionWS.w;
	positionWS = mul(positionWS, invView);

	return positionWS.xyz;
}

float GetShadow(float3 posWS)
{
	float4 lpos = mul(float4(posWS, 1), gLightViewProj);

	//Re-homogenize position after interpolation.
	lpos.xyz /= lpos.w;

	//If the position is not visible to the light, don't illuminate it
	//(results in a hard light frustum).
	if (lpos.x < -1.0f || lpos.x > 1.0f ||
		lpos.y < -1.0f || lpos.y > 1.0f ||
		lpos.z < 0.0f || lpos.z > 1.0f)
		return 0.0f;

	//Transform clip space coords to texture space coords (-1:1 to 0:1).
	lpos.x = lpos.x / 2.0f + 0.5f;
	lpos.y = -lpos.y / 2.0f + 0.5f;

	//Sample the shadow map - point sampler.
	float shadowMapDepth = ShadowMap.Sample(Sampler, lpos.xy).r;

	const float bias = 0.001f;
	if (shadowMapDepth < lpos.z - bias)
		return 0.0f;

	return 1.0f;
}

float4 ps_main(PixelShaderInput pin) : SV_Target
{
	float2 pixelCoord = pin.texcoord.xy;
	float depth = Depth.SampleLevel(Sampler, pixelCoord, 0).x;

	// Outgoing light direction (vector from world-space pixel position to the "eye").
	float3 posWS = PositionFromDepth(depth, pixelCoord, gScreenDim.w, gInvView, gInvProj);
	float shadow = GetShadow(posWS);

	return float4(shadow.xxx, 1);
}

The results I get are on the attached image. To me it looks like the world-space position extracted from depth is not really a world-space position, but I don't know if that's the case.
      I'll appreciate any kind of help on this.
       
