    Math for Game Developers: Graphs and Pathfinding

    Math and Physics

    Posted by BSVino, 06/09/16
    Math for Game Developers is exactly what it sounds like: a weekly instructional YouTube series in which I show you how to use math to make your games. Every Thursday we'll learn how to implement one game design element, starting from the underlying mathematical concept and ending with its C++ implementation. The videos will teach you everything you need to know; all you need is a basic understanding of algebra and trigonometry. If you want to follow along with the code sections, it will help to know a bit of programming already, but it's not necessary. You can download the source code that I'm using from GitHub, linked in the description of each video. If you have questions about the topics covered or requests for future topics, I would love to hear them! Leave a comment, or ask me on my Twitter, @VinoBS.
    The video below contains the playlist for all the videos in this series, which can be accessed via the playlist icon at the top of the embedded video frame. The first video in the series is loaded automatically.

    Graphs and Pathfinding

    [playlist=PLW3Zl3wyJwWO7p6xpTzs-QR58DRg2Ln0V]




    User Feedback


    You had me with the math discussion and the "Khan Academy" style tutorial. I think this is a great way to teach this stuff. But when you started typing in the C++ code and explaining pointers, not using vectors with pointers, using indices for data, and then showing the code in memory, it felt like you were trying to teach a beginner the language itself, using a Node/Edge data structure as a learning tool.

    I'm actually a big fan of your videos and I've recommended them to a lot of people in the past to help with understanding complex concepts like quaternions, etc. I think this is the first time I've seen you break into code, though (maybe I've only watched the ones where you don't), and although you did cover the basics of graphs, it really felt like you could have spent the 'code' time explaining a little more about graph basics, or even showing a simple BFS to get from A to B. (This is just referring to the first video in the series. I'm sure you cover the bases by the end.)

    In any case, Budapest made me lol. I was expecting Byzantium.

    Please keep making videos.
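
    For readers looking for the "simple BFS to get from A to B" mentioned above, here is a minimal sketch of my own (not code from the videos) on an adjacency-list graph of node indices:

    #include <queue>
    #include <vector>

    // Returns the path from 'start' to 'goal' as node indices, or an empty
    // vector if no path exists. 'adj[n]' lists the neighbours of node n.
    std::vector<int> bfsPath(const std::vector<std::vector<int>>& adj,
                             int start, int goal)
    {
        std::vector<int> parent(adj.size(), -1); // -1 means "not visited yet"
        std::queue<int> frontier;
        frontier.push(start);
        parent[start] = start;
        while (!frontier.empty())
        {
            int node = frontier.front();
            frontier.pop();
            if (node == goal)
            {
                // Walk the parent chain back to the start, then reverse it.
                std::vector<int> path;
                for (int n = goal; n != start; n = parent[n]) path.push_back(n);
                path.push_back(start);
                return { path.rbegin(), path.rend() };
            }
            for (int next : adj[node])
            {
                if (parent[next] == -1)
                {
                    parent[next] = node;
                    frontier.push(next);
                }
            }
        }
        return {}; // goal unreachable
    }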





  • Similar Content

    • By calioranged
      *** Beginner question ***
      To my current understanding of mipmapping, if you have a 512x512 texture downsized to 256x256, then each pixel of the downsized version covers 4 pixels of the full-sized texture.
      If the nearest neighbour method is used, then the colour of each pixel on the downsized version will be determined by whichever pixel has its centre closest to the relevant texture coordinate, as demonstrated below:

      Whereas if the linear method is used, then the colour of each pixel on the downsized version will be determined by a weighted average of the four full size pixels:

      But if mipmapping is not used, then how is the colour of each pixel determined?
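
      For illustration, the "linear" 2x2 weighted average described above can be written out directly. This is a minimal sketch of my own, not code from the thread; the Image struct is hypothetical:

      #include <cstdint>
      #include <vector>

      struct Image {
          int width = 0, height = 0;
          std::vector<uint8_t> rgba; // width * height * 4 bytes
      };

      // Halve an image by averaging each 2x2 block of source pixels
      // (a box filter, which is what plain GL_LINEAR mip generation does).
      Image downsample2x(const Image& src) {
          Image dst;
          dst.width  = src.width  / 2;
          dst.height = src.height / 2;
          dst.rgba.resize(size_t(dst.width) * dst.height * 4);
          for (int y = 0; y < dst.height; y++) {
              for (int x = 0; x < dst.width; x++) {
                  for (int c = 0; c < 4; c++) { // R, G, B, A channels
                      int sum = 0;
                      for (int dy = 0; dy < 2; dy++)
                          for (int dx = 0; dx < 2; dx++)
                              sum += src.rgba[((size_t(2*y+dy) * src.width) + (2*x+dx)) * 4 + c];
                      dst.rgba[((size_t(y) * dst.width) + x) * 4 + c] = uint8_t(sum / 4);
                  }
              }
          }
          return dst;
      }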
    • By Psychopathetica
      Hey guys. I'm in a bit of a pickle, so here it goes. I created a simple 2D bloom effect for myself using a blend of SpriteBatches with shaders. Works great! But I feel it is a bit slow. The problem is, I'm using a ton of rendertargetviews, mainly as a tool to output a resulting texture in the form of a rendertargetview. Even in the simplest of programs without the bloom and only using 3 rendertargetviews, you can see it bog down. Not that my computer is slow or anything; it's a nice Nvidia GeForce 1060 gaming laptop. So I must be doing something wrong with the rendertargetviews.
      Here is how I have bloom taking place, with each of these having its own rendertargetview, texture, and shaderresourceview:
      1) Draw unscaled sprite
      2) Draw unscaled exposure tone mapped / gamma corrected
      Note that the unscaled 1) and 2) are separate from the actual bloom process, but 2) will be combined later with the scaled down bloomed texture
      3) Draw scaled down sprite
      4) Draw scaled down exposure tone mapped / gamma corrected
      5) Draw scaled down bright filter (only revealing the brightest of colors at a certain bright filter)
      6) Draw scaled down horizontal blur
      7) Draw scaled down vertical blur
      8) Draw scaled down horizontal blur again
      9) Draw scaled down vertical blur again
      10) Draw contrast (this is scaled back to the original size)
      11) Draw combined 2) and 10) onto the backbuffer
      Wow, that was a lot! Anyway, I haven't checked the framerate yet, but from the looks of it it's around 20-30 fps, depending on the size of my Gaussian kernel. Probably worse. Hell, even at a kernel size of 1 it is slow because of how many rendertargetviews I've got going. Is there a better method of outputting the textures I need than using a dozen or so rendertargetviews? Note that in my program nothing is deprecated. I'm using the DirectXTK library. I'm not using .fx files, only hlsl files. I'm not using effects, only pixel shaders, about 7 of them. All of my pointers are ComPtrs. And so on and so forth. Thanks in advance.
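
      One common way to cut down on render targets for the repeated blur steps (6-9 above) is to ping-pong between just two offscreen targets instead of giving every pass its own. A minimal sketch of the idea, assuming hypothetical helpers setBlurPass and drawFullscreenQuad (the latter could be a SpriteBatch draw with a custom pixel shader):

      #include <d3d11.h>
      #include <wrl/client.h>
      using Microsoft::WRL::ComPtr;

      struct PingPongTargets
      {
          ComPtr<ID3D11RenderTargetView>   rtv[2]; // two views of two textures
          ComPtr<ID3D11ShaderResourceView> srv[2]; // same textures as inputs
      };

      // Runs 'iterations' horizontal+vertical blur pairs, reusing two targets.
      // Returns the SRV of the last texture written.
      ID3D11ShaderResourceView* RunBlurPasses(
          ID3D11DeviceContext* ctx,
          PingPongTargets& pp,
          ID3D11ShaderResourceView* brightFiltered,
          int iterations,
          void (*setBlurPass)(ID3D11DeviceContext*, bool horizontal),
          void (*drawFullscreenQuad)(ID3D11DeviceContext*, ID3D11ShaderResourceView*))
      {
          ID3D11ShaderResourceView* input = brightFiltered;
          int write = 0;
          for (int pass = 0; pass < iterations * 2; ++pass)
          {
              // Unbind slot 0 first: the target we are about to write may
              // still be bound as the previous pass's input.
              ID3D11ShaderResourceView* nullSRV = nullptr;
              ctx->PSSetShaderResources(0, 1, &nullSRV);

              ctx->OMSetRenderTargets(1, pp.rtv[write].GetAddressOf(), nullptr);
              setBlurPass(ctx, (pass % 2) == 0); // even pass = horizontal
              drawFullscreenQuad(ctx, input);    // samples 'input' into rtv[write]

              input = pp.srv[write].Get(); // output becomes next pass's input
              write = 1 - write;
          }
          return input; // combine this with the tone-mapped image afterwards
      }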
    • By Uttam Kushwah
      There is an article on Wikipedia that says we can change the behavior of a game entity by adding or removing components at runtime, but how that works, I don't know.
       
      What I am doing is this: I have a gameObject class which has four components.
       class gameObject
       {
       public:
           #define MAX_COMPONENT 4
           unsigned int components[MAX_COMPONENT];
           gameObject()
           {
               for(int i = 0; i < MAX_COMPONENT; i++)
               {
                   components[i] = 0; // fixed: the original assigned to 'components' without an index
               }
               //parent_id=-1;
           }
       };
      I don't use inheritance. Whenever I make a new game entity, I add a gameObject to the gameObjectManager class from the game entity's constructor, with all of its components (the indices for the rigid body, mesh, etc.) filled in at the time of creation.
      Then I use these gameObjects in the individual systems that run the game, like below:
      // For the renderer
      for(unsigned int i = 0; i < manager->gameObjects.size(); i++)
      {
          // note: indexing the gameObject itself was missing in the original snippet
          unsigned int meshIndex = manager->gameObjects[i].components[MY_MESH]; // mesh data
          unsigned int bodyIndex = manager->gameObjects[i].components[MY_BODY]; // presumably the rigid-body component; bodyIndex was undefined in the snippet
          mat4 trans = (*transforms)[bodyIndex]; // the transformation matrix extracted and spit out by the physics engine
          mesh_Entries[meshIndex].init();
          GLuint lMM = glGetUniformLocation(programHandle, "Model");
          glUniformMatrix4fv(lMM, 1, GL_FALSE, &trans[0][0]);
          mesh_Entries[meshIndex].bindTexture();
          glBindBuffer(GL_ARRAY_BUFFER, mesh_Entries[meshIndex].getVertexBuffer());
          glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mesh_Entries[meshIndex].getIndexBuffer());
          pickColor = vec4(1.0f);
          pickScale = mat4(1.0f);
          lMM = glGetUniformLocation(programHandle, "pick");
          glUniform4fv(lMM, 1, &pickColor[0]);
          lMM = glGetUniformLocation(programHandle, "pickScale");
          glUniformMatrix4fv(lMM, 1, GL_FALSE, &pickScale[0][0]);
          // This is very basic rendering: every object is drawn indexed, with
          // indices in an element buffer and vertices in a VBO, each gameObject
          // having its own VAO and IBO. Optimization needs to be done later on.
          glDrawElements(
                          GL_TRIANGLES,                             // mode
                          mesh_Entries[meshIndex].getIndicesSize(), // count
                          GL_UNSIGNED_SHORT,                        // type
                          (void*)0                                  // element array buffer offset
                        );
      }
      But what I am not getting is how to add a new component like LogicalData to the gameObject; not how to point to it from the gameObject, but how to build one. Should this be my approach?
      struct LogicalData
      {
          float running_speed;
          vec3 seek_position;
          vec3 attack_point;
      };

      class Character : public CharacterController
      {
      private:
          gameObject* me;
      public:
          // methods to manipulate gameObject content using the component id for
          // each component in its container, i.e. to update component[LOGICAL_DATA]
          // in the gameLogicContainer
      };
      ...and then a global container which holds this logical data, where every entity refers to it by id through its gameObject. Or should I not be doing this at all? I could instead just put all the logical data into the game entity like this, and push the physics and rendering data back to the gameObject:
      class Character : public CharacterController
      {
      private:
          float running_speed;
          vec3 seek_position;
          vec3 attack_point;
      public:
          // methods to manipulate the above
      };
      Any comments will be greatly appreciated.
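
      As one possible direction, here is a minimal sketch of my own, reusing the gameObject and LogicalData definitions above; the names NO_COMPONENT, MY_BODY, LOGICAL_DATA, gameLogicContainer and the add/remove helpers are all hypothetical. The runtime add/remove behavior from the Wikipedia article can fall out of this index-based design simply by reserving a sentinel index that means "component absent":

      #include <vector>

      const unsigned NO_COMPONENT = ~0u; // sentinel: entity has no such component

      enum ComponentType { MY_BODY, MY_MESH, LOGICAL_DATA, COMPONENT_TYPE_COUNT };

      std::vector<LogicalData> gameLogicContainer; // global pool of LogicalData

      // Attach at runtime: allocate a LogicalData in the pool, remember its index.
      void addLogicalData(gameObject& obj, const LogicalData& data)
      {
          obj.components[LOGICAL_DATA] = (unsigned)gameLogicContainer.size();
          gameLogicContainer.push_back(data);
      }

      // Detach at runtime: forget the index; systems skip entities without it.
      void removeLogicalData(gameObject& obj)
      {
          obj.components[LOGICAL_DATA] = NO_COMPONENT;
      }

      // The AI system then only processes entities that have the component:
      // if(obj.components[LOGICAL_DATA] != NO_COMPONENT)
      //     update(gameLogicContainer[obj.components[LOGICAL_DATA]]);

      A real implementation would also need to recycle freed slots in the pool (for example with a free list), but the behavioral change at runtime comes purely from systems checking whether the slot is present.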
       
       
    • By L4ZZA
      Hi there.
      I'm trying to render an object using an index buffer instead of just having all the combinations of vertices with each texture coordinate. I'd like to have two buffers, one for cube vertices and one for the tex coordinates and then add them in an interleaved way to the VBO.
      In the code below you can see the two versions. The commented-out version at the bottom works perfectly with my design, but it's a lot to process, with many duplicated vertices and tex coordinates. Above it is my attempt to convert it to indexed data.
      const std::vector<glm::vec3> vertices
      {
          {-0.5f, -0.5f, -0.5f}, // a = 0, tex {0.0f, 0.0f}
          { 0.5f, -0.5f, -0.5f}, // b = 1, tex {1.0f, 0.0f}
          { 0.5f,  0.5f, -0.5f}, // c = 2, tex {1.0f, 1.0f}
          {-0.5f,  0.5f, -0.5f}, // d = 3, tex {0.0f, 1.0f}
          {-0.5f, -0.5f,  0.5f}, // e = 4, tex {0.0f, 0.0f}
          { 0.5f, -0.5f,  0.5f}, // f = 5, tex {1.0f, 0.0f}
          { 0.5f,  0.5f,  0.5f}, // g = 6, tex {1.0f, 1.0f}
          {-0.5f,  0.5f,  0.5f}, // h = 7, tex {0.0f, 1.0f}
      };

      const std::vector<unsigned int> indices
      {
          0, 1, 2,    2, 3, 0,
          4, 5, 6,    6, 7, 4,
          7, 3, 0,    0, 4, 7,
          6, 2, 1,    1, 5, 6,
          0, 1, 5,    5, 4, 0,
          3, 2, 6,    6, 7, 3,
      };

      const std::vector<glm::vec2> tex_coord
      {
          {0.0f, 0.0f},
          {1.0f, 0.0f},
          {1.0f, 1.0f},
          {0.0f, 1.0f},
      };

      const std::vector<unsigned int> tex_indices
      {
          0, 1, 2,    2, 3, 0,
          0, 1, 2,    2, 3, 0,
          1, 2, 3,    3, 0, 1,
          1, 2, 3,    3, 0, 1,
          3, 2, 1,    1, 0, 3,
          3, 2, 1,    1, 0, 3,
      };

      // Old working version for comparison (one interleaved position + tex
      // coordinate per triangle corner, 36 vertices in total, lots of
      // duplication), excerpt:
      // static constexpr float vertices[] {
      //     -0.5f, -0.5f, -0.5f, 0.0f, 0.0f, // a = 0 - t = 0
      //      0.5f, -0.5f, -0.5f, 1.0f, 0.0f, // b = 1 - t = 1
      //      0.5f,  0.5f, -0.5f, 1.0f, 1.0f, // c = 2 - t = 2
      //     ...
      // };

      What I'm struggling with is how to then send this data to the GPU in the correct way.
      (see code below)
      // more...
      m_vb = new VertexBuffer();
      for(int i = 0; i < vertices.size(); ++i)
      {
          // add vertex to VBO
          //..
          // add tex coordinate to VBO
          //..
      }
      VertexBufferLayout layout;
      layout.Push<float>(3);
      layout.Push<float>(2);
      m_va.AddBuffer(*m_vb, layout);

      m_ib = new IndexBuffer();
      for(int i = 0; i < indices.size(); ++i)
      {
          // add index to IBO
          //..
          // add tex index to IBO
          //..
      }
      m_ib->SendToGPU();

      // other....
      // drawing
      // using shader program
      // bound VAO
      // bound index buffer
      glDrawElements(GL_TRIANGLES, ib.GetCount(), GL_UNSIGNED_INT, nullptr);
      I believe what I'm trying to achieve is feasible, but is this the right approach? Anyone with a better idea, or a solution that doesn't include specifying all the combinations of vertices and tex coordinates one by one?
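
      For what it's worth, core OpenGL's glDrawElements consumes a single index per vertex that addresses all attributes at once, so two parallel index buffers (one for positions, one for tex coordinates) cannot be drawn directly. A common workaround is to expand each unique (position index, tex index) pair into one vertex and build a single index list. Here is a minimal sketch of that idea (my own illustration; the Vertex struct and function name are hypothetical):

      #include <map>
      #include <utility>
      #include <vector>
      #include <glm/glm.hpp>

      struct Vertex { glm::vec3 pos; glm::vec2 uv; };

      void buildSingleIndexMesh(const std::vector<glm::vec3>& positions,
                                const std::vector<glm::vec2>& texCoords,
                                const std::vector<unsigned>& posIndices,
                                const std::vector<unsigned>& texIndices,
                                std::vector<Vertex>& outVertices,
                                std::vector<unsigned>& outIndices)
      {
          // Maps a (position index, texcoord index) pair to its merged vertex index.
          std::map<std::pair<unsigned, unsigned>, unsigned> cache;
          for (size_t i = 0; i < posIndices.size(); ++i)
          {
              auto key = std::make_pair(posIndices[i], texIndices[i]);
              auto it = cache.find(key);
              if (it == cache.end())
              {
                  // First time this pair appears: emit a new interleaved vertex.
                  it = cache.emplace(key, unsigned(outVertices.size())).first;
                  outVertices.push_back({ positions[key.first], texCoords[key.second] });
              }
              outIndices.push_back(it->second);
          }
      }

      For the cube above this yields at most 24 unique vertices (4 per face) instead of 36, and the output feeds a normal interleaved VBO plus one IBO.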
    • By congard
      I ran into a problem when testing a program on an AMD GPU. When tested on Nvidia and Intel HD Graphics, everything works fine. On AMD, the problem occurs precisely when trying to bind the texture. Because of this problem, the shader has no shadow maps and only a black screen is visible. Texture ids and other parameters are loaded successfully. Below are the code snippets.
      Here is the complete problem area of the rendering code:
      #define cfgtex(texture, internalformat, format, width, height) \
          glBindTexture(GL_TEXTURE_2D, texture); \
          glTexImage2D(GL_TEXTURE_2D, 0, internalformat, width, height, 0, format, GL_FLOAT, NULL);

      void render() {
          for (GLuint i = 0; i < count; i++) { // start id = 10
              glUniform1i(samplersLocations[i], startId + i);
              glActiveTexture(GL_TEXTURE0 + startId + i);
              glBindTexture(GL_TEXTURE_CUBE_MAP, texturesIds[i]);
          }

          renderer.mainPass(displayFB, rbo);

          cfgtex(colorTex, GL_RGBA16F, GL_RGBA, params.scrW, params.scrH);
          cfgtex(dofTex, GL_R16F, GL_RED, params.scrW, params.scrH);
          cfgtex(normalTex, GL_RGB16F, GL_RGB, params.scrW, params.scrH);
          cfgtex(ssrValues, GL_RG16F, GL_RG, params.scrW, params.scrH);
          cfgtex(positionTex, GL_RGB16F, GL_RGB, params.scrW, params.scrH);

          glClear(GL_COLOR_BUFFER_BIT);
          glClearBufferfv(GL_COLOR, 1, ALGINE_RED); // dof buffer

          // viewport to window size
          glViewport(0, 0, WIN_W, WIN_H);

          // updating view matrix (because camera position was changed)
          createViewMatrix();

          // sending lamps parameters to fragment shader
          sendLampsData();

          glEnableVertexAttribArray(cs.inPosition);
          glEnableVertexAttribArray(cs.inNormal);
          glEnableVertexAttribArray(cs.inTexCoord);

          // drawing
          //glUniform1f(ALGINE_CS_SWITCH_NORMAL_MAPPING, 1); // with mapping
          glEnableVertexAttribArray(cs.inTangent);
          glEnableVertexAttribArray(cs.inBitangent);

          for (size_t i = 0; i < MODELS_COUNT; i++) drawModel(models[i]);
          for (size_t i = 0; i < LAMPS_COUNT; i++) drawModel(lamps[i]);

          glDisableVertexAttribArray(cs.inPosition);
          glDisableVertexAttribArray(cs.inNormal);
          glDisableVertexAttribArray(cs.inTexCoord);
          glDisableVertexAttribArray(cs.inTangent);
          glDisableVertexAttribArray(cs.inBitangent);

          ...
      }
      renderer.mainPass code:
      void mainPass(GLuint displayFBO, GLuint rboBuffer) {
          glBindFramebuffer(GL_FRAMEBUFFER, displayFBO);
          glBindRenderbuffer(GL_RENDERBUFFER, rboBuffer);
          glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, params->scrW, params->scrH);
          glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rboBuffer);
          glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
      }
      glsl:
      #version 400 core
      ...
      uniform samplerCube shadowMaps[MAX_LAMPS_COUNT];
      There are no errors during shader compilation. As far as I understand, the texture for some reason is not bound. The depth maps themselves are drawn correctly.
      I access the elements of the array as follows:
      for (int i = 0; i < count; i++) {
          ...
          depth = texture(shadowMaps[i], fragToLight).r;
          ...
      }
      Also, it was found that a black screen occurs when the samplerCube array is larger than the number of bound textures. For example, with MAX_LAMPS_COUNT = 2 and count = 1, then
      uniform samplerCube shadowMaps[2];
      glUniform1i(samplersLocations[0], startId + 0);
      glActiveTexture(GL_TEXTURE0 + startId + 0);
      glBindTexture(GL_TEXTURE_CUBE_MAP, texturesIds[0]);
      In this case, there will be a black screen.
      But if MAX_LAMPS_COUNT = 1 (uniform samplerCube shadowMaps[1]) then shadows appear, but a new problem also arises: 


      Do not pay attention to the fact that everything is greenish; that is due to incorrect color correction settings for the video card.
      Any ideas? I would be grateful for any help.
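
      One pattern worth trying, sketched below under the assumption that the problem is the unused sampler array elements: AMD's driver tends to be stricter than Nvidia's about every element of a samplerCube array having a valid cube texture bound, even elements the shader never reaches. Binding a tiny dummy cubemap to the remaining units avoids sampling an incomplete texture (makeDummyCubemap is a hypothetical helper, not code from the thread):

      GLuint makeDummyCubemap() {
          GLuint tex;
          glGenTextures(1, &tex);
          glBindTexture(GL_TEXTURE_CUBE_MAP, tex);
          const float depth = 1.0f; // "no shadow" depth value
          for (int face = 0; face < 6; face++) {
              glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_DEPTH_COMPONENT,
                           1, 1, 0, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);
          }
          glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
          glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
          return tex;
      }

      // After binding the 'count' real shadow maps, fill the rest of the array:
      for (GLuint i = count; i < MAX_LAMPS_COUNT; i++) {
          glUniform1i(samplersLocations[i], startId + i);
          glActiveTexture(GL_TEXTURE0 + startId + i);
          glBindTexture(GL_TEXTURE_CUBE_MAP, dummyCubemap);
      }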