Set Up GTest for TDD in C++ and Visual Studio

8Observer8


I made an example project in VS 2015: SortFunctions.zip. This project shows you how to set up Google Test in Visual Studio and run simple unit tests.

Note: if you have another version of VS, then before you run the unit tests you need to select your version of the Platform Toolset (for example, VS 2017), as in this screenshot:

[Screenshot: PlatformToolset.png — selecting the Platform Toolset in the project properties]

The Google Test library is included in the solution as source and is placed in the "Libs" folder. You need to:

  • open the solution file "SortFunctions.sln"
  • select your version of VS, for example VS 2017 instead of VS 2015, as in the screenshot above
  • set the "SortFunctions_UnitTests" project as the StartUp Project: right-click the "SortFunctions_UnitTests" project and select "Set as StartUp Project"
  • press Ctrl+F5 to run the unit tests

You will see these paths in the "SortFunctions_UnitTests" project properties under "C/C++" -> "General" -> "Additional Include Directories":

$(SolutionDir)Libs\gtest-1.8.1\include
$(SolutionDir)Libs\gtest-1.8.1
$(SolutionDir)SortFunction

This solution includes two projects:

  • SortFunctions - this project contains the modules that we want to test, for example the bubbleSort() function
  • SortFunctions_UnitTests - this project contains the unit tests

The "SortFunctions" project has two files:

SortFunctions.h

#pragma once
 
// Each function sorts `array` of `amount` elements in ascending order.
extern void bubbleSort(int *array, unsigned int amount);
 
extern void countingSort(int *array, unsigned int amount);

SortFunctions.cpp

#include "SortFunctions.h"
 
void bubbleSort(int *array, unsigned int amount)
{
 
}
 
void countingSort(int *array, unsigned int amount)
{
 
}
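In the TDD cycle you run the tests first, watch them fail against these empty bodies, and then write just enough code to make them pass. For reference, a minimal bubbleSort() implementation that would satisfy the tests below might look like this (my sketch, not part of SortFunctions.zip):

#include "SortFunctions.h"
 
// A minimal bubble sort: repeatedly swaps adjacent out-of-order elements;
// after pass i, the largest i + 1 values are in their final positions.
void bubbleSort(int *array, unsigned int amount)
{
    if (array == nullptr || amount < 2) return;
 
    for (unsigned int i = 0; i < amount - 1; i++)
    {
        for (unsigned int j = 0; j < amount - 1 - i; j++)
        {
            if (array[j] > array[j + 1])
            {
                int temp = array[j];
                array[j] = array[j + 1];
                array[j + 1] = temp;
            }
        }
    }
}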

The "SortFunctions_UnitTests" project has tests. For example, this is the "bubbleSortTests.cpp" with two tests. The first test is for positive numbers and the second test is for negative numbers:

bubbleSortTests.cpp

#include <gtest/gtest.h>
 
#include "SortFunctions.h"
 
TEST(bubbleSortTest, AllPositiveElements)
{
    // Arrange
    const unsigned int amount = 5;
    int actualArray[amount] = { 5, 3, 10, 2, 7 };
    int expectedArray[amount] = { 2, 3, 5, 7, 10 };
 
    // Act
    bubbleSort(actualArray, amount);
 
    // Assert
    for (size_t i = 0; i < amount; i++)
    {
        ASSERT_EQ(expectedArray[i], actualArray[i]);
    }
}
 
TEST(bubbleSortTest, AllNegativeElements)
{
    // Arrange
    const unsigned int amount = 5;
    int actualArray[amount] = { -5, -3, -10, -2, -7 };
    int expectedArray[amount] = { -10, -7, -5, -3, -2 };
 
    // Act
    bubbleSort(actualArray, amount);
 
    // Assert
    for (size_t i = 0; i < amount; i++)
    {
        ASSERT_EQ(expectedArray[i], actualArray[i]);
    }
}
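Note that a Google Test executable also needs an entry point that runs all registered tests. If the project does not link gtest's own gtest_main, a standard main.cpp looks like this (the zip may already contain the equivalent):

#include <gtest/gtest.h>
 
// Initializes Google Test and runs every TEST() registered in the binary.
int main(int argc, char **argv)
{
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}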

 

 



Comments

" I opened Setting of the project and add these lines to "C/C++" -> "General" -> "Additional Include Directories": "

It's a lot better creating .props files for your projects under the Property Manager. You can reuse them (put them in a common folder) for every other project that uses similar libs/settings. Or you can copy and 'fork' them for a new project. It also relieves the frustration of not picking Release settings when in Debug or vice-versa (What a load of crap that is, right?). If interested, it has a tab near the Solution Explorer pane tab, close to the bottom.
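For readers who haven't used property sheets: a .props file is a small MSBuild XML fragment that you attach to projects via the Property Manager. A minimal sheet carrying this article's include paths might look like the following sketch (the file name and contents are illustrative, not from the article):

<?xml version="1.0" encoding="utf-8"?>
<!-- gtest.props (hypothetical name): reusable Additional Include Directories -->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemDefinitionGroup>
    <ClCompile>
      <AdditionalIncludeDirectories>$(SolutionDir)Libs\gtest-1.8.1\include;$(SolutionDir)Libs\gtest-1.8.1;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ClCompile>
  </ItemDefinitionGroup>
</Project>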

Oh and... Tests? Tests? We don't need no stinkin' tests. :D j/k

7 hours ago, fleabay said:

It's a lot better creating .props files for your projects under the Property Manager. You can reuse them (put them in a common folder)

That's useful for projects on my computer, where each library has only one copy. But I include the libraries directly in some projects because I need to publish the VS projects so that everyone can open and run them immediately. I make a "Libs" folder inside each such solution, put all the necessary libraries into it, and connect them to the project using "$(SolutionDir)Libs\..." paths.
