Set Up GTest for TDD in C++ and Visual Studio



I made an example project in VS 2015: SortFunctions.zip. This project shows you how to set up Google Test in Visual Studio and run simple unit tests.

Note: if you have a different version of VS, then before running the unit tests you need to select your version (for example VS 2017), as in this screenshot:



The Google Test library is included in the project as source and is placed in the "Libs" folder. You need to:

  • open the solution (the file named "SortFunctions.sln")
  • select your version of VS, for example VS 2017 instead of VS 2015, as in the screenshot above
  • make "SortFunctions_UnitTests" the StartUp Project: right-click the "SortFunctions_UnitTests" project -> select "Set as StartUp Project"
  • press Ctrl+F5 to run the unit tests

You will see these settings in the "SortFunctions_UnitTests" project properties:


This solution includes two projects:

  • SortFunctions - this project contains the modules that we want to test, for example the bubbleSort() function
  • SortFunctions_UnitTests - this project contains the unit tests

The "SortFunctions" project has two files, SortFunctions.h and SortFunctions.cpp:


#pragma once
extern void bubbleSort(int *array, unsigned int amount);
extern void countingSort(int *array, unsigned int amount);


#include "SortFunctions.h"

void bubbleSort(int *array, unsigned int amount) { /* ... */ }
void countingSort(int *array, unsigned int amount) { /* ... */ }
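The function bodies are elided in the listing above. As an illustration of what bubbleSort() could look like for the declared signature, here is a minimal sketch (my own, not necessarily the code in SortFunctions.zip):

```cpp
#include <utility>  // std::swap

// Sketch of a bubble sort matching the declared signature.
// Repeatedly swaps adjacent out-of-order elements; after each pass,
// the largest remaining element has "bubbled" to the end.
void bubbleSort(int *array, unsigned int amount)
{
    if (array == nullptr || amount < 2)
        return;
    for (unsigned int i = 0; i + 1 < amount; ++i)
        for (unsigned int j = 0; j + 1 < amount - i; ++j)
            if (array[j] > array[j + 1])
                std::swap(array[j], array[j + 1]);
}
```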

The "SortFunctions_UnitTests" project contains the tests. For example, here is "bubbleSortTests.cpp" with two tests: the first is for positive numbers and the second is for negative numbers:


#include <gtest/gtest.h>
#include "SortFunctions.h"

TEST(bubbleSortTest, AllPositiveElements)
{
    // Arrange
    const unsigned int amount = 5;
    int actualArray[amount] = { 5, 3, 10, 2, 7 };
    int expectedArray[amount] = { 2, 3, 5, 7, 10 };
    // Act
    bubbleSort(actualArray, amount);
    // Assert
    for (size_t i = 0; i < amount; i++)
        ASSERT_EQ(expectedArray[i], actualArray[i]);
}

TEST(bubbleSortTest, AllNegativeElements)
{
    // Arrange
    const unsigned int amount = 5;
    int actualArray[amount] = { -5, -3, -10, -2, -7 };
    int expectedArray[amount] = { -10, -7, -5, -3, -2 };
    // Act
    bubbleSort(actualArray, amount);
    // Assert
    for (size_t i = 0; i < amount; i++)
        ASSERT_EQ(expectedArray[i], actualArray[i]);
}
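The header also declares countingSort(), whose body is not shown in the article. One way to sketch it so that it handles negative input (like the second test above) is to offset every value by the minimum. This is my assumption about the intended behavior, not the code from the zip:

```cpp
#include <algorithm>  // std::min_element, std::max_element
#include <vector>

// Sketch of a counting sort for the declared signature.
// Offsets each value by the minimum so negative inputs work, counts
// occurrences of each value, then rewrites the array in ascending order.
void countingSort(int *array, unsigned int amount)
{
    if (array == nullptr || amount < 2)
        return;
    const int lo = *std::min_element(array, array + amount);
    const int hi = *std::max_element(array, array + amount);
    std::vector<unsigned int> counts(static_cast<size_t>(hi - lo) + 1, 0);
    for (unsigned int i = 0; i < amount; ++i)
        ++counts[static_cast<size_t>(array[i] - lo)];
    unsigned int idx = 0;
    for (size_t v = 0; v < counts.size(); ++v)
        for (unsigned int c = 0; c < counts[v]; ++c)
            array[idx++] = static_cast<int>(v) + lo;
}
```

Counting sort runs in O(n + k) time, where k is the value range, so it suits small integer ranges like the test arrays above.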




Recommended Comments

"I opened the Settings of the project and added these lines to "C/C++" -> "General" -> "Additional Include Directories":"

It's a lot better to create .props files for your projects under the Property Manager. You can reuse them (put them in a common folder) for every other project that uses similar libs/settings. Or you can copy and 'fork' them for a new project. It also relieves the frustration of not picking Release settings when in Debug or vice versa (What a load of crap that is, right?). If interested, it has a tab near the Solution Explorer pane tab, close to the bottom.
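For reference, a minimal property sheet of the kind described is just a small MSBuild XML file; the file name and include path below are hypothetical examples, not taken from the project:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Example: GoogleTest.props - reusable include-path settings.
     The path shown is illustrative; adjust it to your layout. -->
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemDefinitionGroup>
    <ClCompile>
      <AdditionalIncludeDirectories>$(SolutionDir)Libs\googletest\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
    </ClCompile>
  </ItemDefinitionGroup>
</Project>
```

Once added via Property Manager, the sheet applies to every configuration that references it, which avoids the Debug/Release mismatch mentioned above.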

Oh and... Tests? Tests? We don't need no stinkin' tests. :D j/k

7 hours ago, fleabay said:

It's a lot better creating .prop files for you projects under Property Manager. You can reuse them (put in a common folder)

That is useful for projects on my computer, where each library has only one copy. But I include the libraries in some projects because I need to publish the VS projects so that everyone can open and run them immediately. I make a "Libs" folder inside each such solution, put all the necessary libraries into the "Libs" folder, and connect them to the project using "$(SolutionDir)Libs\..." paths.


