Suen

OpenGL Bump Mapping



I'm trying to implement bump mapping with a heightmap, but I seem to have a problem with calculating the normal (most likely), as I get nothing but a black screen.

After doing some research on the net, my understanding is that if I use a heightmap for bump mapping, I basically calculate the surface normal on the fly in the fragment shader and use it in my lighting computation instead of my real normal. What I need to do is find two tangent vectors at the currently sampled position of my heightmap. These can be found by taking the partial derivatives of the height field, and those derivatives can be approximated with a finite difference method, so I went with the central difference scheme. Once this is done I take the normalized cross product of these two vectors to get my normal, and then use this normal in my lighting computation.
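In GLSL terms, what I'm describing amounts to roughly the sketch below (uv stands for the interpolated texture coordinate, and texelSize/bumpScale are placeholder names I'm making up just to illustrate the idea):

//Approximate the partial derivatives of the height field with central
//differences, build two tangent vectors from them and take their cross
//product as the surface normal.
vec2 texelSize = vec2(1.0/512.0); //step to the neighbouring texel, here assuming a 512x512 map
float bumpScale = 1.0;            //placeholder factor to exaggerate or flatten the bumps

float hL = texture(heightMap, uv - vec2(texelSize.x, 0.0)).r; //grayscale map: any channel works as the height
float hR = texture(heightMap, uv + vec2(texelSize.x, 0.0)).r;
float hD = texture(heightMap, uv - vec2(0.0, texelSize.y)).r;
float hU = texture(heightMap, uv + vec2(0.0, texelSize.y)).r;

vec3 tangX = vec3(1.0, 0.0, bumpScale*(hR - hL)); //approximates dh/dx
vec3 tangY = vec3(0.0, 1.0, bumpScale*(hU - hD)); //approximates dh/dy
vec3 heightNormal = normalize(cross(tangX, tangY));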

I created the grayscale image (the heightmap, of course) in Photoshop as well.

Unfortunately I seem to be doing something wrong when calculating the normal to be used. I tried feeding in the real normal and it works fine; I just get regular diffuse lighting then. I also made sure that there's nothing wrong with my texture by mapping it onto my sphere, and that worked fine as well. So...I'm not entirely sure what I'm doing wrong really.

I'm slightly limited in what I can do on the CPU side of things at the moment, as I'm dependent on a custom OpenGL toolkit for quite a few operations, but as far as I know that shouldn't prevent me from doing a bump map anyway.

Update: Fixed some small errors in the fragment shader which I just noticed and have posted the new code. I do get a visual result now, but nothing close to resembling bump mapping. From the result it seems like I'm only adding the ambient color to my sphere, as the whole surface consists of the same color; it's as if the diffuse part is never even considered. I'm also slightly confused about which component I actually need from my vec4 variables (x_forw etc.) when calculating the tangent vectors later in the code.

Here's the vertex and fragment shaders:

Vertex shader


#version 330

in vec4 vVertex;
in vec3 vNormal;
in vec2 vTexture0;

uniform mat4 mvMatrix;
uniform mat4 mvpMatrix;
uniform mat3 normalMatrix;
uniform vec3 lightPos;

out vec3 eyeNormal;
out vec3 lightDir;
out vec3 viewer;
out vec2 varTexture0;

void main()
{
    //Transform the normal and the vertex position into eye space
    eyeNormal = normalMatrix*vNormal;
    vec4 eyeVert4 = mvMatrix*vVertex;
    vec3 eyeVert3 = eyeVert4.xyz/eyeVert4.w;

    //Directions from the vertex towards the light and the viewer
    lightDir = normalize(lightPos-eyeVert3);
    viewer = normalize(-eyeVert3);

    //Pass the texture coordinate through and output the clip-space position
    varTexture0 = vTexture0;
    gl_Position = mvpMatrix*vVertex;
}


Fragment Shader


#version 330

//Uniforms
uniform vec4 diffuseColor;
uniform vec4 ambientColor;
uniform vec4 specularColor;
uniform sampler2D heightMap;

in vec3 eyeNormal;
in vec3 lightDir;
in vec3 viewer;
in vec2 varTexture0;

out vec4 fragColor; //Output for fragment shader

void main()
{
    //Central Difference Scheme
    vec4 x_forw = texture2D(heightMap, varTexture0.xy+vec2(1.0, 0.0));
    vec4 x_back = texture2D(heightMap, varTexture0.xy-vec2(1.0, 0.0));
    vec4 y_forw = texture2D(heightMap, varTexture0.xy+vec2(0.0, 1.0));
    vec4 y_back = texture2D(heightMap, varTexture0.xy-vec2(0.0, 1.0));
    vec3 tangX = vec3(1.0, 0.0, x_forw.z-x_back.z);
    vec3 tangY = vec3(0.0, 1.0, y_forw.z-y_back.z);

    //Calculate surface normal
    vec3 heightNormal = normalize(cross(tangX, tangY));

    fragColor = ambientColor + diffuseColor*max(0.0, dot(lightDir, heightNormal));
    fragColor.a = 1.0;
}

Hi,

when sampling your bump texture, you might want to add/subtract "1.0 / textureWidth" and "1.0 / textureHeight" instead of just "1.0" :)
(texture coordinates go from 0 to 1, so when you add 1 to your current coordinate, you'll sample a pixel "outside" your texture)
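For instance, something along these lines avoids hardcoding the size (just a sketch; textureSize needs GLSL 1.30 or later):

vec2 texelSize = vec2(1.0) / vec2(textureSize(heightMap, 0)); //1/width and 1/height of mip level 0
vec4 x_forw = texture(heightMap, varTexture0.xy + vec2(texelSize.x, 0.0));
vec4 x_back = texture(heightMap, varTexture0.xy - vec2(texelSize.x, 0.0));
vec4 y_forw = texture(heightMap, varTexture0.xy + vec2(0.0, texelSize.y));
vec4 y_back = texture(heightMap, varTexture0.xy - vec2(0.0, texelSize.y));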

regards.

Hello! Thanks for your reply. I actually tried this yesterday after reading a couple of posts on the net regarding finite differences. Basically I wrote this:


vec4 x_forw = texture2D(heightMap, varTexture0.xy+vec2(1.0/512.0, 0.0));
vec4 x_back = texture2D(heightMap, varTexture0.xy-vec2(1.0/512.0, 0.0));
vec4 y_forw = texture2D(heightMap, varTexture0.xy+vec2(0.0, 1.0/512.0));
vec4 y_back = texture2D(heightMap, varTexture0.xy-vec2(0.0, 1.0/512.0));


As you said, the texture coordinates range between 0 and 1, so I also thought that dividing by the width and height made sense, but unfortunately this gives me the exact same result (a sphere with the same color over the whole surface).

Also, as an additional question to your post: shouldn't a sample that goes outside the 0-1 range simply be sampled according to the wrap mode I've set? Why would it then be necessary to divide the offsets by the corresponding width and height?

Also, thanks for the help (though my problem remains)!

I finally got a result, but I would like some opinions on it. To me this is bump mapping of some form; it's far from very nice-looking bump mapping, but it looks like it nonetheless. What do you guys think?

Also, I'm still wondering about my question in the previous post regarding wrap modes and dividing the texture coordinates by the width and height.

[s]Also, is there any way of further enhancing the effect? I might be wrong about this, but even though I do visually get the feeling that the surface is being "bumped", it still feels somewhat "flat".[/s]

Update: Finally managed to enhance the effect a bit by scaling the partial derivatives that I approximated in my tangent vectors.

Now, aside from the question above, I have another question. I thought of adding a specular highlight to the object by calculating the vector reflected off the surface and then using it in a traditional Phong reflection model. At first I did this calculation using the final normal I got from combining the true normal and the normal from the heightmap, but this gave me no specular highlights at all. Instead I decided to only use the surface normal from the heightmap when calculating the reflected vector, then used that in my lighting calculations, and it gave me this result. I'm not entirely sure if this is the way it's supposed to look, or if the method I'm using is wrong in itself for calculating specular highlights on a bump-mapped surface.
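In shader terms, the specular part I'm computing is roughly this (the exponent of 50 is just something I picked by eye):

vec3 refDir = normalize(reflect(-lightDir, heightNormal)); //reflect the light direction about the bump normal
float spec = pow(max(0.0, dot(refDir, viewer)), 50.0);     //Phong specular term
fragColor += specularColor*spec;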

Here's the result of that:

Here's the code for those who are wondering. Aside from a few minor faults I found in my program, I changed the fragment shader so that the offset is divided by the corresponding width and height of the texture, as suggested by intyuh (thanks a lot!).

And again, I'm still wondering about my two questions in the previous post, so help on those would be appreciated.

Thanks!

Fragment shader


#version 330

//Uniforms
uniform vec4 diffuseColor;
uniform vec4 ambientColor;
uniform vec4 specularColor;
uniform sampler2D heightMap;

in vec3 eyeNormal;
in vec3 lightDir;
in vec3 viewer;
in vec2 varTexture0;

out vec4 fragColor;

void main()
{
    //Central difference scheme, stepping one texel (1/512) in each direction
    vec4 x_forw = texture2D(heightMap, varTexture0.xy+vec2(1.0/512.0, 0.0));
    vec4 x_back = texture2D(heightMap, varTexture0.xy-vec2(1.0/512.0, 0.0));
    vec4 y_forw = texture2D(heightMap, varTexture0.xy+vec2(0.0, 1.0/512.0));
    vec4 y_back = texture2D(heightMap, varTexture0.xy-vec2(0.0, 1.0/512.0));
    vec3 tangX = vec3(1.0, 0.0, 3.0*(x_forw.x-x_back.x));
    vec3 tangY = vec3(0.0, 1.0, 3.0*(y_forw.x-y_back.x));

    //Surface normal from the heightmap, combined with the interpolated normal
    vec3 heightNormal = normalize(cross(tangX, tangY));
    vec3 finalNormal = eyeNormal*heightNormal;

    vec3 refDir = normalize(reflect(-lightDir, heightNormal));

    fragColor = ambientColor + diffuseColor*max(0.0, dot(lightDir, finalNormal));
    fragColor = fragColor + specularColor*pow(max(0.0, dot(refDir, viewer)), 50.0);
    fragColor.a = 1.0;
}

Hi,

If texture sampling is set to repeat, for instance, sampling (x, y) and (x + 1, y) will return the exact same value, whatever x and y are. If it's set to clamp, you'll always get the border values, etc.
When reconstructing the normals you need to sample a pixel's directly adjacent pixels, and adjacent pixels' coordinates are separated by 1/textureWidth (or 1/textureHeight).
I don't know if I'm being clear, but I don't really know how to explain it better in English and without drawing stuff :)

When you consider the pixel at varTexture0 and you add 1/512, if your texture is 512 pixels wide, then you will sample the neighbouring pixel. Try drawing a 16x16 pixel texture on a piece of paper, with the upper-left pixel being (0, 0) and the bottom-right (1, 1), then see what happens when you increase a texture coordinate by 1/16 as opposed to increasing it by 1.
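In GLSL terms, the difference is roughly this (assuming a 512x512 heightmap with repeat wrapping):

vec4 a = texture2D(heightMap, varTexture0);                        //the current pixel
vec4 b = texture2D(heightMap, varTexture0 + vec2(1.0, 0.0));       //wraps a full revolution: same pixel again
vec4 c = texture2D(heightMap, varTexture0 + vec2(1.0/512.0, 0.0)); //one texel to the right: the neighbour you want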

Regards.


intyuh, it does make perfect sense. I guess I was confusing myself about the fact that 1/textureWidth (or 1/textureHeight) simply means the neighbouring pixel. You're exactly right that if I had set it to repeat, I would never actually sample the neighbouring pixel but the exact same pixel every time. Clamp/clamp-to-edge would give me the wrong result for obvious reasons, though. Thanks for explaining regardless; it made things clearer.

If I may ask, what is your opinion on the specular highlights on the surface? Do they look correct for a basic Phong reflection model?
