C++ ECS : same transformation component for both physic and graphic

Recommended Posts

I encapsulated Physics and Graphics with ECS successfully.

Here is a simplified version:

Physics Rigid Body = Physic_MassAndInertia + Physic_Transform + Physic_Shape
Graphic Polygon Body = Graphic_Transform + Graphic_Mesh

I usually set their transforms via:

findService<Service_PhysicTransform>()->setPosition(physicEntity,somePos);
findService<Service_GraphicTransform>()->setPosition(graphicEntity,somePos);

It works nicely, and there is no problem in practice, because I always know each entity's "type" (physics/graphics).

However, I notice that Physic_Transform and Graphic_Transform are very similar and largely duplicated.

For the sake of good practice and maintainability, I am considering grouping them into a Generic_Transform.

findService<Service_Transform>()->setPosition(anyEntity,somePos);  //cool

However, there is one difficulty: the physics transformation is quite complex for a child inside a compound body.

Assume that a physics body B is a child of a compound body C. In this case, B's transformation component currently has no meaning (by design).

If I want to set the child transformation via setTransformation(B,(x,y,45deg)), my current physics engine will not set B's transformation directly - it will set C's transformation so that B's pose matches (x,y,45deg).


Thus, it is not so practical to group them, unless I code it like this (ugly, and worse performance):

class Service_Transform{
    public: void setPosition(Entity B, Vec2 pos){
        bool isPhysic = ....; // check whether B is a physics entity
        if(isPhysic){ // physics path
            Entity compound = ....; // find the compound parent C
            ComponentPtr<Transform> cCompound = compound.get<Transform>();
            cCompound->pos = pos * someValue; // compute C's transform so that B ends up at pos
        }else{ // graphics path
            ComponentPtr<Transform> cTransform = B.get<Transform>();
            cTransform->pos = pos;
        }
    }
};
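
For reference, here is a minimal sketch of the math hiding behind that pos*someValue placeholder, assuming simple 2D poses (position + rotation in radians) and assuming B keeps a fixed local pose inside C. The types and names below are only illustrative, not my engine's real API:

#include <cmath>

struct Vec2  { float x, y; };            // stand-in for my engine's Vec2
struct Pose2 { Vec2 pos; float angle; }; // position + rotation in radians

// Given B's fixed local pose inside compound C, compute the world pose C must take
// so that B ends up exactly at the requested world pose.
Pose2 computeCompoundPose(const Pose2& targetOfB, const Pose2& localOfBInC)
{
    Pose2 c;
    // world angle of B = C.angle + local.angle  =>  solve for C.angle
    c.angle = targetOfB.angle - localOfBInC.angle;
    // world pos of B = C.pos + rotate(local.pos, C.angle)  =>  solve for C.pos
    const float cs = std::cos(c.angle), sn = std::sin(c.angle);
    c.pos.x = targetOfB.pos.x - (cs * localOfBInC.pos.x - sn * localOfBInC.pos.y);
    c.pos.y = targetOfB.pos.y - (sn * localOfBInC.pos.x + cs * localOfBInC.pos.y);
    return c;
}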

Should I group them  into 1 type of component?  

I probably should not group them, because their meanings and the related implementations are very different, but I feel guilty ... I may be missing something.

Advice about other things is also appreciated. Thanks.

Edit: Hmm... I am starting to think that grouping the components is OK, but I should still use 2 functions (from different services/systems).

Edit2: fixed some code (pointed out by 0r0d, thanks)

Edited by hyyou


I don't think you should have two different transform components, no. You shouldn't implement the physics using the ECS transform-component either. Physics is a vastly complex system that requires (or at least should have) a separate implementation. So optimally, you'd just use the transform-component to communicate between the different systems, while keeping a separate view of the physics-world:

struct PhysicsComponent
{
  RigidBody* pBody; // pointer to a body acquired from the physics-world upon initialization => could also be queried instead
};

struct PhysicsSystem
{
  
  void Update(double dt) // just to showcase...
  {
    world.Tick(dt);
    
    for(auto entity : EntitiesWithComponents<Physics, Transform>())
    {
      auto& physics = entity.GetComponent<Physics>();
      entity.GetComponent<Transform>().SetTransform(physics.pBody->QueryTransform());
    }
  }    
  
private:
  
  PhysicsWorld world;
};

(This is just some pseudo-code to get the idea across.) So essentially Transform becomes a point of communication between different systems. What the physics-system wrote in, you can later read out, e.g. in the RenderSystem; also your gameplay-systems can just set the transform as well. Entities just register new rigid bodies with the physics-world, but the world doesn't know that an entity even exists, which keeps your physics separated & more flexible. For example, it's pretty easy in this setup to exchange your physics with, say, Bullet at some point, while what you originally do creates a sort of tight coupling between those independent modules.
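
To sketch the read side of that (same pseudo-code spirit as above; RenderBody and SetWorldTransform are just made-up names standing in for whatever your renderer actually uses):

struct RenderComponent
{
  RenderBody* pRenderBody; // handle into the renderer's own internal scene representation
};

struct RenderSystem
{
  void Update() // again, just to showcase...
  {
    for(auto entity : EntitiesWithComponents<Render, Transform>())
    {
      auto& render = entity.GetComponent<Render>();
      // copy the agreed-upon world transform into the renderer's internal data
      render.pRenderBody->SetWorldTransform(entity.GetComponent<Transform>().GetTransform());
    }
  }
};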

As a general rule, you should only implement the bare minimum of required functionality inside the ECS, if possible. Do not use the ECS as a basis for physics, rendering, HUD, ... but instead implement those systems separately, and use the ECS as the high-level intercommunication layer. At that, it really excels IMHO, but anything more will just result in many of the reasons why people despise ECS.

Hope that gives you a rough idea.


I'm of the opinion that you should always have distinct, domain-specific representations of information.  Physics and rendering are two different domains, thus two transforms. You can split hairs over where these transforms live, and whether or not rendering/physics is part of the ECS, but regardless of where they are you should have one transform owned by "physics" and one transform owned by "rendering" (at least). Don't get distracted by the fact that they may appear similar or duplicate -- they are completely distinct conceptually. If you allow them to both own the same transform, you'll quickly find them fighting each other over even simple changes to functionality.

Don't think of data as a "point of communication". Communication is a behavior, and in the absence of code the only place behaviors exist is in assumptions and inferences. It may be fine for the rendering code to assume the transform means one thing, and for the physics code to assume the transform means something else, but when it comes to communicating that information across domain boundaries, you shouldn't force the rendering code to make assumptions about what the transform means to the physics code and vice versa. This is exactly the type of behavioral coupling you'll come to regret.

Edited by Zipster

11 hours ago, hyyou said:

 


findService<Service_Transform>()->setPosition(anyEntity);  //cool

 

I'm not clear what the above code is supposed to do.

15 hours ago, Juliean said:

... but instead implement those systems separately

Thanks, Juliean.

In a more realistic case, I structure it like this:

rocket = HP + physic_owner + graphic_owner + ....
Physic body = physic_ownee + bulletBodyAndShape_holder 
         + cache_physicAttr(pos,v,gravity,mass,mask) + ...
Graphic body = graphic_ownee 
         + ogreBody_holder (Ogre2.1 graphic body)
         + cache_renderAttribute(pos,blend_mode,layer) 
         + mesh_owner
mesh = mesh_ownee + filename 
        + ogreMeshPtr(point to vertex buffer)

The physic_owner / graphic_owner / physic_ownee / graphic_ownee components are just attach-points.

If I want to create a rocket, I tend to create 1 rocket entity attached to 2-3 new physics bodies and 2-3 new graphics bodies. The graphics bodies attach to a shared mesh. Thus, for a single rocket, I create around 5-7 entities.
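
Roughly, the assembly looks like this (pseudo-code only; createEntity/attach/findSharedMesh are not my real function names, just to show the shape of it):

Entity rocket = createEntity();   // HP + physic_owner + graphic_owner + ...
Entity body   = createEntity();   // physic_ownee + bulletBodyAndShape_holder + cache_physicAttr (2-3 of these in practice)
Entity visual = createEntity();   // graphic_ownee + ogreBody_holder + cache_renderAttribute + mesh_owner (2-3 of these too)
Entity mesh   = findSharedMesh(); // mesh_ownee + filename + ogreMeshPtr, shared between rockets

attach(rocket, body);   // physic_owner  <-> physic_ownee
attach(rocket, visual); // graphic_owner <-> graphic_ownee
attach(visual, mesh);   // mesh_owner    <-> mesh_ownee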

I copied the idea from https://gamedev.stackexchange.com/questions/31888/in-an-entity-component-system-engine-how-do-i-deal-with-groups-of-dependent-ent , not sure if it smells. ...  

For me, it works pretty well, but I may be overlooking something.

P.S. Your comment in another topic of mine boosted my engine's overall performance by 5%. Thanks a lot.

7 hours ago, Zipster said:

Don't think of data as a "point of communication"

That is enlightening, thanks.

@0r0d  You are right. I have edited the post :)

Edited by hyyou

4 hours ago, hyyou said:

Thanks, Juliean.

In a more realistic case, I structure it like this:


rocket = HP + physic_owner + graphic_owner + ....
Physic body = physic_ownee + bulletBodyAndShape_holder 
         + cache_physicAttr(pos,v,gravity,mass,mask) + ...
Graphic body = graphic_ownee 
         + ogreBody_holder (Ogre2.1 graphic body)
         + cache_renderAttribute(pos,blend_mode,layer) 
         + mesh_owner
mesh = mesh_ownee + filename 
        + ogreMeshPtr(point to vertex buffer)

The physic_owner / graphic_owner / physic_ownee / graphic_ownee components are just attach-points.

If I want to create a rocket, I tend to create 1 rocket entity attached to 2-3 new physics bodies and 2-3 new graphics bodies. The graphics bodies attach to a shared mesh. Thus, for a single rocket, I create around 5-7 entities.

Are these all separate components: physic_owner, physic_ownee, HP, ...? If so, it does sound needlessly complex; as I laid out in the other post, there's really not much point in having stuff like "filename" as a component, but you should see for yourself ;) On the same note, having 5-7 entities for a single rocket sounds like huge overkill. Personally, I would have a single rocket entity with a Mesh & Physics-component, and that's pretty much it; unless the rocket specifically needs sub-entities like a particle-emitter :D (That's just what I came to agree on based on 4 years of working with an ECS across different projects; you might come to a different conclusion; though depending on your workflow/toolchain, certain things like having many components & entities can make creating new content a nightmare.)

12 hours ago, Zipster said:

Don't think of data as a "point of communication". Communication is a behavior, and in the absence of code the only place behaviors exist is in assumptions and inferences. It may be fine for the rendering code to assume the transform means one thing, and for the physics code to assume the transform means something else, but when it comes to communicating that information across domain boundaries, you shouldn't force the rendering code to make assumptions about what the transform means to the physics code and vice versa. This is exactly the type of behavioral coupling you'll come to regret.

I honestly don't agree with that statement. What's wrong with using a Transform-component to communicate information between different systems that all might have their own internal transform-structure? You're using data to transfer information at some point anyway, like sending information across a network, or storing and later loading a save-file. There's no code inside the save-file, it's just data at that point, but because you agreed on a format, it's safe to store at one point and load at a later point. That's how I see it with this example of the transform as well: you agree on a shared "format" for the world-transform in a shared "Transform"-component, and systems can read/write the transform while still being able to use their own data internally. You can still use business logic to make the writing/reading of this transform-data safer, but I don't see the benefit of code-based communication vs a data-"stream" of sorts.

9 hours ago, Juliean said:

I honestly don't agree with that statement. What's wrong with using a Transform-component to communicate information between different systems that all might have their own internal transform-structure?

Actually it seems we do agree, at least on each system having its own internal structure. I just like to think in terms of domains as opposed to systems, since I find the conceptual delineation better for determining when different representations are actually needed. However if your systems are 1:1 with behaviors (physics, rendering, etc.), then it's essentially the same :)

I'm also in full agreement with having a separate component handle the transfer of data between systems, but in that case I don't see a need for any intermediary shared state between them. I'm not even sure what purpose it would have, or if it's feasible in the first place. How would you reconcile an OOBB or capsule used by physics, with a matrix transform used by rendering, with a sphere used by gameplay, all into a single representation that makes sense? And why would you even need it if each system already has sufficient internal state to function? The purpose of the transform component in this case would just be to copy the data from one system to another and perform any necessary conversion. Instead of each system assuming internal details about the other, you've inverted your dependencies, shifted that responsibility to a higher level, and made the data relationship explicit in code for all to see.
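
As a rough sketch of what I mean by shifting that responsibility to a higher level (all type and function names here are hypothetical):

// A higher-level sync step is the only place that knows about both domains; it performs
// the conversion explicitly instead of letting either system peek at the other's internals.
void SyncPhysicsToRendering(PhysicsSystem& physics, RenderSystem& rendering, Entity e)
{
    const PhysicsPose pose   = physics.GetWorldPose(e); // e.g. position + orientation of the rigid body
    const Matrix4     matrix = ToMatrix(pose);          // the explicit, visible conversion between representations
    rendering.SetWorldMatrix(e, matrix);
}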

