
OpenGL Transformation Is Not Working As I Expected!!

4 hours ago, SomeoneRichards said:

In your sprite::addBrain function can you std::cout the sprite tag and the tag of the sprite held by the brain?

In your sprite::render function can you std::cout the sprite tag and the model matrix?

As you can see, everything works as I expected: the player's model matrix changes every time, and I double-checked the model matrices of the other two sprites (they remain identity matrices). addBrain is also adding the brain to the correct sprite (Player).

Output.PNG

 

First Call Of Player's Renderer.

First Change.PNG

 

Second Call Of Player's Renderer.

Second Change.PNG

 

Third Call Of Player's Renderer.

Third Change.PNG


I'm not entirely sure if this is the problem, but when you create your sprites, you create a new shader for each one. However, when you render them individually, you never bind the corresponding shader; the only time you bind a shader is in your renderer. So I suspect (but am not sure, as it is difficult going through multiple files, and I'm also fairly new to this) that the last sprite you built is the one whose shader is being used, and it is therefore the one getting the model-view matrix mapped to it.

I'm not sure of the best way to test or eradicate this issue. You could try creating a single shader and sharing it, or you could try adding a bind call in each of your sprites' render functions.

You should really remove the useShader line from the sprite creator, to save yourself from mishaps, and call it just before rendering instead.

Does that make sense?

To make it clearer, it looks like this is what you are doing:

Create player sprite, bind player shader.

Create other sprite, bind other shader.

Create other2 sprite, bind other2 shader.

Then, when you render, everything is using the other2 shader (i.e. the last one you bound), so only that shader (and its corresponding sprite) is getting the model matrix.
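
To illustrate (a minimal sketch only, assuming a current GL context and valid, linked program handles; the names here are mine, not from the posted code): glUniform* calls always write into whatever program the last glUseProgram call bound, no matter which C++ shader object you go through.

#include <glad/glad.h> //or whichever GL loader the project uses
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

//Hypothetical illustration of the bug described above.
void IllustrateLastBindWins(GLuint playerProg, GLuint otherProg, GLuint other2Prog,
                            const glm::mat4 &mvp)
{
	//Creation-time binds, one per sprite. The last bind wins.
	glUseProgram(playerProg);
	glUseProgram(otherProg);
	glUseProgram(other2Prog);

	//Later, rendering the *player* without re-binding its program:
	GLint loc = glGetUniformLocation(playerProg, "u_MVP");
	glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mvp)); //lands in other2Prog!

	//The fix: bind the sprite's own program before uploading its uniforms.
	glUseProgram(playerProg);
	glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mvp)); //now goes where intended
}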

43 minutes ago, SomeoneRichards said:

...when you render them individually, you never bind the corresponding shader; the only time you bind a shader is in your renderer... Then, when you render, everything is using the other2 shader (i.e. the last one you bound), so only that shader (and its corresponding sprite) is getting the model matrix.

OMG MAN I LOVE YOU!!!!!!!!!!!!!

I wasn't binding the shaders inside the Sprite::Render() method, so I was actually setting the MVP only on the shader that was bound previously inside the renderer!!!

Problem solved!!

I literally can't thank you enough! This will prevent a lot of frustration in the future.

This forum has never let me down :) 

Thank you again!

This is what I had to change in my Sprite::Render method:

//Render.
void Sprite::Render(Window * window)
{

	//I HAD TO DO THAT!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
	m_Impl->program->Bind();



	//Create the projection Matrix based on the current window width and height.
	glm::mat4 proj = glm::ortho(0.0f, (float)window->GetWidth(), 0.0f, (float)window->GetHeight(), -1.0f, 1.0f);


	//SO I CAN DO THAT ON THE RIGHT SHADER!!!!!!!!!!!!!!!!!!!
	//Set the MVP Uniform.
	m_Impl->program->setUniformMat4f("u_MVP", proj * m_Impl->model);


	std::cout << "Inside " << m_Impl->tag.c_str() << " Render() has model pointer: " << &m_Impl->model << std::endl;

	
	//Run All The Brains (Scripts) of this game object (sprite).
	for (unsigned int i = 0; i < m_Impl->brains.size(); i++)
	{

		//Get Current Brain.
		Brain *brain = m_Impl->brains[i];

		//Call the start function only once!
		if (brain->GetStart())
		{
			brain->SetStart(false);
			brain->Start();
		}

		//Call the update function every frame.
		brain->Update();
	}


	//Render.
	window->GetRenderer()->Draw(m_Impl->vao, m_Impl->ibo, m_Impl->texture, m_Impl->program);
}

 


No worries :) I'm glad I could help.

I really don't think you want that many shaders or shader bindings though, unless your shaders are actually doing different things.

Just put a shader in your renderer called something like SpriteRenderer, bind it at the start of the render process, and then update and draw each sprite in turn. From my understanding, shader bind calls are among the most expensive state changes, so something like the sketch below avoids paying for one per sprite.
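
Roughly like this (a sketch only; Shader, Sprite, and Window stand in for the classes from this thread, and GetModel()/Draw() are hypothetical accessors, not the actual API):

#include <vector>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

//One shared sprite shader, bound once per frame instead of once per sprite.
class SpriteRenderer
{
public:
	explicit SpriteRenderer(Shader *shader) : m_Shader(shader) {}

	void RenderAll(Window *window, const std::vector<Sprite *> &sprites)
	{
		//Bind the shared program once for the whole frame.
		m_Shader->Bind();

		//Projection based on the current window size, as in Sprite::Render.
		glm::mat4 proj = glm::ortho(0.0f, (float)window->GetWidth(),
		                            0.0f, (float)window->GetHeight(), -1.0f, 1.0f);

		for (Sprite *sprite : sprites)
		{
			//Only the per-sprite uniform changes between draws.
			m_Shader->setUniformMat4f("u_MVP", proj * sprite->GetModel());
			sprite->Draw(); //hypothetical: issues the draw call, no Bind() inside
		}
	}

private:
	Shader *m_Shader; //shared by every sprite
};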


