Search the Community

Showing results for tags 'OpenGL' in content posted in Graphics and GPU Programming.

Found 3475 results

  1. So, I'm still on my quest to understand the intricacies of HDR and implement them in my engine. Currently I'm at the step of implementing tonemapping. I stumbled upon these blog posts: http://filmicworlds.com/blog/filmic-tonemapping-operators/ http://frictionalgames.blogspot.com/2012/09/tech-feature-hdr-lightning.html and tried to implement some of the tonemapping methods mentioned there in my postprocessing shader. The issue is that none of them produces the same results as shown in the blog posts, which definitely has to do with the initial range in which the values are stored in the HDR buffer. For simplicity's sake I store values between 0 and 1 in the HDR buffer (ambient light is 0.3, directional light is 0.7). This is the tonemapping code:

vec3 Uncharted2Tonemap(vec3 x)
{
    float A = 0.15;
    float B = 0.50;
    float C = 0.10;
    float D = 0.20;
    float E = 0.02;
    float F = 0.30;
    return ((x*(A*x+C*B)+D*E)/(x*(A*x+B)+D*F))-E/F;
}

This is without the Uncharted tonemapping: This is with the Uncharted tonemapping: Which makes the image a lot darker. The shader code looks like this:

void main()
{
    vec3 color = texture2D(texture_diffuse, vTexcoord).rgb;
    color = Uncharted2Tonemap(color);
    //gamma correction (use only if not done in tonemapping code)
    color = gammaCorrection(color);
    outputF = vec4(color, 1.0f);
}

From my understanding, tonemapping should bring the range down from HDR to 0-1. But the output of the tonemapping function heavily depends on the initial range of the values in the HDR buffer. (You can't set the sun intensity to 10 one time and to 1000 another time and expect the same result if you feed that into the tonemapper.) So I suppose this also depends on the exposure, which I have to implement? To check this I plotted the tonemapping curve: you can see that the curve only goes up to a value of around 0.21 (while being fed a value of 1) and then basically flattens out (which would explain why the image got darker). My question is: in what range should the values in the HDR buffer be before they get tonemapped? Do I have to bring them down to a range of 0-1 by multiplying by the exposure? For example, if I increase the light values by a factor of 10 (directional light would be 7 and ambient light 3), I would need to divide the HDR values by 10 in order to get back to a range of 0-1 that can be fed into the tonemapping curve. Is that correct?
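For reference, the operator in the filmicworlds post is normally used together with an exposure multiplier and a white-point normalization, which is likely the missing piece behind the curve capping out near 0.21. A minimal glm sketch (mirrors GLSL one-to-one); the exposure value is an assumption you tune or derive from scene luminance:

#include <glm/glm.hpp>

glm::vec3 Uncharted2Tonemap(glm::vec3 x)
{
    const float A = 0.15f, B = 0.50f, C = 0.10f,
                D = 0.20f, E = 0.02f, F = 0.30f;
    return ((x*(A*x+C*B)+D*E)/(x*(A*x+B)+D*F)) - E/F;
}

glm::vec3 TonemapHDR(glm::vec3 hdrColor, float exposure)
{
    const float W = 11.2f;                                     // linear white point
    glm::vec3 curr = Uncharted2Tonemap(hdrColor * exposure);   // exposure first
    glm::vec3 whiteScale = 1.0f / Uncharted2Tonemap(glm::vec3(W));
    return curr * whiteScale;                                  // rescaled so W maps to 1.0
}

With the white-scale step, the buffer does not have to be pre-squashed into 0-1; the curve itself compresses arbitrary HDR ranges, and exposure is the knob that decides which part of the range ends up well exposed.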
  2. Hello, everyone! I hope my problem isn't too beginnerish. I'm doing research on motion synthesis, trying to implement the DeepMimic paper by Xue Bin Peng. In this paper, I need to first retarget character A's motion to another character B to make the reference motion clips for character B, since we don't have character B's reference motion. The most important thing is that in the paper, the author copies character A's joint rotations with respect to each joint's local coordinate system (not the parent's) to character B. In my personal understanding, the joint's rotation with respect to its local coordinate system is something like what's shown in the attached photo, where for the elbow joint I need to get the elbow's rotation in the elbow's local coordinate system (I'd be very grateful if you'd share your ideas in case I've misunderstood it 🙂). I have searched many materials on the internet about how to extract the local joint information from FBX; the most relevant thing I found is the pivot rotation (and geometric transformation / object offset transformation). I'm a beginner in computer graphics, and I'm confused about whether the pivot rotation (or geometric transformation / object offset transformation) is exactly the joint's local rotation I'm seeking. I hope someone who has ideas can help me; I'd be very grateful for any pointers in the right direction. Thanks in advance!
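If the paper means the parent-relative ("local") rotation, which is the common convention in skeletal animation, it can be recovered from two global transforms. This is a sketch of that math only, not a statement about what DeepMimic actually uses; how the global matrices are obtained from FBX is up to the loader:

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Rotation of a joint expressed relative to its parent's frame.
glm::quat localJointRotation(const glm::mat4& parentGlobal,
                             const glm::mat4& jointGlobal)
{
    return glm::quat_cast(glm::inverse(parentGlobal) * jointGlobal);
}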
  3. Hey. My laptop recently decided to die, so I've been transferring my project to my work laptop just to get it up to date and commit it. I was banging my head against the wall all day, as my textures were not displaying in my program. I was getting no errors and no indication of why it was occurring, so I have just been trying to figure it out. I know the image loading is working OK, as I'm using the image data elsewhere, and I was pretty confident the code was fine too, as I've never had an issue with displaying textures before. So I thought it might be the drivers on this laptop (my old one was just using the built-in Intel HD, while this laptop has an NVIDIA graphics card), but all seems to be up to date. Below are my basic shaders:

Vertex Shader

#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 color;
layout(location = 2) in vec3 normal;
layout(location = 3) in vec2 texCoord;
uniform mat4 Projection;
uniform mat4 Model;
out vec3 Color;
out vec3 Normal;
out vec2 TexCoord;
void main()
{
    gl_Position = Projection * Model * vec4( position, 1.0 );
    Color = color;
    Normal = normal;
    TexCoord = vec2( texCoord.x, texCoord.y );
}

Fragment Shader

#version 330 core
in vec3 Color;
in vec3 Normal;
in vec2 TexCoord;
uniform sampler2D textureData;
void main()
{
    vec4 textureColor = texture( textureData, TexCoord );
    vec4 finalColor = textureColor * vec4( Color, 1.0f );
    gl_FragColor = finalColor;
}

Calling Code

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
glUniform1i(glGetUniformLocation(shaderID, "textureData"), textureID);

Now this is the part I don't understand. I worked through my program until I got to the above 'Calling Code'. This just displays a black texture, which was my original issue. Out of desperation, I tried changing the name in glGetUniformLocation from "textureData" to "textureData_invalid" to see if my error checks would throw up something, but in actual fact it is now displaying the texture as expected. Can anyone fathom a guess as to why this is occurring? I'm assuming the random text is just picking up the correct location by C++ witchcraft, but why is the original one not getting picked up correctly and/or not working as expected? I realize more code is probably needed to see how it all hangs together, but it seems to come down to this as the issue.
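Worth noting: the third argument of glUniform1i for a sampler is the texture *unit* index (0 for GL_TEXTURE0), not the texture object ID. Passing a nonzero textureID points the sampler at a different, empty unit (black), while an invalid uniform name makes glGetUniformLocation return -1, the glUniform1i call silently does nothing, and the sampler keeps its default value 0, which happens to be correct here. The usual pattern:

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
glUniform1i(glGetUniformLocation(shaderID, "textureData"), 0); // unit index, not object ID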
  4. Hello. So far I've got a decently looking 3D scene. I also managed to render a TrueType font, and I'm on my way to implementing a GUI (windows, buttons and textboxes). There are several issues I am facing; I would love to hear your feedback.

1) I render text using an atlas, with a VBO containing the x/y/u/v of every glyph in the atlas (calculated based on the x/y/z/width/height/xoffset/yoffset/xadvance data in the binary .fnt format file, screenshot 1). I generated Comic Sans MS at size 32 and Times New Roman at size 12 (screenshots 2 and 3). The first issue is that the font looks horrible when rescaling. I guess it is because I am using fixed -1 to 1 screen-space coords. This is where an ortho matrix should be used, right?

2) Rendering the GUI. The situation is similar to the above. I guess the widgets should NOT scale when scaling the window, am I right? What I am looking for is a way of saying "this should always be in the middle, 200x200 in size no matter the display window size", and "this should stick to the bottom left corner". Is an ortho matrix the cure for all such problems? (See the sketch after this list.)

3) The game is 3D, but I have to go 2D to render static GUI elements over the scene, and I want to do it properly! At the moment I am using a 3x3 matrix for 2D transformations and vec3 for all kinds of coordinates. In the shaders, though, it technically still IS 3D: I have to set all four components (x, y, z, w) of gl_Position, while it would be much more convenient to just do the maths in 2D space. Can I achieve that somehow?

4) Text again. I am kind of confused about the reason for the artifacts in the Times New Roman font display (screenshot 1). I render from left to right, letter after letter. You can clearly see that letters on the right (the ones rendered after the ones on the left) are covered by the previous ones. I was toying around with blending options, but no luck. I do not support kerning at the moment, but that's definitely not the cause of the error. The display of the small font looks dirty and aliased too. I am kind of confused about how to interpret the integer data and how it should be scaled/adapted to the screen view. Is it just storing the data at a constant size and, again, using an ortho matrix?

Thanks in advance for all your ideas and suggestions!
https://i.imgur.com/4rd1VC3.png
https://i.imgur.com/uHrSXfe.png
https://i.imgur.com/xRTffPn.png
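A pixel-space orthographic projection addresses points 1) and 2): glyphs and widgets are specified in pixels, keep a constant on-screen size, and only this matrix changes on resize. Minimal glm sketch (windowWidth/windowHeight are whatever your framework reports; the y-flip here puts (0,0) at the top-left):

#include <glm/gtc/matrix_transform.hpp>

glm::mat4 proj = glm::ortho(0.0f, (float)windowWidth,
                            (float)windowHeight, 0.0f,   // y grows downwards
                            -1.0f, 1.0f);
// In the vertex shader: gl_Position = u_Proj * vec4(pixelPos, 0.0, 1.0);

Anchoring ("stick to the bottom left corner") then becomes plain pixel arithmetic against windowWidth/windowHeight rather than a matrix problem.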
  5. Hello guys, I have some questions: what do glLinkProgram and glBindAttribLocation do? I searched, but there wasn't any good resource.
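Briefly: glLinkProgram combines the compiled shader objects attached to a program into a single executable the GPU can run, resolving the interfaces between stages; glBindAttribLocation assigns a named vertex shader input to a generic attribute index of your choosing (an alternative to layout(location = N) in GLSL). The binding only takes effect at link time, so the order matters:

glBindAttribLocation(program, 0, "position"); // "position" is an example attribute name
glLinkProgram(program);                        // bindings are baked in here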
  6. I've noticed that in most post-processing tutorials several shaders are used one after another: one for bloom, another for contrast, and so on. For example:

postprocessing.quad.bind()
// Effect 1
effect1.shader.bind();
postprocessing.texture.bind();
postprocessing.quad.draw();
postprocessing.texture.unbind();
effect1.shader.unbind();
// Effect 2
effect2.shader.bind();
// ...and so on
postprocessing.quad.unbind()

Is this good practice? How many shaders can I bind and unbind before I hit performance issues? I'm afraid I don't know what the good practices are in OpenGL/WebGL regarding binding and unbinding resources. I'm guessing binding many shaders at post-processing is okay, since the scene has already been updated and I'm just working on a quad and texture at that moment. Or is it more optimal to put shader code in chunks and bind less frequently? I'd love to use several shaders at post, though. Another example of what I'm doing at the moment:

1) Loop through GameObjects, bind each one's Phong shader (send color, shadow, spec, normal samplers), unbind all.
2) At post: bind the post-processor quad, and loop/bind through the different shader effects, and so on...

Thanks all!
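For what it's worth, the common arrangement for chained effects is ping-ponging between two render targets, so each effect costs one program bind and one quad draw. A sketch with hypothetical names (fboA/fboB, colorA/colorB, effectPrograms, drawFullscreenQuad are assumed to exist in the surrounding engine code):

GLuint fbos[2]   = { fboA, fboB };
GLuint colors[2] = { colorA, colorB };
int src = 0;
for (GLuint program : effectPrograms)
{
    int dst = 1 - src;
    glBindFramebuffer(GL_FRAMEBUFFER, fbos[dst]); // write into the other target
    glUseProgram(program);
    glBindTexture(GL_TEXTURE_2D, colors[src]);    // read the previous result
    drawFullscreenQuad();
    src = dst;
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);             // colors[src] holds the final image

A handful of program binds per frame is normally negligible; it's per-object rebinding in the thousands that starts to matter.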
  7. I am reading this book: link. In the OpenGL Rendering Pipeline section there is a picture like this: link. The question is this: I don't really understand why it is necessary to turn pixel data into fragments and then fragments back into pixels. Could you please give me a source or a clear explanation of why this is necessary? Thank you so much!
  8. So, I'm currently trying to implement an SSAO shader from THIS tutorial and I'm running into a few issues. This SSAO method requires view-space positions and normals. I'm storing the normals in my deferred renderer in world space, so I had to do a conversion and reconstruct the position from the depth buffer. And something there goes horribly wrong (which probably has to do with the world-space to view-space transformation). (Here is the full shader source code if someone wants to take a look at it.) Now, I suspect that the normals are the culprit.

vec3 normal = ((uNormalViewMatrix*vec4(normalize(texture2D(sNormals, vTexcoord).rgb),1.0)).xyz);

"sNormals" is a 2D texture which stores the normals in world space in an RGB FP16 buffer. Now, I can't use the camera's view-space matrix to transform the normals into view space, as the camera's position isn't at (0,0,0), thus skewing the result. So what I did was create a new view matrix specifically for the normals, with the position at vec3(0,0,0):

//"camera" is the camera which was used for rendering the normal buffer
renderer.setUniform4m(
    ressources->shaderSSAO->getUniform("uNormalViewMatrix"),
    glmExt::createViewMatrix(glm::vec3(0,0,0), camera.getForward(), camera.getUp()) //parameters are (position, forwardVector, upVector)
);

Though I have the feeling this is the wrong approach. Is this right, or is there a better/correct way of transforming a world-space normal into view space?
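One observation: directions must ignore translation, and w = 1.0 in the snippet above makes the normal pick up the view matrix's translation, which is exactly why the camera position appeared to skew the result. Casting to the upper-left 3x3 (or using w = 0.0) drops the translation, so no special zero-position view matrix is needed. glm sketch; the same expression works verbatim in GLSL as mat3(uViewMatrix) * n:

#include <glm/glm.hpp>

glm::vec3 toViewSpace(const glm::mat4& viewMatrix, const glm::vec3& worldNormal)
{
    // mat3 keeps only rotation (and scale), never translation
    return glm::normalize(glm::mat3(viewMatrix) * worldNormal);
}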
  9. I'm having issues loading textures, as I'm clueless about how to handle/load images. Maybe I'm missing something, but for the past few days I've just googled a lot trying to find a solution. Well, there are two issues, I think. One: I'm using Kotlin Native (EAP) with an OpenGL wrapper / STB image, so I'm not quite sure where the issue lies. Could someone with more experience give me some hints on how to solve this? The code is here. If I'm not mistaken, the workflow is pretty straightforward: stbi_load returns the pixels of the image (as a char array or byte array) and you need to pass those pixels directly to glTexImage2D, so I'm missing something here, it seems. Regards
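For comparison, the canonical C-side version of that workflow, with the two pitfalls that most often produce a black texture: the actual channel count returned by stb must drive the format, and the default minification filter expects mipmaps (without them the texture is incomplete and samples as black). "texture.png" is a placeholder; a current GL context and a bound GL_TEXTURE_2D are assumed:

#include <stb_image.h>

int w = 0, h = 0, channels = 0;
unsigned char *pixels = stbi_load("texture.png", &w, &h, &channels, 0);
if (pixels)
{
    GLenum fmt = (channels == 4) ? GL_RGBA : GL_RGB;
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // RGB rows are often not 4-byte aligned
    glTexImage2D(GL_TEXTURE_2D, 0, fmt, w, h, 0, fmt, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // or build mipmaps
    stbi_image_free(pixels);
}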
  10. Hi, I'm trying to mix two textures using my own shader system, but I have a problem (I think) with the uniforms. Code: https://github.com/HawkDeath/shader/tree/test To debug I use RenderDoc, but I did not get good results. The first attachment shows my result; the second attachment shows what it should look like. PS. I'm basing this on this tutorial: https://learnopengl.com/Getting-started/Textures.
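A frequent cause with two textures is that both sampler uniforms default to unit 0, so both samplers read the same texture. Each one has to be pointed at its own unit once, with the program bound. Sketch assuming the tutorial's uniform names "texture1"/"texture2":

glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "texture1"), 0);
glUniform1i(glGetUniformLocation(program, "texture2"), 1);
glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, tex1);
glActiveTexture(GL_TEXTURE1); glBindTexture(GL_TEXTURE_2D, tex2);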
  11. So, I stumbled upon the topic of gamma correction. https://learnopengl.com/Advanced-Lighting/Gamma-Correction From what I've been able to gather (please correct me if I'm wrong):

- Old CRT monitors couldn't display color linearly; that's why gamma correction was necessary.
- Modern LCD/LED monitors don't have this issue anymore but apply gamma correction anyway. (For compatibility reasons? Can this be disabled?)
- All games have to apply gamma correction? (Unsure about that.)
- All textures stored in file formats (.png for example) are essentially stored in the sRGB color space (as what we see on the monitor is skewed due to gamma correction; the pixel information is the same, the perceived colors are just wrong). This makes textures loaded into the GL_RGB format non-linear, thus all lighting calculations are wrong.
- You always have to use the GL_SRGB format to gamma correct/linearize textures which are in sRGB format.

Now, I'm kind of confused about how to proceed with applying gamma correction in OpenGL. First of all, how can I check if my monitor is applying gamma correction? I noticed in my monitor settings that my color format is set to "RGB" (I can't modify it, though). I'm connected to my PC via an HDMI cable. I'm also using the full RGB range (0-255, not the 16 to ~240 range). What I tried is to apply the gamma correction shader shown in the tutorial above, which looks essentially like this (it's a postprocess shader applied at the end of the render pipeline):

vec3 gammaCorrection(vec3 color)
{
    // gamma correction
    color = pow(color, vec3(1.0/2.2));
    return color;
}

void main()
{
    vec3 color;
    vec3 tex = texture2D(texture_diffuse, vTexcoord).rgb;
    color = gammaCorrection(tex);
    outputF = vec4(color, 1.0f);
}

The results look like this: No gamma correction: With gamma correction: The colors in the gamma-corrected image look really washed out. (To the point that it's damn ugly, as if someone overlaid a white half-transparent texture. I want the colors to pop.) Do I have to change the textures from GL_RGB to GL_SRGB in order to gamma correct them, in addition to applying the postprocess gamma correction shader? Do I have to do the same thing with all FBOs? Or is this washed-out look the intended behaviour?
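The washed-out look is what double correction looks like: textures authored in sRGB are already gamma encoded, so applying pow(1/2.2) on top of them corrects twice. The usual chain is: upload sRGB-authored albedo textures with an sRGB internal format (the GPU linearizes on sampling), keep intermediate FBOs linear (e.g. GL_RGBA16F), and gamma correct exactly once in the final pass. A minimal sketch of the upload side:

// Sampling this texture now returns linear values; the single pow(1.0/2.2)
// in the final postprocess re-encodes for the display.
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);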
  12. Me again. I noticed a weird color banding issue in my current PBR shader. After a bit of experimentation I noticed that the fresnel calculation seems to be the culprit. Here is how the color banding looks (it's a bit dark, but noticeable): I also think that the fresnel calculation is a bit off. This is the actual shader (stripped of everything except the fresnel calculation): It's worth noting that I'm writing those values into a 16-bit floating point buffer, so the FBO precision shouldn't be the culprit. Is this a math-based precision error (especially in the pow() function)? Another thing I noticed: the fresnel effect is supposed to look like this (picture shamelessly stolen from Google): However, no matter what I do, I never get this effect in my shader. (I tried all lighting conditions and material values.) Here is what a material with 50% roughness and 50% metallic value looks like: I noticed that switching the "VdotH" dot product to "LdotV" makes the effect somewhat work, but I've read conflicting information on the internet as to whether this is even correct. Here is the complete shader: Does anyone have an idea why the banding takes place and whether the fresnel calculation is even correct?
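For reference, a Schlick approximation sketch (glm, mirrors GLSL one-to-one). In the usual formulation the cosine term for direct lights is VdotH with H the normalized half vector between view and light, while the strong grazing-angle brightening in the reference image tracks NdotV (it shows up mostly in ambient/IBL terms); clamping before pow() may also matter, since slightly negative dot products feed pow() a negative base and can produce NaN artifacts:

#include <cmath>
#include <glm/glm.hpp>

glm::vec3 fresnelSchlick(float cosTheta, glm::vec3 F0)
{
    cosTheta = glm::clamp(cosTheta, 0.0f, 1.0f);   // guard against negative bases
    return F0 + (1.0f - F0) * std::pow(1.0f - cosTheta, 5.0f);
}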
  13. Hi, I am trying to program shadow volumes and I stumbled upon an artifact whose cause I cannot find. I generate the shadow volumes using a geometry shader with reversed extrusion (projecting the light-facing triangles to infinity) and write the stencil buffer according to z-fail. The base of my code is the "lighting" chapter from learnopengl.com, where I extended the shader class to include a geometry shader. I also modified the "lightingshader" to draw the ambient pass when "pass" is set to true and the diffuse/specular pass when set to false. For easier testing I added a few controls to switch the shadow volumes' color rendering on/off and to change the cubes' position, made the light number controllable, and changed the diffuse pass to render green for easier visualization of my problem.

The first picture shows the rendered scene for one point light, all cubes, with the front cube's shadow volume as the only one created (intentionally). Here, all is rendered as it should be, with all lit areas green and all areas inside the shadow volume black (with the volume's sides blended over). If I now turn on the shadow volumes for all the other cubes, we get a bit of a mess, but it's also obvious that some areas that were in shadow before are now erroneously lit (for example, the first cube to the right of the originally shadow-volumed cube). From my testing, the areas erroneously lit are the ones where more than one shadow volume marks the area as shadowed.

To check if a wrong stencil buffer value caused this problem, I decided to change the stencil function for the diffuse pass to only render if the stencil is equal to 2. As I repeated this approach with different values for the stencil function, I found that if I set the value to 1 or any other odd value, the lit and shadowed areas are inverted, and if I set it to 0 or any other even value, I get the results shown above. This led me to believe that the stencil buffer values may be clamped to [0,1], which would also explain the artifact, because being twice in shadow would equal no shadow at all. But from what I found on the internet, and from what I tested with

GLint stencilSize = 0;
glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_STENCIL, GL_FRAMEBUFFER_ATTACHMENT_STENCIL_SIZE, &stencilSize);

my stencil size is 8 bits, which should allow values within [0,255]. Does anyone know what might be the cause of this artifact or of the confusing results with other stencil functions?
// [the following code includes all used gl* functions; other parts are partially excluded for readability]

// glfw: initialize and configure
glfwInit();
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 4);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

// glfw window creation
GLFWwindow* window = glfwCreateWindow(SCR_WIDTH, SCR_HEIGHT, "LearnOpenGL", NULL, NULL);
if (window == NULL)
{
    cout << "Failed to create GLFW window" << endl;
    glfwTerminate();
    return -1;
}
glfwMakeContextCurrent(window);
glfwSetFramebufferSizeCallback(window, framebuffer_size_callback);
glfwSetCursorPosCallback(window, mouse_callback);
glfwSetScrollCallback(window, scroll_callback);

// tell GLFW to capture our mouse
glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);

// glad: load all OpenGL function pointers
if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
{
    cout << "Failed to initialize GLAD" << endl;
    return -1;
}

// ====================================================================
// window and functions are set up
// ====================================================================

// configure global opengl state
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);

// build and compile our shader program [...]
// set up vertex data (and buffer(s)) and configure vertex attributes [...]
// shader configuration [...]

// render loop
while (!glfwWindowShouldClose(window))
{
    // input processing and fps calculation [...]

    // render
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glDepthMask(GL_TRUE);   // enable depth writing
    glDepthFunc(GL_LEQUAL); // avoid z-fighting

    // draw ambient component into color and depth buffer
    view = camera.GetViewMatrix();
    projection = glm::perspective(glm::radians(camera.Zoom), (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 100.0f);
    // setting up lighting shader for ambient pass [...]

    // render the cubes
    glBindVertexArray(cubeVAO);
    for (unsigned int i = 0; i < 10; i++)
    {
        // position cube [...]
        glDrawArrays(GL_TRIANGLES, 0, 36);
    }

    //----------------------------------------------------------------
    glDepthMask(GL_FALSE);       // disable depth writing
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE); // additive blending
    glEnable(GL_STENCIL_TEST);

    // setting up shadowShader and lightingShader [...]

    for (int light = 0; light < lightsused; light++)
    {
        glDepthFunc(GL_LESS);
        glClear(GL_STENCIL_BUFFER_BIT);

        // configure stencil ops for front- and backface to write according to z-fail
        glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP); // -1 for front-facing
        glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR_WRAP, GL_KEEP);  // +1 for back-facing
        glStencilFunc(GL_ALWAYS, 0, GL_TRUE); // stencil test always passes
        if (hidevolumes)
            glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // disable writing to the color buffer
        glDisable(GL_CULL_FACE);
        glEnable(GL_DEPTH_CLAMP); // necessary to render SVs into infinity

        // draw SV -------------------
        shadowShader.use();
        shadowShader.setInt("lightnr", light);
        int nr;
        if (onecaster) nr = 1;
        else nr = 10;
        for (int i = 0; i < nr; i++)
        {
            // position cube [...]
            glDrawArrays(GL_TRIANGLES, 0, 36);
        }
        // ---------------------------

        glDisable(GL_DEPTH_CLAMP);
        glEnable(GL_CULL_FACE);
        glStencilFunc(GL_EQUAL, 0, GL_TRUE);    // stencil test passes for ==0, so only for non-shadowed areas
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP); // keep stencil values for illumination
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE); // enable writing to the color buffer
        glDepthFunc(GL_LEQUAL); // avoid z-fighting

        // draw diffuse and specular pass
        lightingShader.use();
        lightingShader.setInt("lightnr", light);

        // render the cubes
        for (unsigned int i = 0; i < 10; i++)
        {
            // position cube [...]
            glDrawArrays(GL_TRIANGLES, 0, 36);
        }
    }

    glDisable(GL_BLEND);
    glDepthMask(GL_TRUE); // enable depth writing
    glDisable(GL_STENCIL_TEST);
    //----------------------------------------------------------------

    // also draw the lamp object(s) [...]

    // glfw: swap buffers and poll IO events (keys pressed/released, mouse moved etc.)
    glfwSwapBuffers(window);
    glfwPollEvents();
}

// optional: de-allocate all resources once they've outlived their purpose:
glDeleteVertexArrays(1, &cubeVAO);
glDeleteVertexArrays(1, &lightVAO);
glDeleteBuffers(1, &VBO);

// glfw: terminate, clearing all previously allocated GLFW resources.
glfwTerminate();
return 0;
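One detail worth checking in the code above: the third argument of glStencilFunc is a bit mask, not a boolean. GL_TRUE is the value 1, so only the lowest stencil bit takes part in the comparison, which makes the 8-bit buffer behave exactly as if it were clamped to [0,1] (two overlapping volumes count to 2, 2 & 1 == 0, and the area is treated as unshadowed; odd reference values invert the result). Using the full mask restores the expected counting behaviour:

glStencilFunc(GL_ALWAYS, 0, 0xFF); // during the shadow volume pass
glStencilFunc(GL_EQUAL,  0, 0xFF); // during the lit (diffuse/specular) pass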
  14. Hi, I am teaching myself graphics and OO programming and came upon this: my Window class creates an input handler instance, the GLFW user pointer is redirected to that object, and methods there do the input handling for keyboard and mouse. That works. Now, as part of the input handling, I have an orbiting camera that is controlled by mouse movement. GLFW_CURSOR_DISABLED is set as proposed in the GLFW manual. The manual says that in this case the cursor is automagically reset to the window's center. But if I don't reset it manually with glfwSetCursorPos( center ), the mouse values seem to add up until the scene locks up. Here are some code snippets, mostly standard from tutorials:

// EventHandler
m_eventHandler = new EventHandler( this, glm::vec3( 0.0f, 5.0f, 0.0f ), glm::vec3( 0.0f, 1.0f, 0.0f ) );
glfwSetWindowUserPointer( m_window, m_eventHandler );
m_eventHandler->setCallbacks();

Creation of the input handler during window creation. For now, the camera is part of the input handler, hence the two vectors (position, up-vector). In the future I'll pull that functionality out into its own class that inherits from the event handler.

void EventHandler::setCallbacks()
{
    glfwSetCursorPosCallback( m_window->getWindow(), cursorPosCallback );
    glfwSetKeyCallback( m_window->getWindow(), keyCallback );
    glfwSetScrollCallback( m_window->getWindow(), scrollCallback );
    glfwSetMouseButtonCallback( m_window->getWindow(), mouseButtonCallback );
}

Setting the callbacks in the input handler.

// static
void EventHandler::cursorPosCallback( GLFWwindow *w, double x, double y )
{
    EventHandler *c = reinterpret_cast<EventHandler *>( glfwGetWindowUserPointer( w ) );
    c->onMouseMove( (float)x, (float)y );
}

Example of the cursor pos callback redirection to a class method.

// virtual
void EventHandler::onMouseMove( float x, float y )
{
    if( x != 0 || y != 0 )
    {
        // @todo cursor should be set automatically, according to doc
        if( m_window->isCursorDisabled() )
            glfwSetCursorPos( m_window->getWindow(), m_center.x, m_center.y );
        // switch up/down because it's more intuitive
        m_yaw += m_mouseSensitivity * ( m_center.x - x );
        m_pitch += m_mouseSensitivity * ( m_center.y - y );
        // to avoid locking
        if( m_pitch > 89.0f ) m_pitch = 89.0f;
        if( m_pitch < -89.0f ) m_pitch = -89.0f;
        // Update Front, Right and Up Vectors
        updateCameraVectors();
    }
} // onMouseMove()

The mouse movement processor method. The interesting part is the manual reset of the mouse position that made the thing work...

// straight line distance between the camera and look at point, here (0,0,0)
float distance = glm::length( m_target - m_position );
// Calculate the camera position using the distance and angles
float camX = distance * -std::sin( glm::radians( m_yaw ) ) * std::cos( glm::radians( m_pitch ) );
float camY = distance * -std::sin( glm::radians( m_pitch ) );
float camZ = -distance * std::cos( glm::radians( m_yaw ) ) * std::cos( glm::radians( m_pitch ) );
// Set the camera position and perspective vectors
m_position = glm::vec3( camX, camY, camZ );
m_front = glm::vec3( 0.0, 0.0, 0.0 ) - m_position;
m_up = m_worldUp;
m_right = glm::normalize( glm::cross( m_front, m_worldUp ) );
glm::lookAt( m_position, m_front, m_up );

The orbiting camera vector calculation in updateCameraVectors(). Now, to my understanding, the GLFW manual explicitly states that if the cursor is disabled it is reset to the center, but my code only works if it is reset manually, so I fear I am doing something wrong. It is not world-moving (only if there is a world to render :-)), but somehow I am curious what I am missing. I am not a professional programmer, just a hobbyist, so it may well be that I got something fundamentally wrong :-) Thanks for any hints!
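A note that may explain the behaviour: with GLFW_CURSOR_DISABLED, GLFW provides a virtual, unbounded cursor position rather than recentring it, so comparing against the window centre accumulates ever-growing values unless the cursor is manually reset. The usual tutorial pattern instead takes the delta between successive events. A sketch, assuming members m_lastX, m_lastY and m_firstMouse are added to EventHandler:

void EventHandler::onMouseMove( float x, float y )
{
    if( m_firstMouse ) // avoid a huge jump on the very first event
    {
        m_lastX = x;
        m_lastY = y;
        m_firstMouse = false;
    }
    m_yaw   += m_mouseSensitivity * ( m_lastX - x );
    m_pitch += m_mouseSensitivity * ( m_lastY - y );
    m_lastX = x;
    m_lastY = y;
    if( m_pitch > 89.0f ) m_pitch = 89.0f;
    if( m_pitch < -89.0f ) m_pitch = -89.0f;
    updateCameraVectors();
}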
  15. Hi, currently I'm helping my friend with his master's thesis, creating a 3D airplane simulation which he will use in a presentation (just part of the whole). I've got a Boeing 737-800 model, which is read from the obj file format and directly rendered using my simulator program. The model has many moving parts; they move independently, based on defined balls. Here's the left wing from the top, to depict one element and its movement logic: Each moving object (here it is aleiron_left) has two balls, _left and _right, which define the horizontal line to rotate around. I created the movement and rotation logic just using translatef and rotatef; each object is moved independently within its own push and pop matrices. Once the project runs, I'm able to move the aileron based on a parameter I store in some variable, and it moves all right: the first screenshot is before the move, and the second one after the move. I added the balls for direct reference with the 3ds Max model. The balls are also moved using translatef and rotatef with the same movement logic, so they are "glued" to their parent part.

The problem is, I need to calculate their world coordinates after the move, so that in the next step, outside of the render method, I can obtain direct x, y, z coordinates without again using translatef and rotatef. I tried to multiply some matrices and so on, but without result, and currently I've got no idea how to do it. A fragment of the code looks like this:

void DrawModel(Model.IModel model)
{
    GL.glLoadIdentity();
    GL.glPushMatrix();
    this.SetSceneByCamera();
    foreach (Model.IModelPart part in model.ModelParts)
        this.DrawPart(part);
    GL.glPopMatrix();
}

void DrawPart(Model.IModelPart part)
{
    bool draw = true;
    GL.glPushMatrix();

    #region get part children
    Model.IModelPart child_top = part.GetChild("_top");
    Model.IModelPart child_left = part.GetChild("_left");
    Model.IModelPart child_right = part.GetChild("_right");
    Model.IModelPart child_bottom = part.GetChild("_bottom");
    #endregion get part children

    #region by part
    switch (part.PartTBase)
    {
        case Model.ModelPart.PartTypeBase.ALEIRON:
        {
            #region aleiron
            float moveBy = 0.0f;
            bool selected = false;
            switch (part.PartT)
            {
                case Model.ModelPart.PartType.ALEIRON_L:
                {
                    selected = true;
                    moveBy = this.GetFromValDic(part);
                    moveBy *= -1;
                }
                break;
            }
            if (selected && child_left != null && child_right != null)
            {
                GL.glTranslatef(child_right.GetCentralPoint.X, child_right.GetCentralPoint.Y, child_right.GetCentralPoint.Z);
                GL.glRotatef(moveBy,
                    child_left.GetCentralPoint.X - child_right.GetCentralPoint.X,
                    child_left.GetCentralPoint.Y - child_right.GetCentralPoint.Y,
                    child_left.GetCentralPoint.Z - child_right.GetCentralPoint.Z);
                GL.glTranslatef(-child_right.GetCentralPoint.X, -child_right.GetCentralPoint.Y, -child_right.GetCentralPoint.Z);
            }
            #endregion aleiron
        }
        break;
    }
    #endregion by part

    #region draw part
    if (!part.Visible)
        draw = false;
    if (draw)
    {
        GL.glBegin(GL.GL_TRIANGLES);
        for (int i = 0; i < part.ModelPoints.Count; i += 3)
        {
            if (part.ModelPoints[i].GetTexCoord != null)
                GL.glTexCoord3d((double)part.ModelPoints[i].GetTexCoord.X, (double)part.ModelPoints[i].GetTexCoord.Y, (double)part.ModelPoints[i].GetTexCoord.Z);
            GL.glVertex3f(part.ModelPoints[i].GetPoint.X, part.ModelPoints[i].GetPoint.Y, part.ModelPoints[i].GetPoint.Z);
            if (part.ModelPoints[i + 1].GetTexCoord != null)
                GL.glTexCoord3d((double)part.ModelPoints[i + 1].GetTexCoord.X, (double)part.ModelPoints[i + 1].GetTexCoord.Y, (double)part.ModelPoints[i + 1].GetTexCoord.Z);
            GL.glVertex3f(part.ModelPoints[i + 1].GetPoint.X, part.ModelPoints[i + 1].GetPoint.Y, part.ModelPoints[i + 1].GetPoint.Z);
            if (part.ModelPoints[i + 2].GetTexCoord != null)
                GL.glTexCoord3d((double)part.ModelPoints[i + 2].GetTexCoord.X, (double)part.ModelPoints[i + 2].GetTexCoord.Y, (double)part.ModelPoints[i + 2].GetTexCoord.Z);
            GL.glVertex3f(part.ModelPoints[i + 2].GetPoint.X, part.ModelPoints[i + 2].GetPoint.Y, part.ModelPoints[i + 2].GetPoint.Z);
        }
        GL.glEnd();
    }
    #endregion draw part

    GL.glPopMatrix();
}

What happens, step by step:

1. Prepare the scene; set the model camera by the current variables (the user can move the camera using keyboard and mouse in all directions).
2. Take the model, which contains all the modelParts, and draw them one by one. Each part has its own type and subtype, set by part name.
3. Draw the current part independently; get its child_left and child_right -> those are the balls by name.
4. Get the moveBy value, which is an angle. Every part has its own variable, modified using the keyboard, like "press a, increase variable x by 5.0f", which in turn will rotate the aileron up by 5.0f.
5. Move and rotate the object by its children.
6. Draw the part triangles (the part has its own list of triangles that are read from the obj file).

When I compute the blue ball that is glued to its parent, I do exactly the same, so it moves in the same manner:

case Model.ModelPart.PartTypeBase.FLOW:
{
    float moveBy = 0.0f;
    Model.IModelPart parent = part.GetParent();
    if (parent != null)
    {
        moveBy = this.GetFromValDic(parent);
        if (parent.Reverse)
            moveBy *= -1.0f;
        child_left = parent.GetChild("_left");
        child_right = parent.GetChild("_right");
    }
    if (child_left != null && child_right != null)
    {
        GL.glGetFloatv(GL.GL_MODELVIEW_MATRIX, part.BeforeMove);
        GL.glTranslatef(child_right.GetCentralPoint.X, child_right.GetCentralPoint.Y, child_right.GetCentralPoint.Z);
        GL.glRotatef(moveBy,
            child_left.GetCentralPoint.X - child_right.GetCentralPoint.X,
            child_left.GetCentralPoint.Y - child_right.GetCentralPoint.Y,
            child_left.GetCentralPoint.Z - child_right.GetCentralPoint.Z);
        GL.glTranslatef(-child_right.GetCentralPoint.X, -child_right.GetCentralPoint.Y, -child_right.GetCentralPoint.Z);
    }
}
break;

Before the move I read the model matrix using GL.glGetFloatv(GL.GL_MODELVIEW_MATRIX, matrix), which gives me these values (red contains the current model camera): The blue ball has its own initial central point computed, and currently it is (x, y, z): 339.6048, 15.811758, -166.209473. As far as I'm concerned, the point's world coordinates never change; only the matrix is moved and rotated around some point. So the final question is: how do I calculate the same point's world coordinates after the object is moved and rotated?

PS: To visualize the problem, I've created a new small box, which I placed at the center of the blue ball after it is moved upwards. The position of the box is written by eye - I just placed it a couple of times, and after the x-th try I managed to place it correctly using similar world coordinates: the first screenshot is from the left of the box, and the second one is from behind. The box's world coordinates are (x, y, z): 340.745117f, 30.0f, -157.6322f, so relative to the original central point of the blue ball it is a bit higher and closer to the center of the wing. Simply put, I need to:

1. Take the original central point of the blue ball: 339.6048, 15.811758, -166.209473.
2. After the movement and rotation of the blue ball is finished, apply some algorithm (like take something from the modelview matrix, multiply by something else).
3. Finally, after the algorithm is complete, get as a result the point 340.745117f, 30.0f, -157.6322f (but now computed), which is the central point in world space after the movement.

PS2: Sorry for the long post; I tried to explain exactly what I'm dealing with. Thank you in advance for your help.
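The fixed-function sequence translate(pivot) * rotate * translate(-pivot) can be mirrored on the CPU and applied to the ball's original centre to get its world position after the move. Sketch in C++/glm (the names are illustrative; the same 4x4 multiply works with any matrix type on the C# side):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 movedCentre(const glm::vec3& originalCentre,
                      const glm::vec3& pivotRight, const glm::vec3& pivotLeft,
                      float moveByDegrees)
{
    glm::mat4 m(1.0f);
    m = glm::translate(m, pivotRight);                      // like glTranslatef(right)
    m = glm::rotate(m, glm::radians(moveByDegrees),
                    glm::normalize(pivotLeft - pivotRight)); // like glRotatef(moveBy, axis)
    m = glm::translate(m, -pivotRight);                     // like glTranslatef(-right)
    return glm::vec3(m * glm::vec4(originalCentre, 1.0f));  // point in world space
}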
  16. Hello fellow programmers, for a couple of days now I've been building my own planet renderer, just to see how floating-point precision issues can be tackled. As you can probably imagine, I quickly faced FPP issues when trying to render absurdly large planets. I have used the classical quadtree LOD approach. I've generated my grids with 33 vertices (x: -1 to 1, y: -1 to 1, z = 0). Each grid is managed by a TerrainNode class that, depending on the side it represents (top, bottom, left, right, front, back), creates a special rotation-translation matrix that moves and rotates the grid away from the origin, so that when I finally normalize all the vertices in my vertex shader I get a perfect sphere.

T = glm::translate(glm::dmat4(1.0), glm::dvec3(0.0, 0.0, 1.0));
R = glm::rotate(glm::dmat4(1.0), glm::radians(180.0), glm::dvec3(1.0, 0.0, 0.0));
sides[0] = new TerrainNode(1.0, radius, T * R, glm::dvec2(0.0, 0.0), new TerrainTile(1.0, SIDE_FRONT));

T = glm::translate(glm::dmat4(1.0), glm::dvec3(0.0, 0.0, -1.0));
R = glm::rotate(glm::dmat4(1.0), glm::radians(0.0), glm::dvec3(1.0, 0.0, 0.0));
sides[1] = new TerrainNode(1.0, radius, R * T, glm::dvec2(0.0, 0.0), new TerrainTile(1.0, SIDE_BACK));

// And so on and so forth for the rest of the sides

As you can see, for the front side grid I rotate it 180 degrees to make it face the camera and push it towards the eye; the back side is handled almost the same way, only I don't need to rotate it but simply push it away from the eye. The same technique is applied to the rest of the faces (obviously, with the proper rotations/translations). The matrix that results from the multiplication of R and T (in that particular order) is sent to my vertex shader as 'r_Grid':

// spherify
vec3 V = normalize((r_Grid * vec4(r_Vertex, 1.0)).xyz);
gl_Position = r_ModelViewProjection * vec4(V, 1.0);

The 'r_ModelViewProjection' matrix is generated on the CPU in this manner:

// Not the most efficient way, but it works.
glm::dmat4 Camera::getMatrix()
{
    // Create the view matrix
    // Roll, Yaw and Pitch are all quaternions.
    glm::dmat4 View = glm::toMat4(Roll) * glm::toMat4(Pitch) * glm::toMat4(Yaw);
    // The model matrix is generated by translating in the opposite direction of the camera.
    glm::dmat4 Model = glm::translate(glm::dmat4(1.0), -Position);
    // Projection = glm::perspective(fovY, aspect, zNear, zFar);
    // zNear = 0.1, zFar = 1.0995116e12
    return Projection * View * Model;
}

I managed to get rid of z-fighting by using a technique called logarithmic depth buffer, described in this article; it works amazingly well, no z-fighting at all, at least not visibly. Each frame I render each node by sending the generated matrices this way:

// set the r_ModelViewProjection uniform
// Sneak in the mRadiusMatrix, which is a matrix that contains the radius of my planet.
Shader::setUniform(0, Camera::getInstance()->getMatrix() * mRadiusMatrix);
// set the r_Grid matrix uniform i created earlier.
Shader::setUniform(1, r_Grid);
grid->render();

My planet's radius is around 6400000.0 units, absurdly large, but that's what I really want to achieve. Everything works well, the nodes split and merge as you'd expect, however whenever I get close to the surface of the planet the rounding errors start to kick in, giving me that lovely staircase effect. I've read that if I could render each grid relative to the camera I could get better precision on the surface, effectively getting rid of those rounding errors.

My question is: how can I achieve this relative-to-camera rendering in my scenario? I know that I have to do most of the work on the CPU in double precision, and that's exactly what I'm doing. I only use double on the CPU side, where I also do most of the matrix multiplications. As you can see from my vertex shader, I only do the usual r_ModelViewProjection * (some vertex coords). Thank you for your suggestions!
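A sketch of the camera-relative idea, under the assumption that node positions are available in double on the CPU: subtract the camera position there (where doubles absorb the large magnitudes), downcast only the small relative offset to float, and build the view matrix from rotation alone, with no translation. The large numbers then cancel in double precision instead of in the GPU's floats:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 makeModelViewProjection(const glm::dvec3& nodeWorldPos,
                                  const glm::dvec3& cameraWorldPos,
                                  const glm::mat4&  viewRotation, // rotation part only
                                  const glm::mat4&  projection)
{
    glm::vec3 relative(nodeWorldPos - cameraWorldPos); // small numbers after the subtraction
    glm::mat4 model = glm::translate(glm::mat4(1.0f), relative);
    return projection * viewRotation * model;          // safe to send as float now
}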
  17. Hi all, I'm trying to generate MIP-maps of a 2D-array texture, but only a limited number of array layers and MIP-levels. For instance, to generate only the first 3 MIP-maps of a single array layer of a large 2D-array. After experimenting with glBlitFramebuffer to generate the MIP-maps manually, but still with some sort of hardware acceleration, I ended up with glTextureView, which already works with the limited number of array layers (I can also verify the result in RenderDoc). However, glGenerateMipmap (or glGenerateTextureMipmap) always generates the entire MIP-chain for the specified array layer. Thus, the <numlevels> parameter of glTextureView seems to be ignored in the MIP-map generation process. I also tried to use glTexParameteri(..., GL_TEXTURE_MAX_LEVEL, 3), but this has the same result. Can anyone explain to me how to solve this? Here is example code showing how I do it:

void GenerateSubMips(
    GLuint texID, GLenum texTarget, GLenum internalFormat,
    GLuint baseMipLevel, GLuint numMipLevels,
    GLuint baseArrayLayer, GLuint numArrayLayers)
{
    GLuint texViewID = 0;
    glGenTextures(1, &texViewID);
    glTextureView(texViewID, texTarget, texID, internalFormat,
                  baseMipLevel, numMipLevels, baseArrayLayer, numArrayLayers);
    glGenerateTextureMipmap(texViewID);
    glDeleteTextures(1, &texViewID);
}

GenerateSubMips(
    myTex, GL_TEXTURE_2D_ARRAY, GL_RGBA8,
    0, 3, // only the first 3 MIP-maps
    4, 1  // only one array layer with index 4
);

Thanks and kind regards, Lukas
  18. I'm trying to implement PBR in my simple OpenGL renderer using multiple lighting passes, one pass per light, as follows:

1. First pass = depth
2. Second pass = ambient
3. Passes [3..n] = one pass for each light in the scene

I'm using the blending function glBlendFunc(GL_ONE, GL_ONE) for passes [3..n], and I'm doing gamma correction at the end of each fragment shader. But I still have a problem with the output image: it just looks noisy, especially when I'm using texture maps. Is there anything wrong with those steps, or is there any improvement to this process?
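One thing worth checking in that setup: gamma correction is non-linear, so applying it inside each additive pass corrupts the sum; the usual arrangement is to accumulate all light passes in a linear floating-point target and gamma-correct once in a final fullscreen pass. A quick numeric check of the inequality (values are arbitrary):

#include <cmath>
#include <cstdio>

int main()
{
    double a = 0.2, b = 0.3, g = 1.0 / 2.2;
    // correct-once vs correct-per-pass:
    std::printf("%f vs %f\n",
                std::pow(a + b, g),              // ~0.730
                std::pow(a, g) + std::pow(b, g)); // ~1.060
    return 0;
}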
  19. Hi everyone, I would appreciate some assistance from anyone who has similar experience or a nice idea! I have created a skybox (as a cube) and now I need to add a floor/ground. The skybox is created from a cubemap, and initially it was infinite; now it is finite, with a specific size. The floor is a quad in the middle of the skybox, like a horizon. I have two problems: when moving the skybox upwards or downwards, I need to sample from points even above the horizon while sampling from the bottom at the same time. I am trying to create a seamless blend of the texture at the points of the horizon, where the quad connects to the skybox; however, I get skew effects. Has anybody done something similar? Is there any good practice? Thanks, everyone!
  20. Hello everyone! I'm learning OpenGL, and currently I'm making a simple 2D game engine to test what I've learned so far. In order not to say too much, I made a video showing the behavior of the rendering. Video: What I was expecting was the player moving around. When I render only the player, he moves as I would expect. When I add a second Sprite object, this new sprite object moves instead of the player, and finally, if I add a third Sprite object, the third one moves. The weird thing is that I'm transforming the vertices of the Player, so why is the transformation being applied somewhere else? Take a look at my code:

Sprite Class (You mostly need to see the Constructor, the Render method and the Move method)

#include "Brain.h"
#include <glm/gtc/matrix_transform.hpp>
#include <vector>

struct Sprite::Implementation
{
    //Position.
    struct pos pos;
    //Tag.
    std::string tag;
    //Texture.
    Texture *texture;
    //Model matrix.
    glm::mat4 model;
    //Vertex Array Object.
    VertexArray *vao;
    //Vertex Buffer Object.
    VertexBuffer *vbo;
    //Layout.
    VertexBufferLayout *layout;
    //Index Buffer Object.
    IndexBuffer *ibo;
    //Shader.
    Shader *program;
    //Brains.
    std::vector<Brain *> brains;
    //Deconstructor.
    ~Implementation();
};

Sprite::Sprite(std::string image_path, std::string tag, float x, float y)
{
    //Create Pointer To Implementation.
    m_Impl = new Implementation();
    //Set the Position of the Sprite object.
    m_Impl->pos.x = x;
    m_Impl->pos.y = y;
    //Set the tag.
    m_Impl->tag = tag;
    //Create The Texture.
    m_Impl->texture = new Texture(image_path);
    //Initialize the model Matrix.
    m_Impl->model = glm::mat4(1.0f);
    //Get the Width and the Height of the Texture.
    int width = m_Impl->texture->GetWidth();
    int height = m_Impl->texture->GetHeight();
    //Create the Vertices.
    float verticies[] =
    {
        //Positions            //Texture Coordinates.
        x,         y,          0.0f, 0.0f,
        x + width, y,          1.0f, 0.0f,
        x + width, y + height, 1.0f, 1.0f,
        x,         y + height, 0.0f, 1.0f
    };
    //Create the Indices.
    unsigned int indicies[] = { 0, 1, 2, 2, 3, 0 };
    //Create Vertex Array.
    m_Impl->vao = new VertexArray();
    //Create the Vertex Buffer.
    m_Impl->vbo = new VertexBuffer((void *)verticies, sizeof(verticies));
    //Create The Layout.
    m_Impl->layout = new VertexBufferLayout();
    m_Impl->layout->PushFloat(2);
    m_Impl->layout->PushFloat(2);
    m_Impl->vao->AddBuffer(m_Impl->vbo, m_Impl->layout);
    //Create the Index Buffer.
    m_Impl->ibo = new IndexBuffer(indicies, 6);
    //Create the new shader.
    m_Impl->program = new Shader("Shaders/SpriteShader.shader");
}

//Render.
void Sprite::Render(Window *window)
{
    //Create the projection Matrix based on the current window width and height.
    glm::mat4 proj = glm::ortho(0.0f, (float)window->GetWidth(), 0.0f, (float)window->GetHeight(), -1.0f, 1.0f);
    //Set the MVP Uniform.
    m_Impl->program->setUniformMat4f("u_MVP", proj * m_Impl->model);
    //Run All The Brains (Scripts) of this game object (sprite).
    for (unsigned int i = 0; i < m_Impl->brains.size(); i++)
    {
        //Get Current Brain.
        Brain *brain = m_Impl->brains[i];
        //Call the start function only once!
        if (brain->GetStart())
        {
            brain->SetStart(false);
            brain->Start();
        }
        //Call the update function every frame.
        brain->Update();
    }
    //Render.
    window->GetRenderer()->Draw(m_Impl->vao, m_Impl->ibo, m_Impl->texture, m_Impl->program);
}

void Sprite::Move(float speed, bool left, bool right, bool up, bool down)
{
    if (left)
    {
        m_Impl->pos.x -= speed;
        m_Impl->model = glm::translate(m_Impl->model, glm::vec3(-speed, 0, 0));
    }
    if (right)
    {
        m_Impl->pos.x += speed;
        m_Impl->model = glm::translate(m_Impl->model, glm::vec3(speed, 0, 0));
    }
    if (up)
    {
        m_Impl->pos.y += speed;
        m_Impl->model = glm::translate(m_Impl->model, glm::vec3(0, speed, 0));
    }
    if (down)
    {
        m_Impl->pos.y -= speed;
        m_Impl->model = glm::translate(m_Impl->model, glm::vec3(0, -speed, 0));
    }
}

void Sprite::AddBrain(Brain *brain)
{
    //Push back the brain object.
    m_Impl->brains.push_back(brain);
}

pos *Sprite::GetPos() { return &m_Impl->pos; }
std::string Sprite::GetTag() { return m_Impl->tag; }
int Sprite::GetWidth() { return m_Impl->texture->GetWidth(); }
int Sprite::GetHeight() { return m_Impl->texture->GetHeight(); }

Sprite::~Sprite() { delete m_Impl; }

//Implementation Deconstructor.
Sprite::Implementation::~Implementation()
{
    delete texture;
    delete vao;
    delete vbo;
    delete layout;
    delete ibo;
    delete program;
}

Renderer Class

#include "Renderer.h"
#include "Error.h"

Renderer::Renderer() { }
Renderer::~Renderer() { }

void Renderer::Draw(VertexArray *vao, IndexBuffer *ibo, Texture *texture, Shader *program)
{
    vao->Bind();
    ibo->Bind();
    program->Bind();
    if (texture != NULL)
        texture->Bind();
    GLCall(glDrawElements(GL_TRIANGLES, ibo->GetCount(), GL_UNSIGNED_INT, NULL));
}

void Renderer::Clear(float r, float g, float b)
{
    GLCall(glClearColor(r, g, b, 1.0));
    GLCall(glClear(GL_COLOR_BUFFER_BIT));
}

void Renderer::Update(GLFWwindow *window)
{
    /* Swap front and back buffers */
    glfwSwapBuffers(window);
    /* Poll for and process events */
    glfwPollEvents();
}

Shader Code

#shader vertex
#version 330 core
layout(location = 0) in vec4 aPos;
layout(location = 1) in vec2 aTexCoord;
out vec2 t_TexCoord;
uniform mat4 u_MVP;
void main()
{
    gl_Position = u_MVP * aPos;
    t_TexCoord = aTexCoord;
}

#shader fragment
#version 330 core
out vec4 aColor;
in vec2 t_TexCoord;
uniform sampler2D u_Texture;
void main()
{
    aColor = texture(u_Texture, t_TexCoord);
}

Also, I'm pretty sure that every time I hit the up, down, left and right arrows on the keyboard, I'm changing the model matrix of the Player and not the others.

Window Class:

#include "Window.h"
#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include "Error.h"
#include "Renderer.h"
#include "Scene.h"
#include "Input.h"

//Global Variables.
int screen_width, screen_height;

//On Window Resize.
void OnWindowResize(GLFWwindow *window, int width, int height);

//Implementation Structure.
struct Window::Implementation
{
    //GLFW Window.
    GLFWwindow *GLFW_window;
    //Renderer.
    Renderer *renderer;
    //Delta Time.
    double delta_time;
    //Frames Per Second.
    int fps;
    //Scene.
    Scene *scnene;
    //Input.
    Input *input;
    //Deconstructor.
    ~Implementation();
};

//Window Constructor.
Window::Window(std::string title, int width, int height)
{
    //Initializing width and height.
    screen_width = width;
    screen_height = height;
    //Create Pointer To Implementation.
    m_Impl = new Implementation();
    //Try initializing GLFW.
    if (!glfwInit())
    {
        std::cout << "GLFW could not be initialized!" << std::endl;
        std::cout << "Press ENTER to exit..." << std::endl;
        std::cin.get();
        exit(-1);
    }
    //Setting up OpenGL Version 3.3 Core Profile.
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    /* Create a windowed mode window and its OpenGL context */
    m_Impl->GLFW_window = glfwCreateWindow(width, height, title.c_str(), NULL, NULL);
    if (!m_Impl->GLFW_window)
    {
        std::cout << "GLFW could not create a window!" << std::endl;
        std::cout << "Press ENTER to exit..." << std::endl;
        std::cin.get();
        glfwTerminate();
        exit(-1);
    }
    /* Make the window's context current */
    glfwMakeContextCurrent(m_Impl->GLFW_window);
    //Initialize GLEW.
    if (glewInit() != GLEW_OK)
    {
        std::cout << "GLEW could not be initialized!" << std::endl;
        std::cout << "Press ENTER to exit..." << std::endl;
        std::cin.get();
        glfwTerminate();
        exit(-1);
    }
    //Enabling Blending.
    GLCall(glEnable(GL_BLEND));
    GLCall(glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA));
    //Setting the ViewPort.
    GLCall(glViewport(0, 0, width, height));
    //**********Initializing Implementation**********//
    m_Impl->renderer = new Renderer();
    m_Impl->delta_time = 0.0;
    m_Impl->fps = 0;
    m_Impl->input = new Input(this);
    //**********Initializing Implementation**********//
    //Set Frame Buffer Size Callback.
    glfwSetFramebufferSizeCallback(m_Impl->GLFW_window, OnWindowResize);
}

//Window Deconstructor.
Window::~Window() { delete m_Impl; }

//Window Main Loop.
void Window::MainLoop()
{
    //Time Variables.
    double start_time = 0, end_time = 0, old_time = 0, total_time = 0;
    //Frames Counter.
    int frames = 0;
    /* Loop until the user closes the window */
    while (!glfwWindowShouldClose(m_Impl->GLFW_window))
    {
        old_time = start_time;      //Total time of previous frame.
        start_time = glfwGetTime(); //Current frame start time.
        //Calculate the Delta Time.
        m_Impl->delta_time = start_time - old_time;
        //Get Frames Per Second.
        if (total_time >= 1)
        {
            m_Impl->fps = frames;
            total_time = 0;
            frames = 0;
        }
        //Clearing The Screen.
        m_Impl->renderer->Clear(0, 0, 0);
        //Render The Scene.
        if (m_Impl->scnene != NULL)
            m_Impl->scnene->Render(this);
        //Updating the Screen.
        m_Impl->renderer->Update(m_Impl->GLFW_window);
        //Increasing frames counter.
        frames++;
        //End Time.
        end_time = glfwGetTime();
        //Total time after the frame completed.
        total_time += end_time - start_time;
    }
    //Terminate GLFW.
    glfwTerminate();
}

//Load Scene.
void Window::LoadScene(Scene *scene)
{
    //Set the scene.
    m_Impl->scnene = scene;
}

//Get Delta Time.
double Window::GetDeltaTime() { return m_Impl->delta_time; }
//Get FPS.
int Window::GetFPS() { return m_Impl->fps; }
//Get Width.
int Window::GetWidth() { return screen_width; }
//Get Height.
int Window::GetHeight() { return screen_height; }
//Get Input.
Input *Window::GetInput() { return m_Impl->input; }
Renderer *Window::GetRenderer() { return m_Impl->renderer; }
GLFWwindow *Window::GetGLFWindow() { return m_Impl->GLFW_window; }

//Implementation Deconstructor.
Window::Implementation::~Implementation()
{
    delete renderer;
    delete input;
}

//OnWindowResize
void OnWindowResize(GLFWwindow *window, int width, int height)
{
    screen_width = width;
    screen_height = height;
    //Updating the ViewPort.
    GLCall(glViewport(0, 0, width, height));
}

Brain Class

#include "Brain.h"
#include "Sprite.h"
#include "Window.h"

struct Brain::Implementation
{
    //Just A Flag.
    bool started;
    //Window Pointer.
    Window *window;
    //Sprite Pointer.
    Sprite *sprite;
};

Brain::Brain(Window *window, Sprite *sprite)
{
    //Create Pointer To Implementation.
    m_Impl = new Implementation();
    //Initialize Implementation.
    m_Impl->started = true;
    m_Impl->window = window;
    m_Impl->sprite = sprite;
}

Brain::~Brain()
{
    //Delete Pointer To Implementation.
    delete m_Impl;
}

void Brain::Start() { }
void Brain::Update() { }

Window *Brain::GetWindow() { return m_Impl->window; }
Sprite *Brain::GetSprite() { return m_Impl->sprite; }
bool Brain::GetStart() { return m_Impl->started; }
void Brain::SetStart(bool value) { m_Impl->started = value; }

Script Class (It's a Brain subclass!!!)

#include "Script.h"

Script::Script(Window *window, Sprite *sprite) : Brain(window, sprite) { }
Script::~Script() { }

void Script::Start()
{
    std::cout << "Game Started!" << std::endl;
}

void Script::Update()
{
    Input *input = this->GetWindow()->GetInput();
    Sprite *sp = this->GetSprite();
    //Move this sprite.
    this->GetSprite()->Move(200 * this->GetWindow()->GetDeltaTime(),
        input->GetKeyDown("left"), input->GetKeyDown("right"),
        input->GetKeyDown("up"), input->GetKeyDown("down"));
    std::cout << sp->GetTag().c_str() << ".x = " << sp->GetPos()->x << ", "
              << sp->GetTag().c_str() << ".y = " << sp->GetPos()->y << std::endl;
}

Main:

#include "SpaceShooterEngine.h"
#include "Script.h"

int main()
{
    Window w("title", 600, 600);
    Scene *scene = new Scene();
    Sprite *player = new Sprite("Resources/Images/player.png", "Player", 100, 100);
    Sprite *other = new Sprite("Resources/Images/cherno.png", "Other", 400, 100);
    Sprite *other2 = new Sprite("Resources/Images/cherno.png", "Other", 300, 400);
    Brain *brain = new Script(&w, player);
    player->AddBrain(brain);
    scene->AddSprite(player);
    scene->AddSprite(other);
    scene->AddSprite(other2);
    w.LoadScene(scene);
    w.MainLoop();
    return 0;
}

I literally can't find what is wrong. If you need more code, ask me to post it. I will also attach all the source files.
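One thing to look at in Sprite::Render: without DSA, glUniform* writes to the program bound at call time. If Shader::setUniformMat4f does not bind the program internally (the Shader source isn't shown here, so this is an assumption), the u_MVP set there lands on whichever program the previously drawn sprite left bound, which would shift every transform onto the next sprite drawn, exactly the symptom described. Binding before setting is safe either way:

//Set the MVP Uniform (bind this sprite's program first, so the uniform
//goes to it and not to the previously bound program).
m_Impl->program->Bind();
m_Impl->program->setUniformMat4f("u_MVP", proj * m_Impl->model);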
  21. Hi. I have an OK framebuffer when looking from above. Now, how do I turn it 90° to look at it from the front? It looks almost right, but the upper colors look like you're right inside it; those should be blue like the sky. I draw a GL_TRIANGLE_STRIP colored depending on a height value. Any ideas, also on the logic? Thanks
  22. I have a 9-slice shader working mostly nicely: Here, both sprites are separate images, so the shader code works well:

varying vec4 color;
varying vec2 texCoord;
uniform sampler2D tex;
uniform vec2 u_dimensions;
uniform vec2 u_border;

float map(float value, float originalMin, float originalMax, float newMin, float newMax)
{
    return (value - originalMin) / (originalMax - originalMin) * (newMax - newMin) + newMin;
}

// Helper function, because WET code is bad code
// Takes in the coordinate on the current axis and the borders
float processAxis(float coord, float textureBorder, float windowBorder)
{
    if (coord < windowBorder)
        return map(coord, 0, windowBorder, 0, textureBorder);
    if (coord < 1 - windowBorder)
        return map(coord, windowBorder, 1 - windowBorder, textureBorder, 1 - textureBorder);
    return map(coord, 1 - windowBorder, 1, 1 - textureBorder, 1);
}

void main(void)
{
    vec2 newUV = vec2(
        processAxis(texCoord.x, u_border.x, u_dimensions.x),
        processAxis(texCoord.y, u_border.y, u_dimensions.y)
    );
    // Output the color
    gl_FragColor = texture2D(tex, newUV);
}

Outside the shader, I upload vec2(slice/box.w, slice/box.h) into the u_dimensions variable, and vec2(slice/clip.w, slice/clip.h) into u_border. In this scenario, box represents the box dimensions, clip represents the dimensions of the 24x24 image to be 9-sliced, and slice is 8 (the size of each slice in pixels). This is great and all, but it's very disagreeable if I decide to organize the various 9-slice images into a single sprite-sheet image. Because OpenGL works between 0.0 and 1.0 instead of true pixel coordinates, and processes the full image rather than just the contents of the clipping rectangles, I'm kind of stumped about how to tell the shader to do what I need it to do. Does anyone have pro advice on how to make it more sprite-sheet-friendly? Thank you!
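One common approach, sketched here under the assumption of a new uniform holding the sprite's sub-rectangle in 0..1 atlas coordinates (call it u_clipRect = (x, y, w, h), a hypothetical name): run the existing processAxis() pair in the sub-rectangle's own normalized space, then remap the result into the atlas. In glm, mirroring GLSL one-to-one:

#include <glm/glm.hpp>

// slicedUV is the 0..1 output of the two processAxis() calls; this maps it
// into the sprite's sub-rectangle of the atlas before sampling.
glm::vec2 atlasUV(glm::vec2 slicedUV, glm::vec4 clipRect)
{
    return glm::vec2(clipRect.x, clipRect.y)
         + slicedUV * glm::vec2(clipRect.z, clipRect.w);
}

The border uniforms keep their meaning, since they are already expressed relative to the 24x24 clip rather than the whole sheet.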
  23. So, the algorithm looks like this:

- Use an FBO
- Clear the depth and color buffers
- Write depth
- Stretch the FBO depth texture to screen size and compare that with the final scene

The GLSL algorithm looks like this: project the light position and the vertex position to screen-space coords, then move them to 0..1 space, and compute the projected vertex-to-light vector. Then, for a defined number of samples, step from the projected vertex position toward the projected light position:

- Get the depth from the depth texture
- Unproject the given texture coord and depth value * 2.0 - 1.0, using the inverse of the (model*view)*projection matrix
- Find the closest point on the line (world vertex pos, world light pos) to the unprojected point; if the distance is less than 0.0001, I say the ray hit something and the original fragment is in shadow

Now, I forgot a few things, so I'll have to ask. In the vertex shader I do something like this:

vertexClip.x = dp43(MVP1, Vpos);
vertexClip.y = dp43(MVP2, Vpos);
vertexClip.z = dp43(MVP3, Vpos);
vertexClip.w = dp43(MVP4, Vpos);

scrcoord = vec3(vertexClip.x, vertexClip.y, vertexClip.z);

where

float dp43(vec4 matrow, vec3 p)
{
    return (matrow.x * p.x) + (matrow.y * p.y) + (matrow.z * p.z) + matrow.w;
}

It looks like I don't have to divide scrcoord by the vertexClip.w component: when I do gl_Position = vertexClip; at the end of the shader, the fragments are located where they should be. I pass scrcoord to the fragment shader, then do * 0.5 + 0.5 to know from which position of the depth texture I start to unproject values. So scrcoord should be in -1..1 space, right?

Another thing is unprojecting a screen coord to a 3D position. So far I use this formula: get the texel depth, do * 2.0 - 1.0 for all xyz components, then multiply by the inverse of the (model*view)*projection matrix like this (not even sure whether this is correct):

vec3 unproject(vec3 op)
{
    vec3 outpos;
    outpos.x = dp43(imvp1, op);
    outpos.y = dp43(imvp2, op);
    outpos.z = dp43(imvp3, op);
    return outpos;
}

And the last question is about ray sampling: I'm pretty sure it will skip some pixels, making shadowed fragments unshadowed. I need to fix that somehow too, but for now I have no clue:

vec3 act_tex_pos = fcoord + projected_ldir * sample_step * float(i);

I checked the depth texture for values and they're right.

Vert shader
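For reference, a minimal sketch of the usual unprojection, assuming a full 4x4 inverse of projection * modelview is available as a uniform (invMVP is a hypothetical name here). Note the divide by the resulting w: with a perspective projection it is required, and the dp43-based unproject above never produces a w to divide by:

uniform mat4 invMVP;  // assumed: inverse of projection * modelview

// ndc is the texcoord/depth triple already remapped to -1..1 with * 2.0 - 1.0
vec3 unprojectNDC(vec3 ndc)
{
    vec4 world = invMVP * vec4(ndc, 1.0);
    return world.xyz / world.w;  // the perspective divide restores world space
}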
  24. Hello, I have a question about premultiplied-alpha images, texture atlases, and texture filtering (minification with the bilinear filter). (I use the correct blending function for PMA, and all my images are in premultiplied-alpha format.) Suppose there are 3 different versions of a plain square image:

1. In a separate file, by itself; no padding (padding means alpha-0 pixels in my case).
2. In an atlas in which the subtexture's top-left corner (0, 0) is positioned on the top-left corner (0, 0) of the atlas. There is padding on the right and bottom but not on the left and top.
3. In an atlas in which there is padding on each of the 4 sides of the subtexture.

Do these 3 give the same result when the texture is minified using the bilinear filter? If not, I assume this means premultiplied alpha distorts images, since alpha-0 pixels are included in the interpolation. If so, why do we use something that distorts our images? And even if we don't use premultiplied alpha and use "normal" blending, alpha-0 pixels are still used in the interpolation; we add bleeding to overcome the problems this causes (I don't know whether that gives exact, true pixel rendering, though). So the question is: does every texture atlas cause distortion in the contained images, even with bilinear filtering (without mipmaps)? If so, why does everyone use atlases if they're so bad?

I tried to test this with a simple scenario, but I don't know if my test method is right. I made a yellow square image with a red border. The entire square (border included) is 116x116 px; the entire image is 128x128. I made two versions of this image, both in premultiplied-alpha format. 1st version: the square starts at (0, 0) and there are 12 pixels of padding on the bottom and right. 2nd version: the square is centered both horizontally and vertically, so there are 6 px of padding on the top, left, bottom, and right. I scaled them to 32x32 (scaling the entire image, without removing the padding) using the bilinear filter. When rendered, they give very, very different results: one is exact while the other is blurry. I need to know if this is caused by the problem I mentioned in the question. Here are the images I used in this test:

Image (top left):
Image (mid):
Render result:

I use x, y, width, and height values for the sprite that match integers, so subpixel rendering is not intended.
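To make the filtering question concrete, here is the arithmetic a bilinear filter performs at a border texel: an illustrative 50/50 mix between an opaque yellow texel and a fully transparent padding texel, with texels stored premultiplied (the values are made up for the example, not taken from the post):

#include <stdio.h>

typedef struct { float r, g, b, a; } Texel;

/* Component-wise linear interpolation, which is all bilinear filtering
   does along each axis. */
static Texel lerp_texel(Texel p, Texel q, float t)
{
    Texel out = {
        p.r + (q.r - p.r) * t,
        p.g + (q.g - p.g) * t,
        p.b + (q.b - p.b) * t,
        p.a + (q.a - p.a) * t
    };
    return out;
}

int main(void)
{
    Texel yellow  = { 1.0f, 1.0f, 0.0f, 1.0f }; /* opaque, premultiplied */
    Texel padding = { 0.0f, 0.0f, 0.0f, 0.0f }; /* alpha-0 padding pixel */
    Texel mixed = lerp_texel(yellow, padding, 0.5f);
    /* Prints 0.5 0.5 0 0.5: under PMA blending (src + dst * (1 - a)) this
       composites as half-covered yellow, a coverage-weighted result rather
       than a darkened one. */
    printf("%g %g %g %g\n", mixed.r, mixed.g, mixed.b, mixed.a);
    return 0;
}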
  25. I am trying to oscillate a 2D object about a point. The object has already been drawn; now I want to oscillate it by specifying the pivot point. How do I proceed? Please help me; it is for a mini project. I am using Code::Blocks and the C programming language along with OpenGL. I have attached the main .c file and the .cbp file below. (For reference, I have also attached the code for the object.) I have attached a .png file showing what my 2D image looks like. I want to oscillate that image about its top point (i.e., the cap tip).

#include <GL/glut.h>
#include <math.h>

#define PI 3.14159265358979324

void myinit()
{
    glClearColor(1.0, 1.0, 1.0, 0.0);
    glMatrixMode(GL_PROJECTION);
    gluOrtho2D(0.0, 200.0, 0.0, 150.0);
    //glutPostRedisplay();
}

void display(void)
{
    float r3 = 4.0;    // Radius of circle.
    float x3 = 80.0;   // X-coordinate of center of circle.
    float y3 = 67.0;   // Y-coordinate of center of circle.
    float r1 = 1.0;    // Radius of circle.
    float x1 = 77.0;   // X-coordinate of center of circle.
    float y1 = 73.0;   // Y-coordinate of center of circle.
    float r2 = 1.0;    // Radius of circle.
    float x2 = 83.0;   // X-coordinate of center of circle.
    float y2 = 73.0;   // Y-coordinate of center of circle.
    float R = 3.0;     // Radius of circle.
    float X = 80.0;    // X-coordinate of center of circle.
    float Y = 103.0;   // Y-coordinate of center of circle.
    int numVertices = 25;  // Number of vertices on circle.
    float t = 0;       // Angle parameter.

    glClear(GL_COLOR_BUFFER_BIT);
    //glColor3f(0.5, 0.5, 0.5);
    glLineWidth(2.0);
    glEnable(GL_LINE_STIPPLE);
    //glLineStipple(1, 0x00ff);

    // Draw the outline.
    glBegin(GL_LINES);
        glColor3f(0.92, 0.78, 0.62);
        glVertex2f(80.0, 60.0); glVertex2f(90.0, 65.0);
        glVertex2f(90.0, 65.0); glVertex2f(90.0, 75.0);
        glColor3f(0.137255, 0.556863, 0.137255);
        glVertex2f(90.0, 75.0); glVertex2f(80.0, 80.0);
        glVertex2f(80.0, 80.0); glVertex2f(70.0, 75.0);
        glColor3f(0.92, 0.78, 0.62);
        glVertex2f(70.0, 75.0); glVertex2f(70.0, 65.0);
        glVertex2f(70.0, 65.0); glVertex2f(80.0, 60.0);
        glColor3f(0.137255, 0.556863, 0.137255);
        glVertex2f(90.0, 75.0); glVertex2f(80.0, 100.0);
        glVertex2f(80.0, 100.0); glVertex2f(70.0, 75.0);
    glEnd();
    glFlush();

    glBegin(GL_POLYGON);
        glColor3f(0.1, 0.9, 0.0);
        for (int i = 0; i < numVertices; ++i)
        {
            //glColor3ub(rand() % 256, rand() % 256, rand() % 256);
            glVertex3f(X + R * cos(t), Y + R * sin(t), 0.0);
            t += 2 * PI / numVertices;
        }
    glEnd();
    glFlush();

    glBegin(GL_POLYGON);
        glColor3f(0.647059, 0.164706, 0.164706);
        for (int i = 0; i < numVertices; ++i)
        {
            glVertex3f(x1 + r1 * cos(t), y1 + r1 * sin(t), 0.0);
            t += 2 * PI / numVertices;
        }
    glEnd();
    glFlush();

    glBegin(GL_POLYGON);
        glColor3f(0.647059, 0.164706, 0.164706);
        for (int i = 0; i < numVertices; ++i)
        {
            glVertex3f(x2 + r2 * cos(t), y2 + r2 * sin(t), 0.0);
            t += 2 * PI / numVertices;
        }
    glEnd();
    glFlush();

    glBegin(GL_POLYGON);
        glColor3f(1.0, 0.7, 0.0);
        for (int i = 0; i < numVertices; ++i)
        {
            glVertex3f(x3 + r3 * cos(t), y3 + r3 * sin(t), 0.0);
            t += -PI / numVertices;
        }
    glEnd();
    glFlush();
}

int main(int argc, char *argv[])
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(1000, 1000);
    glutInitWindowPosition(0, 0);
    glutCreateWindow("joker");
    myinit();
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}

Thanks in advance, please help me. joker.cbp main.c
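A minimal sketch of one common approach (the pivot coordinates, the ±30° swing, and the drawJoker() wrapper are assumptions for illustration, not from the post): translate the pivot to the origin, rotate by a time-varying angle, translate back, and redraw each frame from an idle callback.

#include <GL/glut.h>
#include <math.h>

void drawJoker(void);  /* stands for the existing drawing code above */

static float angle = 0.0f;

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(80.0f, 103.0f, 0.0f);    /* 3. move the pivot back in place */
    glRotatef(angle, 0.0f, 0.0f, 1.0f);   /* 2. rotate about the origin      */
    glTranslatef(-80.0f, -103.0f, 0.0f);  /* 1. move the pivot to the origin */
    drawJoker();
    glFlush();
}

void idle(void)
{
    /* Sinusoidal oscillation driven by elapsed time, +/- 30 degrees. */
    angle = 30.0f * sinf(glutGet(GLUT_ELAPSED_TIME) * 0.002f);
    glutPostRedisplay();
}

/* In main, register both callbacks:
   glutDisplayFunc(display);
   glutIdleFunc(idle); */

The transforms read bottom-up because each call multiplies the current matrix: vertices are first moved so the cap tip sits at the origin, then rotated, then moved back.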