Search the Community

Showing results for tags 'OpenGL'.

Found 2933 results

  1. I've noticed that most post-processing tutorials use several shaders one after another: one for bloom, another for contrast, and so on. For example:

        postprocessing.quad.bind()
        // Effect 1
        effect1.shader.bind();
        postprocessing.texture.bind();
        postprocessing.quad.draw();
        postprocessing.texture.unbind();
        effect1.shader.unbind();
        // Effect 2
        effect2.shader.bind();
        // ...and so on
        postprocessing.quad.unbind()

     Is this good practice? How many shaders can I bind and unbind before I hit performance issues? I'm afraid I don't know what the good practices are in OpenGL/WebGL regarding binding and unbinding resources. I'm guessing that binding many shaders during post-processing is okay, since the scene has already been updated and I'm just working on a quad and a texture at that moment. Or is it more optimal to merge the shader code into larger chunks and bind less frequently? I'd love to use several shaders for post-processing, though. Another example of what I'm doing at the moment: 1) loop through the GameObjects, bind each one's Phong shader (sending the color, shadow, specular and normal samplers), then unbind everything; 2) at the post stage, bind the post-processor quad, loop through the different shader effects, binding each in turn, and so on. Thanks all!
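     For reference, a minimal sketch of the usual "ping-pong" arrangement this implies, where each effect reads the previous pass's output. All names here (fbo, colorTex, effects, quadVAO, sceneColorTex) are illustrative, not from the post above, and a valid GL context with the FBOs already created is assumed:

        #include <vector>

        // Two FBOs, each with one color attachment; alternate between them so
        // every effect samples the texture written by the previous pass.
        GLuint fbo[2], colorTex[2];            // assumed created elsewhere
        std::vector<GLuint> effects;           // one compiled program per effect

        GLuint src = sceneColorTex;            // output of the scene render
        int dst = 0;
        glBindVertexArray(quadVAO);            // the fullscreen quad stays bound throughout
        for (GLuint program : effects) {
            glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);
            glUseProgram(program);
            glBindTexture(GL_TEXTURE_2D, src); // read the previous result
            glDrawArrays(GL_TRIANGLES, 0, 6);
            src = colorTex[dst];               // this pass's output feeds the next one
            dst = 1 - dst;
        }
        // 'src' now holds the final image; draw or blit it to the default framebuffer.

     A handful of program binds per frame at this stage is generally cheap; the per-object state changes of the scene pass tend to dominate long before the post chain does.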
  2. I am reading this book: link. In the OpenGL Rendering Pipeline section there is a picture like this: link. My question is this: I don't really understand why it is necessary to turn pixel data into fragments and then fragments back into pixels. Could you please give me a source or a clear explanation of why this is necessary? Thank you so much.
  3. In order to improve the aesthetics, we looked for tips on post-processing filters for our engine and came up with the idea of using a VHS / analog post-processing filter. Because my teammate had already built OpenGL shaders in the past, and that's kind of his hobby, he gave me the link to Shadertoy. This site is amazing! There are a lot of shaders to use as a base we can build on, and it's also 100% web-based thanks to WebGL. This shader in particular caught my eye: it's really cool, and yet there are no VHS artifacts that really obstruct the players' view. So I did a little tinkering with jMonkeyEngine and got this result: I'm really happy with it. I could, however, reduce the blur amount: it can be annoying if it's too high...
  4. phil67rpg

    wait loop

        void collision(int v)
        {
            collision_bug_one(0.0f, 10.0f);
            glutPostRedisplay();
            glutTimerFunc(1000, collision, 0);
        }

        void coll_sprite()
        {
            if (board[0][0] == 1)
            {
                collision(0);
                flag[0][0] = 1;
            }
        }

        void erase_sprite()
        {
            if (flag[0][0] == 1)
            {
                glColor3f(0.0f, 0.0f, 0.0f);
                glBegin(GL_POLYGON);
                glVertex3f(0.0f, 10.0f, 0.0f);
                glVertex3f(0.0f, 9.0f, 0.0f);
                glVertex3f(1.0f, 9.0f, 0.0f);
                glVertex3f(1.0f, 10.0f, 0.0f);
                glEnd();
            }
        }

    I am using glutTimerFunc to wait a small amount of time to display a collision sprite before I black it out. Unfortunately, my code only blacks out the sprite without ever drawing the collision sprite. I have done a great deal of research on glutTimerFunc and animation.
  5. I'm using OpenGL with Eclipse + JOGL. My goal is to create movement for the camera and the player. I have a main class which creates some boxes in 3D and holds a PlayerAxis object, and a PlayerAxis class which holds the player's axes. If we want to move the camera, the main class calls cameraMoving (from PlayerAxis), which updates the player's axes. That works well. The problem starts when I move the camera around two axes: for example, if I move the camera right (around the y axis) and then down (around the x axis), at some point moving "front" no longer goes to the front. In order to move to the front I call player.playerMoving(0, 0, 1), and I learned that to keep the front movement correct I need to convert (0, 0, 1) into the player's axes and then add it. I think I'm not doing the conversion right. I would be glad for help! Here is part of my PlayerAxis class:

        // player coordinate axes
        float x[] = new float[3];
        float y[] = new float[3];
        float z[] = new float[3];

        public PlayerAxis(float move_step, float angle_move) {
            x[0] = 1;
            y[1] = 1;
            z[2] = -1;
            step = move_step;
            angle = angle_move;
            setTransMatrix();
        }

        public void cameraMoving(float angle_step, String axis) {
            float[] new_x = x;
            float[] new_y = y;
            float[] new_z = z;
            float alfa = angle_step * angle;
            switch (axis) {
                case "x":
                    new_z = addVectors(multScalar(z, COS(alfa)), multScalar(y, SIN(alfa)));
                    new_y = subVectors(multScalar(y, COS(alfa)), multScalar(z, SIN(alfa)));
                    break;
                case "y":
                    new_x = addVectors(multScalar(x, COS(alfa)), multScalar(z, SIN(alfa)));
                    new_z = subVectors(multScalar(z, COS(alfa)), multScalar(x, SIN(alfa)));
                    break;
                case "z":
                    new_x = addVectors(multScalar(x, COS(alfa)), multScalar(y, SIN(alfa)));
                    new_y = subVectors(multScalar(y, COS(alfa)), multScalar(x, SIN(alfa)));
            }
            x = new_x;
            y = new_y;
            z = new_z;
            normalization();
        }

        public void playerMoving(float x_move, float y_move, float z_move) {
            float[] move = new float[3];
            move[0] = x_move;
            move[1] = y_move;
            move[2] = z_move;
            setTransMatrix();
            float[] trans_move = transVector(move);
            position[0] = position[0] + step * trans_move[0];
            position[1] = position[1] + step * trans_move[1];
            position[2] = position[2] + step * trans_move[2];
        }

        public void setTransMatrix() {
            for (int i = 0; i < 3; i++) {
                coordiTrans[0][i] = x[i];
                coordiTrans[1][i] = y[i];
                coordiTrans[2][i] = z[i];
            }
        }

        public float[] transVector(float[] v) {
            return multiplyMatrixInVector(coordiTrans, v);
        }

     And in the main class I have this:

        public void keyPressed(KeyEvent e) {
            if (e.getKeyCode() == KeyEvent.VK_ESCAPE) {
                System.exit(0);
            // player move
            } else if (e.getKeyCode() == KeyEvent.VK_W) { // front
                // moveAmount[2] += -0.1f;
                player.playerMoving(0, 0, 1);
            } else if (e.getKeyCode() == KeyEvent.VK_S) { // back
                // moveAmount[2] += 0.1f;
                player.playerMoving(0, 0, -1);
            } else if (e.getKeyCode() == KeyEvent.VK_A) { // left
                // moveAmount[0] += -0.1f;
                player.playerMoving(-1, 0, 0);
            } else if (e.getKeyCode() == KeyEvent.VK_D) { // right
                // moveAmount[0] += 0.1f;
                player.playerMoving(1, 0, 0);
            } else if (e.getKeyCode() == KeyEvent.VK_E) {
                player.playerMoving(0, 1, 0);
            } else if (e.getKeyCode() == KeyEvent.VK_Q) {
                player.playerMoving(0, -1, 0);
            // camera move
            } else if (e.getKeyCode() == KeyEvent.VK_I) { // up
                player.cameraMoving(1, "x");
            } else if (e.getKeyCode() == KeyEvent.VK_K) { // down
                player.cameraMoving(-1, "x");
            } else if (e.getKeyCode() == KeyEvent.VK_L) { // right
                player.cameraMoving(-1, "y");
            } else if (e.getKeyCode() == KeyEvent.VK_J) { // left
                player.cameraMoving(1, "y");
            } else if (e.getKeyCode() == KeyEvent.VK_O) { // roll right
                player.cameraMoving(-1, "z");
            } else if (e.getKeyCode() == KeyEvent.VK_U) { // roll left
                player.cameraMoving(1, "z");
            }
        }

     Finally found it... I confused the transformation matrix rows and columns. Thanks anyway!
  6. So, I'm currently trying to implement an SSAO shader from THIS tutorial, and I'm running into a few issues. This SSAO method requires view-space positions and normals. My deferred renderer stores the normals in world space, so I had to do a conversion, and I reconstruct the position from the depth buffer. Something there goes horribly wrong (which probably has to do with the world-space to view-space transformation). (Here is the full shader source code if someone wants to take a look at it.) I suspect that the normals are the culprit:

        vec3 normal = (uNormalViewMatrix * vec4(normalize(texture2D(sNormals, vTexcoord).rgb), 1.0)).xyz;

     "sNormals" is a 2D texture which stores the normals in world space in an RGB FP16 buffer. Now, I can't use the camera's view matrix to transform the normals into view space, as the camera's position isn't at (0,0,0), which would skew the result. So what I did is create a new view matrix specifically for the normals, with the position at vec3(0,0,0):

        // "camera" is the camera which was used for rendering the normal buffer
        renderer.setUniform4m(ressources->shaderSSAO->getUniform("uNormalViewMatrix"),
            glmExt::createViewMatrix(glm::vec3(0, 0, 0), camera.getForward(), camera.getUp())
            // parameters are (position, forwardVector, upVector)
        );

     Though I have the feeling this is the wrong approach. Is this right, or is there a better/correct way of transforming a world-space normal into view space?
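     For comparison, a common alternative (not from the post above, and assuming the view matrix contains no non-uniform scale) is to drop the translation by taking only the rotation part of the existing view matrix. A minimal GLM sketch, with 'viewMatrix' as an assumed name for the camera's view matrix:

        #include <glm/glm.hpp>

        // Directions (normals) ignore translation, so the upper-left 3x3 of the
        // view matrix is enough; for a pure rotation it equals the inverse transpose.
        glm::mat3 normalView = glm::mat3(viewMatrix);
        // Upload 'normalView' and transform with a mat3 in the shader, or
        // equivalently use vec4(worldNormal, 0.0) so the translation never applies.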
  7. I'm having issues loading textures, as I'm clueless about how to handle and load images. Maybe I'm missing something, but for the past few days I've just googled a lot trying to find a solution. Well, there are two possible issues, I think: I'm using Kotlin Native (EAP) and an OpenGL wrapper / STB image, so I'm not quite sure where the issue is. Could someone with more experience give me some hints on how to solve this? The code is here. If I'm not mistaken, the workflow is pretty straightforward: stbi_load returns the pixels of the image (as a char array or byte array), and you need to pass those pixels directly to glTexImage2D. So I'm missing something here, it seems. Regards
  8. Hi, I'm trying to mix two textures using my own shader system, but I have a problem (I think) with the uniforms. Code: https://github.com/HawkDeath/shader/tree/test. To debug I use RenderDoc, but I did not get good results. The first attachment shows my result; the second attachment shows what it should be. PS: I'm basing this on this tutorial: https://learnopengl.com/Getting-started/Textures.
  9. So, I stumbled upon the topic of gamma correction. https://learnopengl.com/Advanced-Lighting/Gamma-Correction From what I've been able to gather (please correct me if I'm wrong):

     • Old CRT monitors couldn't display color linearly; that's why gamma correction was necessary.
     • Modern LCD/LED monitors don't have this issue anymore but apply gamma correction anyway. (For compatibility reasons? Can this be disabled?)
     • All games have to apply gamma correction? (Unsure about that.)
     • All textures stored in file formats (.png, for example) are essentially stored in the sRGB color space (what we see on the monitor is skewed due to gamma correction, so the pixel information is the same, but the perceived colors are wrong). This makes textures loaded into the GL_RGB format non-linear, so all lighting calculations on them are wrong.
     • You always have to use the GL_SRGB format to gamma-correct/linearize textures which are in sRGB format.

     Now, I'm kind of confused about how to proceed with applying gamma correction in OpenGL. First of all, how can I check whether my monitor is applying gamma correction? I noticed in my monitor settings that my color format is set to "RGB" (I can't modify it, though). I'm connected to my PC via an HDMI cable, and I'm using the full RGB range (0-255, not the 16 to ~240 range). What I tried is the gamma correction shader shown in the tutorial above, which looks essentially like this (it's a post-process shader applied at the end of the render pipeline):

        vec3 gammaCorrection(vec3 color) {
            // gamma correction
            color = pow(color, vec3(1.0 / 2.2));
            return color;
        }

        void main() {
            vec3 tex = texture2D(texture_diffuse, vTexcoord).rgb;
            vec3 color = gammaCorrection(tex);
            outputF = vec4(color, 1.0f);
        }

     The results look like this. No gamma correction: With gamma correction: The colors in the gamma-corrected image look really washed out, to the point that it's damn ugly, as if someone overlaid a half-transparent white texture. (I want the colors to pop.) Do I have to change the textures from GL_RGB to GL_SRGB in order to gamma-correct them, in addition to applying the post-process gamma correction shader? Do I have to do the same thing with all FBOs? Or is this washed-out look the intended behaviour?
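     For what it's worth, here is a minimal sketch of the standard sRGB setup (names illustrative, GL context assumed): color textures are declared as sRGB so sampling yields linear values, lighting runs in linear space, and gamma is encoded exactly once at the very end:

        // Load color (albedo) textures with an sRGB internal format; the GPU
        // linearizes them automatically when they are sampled.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        // Intermediate FBOs stay linear (e.g. GL_RGBA16F). Encode once at the end:
        // either pow(color, vec3(1.0/2.2)) in the final post-process shader, or
        glEnable(GL_FRAMEBUFFER_SRGB);   // let GL encode writes to an sRGB framebuffer

        // Note: encoding twice (leaving albedo textures as GL_RGB *and* applying
        // pow() at the end) over-brightens everything and produces exactly the
        // washed-out look described above.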
  10. Me again. I noticed a weird issue with color banding in my current PBR shader. After a bit of experimentation, I noticed that the Fresnel calculation seems to be the culprit. Here is what the color banding looks like (it's a bit dark, but noticeable): I also think that the Fresnel calculation is a bit off. This is the actual shader (with everything stripped away except the Fresnel calculation): It's worth noting that I'm writing those values into a 16-bit floating-point buffer, so the FBO precision shouldn't be the culprit. Is this a math-based precision error (especially in the pow() function)? Another thing I noticed: the Fresnel effect is supposed to look like this (picture shamelessly stolen from Google), however, no matter what I do, I never get this effect in my shader. (I tried all lighting conditions and material values.) Here is what a material with 50% roughness and 50% metallic value looks like: I noticed that switching the "VdotH" dot product to "LdotV" makes the effect somewhat work, but I've read conflicting information on the internet about whether that is even correct. Here is the complete shader: Does anyone have an idea why the banding occurs, and whether the Fresnel calculation is even correct?
  11. This week
      Hello everyone! This week, as always, I fixed a few bugs, did a few tweaks and implemented some small features. Here's a list of things I've done this week:
      • Tweaked enemy prices and levels
      • Added object picking
      • Added sounds and music
      • Tweaked audio gains for different sounds and music
      • Added the capability of selling towers
      • Added a new cursor
      • Implemented support for multiple enemy waves
      • Added a new enemy: Tank
      • Added a new turret type: Rocket turret
      Screenshots:

      Important decision
      Yesterday I came to an important realization while making my game. I realized that I need to build a fully working prototype of the game, figure out all the mechanics, and implement all the tower and enemy types first, and only then continue with the minor things and polishing. I also realized that I should spend less time on music and 3D models. All these things will be done, but not now. I need to make my game fun first, and only then worry about all the assets. I don't want to repeat the same mistake I made with my previous games, where I spent too much time on graphics, sounds and performance optimizations and too little time on the actual gameplay mechanics.

      Problems
      There's one problem which I currently face, and it has to do with the "Tesla Coil" tower. As of now it slows down the enemies in its range, but there's a problem with that. It arises when you put the tower at the very end of the maze: when you do that, it not only slows down the enemies in its range, it also slows down the enemies ahead of them. This is because of the obstacle avoidance, which prevents the enemies from passing through each other like ghosts. The only solution I can think of right now is to remove this type of tower and replace it with something else.

      Next week
      This coming week I'll start thinking about and implementing all the different tower and enemy types. I might also make some very basic 3D models to use as placeholders. That's all for now, thanks for reading. Twitter: https://twitter.com/extrabitgames Facebook: https://www.facebook.com/extrabitgames Website: http://extrabitgames.com
  12. Hi, I am trying to program shadow volumes, and I stumbled upon an artifact for which I cannot find the cause. I generate the shadow volumes using a geometry shader with reversed extrusion (projecting the light-facing triangles to infinity) and write the stencil buffer according to z-fail. The base of my code is the "lighting" chapter from learnopengl.com, where I extended the shader class to include a geometry shader. I also modified the "lighting shader" to draw the ambient pass when "pass" is set to true and the diffuse/specular pass when set to false. For easier testing I added a few controls to switch the shadow volumes' color rendering on and off and to change the cubes' positions, made the light number controllable, and changed the diffuse pass to render green for easier visualization of my problem. The first picture shows the rendered scene for one point light, with all cubes drawn but only the front cube's shadow volume created (intentional). Here, everything is rendered as it should be, with all lit areas green and all areas inside the shadow volume black (with the volume's sides blended over). If I now turn on the shadow volumes for all the other cubes, we get a bit of a mess, but it's also obvious that some areas that were in shadow before are now erroneously lit (for example, the first cube to the right of the originally shadow-volumed cube). From my testing, the erroneously lit areas are the ones where more than one shadow volume marks the area as shadowed. To check whether a wrong stencil buffer value causes this problem, I changed the stencil function for the diffuse pass to only render where the stencil is equal to 2. As I repeated this approach with different values for the stencil function, I found that if I set the value to 1, or any other odd value, the lit and shadowed areas are inverted, and if I set it to 0, or any other even value, I get the results shown above. This led me to believe that the stencil buffer values may be clamped to [0,1], which would also explain the artifact, because being twice in shadow would then result in no shadow at all. But from what I found on the internet, and from what I tested with

        GLint stencilSize = 0;
        glGetFramebufferAttachmentParameteriv(GL_DRAW_FRAMEBUFFER, GL_STENCIL,
            GL_FRAMEBUFFER_ATTACHMENT_STENCIL_SIZE, &stencilSize);

      my stencil size is 8 bits, which should allow values within [0,255]. Does anyone know what might be the cause of this artifact, or of the confusing results with the other stencil functions?

        // [the following code includes all used gl* functions; other parts are partially excluded for readability]

        // glfw: initialize and configure
        glfwInit();
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 4);
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

        // glfw window creation
        GLFWwindow* window = glfwCreateWindow(SCR_WIDTH, SCR_HEIGHT, "LearnOpenGL", NULL, NULL);
        if (window == NULL)
        {
            cout << "Failed to create GLFW window" << endl;
            glfwTerminate();
            return -1;
        }
        glfwMakeContextCurrent(window);
        glfwSetFramebufferSizeCallback(window, framebuffer_size_callback);
        glfwSetCursorPosCallback(window, mouse_callback);
        glfwSetScrollCallback(window, scroll_callback);

        // tell GLFW to capture our mouse
        glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);

        // glad: load all OpenGL function pointers
        if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress))
        {
            cout << "Failed to initialize GLAD" << endl;
            return -1;
        }

        // ==== window and functions are set up ====

        // configure global opengl state
        glEnable(GL_DEPTH_TEST);
        glEnable(GL_CULL_FACE);

        // build and compile our shader program [...]
        // set up vertex data (and buffer(s)) and configure vertex attributes [...]
        // shader configuration [...]

        // render loop
        while (!glfwWindowShouldClose(window))
        {
            // input processing and fps calculation [...]

            // render
            glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glDepthMask(GL_TRUE);   // enable depth writing
            glDepthFunc(GL_LEQUAL); // avoid z-fighting

            // draw ambient component into color and depth buffer
            view = camera.GetViewMatrix();
            projection = glm::perspective(glm::radians(camera.Zoom), (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 100.0f);
            // setting up lighting shader for ambient pass [...]

            // render the cubes
            glBindVertexArray(cubeVAO);
            for (unsigned int i = 0; i < 10; i++)
            {
                // position cube [...]
                glDrawArrays(GL_TRIANGLES, 0, 36);
            }

            glDepthMask(GL_FALSE);       // disable depth writing
            glEnable(GL_BLEND);
            glBlendFunc(GL_ONE, GL_ONE); // additive blending
            glEnable(GL_STENCIL_TEST);
            // setting up shadowShader and lightingShader [...]

            for (int light = 0; light < lightsused; light++)
            {
                glDepthFunc(GL_LESS);
                glClear(GL_STENCIL_BUFFER_BIT);

                // configure stencil ops for front- and backface to write according to z-fail
                glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP); // -1 for front-facing
                glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR_WRAP, GL_KEEP);  // +1 for back-facing
                glStencilFunc(GL_ALWAYS, 0, GL_TRUE); // stencil test always passes
                if (hidevolumes)
                    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // disable writing to the color buffer
                glDisable(GL_CULL_FACE);
                glEnable(GL_DEPTH_CLAMP); // necessary to render SVs into infinity

                // draw SV
                shadowShader.use();
                shadowShader.setInt("lightnr", light);
                int nr;
                if (onecaster)
                    nr = 1;
                else
                    nr = 10;
                for (int i = 0; i < nr; i++)
                {
                    // position cube [...]
                    glDrawArrays(GL_TRIANGLES, 0, 36);
                }

                glDisable(GL_DEPTH_CLAMP);
                glEnable(GL_CULL_FACE);
                glStencilFunc(GL_EQUAL, 0, GL_TRUE); // stencil test passes for ==0, so only for non-shadowed areas
                glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP); // keep stencil values for illumination
                glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE); // enable writing to the color buffer
                glDepthFunc(GL_LEQUAL); // avoid z-fighting

                // draw diffuse and specular pass
                lightingShader.use();
                lightingShader.setInt("lightnr", light);

                // render the cubes
                for (unsigned int i = 0; i < 10; i++)
                {
                    // position cube [...]
                    glDrawArrays(GL_TRIANGLES, 0, 36);
                }
            }

            glDisable(GL_BLEND);
            glDepthMask(GL_TRUE); // enable depth writing
            glDisable(GL_STENCIL_TEST);

            // also draw the lamp object(s) [...]

            // glfw: swap buffers and poll IO events (keys pressed/released, mouse moved etc.)
            glfwSwapBuffers(window);
            glfwPollEvents();
        }

        // optional: de-allocate all resources once they've outlived their purpose
        glDeleteVertexArrays(1, &cubeVAO);
        glDeleteVertexArrays(1, &lightVAO);
        glDeleteBuffers(1, &VBO);

        // glfw: terminate, clearing all previously allocated GLFW resources
        glfwTerminate();
        return 0;
  13. Hi, I am teaching myself graphics and OO programming and came upon this: my Window class creates an input handler instance, the GLFW user pointer is redirected to that object, and methods there do the input handling for keyboard and mouse. That works. Now, as part of the input handling, I have an orbiting camera that is controlled by mouse movement. GLFW_CURSOR_DISABLED is set, as proposed in the GLFW manual. The manual says that in this case the cursor is automagically reset to the window's center. But if I don't reset it manually with glfwSetCursorPos( center ), the mouse values seem to add up until the scene locks up. Here are some code snippets, mostly standard from tutorials:

        // EventHandler
        m_eventHandler = new EventHandler( this, glm::vec3( 0.0f, 5.0f, 0.0f ), glm::vec3( 0.0f, 1.0f, 0.0f ) );
        glfwSetWindowUserPointer( m_window, m_eventHandler );
        m_eventHandler->setCallbacks();

      Creation of the input handler during window creation. For now, the camera is part of the input handler, hence the two vectors (position, up vector). In the future I'll factor that functionality out into its own class that inherits from the event handler.

        void EventHandler::setCallbacks() {
            glfwSetCursorPosCallback( m_window->getWindow(), cursorPosCallback );
            glfwSetKeyCallback( m_window->getWindow(), keyCallback );
            glfwSetScrollCallback( m_window->getWindow(), scrollCallback );
            glfwSetMouseButtonCallback( m_window->getWindow(), mouseButtonCallback );
        }

      Setting the callbacks in the input handler.

        // static
        void EventHandler::cursorPosCallback( GLFWwindow *w, double x, double y ) {
            EventHandler *c = reinterpret_cast<EventHandler *>( glfwGetWindowUserPointer( w ) );
            c->onMouseMove( (float)x, (float)y );
        }

      Example of the cursor-pos callback redirection to a class method.

        // virtual
        void EventHandler::onMouseMove( float x, float y ) {
            if( x != 0 || y != 0 ) {
                // @todo cursor should be set automatically, according to doc
                if( m_window->isCursorDisabled() )
                    glfwSetCursorPos( m_window->getWindow(), m_center.x, m_center.y );

                // switch up/down because it's more intuitive
                m_yaw += m_mouseSensitivity * ( m_center.x - x );
                m_pitch += m_mouseSensitivity * ( m_center.y - y );

                // to avoid locking
                if( m_pitch > 89.0f ) m_pitch = 89.0f;
                if( m_pitch < -89.0f ) m_pitch = -89.0f;

                // update Front, Right and Up vectors
                updateCameraVectors();
            }
        } // onMouseMove()

      The mouse movement processor method. The interesting part is the manual reset of the mouse position that made the thing work...

        // straight-line distance between the camera and the look-at point, here (0,0,0)
        float distance = glm::length( m_target - m_position );

        // calculate the camera position using the distance and angles
        float camX = distance * -std::sin( glm::radians( m_yaw ) ) * std::cos( glm::radians( m_pitch ) );
        float camY = distance * -std::sin( glm::radians( m_pitch ) );
        float camZ = -distance * std::cos( glm::radians( m_yaw ) ) * std::cos( glm::radians( m_pitch ) );

        // set the camera position and perspective vectors
        m_position = glm::vec3( camX, camY, camZ );
        m_front = glm::vec3( 0.0, 0.0, 0.0 ) - m_position;
        m_up = m_worldUp;
        m_right = glm::normalize( glm::cross( m_front, m_worldUp ) );
        glm::lookAt( m_position, m_front, m_up );

      The orbiting camera vector calculation in updateCameraVectors(). Now, to my understanding, the GLFW manual explicitly states that if the cursor is disabled then it is reset to the center, but my code only works if it is reset manually, so I fear I am doing something wrong. It is not world-moving (only if there is a world to render :-)), but I am curious what I am missing. I am not a professional programmer, just a hobbyist, so it may well be that I got something principally wrong :-) Thanks for any hints!
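      For comparison, a minimal delta-based variant (not the poster's code): with GLFW_CURSOR_DISABLED, GLFW actually provides an unbounded virtual cursor position rather than re-centering it, so the usual pattern is to track the previous position and work with per-event deltas. Names mirror the post's style but are illustrative:

        #include <algorithm>

        double lastX = 0.0, lastY = 0.0;
        bool firstMouse = true;

        void onMouseMove( double x, double y ) {
            if( firstMouse ) { lastX = x; lastY = y; firstMouse = false; }
            float dx = float( x - lastX );
            float dy = float( lastY - y ); // inverted: screen y grows downward
            lastX = x;                     // remember for the next event
            lastY = y;
            m_yaw   += m_mouseSensitivity * dx;
            m_pitch += m_mouseSensitivity * dy;
            m_pitch = std::clamp( m_pitch, -89.0f, 89.0f ); // avoid flipping over the pole
            updateCameraVectors();
        }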
  14. Hi, currently I'm helping my friend with his master's thesis by creating a 3D airplane simulation, which he will use as a presentation (just part of the whole). I've got a Boeing 737-800 model, which is read from the OBJ file format and rendered directly by my simulator program. The model has many moving parts; they move independently, based on defined balls. Here's the left wing from the top, to depict one element and its movement logic: each moving object (here it is aleiron_left) has two balls, _left and _right, which define the horizontal line to rotate around. I created the movement and rotation logic just using glTranslatef and glRotatef; each object is moved independently within its own push and pop matrices. Once the project runs, I'm able to move the aileron based on a parameter I store in some variable, and it moves all right: the first image is before the move, and the second one after. I added balls for direct reference with the 3ds Max model. The balls are also moved using glTranslatef and glRotatef with the same move logic, so they are "glued" to their parent part. The problem is, I need to calculate their world coordinates after the move, so that in the next step, outside the render method, I can obtain the direct x, y, z coordinates without again using glTranslatef and glRotatef. I tried to multiply some matrices and so on, but without result, and currently I've got no idea how to do it. A fragment of the code looks like this:

        void DrawModel(Model.IModel model)
        {
            GL.glLoadIdentity();
            GL.glPushMatrix();
            this.SetSceneByCamera();
            foreach (Model.IModelPart part in model.ModelParts)
                this.DrawPart(part);
            GL.glPopMatrix();
        }

        void DrawPart(Model.IModelPart part)
        {
            bool draw = true;
            GL.glPushMatrix();

            #region get part children
            Model.IModelPart child_top = part.GetChild("_top");
            Model.IModelPart child_left = part.GetChild("_left");
            Model.IModelPart child_right = part.GetChild("_right");
            Model.IModelPart child_bottom = part.GetChild("_bottom");
            #endregion get part children

            #region by part
            switch (part.PartTBase)
            {
                case Model.ModelPart.PartTypeBase.ALEIRON:
                {
                    #region aleiron
                    float moveBy = 0.0f;
                    bool selected = false;
                    switch (part.PartT)
                    {
                        case Model.ModelPart.PartType.ALEIRON_L:
                        {
                            selected = true;
                            moveBy = this.GetFromValDic(part);
                            moveBy *= -1;
                        }
                        break;
                    }
                    if (selected && child_left != null && child_right != null)
                    {
                        GL.glTranslatef(child_right.GetCentralPoint.X, child_right.GetCentralPoint.Y, child_right.GetCentralPoint.Z);
                        GL.glRotatef(moveBy,
                            child_left.GetCentralPoint.X - child_right.GetCentralPoint.X,
                            child_left.GetCentralPoint.Y - child_right.GetCentralPoint.Y,
                            child_left.GetCentralPoint.Z - child_right.GetCentralPoint.Z);
                        GL.glTranslatef(-child_right.GetCentralPoint.X, -child_right.GetCentralPoint.Y, -child_right.GetCentralPoint.Z);
                    }
                    #endregion aleiron
                }
                break;
            }
            #endregion by part

            #region draw part
            if (!part.Visible)
                draw = false;
            if (draw)
            {
                GL.glBegin(GL.GL_TRIANGLES);
                for (int i = 0; i < part.ModelPoints.Count; i += 3)
                {
                    if (part.ModelPoints[i].GetTexCoord != null)
                        GL.glTexCoord3d((double)part.ModelPoints[i].GetTexCoord.X, (double)part.ModelPoints[i].GetTexCoord.Y, (double)part.ModelPoints[i].GetTexCoord.Z);
                    GL.glVertex3f(part.ModelPoints[i].GetPoint.X, part.ModelPoints[i].GetPoint.Y, part.ModelPoints[i].GetPoint.Z);
                    if (part.ModelPoints[i + 1].GetTexCoord != null)
                        GL.glTexCoord3d((double)part.ModelPoints[i + 1].GetTexCoord.X, (double)part.ModelPoints[i + 1].GetTexCoord.Y, (double)part.ModelPoints[i + 1].GetTexCoord.Z);
                    GL.glVertex3f(part.ModelPoints[i + 1].GetPoint.X, part.ModelPoints[i + 1].GetPoint.Y, part.ModelPoints[i + 1].GetPoint.Z);
                    if (part.ModelPoints[i + 2].GetTexCoord != null)
                        GL.glTexCoord3d((double)part.ModelPoints[i + 2].GetTexCoord.X, (double)part.ModelPoints[i + 2].GetTexCoord.Y, (double)part.ModelPoints[i + 2].GetTexCoord.Z);
                    GL.glVertex3f(part.ModelPoints[i + 2].GetPoint.X, part.ModelPoints[i + 2].GetPoint.Y, part.ModelPoints[i + 2].GetPoint.Z);
                }
                GL.glEnd();
            }
            #endregion draw part

            GL.glPopMatrix();
        }

      What happens, step by step: prepare the scene and set the model camera from the current variables (the user can move the camera using keyboard and mouse in all directions). Take the model, which contains all model parts, and draw them one by one; each part has its own type and subtype set by part name. Draw the current part independently; get its child_left and child_right (those are the balls by name). Get the moveBy value, which is an angle. Every part has its own variable which is modified using the keyboard, like "press A, increase variable x by 5.0f", which in turn rotates the aileron up by 5.0f. Move and rotate the object by its children, then draw the part's triangles (the part has its own list of triangles read from the OBJ file). When I compute the blue ball that is glued to its parent, I do exactly the same, so it moves in the same manner:

        case Model.ModelPart.PartTypeBase.FLOW:
        {
            float moveBy = 0.0f;
            Model.IModelPart parent = part.GetParent();
            if (parent != null)
            {
                moveBy = this.GetFromValDic(parent);
                if (parent.Reverse)
                    moveBy *= -1.0f;
                child_left = parent.GetChild("_left");
                child_right = parent.GetChild("_right");
            }
            if (child_left != null && child_right != null)
            {
                GL.glGetFloatv(GL.GL_MODELVIEW_MATRIX, part.BeforeMove);
                GL.glTranslatef(child_right.GetCentralPoint.X, child_right.GetCentralPoint.Y, child_right.GetCentralPoint.Z);
                GL.glRotatef(moveBy,
                    child_left.GetCentralPoint.X - child_right.GetCentralPoint.X,
                    child_left.GetCentralPoint.Y - child_right.GetCentralPoint.Y,
                    child_left.GetCentralPoint.Z - child_right.GetCentralPoint.Z);
                GL.glTranslatef(-child_right.GetCentralPoint.X, -child_right.GetCentralPoint.Y, -child_right.GetCentralPoint.Z);
            }
        }
        break;

      Before the move I read the modelview matrix using GL.glGetFloatv(GL.GL_MODELVIEW_MATRIX, matrix), which gives me these values (red contains the current model camera): the blue ball has its own initial central point computed, and currently it is (x, y, z): 339.6048, 15.811758, -166.209473. As far as I'm concerned, the point's world coordinates never change; only its matrix is moved and rotated around some point. So the final question is: how do I calculate the same point's world coordinates after the object is moved and rotated? PS: to visualize the problem, I created a new small box, which I placed at the center of the blue ball after it is moved upwards. The position of the box was set by eye: I just placed it a couple of times, and after the x-th try I managed to place it correctly using similar world coordinates. The first image is from the left of the box, and the second one from behind. The box's world coordinates are (x, y, z): 340.745117f, 30.0f, -157.6322f, so relative to the original central point of the blue ball it is a bit higher and closer to the center of the wing. Simply put, I need to: take the original central point of the blue ball (339.6048, 15.811758, -166.209473); after the movement and rotation of the blue ball is finished, apply some algorithm (like take something from the modelview matrix, multiply by something); and finally get the point 340.745117f, 30.0f, -157.6322f (but now computed), which is the central point in world coordinates after the movement. PS2: Sorry for the long post; I tried to explain exactly what I'm dealing with. Thank you in advance for your help.
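      A minimal GLM/C++ sketch of one way to compute this (hypothetical names, not the poster's C# wrapper): rebuild the same translate-rotate-translate transform on the CPU and apply it to the ball's original central point:

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // pivotRight/pivotLeft stand for the _right and _left ball centres;
        // moveBy is the same angle that was passed to glRotatef.
        glm::mat4 M(1.0f);
        M = glm::translate(M, pivotRight);                        // move back to the pivot...
        M = glm::rotate(M, glm::radians(moveBy),
                        glm::normalize(pivotLeft - pivotRight));  // ...rotate around the hinge axis...
        M = glm::translate(M, -pivotRight);                       // ...after moving the pivot to the origin
        glm::vec3 before(339.6048f, 15.811758f, -166.209473f);    // original centre
        glm::vec3 after = glm::vec3(M * glm::vec4(before, 1.0f)); // centre after the move
        // Because the exact matrices used for rendering are rebuilt here, no
        // glGetFloatv readback of the modelview matrix is needed.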
  15. Hello fellow programmers! For a couple of days now I've been building my own planet renderer, just to see how floating-point precision issues can be tackled. As you can probably imagine, I quickly ran into FPP issues when trying to render absurdly large planets. I have used the classical quadtree LOD approach; I generated my grids with 33 vertices (x: -1 to 1, y: -1 to 1, z = 0). Each grid is managed by a TerrainNode class that, depending on the side it represents (top, bottom, left, right, front, back), creates a special rotation-translation matrix that moves and rotates the grid away from the origin, so that when I finally normalize all the vertices in my vertex shader I get a perfect sphere.

        T = glm::translate(glm::dmat4(1.0), glm::dvec3(0.0, 0.0, 1.0));
        R = glm::rotate(glm::dmat4(1.0), glm::radians(180.0), glm::dvec3(1.0, 0.0, 0.0));
        sides[0] = new TerrainNode(1.0, radius, T * R, glm::dvec2(0.0, 0.0), new TerrainTile(1.0, SIDE_FRONT));

        T = glm::translate(glm::dmat4(1.0), glm::dvec3(0.0, 0.0, -1.0));
        R = glm::rotate(glm::dmat4(1.0), glm::radians(0.0), glm::dvec3(1.0, 0.0, 0.0));
        sides[1] = new TerrainNode(1.0, radius, R * T, glm::dvec2(0.0, 0.0), new TerrainTile(1.0, SIDE_BACK));

        // So on and so forth for the rest of the sides.

      As you can see, for the front-side grid I rotate it 180 degrees to make it face the camera and push it towards the eye; the back side is handled almost the same way, only I don't need to rotate it, just push it away from the eye. The same technique is applied to the rest of the faces (obviously, with the proper rotations/translations). The matrix that results from the multiplication of R and T (in that particular order) is sent to my vertex shader as `r_Grid':

        // spherify
        vec3 V = normalize((r_Grid * vec4(r_Vertex, 1.0)).xyz);
        gl_Position = r_ModelViewProjection * vec4(V, 1.0);

      The `r_ModelViewProjection' matrix is generated on the CPU in this manner:

        // Not the most efficient way, but it works.
        glm::dmat4 Camera::getMatrix()
        {
            // Create the view matrix. Roll, Yaw and Pitch are all quaternions.
            glm::dmat4 View = glm::toMat4(Roll) * glm::toMat4(Pitch) * glm::toMat4(Yaw);
            // The model matrix is generated by translating in the opposite direction of the camera.
            glm::dmat4 Model = glm::translate(glm::dmat4(1.0), -Position);
            // Projection = glm::perspective(fovY, aspect, zNear, zFar);
            // zNear = 0.1, zFar = 1.0995116e12
            return Projection * View * Model;
        }

      I managed to get rid of z-fighting by using a technique called the Logarithmic Depth Buffer, described in this article; it works amazingly well, with no visible z-fighting at all. Each frame I render each node by sending the generated matrices this way:

        // Set the r_ModelViewProjection uniform.
        // Sneak in mRadiusMatrix, a matrix that contains the radius of my planet.
        Shader::setUniform(0, Camera::getInstance()->getMatrix() * mRadiusMatrix);
        // Set the r_Grid matrix uniform created earlier.
        Shader::setUniform(1, r_Grid);
        grid->render();

      My planet's radius is around 6,400,000.0 units, absurdly large, but that's what I really want to achieve. Everything works well: the nodes split and merge as you'd expect. However, whenever I get close to the surface of the planet, the rounding errors start to kick in, giving me that lovely staircase effect. I've read that if I rendered each grid relative to the camera, I could get better precision on the surface, effectively getting rid of those rounding errors. My question is: how can I achieve this relative-to-camera rendering in my scenario? I know that I have to do most of the work on the CPU in double precision, and that's exactly what I'm doing: I only use double on the CPU side, where I also do most of the matrix multiplications. As you can see from my vertex shader, I only do the usual r_ModelViewProjection * (some vertex coords). Thank you for your suggestions!
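      For reference, a minimal sketch of camera-relative rendering under the setup described above. The idea is assumed standard practice rather than taken from the post, and 'nodeTransform' is an illustrative name: keep everything in double on the CPU, subtract the camera position from each node's translation, and build the view matrix from rotation only, so the camera sits at the origin of every draw:

        // Per node, all in double precision:
        glm::dmat4 model = nodeTransform;                 // node's world transform
        model[3] -= glm::dvec4(Position, 0.0);            // shift the translation column by -eye
        glm::dmat4 viewRot = glm::toMat4(Roll) * glm::toMat4(Pitch) * glm::toMat4(Yaw); // no translation
        glm::mat4 mvp = glm::mat4(Projection * viewRot * model); // cast to float as the very last step
        // Vertices near the viewer now have small coordinates, so the GPU's 32-bit
        // math spends its precision at the surface instead of 6,400,000 units away.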
  16. Hello again, everyone! Today is going to be a very short blog entry, because all I wanted to show you is the new soundtracks I created. As of now I only have 2 soundtracks, but I plan on making a total of 12, or at least 10. The genre I chose is psychedelic trance. The main reason is that it's fast-paced, and I think it will suit my game pretty well. I also added shooting and exploding sounds, which make the game just a bit more interesting. You can have a listen to the soundtracks here: That's all for today, thanks for reading! Twitter: https://twitter.com/extrabitgames Facebook: https://www.facebook.com/extrabitgames Website: http://extrabitgames.com
  17. Subscribe to our subreddit to get all the updates from the team! Recently I've been tackling more organic low-poly terrains. The default way of creating indices for a 3D grid geometry is the following (credits): A way to create simple differences that makes the geometry slightly more complicated, and thus more organic, is to vertically swap the indices of each adjacent quad. In other words, each quad adjacent to a centered quad is its vertical mirror (see the sketch below). Finally, by not sharing the vertices, and hence creating two triangles per quad, this is the result with a coherent noise generator (joise): This is called flat shading.
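      A hypothetical sketch of grid index generation with an alternating (checkerboard) diagonal, one way to obtain the mirrored-adjacent-quad layout described above; W, H and the chosen winding are illustrative:

        #include <vector>

        std::vector<unsigned> indices;
        for (int zz = 0; zz < H - 1; ++zz) {
            for (int xx = 0; xx < W - 1; ++xx) {
                unsigned tl = zz * W + xx,       tr = tl + 1; // top-left, top-right
                unsigned bl = (zz + 1) * W + xx, br = bl + 1; // bottom-left, bottom-right
                if ((xx + zz) % 2 == 0) {
                    // diagonal runs top-left to bottom-right
                    indices.insert(indices.end(), { tl, bl, br,  tl, br, tr });
                } else {
                    // mirrored quad: diagonal runs top-right to bottom-left
                    indices.insert(indices.end(), { tr, tl, bl,  tr, bl, br });
                }
            }
        }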
  18. Good day, everyone! This week I've been working on various small fixes, improvements and features. Here's an incomplete list of the things I've done: As you can see, there are quite a few commits related to performance improvements, such as added frustum culling, improved memory usage, pathfinding improvements, and so on. The one main thing missing from this list is the addition of the new tower type, which for now I just call "the slower", and which, as the name suggests, slows down the enemies. If any of you have played the original Red Alert, you might notice that for this particular tower I took inspiration from the so-called Tesla Coil. The only difference is that instead of electrocuting its enemies, it just slows them down. Here's how it looks in Blender: And in game: Also, in the above image you can see how the UI looks. It's still very primitive and will be changed later on, but for now it does the trick: it lets you select a tower and tell the game when you are ready for the level to begin. Another thing I've been working on is music. The funny thing is the genre I chose: it's psytrance. I thought that the fast tempo and monotonic bassline would suit this type of game. After all, it is KIND of an action game, and the way I see it, when it's finished there will be lots of explosions going on. Anyway, as mentioned in the list above, I also implemented placeholder main and options menus along with a splash screen. This is how the menus look now (keep in mind they will be changed later on): Yes, very simple, I know. But they get the job done, and it's still very early in development. One more thing I added is money: you can now purchase towers, and you get money for each killed enemy. For now there's no functionality for selling towers, but it will be added soon. So that's all for this week. Next week I think I will be doing 3D modelling and implementing new tower types along with other minor features. Twitter: https://twitter.com/extrabitgames Facebook: https://www.facebook.com/extrabitgames Website: http://extrabitgames.com
  19. thecheeselover

    Marching cubes

    Subscribe to our subreddit to get all the updates from the team! I have had difficulties recently with the Marching Cubes algorithm, mainly because the principal source of information on the subject was kind of vague and incomplete to me. I need a lot of precision to understand something complicated. Anyhow, after a lot of struggle, I was able to write in Java a less hardcoded program than the given source, because who doesn't like the cuteness of Java compared to the mean-looking C++? Oh, and by hardcoding, I mean something like this:

        cubeindex = 0;
        if (grid.val[0] < isolevel) cubeindex |= 1;
        if (grid.val[1] < isolevel) cubeindex |= 2;
        if (grid.val[2] < isolevel) cubeindex |= 4;
        if (grid.val[3] < isolevel) cubeindex |= 8;
        if (grid.val[4] < isolevel) cubeindex |= 16;
        if (grid.val[5] < isolevel) cubeindex |= 32;
        if (grid.val[6] < isolevel) cubeindex |= 64;
        if (grid.val[7] < isolevel) cubeindex |= 128;

    By no means am I saying that my code is better or more performant. It's actually ugly. However, I absolutely loathe hardcoding. Here's the result with a scalar field generated using the coherent noise library joise:
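    For reference, the same cube-index computation can be written as a loop, since corner i simply contributes bit i; a minimal sketch in the style of the quoted C++:

        int cubeindex = 0;
        for (int i = 0; i < 8; ++i) {
            if (grid.val[i] < isolevel) {
                cubeindex |= 1 << i;   // corner i sets bit i of the edge-table index
            }
        }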
  20. thecheeselover

    Zone generation

    Subscribe to our subreddit to get all the updates from the team! I have integrated the zone separation with my implementation of the Marching Cubes algorithm, and now I have been working on zone generation. A level is separated in the following way:

      • Shrink the zone map to exactly fit an integer number of Chunk2Ds, which are 32² m².
      • For each Chunk2D, analyse all zones inside its boundaries and determine all possible heights for Chunk3Ds, which are 32³ m³. Imagine this as a three-dimensional array backed by a hash map: we are trying to figure out all the Chunk3D keys for a given Chunk2D.
      • Create and generate a Chunk3D for each height found.
      • Execute the Marching Cubes algorithm to assemble the geometry of each Chunk3D.

    In our game, we want levels to look and feel like a certain world. The first world we are creating is the savanna. Even though each Chunk3D is generated using 3D noise, I made a noise module that maps 3D noise into 2D, to be able to apply 2D perturbation to the terrain. I also tried some funkier procedural noises: an arch! The important thing with procedural generation is to have a certain level of control over it. With the new zone division system, I have achieved a minimum on that path for my game.
  21. Subscribe to our subreddit to get all the updates from the team!

      Idea
      We wanted units in our game, including the player, to be able to pick up items. But how can the player know which item will be picked up when he executes that action? Traditionally, video game developers and designers do this with an outline shader. However, in our game we thought that was too "normal", too cartoonish and not enough A E S T H E T I C.

      Solution
      The solution was to use Fresnel optics to simulate a cooler outline. The Fresnel effect is used (approximated, actually, because it is extremely complex) in shaders such as reflection shaders for specular colors. In real life, this visual effect occurs on surfaces that reflect and refract light. An object that refracts light lets a fraction of the incoming light pass through its body. A good example of this is water.

      Examples and Tutorials
      Here are examples and tutorials for the Fresnel effect in computer graphics programming: a simple demonstration of what the Fresnel effect is; a tutorial with more profound explanations of the subject; a simple get-to-the-point tutorial using Unity.

      How to Do It
      Here's how I did it with the jMonkeyEngine, which is a Java 3D game engine. Firstly, you need to create a material definition that describes the shaders and their inputs.

      Material Definition

        MaterialDef Fresnel Outline {
            MaterialParameters {
                Color FresnelOutlineColor
                Float FresnelOutlineBias
                Float FresnelOutlineScale
                Float FresnelOutlinePower
            }
            Technique {
                VertexShader GLSL100: shader/vertex/fresnel_outline/fresnel_outline_vertex_shader.vert
                FragmentShader GLSL100: shader/fragment/fresnel_outline/fresnel_outline_fragment_shader.frag
                WorldParameters {
                    WorldViewProjectionMatrix
                    WorldMatrix
                    CameraPosition
                }
            }
        }

      Material Parameters
      As you can see, there are 4 uniforms that we need to supply to the shaders for them to function properly:
      • FresnelOutlineColor - the color of the Fresnel effect. Usually an environment map deals with this.
      • FresnelOutlineBias - acts like a diffuse color for the specular component.
      • FresnelOutlineScale - how far the Fresnel effect affects the model. The smaller the angle between I and N (the surface's normal), the less Fresnel effect there is, and the more scale is needed for that surface to be lit by it.
      • FresnelOutlinePower - exponentially increases the Fresnel effect's color but decreases the scale.
      We will need to set them either in the material or in the code. You'll see about that later in the article.

      Technique
      The technique describes which shaders to execute and their engine uniforms. There's no need to use a recent version of OpenGL/GLSL for this shader; the first GLSL version will do. For the Fresnel shader, we need the following uniforms to be supplied by the game engine:
      • WorldViewProjectionMatrix - the MVP matrix, needed to compute each vertex's position
      • WorldMatrix - the world matrix, needed to compute the position and normal of each vertex in world coordinates
      • CameraPosition - the camera position, needed to calculate the I (incident) vector

      Material
      The material uses a material definition and then applies render states and parameters to it. It is instantiable in code.

        Material Fresnel Outline : material_definition/fresnel_outline/fresnel_outline_material_definition.j3md {
            MaterialParameters {
                FresnelOutlineColor : 1.0 0.0 0.0 1.0
                FresnelOutlineBias : 0.17
                FresnelOutlineScale : 2.0
                FresnelOutlinePower : 1.0
            }
        }

      As you can see, we can set the uniforms right here instead of doing so in the code, which saves us programmers from dealing with design components.

      Vertex Shader

        #import "Common/ShaderLib/GLSLCompat.glsllib"

        uniform vec3 g_CameraPosition;
        uniform mat4 g_WorldMatrix;
        uniform mat4 g_WorldViewProjectionMatrix;

        uniform float m_FresnelOutlineBias;
        uniform float m_FresnelOutlineScale;
        uniform float m_FresnelOutlinePower;

        attribute vec3 inPosition;
        attribute vec3 inNormal;

        varying float fresnelOutlineR;

        void main() {
            vec3 worldPosition = (g_WorldMatrix * vec4(inPosition, 1.0)).xyz;
            vec4 worldNormal = normalize(g_WorldMatrix * vec4(inNormal, 0.0));
            vec3 fresnelI = normalize(worldPosition - g_CameraPosition);
            fresnelOutlineR = m_FresnelOutlineBias + m_FresnelOutlineScale * pow(1.0 + dot(fresnelI, worldNormal.xyz), m_FresnelOutlinePower);
            gl_Position = g_WorldViewProjectionMatrix * vec4(inPosition, 1.0);
        }

      This is how each vertex position, normal, color, texture coordinate [...] is transferred into the vertex shader by the jMonkeyEngine: attribute vec3 inPosition. The same procedure applies to the g_ and m_ variables: g_ variables are defined by the engine, whilst m_ variables are defined by the material definition. Here's what the Fresnel outline shader does on the vertex side: compute the world position and normal; compute the eye-to-vertex direction; compute the Fresnel effect R variable (description reference).

      Fragment Shader

        #import "Common/ShaderLib/GLSLCompat.glsllib"

        uniform vec4 m_FresnelOutlineColor;

        varying float fresnelOutlineR;

        void main() {
            gl_FragColor = mix(vec4(0.0, 0.0, 0.0, 1.0), m_FresnelOutlineColor, fresnelOutlineR);
        }

      All that's left to do is a linear interpolation between black and the desired color, with R as the interpolation value between the two. Because this is a simple tutorial, it only shows how to compute the Fresnel outline specular color.

      Result
      And with a bias of 1.0. Teal! Tested on a pickable item.
  22. About the Game
      As you have probably guessed from the title, the game I'm working on is a Tower Defense type of game. At this point I'm still not sure which theme it's going to have, but I think I will go with a military-based theme. The game itself is inspired by Red Alert, Robo Defense and Kingdom Rush. For the development side of things I'm using Java/Kotlin (mostly Kotlin) + OpenGL with LWJGL, in the IntelliJ IDEA editor.

      3D Models
      In the last couple of weeks I've been learning how to make 3D models using Blender. After a few days of modelling I got the hang of the basics and could model a few simple trees, turrets and a car, which I then imported into the game "engine". Here are a few screenshots of the models I created during those days:

      Pathfinding
      I have programmed a pathfinding management system which uses a flood-fill pathfinding algorithm to calculate where the enemies have to go. The way it works is pretty simple. You start by splitting your game map into square nodes and then generating a gradient map which tells how far away each node is from the target node. To do this, you first start at the target node and assign its value to 0; then, for all the neighboring nodes, you increment their value by 1, or whatever number you want, as long as it's a positive number. If a neighboring node is blocked, assign its value to something very large, like 999999. To get the path from the start node to the target node, you start at the start node and select the neighboring node with the lowest value; then for that selected node you do the same, until you reach the node with the value of 0, which means you have reached the target node. (A sketch of this is shown at the end of this post.) This is how the gradient map looks in my game: here you can see the numbers at the center of each node, which represent its value. The cars move towards the surrounding nodes which have the lowest values until they reach 0; that's when they stop. Okay, so why did I use this method instead of the famous A* algorithm? First of all, it is a heck of a lot faster: instead of calculating a path each frame for each entity, you just generate the gradient map once and update it every time an object is placed on the map. The drawback of this method is that all the entities can only go to a single target destination; if you want multiple target destinations, you have to recalculate the gradient map with different target nodes.

      Gameplay
      There's not much gameplay at this stage of development. As of now, all you can do is place turrets and watch them shoot the enemies, and that's pretty much it. Nonetheless, this is the gameplay footage:

      Twitter: https://twitter.com/extrabitgames Facebook: https://www.facebook.com/extrabitgames Website: http://extrabitgames.com
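      Here is the gradient-map sketch referenced above: a minimal breadth-first flood fill from the target, with names (w, h, blocked, buildGradient) illustrative rather than taken from the game's code:

        #include <queue>
        #include <vector>

        std::vector<int> buildGradient(int w, int h, int targetX, int targetY,
                                       const std::vector<bool>& blocked) {
            const int BIG = 999999;                    // blocked / unreached
            std::vector<int> cost(size_t(w) * h, BIG);
            std::queue<int> open;
            cost[targetY * w + targetX] = 0;           // the target costs 0
            open.push(targetY * w + targetX);
            const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };
            while (!open.empty()) {
                int cur = open.front(); open.pop();
                int cx = cur % w, cy = cur / w;
                for (int d = 0; d < 4; ++d) {
                    int nx = cx + dx[d], ny = cy + dy[d];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    int n = ny * w + nx;
                    if (blocked[n] || cost[n] <= cost[cur] + 1) continue;
                    cost[n] = cost[cur] + 1;           // one step farther from the target
                    open.push(n);
                }
            }
            return cost;   // each enemy simply steps onto its cheapest neighbour
        }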
  23. What have I done this week
      This week I fixed a few bugs and finally implemented a fully working obstacle avoidance system, which makes my pathfinding and collision/obstacle avoidance system complete. Some minor things I did include:
      • Fixed a bug where text rendering caused lighting issues
      • Added calculation of a game entity's axis-aligned bounding box from the data contained in the .obj file
      • Added AABB-to-AABB collision detection and response
      • Added ray-to-AABB intersection detection (see the sketch at the end of this post)
      • Made the map size resizable

      My own solution to the obstacle avoidance problem
      I had a really hard time finding information about an easy way to do obstacle avoidance the way I wanted. So instead I worked for a few days and came up with my own solution, which works pretty well. I think I kind of reinvented the wheel, and someone might have a better approach to this problem than I do. But anyway, it's already done and it works the way I wanted it to, which is all I care about now.

      What I wanted from my obstacle avoidance/collision system
      For the obstacle avoidance system I wanted a few things:
      • Stop the vehicle if it gets too close to another vehicle
      • If vehicles are about to collide (traveling towards each other at an angle less than the threshold angle), make them steer away from each other
      • Never let two vehicles overlap

      I tried and I failed
      Before I explain how it works, I will describe the things I tried that didn't work so well. The first thing I tried was to create a separate AABB in front of the enemy's car and check if it collides with any other car; if it does, stop the vehicle. I thought this would keep the vehicles from getting too close to each other, and hence from overlapping. Well, that didn't work, because when two vehicles collide, they both stop and get stuck. To fix this, I added an if statement which checks whether the vehicles are colliding, and if so, makes only one of the two vehicles stop while the other continues driving. But this made them overlap some of the time, which I didn't want. So after thinking for a while, I decided to add AABB collision response, so that when cars hit each other they don't overlap but get pushed back. I did that, and it worked pretty well, BUT if the vehicles are travelling towards each other there's no way of knowing which way to turn to avoid the collision. So I decided to scrap this AABB-in-front-of-the-vehicle approach and try casting rays.

      The approach which worked
      My last and final try was to use rays instead of an AABB to check for collisions with other entities. I say entities this time because I also want to check whether a ray intersects the towers; that way we know which way the vehicle can turn to avoid collisions. The way I do it is pretty simple: the vehicle casts a number of rays from its center towards the front of the car, which check for intersections with the towers' and enemies' bounding boxes. Then I have a function which does some magic and calculates (from the ray intersection information) which way the vehicle should steer. This steering is only done if two vehicles are facing each other and moving towards each other. If they are not, I check whether the ray distance is smaller than the threshold value, and if it is, I just stop the vehicle. There are a few other tiny hacks and tricks I used to polish the system, but this is mainly how it works.

      Next week
      I still haven't decided what I am going to do this coming week, but I think I will add different tower types, add a GUI, and make it so that enemies spawn inside a building or in a hidden area from which they come onto the game's map. You can see the state of the game here:

      Twitter: https://twitter.com/extrabitgames Facebook: https://www.facebook.com/extrabitgames Website: http://extrabitgames.com
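      For reference, here is the ray-to-AABB sketch mentioned in the list above: a standard slab-method test, one common way to implement such a check (names and conventions illustrative, not the game's actual code):

        #include <algorithm>
        #include <glm/glm.hpp>

        // Returns true and the hit distance t if the ray (origin, normalized dir)
        // intersects the box [boxMin, boxMax].
        bool rayVsAabb(const glm::vec3& origin, const glm::vec3& dir,
                       const glm::vec3& boxMin, const glm::vec3& boxMax, float& t) {
            glm::vec3 inv = 1.0f / dir;                 // IEEE infinities handle axis-parallel rays
            glm::vec3 t0 = (boxMin - origin) * inv;
            glm::vec3 t1 = (boxMax - origin) * inv;
            glm::vec3 tmin = glm::min(t0, t1);
            glm::vec3 tmax = glm::max(t0, t1);
            float tNear = std::max({ tmin.x, tmin.y, tmin.z });
            float tFar  = std::min({ tmax.x, tmax.y, tmax.z });
            if (tNear > tFar || tFar < 0.0f) return false; // miss, or box entirely behind the ray
            t = (tNear >= 0.0f) ? tNear : tFar;            // tFar when the origin is inside the box
            return true;
        }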
  24. Hi all, I'm trying to generate MIP-maps for a 2D-array texture, but only for a limited number of array layers and MIP levels; for instance, generating only the first 3 MIP-maps of a single array layer of a large 2D array. After experimenting with glBlitFramebuffer to generate the MIP-maps manually (but still with some sort of hardware acceleration), I ended up with glTextureView, which already works with the limited number of array layers (I can also verify the result in RenderDoc). However, glGenerateMipmap (or glGenerateTextureMipmap) always generates the entire MIP chain for the specified array layer. Thus, the <numlevels> parameter of glTextureView seems to be ignored in the MIP-map generation process. I also tried using glTexParameteri(..., GL_TEXTURE_MAX_LEVEL, 3), but this has the same result. Can anyone explain to me how to solve this? Here is an example of how I do it:

        void GenerateSubMips(
            GLuint texID, GLenum texTarget, GLenum internalFormat,
            GLuint baseMipLevel, GLuint numMipLevels,
            GLuint baseArrayLayer, GLuint numArrayLayers)
        {
            GLuint texViewID = 0;
            glGenTextures(1, &texViewID);
            glTextureView(
                texViewID, texTarget, texID, internalFormat,
                baseMipLevel, numMipLevels,
                baseArrayLayer, numArrayLayers
            );
            glGenerateTextureMipmap(texViewID);
            glDeleteTextures(1, &texViewID);
        }

        GenerateSubMips(
            myTex, GL_TEXTURE_2D_ARRAY, GL_RGBA8,
            0, 3, // only the first 3 MIP-maps
            4, 1  // only one array layer, with index 4
        );

      Thanks and kind regards, Lukas
  25. I'm trying to implement PBR in my simple OpenGL renderer, using multiple lighting passes with one pass per light:
      1. First pass: depth
      2. Second pass: ambient
      3. Passes [3..n]: one for each light in the scene
      I'm using the blending function glBlendFunc(GL_ONE, GL_ONE) for passes [3..n], and I'm doing gamma correction at the end of each fragment shader. But I still have a problem with the output image: it just looks noisy, especially when I'm using texture maps. Is there anything wrong with these steps, or is there any improvement to this process?
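      For reference, a minimal sketch of the per-light additive loop described above; the names 'lights', 'setLightUniforms' and 'drawScene' are illustrative, not from the post. One common pitfall worth checking (not confirmed to be the cause here): because pow() is non-linear, gamma-encoding inside every light pass makes the additive sum incorrect. Accumulation should happen in linear space, with gamma applied once at the very end:

        glDepthFunc(GL_LEQUAL);
        glDepthMask(GL_FALSE);                // depth was filled by the depth pre-pass
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);          // sum each light's *linear* radiance
        for (const Light& light : lights) {
            setLightUniforms(lightingShader, light);
            drawScene(lightingShader);        // shader outputs linear color, no pow() here
        }
        glDisable(GL_BLEND);
        glDepthMask(GL_TRUE);
        // Final fullscreen pass: tone-map / gamma-encode the accumulated buffer once.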