Simmie

Member
  • Content Count

    16
  • Joined

  • Last visited

Community Reputation

571 Good

About Simmie

  • Rank
    Member
  1. Simmie

    DevLog #2: Skeletal Animations

    Hey there, this weekend I focused on animations. Animations are especially important for characters in games as they deform the mesh and make the character "move". There are basically two different ways of animating a 3D model: morph targets and skeletal animations. The former stores the vertex positions (and normals) multiple times for different poses and interpolates them at runtime. This technique is very easy to implement; however, storing each vertex multiple times increases the size of the mesh drastically. This is something I wanted to avoid, as download time is very critical in browser games.

    When using skeletal animations, a skeleton is created for the mesh and each vertex gets assigned to one or multiple bones. If a bone is then moved, all corresponding vertices move accordingly. Thus, you only need to store the transformations for a few bones instead of the positions of all vertices. Typically, the bones are organised in a tree-like structure and the transformations are stored relative to their parent, so when the parent bone moves, all child bones move accordingly (e.g. moving your arm also moves your hand). This makes the approach very flexible, and the skeleton can also be used to implement other techniques like ragdoll physics or inverse kinematics. However, before rendering, the relative transformations of all bones must be converted to global transformations. This task is usually very CPU intensive as it involves a lot of interpolation and matrix multiplications. As JavaScript is not really known for its blazing performance, this is something I definitely wanted to avoid. So instead of relative transformations, I store a 3x4 matrix for each bone that contains the global transformation for a specific keyframe. I lose some flexibility and some accuracy when blending between keyframes, but it simplifies the process a lot. I can store those matrices in a texture so that I don't have to re-upload them every frame, and linear texture filtering gives me virtually free blending between keyframes (see the sketch at the end of this post).

    You can see the results (and some hilarious outtakes) in the video below. However, I still need to implement animation blending and the possibility to add different animations. The former enables smooth transitions between different animations and the latter allows combining animations (e.g. so that the character can punch while running). Arbitrarily complex adding and blending situations can be handled completely on the GPU by using the static keyframes to render the final bone transformations to a render target texture that is then used to deform the mesh.

    During the rest of the week, I also worked on some other stuff: I wrote an entity/component system, added (de-)serialization functionality and added support for textures and materials. Aside from the textures, none of that is directly visible, but it makes my life a lot easier.
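    To illustrate the keyframe blending mentioned above: with the 3x4 bone matrices laid out in a texture, a linearly filtered fetch between two keyframe texels is equivalent to the component-wise blend sketched below. This is a conceptual sketch only, written with GLM types for readability; the actual engine code is TypeScript/WebGL and the function name is made up.

        #include <cmath>
        #include <cstddef>
        #include <vector>
        #include <glm/glm.hpp>

        // All keyframes of a single bone, each a 3x4 matrix holding the bone's
        // global (world-space) transformation at that keyframe.
        glm::mat3x4 sampleBoneMatrix(const std::vector<glm::mat3x4>& keyframes,
                                     float animationTime, float keyframesPerSecond)
        {
            float frame = animationTime * keyframesPerSecond;
            std::size_t i0 = static_cast<std::size_t>(frame) % keyframes.size();
            std::size_t i1 = (i0 + 1) % keyframes.size();   // next keyframe (looping)
            float t = frame - std::floor(frame);            // filter weight in [0,1]

            // Component-wise blend, which is exactly what linear texture filtering
            // computes between two neighbouring texels.
            return keyframes[i0] * (1.0f - t) + keyframes[i1] * t;
        }

    Blending matrices component-wise is not a proper rotation interpolation, which is where the small accuracy loss mentioned above comes from.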
  2. Simmie

    DevLog #1

    Hello there, I recently discovered the game TANX (if you don't know it, check it out, it is really fun) and I really liked the format of the game. Like the more popular game Agar.io, you simply visit a website and start playing, which, I think, works very well for casual games. It has an extremely low barrier to entry as you don't need to download or install it first. In fact, I find this concept so interesting that I want to try to create a small game like this myself and start to experiment with the new technology.

    I am fairly new to the whole web development thing, so I first had to figure out how the development workflow in such an environment works. As the programming language I chose TypeScript over JavaScript, because all the additional code completion makes life a lot easier and the Visual Studio Code editor has excellent support for it. However, this means that there is an additional compilation step before you can run the game in the browser. At first this was extremely frustrating, but after I familiarised myself with some tools like webpack and started using them, the workflow became a real breeze. In my current setup I can immediately see the result of the changes I made while coding. This enables extremely fast iteration cycles and lets me test many different things really fast.

    So for now, I spent most of my time setting up my development environment and creating a small framework for the game. However, I also started programming the server using Node.js. This was very straightforward, and even without any experience it didn't take me long to get some basic communication going. Here is a short video of the current state: Everything is very basic right now, just a sample model I found on OpenGameArt and a simple untextured terrain generated with Perlin noise. However, I still wanted to document the process, mostly for myself to look back on, but maybe some of you find it interesting as well. For the next step I want to add some animations. See you then!
  3. I think the problem is that the near-far ranges of the different projection matrices overlap and the same depth value maps to different camera space positions for each projection, so the depth buffer will not work correctly. If you can afford to render the water twice, you can do the following:

     1. Set z-near to 1000.0 and z-far to 10000.0
     2. Draw water only
     3. Clear the z-buffer
     4. Set z-near to 0.1 and z-far to 1000.0
     5. Draw the ship and the water

     You probably have to experiment with the values for the near and far ranges. What is important is that the z-near value of the first projection is the z-far value of the second, so you draw the background first and then the foreground, without an overlap. A rough sketch of the two passes follows below.
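     A minimal sketch of those two passes, assuming plain OpenGL with GLM; uploadProjection(), renderWater() and renderShip() are placeholders for your own code, not real API calls.

        #include <GL/glew.h>
        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Placeholders for your own code:
        void uploadProjection(const glm::mat4& projection);
        void renderWater();
        void renderShip();

        void renderFrame(float aspectRatio)
        {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            // Pass 1: only the distant water, using the far depth range
            glm::mat4 projFar = glm::perspective(glm::radians(60.0f), aspectRatio, 1000.0f, 10000.0f);
            uploadProjection(projFar);
            renderWater();

            // Fresh depth buffer for the foreground pass
            glClear(GL_DEPTH_BUFFER_BIT);

            // Pass 2: ship and nearby water, using the near depth range
            glm::mat4 projNear = glm::perspective(glm::radians(60.0f), aspectRatio, 0.1f, 1000.0f);
            uploadProjection(projNear);
            renderShip();
            renderWater();
        }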
  4. Make sure that no translucent object writes to the z-buffer, but from the code you posted above this should be the case. How do you set the alpha value? Can you show the exact code and the fragment shader?
  5. You need both, because if you disable the z-test, the order in which you render the objects is important. AFAIK the only way to avoid the sorting is either to approximate translucent surfaces using premultiplied alpha or to use alpha to coverage. If you search for those terms you should find decent articles/tutorials about both, and you can check if one of them meets your requirements. Edit: Keep in mind that you probably do not have to sort the objects in every frame if the camera position does not change much. Also, you could do the sorting in a separate thread so it does not block the rendering thread.
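     For completeness, alpha to coverage is just a render state toggle in OpenGL and only has an effect with a multisampled framebuffer; the draw call below is a placeholder for your own code.

        glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);   // requires a multisampled framebuffer
        drawTransparentObjects();                // placeholder: your own draw calls
        glDisable(GL_SAMPLE_ALPHA_TO_COVERAGE);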
  6. In which order are you drawing the objects, and what is the alpha value of the box? It looks like you draw the transparent purple object first with z-writes disabled and then the opaque box with z-writes enabled. In this case the box would simply be drawn on top of the purple object, like in your pictures. The correct way would be:

     1. Render all opaque objects with z-writes and z-reads enabled.
     2. Render all transparent objects from back to front with z-reads enabled, but z-writes disabled.

     You need to sort the transparent objects from back to front, because the z-buffer does not handle transparent objects very well. If two transparent objects intersect, you need to split them in order to render them properly. A sketch of the two phases follows below.
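     In OpenGL terms the two phases could look roughly like this; Object, distanceToCamera and draw() are placeholders for your own scene structures.

        #include <algorithm>
        #include <vector>
        #include <GL/glew.h>

        struct Object { float distanceToCamera; void draw() const; /* ... */ };

        void renderScene(std::vector<Object>& opaqueObjects, std::vector<Object>& transparentObjects)
        {
            // 1. Opaque objects: depth reads and writes enabled, no blending
            glEnable(GL_DEPTH_TEST);
            glDepthMask(GL_TRUE);
            glDisable(GL_BLEND);
            for (const Object& o : opaqueObjects)
                o.draw();

            // 2. Transparent objects: sorted back to front, depth reads on, depth writes off
            std::sort(transparentObjects.begin(), transparentObjects.end(),
                      [](const Object& a, const Object& b) { return a.distanceToCamera > b.distanceToCamera; });
            glDepthMask(GL_FALSE);
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            for (const Object& o : transparentObjects)
                o.draw();

            glDepthMask(GL_TRUE); // restore depth writes for the next frame
        }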
  7. It seems to me that you are currently overwriting the position in each interpolation step. How about saving the old and target positions and calculating the current position by interpolating between those two? So instead of

     position = lerp(position, targetPosition, delta)

     do

     position = lerp(oldPosition, targetPosition, t)

     where t is a counter that is reset on every position update and that is increased according to the elapsed time (make sure that it is clamped to [0,1]). When a new position update arrives, it will become the new target position and the previous targetPosition will become the oldPosition.
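     A small sketch of that scheme, using GLM's mix(); the struct, the names and the update interval are just illustrative.

        #include <glm/glm.hpp>

        struct RemoteEntity
        {
            glm::vec3 oldPosition    = glm::vec3(0.0f);
            glm::vec3 targetPosition = glm::vec3(0.0f);
            glm::vec3 position       = glm::vec3(0.0f);
            float     t              = 1.0f;   // interpolation progress in [0,1]
            float     updateInterval = 0.1f;   // expected seconds between position updates
        };

        // Call this whenever a new position arrives from the network.
        void onPositionUpdate(RemoteEntity& e, const glm::vec3& newTarget)
        {
            e.oldPosition    = e.targetPosition; // the previous target becomes the old position
            e.targetPosition = newTarget;
            e.t              = 0.0f;             // restart the interpolation
        }

        // Call this once per frame.
        void update(RemoteEntity& e, float deltaTime)
        {
            e.t        = glm::clamp(e.t + deltaTime / e.updateInterval, 0.0f, 1.0f);
            e.position = glm::mix(e.oldPosition, e.targetPosition, e.t);
        }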
  8. I might be a little late on this, but if the problem still exists, you could try to also read the depth value of the pixel. Together with the pixel coordinates and the inverse projection and view matrix you are then able to calculate the 3D world coordinates of the pixel.
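     With GLM, that reconstruction could look roughly like this, assuming a depth value in [0,1] as read from the default OpenGL depth buffer and pixel coordinates with the origin in the lower-left corner; the function name is just illustrative.

        #include <glm/glm.hpp>

        glm::vec3 worldPositionFromPixel(float pixelX, float pixelY, float depth,
                                         float viewportWidth, float viewportHeight,
                                         const glm::mat4& view, const glm::mat4& projection)
        {
            // Pixel coordinates and depth to normalized device coordinates in [-1,1]
            glm::vec4 ndc((pixelX / viewportWidth)  * 2.0f - 1.0f,
                          (pixelY / viewportHeight) * 2.0f - 1.0f,
                          depth * 2.0f - 1.0f,
                          1.0f);

            // Undo projection and view, then apply the perspective divide
            glm::vec4 world = glm::inverse(projection * view) * ndc;
            return glm::vec3(world) / world.w;
        }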
  9. Simmie

    RockReaper

    Thank you. I really like the gameplay of Liero, but it is also much more complex. Maybe someday, if we somehow find the time for it...
  10. The C++ standard does not define exactly how many bits are used for the integer types (such as short, int, long etc.). For example, a long on Windows has a size of 32 bits, but on Linux it usually has 64 bits. So, to be sure you get a signed or unsigned integer with exactly X bits on all platforms and configurations, you can use the types intX_t and uintX_t. They are defined in the <cstdint> header, within the namespace std. See also: http://en.cppreference.com/w/cpp/header/cstdint
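     For example (C++11 or later):

        #include <cstdint>

        std::uint8_t  flags     = 0;             // exactly 8 bits on every platform
        std::int32_t  position  = -42;           // exactly 32 bits
        std::uint64_t bigNumber = 1ull << 40;    // exactly 64 bits

        static_assert(sizeof(std::int32_t) == 4, "int32_t is always 4 bytes");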
  11. Simmie

    RockReaper

    Thank you two :)   @Navyman I've actually never played it before, as I was a little too young at that time. I only played a later clone called Cannon Hill :D
  12. Simmie

    RockReaper

    Hi everyone, it has been quite some time since my last (and only) journal entry. In the meantime, two friends and I had to develop a game for a game programming course at our university. For that, we decided to create a clone of the old artillery game Scorched Earth. We call it RockReaper! Here is a short video showing the gameplay. The game was created from scratch using C++ and OpenGL 4.0. We used the SDL library, GLM and a thin C++ wrapper for OpenGL provided by our computer graphics chair. Our key features are:

    • Randomly generated, fully destructible terrain
    • Day/night cycle
    • Weather simulation with wind and rain (not shown in the video)
    • Pixel-perfect collision detection
    • GPU particles via transform feedback
    • Dynamic lighting
    • 5 different weapons
    • Turn-based multiplayer (2-5 players)

    In the end, we made it into the top 12 and we now have to give a last presentation and submit a trailer for our game. We have no experience with creating trailers, so if anyone has tips or resources on that topic, please let me know.
  13. It is not only an approximation. n dot l is basically the cosine of the angle between the normal of the surface and the light ray, and it is the factor by which the area gets smaller when projected onto the plane perpendicular to the light ray. Here is a quick sketch: [attachment=28534:ndotl.png] where A' = n dot l * A (A and A' denote the area before and after projection). So it basically takes into account how the light rays are spread out when they hit the surface at a greater angle. I'm not really sure what cure you are talking about, but have you verified that the normal vector is correct and normalized? Even if it's normalized in the vertex shader, that does not mean it will be normalized in the pixel shader, due to interpolation.
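     In code, the diffuse term then looks like this; sketched with GLM here, but the same re-normalization and clamping apply in the pixel shader.

        #include <glm/glm.hpp>

        glm::vec3 diffuse(const glm::vec3& normal, const glm::vec3& toLight,
                          const glm::vec3& lightColor, const glm::vec3& albedo)
        {
            // Re-normalize, since interpolated vectors are generally not unit length,
            // and clamp so that surfaces facing away from the light receive nothing.
            float nDotL = glm::max(glm::dot(glm::normalize(normal), glm::normalize(toLight)), 0.0f);
            return albedo * lightColor * nDotL;
        }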
  14. Only one program can be installed at a time, so glUseProgram(B) will override the call to glUseProgram(A) and glDrawArrays will only use program B for rendering. What effect are you trying to achieve? Edit: ninja'd
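     If the intention was to render the geometry once with each program, you need one draw call per program, e.g. (programA, programB and vertexCount stand in for your own variables):

        glUseProgram(programA);
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);

        glUseProgram(programB);
        glDrawArrays(GL_TRIANGLES, 0, vertexCount);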
  15. Hey guys, recently I had to work on a project for my lecture on global illumination. My partner and I decided to implement a variant of the progressive photon mapping technique described by Hachisuka et al. [1]: instead of assigning each photon a certain color, we chose to map each photon to a certain wavelength. This allowed us to simulate caustics more accurately and achieve effects like splitting a beam of white light into its spectrum of colors. So instead of using the refractive index that is often given for transparent materials, we used Cauchy's equation[
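     For reference, the usual two-term form of Cauchy's equation gives the refractive index as a function of wavelength; a minimal sketch, where the default coefficients are only placeholder values roughly in the range of a crown glass:

        // n(lambda) = A + B / lambda^2, with lambda given in micrometres.
        double refractiveIndex(double lambdaMicrometres, double A = 1.52, double B = 0.0042)
        {
            return A + B / (lambdaMicrometres * lambdaMicrometres);
        }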