About noizex

  1. What you describe does not make much sense to me - I have no idea what's going on there, so I'll address the problem in the topic title: if your AI actor is kinematic (I assume it is), then no collisions are resolved between that object and the world - you're expected to handle this on your own. Besides, AI shouldn't really walk into obstacles in the first place, and for anything that may fall on or hit them, you need to resolve these collisions and responses yourself.
  2. There are many ways - you already mentioned a lot of techniques that will make terrain look better. For the above technique, Google "Terrain Detail Textures" - but since you mention the GPU Gems book, doesn't it already explain that? Here are a few posts about it: https://docs.unrealengine.com/udk/Three/TerrainAdvancedTextures.html#Detail Textures: Adding near-distance detail https://blogs.msdn.microsoft.com/shawnhar/2008/11/03/detail-textures/ http://vterrain.org/Textures/detail.html Most of these materials are quite dated, but I don't think there have been any revolutionary advances in the topic of terrain texturing - in case there are, I'm all ears! I would like to hear about some modern techniques that apply to a wide range of terrain, not only simple heightmaps (because nowadays there is much more, like surface-extracted voxel terrain, which allows for caves and overhangs but requires different texturing approaches than a simple heightmap). I think one of the techniques that really adds a lot is more advanced texture blending that uses each texture type's heightmap to calculate the blend weights.
  3. +1 for Scratch, it's a perfect starting place. I wish they improved the user experience a bit, since it has become slightly dated, but I don't think there is anything so visual and so easy to grasp. This is a game I created with my kid (4 years old back then) when he was at home sick and bored, in under one hour: https://scratch.mit.edu/projects/116277609/ He's almost 6 now and loves Scratch even more - he can read the instructions and create programs as far as using a loop inside a loop and switching context. We also use a Scratch-like product that has cardboard blocks for designing the application, and then an iPad or similar device to scan the code and display the results of the game actions you programmed. It's called Scottie Go!, but I'm not sure it's available in other countries, as it seems to be a Polish product. Here is the page: https://scottiego.pl/en/. It's also intuitive enough, and probably created with parents who have no prior programming experience in mind too. This is how it looks: you have some tasks in a top-down platform game, and you control a robot, and later other characters, to accomplish those tasks through the program you create. Modding is also a good idea, but it requires some prior programming experience (just setting up the development environment may be a daunting task), so a better place to start would be playing with Scratch, learning basic programming concepts, and after some practice trying modding.
  4. C++ Combat System

    A system like this most likely takes into account mouse movement velocity, distance & direction. There is not much to it, really - and Life is Feudal has quite a stiff feel to it; I much preferred how it worked in Mount & Blade. Get a model that has the animations you're interested in and prototype a system - it's not that hard if you know what you want to achieve. With more directions come problems like: how do you actually indicate the direction with a mouse stroke? You'd have to add something more, like tilting the character left or right using some keys (Q/E?) and taking that into account when calculating the swing direction, and so on. But first and foremost, you should come here with a specific problem, because a system like this is unique enough that there most likely are no materials we can point you to, and nobody will write snippets of code for you either (because what you described is too generic).
  5. Use GLFW or something, pretty please...
  6. Node Graph UI for Accidental Noise Library

    One of the really few use cases where such a visual node-graph approach genuinely shines, even for people who can actually write code instead of connecting nodes. Great job!
  7. I'm not sure why you want to send events - you just manipulate this dynamic mesh and render it each frame "as is": if it changes, it will be rendered as changed. Why should the rendering system update its state based on some mesh that changed? It just gets a task to render the mesh and does it; it doesn't care whether the mesh never changes or changed just last frame. Or am I missing something here?
  8. OpenGL why unbind VAO

      That would work too, I suppose. Thanks for the suggestion!
  9. OpenGL why unbind VAO

    I think it's not about binding the VAO redundantly (that should be prevented by tracking the currently bound VAO), but about resetting the VAO to zero because some other operation might override loosely bound VAO state - in my case it was binding an index buffer that I wanted to update, completely unrelated to any VAO that pointed to it. Also, binding 0 to reset the vertex array binding is allowed: "glBindVertexArray binds the vertex array object with name array. array is the name of a vertex array object previously returned from a call to glGenVertexArrays, or zero to break the existing vertex array object binding." It's rendering without a VAO that's wrong, I think, but having zero bound as the VAO after you finish rendering is, in my opinion, a good safety measure. Please correct me if I got something wrong here.
  10. OpenGL why unbind VAO

    Because if you keep that last VAO bound over to the next frame, and then do some operations on buffers using glBindBuffer() (but without binding the appropriate VAO, because you don't have to), you'll be overwriting your currently bound VAO's state, which may cause nasty and hard-to-track disasters :)   Edit: Though now that I think about it, I'm not so sure. I think I had an issue with overriding VAO state when I kept a VAO bound between frames, so I started to reset it. At some point I was convinced that only glVertexAttribPointer() and friends affect VAO state, and those require glBindBuffer() to work. But I _think_ I had cases where glBindBuffer() (because I was uploading new data) somehow messed up my other VAO's state. So I can't give you a 100% guarantee that what I say is correct, but I think it's good practice to know that a VAO is bound when it's needed and reset to 0 when you're done. Edit2: I made a small mistake - it was the GL_ELEMENT_ARRAY_BUFFER binding that was the problem. So if you keep a VAO bound and somewhere you manipulate an index buffer by binding it, you may override the one referenced by the currently bound VAO. It isn't affected by just binding GL_ARRAY_BUFFER, as I previously speculated.
  11. Okay. Looks like the moment you decide to write about your problem often happens just before that "eureka" moment when you find the solution yourself. So, for the sake of documentation and other people who might do this stupid thing too, I will let you know what the hell was wrong. It wasn't the tangent basis - turns out it works well the way it is on these screenshots.   The problem.   Was..   [SOLUTION]   The texture image for the normal map was loaded as sRGB. Turns out I had a line that set the internal format of every loaded texture to sRGB (used it for testing once), so all my normal maps were sRGB too. I found it an hour after posting, while desperately playing with Blender's Cycles renderer: I managed to reproduce exactly the same error I had by using a "color" node. Setting it to "non-color data" fixed the problem in Blender, and after checking what this means, it turned out to be Blender's way of saying the texture is not color data, so it should not be treated as sRGB.    So make sure your texture is in the correct format when you have weird artifacts like this on UV seams. I can't even count how much time I wasted looking at my exporter, fixing shaders and pulling my hair out. And it was a single, tiny line in code I didn't even look into.
  12. [SOLVED] Look below for the solution.   Hello, I usually try to solve my problems on my own, as long as I'm making progress and don't hit a wall, but today I hit the wall. I spent ~20 hours trying to figure out why the tangents on my model, which appears fine in Unity and Blender's internal renderer, are incorrect in my engine, and I'm lost. This is when I need to call upon the powers of collective know-how and ask you for help - maybe someone knows enough about this topic, or has had this problem, to help me figure out what is wrong.    PROBLEM: Wherever there is a UV seam in my mesh, like on the head or where the arms connect to the body, the lighting is wrong due to an incorrect tangent basis.    This is how the mesh looks with the TBN vectors visualised:     You can clearly see that there is a seam through the top of the head where the tangent basis is basically pointing in the opposite direction. This is where the UV seam is too. I get these tangents from Blender itself, and it calculates them based on this UV map:     Now, I read a lot about how there _will_ be problems with tangents on UV map seams, but I can't understand how Blender, using its own internal GLSL-based renderer, is able to get this render correct, using a material with the same normal texture set to tangent space:     So I'm pretty sure the error is somewhere between Blender's structure representation and my exporter code. I have no idea how Blender uses the tangents it generates - I dug through their shader code and even replicated it in my shaders just to be sure, but the problem is the same: a big lighting difference on the seam. How is this usually handled? When I export, I have 2 vertices lying on the seam line that have 2 different attributes - uv & tangent. What I do when exporting is check whether the vertex is the same by comparing all attributes, and if not, duplicate it with the attributes that differ, in this case UV & tangent. 
This means I get 2 vertices in the same spot with opposite tangents, and that's where this sharp line comes from.   I tried everything; the best result I got was by averaging the two tangents that collide on the seam vertices. But a plain average, (tang1+tang2)*0.5, isn't flawless and definitely doesn't come close to how Blender renders this mesh. This is what I get by averaging the tangents:     There is no longer a sharp line dividing the two tangent bases - it blends a bit - but it still looks weird, like the head is bent slightly inward, and the lighting "moves" in a strange way when I rotate the object.   I could understand it if Blender had the same problem, but its internal renderer somehow copes with this, while my engine breaks :(   One more thing: when visualizing per-pixel normals, it looks like the problem not only applies to seams, where it's most visible, but also flips the lighting on certain parts of the body:     You can see how the right leg has a totally different orientation of normals compared to the whole back side of the model. This is due to the tangent basis flowing in the opposite direction there. I found that this may happen on mirrored models, but I have no idea how to fix it - is this a problem with the model? Or with how I export the mesh? Or with Blender? And the question remains - why does Blender render correctly, if the model is wrong?      I will be _eternally_ grateful if someone figures this out and gives me some tips pointing me in the right direction, as every additional hour I spend on this doesn't move me towards any solution at all.
  13. I'm using mTail on Win10 and like it a lot. Basically any good "tail-like" program will do.
  14. Use whatever vertex attribute suits you. I think I used an ivec3 for the material ids and a vec3 for the alphas (to blend between 1-3 possible materials per triangle), which exploited barycentric coordinates to allow blending between different materials at all vertices of a triangle. The material indices were always the same for each vertex of a triangle (so quite a bit of data redundancy) to prevent interpolation. Mind that this was 5 years ago, so things may have changed or there may be a better way to do it now.   Say you had a triangle with 2 vertices of material A and 1 vertex of material B, i.e. a case of 2 verts sharing the same material and 1 having a different one:
v1 (has material A, uses 1st position in material vector): material ivec3(A, B, A), alpha vec3(1.0, 0.0, 0.0)
v2 (has material B, uses 2nd position in material vector): material ivec3(A, B, A), alpha vec3(0.0, 1.0, 0.0)
v3 (has material A, uses 1st position in material vector): material ivec3(A, B, A), alpha vec3(1.0, 0.0, 0.0)
Now the case where each vertex has a different material that has to be blended with the others - A, B, C:
v1 (has material A, uses 1st position in material vector): material ivec3(A, B, C), alpha vec3(1.0, 0.0, 0.0)
v2 (has material B, uses 2nd position in material vector): material ivec3(A, B, C), alpha vec3(0.0, 1.0, 0.0)
v3 (has material C, uses 3rd position in material vector): material ivec3(A, B, C), alpha vec3(0.0, 0.0, 1.0)
Hope this makes sense. One problem with this approach is that it's very costly: triplanar already means 3x sampling, and this solution means another 3x on top (one per material slot, even if all vertices share the same material). Add some normal/detail maps and you become texture-sample bound :/   If I were to do this again, I'd go for more draw calls, but batched by the number of unique materials: one batch for triangles whose vertices all share the same material, another for those with 2 different materials, and another for 3 different materials (which will be ~5% of all draw calls). 
Triplanar can be very expensive in terms of samples, so you will have to sacrifice draw call count for it. I remember that my solution from years ago was really on the border of my GPU's sampling capabilities with triplanar.    Actually, I will be bringing this back really soon, and there is no other way than triplanar for this - parametric generation of texcoords for such a mesh is not easy. If anyone has done that, I'd love to hear how :) 
  15. "That's generally how it's done! When you send commands, you will typically timestamp them "for simulation tick X," "for simulation tick X+1," etc. You then put them in a queue when you receive them, and run them at the appropriate tick. If you receive a command for a tick that's already executed on the server, tell the client that it's too far behind and should increase its estimate of transmission latency (add more to the clock to get the estimated arrival server clock.) If you receive more commands when there's already more than one queued command (for some value of "one") then you can tell the client that it's running ahead, and should adjust its estimate backwards a bit. As long as you run ticks at the same rate on server and client, this will work fine! When you get out of sync, there will be a slight discrepancy, which the server needs to correct the client for. Typically, those will be so small that they're not noticed, and they won't happen that often after initial sync-up."

    What does adjusting the estimate mean in this situation? In my project I'm doing exactly what is described above: I have a certain tick rate, and I send client input commands to the server at the client's tick rate, which is the same as the server's. I can't see a situation where the client sends an input that was already processed, because I never send redundant inputs (I send them over UDP, but reliably, so I'm not firing inputs blindly with redundancy like sending the last 3 inputs in each packet - maybe I should?). So if the client sent input for tick #123, it will keep sending the ticks that follow, not the same one, even if it doesn't get confirmation that the input was accepted by the server.   Ah, and I perform client-side simulation too, and confirm ticks from the server, then resimulate the ones queued on the client. I can measure how many ticks haven't been confirmed, for example, and if this piles up I could do something. But you mention estimates of latency, which is something I don't do at all. 
Does it affect the rate at which the client should send inputs? As per the above, I'm sending them at the rate of the local simulation, which has the same rate as the server sim (currently 30Hz). How should I understand adjusting the estimate in my case? What should be done?