noizex

GDNet+ Standard
  • Content count

    133

Community Reputation

1437 Excellent

About noizex

  • Rank
    Member

Personal Information

  • Interests
    artist, business, designer, programmer, production, QA, educator
  1. Use GLFW or something, pretty please...
  2. Node Graph UI for Accidental Noise Library

    One of the really few use cases where such a visual node-graph approach truly shines, even for people who can actually write code instead of connecting nodes. Great job!
  3. I'm not sure why you want to send events - you just manipulate this dynamic mesh and render it each frame "as is"; if it changes, it will be rendered as changed. Why should the rendering system update its state based on a mesh that changed? It just gets a task to render the mesh and does it; it doesn't care whether the mesh never changes or changed just last frame. Or am I missing something here?
  4. OpenGL why unbind VAO

      That would work too, I suppose. Thanks for the suggestion!
  5. OpenGL why unbind VAO

    I think it's not about binding the VAO redundantly (that should be prevented by keeping track of the currently bound VAO), but about resetting the VAO binding to zero because some other operation might override the state of a VAO that was left loosely bound - in my case it was binding an index buffer I wanted to update, completely unrelated to any VAO that pointed to it.

    Also, binding 0 to reset the vertex array binding is allowed: "glBindVertexArray binds the vertex array object with name array. array is the name of a vertex array object previously returned from a call to glGenVertexArrays, or zero to break the existing vertex array object binding."

    It's rendering without a VAO that's wrong, I think, but having zero bound as the VAO after you finish rendering is, in my opinion, a good safety measure. Please correct me if I messed something up here.
  6. OpenGL why unbind VAO

    Because if you keep that last VAO bound over to the next frame and then do some operations on buffers using glBindBuffer() (without binding the appropriate VAO first, because you don't have to), you'll be overwriting the state of whatever VAO is currently bound, which in most cases causes nasty and hard-to-track disasters :)

    Edit: Though now that I think about it, I'm not so sure. I had an issue with VAO state being overridden when I kept a VAO bound between frames, so I started resetting it. At some point I was convinced that only glVertexAttribPointer() and friends affect VAO state, and those require glBindBuffer() to work. But I _think_ I had cases where glBindBuffer() (because I was uploading new data) somehow messed up my other VAO's state. So I can't give you a 100% guarantee that what I say is correct, but I think it's good practice to make sure a VAO is bound when it's needed and that zero is bound when you're done.

    Edit 2: I made a small mistake - it was the ELEMENT_ARRAY_BUFFER binding that was the problem. If you keep a VAO bound and somewhere else you manipulate an index buffer by binding it, you may override the one referenced by the currently bound VAO. It isn't affected by just binding GL_ARRAY_BUFFER, as I previously speculated.
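    A minimal sketch of the hazard and the reset I mean (hypothetical meshVAO / indexCount names, assuming a valid GL context - not anyone's actual code):

        // Draw, then break the VAO binding so later, unrelated buffer work
        // can't silently modify this VAO's element-array-buffer binding.
        glBindVertexArray(meshVAO);
        glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
        glBindVertexArray(0); // safety reset

        // Had meshVAO stayed bound here, a later
        //     glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, someOtherIndexBuffer);
        // done just to upload new index data would replace meshVAO's
        // element array buffer binding with someOtherIndexBuffer.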
  7. Okay. Looks like the moment you decide to write about your problem often happens just before you have that "eureka" moment and find the solution yourself. So, for the sake of documentation and other people who might do this stupid thing too, I'll describe what the hell was wrong. It wasn't the tangent basis - it turns out that works fine the way it is on those screenshots.

    The problem was...

    [SOLUTION]

    The texture image for the normal map was loaded as sRGB. It turns out I had a line that set the internal format of every loaded texture to sRGB (I used it for testing once), so all my normal maps were sRGB too. I found it an hour after posting, while desperately playing with Blender's Cycles renderer, where I managed to reproduce exactly the same error I had by using a "Color" image node. Setting it to "Non-Color Data" fixed the problem in Blender, and after checking what that means, it turned out "Color" is Blender's way of marking an image as color data, which it then treats as sRGB.

    So make sure your texture is in the correct format when you get weird artifacts like this on UV seams. I can't even count how much time I wasted looking at my exporter, fixing shaders and pulling my hair out. And it was a single, tiny line in code I didn't even look at.
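    For reference, this is roughly the distinction that matters when uploading the textures (a sketch, not my actual loader; w/h and the pixel pointers are placeholders):

        // Albedo/diffuse textures are authored in sRGB, so let the GPU decode them:
        glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, albedoPixels);

        // Normal maps store vectors, not colors - keep them linear, or the decoded
        // normals get skewed and lighting breaks (most visibly along UV seams):
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, normalPixels);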
  8. [SOLVED] Look below for the solution.

    Hello, I usually try to solve my problems on my own as long as I'm making progress and don't hit a wall, but today I hit the wall. I've spent ~20 hours trying to figure out why the tangents on my model, which appears fine in Unity and in Blender's internal renderer, are incorrect in my engine, and I'm lost. This is when I need to call upon the powers of collective know-how and ask you for help - maybe someone knows enough about this topic, or has had this problem, to help me figure out what is wrong.

    PROBLEM: Wherever there is a UV seam in my mesh, like on the head or where the arms connect to the body, the lighting is wrong due to an incorrect tangent basis.

    This is how the mesh looks with the TBN vectors visualised: you can clearly see that there is a seam across the top of the head where the tangent basis is basically pointing in the opposite direction. This is where the UV seam is too. I get these tangents from Blender itself, and it calculates them based on the UV map.

    Now, I've read a lot about how there _will_ be problems with tangents on UV seams, but I can't understand how Blender, using its own internal GLSL-based renderer, gets this to render correctly, using a material with the same normal texture set to tangent space. So I'm pretty sure the error is somewhere between Blender's data structures and my exporter code. I have no idea how Blender uses the tangents it generates - I dug through their shader code and even replicated it in my shaders just to be sure - but the problem is the same: a big lighting difference at the seam. How is this usually handled? When I export, I have 2 vertices lying on the seam that have 2 differing attributes - UV and tangent. What I do when exporting is check whether a vertex is the same by comparing all attributes, and if not, replicate it with the attributes that differ, in this case UV and tangent. This means I get 2 vertices at the same spot with opposite tangents, and that's where this sharp line comes from.

    I tried everything; the best result I got was by averaging the two tangents that collide on the seam vertices. But a plain average, (tang1+tang2)*0.5, isn't flawless and definitely doesn't come close to how Blender renders this mesh. With the averaged tangent there is no sharp line dividing the two tangent bases - it blends a bit - but it still looks weird, like the head is bent slightly inwards, and the lighting "moves" in a strange way when I rotate the object.

    I could understand it if Blender had the same problem, but its internal renderer somehow copes with this, while my engine breaks :(

    One more thing: when visualising per-pixel normals, it looks like the problem not only applies to seams, where it's most visible, but also flips the lighting on certain parts of the body. You can see how the right leg has a totally different orientation of normals compared to the whole back side of the model. This is due to the tangent basis flowing in the opposite direction there. I found that this may happen on mirrored models, but I have no idea how to fix it. Is this a problem with the model? With how I export the mesh? With Blender? And the question remains - why does Blender render it correctly if the model is wrong?

    I will be _eternally_ grateful if someone figures this out and gives me some tips pointing me in the right direction, as every further hour I spend on this doesn't move me towards any solution at all.
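    For context, the vertex splitting rule I use in the exporter is roughly this (a sketch with glm types and made-up names, not the actual exporter code):

        #include <glm/glm.hpp>
        #include <glm/gtc/epsilon.hpp>

        struct Vertex {
            glm::vec3 position;
            glm::vec3 normal;
            glm::vec2 uv;
            glm::vec3 tangent;
        };

        // Two face corners may share one exported vertex only if *all* attributes
        // match; otherwise the vertex is duplicated. On a UV seam the UV and
        // tangent differ, which is what produces the two co-located vertices
        // with opposite tangents.
        bool sameVertex(const Vertex& a, const Vertex& b, float eps = 1e-6f)
        {
            return glm::all(glm::epsilonEqual(a.position, b.position, eps)) &&
                   glm::all(glm::epsilonEqual(a.normal,   b.normal,   eps)) &&
                   glm::all(glm::epsilonEqual(a.uv,       b.uv,       eps)) &&
                   glm::all(glm::epsilonEqual(a.tangent,  b.tangent,  eps));
        }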
  9. I'm using mTail on Win10 and like it a lot. Basically any good "tail-like" program will do.
  10. Use whatever vertex attribute suits you. I think I used an ivec3 for the material ids and a vec3 for the alphas (to blend between 1-3 possible materials per triangle), which exploited barycentric coordinates to allow blending between the materials of all vertices of a triangle. The material index triple was always the same for every vertex of a triangle (so quite a bit of data redundancy) to prevent interpolation of the indices. Mind that this was 5 years ago, so things may have changed or there may be a better way to do this.

    Say you had a triangle with 2 vertices using material A and 1 vertex using material B, so this is the case of 2 verts sharing the same material and 1 having a different one:

    v1 (has material A, uses 1st slot of the material vector): material ivec3(A, B, A), alpha vec3(1.0, 0.0, 0.0)
    v2 (has material B, uses 2nd slot of the material vector): material ivec3(A, B, A), alpha vec3(0.0, 1.0, 0.0)
    v3 (has material A, uses 1st slot of the material vector): material ivec3(A, B, A), alpha vec3(1.0, 0.0, 0.0)

    Now the case where each vertex has a different material that has to be blended with the others, A, B, C:

    v1 (has material A, uses 1st slot of the material vector): material ivec3(A, B, C), alpha vec3(1.0, 0.0, 0.0)
    v2 (has material B, uses 2nd slot of the material vector): material ivec3(A, B, C), alpha vec3(0.0, 1.0, 0.0)
    v3 (has material C, uses 3rd slot of the material vector): material ivec3(A, B, C), alpha vec3(0.0, 0.0, 1.0)

    Hope this makes sense. One problem with this approach is that it's very costly: triplanar already means 3x sampling, and this solution means another 3x sampling (one per material slot, even if all three slots share the same material). Add some normal/detail maps and you quickly become texture-sample bound :/

    If I were to do this again, I'd go for more draw calls, but batched by the number of unique materials. So all triangles whose vertices share a single material go in one batch, another batch for 2 different materials, and another for 3 different materials (which will be ~5% of all draw calls). Triplanar can be very expensive in terms of samples, so you'll have to sacrifice draw call count for it. I remember my solution from years ago was really on the edge of my GPU's sampling capabilities with triplanar.

    Actually, I will be bringing this back really soon, and there is no way around triplanar for this - parametric generation of texcoords for such a mesh is not easy. If anyone has done that, I'd love to hear how :)
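    As a sketch of how such attributes could be fed to GL (hypothetical struct and attribute locations, not my old code) - note that integer attributes need glVertexAttribIPointer, otherwise the ids get converted to floats:

        struct BlendVertex {
            float position[3];
            int   materialIds[3]; // same triple for every vertex of a triangle
            float alpha[3];       // one weight per material slot, interpolated
        };

        // location 1: material ids as a real integer attribute (ivec3 in GLSL)
        glVertexAttribIPointer(1, 3, GL_INT, sizeof(BlendVertex),
                               (void*)offsetof(BlendVertex, materialIds));
        // location 2: blend weights (vec3 in GLSL)
        glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, sizeof(BlendVertex),
                              (void*)offsetof(BlendVertex, alpha));
        glEnableVertexAttribArray(1);
        glEnableVertexAttribArray(2);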
  11. "That's generally how it's done! When you send commands, you will typically timestamp them "for simulation tick X," "for simulation tick X+1," etc. You then put them in a queue when you receive them, and run them at the appropriate tick. If you receive a command for a tick that's already executed on the server, tell the client that it's too far behind and should increase its estimate of transmission latency (add more to the clock to get the estimated server clock on arrival). If you receive more commands when there's already more than one queued command (for some value of "one") then you can tell the client that it's running ahead, and should adjust its estimate backwards a bit. As long as you run ticks at the same rate on server and client, this will work fine! When you get out of sync, there will be a slight discrepancy, which the server needs to correct the client for. Typically, those will be so small that they're not noticed, and they won't happen that often after the initial sync-up."

    What does adjusting the estimate mean in this situation? In my project I'm doing exactly what is described above: I have a certain tick rate and I send client input commands to the server at the client's tick rate, which is the same as the server's. I can't see a situation where the client sends the same input that was already processed, because I never send redundant inputs (I send them over UDP, but reliably, so I'm not firing inputs blindly with some redundancy like sending the last 3 inputs in each packet - maybe I should?). So if the client sent the input for tick #123, it will keep sending the ticks that follow, not the same one, even if it never gets confirmation that the input was accepted by the server.

    Ah, and I perform client-side simulation too, confirm ticks from the server, and then re-simulate the ones still queued on the client. I can measure how many ticks haven't been confirmed, for example, and if that piles up I could do something. But you mention estimates of latency, which is something I don't do at all. Does it affect the rate at which the client should send inputs? As per the above, I'm sending them at the rate of the local simulation, which is the same rate as the server sim (currently 30 Hz). How should I understand adjusting the estimate in my case? What should be done?
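    A sketch of the server-side bookkeeping the quoted advice describes (all names and thresholds are made up for illustration):

        // Called when a timestamped input command arrives from a client.
        void onInputCommand(Client& client, const InputCommand& cmd)
        {
            if (cmd.tick <= currentServerTick) {
                // Too late to simulate: tell the client to raise its latency
                // estimate, i.e. stamp future inputs for later ticks.
                sendClockAdjust(client, +1);
                return;
            }
            client.pendingInputs.push(cmd);

            if (client.pendingInputs.size() > kTargetQueueDepth + 1) {
                // Client is running too far ahead: tell it to pull its
                // estimate back a bit.
                sendClockAdjust(client, -1);
            }
        }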
  12. It's never only the "engine" or only the "artist". The artist does the rigging/skinning and makes the animation clips, which the engine then processes further. There may be IK solver work done on the engine side, clip blending, attaching objects to bones, adding constraints, ragdolling and so on.

    Look up "inverse kinematics look-at" to find some reading on this topic. Just to confirm I understand what you're trying to achieve correctly - you want to do something like this?

    https://www.youtube.com/watch?v=jzDCvlWvhfY
  13. Sounds good. Basically you want to separate resources like vertex/index buffers, textures and shaders from your components. Maybe when creating a new object, ask a resource cache for the needed resources and hand them to the components. The important thing is to not load them for every object separately, but to keep one loaded copy of each resource and just refer to it.

    This may be too much for now, but another optimisation is keeping a state cache. If you render 10 meshes and all of them use the same VAO, don't set that VAO 10 times; instead keep track of currentVAO/currentShader/currentRenderState and check whether you really need to make the GL call. This greatly reduces the number of GL calls, and even more so if you queue your geometry so you can sort it by shader/material/state to achieve even fewer state changes.
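    A minimal sketch of such a state cache, just the VAO part (hypothetical wrapper class):

        class RenderState {
        public:
            void bindVAO(GLuint vao)
            {
                if (vao == currentVAO)
                    return;                 // skip the redundant GL call
                glBindVertexArray(vao);
                currentVAO = vao;
            }
            // same idea for currentShader / currentRenderState...
        private:
            GLuint currentVAO = 0;
        };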
  14. Well, it's hard for us to say what InitSystem does in the big picture (so far we know it allocates a lot of resources) and how often the update is called. But if you're running this loop for every created bullet - an object that should be very lightweight (look up the flyweight pattern, for example) - and creating separate buffers, shaders and so on, then it's already pretty bad for performance even if it doesn't happen each frame (and since bullets are short-lived objects, it may happen quite often).

    Considering you have bullets and fighter objects, in a basic situation you shouldn't need more than 2 different shaders (say one for solid static objects and another for billboarding or whatever technique you use to render the bullets; I don't even know whether this is a 3D or 2D case).

    You're correct about the VAO: you can just bind the VAO and draw all identical objects with it, changing only the model matrix.
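    Roughly what "bind once, draw many" looks like for the bullets (hypothetical names; assumes a model-matrix uniform and glm for the matrix pointer):

        glUseProgram(bulletShader);
        glBindVertexArray(bulletVAO);        // one shared mesh for every bullet

        for (const Bullet& b : bullets) {
            glUniformMatrix4fv(modelMatrixLoc, 1, GL_FALSE, glm::value_ptr(b.modelMatrix));
            glDrawElements(GL_TRIANGLES, bulletIndexCount, GL_UNSIGNED_INT, nullptr);
        }

        glBindVertexArray(0);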
  15. You're correct that you're wrong :) You can (and should) reuse the buffers (and everything else, too). In rare cases where geometry is so dynamic that it has to change every frame you can update it, but not the way you're doing it in your code - use mapped buffers rather than re-initializing every GL resource on each update call. Allocating objects in such loops every frame without any sort of pooling isn't helping either.

    The first thing I would suggest is getting rid of re-creating GL resources each frame.
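    A sketch of the create-once / update-per-frame pattern I mean (hypothetical names; glBufferSubData shown, glMapBufferRange works too):

        // Init, done once:
        glGenBuffers(1, &dynamicVBO);
        glBindBuffer(GL_ARRAY_BUFFER, dynamicVBO);
        glBufferData(GL_ARRAY_BUFFER, maxBytes, nullptr, GL_DYNAMIC_DRAW); // allocate only

        // Per frame, only when the geometry actually changed:
        glBindBuffer(GL_ARRAY_BUFFER, dynamicVBO);
        glBufferSubData(GL_ARRAY_BUFFER, 0, vertexCount * sizeof(Vertex), vertices.data());
        // or map it instead:
        // void* dst = glMapBufferRange(GL_ARRAY_BUFFER, 0, vertexCount * sizeof(Vertex),
        //                              GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
        // memcpy(dst, vertices.data(), vertexCount * sizeof(Vertex));
        // glUnmapBuffer(GL_ARRAY_BUFFER);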