

Community Reputation: 1437 (Excellent)
About noizex

Personal Information

  • Location
    Warsaw, Poland
  1. That would work too, I suppose. Thanks for the suggestion!
  2. I think it's not about binding the VAO redundantly (that should be prevented by tracking the currently bound VAO), but about resetting the VAO binding to zero, because some unrelated operation might override loosely bound VAO state - in my case it was binding an index buffer I wanted to update, completely unrelated to any VAO that pointed to that index buffer. Also, binding 0 to reset the vertex array binding is allowed: "glBindVertexArray binds the vertex array object with name array. array is the name of a vertex array object previously returned from a call to glGenVertexArrays, or zero to break the existing vertex array object binding." It's rendering without a VAO that's wrong, I think, but having zero bound as the VAO after you finish rendering is, in my opinion, a good safety measure. Please correct me if I got something wrong here.
  3. Because if you keep that last VAO bound over into the next frame, and then do some operations on buffers using glBindBuffer() (without binding the appropriate VAO first, because you don't have to), you'll be overwriting the currently bound VAO's state, which in most cases can cause nasty and hard-to-track disasters :)   Edit: Though now that I think about it, I'm not so sure. I think I had an issue with overriding VAO state when I kept a VAO bound between frames, so I started resetting it. At some point I was convinced that only glVertexAttribPointer() and friends affect VAO state, and those require glBindBuffer to work. But I _think_ I had cases where glBindBuffer (because I was uploading new data) somehow messed up my other VAO's state. So I can't give you a 100% guarantee that what I say is correct, but I think it's good practice to know that a VAO is bound when it's needed and that the binding is 0 when you're done. Edit2: I made a small mistake - it was the ELEMENT_ARRAY_BUFFER binding that was the problem. If you keep a VAO bound and somewhere you manipulate an index buffer by binding it, you may override the one referenced by the currently bound VAO. It isn't affected by just binding GL_ARRAY_BUFFER, as I previously speculated.
  4. Okay. Looks like the moment you decide to write about your problem often happens just before the "eureka" moment when you find the solution yourself. So, for the sake of documentation and other people who might do this stupid thing too, I'll let you know what the hell was wrong. It wasn't the tangent basis - turns out it works fine the way it is on these screenshots.   The problem.   Was..   [SOLUTION]   The texture image for the normal map was loaded as sRGB. Turns out I had a line that set the internal format of every loaded texture to sRGB (used it for testing once), so all my normal maps were sRGB too. Found it an hour after posting, while I desperately played with Blender's Cycles renderer and managed to reproduce exactly the same error as I had, using a "color" node. Setting it to "non-color data" fixed the problem in Blender, and after checking what this means, it turned out to be Blender's way of marking that the texture is not color data, so it isn't treated as sRGB.    So make sure your texture is in the correct format when you get weird artifacts like this on UV seams. I can't even count how much time I wasted staring at my exporter, fixing shaders and pulling my hair out. And it was a single, tiny line in code I didn't even look at.
  5. [SOLVED] Look below for the solution.   Hello, I usually try to solve my problems on my own as long as I'm making progress and don't hit a wall, but today I hit the wall. I've spent ~20 hours trying to figure out why the tangents on my model, which looks fine in Unity and in Blender's internal renderer, are incorrect in my engine, and I'm lost. This is when I need to call upon the powers of collective know-how and ask for help - maybe someone knows enough about this topic, or has had this problem, to help me figure out what is wrong.    PROBLEM: Wherever there is a UV seam in my mesh, like on the head or where the arms connect to the body, the lighting is wrong due to an incorrect tangent basis.    This is how the mesh looks with the TBN vectors visualised:     You can clearly see a seam across the top of the head where the tangent basis is basically pointing in the opposite direction. This is where the UV seam is, too. I get these tangents from Blender itself, and it calculates them based on this UV map:     Now, I've read a lot about how there _will_ be problems with tangents on a UV seam, but I can't understand how Blender's own internal GLSL-based renderer gets this render correct, using a material with the same normal texture set to tangent space:     So I'm pretty sure the error is somewhere between Blender's structure representation and my exporter code. I have no idea how Blender uses the tangents it generates; I dug through their shader code and even replicated it in my shaders just to be sure - but the problem is the same, a big lighting difference at the seam. How is this usually handled? When I export, I have 2 vertices lying on the seam line with 2 different attributes - uv & tangent. What I do when exporting is check whether the vertex is the same by comparing all attributes, and if not, replicate it with the attributes that differ, in this case UV & tangent.
This means I get 2 vertices in the same spot with opposite tangents, and that's where the sharp line comes from.   I tried everything; the best result I got was by averaging the two tangents that collide at the seam vertices. But a plain average, (tang1+tang2)*0.5, isn't flawless and definitely doesn't come close to how Blender renders this mesh. This is what I get by averaging the tangent:     There is no longer a sharp line dividing the 2 tangent bases - it blends a bit - but it still looks weird, as if the head were bent inward slightly, and the lighting "moves" in a strange way when I rotate the object.   I could understand it if Blender had the same problem, but its internal renderer somehow copes while my engine breaks :(   One more thing: when visualizing per-pixel normals, it looks like the problem applies not only to the seams, where it's most visible, but also flips the lighting on certain parts of the body:     You can see how the right leg has a totally different orientation of normals compared to the whole back side of the model. This is due to the tangent basis flowing in the opposite direction there. I found that this may happen on mirrored models, but I have no idea how to fix it. Is this a problem with the model? With how I export the mesh? With Blender? And the question remains - why does Blender render it correctly if the model is wrong?      I will be _eternally_ grateful if someone figures this out and gives me some tips pointing me in the right direction, as every additional hour I spend on this doesn't move me toward a solution at all.
  6. I'm using mTail on Win10 and like it a lot. Basically any good "tail-like" program will do.
  7. Use whatever vertex attribute suits you. I think I used an ivec3 for material ids and a vec3 for alphas (to blend between 1-3 possible materials per triangle), which exploited barycentric coordinates to allow blending between different materials at all vertices of a triangle. The material indices were always the same across a triangle's vertices (so quite a bit of data redundancy) to prevent interpolation. Mind that it was 5 years ago, so things may have changed or there may be a better way to do this.   Say you had a triangle with 2 vertices using material A and 1 vertex using material B, so 2 verts share the same material and 1 has a different one:
v1 (has material A, uses 1st position in material vector): material ivec3(A, B, A), alpha vec3(1.0, 0.0, 0.0)
v2 (has material B, uses 2nd position in material vector): material ivec3(A, B, A), alpha vec3(0.0, 1.0, 0.0)
v3 (has material A, uses 1st position in material vector): material ivec3(A, B, A), alpha vec3(1.0, 0.0, 0.0)
Now the case where each vertex has a different material that has to be blended with the others, A, B, C:
v1 (has material A, uses 1st position in material vector): material ivec3(A, B, C), alpha vec3(1.0, 0.0, 0.0)
v2 (has material B, uses 2nd position in material vector): material ivec3(A, B, C), alpha vec3(0.0, 1.0, 0.0)
v3 (has material C, uses 3rd position in material vector): material ivec3(A, B, C), alpha vec3(0.0, 0.0, 1.0)
Hope this makes sense. One problem with this approach is that it's very costly: triplanar already means 3x sampling, and this solution means another 3x on top (one per material slot, even if all vertices share the same material). Add some normal/detail maps and you're getting texture-sample bound :/   If I were to do this again, I'd go for more draw calls, but batched by the number of unique materials. So all vertices sharing the same material is one batch, another for 2 different materials, and another for 3 different materials (which will be ~5% of all draw calls).
Triplanar can be very expensive in terms of samples made, so you will have to sacrifice draw call count for it. I remember my solution from years ago was really at the limit of my GPU's sampling capabilities with triplanar.    Actually, I will be bringing this back really soon, and there is no way around triplanar for this - parametric generation of texcoords for such a mesh is not easy - if anyone has done that I'd love to hear how :) 
  8. "That's generally how it's done! When you send commands, you will typically timestamp them 'for simulation tick X,' 'for simulation tick X+1,' etc. You then put them in a queue when you receive them, and run them at the appropriate tick. If you receive a command for a tick that's already executed on the server, tell the client that it's too far behind and should increase its estimate of transmission latency (add more to the clock to get the estimated server clock on arrival). If you receive more commands when there's already more than one queued command (for some value of 'one') then you can tell the client that it's running ahead, and should adjust its estimate backwards a bit. As long as you run ticks at the same rate on server and client, this will work fine! When you get out of sync, there will be a slight discrepancy, which the server needs to correct the client for. Typically, those will be so small that they're not noticed, and they won't happen that often after initial sync-up."     What does adjusting the estimate mean in this situation? In my project I'm doing exactly what is described above: I have a certain tick rate and I send client input commands to the server at the client's tick rate, which is the same as the server's. I can't see a situation where the client sends the same input that was already processed - because I never send redundant inputs (I send them over UDP, but reliably, so I'm not firing inputs blindly with redundancy like sending the last 3 inputs in each packet - maybe I should?), so if the client sent input for tick #123, it will keep sending the ticks that follow, not the same one, even if it doesn't get confirmation that the input was accepted by the server.   Ah, and I run a client-side simulation too, confirm ticks from the server, then resimulate those still queued on the client. I can measure how many ticks haven't been confirmed, for example, and if that piles up I could do something. But you mention estimates of latency, which is something I don't do at all.
Does it affect the rate at which the client should send inputs? As per the above, I'm sending them at the rate of the local simulation, which is the same rate as the server sim (30Hz right now). How should I understand adjusting the estimate in my case? What should be done?
  9. It's never only "engine" or only "artist". The artist does the rigging/skinning and makes the animation clips, which the engine then processes further. There may be IK solver work done on the engine side, clip blending, attaching objects to bones, adding constraints, ragdolling and so on.   Look up "inverse kinematics lookat" to find some reading on this topic. Just to confirm I understand what you're trying to achieve - you want to do something like this?   https://www.youtube.com/watch?v=jzDCvlWvhfY
  10. Sounds good. Basically you want to separate resources like vertex/index buffers, textures and shaders from your components. Maybe when creating a new object, ask some resource cache for the needed resources and hand them to the components. The important thing is to not do this for every object separately, but to keep one loaded version of each resource and just refer to it.    This may be too much for now, but another optimisation is keeping a state cache. If you render 10 meshes and all of them use the same VAO, don't set that VAO 10 times; keep track of currentVAO/currentShader/currentRenderState and check whether you really need to make the GL call. This greatly reduces the number of GL calls, and even more so if you queue your geometry so you can sort it by shader/material/state to achieve even fewer state changes.
  11. Well, it's hard for us to say what InitSystem does in the big picture (so far we know it allocates a lot of resources) and how often the update is called. But if you're running this loop for every created bullet object - which should be very lightweight (check the flyweight pattern, for example) - and creating separate buffers, shaders and so on, then it's already pretty bad for performance even if it doesn't happen every frame (and since bullets are short-lived objects, it may happen quite often).    Considering you have bullets and fighter objects, in a basic situation you should need no more than 2 different shaders (say, one for solid static objects and another for billboarding or whatever technique you use to render bullets; I don't even know whether this is a 3D or 2D case).   You're correct about the VAO: you can just bind the VAO and draw all identical objects with it, changing only the model matrix.
  12. You're correct that you're wrong :) you can (and should) reuse the buffers (and everything else, too). In the rare cases where geometry is so dynamic it has to change every frame, you can update it - but not the way you're doing it in your code: use mapped buffers rather than re-initializing every GL resource each update call. Allocating objects in such loops every frame without any sort of pooling isn't helping either.   The first thing I would suggest is getting rid of re-creating GL resources each frame.
  13. Hehe, yeah, this thread inspired me too. I couldn't force myself to rework my rendering system into something more flexible, but after a week I'm done, with a nice way to render such effects easily, so I will also have a try. Stay tuned for questions from me soon :P
  14. I think you're confusing several concepts here and complicating things unnecessarily. When you load the resource, you supply whatever ID you use to identify the resource and its version on the medium where it's stored. It may be a file path, just a name or a numeric id, or something more complex like a struct that contains both filename and version. It doesn't matter what type it is, as long as your loader knows how to deal with it. One thing matters: the type should be hashable, because you want to store references to your resources in a std::unordered_map or similar container for fast lookup. You don't want to iterate over a vector looking for the resource by comparing strings, as you suggest above.   So for the resource key, just use whatever type is most convenient for you. Strings are perfectly fine for this, as they're hashable. You can even make it all compile-time with some C++ magic, but that's probably only needed if you do a lot of lookups on thousands of resources. In my case, I don't ask the manager each time I need the resource (i.e. each frame just before rendering); I use simple refcounting and return not a plain pointer but a handle that can be resolved to a pointer (this also acts as a proxy and allows returning a placeholder while doing an async load). The resolving doesn't use a map but a vector: the handle keeps an index to the resource in that vector (the vector is the main container for my resources; the map just holds references in case a key lookup is needed through the Get<T> method). This requires designing a good policy for when the handle is valid, what happens when resources in the vector need to move to other indices, and so on, but otherwise it's fast where it needs to be and allows easy lookup where that's needed.
In terms of code it looks a bit like this snippet:

struct Model {
    Resource<Shader> shader;
    Resource<Texture> texture;
    Resource<Mesh> mesh;

    void LoadResources(ResourceManager& resources) {
        // Here it does the std::unordered_map lookup on the Store class (the Store class is
        // provided by default but can also be user-defined with a completely different storage
        // implementation, so the unordered_map is just the usual case)
        shader = resources.Get<Shader>(ShaderKey("base_shader", E_NormalMapped | E_Lighting));
        texture = resources.Get<Texture>("texture.dds");
        mesh = resources.Get<Mesh>("mesh.iqm");
    }
};

Now a bit on what happens when you call Get<T>:
- Find the appropriate metadata for resource type T - the metadata contains load/store/retrieve callbacks, the Store object and the Loader object
- Call Store->Retrieve(key) to check whether the resource exists in the cache
- If the resource exists, the Store returns a Resource<T> handle to it and increases the refcount
- If the resource doesn't exist, the Store returns an empty handle
- If the handle is empty, proceed to loading the object, so Loader->Load(key) is called
- Depending on the loader, it either fetches the resource from somewhere or creates it using a factory class; this all happens inside the Loader, and other classes like the Store or the ResourceManager itself don't know anything about it. The Loader just has to return the resource (this time as a unique_ptr) or nullptr
- When the loader returns and the handle is not empty, the resource gets stored into the Store with a refcount of 1 (the store callback is called with the unique_ptr, which is then moved into the Store, as the Store is the owning entity for resources)
- The handle is returned to the user

Later on, when I actually need to use the resource, I use the handle as if it were a normal pointer to the resource it holds:

RenderModel(mesh->GetVertexBuffer(), mesh->GetIndexBuffer(), texture->GetId(), shader.get());

When -> is called, the handle makes the call on the owning Store (to which it holds a pointer), like m_Owner->Get(m_ResourceIndex), where m_ResourceIndex is the index into the resource vector. This way it can return a placeholder or do all sorts of things, because it has the indirection you don't get when pointing straight at a Resource* pointer.   To the rendering system I usually pass the raw resource ids (GLuints in my case), but sometimes I need the resource object itself, as in the Shader case (I need to call a few more methods, and Shader keeps state for this, like caching uniforms per shader type). This is why the last argument is passed as shader.get(), which returns a plain Shader* pointer, just like std::unique_ptr and similar smart pointers do. There is a policy that the resource manager will never free any resource during the rendering phase, so passing a raw pointer and holding on to it for a while (during the journey of a RenderTask through queues, sorting, and into the actual render call) is fine, because we guarantee the resource stays alive during that time.   It's still a naive implementation, so there may be some problems, but it has worked for me so far and it's quite flexible, so I hope it helps you a bit rather than adding more confusion :)
  15. You may consider making it a bit more generic - with some template programming you could have resources of any type, not coupled to a common base class, which isn't really a good solution (deriving from a Resource class or similar). Not only will it have broader use (for example, you could use external classes that you can't derive from a common base without modifying some library), it should also be faster, because you won't pay for virtual calls on something as crucial as getting a resource id - something that happens a lot every frame.   Think of a handle like Resource<Texture>, Resource<Shader>, Resource<Config>, etc. You could have resources that don't all use the same key type to fetch them (so, for example, instead of std::string some more sophisticated struct, or maybe a uint?).  This way you don't have the problem you seem to describe - several "semi-similar" classes that all have their own quirks and data. If you have a Resource<Texture>, you know the object you're dealing with is a texture. If you have a Resource<Shader>, you know it's a shader, which may have a completely different interface than a texture. A common interface ties your hands for things that may be completely unrelated in functionality and share only one trait: they're loaded and managed in a way that minimizes duplicates.    In terms of loading, that's where you specify which resource type you're querying:   auto texture = resources.Get<Texture>("mytexture"); auto shader = resources.Get<Shader>(ShaderKey("myshader", shaderBitMask)); auto framebuffer = resources.Get<Framebuffer>(2);    etc.