Styves

Members
  • Content count

    187

Community Reputation

1792 Excellent

About Styves

  • Rank
    Member

Personal Information

  • Interests
    Art
    Design
    Production
    Programming
  1. I'm not sure how many specializations you want to have for ObjectSupplier, but if you only want to support ecs::Data types as the specialization, and everything else to use the standard template, then you can just use std::conditional when declaring the Supplier type in registerBindFunction to select the ecs::Data type:

        using Supplier = ObjectSupplier<typename std::conditional<std::is_base_of<ecs::Data<Class>, Class>::value, ecs::Data<Class>, Class>::type>;

    If you don't mind creating a single new overload for registerBindFunction, you can template the above to get rid of the explicit ecs::Data type if you want to support any other wrapped/CRTP base classes. Otherwise, you can use Zipster's approach.

    You can also attempt to use the type directly in registerBindFunction instead of using enable_if. Not sure how much sense that makes since I'm making this up as I go, but:

        using Supplier = ObjectSupplier<typename Class::UnderlyingType>;

    Of course, now you need to make sure any class used with ObjectSupplier has that type, which I'm not particularly happy about. You can use some macro to simplify it, or you can make it more explicit by requiring a new base class that handles it for you:

        template <typename Object>
        struct ObjectReceiver
        {
            using UnderlyingType = Object;
        };

        namespace ecs
        {
            template <typename T>
            class Data : public ObjectReceiver<Data<T>>
            {
            };
        }

    Or something like that, which makes the relationship to ObjectSupplier clearer. But I'm still not too fond of it because it involves even more boilerplate and adds complexity to the usage of ObjectSupplier (now you need the Receiver base class...). However, you can use ObjectReceiver directly in registerBindFunction and statically assert that the input type is derived from it, which gives you a way to guarantee its use with a clear error message.

    That said, if you have some common function in your classes, then you can also completely bypass the need to provide that base type with a little bit of metaprogramming:

        template <typename T, typename R>
        T BaseOf(R T::*);

        // baseFunc is a common function in your base class, or some such
        using Supplier = ObjectSupplier<decltype(BaseOf(&Class::baseFunc))>;

    If this is enough, then you won't even need to add the type to the base, and all is well (you still need some consistent function or variable to point at to deduce the type of the base class, though). There are probably more complex ways of getting the base type without requiring such a thing as "baseFunc", but I haven't looked into it much.

    This is all just off-the-top-of-my-head stuff. std::conditional would be the simplest, but Zipster's approach is probably more scalable. Unfortunately I don't have time to exercise "real template-foo" to get around the type declaration, but frankly it's not that important. The above is just food for thought (i.e. I wrote it up while I was eating a midnight snack, the code probably doesn't even compile :P). There's a small self-contained sketch of the std::conditional version below.
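    Here's a minimal, self-contained sketch of the std::conditional selection, in case it helps to see it compile end to end. ObjectSupplier, ecs::Data, and registerBindFunction here are toy stand-ins for your actual types, so treat it as an illustration rather than drop-in code:

        #include <type_traits>

        template <typename T>
        struct ObjectSupplier {};       // toy stand-in for your supplier

        namespace ecs
        {
            template <typename T>
            class Data {};              // toy stand-in for the CRTP wrapper
        }

        template <typename Class>
        void registerBindFunction()
        {
            // Pick ecs::Data<Class> when Class derives from it, otherwise Class itself.
            using Supplier = ObjectSupplier<
                typename std::conditional<
                    std::is_base_of<ecs::Data<Class>, Class>::value,
                    ecs::Data<Class>,
                    Class>::type>;

            Supplier supplier{};
            (void)supplier;             // hand it off to your binding machinery here
        }

        struct Plain {};
        struct Wrapped : ecs::Data<Wrapped> {};

        int main()
        {
            registerBindFunction<Plain>();   // Supplier = ObjectSupplier<Plain>
            registerBindFunction<Wrapped>(); // Supplier = ObjectSupplier<ecs::Data<Wrapped>>
        }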
  2. Either the game supports player-hosted servers, and tells you "please connect to my server!", or someone reverse-engineers the server-side protocol and runs their own server somewhere on the internet (and, again, says "please connect to my server!").

    This is true for games like Minecraft, but there's an extra layer involved for something like Quake, which has an in-game server browser. So to add to that response: in order for a server browser to function, a "Master Server" (a central server that keeps track of active game servers) is used. When a game server is launched (dedicated, local PC, whatever), it sends information about itself to the Master Server (what its IP is, the server name, etc). Any client that opens the server browser then contacts the Master Server for the server list. A rough sketch of that flow follows below.

    Source games typically use Steam's server system as a Master Server. For Quake, it was originally dev-hosted, but now I'm not entirely sure who's handling it - you can still find games via the server browser for Quake 1 and other derived clients thanks to the code being open sourced, which is why those games are still active today (you can always do a direct connect via IP, but a server browser makes it easier to keep a small community alive).
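    Purely as an illustration of that flow (no real networking, and all the names here are made up - it just shows the bookkeeping a master server does):

        #include <iostream>
        #include <map>
        #include <string>

        struct ServerInfo
        {
            std::string name;
            int playerCount = 0;
        };

        class MasterServer
        {
        public:
            // Game servers call this on launch (and periodically, as a heartbeat).
            void registerServer(const std::string& address, const ServerInfo& info)
            {
                servers_[address] = info;
            }

            // Clients call this when they open the server browser.
            std::map<std::string, ServerInfo> listServers() const { return servers_; }

        private:
            std::map<std::string, ServerInfo> servers_;
        };

        int main()
        {
            MasterServer master;
            master.registerServer("203.0.113.7:27960", {"Frag Fest 24/7", 12});
            master.registerServer("198.51.100.3:27960", {"Duel Arena", 2});

            // A client populating its server browser from the master server's list.
            for (const auto& [address, info] : master.listServers())
                std::cout << info.name << " (" << info.playerCount << " players) @ " << address << "\n";
        }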
  3. Dealing with frustration

    I'm assuming by "writing my own model format" you mean an actual file format. Correct me if I'm wrong on that, because if so then this whole post is pointless.

    For this I highly suggest that you don't start with writing formats, especially if, as you said, you consider yourself a beginner. Model formats are not a simple thing to write up. You typically write a format after you know what requirements and data will be supported by it, and not the other way around. I.e. you should already know how bone animations work in your system before you write the format, which is what you're now realizing and struggling with. If you do things this way, you'll rip open Pandora's box and realize that there's way too much inside of it that you weren't aware of before, and you'll have to tackle all of it before you can finish your initial goal of writing the format. Since your goal was the format, you're now staring at a tower of obstacles, and no doubt these will trample your motivation. You need to focus smaller, on smaller goals with smaller complexity. A model format involves a lot of different data (textures/skeletons/geometry/etc) and is something you can only design up-front if you have in-depth knowledge and experience.

    One approach you can take to get to a format, as a beginner, is to start by loading the assimp model directly and trying to get it running in your engine. Do it in whatever way works, dirty or not, it doesn't really matter. Just focus on learning and having fun getting things up onto the screen. Focus on each element separately, and eventually you'll have a complete setup. Once you've got that setup you can look back on it and examine how it works, which will give you a good idea of what your own format can/might look like. At this point you can write the converter and replace the assimp code with your new format.

    TL;DR: A "model format" is too big and broad/abstract a scope for your initial goal. You should break that down into small reachable goals, like "I'm going to write a format that has nothing but geometry", and then take small steps on it: "I'm adding textures", "I'm adding bone data", "I'm adding animation data", etc, and focus on each of those steps as a separate goal. Don't even think about "I want to write a model format"; just think of the small goals and eventually your model format will come together and you'll feel a little more accomplished and a bit less overwhelmed.
  4. DX11 Set Lighting according to Time of the day

    You can use longitude/latitude to describe the trajectory of the sun, and use the time to determine which point along that arc the sun is currently at. Longitude will control which direction the sun is actually coming from (east, north, west, south, etc). Latitude controls the height of the sun. That would look something like this:

        x = cos(long) * sin(lat);
        y = sin(long) * sin(lat);
        z = cos(lat);

    If you want to simplify it, you can always assume a longitude of 0 and only focus on the latitude, which will only control the height of the sun.
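    Putting that together as a small function - the mapping from time of day to the latitude angle below (a simple linear sweep over 24 hours) is just an assumption for illustration:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        Vec3 sunDirection(float hours, float longitudeRadians)
        {
            const float pi = 3.14159265f;
            float lat = (hours / 24.0f) * 2.0f * pi;   // time of day -> point along the arc

            Vec3 dir;
            dir.x = std::cos(longitudeRadians) * std::sin(lat);
            dir.y = std::sin(longitudeRadians) * std::sin(lat);
            dir.z = std::cos(lat);                     // height of the sun
            return dir;
        }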
  5. MJP made a really nice post about this and also wrote a sample demo for people to try. Should be enough to get you going.

    Each of your 3D textures likely has a world bounds, so you can render this bounds as a deferred volume (box, whatever) and apply it to the screen using deferred shading, adding up the results of all your 3D volumes (or using blending so that they overwrite what's below). This way none of your geometry shaders need to know about which volumes affect them, you don't need to scan for volumes that the object needs to be lit by, and you don't have to worry about excessive overdraw (where a light might relight the same pixel many times).

    If you don't have a deferred shading pipeline in your engine or aren't sure how to add one, then you can also do an old-school forward shading pipeline where you render the geometry several times with additive blending over a base pass (which has ambient + sunlight/global lighting), using a new set of volume textures in each pass. Heavier on geometry load if you have heavy scenes, but simple to implement (just iterate over your geometry draw for every N lights/volumes and set the appropriate data and blend state) - see the sketch below.

    However, if you can keep your texture count within whatever limit you set (either from hardware or a decision on your part) you can just use the single pass. I assume each object likely won't have more than 1-3 volumes affecting it, so if that assumption is correct and you've got a 32-texture limit, you should be able to handle several volumes. Even with 9 textures per volume that's 3 volumes per object + 5 textures for other stuff (diffuse, spec/gloss/roughness, normals, emissive - all of which can be combined in certain ways to reduce the number of samples and textures you actually need; the above can be combined down to 2 textures if you know what you're doing). I think in this case you should be just fine. Cutting down to 5-gaussian SG lighting gives you even more headroom for volumes and textures.
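    A rough sketch of that multi-pass forward loop - the renderer interface here (setBlendState, bindVolumeTextures, etc.) is a made-up stand-in for whatever your engine exposes:

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        struct Mesh {};
        struct Volume {};
        enum class BlendState { Opaque, Additive };

        struct Renderer
        {
            void setBlendState(BlendState) {}
            void bindBaseLighting() {}                                            // ambient + sun/global
            void bindVolumeTextures(const std::vector<Volume>&, size_t, size_t) {}
            void draw(const Mesh&) {}
        };

        void drawLitMeshes(Renderer& renderer,
                           const std::vector<Mesh>& meshes,
                           const std::vector<std::vector<Volume>>& volumesPerMesh)
        {
            const size_t volumesPerPass = 3; // whatever fits your texture budget per pass

            for (size_t m = 0; m < meshes.size(); ++m)
            {
                // Base pass: ambient + sunlight/global lighting.
                renderer.setBlendState(BlendState::Opaque);
                renderer.bindBaseLighting();
                renderer.draw(meshes[m]);

                // Additive passes: one per batch of N volumes affecting this mesh.
                const auto& volumes = volumesPerMesh[m];
                renderer.setBlendState(BlendState::Additive);
                for (size_t i = 0; i < volumes.size(); i += volumesPerPass)
                {
                    size_t count = std::min(volumesPerPass, volumes.size() - i);
                    renderer.bindVolumeTextures(volumes, i, count);
                    renderer.draw(meshes[m]);
                }
            }
        }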
  6. Two ideas come to mind:
    • Use Spherical Gaussians instead of Spherical Harmonics. You can knock your texture count down to 5 and still have some pretty acceptable results.
    • Use deferred shading, and your texture limit concerns disappear entirely.

    You can definitely store 2 sets of data inside a single texture by packing the data. However, you'll lose all ability to interpolate your results, and with 3D textures that I imagine are somewhat coarse, interpolation is not something you'll want to lose.
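    To illustrate the packing point (and why filtering breaks): a minimal sketch, assuming two [0,1] values packed into the high and low halves of a single 8-bit channel. Hardware filtering blends the packed byte as one number, which scrambles both halves - that's the interpolation you lose.

        #include <cmath>
        #include <cstdint>

        // Pack two [0,1] values into one byte (4 bits each).
        uint8_t pack(float a, float b)
        {
            uint8_t hi = static_cast<uint8_t>(std::round(a * 15.0f));
            uint8_t lo = static_cast<uint8_t>(std::round(b * 15.0f));
            return static_cast<uint8_t>((hi << 4) | lo);
        }

        // Unpack them again; only valid on unfiltered (point-sampled) values.
        void unpack(uint8_t packed, float& a, float& b)
        {
            a = ((packed >> 4) & 0xF) / 15.0f;
            b = (packed & 0xF) / 15.0f;
        }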
  7. Stop worrying about your code design; you've seen first hand what it does to your projects. Code is about functionality, not design. You're not painting here. Too often I've seen people rewrite entire subsystems because they were "ugly", only to make things worse in the end, or throw it away and rewrite it from scratch because it "doesn't look nice" (losing features in the process). You're not working on some kind of enterprise software or in a large team, so just let it go. You can refactor later if you really need to.

    I'll let you in on a little secret: the reason your code is so painful is that you're trying so hard to conform to design patterns or some kind of overarching set of "rules" that you think you're supposed to be following because the faceless coders of the internet say that's what good coding is. The link ddengster posted is a good one and proposes a nice approach to code. Definitely give that a read because I think it'll help a lot. This video should also give you an idea of why you're struggling (ignoring the somewhat clickbait-y title).

    I don't know of any platformer engines online off the top of my head (you could look into Löve games, but those are in Lua). It might be more complex (and FPS-centric), but idTech 4 (Doom 3) is a good one to look at. According to things like the SOLID principles, etc., it's actually pretty awful, but in practice it's very easy to follow. Similarly, Steam's SpaceWar example is simple in the same fashion, and frankly it's better than what I've seen most people write, considering its simplicity and clear violation of many coding principles. It might be a better option as it's far smaller and 2D. You can get the code for SpaceWar with the Steamworks SDK.
  8. A lot of code bases opt for a change in terminology and name their interfaces "abstract", i.e. my_abstract_class.cpp. That's one way, anyway.

    Edit: you'd probably want to use my_class_interface though, since you're using an interface and it would be good to keep the distinction (Apoch makes a good point about the interchanging of terms being confusing; best not to contribute to that).

    In my professional work our interface files are prefixed with I (we have PascalCase filenames). Not something I'm fond of, but that's how they do it. In my personal code I don't prefix my interfaces and instead go for clear naming. If something is ambiguous I'll tackle it when it comes up (rename derived classes, change the name to something clearer, refactor the architecture if needed). In which case I'd name the files just like everything else.
  9. OpenGL Quake3 ambient lighting

    The "get ambient from floor" only applies to Quake 2 (and possibly Quake 1). Quake 3 uses a 3D grid, each cell contained 3 values: Direction, Ambient light color, and Direct light color. This data is filled during lightmap baking. The grid is bound to the levels bounding box and divided by some value (default is xyz: 64,64,128). That is, each cell ranges 64 units in XY and 128 on Z (which is up in Quake). Modern games do largely the same thing, however they store the results in different forms (spherical harmonics, whatever else). JoeJ covers this in the post above. For ambient diffuse, a variation of the things JoeJ can be seen in modern engines, each with their own tradeoffs. For ambient reflections, cubemaps are used, often times hand-placed.
  10. Depth in water shader

    You're using Z/W. You should be using either linear depth, or depthDropoffScale should be made proportional (getting smaller with distance) to account for the non-linear distribution of your depth values. I.e. use W/FarDist.
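    For reference, a small sketch of linearizing a hardware depth value, assuming a standard D3D-style projection (nearDist/farDist and the reconstruction below are my assumptions, not code from this thread):

        // zOverW is the post-projection depth (Z/W) read from the depth buffer.
        float linearDepth01(float zOverW, float nearDist, float farDist)
        {
            // Recover view-space distance from the hardware depth value...
            float viewZ = (nearDist * farDist) / (farDist - zOverW * (farDist - nearDist));
            // ...then normalize by the far plane, i.e. the W/FarDist suggestion above.
            return viewZ / farDist;
        }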
  11. ACES and dynamic range

    I think the approximation is cool, but it only approximates the general curve of the output, none of the other "features" (like the red adjustment, etc). If you properly integrate something like OpenColorIO, you can spit out a simple LUT for the entire ODT/RRT transform and get great results pretty cheaply (you should pre-expose and map your HDR input into 12-bit ACESproxy space though, so you have a proper range for your LUT input, since it's bounded).

    You can also do this with a shader. I wrote an example on Shadertoy that performs the ODT/RRT transforms on a LUT; you can try pulling it and baking out the LUT from the first shader. FYI, if you do this you should probably store your LUT in something better than an 8-bit texture. The professional apps usually use either 10 or 12 bits, occasionally 16-bit float. 16^3 might not be enough for the ACES transform (I recall having issues at that resolution), so I'd suggest at least 32^3, or 64^3 if you can afford it.

    I'd suggest the OpenColorIO route for generating your LUT though, since it saves you a ton of work in the long run (just send OCIO some array of colors and a transform name and it'll spit out the result with all the proper LUTs applied), and it's much, much cleaner than carrying around a bunch of ACES code.
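    The baking itself is just iterating a small 3D grid of colors and pushing each one through the transform. In a sketch - applyTransform here is a hypothetical stand-in for whatever does the work (OCIO, your shader port, etc.), not an actual OpenColorIO call:

        #include <array>
        #include <vector>

        using Color = std::array<float, 3>;

        std::vector<Color> bakeLut3D(int size, Color (*applyTransform)(const Color&))
        {
            std::vector<Color> lut;
            lut.reserve(static_cast<size_t>(size) * size * size);
            for (int b = 0; b < size; ++b)
                for (int g = 0; g < size; ++g)
                    for (int r = 0; r < size; ++r)
                    {
                        // Grid coordinates in [0,1]; remap these into ACESproxy (or whatever
                        // bounded space you pre-exposed your HDR input into) before transforming.
                        Color in { r / float(size - 1), g / float(size - 1), b / float(size - 1) };
                        lut.push_back(applyTransform(in));
                    }
            return lut; // store at 10/12-bit or 16-bit float precision, not 8-bit
        }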
  12. 1. What format are your textures? Is it possible to pack them together into different channels? (Off the top of my head, velocity + density is only 3 channels for 2D, and 4 for 3D?)

    2. Don't think in terms of where you're writing, think of where you're reading (gather vs scatter). If you want to write 2 pixels to the right, then sample 2 pixels to the left (oversimplified, but I hope that gets the point across - see the snippet below). Pixel shaders are always in "local space", i.e. they define the write position, so you need to work backwards from that.

    3. If you can't directly store the data in a single texture, you can try various packing techniques. You can attempt to squeeze data down so that it does fit into channels, you can try atlasing the textures, or you can do it in multiple passes if it's a fast enough computation.

    It would also help if we knew what you were using these for (particle physics? fluid sims?) as we can suggest a few things within the context of what you're doing.
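    The gather-vs-scatter point in code form - plain C++ arrays standing in for textures, just to show the read-from-the-left formulation:

        #include <vector>

        // A pixel shader can't "write two pixels to the right", but each output pixel
        // can read from two pixels to its left, which produces the same shift.
        std::vector<float> shiftRightByTwo(const std::vector<float>& src)
        {
            std::vector<float> dst(src.size(), 0.0f);
            for (size_t x = 0; x < dst.size(); ++x)   // "x" plays the role of the write position
            {
                // Gather: read from (x - 2) instead of scattering to (x + 2).
                if (x >= 2)
                    dst[x] = src[x - 2];
            }
            return dst;
        }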
  13. How do I handle transparency

    Like Yourself said, it's actually very common to have separate rendering lists. In CryEngine, we have separate rendering lists that the renderer processes in a specific order. When we want to draw something we add it to the respective list (general/opaque, transparent, etc). You can also batch shaders and textures this way if you know they won't change between draws.

    What I normally do is quite similar: I separate my draw calls from my models. For each model, I go over every material and submit a draw item to my renderer. The draw item contains information like which material to use, which mesh, the start vertex and element count to draw, etc. The renderer sorts these items by material attributes (transparency, blend states, etc) before rendering and then draws them - something like the sketch below. If your model has a transparent part and an opaque part, then that would be two draw items submitted to the renderer and drawn at the appropriate times.
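    A bare-bones sketch of that - the fields and the sort ordering are my own guesses for illustration, not CryEngine's actual structures:

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        struct DrawItem
        {
            uint32_t materialId = 0;
            uint32_t meshId = 0;
            uint32_t startVertex = 0;
            uint32_t elementCount = 0;
            bool transparent = false;
            float viewDepth = 0.0f; // used to sort transparents back-to-front
        };

        void sortDrawItems(std::vector<DrawItem>& items)
        {
            std::sort(items.begin(), items.end(), [](const DrawItem& a, const DrawItem& b)
            {
                // Opaque items first, grouped by material to minimize state changes;
                // transparent items last, ordered back-to-front.
                if (a.transparent != b.transparent)
                    return !a.transparent;
                if (a.transparent)
                    return a.viewDepth > b.viewDepth;
                return a.materialId < b.materialId;
            });
        }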
  14. How do I handle transparency

    I don't really understand the question. Why do you need to iterate over every piece of geometry in the world? Are you talking about the objects that show up in reflections (i.e. planar reflections)? Or do you mean making sure that objects/materials that are transparent and part of an opaque mesh need to be re-skinned?
  15. Reminds me of the lightsabers in Dark Forces II. Cool. ^_^