Styves

Members
  • Content count

    185

Community Reputation

1792 Excellent

About Styves

  • Rank
    Member

Personal Information

  • Interests
    Art
    Design
    Production
    Programming
  1. Dealing with frustration

    I'm assuming that by "writing my own model format" you mean an actual file format. Correct me if I'm wrong on that, because if so then this whole post is pointless.

    For this I highly suggest that you don't start by writing formats, especially if, as you said, you consider yourself a beginner. Model formats are not a simple thing to write up. You typically write a format after you know what requirements and data it will support, not the other way around. That is, you should already know how bone animations work in your system before you write the format, which is what you're now realizing and struggling with. If you do things in this order, you'll rip open Pandora's box and realize there's far more inside it than you were aware of, and you'll have to tackle all of it before you can finish your initial goal of writing the format. Since your goal was the format, you're now staring at a tower of obstacles, and no doubt they'll trample your motivation.

    You need to focus on smaller goals with smaller complexity. A model format involves a lot of different data (textures/skeletons/geometry/etc.) and is something you can only design up front if you have in-depth knowledge and experience. One approach you can take to get to a format, as a beginner, is to start by loading the assimp model directly and getting it running in your engine. Do it in whatever way works, dirty or not, it doesn't really matter. Just focus on learning and having fun getting things up onto the screen. Focus on each element separately, and eventually you'll have a complete setup. Once you've got that setup you can look back on it and examine how it works, which will give you a good idea of what your own format can/might look like. At that point you can write the converter and replace the assimp code with your new format.

    TLDR: A "model format" is too big and broad/abstract a scope for your initial goal. Break it down into small, reachable goals, like "I'm going to write a format that has nothing but geometry", and then take small steps: "I'm adding textures", "I'm adding bone data", "I'm adding animation data", etc., and focus on each of those steps as a separate goal. Don't even think about "I want to write a model format"; just think of the small goals and eventually your model format will come together, and you'll feel a little more accomplished and a bit less overwhelmed.
  2. DX11 Set Lighting according to Time of the day

    You can use longitude/latitude to describe the trajectory of the sun, and use the time of day to determine which point along the latitude the sun is currently at. Longitude controls which direction the sun is actually coming from (east, north, west, south, etc.), and latitude controls the height of the sun. That would look something like this:

        x = cos(long) * sin(lat);
        y = sin(long) * sin(lat);
        z = cos(lat);

    If you want to simplify it, you can always assume a longitude of 0 and only focus on the latitude, which will then just control the height of the sun.
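    As a minimal illustration, here's a small C++ sketch of that mapping. The function name and the assumption that the hour of day maps linearly onto the latitude angle (sunrise at 6h, sunset at 18h) are mine, purely for illustration:

        #include <cmath>

        struct Vec3 { float x, y, z; };

        // Hypothetical helper: turns a time of day (in hours) and a fixed longitude
        // into a normalized sun direction, using the spherical mapping above.
        Vec3 SunDirectionFromTime(float hours, float longitudeRad)
        {
            const float pi  = 3.14159265f;
            // Assumed mapping: 6h -> 0 (horizon), 12h -> pi/2 (zenith), 18h -> pi (horizon).
            const float lat = ((hours - 6.0f) / 12.0f) * pi;

            Vec3 dir;
            dir.x = std::cos(longitudeRad) * std::sin(lat);
            dir.y = std::sin(longitudeRad) * std::sin(lat);
            dir.z = std::cos(lat);
            return dir; // already unit length
        }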
  3. MJP made a really nice post about this and also wrote a sample demo for people to try. That should be enough to get you going.

    Each of your 3D textures likely has a world bounds, so you can render those bounds as a deferred volume (box, whatever) and apply it to the screen using deferred shading, adding up the results of all your 3D volumes (or using blending so that they overwrite what's below). This way none of your geometry shaders need to know which volumes affect them, you don't need to scan for the volumes each object needs to be lit by, and you don't have to worry about excessive overdraw (where a light might relight the same pixel many times).

    If you don't have a deferred shading pipeline in your engine or aren't sure how to add one, then you can also do an old-school forward shading pipeline where you render the geometry several times with additive blending over a base pass (which has ambient + sunlight/global lighting), using a new set of volume textures in each pass. Heavier on geometry load if you have heavy scenes, but simple to implement (just iterate over your geometry draw for every N lights/volumes and set the appropriate data and blend state); a rough sketch of this is below.

    However, if you can keep your texture count within whatever limit you set (either from hardware or a decision on your part) you can just use the single pass. I assume each object likely won't have more than 1-3 volumes affecting it, so if that assumption is correct and you've got a 32-texture limit, you should be able to handle several volumes. Even with 9 textures per volume that's 3 volumes per object + 5 textures for other stuff (diffuse, spec/gloss/roughness, normals, emissive - all of which can be combined in certain ways to reduce the number of samples and textures you actually need; the above can be combined down to 2 textures if you know what you're doing). I think in this case you should be just fine. Cutting down to 5-gaussian SG lighting gives you even more room for volumes and textures.
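    To make the multi-pass forward idea above concrete, here's a pseudocode-level C++ sketch. Every type and function in it (Bounds, Volume, DrawMeshWithShader, the shader names) is a stand-in rather than a real engine API; the point is only the pass structure:

        #include <cstdio>
        #include <string>
        #include <vector>

        struct Bounds { bool Intersects(const Bounds&) const { return true; /* stub */ } };
        struct Volume { Bounds bounds; int textureSet; };
        struct Mesh   { Bounds worldBounds; std::string name; };

        // Stand-in for binding the blend state, shader and volume textures, then drawing.
        void DrawMeshWithShader(const Mesh& m, const char* shader, bool additive, int textureSet = -1)
        {
            std::printf("draw %s shader=%s additive=%d textures=%d\n",
                        m.name.c_str(), shader, additive ? 1 : 0, textureSet);
        }

        void DrawLitObject(const Mesh& mesh, const std::vector<Volume>& volumes)
        {
            // Base pass: opaque, ambient + sunlight/global lighting.
            DrawMeshWithShader(mesh, "base_ambient_sunlight", /*additive=*/false);

            // One additive pass per volume that actually overlaps this object.
            for (const Volume& v : volumes)
            {
                if (!v.bounds.Intersects(mesh.worldBounds))
                    continue;
                DrawMeshWithShader(mesh, "volume_lighting", /*additive=*/true, v.textureSet);
            }
        }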
  4. Two ideas come to mind: use Spherical Gaussians instead of Spherical Harmonics, which can knock you down to 5 textures and still give some pretty acceptable results, or use deferred shading, in which case your texture limit concerns disappear entirely. You can definitely store 2 sets of data inside a single texture by packing the data. However, you'll lose all ability to interpolate your results, and with 3D textures that I imagine are somewhat coarse, interpolation is not something you'll want to lose (see the packing sketch below).
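    For the packing point, here's a tiny self-contained C++ example of cramming two [0,1] values into a single 16-bit channel (the helper names are made up). Once values are packed like this, hardware filtering blends the packed integers into garbage, which is why interpolation is lost:

        #include <cmath>
        #include <cstdint>

        // Pack two [0,1] values into one 16-bit channel (8 bits each).
        uint16_t Pack2x8(float a, float b)
        {
            const uint16_t ai = static_cast<uint16_t>(std::round(a * 255.0f));
            const uint16_t bi = static_cast<uint16_t>(std::round(b * 255.0f));
            return static_cast<uint16_t>((ai << 8) | bi);
        }

        // Recover both values; this must happen in the shader *before* any filtering.
        void Unpack2x8(uint16_t packed, float& a, float& b)
        {
            a = ((packed >> 8) & 0xFF) / 255.0f;
            b = (packed & 0xFF) / 255.0f;
        }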
  5. Stop worrying about your code design, you've seen first hand what it does to your projects. Code is about functionality, not design. You're not painting here. Too often I've seen people rewrite entire subsystems because they were "ugly" only to make things worse in the end, or throw them away and rewrite them from scratch because they "don't look nice" (losing features in the process). You're not working on some kind of enterprise software or in a large team, so just let it go. You can refactor later if you really need to.

    I'll let you in on a little secret: the reason your code is so painful is because you're trying so hard to conform to design patterns or some kind of overarching set of "rules" that you think you're supposed to be following because the faceless coders of the internet say that's what good coding is. The link ddengster posted is a good one and proposes a nice solution to approaching code. Definitely give that a read because I think it'll help a lot. This video should also give you an idea of why you're struggling (ignoring the somewhat clickbait-y title).

    I don't know of any platformer engines online off the top of my head (you could look into Löve games, but those are in Lua). It might be more complex (and FPS-centric), but idTech 4 (Doom 3) is a good one to look at. According to things like the SOLID principles it's actually pretty awful, but in practice it's very easy to follow. Similarly, Steam's SpaceWar example is simple in the same fashion, and frankly it's better than what I've seen most people write considering its simplicity and clear violation of many coding principles. It might be a better option as it's far smaller and 2D. You can get the code for SpaceWar with SteamWorks.
  6. A lot of code bases opt for a change in terminology and name their interfaces "abstract", e.g. my_abstract_class.cpp. That's one way, anyway. Edit: you'd probably want to use my_class_interface though, since you're using an interface and it would be good to keep the distinction (Apoch makes a good point about the interchanging of terms being confusing, best not to contribute to that). In my professional work our interface files are prefixed with I (we have PascalCase filenames). Not something I'm fond of, but that's how they do it. In my personal code I don't prefix my interfaces and instead go for clear naming. If something is ambiguous I'll tackle it when it comes up (rename derived classes, change the name to something clearer, refactor the architecture if needed). In which case I'd name the files just like everything else.
  7. OpenGL Quake3 ambient lighting

    The "get ambient from floor" only applies to Quake 2 (and possibly Quake 1). Quake 3 uses a 3D grid, each cell contained 3 values: Direction, Ambient light color, and Direct light color. This data is filled during lightmap baking. The grid is bound to the levels bounding box and divided by some value (default is xyz: 64,64,128). That is, each cell ranges 64 units in XY and 128 on Z (which is up in Quake). Modern games do largely the same thing, however they store the results in different forms (spherical harmonics, whatever else). JoeJ covers this in the post above. For ambient diffuse, a variation of the things JoeJ can be seen in modern engines, each with their own tradeoffs. For ambient reflections, cubemaps are used, often times hand-placed.
  8. Depth in water shader

    You're using Z/W. You should be using either linear depth, or depthDropoffScale should be proportional (getting smaller with distance) to account for the non-linear distribution of your depth values. I.e. use W/FarDist (a small conversion sketch is below).
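    Assuming a standard D3D-style perspective projection with depth stored as Z/W in [0,1], recovering linear depth normalized by the far plane looks roughly like this (the function name is mine):

        // Hardware depth (Z/W) -> linear view-space depth, normalized by the far plane (W / FarDist).
        float LinearDepth01(float zOverW, float nearDist, float farDist)
        {
            const float viewZ = (nearDist * farDist) / (farDist - zOverW * (farDist - nearDist));
            return viewZ / farDist; // 0 at the camera, 1 at the far plane
        }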
  9. ACES and dynamic range

    I think the approx is cool, but it only approximates the general curve of the output, none of the other "features" (like red adjustment, etc).

    If you properly integrate something like OpenColorIO, you can spit out a simple LUT for the entire ODT/RRT transform and get great results pretty cheap (you should pre-expose and map your HDR input into 12-bit ACESproxy space though, so you have a proper range for your LUT input, since it's bounded). You can also do this with a shader. I wrote an example on Shadertoy that performs the ODT/RRT transforms on an LUT; you can try pulling it and baking out the LUT from the first shader.

    FYI, if you do this you should probably store your LUT in something better than an 8-bit texture. The professional apps usually use either 10 or 12 bit, occasionally 16-bit float. 16^3 might not be enough for the ACES transform (I recall having issues at that res), so I'd suggest at least 32^3 or 64^3 if you can afford it.

    I'd suggest the OpenColorIO route for generating your LUT though, since it saves you a ton of work in the long run (just send OCIO some array of colors and a transform name and it'll spit out the result with all the proper LUTs applied), and it's much, much cleaner than carrying around a bunch of ACES code.
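    If it helps, baking the LUT boils down to generating the identity lattice of input colors, pushing each one through your transform (OCIO processor, the Shadertoy ODT/RRT code, whatever), and storing the results. A hedged C++ sketch, where ApplyDisplayTransform is a placeholder for that transform rather than a real API:

        #include <cstddef>
        #include <vector>

        struct Color { float r, g, b; };

        // Placeholder: whatever ODT/RRT or OCIO transform you're baking.
        Color ApplyDisplayTransform(const Color& c);

        std::vector<Color> BakeLut3D(int lutSize) // 32 or 64 suggested above
        {
            std::vector<Color> lut;
            lut.reserve(static_cast<std::size_t>(lutSize) * lutSize * lutSize);
            for (int b = 0; b < lutSize; ++b)
                for (int g = 0; g < lutSize; ++g)
                    for (int r = 0; r < lutSize; ++r)
                    {
                        const Color in = { r / float(lutSize - 1),
                                           g / float(lutSize - 1),
                                           b / float(lutSize - 1) };
                        lut.push_back(ApplyDisplayTransform(in));
                    }
            return lut; // upload as a lutSize^3 volume texture, ideally 10/12/16-bit
        }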
  10. 1. What format are your textures? Is it possible to pack them together into different channels? (Off the top of my head, velocity + density is only 3 channels for 2D, and 4 for 3D?)

    2. Don't think in terms of where you're writing, think of where you're reading (gather vs scatter). If you want to write 2 pixels to the right, then sample two pixels to the left (oversimplified, but I hope that gets the point across). Pixel shaders are always in "local space", i.e. they define the write position, so you need to work backwards from that. There's a small gather example below.

    3. If you can't directly store the data in a single texture, you can try various packing techniques. You can attempt to squeeze data down so that it does fit into channels, you can try atlasing the textures, or you can do it in multiple passes if it's a fast enough computation.

    It would also help if we knew what you were using these for (particle physics? fluid sims?) as we could suggest a few things within the context of what you're doing.
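    Here's a tiny CPU-side C++ example of the gather idea from point 2, shifting an image two texels to the right by reading two texels to the left (the function name is made up; a pixel shader works the same way, just per-invocation):

        #include <vector>

        std::vector<float> ShiftRightByTwo(const std::vector<float>& src, int width, int height)
        {
            std::vector<float> dst(src.size(), 0.0f);
            for (int y = 0; y < height; ++y)
                for (int x = 0; x < width; ++x)
                {
                    const int srcX = x - 2;          // gather: read from two texels to the left
                    if (srcX >= 0)
                        dst[y * width + x] = src[y * width + srcX];
                }
            return dst;
        }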
  11. How do I handle transparency

    Like Yourself said, it's actually very common to have separate rendering lists. In CryEngine, we have separate rendering lists that the renderer processes in a specific order. When we want to draw something, we add it to the respective list (general/opaque, transparent, etc). You can also batch shaders and textures this way if you know they won't change between draws.

    What I normally do is quite similar: I separate my draw calls from my models. For each model, I go over every material and submit a draw item to my renderer. The draw item contains information like which material to use, what mesh, the start vertex and element count to draw, etc. The renderer just sorts these items by material attributes (transparent, blend states, etc) before rendering and then draws them (a rough sketch is below). If your model has a transparent part and an opaque part, then that's two draw items submitted to the renderer and drawn at the appropriate times.
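    As a rough C++ sketch of that draw-item idea (all names here are illustrative, not CryEngine code):

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        struct DrawItem
        {
            uint32_t materialId;
            uint32_t meshId;
            uint32_t startVertex;
            uint32_t vertexCount;
            bool     transparent;
        };

        void SortAndDraw(std::vector<DrawItem>& items)
        {
            std::sort(items.begin(), items.end(),
                      [](const DrawItem& a, const DrawItem& b)
                      {
                          if (a.transparent != b.transparent)
                              return !a.transparent;           // opaque first, transparent last
                          return a.materialId < b.materialId;  // then batch by material
                      });

            for (const DrawItem& item : items)
            {
                // Bind the material/blend state for item.materialId, then draw
                // [item.startVertex, item.startVertex + item.vertexCount) from item.meshId.
            }
        }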
  12. How do I handle transparency

    I don't really understand the question. Why do you need to iterate over every geometry in the world? Are you talking about the objects that show up in reflections (i.e. planar reflections)? Or do you mean making sure that objects/materials that are transparent but part of an opaque mesh get reskinned?
  13. Reminds me of the lightsabers in Dark Forces II. Cool. ^_^
  14. Writing your own rasterizer isn't really going to solve your problem, since you won't be utilizing your GPU at all (or if you use compute, not as efficiently as you could be). Just leave that stuff to the GPU guys, they know what they're doing. :)

    Anyway, do you really need such precise culling? I mean, are you absolutely sure you're GPU bound? Going into such detail just to cull a few triangles might not be worth it, and could hurt your performance rather than help if you're actually CPU bound, since modern GPUs prefer to eat big chunks of data more than they like issuing a draw call for each individual triangle. If you have bounding box culling on your objects, and frustum culling, then I think that's all you'll really need unless you're writing a big AAA title with very high scene complexity.

    Just bear in mind that Quake levels were built for some different hardware constraints, so you should probably break up the obj model you have into small sections to avoid processing the entire mesh in one chunk, so that you can leverage those two culling systems a little more.

    That said, if you really want to have some proper occlusion culling for triangles, you can either check out the Frostbite approach (it's quite complicated iirc), or try implementing a simple Hi-Z culling system using geometry shaders (build a simple quad-tree out of your z-buffer and do quad-based culling on each triangle in the geometry shader). The latter is simpler to implement and I've had pretty good results with it; a rough sketch of the Hi-Z reduction is below.
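    The core of the Hi-Z idea is just a max-reduction of the depth buffer. Here's a CPU-side C++ sketch of building one pyramid level (in practice you'd do this on the GPU, and the function name is made up):

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        // Each texel of the next level keeps the farthest depth of its 2x2 block, so a
        // triangle/quad can be rejected if its nearest depth is behind the stored value.
        std::vector<float> BuildHiZLevel(const std::vector<float>& depth, int width, int height)
        {
            const int outW = std::max(width / 2, 1);
            const int outH = std::max(height / 2, 1);
            std::vector<float> out(static_cast<std::size_t>(outW) * outH);

            for (int y = 0; y < outH; ++y)
                for (int x = 0; x < outW; ++x)
                {
                    const int x0 = std::min(x * 2, width - 1),  x1 = std::min(x * 2 + 1, width - 1);
                    const int y0 = std::min(y * 2, height - 1), y1 = std::min(y * 2 + 1, height - 1);
                    out[y * outW + x] = std::max(std::max(depth[y0 * width + x0], depth[y0 * width + x1]),
                                                 std::max(depth[y1 * width + x0], depth[y1 * width + x1]));
                }
            return out;
        }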
  15. xD I was specifically thinking of Crysis 1, but yeah lol, this also shows a method you could try.