Styves

Members

  • Content count: 183
  • Joined
  • Last visited

Community Reputation: 1790 Excellent

About Styves

  • Rank: Member
  1. MJP made a really nice post about this and also wrote a sample demo for people to try. That should be enough to get you going. Each of your 3D textures likely has a world-space bounds, so you can render those bounds as a deferred volume (a box, whatever) and apply it to the screen using deferred shading, adding up the results of all your 3D volumes (or using blending so that they overwrite what's below). This way none of your geometry shaders need to know which volumes affect them, you don't need to scan for the volumes each object needs to be lit by, and you don't have to worry about overdraw.
     If you don't have a deferred shading pipeline in your engine, or aren't sure how to add one, you can also do an old-school forward shading pipeline where you render the geometry several times with additive blending over a base pass (which has ambient + sunlight/global lighting), binding a new set of volume textures in each pass. It's heavier on geometry load in heavy scenes, but simple to implement: just iterate over your geometry draws for every N lights/volumes and set the appropriate data and blend state. However, if you can keep your texture count within whatever limit you set (either from hardware or by your own decision), you can just use a single pass.
     I assume each object likely won't have more than 1-3 volumes affecting it, so if that assumption is correct and you've got a 32-texture limit, you should be able to handle several volumes. Even at 9 textures per volume that's 3 volumes per object plus 5 textures for other stuff (diffuse, spec/gloss/roughness, normals, emissive - all of which can be combined in various ways to reduce the number of samples and textures you actually need; the above can be packed down to 2 textures if you know what you're doing). I think in this case you should be just fine. Cutting down to 5-gaussian SG lighting frees up even more texture slots, buying you more volumes per object.
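
A minimal sketch of the multi-pass forward fallback described above, assuming a hypothetical engine API (BindVolumeTextures, DrawGeometry, SetAdditiveBlending are stand-ins, not real calls):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical engine types/functions standing in for your own.
struct Mesh;
struct LightVolume;   // one set of SH/SG 3D textures plus its world bounds

void BindVolumeTextures(const LightVolume* const* volumes, std::size_t count);
void DrawGeometry(const Mesh& mesh);
void SetAdditiveBlending(bool enabled);

// Base pass (ambient + global lighting), then one additive pass per batch of N volumes.
void ForwardLitDraw(const Mesh& mesh,
                    const std::vector<const LightVolume*>& volumes,
                    std::size_t maxVolumesPerPass)
{
    SetAdditiveBlending(false);
    DrawGeometry(mesh);                              // base pass

    SetAdditiveBlending(true);
    for (std::size_t i = 0; i < volumes.size(); i += maxVolumesPerPass)
    {
        const std::size_t count = std::min(maxVolumesPerPass, volumes.size() - i);
        BindVolumeTextures(&volumes[i], count);      // new set of volume textures per pass
        DrawGeometry(mesh);                          // additive pass for this batch
    }
}
```
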
  2. Two ideas come to mind. First, use Spherical Gaussians instead of Spherical Harmonics: you can knock your texture count down to 5 and still get some pretty acceptable results. Second, use deferred shading, and your texture limit concerns disappear entirely. You can definitely store 2 sets of data inside a single texture by packing the data; however, you'll lose all ability to interpolate your results, and with 3D textures that I imagine are somewhat coarse, interpolation is not something you'll want to lose.
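
For reference, a single spherical gaussian lobe is just an exponential falloff around an axis; a minimal sketch in plain C++ (names are mine):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// One spherical gaussian lobe: amplitude * exp(sharpness * (dot(axis, dir) - 1)).
// A 5-lobe setup stores 5 of these per probe cell (amplitude per RGB channel in practice).
struct SGLobe
{
    Vec3  axis;        // unit direction of the lobe
    float sharpness;   // lambda: higher = tighter lobe
    float amplitude;   // per color channel in a real implementation
};

float EvaluateSG(const SGLobe& sg, const Vec3& dir)   // dir must be normalized
{
    return sg.amplitude * std::exp(sg.sharpness * (Dot(sg.axis, dir) - 1.0f));
}
```
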
  3. Stop worrying about your code design - you've seen firsthand what it does to your projects. Code is about functionality, not design; you're not painting here. Too often I've seen people rewrite entire subsystems because they were "ugly", only to make things worse in the end, or throw it all away and rewrite it from scratch because it "doesn't look nice" (losing features in the process). You're not working on some kind of enterprise software or in a large team, so just let it go. You can refactor later if you really need to.
     I'll let you in on a little secret: the reason your code is so painful is that you're trying so hard to conform to design patterns or some overarching set of "rules" that you think you're supposed to be following, because the faceless coders of the internet say that's what good coding is. The link ddengster posted is a good one and proposes a nice approach to code - definitely give that a read, because I think it'll help a lot. This video should also give you an idea of why you're struggling (ignoring the somewhat clickbait-y title).
     I don't know of any platformer engines online off the top of my head (you could look into Löve games, but those are in Lua). It might be more complex (and FPS-centric), but idTech 4 (Doom 3) is a good one to look at. According to things like the SOLID principles it's actually pretty awful, but in practice it's very easy to follow. Similarly, Steam's SpaceWar example is simple in the same fashion, and frankly it's better than what I've seen most people write, considering its simplicity and its clear violation of many coding principles. It might be a better option, as it's far smaller and 2D. You can get the SpaceWar code with the Steamworks SDK.
  4. A lot of code bases opt for a change in terminology and name their interfaces "abstract", e.g. my_abstract_class.cpp. That's one way, anyway. Edit: you'd probably want to use my_class_interface though, since you're using an interface and it would be good to keep the distinction (Apoch makes a good point about the interchanging of terms being confusing; best not to contribute to that). In my professional work our interface files are prefixed with I (we have PascalCase filenames). Not something I'm fond of, but that's how they do it. In my personal code I don't prefix my interfaces and instead go for clear naming. If something is ambiguous I'll tackle it when it comes up (rename the derived classes, change the name to something clearer, refactor the architecture if needed), in which case I'd name the files just like everything else.
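
As a tiny illustration of the "clear naming, no prefix" approach (purely hypothetical names):

```cpp
// renderer.h - the pure-virtual "interface"; no I prefix, no "abstract" in the name.
class Renderer
{
public:
    virtual ~Renderer() = default;
    virtual void BeginFrame() = 0;
    virtual void EndFrame() = 0;
};

// vulkan_renderer.h - the concrete class carries the distinguishing name,
// so the ambiguity a prefix would have resolved rarely comes up in practice.
class VulkanRenderer final : public Renderer
{
public:
    void BeginFrame() override {}
    void EndFrame() override {}
};
```
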
  5. OpenGL

    The "get ambient from floor" only applies to Quake 2 (and possibly Quake 1). Quake 3 uses a 3D grid, each cell contained 3 values: Direction, Ambient light color, and Direct light color. This data is filled during lightmap baking. The grid is bound to the levels bounding box and divided by some value (default is xyz: 64,64,128). That is, each cell ranges 64 units in XY and 128 on Z (which is up in Quake). Modern games do largely the same thing, however they store the results in different forms (spherical harmonics, whatever else). JoeJ covers this in the post above. For ambient diffuse, a variation of the things JoeJ can be seen in modern engines, each with their own tradeoffs. For ambient reflections, cubemaps are used, often times hand-placed.
  6. You're using Z/W. You should either use linear depth, or make depthDropoffScale proportional (getting smaller with distance) to account for the non-linear distribution of your depth values - i.e. use W/FarDist.
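
For example, hardware depth can be pulled back to linear view-space depth like this; a minimal sketch assuming a standard (non-reversed) projection with depth in [0, 1]:

```cpp
// Recover linear view-space depth from a hardware depth value d = z/w.
float LinearizeDepth(float d, float zNear, float zFar)
{
    return (zNear * zFar) / (zFar - d * (zFar - zNear));
}

// Drive the dropoff from linear depth (i.e. W / FarDist), not from z/w directly.
float DepthDropoff(float d, float zNear, float zFar, float dropoffScale)
{
    return (LinearizeDepth(d, zNear, zFar) / zFar) * dropoffScale;
}
```
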
  7. I think the approximation is cool, but it only approximates the general curve of the output, none of the other "features" (like the red adjustment, etc.). If you properly integrate something like OpenColorIO, you can spit out a simple LUT for the entire ODT/RRT transform and get great results pretty cheaply (you should pre-expose and map your HDR input into 12-bit ACESproxy space first though, so you have a proper range for your LUT input, since it's bounded). You can also do this with a shader: I wrote an example on Shadertoy that performs the ODT/RRT transforms on an LUT; you can try pulling it and baking out the LUT from the first shader.
     FYI, if you do this you should probably store your LUT in something better than an 8-bit texture - professional apps usually use 10 or 12 bits, occasionally 16-bit float. 16^3 might not be enough for the ACES transform (I recall having issues at that resolution), so I'd suggest at least 32^3, or 64^3 if you can afford it.
     I'd suggest the OpenColorIO route for generating your LUT though, since it saves you a ton of work in the long run (just send OCIO an array of colors and a transform name and it'll spit out the result with all the proper LUTs applied), and it's much, much cleaner than carrying around a bunch of ACES code.
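
If you go the OpenColorIO route, the bake is roughly the loop below; this is a sketch against the OpenColorIO v2 C++ API, and the config filename and color space names are placeholders - they depend entirely on the ACES config you load:

```cpp
#include <OpenColorIO/OpenColorIO.h>
#include <cstddef>
#include <vector>

namespace OCIO = OCIO_NAMESPACE;

// Push every lattice point of a size^3 RGB LUT through an OCIO transform.
// The LUT input should already be in your bounded shaper space (e.g. ACESproxy).
std::vector<float> BakeLut(int size)
{
    OCIO::ConstConfigRcPtr config = OCIO::Config::CreateFromFile("aces_config.ocio");
    OCIO::ConstProcessorRcPtr processor = config->getProcessor("lin_ap1", "Output - sRGB");
    OCIO::ConstCPUProcessorRcPtr cpu = processor->getDefaultCPUProcessor();

    std::vector<float> lut(static_cast<std::size_t>(size) * size * size * 3);
    float* rgb = lut.data();
    for (int b = 0; b < size; ++b)
        for (int g = 0; g < size; ++g)
            for (int r = 0; r < size; ++r)
            {
                rgb[0] = r / float(size - 1);
                rgb[1] = g / float(size - 1);
                rgb[2] = b / float(size - 1);
                cpu->applyRGB(rgb);   // apply the full transform chain in place
                rgb += 3;
            }
    return lut;
}
```
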
  8. 1. What format are your textures? Is it possible to pack them together into different channels? (Off the top of my head, velocity + density is only 3 channels for 2D and 4 for 3D.)
     2. Don't think in terms of where you're writing; think of where you're reading (gather vs. scatter). If you want to write two pixels to the right, then sample two pixels to the left (oversimplified, but I hope that gets the point across). Pixel shaders are always in "local space", i.e. they define the write position, so you need to work backwards from that.
     3. If you can't directly store the data in a single texture, you can try various packing techniques: squeeze the data down so that it does fit into the channels, atlas the textures, or do it in multiple passes if the computation is fast enough.
     It would also help if we knew what you were using these for (particle physics? fluid sims?), as we could suggest a few things within the context of what you're doing.
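
To make the gather-vs-scatter point concrete, here's a sketch of the usual trick for advection-style writes, written as C++ pseudocode for a pixel shader (the types and sampling helpers are stand-ins, not a real API):

```cpp
struct Vec2 { float x, y; };
struct Vec4 { float x, y, z, w; };

// Hypothetical read-only views of the input textures.
Vec2 LoadVelocity(int x, int y);              // velocity field
Vec4 SampleFieldBilinear(float x, float y);   // quantity being advected (density, etc.)

// Runs once per OUTPUT pixel. A pixel shader can't "push" a value two pixels to the
// right (scatter), so each output pixel instead pulls from wherever its value came
// from (gather) by stepping backwards along the velocity.
Vec4 AdvectOutputPixel(int x, int y, float dt)
{
    const Vec2  vel  = LoadVelocity(x, y);
    const float srcX = float(x) - vel.x * dt;
    const float srcY = float(y) - vel.y * dt;
    return SampleFieldBilinear(srcX, srcY);
}
```
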
  9. Like Yourself said, it's actually very common to have separate rendering lists. In CryEngine we have separate rendering lists that the renderer processes in a specific order; when we want to draw something, we add it to the respective list (general/opaque, transparent, etc.). You can also batch shaders and textures this way if you know they won't change between draws.
     What I normally do is quite similar: I separate my draw calls from my models. For each model I go over every material and submit a draw item to my renderer. The draw item contains information like which material to use, which mesh, the start vertex and element count to draw, and so on. The renderer just sorts these items by material attributes (transparency, blend states, etc.) before rendering and then draws them. If your model has a transparent part and an opaque part, that's two draw items submitted to the renderer and drawn at the appropriate times.
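
A stripped-down version of that draw-item idea (all names here are hypothetical, not from CryEngine or anywhere else):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct DrawItem
{
    std::uint64_t sortKey;   // packed: layer (opaque/transparent), blend state, material id...
    int material;
    int mesh;
    int firstVertex;
    int vertexCount;
};

// One item per (model, material) pair, submitted during scene traversal.
void SubmitDrawItem(std::vector<DrawItem>& queue, const DrawItem& item)
{
    queue.push_back(item);
}

// Sort so opaque items come first and state changes are minimized, then issue the draws.
void FlushDrawItems(std::vector<DrawItem>& queue)
{
    std::sort(queue.begin(), queue.end(),
              [](const DrawItem& a, const DrawItem& b) { return a.sortKey < b.sortKey; });

    for (const DrawItem& item : queue)
    {
        // Hypothetical renderer back end:
        // SetMaterial(item.material);
        // SetMesh(item.mesh);
        // Draw(item.firstVertex, item.vertexCount);
        (void)item;
    }
    queue.clear();
}
```
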
  10. I don't really understand the question. Why do you need to iterate over every geometry in the world? Are you talking about the objects that show up in reflections (i.e. planar reflections)? Or do you mean making sure that objects/materials that are transparent and part of an opaque mesh need to be reskinned?
  11. Reminds me of the lightsabers in Dark Forces II. Cool. ^_^
  12. Writing your own rasterizer isn't really going to solve your problem, since you won't be utilizing your GPU at all (or, if you use compute, not as efficiently as you could be). Just leave that stuff to the GPU guys, they know what they're doing. :)
     Anyway, do you really need such precise culling? I mean, are you absolutely sure you're GPU bound? Going into such detail just to cull a few triangles might not be worth it, and could hurt your performance rather than help if you're actually CPU bound, since modern GPUs would much rather eat big chunks of data than take a draw call for each individual triangle. If you have bounding box culling on your objects, plus frustum culling, I think that's all you'll really need unless you're writing a big AAA title with very high scene complexity.
     Just bear in mind that Quake levels were built for different hardware constraints, so you should probably break the obj model you have into small sections to avoid processing the entire mesh in one chunk, so that you can leverage those two culling systems a little more.
     That said, if you really want proper occlusion culling for triangles, you can either check out the Frostbite approach (it's quite complicated, IIRC) or implement a simple Hi-Z culling system using geometry shaders (build a simple quad-tree out of your Z-buffer and do quad-based culling on each triangle in the geometry shader). The latter is simpler to implement, and I've had pretty good results with it.
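
For reference, the core of a Hi-Z test is tiny. Here's a simplified sketch written as plain C++ (the real thing lives in a geometry or compute shader, and the depth-pyramid accessors below are hypothetical):

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical accessors into a Z pyramid where each texel stores the MAX (farthest)
// depth of the pixels it covers. Depth convention assumed: 0 = near, 1 = far.
float HiZMaxDepth(int mip, int x, int y);
int   HiZMipCount();

// Conservatively test a screen-space rect (in pixels) plus the primitive's nearest depth.
// Returns false only if the primitive is definitely hidden behind existing geometry.
bool HiZVisible(float minX, float minY, float maxX, float maxY, float nearestDepth)
{
    // Pick a mip where the rect covers only a handful of texels.
    const float longestEdge = std::max(maxX - minX, maxY - minY);
    const int   mip = std::clamp(int(std::ceil(std::log2(std::max(longestEdge, 1.0f)))),
                                 0, HiZMipCount() - 1);

    const int x0 = int(minX) >> mip, x1 = int(maxX) >> mip;
    const int y0 = int(minY) >> mip, y1 = int(maxY) >> mip;

    float farthestOccluder = 0.0f;
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            farthestOccluder = std::max(farthestOccluder, HiZMaxDepth(mip, x, y));

    // Visible unless the nearest point of the primitive is behind every occluder.
    return nearestDepth <= farthestOccluder;
}
```
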
  13. xD I was specifically thinking of Crysis 1, but yeah lol, this also shows a method you could try.
  14. Mirror's Edge had animations that were tailored to look good in first person, which prevents the neck from hitting the camera, etc. In third person these animations look hilarious, but they look great in first person. You can see them look funny in your shadows, though, or in some hacked third-person YouTube videos.
     Other games, like Crysis or Halo, simply render the lower half of the body from the waist down (not the whole body - they either have a separate model built from the third-person model, or they clip the geometry on the fly). You can't look any lower than something like 70-80 degrees in these games, so the lowest you can look still won't show the entire torso. The first-person hands are totally separate and rendered the traditional way (floating hands with a gun). This looks great IMO and is pretty easy to set up - no new animations, and you can do it with the existing third-person model if you clip off the top of the mesh.
     In one game (I forget which) they don't treat the camera as a single rotating point in space, but instead move the camera when it rotates to simulate neck movement, which makes the camera lean forward when looking down, etc. You can apply this as well to avoid neck problems, but it'll change the way camera motion feels.
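
The "camera on a neck pivot" idea from that last paragraph is just an eye offset rotated by the view pitch instead of a fixed eye point; a small sketch (axis conventions and names are mine):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate the local eye offset (0, eyeHeight, eyeForward) around a neck pivot by the
// view pitch. Conventions assumed: +Y up, +Z forward, rotation about +X, so a positive
// pitch tilts the head forward (looking down) and pushes the eye point forward and down.
Vec3 EyePositionFromNeck(const Vec3& neckPos, float eyeHeight, float eyeForward, float pitch)
{
    const float c = std::cos(pitch);
    const float s = std::sin(pitch);
    const Vec3 offset{ 0.0f,
                       eyeHeight * c - eyeForward * s,
                       eyeHeight * s + eyeForward * c };
    return Vec3{ neckPos.x + offset.x, neckPos.y + offset.y, neckPos.z + offset.z };
}
```
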
  15. Calculating bent normals is just an extension of AO calculation where the direction of each sample is averaged along with the occlusion amount.
     The bent normals from Substance Painter seem correct. In any region where there is no AO, the normal won't be bent in any direction, so in tangent space it points straight up along the surface normal. This is why you only see details in the ears.
     I have no idea how Unity handles bent normals, but the usual way is, instead of applying AO as a standard multiplier on the lighting result (which gives that weird muddy look), to use the dot product of the bent normal and the light direction instead. This is how we handle SSDO in CryENGINE, for example.
     I've only ever used a texture-based version of this technique once; in that version the bent normals also contained information from the original tangent-space normal map and were used directly during lighting in place of the original.
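
A minimal sketch of both halves - accumulating the bent normal during the AO bake and using it at lighting time - with naming and conventions that are mine, not Unity's or Substance's:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  Add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  Scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
static float Dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Normalize(Vec3 v)      { return Scale(v, 1.0f / std::sqrt(Dot(v, v))); }

// Baking: average only the UNOCCLUDED sample directions. With zero occlusion every
// direction contributes and the average collapses back onto the surface normal, which
// is why flat, open areas of a tangent-space bent normal map just read as "straight up".
Vec3 AccumulateBentNormal(const Vec3* sampleDirs, const bool* occluded, int count)
{
    Vec3 sum{ 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < count; ++i)
        if (!occluded[i])
            sum = Add(sum, sampleDirs[i]);
    return Normalize(sum);
}

// Lighting: instead of diffuse * saturate(dot(N, L)) * ao (the muddy multiply),
// use the bent normal against the light direction, SSDO-style.
float BentDiffuseTerm(Vec3 bentNormal, Vec3 lightDir)
{
    return std::fmax(Dot(bentNormal, lightDir), 0.0f);
}
```
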