spek

Members
  • Content count: 1368
  • Joined

  • Last visited

Community Reputation: 1240 Excellent

About spek

  • Rank: Contributor
  1. Well first, as for the original question(s): SSBO works, and so does the querying - thank you guys! Got some ambient baked into a tree & probe array, as described in the first posts. That was the good news :) The bad news is that (indeed) it's terribly slow. FPS crumbled from ~50 to ~20. And I'm not even fetching multiple probes for interpolation yet :( Now my laptop isn't a graphical powerhouse, and obviously my methods are most likely less optimized than what the Quantum Break papers do behind the curtains. Maybe they do the ambient pass on a lower resolution as well. Plus I didn't really play with different memory layouts & compressed struct sizes yet. Traversing the tree jumps through 16-byte (vec4) sized structs, with 3 or so jumps. Yet the code to figure out which subcell to access seems a bit complex to me. And reducing my original 388-byte probe to 48 bytes (6 colors per probe) didn't help much either. Then again it's still a huge struct. But all in all... not very promising. Or am I doing something terribly wrong? I can paste the GLSL code if you guys are interested.

     EDIT: Doh. 20 FPS because I was drawing all probes in debug at the same time. Without that the performance actually isn't that bad. Still not 100% convinced, but it's bedtime now hehe.

     @JoeJ I think I'm missing the part where these pre-computed indices are stored... How does surfaceX know it's connected to index 1234? Or, given a certain pixel on screen (knowing its position, normal, and eventually which Tree / Offset it used, all baked into G-Buffers), what tells me this index? A lightmap? A rough sketch of what I imagine is below. Now you mentioned "all static" earlier; note I'd like to use the same data for moving objects / particles as well. And maybe volumetric fog/raymarching, if not too expensive (and that would certainly kill the GPU with the sluggish method I now tried for traversing trees).
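     To make that question concrete, here is a minimal sketch of the "baked index in the G-Buffers" idea, purely as an illustration on my side - the texture names, bindings and the integer index channel are all made up, not something from the paper:

        // Sketch only: assumes the geometry pass received a baked probe index per
        // surface (e.g. via a vertex attribute or an index texture) and wrote it
        // into an integer G-Buffer channel, so the ambient pass can skip the tree.
        #version 430

        layout(binding = 3) uniform usampler2D gProbeIndex;   // R32UI G-Buffer channel
        layout(binding = 0) uniform sampler2D  gNormal;

        struct Probe {
            vec4 properties;
            vec4 staticLight[6];
            vec4 skyLight[6];
        };
        layout(std430, binding = 1) buffer ProbeArray { Probe probes[]; };

        in  vec2 uv;
        out vec4 fragColor;

        void main()
        {
            uint probeIdx = texture(gProbeIndex, uv).r;   // no traversal, direct index
            vec3 normal   = normalize(texture(gNormal, uv).xyz * 2.0 - 1.0);

            // ...weight the 6 directional colors of probes[probeIdx] with the normal...
            fragColor = vec4(probes[probeIdx].staticLight[0].rgb, 1.0);
        }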
  2. Problem in my case is that there is no "whole scene". The world is divided into smaller sectors (mainly rooms and corridors in my case), which are loaded on the fly when they are close enough. Which certainly doesn't make this story easier, because the tree itself is made of multiple sub-trees, and the probe array is also filled dynamically. If a sector gets unloaded, it releases a slot (X probes/cells), which can then be claimed by another sector that gets loaded.
  3. Promised. IF I can make it work, that is :D Then again, you guys made me think again as well. The problem is always how to get the right probe(s) somehow. I was just thinking maybe probes can inject their ID (array index) into a 3D texture. Thus:
     * Make a volume texture.
       *** Since it only has to store a single int this time, it can be a relatively big texture. For example, 256 x 128 (need less height here) x 256 x R32 = 32 MB only.
     * Volume texture follows your camera.
     * Render all probes as points into the volume texture:
       *** Biggest probes (the larger cells) first. These would inject multiple points (using a geometry shader).
       *** Smaller probes overwrite the IDs of the bigger cells they are inside.
       *** Leak reduction: foreground room probes will overwrite rooms further away. Doesn't always work, but often it should.
     * Anything that renders will fetch the probe ID by simply using its (world - camera offset) position. See the lookup sketch below.
     * Use the ID to directly fetch from the probe array.

     The probes may still be an SSBO (sorry, the topic drifted off as usual hehe). Could be done with textures as well, but I find the idea of having 12 or 13 textures messy - not sure if it matters performance-wise... Of course, the ID-injection step also takes time, but I know from experience it's pretty cheap. And from there on, anything (particles, glass, volumetric fog-raymarching) can figure out its probe relatively easily. But I'm 100% sure I forgot about a few big BUTs here :)
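     A rough sketch of the lookup side, assuming the injection pass already wrote probe indices into an R32UI volume texture that follows the camera (texture, binding and uniform names are made up for illustration):

        // Sketch: fetch the probe ID for any world position from the camera-following
        // R32UI volume texture, then read the probe straight from the SSBO.
        #version 430

        layout(binding = 4) uniform usampler3D probeIdVolume;  // 256 x 128 x 256, R32UI
        uniform vec3 volumeOrigin;   // world position of the volume's corner
        uniform vec3 volumeSize;     // world-space extents covered by the volume

        struct Probe {
            vec4 properties;
            vec4 staticLight[6];
            vec4 skyLight[6];
        };
        layout(std430, binding = 1) buffer ProbeArray { Probe probes[]; };

        uint fetchProbeId(vec3 worldPos)
        {
            vec3 uvw = (worldPos - volumeOrigin) / volumeSize;  // 0..1 inside the volume
            return texture(probeIdVolume, uvw).r;               // nearest filtering assumed
        }

        vec3 probeAmbient(vec3 worldPos)
        {
            return probes[fetchProbeId(worldPos)].staticLight[0].rgb;   // e.g. first direction
        }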
  4. You're pretty much right. What I'm trying to do is nicely described in this paper: https://mediatech.aalto.fi/~ari/Publications/SIGGRAPH_2015_Remedy_Notes.pdf So, the world is broken down into a tree structure. Each cell is connected to 1 probe, and eventually divides further into 4x4x4 (64) subcells. The advantage is that you don't need tens of thousands of probes in a uniform 3D grid. The disadvantage is, well, that you need to traverse the tree first before you know which probe to pick for any given pixel position.

     The traversal in my case jumps deeper up to 3 times. So the first fetch will be a large cell. A cell has an offset and a bitMask (int64), where each bit tells whether there is a deeper cell or not. Using this offset and how many bits were counted, we know where to access the next cell. If no deeper cell was found, the same counting mechanism will tell where to fetch the actual probe data (rough sketch below). The probe in my case is basically a cubemap with 1x1 faces. Plus it tells a few more details, like which specular probe to use, or stuff like fog thickness. All in all, big data (50+ MB in my case).

     Currently I use "traditional" lightmaps, but I'm having several problems. UV mapping issues in some cases, though those will most likely be gone if they simply refer further to a probe (your 1st solution). Still, it doesn't work too well for dynamic objects / particles / translucent stuff (glass). Splatting the probes (I think that is your option 3) onto screen-space G-Buffers (depth/normal/position) is probably much easier. Like deferred lighting, each probe would render a cube (sized according to the tree + some overlap with neighbours to get interpolation), and apply its light data to whatever geometry it intersects. The downside might be the large number of cubes overlapping each other, giving potential fill-rate issues. Plus particles and such require a slightly different approach. There is also a chance of light leaking (splatting probes from neighbour rooms), though I think we can mask that with some "Room ID" number or something.

     What I did in the past is simply make a uniform 3D grid - thus LOTS of probes EVERYWHERE. I injected the probes surrounding the camera into a 32x32x32 3D texture. Simple & fast, but no G.I., popping for distant stuff, and a lot of probes (+ baking time) wasted on vacuum space. Also sensitive to light leaks in some cases.
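     To make the "offset + bitmask + bit counting" part concrete, a rough sketch of how I picture the traversal - this is my own simplification under assumptions (the child-slot indexing and the leaf-to-probe mapping in particular), not the paper's actual code. It reuses the AmbiProbeCell layout from my other post in this thread:

        struct AmbiProbeCell {
            uvec2 childMask;    // 4x4x4 = 64 bits: is there a deeper cell?
            uvec2 childOffset;  // x = index of first child cell, y = probe index (simplified)
        };
        layout(std430, binding = 2) buffer ProbeTree { AmbiProbeCell cells[]; };

        uint countSetBitsBelow(uvec2 mask, uint bit)
        {
            // how many children exist before 'bit'? -> offset of this child in the array
            uint lo = bitCount(mask.x & ((bit < 32u) ? ((1u << bit) - 1u) : 0xFFFFFFFFu));
            uint hi = (bit > 32u) ? bitCount(mask.y & ((1u << (bit - 32u)) - 1u)) : 0u;
            return lo + hi;
        }

        uint findProbe(vec3 worldPos, uint rootCell, vec3 cellMin, float cellSize)
        {
            uint cellIdx = rootCell;
            for (int depth = 0; depth < 3; ++depth)                 // at most 3 jumps deep
            {
                AmbiProbeCell cell = cells[cellIdx];
                vec3  local = (worldPos - cellMin) / cellSize;       // 0..1 inside this cell
                uvec3 sub   = uvec3(clamp(local * 4.0, vec3(0.0), vec3(3.0)));
                uint  bit   = sub.x + sub.y * 4u + sub.z * 16u;      // 0..63
                bool  deeper = (bit < 32u) ? ((cell.childMask.x >> bit) & 1u) != 0u
                                           : ((cell.childMask.y >> (bit - 32u)) & 1u) != 0u;
                if (!deeper)
                    return cell.childOffset.y;                       // leaf: probe data here
                cellIdx   = cell.childOffset.x + countSetBitsBelow(cell.childMask, bit);
                cellMin  += vec3(sub) * (cellSize * 0.25);
                cellSize *= 0.25;
            }
            return cells[cellIdx].childOffset.y;
        }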
  5. >> testProbes[]
     And... it's gone :) The error message I mean. Not sure if things actually work, my code isn't far enough to really test, but at least the "out of memory" errors are gone.

     >> Read & Write?
     Wasn't planning to write via a shader. At least not yet. Never tried "Buffer Textures", but looking at the link, one element can only contain up to 4 x 32 bits? My test code is rather simplistic here, but the later version will have multiple arrays and bigger structs. Trying to make an octree-like system with probes.

     That brings me to another question. Does it make any (performance) difference when having to deal with either small or large structs? Because the actual buffer content will be something like this:

        struct AmbiProbeCell {
            ivec2 childMask;     // 4x4x4 (64 bit) sub-cells
            ivec2 childOffset;
        }; // 16 bytes

        struct Probe {
            vec4 properties;
            vec4 staticLight[6];
            vec4 skyLight[6];
        }; // 208 bytes

        layout (std430, binding=1) buffer ProbeArray { Probe probes[]; };
        layout (std430, binding=2) buffer ProbeTree  { AmbiProbeCell cells[]; };

     The idea is that I query through the tree cells first (thus jumping from one small struct to another). Then leaf cells will contain an index into the other "Probes" array, which contains much larger structs. So basically I have a relatively small and a large array. But I could also join them both into a single (even bigger) array. Either way, while traversing the tree I'll be jumping from one element to another. Does (struct) size matter here?
  6. Any idea why the code below gives me "GL_OUT_OF_MEMORY" or "internal malloc failed" errors? Because I can't believe I'm actually running out of memory. For the first time I'm using SSBOs to get relatively large amounts of data (~50..100 MB) onto the videocard. Making the buffer & setting its size with glBufferData doesn't seem to give any problems. But loading/compiling a shader with this code kills it:

        struct TestProbe {
            vec4 staticLight[6];
        }; // 96 bytes

        layout (std430, binding=1) buffer ProbeSystem {
            TestProbe testProbes[262144];
        }; // 96 x 262144 = 24 MB

     Making the array smaller eventually cures it, but why would this be an issue? Also tried a smaller 2D array ("testProbes[64][4096]"), but no luck there either. My guess is that I forgot something, or the GPU is trying to reserve this memory in the wrong (uniform?) area or something... OR maybe this just can't be done in a fragment shader, and I need a compute shader instead?
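     For reference, the variant that made these compile errors go away for me (see the reply above): leave the SSBO array unsized, so its length simply comes from whatever glBufferData allocated on the CPU side, instead of being baked into the shader:

        struct TestProbe {
            vec4 staticLight[6];
        }; // 96 bytes

        layout (std430, binding=1) buffer ProbeSystem {
            TestProbe testProbes[];   // unsized runtime array instead of [262144]
        };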
  7. >> JoeJ - Lightmaps
     I'm the type of guy who can implement stuff 90%. Pretty far, but the last 10% is what perfects techniques like this: better generation tools, seam errors, better UV space usage, leak reduction, better performance / compression, more accurate directional information, ... All together the results are pretty OK, but various little, yet nasty errors break the illusion. And as described above, my method for storing directional information isn't good. Especially at places where the dominant vector is taken over from 1 light by another (or from 1 light to none).

     >> voxel tracing to lightmaps to probe grid
     Hehe. True, I wasted quite some time on getting G.I. I also tried realtime-updated lightmaps a bit like Enlighten does, with pre-computed relations between lightmap patches, a long time ago (the first Tower22 G.I. system actually). But as said, getting the last 10%... I think in my case probes produced the best results - so far. But the simple conclusion is that no method is perfect, and it takes a lot of sweat :| I stepped away from the ambitious desire to get true "Realtime G.I.". It just has to look good... but then again I do have quite a lot of situations where lights can be switched on/off locally, as well as a day/night cycle, so a 100% static bake is not an option either.

     >> blurry scene behind the ice
     That's pretty smart, got to remember this. I wasn't planning to throw the lightmaps away, though I might move the G.I. part back to probes. The good news is that I actually spent last year on T22 GAMEPLAY - no graphics (or well, just a little bit). So instead of trying to make things look good, I tried to keep the player from falling through floors, did a bit of A.I. behavior trees & scripting, level design, and so on. And now the game(demo) is playable. But now we're back at the point where we need to make things look good again. Which means I'll need to find some motivated artists - which is almost impossible :( Generating good screenshots sometimes helps to lure them though... So we're back on graphics, yes.

     >> Frenetic Pony - lighting needs spec information
     Actually there are specular probes (cubemaps). I'm splatting them as deferred cubes or spheres, as described here: https://seblagarde.wordpress.com/2012/09/29/image-based-lighting-approaches-and-parallax-corrected-cubemap/ But I'm interested in trying the method described in the other paper I posted earlier. There each (G.I.) probe refers to the most suitable specular probe. The problem with the deferred approach is that I need to define the volumes of each probe manually (radius / depth / height / ...) to get them to fit the relatively tight spaces I have. That means work for the artist, and also I sometimes overlap space in the neighbour room, or forget some spots, meaning they don't receive any specular (well, they do via screen-space RLR if available).

     >> Eric Lengyel - Horizon Mapping
     From the few bits I understand, a (pre-computed) horizon map contains the angle towards the "horizon". Not sure what that is exactly, but is that enough information to test whether a pixel is occluded yes/no for any given light? A sketch of how I picture it is below. In my situation there can be relatively many (small, local) lights, and a deferred pipeline is used (thus splatting light volumes onto the screen, reading G-Buffers to fetch normals). Typically the parallax offsetting took place before that, while filling the G-Buffers.

     Thank you all!
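     Just to check whether I get the horizon map idea right - everything below is my own assumption of how it could work, not necessarily Eric's actual method. Per texel you would pre-bake, for a handful of tangent-space azimuth directions, the sine of the angle up to the "horizon" (the highest bump blocking that direction), and at shading time compare the light's elevation against it:

        // Assumed layout: horizonMap stores sin(horizon angle) for 4 tangent-space
        // azimuth directions (+X, +Y, -X, -Y) in its RGBA channels.
        uniform sampler2D horizonMap;

        float horizonShadow(vec2 uv, vec3 lightDirTS)   // lightDirTS: tangent space, normalized
        {
            vec4 horizon = texture(horizonMap, uv);

            // Blend the stored directions according to the light's azimuth.
            vec2 azimuth = normalize(lightDirTS.xy + vec2(1e-6));
            vec4 weights = max(vec4( azimuth.x,  azimuth.y,
                                    -azimuth.x, -azimuth.y), 0.0);
            weights /= (weights.x + weights.y + weights.z + weights.w);
            float horizonSin = dot(horizon, weights);

            // Occluded when the light's elevation (its z component) is below the horizon.
            return (lightDirTS.z >= horizonSin) ? 1.0 : 0.0;
        }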
  8. Well thank you :) I must say the results aren't consistent though. Like I said, it takes pain and sweat to get even simple scenes looking right in most cases.

     >> Baked lighting
     Right now it's lightmap(s). And instead of doing something like baking 3 lightmaps for 3 different directions, I only have 1 color and 1 "dominant direction" vector stored. That might not be very good either, now that I think of it. If light comes from multiple directions, that "dominant vector" tends to just point forward. Actually there are 4 maps, as I also store sky influence & direction, and influence factors of 3 other lights (so you can change their colors in realtime, making semi-dynamic realtime G.I... well, a little bit).

     >> SSS
     Can't remember exactly... you mean storing thickness or curvature in the lightmap? I implemented that, but on a per-vertex level. Most concrete walls here aren't very good candidates for SSS hehe. And organic objects often have quite a lot of vertices to make up for it.

     But I was thinking about ditching lightmaps anyway. Every time I try them, there is trouble. UV mapping issues, leaking edges, not useful for particles & dynamic objects, resolution terribly low at some points, and so on. In the past I used probes (and light from 6 directions, like a cubemap - see the little sketch below), which also had their share of problems, but felt like an easier and more all-round solution for my needs. Maybe I should just do that. Saw something interesting here: http://advances.realtimerendering.com/s2015/SIGGRAPH_2015_Remedy_Notes.pdf (talks about using partially pre-computed probes, but in a smarter / more compact way than I did in the past).
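     By "light from 6 directions, like a cubemap" I basically mean the classic ambient-cube style lookup; a minimal sketch (the weighting is the standard squared-cosine trick, and the 6-color array matches the probe layout I posted elsewhere in this thread):

        // Ambient-cube style probe lookup: 6 colors, one per axis direction,
        // weighted by the squared cosine between the normal and each axis.
        vec3 evaluateProbe(vec4 colors[6], vec3 n)     // colors: +X, -X, +Y, -Y, +Z, -Z
        {
            vec3  nSq   = n * n;                        // weights sum to 1 for a unit normal
            ivec3 isNeg = ivec3(lessThan(n, vec3(0.0)));
            return nSq.x * colors[0 + isNeg.x].rgb +    // picks +X (0) or -X (1)
                   nSq.y * colors[2 + isNeg.y].rgb +
                   nSq.z * colors[4 + isNeg.z].rgb;
        }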
  9. If a game like RE7 (and yes, exactly one of the titles I had in mind while writing this) indeed simply uses more man-made bricks and planks, then so be it. I just wanted to make sure I wasn't missing something smart here :) A bit off-topic, but do programs like Blender or the like have tools to (auto)generate this, respecting the UV coordinates of the (flat) wall surface behind it? Doing all that by hand... I can probably build a real pyramid faster. The biggest issue is that you can't easily change it afterwards: every texture change or even UV shift would require a rebuild of that surface. So it would be something the artist does in a very final phase, when the scene is definite.

     And maybe that's one of those things going on with "Flattish"... I learned that every scene I make sucks big time until the final tweaks have been made. Meaning a near-perfect texture composition, correct UVs, details added (varying from big furniture to tiny wires and decals), and an appealing light set-up. When looking close, things may still suck, but the complete picture is OK.

     But I still find it very hard to make a clean scene. Thus without tons of rubbish, wall cracks, pig grease, and broken lights to mask imperfections. Making an empty corridor with a boring light setup look realistic is darn hard. Like women, not everyone is a natural beauty without foundation. Yet I have a feeling some engines actually do manage to create a nice-looking scene even with minimal effort. But as said, maybe it's just because all factors together are more complete/correct.

     I figured PBR would be helpful, so materials should look natural in any case. Which requires correct textures as well. So once in a while I download some PBR-ready textures, like the ones here: https://www.poliigon.com/texture/48 For comparison, the white planks and bricks from that website have been applied in the attached screenshot (normal mapping on, tessellation toggled on in the second shot). When loading them into my game, it doesn't look that spectacular. Not bad, but a bit bland. Of course, the previews on that website use ultra quality, and the attached scene itself is simply empty and boring here. The third shot shows a different, lower quality texture. Same techniques, but instead of a 2K texture, this was less than 1K I believe.

     Now in this shot, with the lamp right above the bricks, the normal mapping is pretty obvious. But the majority is lit indirectly. In general, things look more interesting (not necessarily more realistic) with high contrasts & light coming from 1 dominant direction. But that's pretty much the opposite of multi-bounce G.I., which spreads light all over the place... Thinking about it, tweaking light contrasts is probably another key to success here...
  10. Oh yes, forgot to mention, but indeed, decals on POM walls are like trying to paint a Bob Ross on a running toddler. It works, but... weird. On a tessellated surface it should be less of a problem, at least when doing deferred decals. I agree that in the end, tessellation works more naturally with pretty much any effect that follows up on your surfaces. Thanks for the Silhouette paper; I remember it being quite old actually, but I never tried it. So, on a cloudy Sunday...

      You state that current games generally don't use POM or tessellation that much. Which is understandable for all the reasons given above. But... how do they do parallax effects right now then? Broken stuff, brick walls, old plank floors certainly appear "3D" in a lot of games I played over the last 5 years or so. Is that just manually made 3D offsets, or am I easily fooled? Or let me ask it differently: I have a feeling my normal-mapped surfaces look flatter than in modern games. Unless I'm putting lights at extreme angles, making the "bump" visible, there isn't much shadowing going on, and with the lack of displacement/parallax, the end result is... flattish. I'm guessing it's really just smarter, old-fashioned art-work in the end, but maybe I missed something and games are spicing up their normal-mapping effects somehow...
  11. Hi there, just wondering, what is the common way to achieve "enhanced bumpmapping" these days? In the past I used POM (Parallax Occlusion Mapping), which more or less calculates the offset per pixel through ray-marching (a bare-bones sketch is below for reference). Worked pretty well, but relatively expensive (at least, I felt it was in 2012) and of course it's not REAL. Maybe you devs are fooling me, but old brick walls really look as if the bricks truly stick out in modern games, even when looking at the corners.

      Now I have tessellation, which truly offsets the geometry if you're close enough; when stepping away, the sub-division quickly reduces. In the beginning I thought it was awesome, but looking at it now, I'm actually thinking about using POM again. The problem is the jaggy edges on diagonal stuff. Sure, it can be reduced by throwing in even more sub-division, but I guess it gets truly expensive then, and that for an effect that is often not noticed that much anyway. But maybe it's the norm these days, dunno, that's why I'm asking...

      Both methods had 2 further issues in my case:
      * Edges / corners. With POM you generally shift inwards... which looks kinda weird at the edges where a wall or another surface starts. With tessellation you can go both ways. Moving vertices away generates holes at edges; moving them inwards works better, but now objects standing on top have their "feet" sunken into the floor, as if it was grass. It breaks the coolness mercilessly. My "solution" is just to minimize the offset at borders (right now I can vertex-paint the offset strength/direction). But... exactly at the corners is where displacement should shine! We want broken edges, bricks sticking out!
      * Self-shadowing (the lack of it). Offset is nice, but without self-shadowing it still looks flat and awkward. POM demos showed how to do it, but always with a single fixed light source. In a deferred pipeline with many light sources (and also ambient), I wouldn't know how to achieve that "afterwards", when rendering the light volumes using the G-Buffers where the offset has already taken place. I guess for tessellation it goes more naturally: while filling the depth buffer, you can take these offsets into account. However, when rendering shadow (depth) maps from the light's perspective, you would have to tessellate as well, otherwise the depth comparison is incorrect. I haven't tried it yet, but doesn't this make things even worse, performance-wise? Or should I just trust 2017's GPU power?

      Or maybe... I'm old-fashioned and you guys use different tricks now? Or maybe... maybe the truth is that parallax effects aren't used that much, and it's still about smartly placed props and the artist adding some geometry-love manually??
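      For reference, the kind of POM ray-march I mean - a bare-bones linear search, assuming a height map where 1.0 is the top surface, without the offset limiting or self-shadowing discussed above:

        // Bare-bones parallax occlusion mapping: step along the view ray in tangent
        // space until it dips below the height field, then use that offset UV.
        uniform sampler2D heightMap;
        uniform float     parallaxScale;   // e.g. 0.04

        vec2 parallaxOcclusionUV(vec2 uv, vec3 viewDirTS)   // viewDirTS: tangent space, towards camera
        {
            const int numSteps = 32;
            float stepDepth  = 1.0 / float(numSteps);
            vec2  stepOffset = (viewDirTS.xy / viewDirTS.z) * parallaxScale / float(numSteps);

            float rayDepth  = 0.0;
            vec2  curUV     = uv;
            float surfDepth = 1.0 - textureLod(heightMap, curUV, 0.0).r;   // 0 = top, 1 = deepest

            // march inwards until the ray goes below the surface
            for (int i = 0; i < numSteps && rayDepth < surfDepth; ++i)
            {
                curUV     -= stepOffset;
                rayDepth  += stepDepth;
                surfDepth  = 1.0 - textureLod(heightMap, curUV, 0.0).r;
            }
            return curUV;
        }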
  12. I considered making a node-based material system, but figured most shaders would be more or less the same anyway - at least for the visuals I try to achieve. When going for "realistic graphics", eventually using PBR (Physically Based Rendering), and maybe doing a deferred rendering pipeline, the amount of "special cases" isn't that high, because all materials take the same sort of input, apply the same sort of tricks, and go through the same sort of lighting (hence, physically based) stages. At least, that goes for the majority of solid geometry, like your walls, terrains, furniture, and so on. Semi-translucent matter like tissue or plants may need some boosters.

      What I did is make "Uber Shaders": basically 1 or a few big all-scenario shaders, with a lot of #ifdefs inside them (small illustration below). Then the artist toggles (read: #defines) options, like normal mapping on/off, or "use layer2 diffuse texture". Most important are a small bunch of standard sliders, like the smoothness/roughness of a material, the Fresnel, or whether it's a metal or non-metal surface. Works out pretty well for most stuff, and the artist has almost nothing to do, other than drawing kick-ass textures of course. Depending on the enabled options, the actual shader gets extracted from the Uber Shader. And before doing so, I check whether it wasn't made already by some other material using the same options. The actual number of unique shaders stays pretty low, making it suitable for sorting.

      The more special cases are typically alpha-blended materials, like glass, jelly cake, or particles. Sure, a node system would be nice for those, especially from an artist's point of view. Then again, they make up a small fraction in my case, and simply toggling techniques on/off still works here, and is a hell of a lot faster than building your shader with nodes from the ground up. But of course, the artist is bound to whatever options the programmer delivers, which can be limiting.

      I think the node system comes from times when PBR and such weren't standard yet, so special hacks had to be made to simulate different materials, such as metal or wet pavement. GPUs are strong enough these days to treat them as one (with some overhead), running the same code over them.
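      A trimmed-down illustration of what such an uber-shader fragment could look like - the option names and uniforms here are just examples, not my actual material system:

        // Uber-shader fragment: material toggles become #defines that get pasted
        // in front of this source when the actual shader variant is generated.
        //   #define USE_NORMALMAP
        //   #define USE_LAYER2_DIFFUSE
        //   #define METAL

        uniform sampler2D diffuseMap;
        #ifdef USE_NORMALMAP
        uniform sampler2D normalMap;
        #endif
        #ifdef USE_LAYER2_DIFFUSE
        uniform sampler2D diffuseMap2;
        uniform float     layer2Blend;
        #endif
        uniform float smoothness;   // artist sliders
        uniform float fresnel;

        in  vec2 uv;
        in  vec3 normalWS;
        out vec4 fragColor;

        void main()
        {
            vec3 albedo = texture(diffuseMap, uv).rgb;
        #ifdef USE_LAYER2_DIFFUSE
            albedo = mix(albedo, texture(diffuseMap2, uv).rgb, layer2Blend);
        #endif

            vec3 n = normalize(normalWS);
        #ifdef USE_NORMALMAP
            // ...perturb n with the tangent-space normal map here...
        #endif

        #ifdef METAL
            // metals: albedo acts as specular color, no diffuse term
        #endif
            fragColor = vec4(albedo, 1.0);   // actual lighting / G-Buffer output omitted
        }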
  13. As usual, thank you all for sharing detailed experiences, concerns, and hints. And sorry for the late reply; work or real (non-VR) life always drags me away, and I tend to forget I posted a question here :p

      Many Hertz + big resolutions seem to be my first big challenge on the technical side - if I were to try out VR. I'm having trouble reaching 60 FPS as it is (not on a killer PC, but still), and I would predict games will sacrifice quality just to reach that "VR target". Handing in some geometry, doing screen-space techniques like SSAO or RLR at relatively poor resolutions, not too many dense particle clouds, ... I mean, if the bar was to stay above 30 FPS at least, then 90 is a big leap. Avoiding motion sickness and all of the hints given above (avoiding high-frequency changes/details, getting the right controls, ...) seem to be more serious than I thought. I'm sure shooters or racing games will eventually figure out some golden do's and don'ts. But I'm curious: would a non-first-person game, like Command & Conquer, The Sims, or Super Mario (3rd-person camera), work?

      Well. Rather than thinking about programming it, the next best step is probably to experience it myself first. Instead of a technical, performance, or sickness-related issue, that would be a money-related problem. Finally got a PS4 and Resident Evil 7 here... but I'm not so sure yet if I want to pay 400+ euros to cry, wet my pants, stumble over the kids, scare myself to death, and maybe get motion-sick as a bonus :)
  14. From 3D graphics and shaders to anti-vomit code. I can imagine that poor performance, crazy lighting effects, or certain environment settings are indeed sickening. But seriously, from what I read the biggest challenge is then keeping up speed, which is quite hard when looking at my own program that barely reaches 60 FPS at a 1600 x 900 resolution. Certainly with big particles flying around, things can get smudgy. And I guess even some AAA engines/titles suffer the same.

      Gazing through the SMP article you posted, it seems you have to render everything twice (with a small offset, like your own eyes have). But techniques like "Simultaneous Multi-Projection" save you from having to push the geometry twice, and the fishbowl approach cuts down how many pixels you actually have to shade. But are those steps automagically done for you by the videocard, or do you still have to teach your GPU a lesson? A small sketch of what I found so far is below.
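      From what I understand so far (and this is an assumption on my side, not from the SMP article), the single-pass part is not fully automatic: you typically opt in explicitly, for example with the OVR_multiview extension, while SMP itself sits on top of that via vendor extensions. A minimal vertex shader sketch of the multiview route, with made-up uniform names:

        // Single-pass stereo via OVR_multiview: geometry is submitted once, the
        // vertex shader runs per view and picks the matching eye matrix.
        #version 430
        #extension GL_OVR_multiview2 : require
        layout(num_views = 2) in;

        uniform mat4 viewProjPerEye[2];   // left / right eye matrices
        uniform mat4 model;

        layout(location = 0) in vec3 position;

        void main()
        {
            gl_Position = viewProjPerEye[gl_ViewID_OVR] * model * vec4(position, 1.0);
        }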
  15. >> Nothing a game programmer can do or not do
      That almost sounds too good to be true :D I can understand the fisheye. But why lower the quality at the outer regions? Is that to emulate "out of focus" - but wouldn't your eyes already be doing that, I mean your own real eyes? Or is it more just to avoid overloading your head with too many visuals all around?