ReaperSMS

  • Rank: Member
  • Content count: 63
  • Community Reputation: 1537 Excellent
  1. It looks like your segment intersection test is actually an infinite line test. It will only return false if they are parallel or coincident...
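
     For reference, the segment-proper version looks something like the sketch below (hypothetical types and names, since the original test isn't quoted here): solve for both intersection parameters and require each to land in [0, 1].

         // Sketch of a segment-vs-segment test. The key difference from the
         // infinite-line version: after solving for the parameters t and u,
         // both must be in [0, 1] or the *segments* don't touch.
         struct float2 { float x, y; };

         static float cross2(float2 a, float2 b) { return a.x * b.y - a.y * b.x; }

         bool segments_intersect(float2 p0, float2 p1, float2 q0, float2 q1)
         {
             float2 r  = { p1.x - p0.x, p1.y - p0.y };
             float2 s  = { q1.x - q0.x, q1.y - q0.y };
             float2 pq = { q0.x - p0.x, q0.y - p0.y };

             float denom = cross2(r, s);
             if (denom == 0.0f)
                 return false;                // parallel or coincident, as before

             float t = cross2(pq, s) / denom; // position along p0..p1
             float u = cross2(pq, r) / denom; // position along q0..q1

             // This is the part the infinite-line test skips:
             return t >= 0.0f && t <= 1.0f && u >= 0.0f && u <= 1.0f;
         }
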
  2. That all looks fine, assuming diffuse and ambient are float4's, which they almost certainly should be if you want lights that aren't just white.
  3. Assuming it's for a deep magic code ninja type position, ask why, and likely be satisfied with a coherent answer. If it's not a position that involves staring at hex dumps for bugs, it probably doesn't even come up... unless someone claims they have a better grasp of C++ than Stroustrup or Sutter.

     Or, on bad days, be very relieved, as it means I don't have to dig that bit of the standard out of cold storage.
  4. Anything over an 8 means one of two things: they've either written a solid, production-ready compiler frontend and runtime support library, or they're a 4. A 7-8 from someone with a background that matches means "I've seen horrible things, and know how to avoid/diagnose them, but there are still fell and terrible things lurking in the dark corners of the earth".

     An approach we used from time to time, at least for people who claim to be Really Good and Technical with it, is to just have them start drawing out the memory layout of an instance of a class object, working up from the trivial case through to the virtual diamond one, and see where the floundering starts. Bonus points for knowing how dynamic_cast and RTTI work (and a slight bit of walking through the process usually serves as a good reminder of why they aren't exactly free).
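
     The endpoint of that exercise, for anyone who wants to poke at it themselves, looks something like this minimal sketch (member names are just for illustration):

         // The "virtual diamond" endpoint of the layout exercise. With virtual
         // inheritance, the object carries vtable/vbase pointers so that the
         // single Base subobject can be located at runtime, which is also why
         // dynamic_cast and RTTI have to do real work here.
         #include <cstdio>

         struct Base : { };  // (see below)
         struct Base          { virtual ~Base() {} int b; };
         struct Left  : virtual Base { int l; };
         struct Right : virtual Base { int r; };
         struct Bottom : Left, Right { int d; };

         int main()
         {
             Bottom obj;
             // One Base subobject shared by both inheritance paths:
             printf("%p == %p\n",
                    (void*)static_cast<Base*>(static_cast<Left*>(&obj)),
                    (void*)static_cast<Base*>(static_cast<Right*>(&obj)));
             // A cross-cast requires dynamic_cast (and thus RTTI):
             Left*  lp = &obj;
             Right* rp = dynamic_cast<Right*>(lp);
             printf("sizeof(Bottom) = %zu, cross-cast %s\n",
                    sizeof(Bottom), rp ? "ok" : "failed");
             return 0;
         }
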
  5. It's a 3D scene, but with the view direction restricted to slightly off-axis, and camera motion restricted to a 2D plane.

     The main area of play is about 400 units in front of the camera, with some near-field objects about 200 units past that that can accept shadows. Tons and tons of background objects lie far beyond that; the far plane is set to around 100,000. It isn't particularly ideal.

     That soup gets thrown at a deferred lighting renderer, which is all fine and great up until it needs to light things that don't write depth.
  6. I was afraid of that.

     The divide by pi is in there on the real code side; I left out some of the normalization to get down to just the SH bits. The lighting model for this project is ridiculously ad hoc, as we didn't get a real PBS approach set up in the engine until a few months into production. Another project is using a much better behaved setup, but it has the advantage of still being in preproduction.

     For this project the scenes are sparse space-scapes, with a strong directional light, an absurd number of relatively small radius point lights for effects, and only about three layers of objects (ships, foreground, and background). I suppose a brute-force iteration over the light list might do the job well enough, as there might not be enough of these around to justify a fancy approach.
  7. We have a game here using a straightforward deferred lighting approach, but we'd like to get some lighting on our translucent objects. In an attempt to avoid recreating all the horrible things that came from shader combinations for every light combination, I've been trying to implement something similar to the technique Bungie described in their presentation on Destiny's lighting.

     The idea is to collapse the light environment at various probe points into a spherical harmonic representation, which the shader would then use to compute lighting. Currently it's doing all of this on the CPU, but I've run into what seems to be a fundamental issue with projecting a directional light into SH.

     After digging through all of the fundamental papers, everything seems to agree that the way to project a directional light into SH, convolved with the cosine response, is:

         const float pi = 3.14159265f;

         void project_directional( float3* SH, float3 color, float3 dir )
         {
             SH[0] =  0.282095f * color * pi;
             SH[1] = -0.488603f * color * dir.y * (pi * 2.0f / 3.0f);
             SH[2] =  0.488603f * color * dir.z * (pi * 2.0f / 3.0f);
             SH[3] = -0.488603f * color * dir.x * (pi * 2.0f / 3.0f);
         }

         float3 eval_normal( const float3* SH, float3 dir )
         {
             float3 result = SH[0] *  0.282095f;
             result       += SH[1] * -0.488603f * dir.y;
             result       += SH[2] *  0.488603f * dir.z;
             result       += SH[3] * -0.488603f * dir.x;
             return result;
         }

         // result is then scaled by diffuse

     There's a normalization term or two, but the problem I've been running into, and that I haven't seen any decent way to avoid, is the ambient term in SH[0]. If I plug in a simple light pointing down Z, normals pointing directly at it or directly away from it behave reasonably, but a normal pointing down, say, the X axis will always be lit by at least 1/4 of the light color. It produces a directional light that generates significant amounts of light at 90 degrees off-axis.

     I'm not seeing how this could ever behave differently. I can get vaguely reasonable results if I ignore the ambient term while merging diffuse lights in, but that breaks down the moment I try summing in two lights pointing in opposite directions. Expanding out to the 9-term quadratic form does not help much either.

     I get the feeling I've missed some fundamental thing to trim down the off-axis directional light response, but I'll be damned if I can see where it would come from. Is this just a basic artifact of using a single light as a test case? Is this likely to behave better by keeping the main directional lights out, and just using the SH set to collapse point lights in as sphere lights or attenuated directionals? Have I just royally screwed up my understanding of how to project a directional light into SH?

     The usual pile of papers and articles from SCEE, Tom Forsyth, Sebastien Lagarde, etc. have not helped. Someone had a random shadertoy that looked like it worked better in posted screenshots, but actually running it produces results more like what I've seen.
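
     (Working the numbers on that 1/4: at 90 degrees off-axis all three linear terms vanish, leaving only the DC band, and with $Y_0 = \sqrt{1/(4\pi)} \approx 0.282095$ the evaluation above gives

         $$ L_\perp = (\pi\,c\,Y_0)\,Y_0 = \pi c \cdot \frac{1}{4\pi} = \frac{c}{4} $$

     so a single directional light projected into the linear band really does light every perpendicular direction at exactly a quarter of its color. The effect is inherent to the truncation, not a bug in the code above.)
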
  8. The sites are the ones in the wrong. They're probably implemented in JavaScript, which I believe treats all numbers as floats, and thus are losing precision.

     As an example, take your third number, punched into Windows calc. The first step would be 22236810928128038 % 62 = 42, which should be 'g'. If we subtract 42 out of there, we get 22236810928127996, which on the second site properly ends up with a final digit of '0'. If you give it 22236810928127997, it still ends in '0', and if you give it 22236810928127998, it jumps to '4'. Double-precision floats only give about 16 digits of precision, so feeding it a 17-digit number means it starts rounding in units of 4.

     The entire idea seems a bit odd however, as for this to be reasonable, you have to convert before encrypting, and need to know exactly where numbers live in the output to parse them back properly. It seems like it would be better to encrypt directly from binary, and base-64 convert the output if you need to send it over a restricted channel.
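
     A quick C++ illustration of the same double-precision rounding (a standalone sketch, not taken from either site):

         #include <cstdio>
         #include <cmath>

         int main()
         {
             double d = 22236810928128038.0;  // 17 digits, between 2^54 and 2^55
             // The literal itself can't be represented; the nearest double wins:
             printf("stored as: %.0f\n", d);                      // not ...038
             // At this magnitude adjacent doubles are 4 apart:
             printf("ulp: %.0f\n", std::nextafter(d, 1e18) - d);  // prints 4
             // So any modulo-62 digit extraction operates on a rounded value:
             printf("mod 62: %.0f\n", std::fmod(d, 62.0));
             return 0;
         }
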
  9. Bleh, I see GL does the same thing. I suppose I shall have to put up with being terribly disappointed in the PC APIs again.
  10. So, I'm trying to use SV_InstanceID as an extra input to a shader, to pick from a small set of vertex colors in code.

      It seems to completely ignore the last argument of DrawIndexedInstanced(), and start at 0 per draw call. This seems less than useful, as it would make it impossible to transparently split up an instanced draw call, and defeat a lot of the purpose of having the system value at all.

      How would one be expected to use SV_InstanceID properly in this case? The vertex shader looks about like so:

          cbuffer PerFrame
          {
              float4x4 g_ViewProjection;
          };

          struct VertexInput
          {
              float4 position   : POSITION;
              uint   instanceid : SV_InstanceID;
          };

          struct VertexOutput
          {
              float4 projPos : SV_Position;
              float4 color   : COLOR0;
          };

          VertexOutput vs_main( const VertexInput input )
          {
              VertexOutput output = (VertexOutput)0;
              output.projPos = mul( float4( input.position.xyz, 1.0f ), g_ViewProjection );
              if ( input.instanceid == 0 )
              {
                  output.color = float4(1,0,0,1);
              }
              else if ( input.instanceid == 1 )
              {
                  output.color = float4(0,1,0,1);
              }
              else
              {
                  output.color = float4(0.5,0.5,0.5,1);
              }
              return output;
          }

      This results in it always picking red. If I instead dig a color out of a separate vertex buffer, via D3D11_INPUT_PER_INSTANCE_DATA, it works as expected.

      How do I make d3d useful?
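
      If splitting the draw is the goal, one common workaround for D3D11's behavior (SV_InstanceID always starts at 0, ignoring StartInstanceLocation) is to pass the base instance in yourself and add it to the system value in the shader. A sketch of the C++ side, with illustrative names (PerDrawConstants, slot b1) and assuming a DEFAULT-usage constant buffer:

          // Feed the shader the base instance through a constant buffer; in
          // the shader the color index becomes (instanceid + g_BaseInstance).
          #include <d3d11.h>

          struct PerDrawConstants { UINT baseInstance; UINT pad[3]; }; // cbuffers round to 16 bytes

          void DrawInstancedRange(ID3D11DeviceContext* ctx, ID3D11Buffer* cb,
                                  UINT indexCount, UINT instanceCount, UINT startInstance)
          {
              PerDrawConstants c = { startInstance, { 0, 0, 0 } };
              ctx->UpdateSubresource(cb, 0, nullptr, &c, 0, 0); // assumes DEFAULT usage
              ctx->VSSetConstantBuffers(1, 1, &cb);             // b1, to taste
              ctx->DrawIndexedInstanced(indexCount, instanceCount, 0, 0, startInstance);
          }
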
  11. Or that the driver's just a little old, and the QoS is busted. We had an issue with devkit connectivity, where after an update one machine could talk to a kit but another machine couldn't. The initial webconfig page would start loading, then come to a dead halt and kill the HTTP connection.

      That turned out to be related to jumbo packets. The update enabled them for the devkit, and the machine that didn't work had a Realtek driver dated ~5 days earlier than the other machine's. That caused it to drop any and all jumbo packets, and the second packet the devkit tried sending over was about 20 bytes over the jumbo threshold...
  12. Typed UAVs have some restrictions; check the DXGI programming guide under Hardware Support for Direct3D 11 Formats.

      Column 22 on mine is Typed UAV, and it does apply to most of the types. Conspicuously absent from it, however, are 96-bit RGB, 64-bit depth/stencil, 32-bit depth (use R32), packed 24/8 depth/stencil, shared exponent and the odd RG_BG/GR_GB modes, and all of the block-compressed formats.

      tl;dr: DXGI_FORMAT_R32G32B32_FLOAT doesn't work for typed UAVs. The rest do.
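
      If you'd rather ask the runtime than cross-reference the table, a quick check (a sketch; device creation and error handling elided):

          // Ask the runtime whether a format supports typed UAVs.
          #include <d3d11.h>

          bool SupportsTypedUAV(ID3D11Device* device, DXGI_FORMAT format)
          {
              UINT support = 0;
              if (FAILED(device->CheckFormatSupport(format, &support)))
                  return false;
              return (support & D3D11_FORMAT_SUPPORT_TYPED_UNORDERED_ACCESS_VIEW) != 0;
          }

      Per the table above, feeding it DXGI_FORMAT_R32G32B32_FLOAT should come back false, while the common one-, two-, and four-channel types pass.
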
  13. The renderer side is going to treat it as slices. If you really want to go this route, you're probably looking at using a geometry shader to replicate the light volume geometry out to all slices covered by it, doing the appropriate projections and such.

      The practicality of all that seems questionable; memory restrictions are going to keep your lighting exceedingly low-res, and you're blowing the vast majority of it on empty or useless space.
  14. A certain PC RTS title of years past tried this, including sending raw floats over the wire. They hit issues between Intel and AMD, and after sorting some of those out, between Debug and Release. They tried the usual compiler options and floating-point control word magic (which still needed resetting after every D3D call).

      We got to port it to Linux, and tried very hard to keep it netplay compatible. All of the above applied, plus the fun of Visual Studio vs. GCC when it came to floating-point codegen behavior. Rounding everything to ~3 decimal places mostly dealt with it, but not all of it. In particular, the AI code had some float comparisons lying around, on data that was never sent over the wire, that could change the number of calls to the RNG, and that *was* state that was tracked closely.

      I managed to come up with a method that definitively solved the compiler issues -- eyeball the VC output assembly, reimplement the function on the GCC side with the VC floating-point code translated to AT&T syntax and pasted in, and add some shim code around it to fix up differences in the calling convention. This is not how one should define C++ class methods, but such was life.

      It even worked, and solved it definitively for that case. The next case that came up was the same sort of thing, two steps higher up the callstack. At that point I gave up, because we did not have the time to rewrite the entire AI system in assembly, which was clearly going to be the end result.

      This way lies madness. Stick to fixed point for anything that actually matters to the game simulation. You should probably also make sure your system is set up to detect synchronization loss as immediately as possible, and even better, have a mechanism for resynchronizing. Otherwise you're in for debugging issues that only happen in 5+ player games, after 2 hours, with the bulk of the useful data being gigs upon gigs of value logs and callstack traces.
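
      A minimal sketch of the fixed-point route (the 16.16 split and the names are illustrative, not from the game in question):

          // 16.16 fixed-point type: all simulation math stays in integers,
          // so results are bit-identical across compilers and CPUs.
          #include <cstdint>

          struct Fixed
          {
              int32_t raw;  // value * 65536

              static Fixed fromInt(int32_t i) { return Fixed{ i << 16 }; }
              static Fixed fromFloat(float f) { return Fixed{ (int32_t)(f * 65536.0f) }; } // edges only!
              float   toFloat() const         { return raw / 65536.0f; }  // rendering only

              Fixed operator+(Fixed o) const { return Fixed{ raw + o.raw }; }
              Fixed operator-(Fixed o) const { return Fixed{ raw - o.raw }; }
              Fixed operator*(Fixed o) const { return Fixed{ (int32_t)(((int64_t)raw * o.raw) >> 16) }; }
              Fixed operator/(Fixed o) const { return Fixed{ (int32_t)((((int64_t)raw) << 16) / o.raw) }; }
              bool  operator<(Fixed o)  const { return raw <  o.raw; }
              bool  operator==(Fixed o) const { return raw == o.raw; }
          };
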
  15. Fillrate and memory bandwidth are not quite the same thing. Normalmaps don't really hit fillrate outside of a deferred or light prepass renderer, just memory bandwidth. However, their access patterns are fairly predictable, and scale better (assuming mipmaps) than random vertex access.

      Given that every card imaginable these days shades and rasterizes in units larger than a pixel, 2x2 quads at the least and far larger in practice, any ALU gains you get by not bothering with normalmaps will be consumed by small-triangle overhead. 1x1 pixel triangles will generally compute as 2x2 quads, or worse, and throw away most of the results, so pixel for pixel they're 4-16x more expensive than a more reasonably sized triangle.

      Lastly, mipmaps provide a more automatic method of LOD. With discrete triangles, lighting, texturing, etc. will almost certainly break down into a flickery, sparkly mess as the triangles shrink to sub-pixel resolution.