

Community Reputation

768 Good

About Cypher19

  1. Thirded!
  2. For offline/non-interactive purposes: FLIPPIN' AWESOME! That level of image quality in just a second or so of processing is really damn great! For real-time... eeeeeesh, one to several dozen ms/frame for decent results on a moderately complex scene? If it weren't for the ridiculous amount of overdraw (the inset in figure 1 makes my eyes bleed!), it could actually work really well for a game!
     Also, are you sure your implementation of regular SSAO is correct? I know that objects in the foreground have some halos due to the way the AO is computed, but I've never seen ones that look as bad as the one in your paper; e.g., here's a shot of Crysis's SSAO: I mean, there are some halos, but it's not to the point that the results are practically unusable. The implementation NV has in their SDK also looks fairly accurate and runs at a good framerate; in fact, it runs better than your version of SSAO: http://developer.download.nvidia.com/SDK/10.5/direct3d/samples.html#ScreenSpaceAO Yes, they're using depth AND normal data of the scene, but you referenced using this solution in a deferred renderer: even a light pre-pass or inferred renderer has at least that much info.
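The depth-only, range-checked comparison that this style of SSAO boils down to can be sketched on the CPU. This is a toy with hypothetical names, not Crytek's or NVIDIA's actual shader; the range check in the inner loop is what limits the foreground haloing discussed above.

```python
# Toy, CPU-side depth-only SSAO: occlusion at a pixel is the fraction of
# nearby depth samples that sit in front of the pixel, range-checked so
# that distant foreground objects don't produce huge halos.

def ssao(depth, x, y, offsets, radius=0.1, bias=0.002):
    center = depth[y][x]
    occluded = 0
    for dx, dy in offsets:
        sx, sy = x + dx, y + dy
        if not (0 <= sy < len(depth) and 0 <= sx < len(depth[0])):
            continue
        d = depth[sy][sx]
        # A sample occludes only if it's in front of us (beyond `bias`)
        # AND within `radius` -- the range check that tames haloing.
        if bias < center - d < radius:
            occluded += 1
    return 1.0 - occluded / len(offsets)

# A flat plane at constant depth should be unoccluded (AO = 1.0).
flat = [[0.5] * 8 for _ in range(8)]
taps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
print(ssao(flat, 4, 4, taps))  # 1.0
```

A real implementation samples in a hemisphere with randomized offsets and blurs the result; this sketch only shows the occlusion test itself.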
  3. Quote: why would it be impossible to have 2000 lights like you have in deferred rendering? It's not. But try doing that in a forward renderer and see what kind of performance you'll get.
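A toy cost model (made-up numbers, not benchmarks) shows why the light count scales so differently between the two approaches: forward shading pays per-object, per-light fragment cost, while classic deferred pays one G-buffer pass plus per-light cost only over the pixels each light actually covers.

```python
# Illustrative cost model only -- the constants are hypothetical.

def forward_cost(num_objects, num_lights, frags_per_object):
    # Every object's fragments are shaded once per light.
    return num_objects * frags_per_object * num_lights

def deferred_cost(num_objects, num_lights, frags_per_object,
                  screen_pixels, light_coverage):
    # One geometry pass writes the G-buffer, then each light shades only
    # the fraction of the screen its volume covers.
    gbuffer_pass = num_objects * frags_per_object
    lighting_pass = num_lights * screen_pixels * light_coverage
    return gbuffer_pass + lighting_pass

# 500 objects, 2000 small lights each covering ~1% of a 1080p screen.
fwd = forward_cost(500, 2000, 10_000)
dfr = deferred_cost(500, 2000, 10_000, 1920 * 1080, 0.01)
print(fwd, dfr)  # deferred comes out orders of magnitude cheaper here
```

The numbers are fabricated, but the shape of the two formulas (objects × lights versus objects + lights × covered pixels) is the point.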
  4. It's spelled "n e i g h b o u r", by the way.
  5. I'm definitely for this. My town in SimCity 2000 had a microwave power plant working great for years. ...until that incident that caused half of the city to burn up in flames.
  6. Quote: So if I had skinning being calculated, the code would need to be carried out multiple times for each light.
     Yes.
     Quote: He mentions using the depth information from stage one to reconstruct world positions for calculating the shadow maps so that you don't have to re-render the scene
     What he's referring to is this: you render the shadow map, then project it onto the scene from the camera's point of view. At that point, you can use the depth G-buffer to reconstruct the world position of the pixel in question (i.e. calculate its world position, project the shadow map coordinates onto it, read the shadow map to get a distance value, calculate the distance of the pixel's world position relative to the light, then do the compare).
     Quote: Also, am I right in thinking that each effect (forward-rendering style) is going to need 2 techniques, one for normal rendering and one for z-only rendering? So for example, a skinning shader would have the same vertex shader, but a different pixel shader that just outputs z depth.
     Effectively, yes. You can do some ninjaing around this, though, by, for example, writing some code that generates the pos/z-only version of the shader for you (e.g. grabbing the ASM and analyzing everything that contributed to oPos).
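The reconstruction step described above can be sketched like this. It's a CPU-side illustration with hypothetical names, not the renderer's actual shader code: unproject the pixel's NDC coordinates (built from its UV and the stored depth) through an inverse view-projection matrix.

```python
# Minimal sketch of world-position reconstruction from a depth buffer value.
# Plain Python lists stand in for GPU matrices; D3D-style depth in [0,1].

def mat_vec(m, v):
    # 4x4 row-major matrix times a 4-vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def reconstruct_world_pos(uv, depth, inv_view_proj):
    """uv in [0,1]^2 with a top-left origin, depth in [0,1]."""
    ndc = [uv[0] * 2.0 - 1.0,   # x: [0,1] -> [-1,1]
           1.0 - uv[1] * 2.0,   # y: flipped for the top-left origin
           depth,               # z stays in [0,1] for D3D-style clip space
           1.0]
    w = mat_vec(inv_view_proj, ndc)
    return [w[i] / w[3] for i in range(3)]   # perspective divide

# Smoke test with an identity "projection": the result is just the NDC point.
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
pos = reconstruct_world_pos((0.5, 0.5), 0.25, identity)
print(pos)  # [0.0, 0.0, 0.25]
```

With a real camera you'd pass the inverse of the actual view-projection matrix; the shadow compare then uses the returned position against the light's distance.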
  7. Because the game was designed primarily for consoles, and they didn't invest the resources into significantly rebuilding the UI for PCs.
  8. Does anyone else find it interesting that all four of the finalists are (realistically speaking) silent protagonists?
  9. Quote: Original post by bzroom — You can wait about 10 years, though, and wait until they release the source, like id Software does.
     Even then, parts of the code will still be commercially confidential and would have to be removed, e.g. hooks into the XDK APIs. Even if you gained access to the source code for a console title, you'd need a development kit to test and run your changes on, which requires being a licensed developer with MS/Sony/Nintendo and having thousands of dollars on hand for a single devkit that can run the unsigned executables you compile. There is one sure-fire way to look at the source code, though: get a job at the development studio that made the game.
  10. Quote: It's not really that I am rotating the shadow maps, but when I rotate the camera the "look at" point for shadow rendering will also move. Isn't that normal? Oh yeah, that'll happen. As long as you round the look-at point to the nearest shadow-map texel, it'll be fine, but I thought you were doing that already.
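The rounding described above can be sketched as follows (assumed function names and ortho-frustum parameters, not the poster's actual values): quantize the shadow map's look-at point to whole shadow-map texels so that translating the camera doesn't make shadow edges swim.

```python
# Minimal sketch of shadow-map texel snapping. Coordinates are assumed to be
# expressed in the light's XY plane; a real implementation transforms into
# light space first, snaps, then transforms back.

def snap_to_texel(x, y, ortho_width, ortho_height, map_size):
    # World-space footprint of one shadow-map texel under the ortho projection.
    texel_w = ortho_width / map_size
    texel_h = ortho_height / map_size
    # Round the look-at point to the texel grid.
    return (round(x / texel_w) * texel_w,
            round(y / texel_h) * texel_h)

# 100-unit ortho frustum rendered into a 1024x1024 shadow map.
snapped = snap_to_texel(10.37, -4.91, 100.0, 100.0, 1024)
print(snapped)  # (10.3515625, -4.8828125)
```

As the quoted exchange notes, this fixes swimming under translation only; rotation changes the sampling pattern itself, which snapping can't help with.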
  11. Quote: This however will fail if I rotate the camera and I can't find a good solution for that.
     Who said you have to rotate the shadow map so that it's aligned with the camera all of the time?
     Quote: I'm not even sure that I understand why it's still swimming during rotation.
     It's because the way you're hitting the samples changes. Say your shadow map is oriented so that some triangle edge is almost horizontal from its point of view (so the render target stores a straight line with a few jags in it), and another orientation has that same edge at 45 degrees from the shadow's POV (so the render target stores a stairstep line with a slope of 1). You have two entirely different patterns for the line there, so sampling them will look different.
  12. Quote: Original post by cignox1 — I am excited, but not for its rasterization power, which I fear will be far behind the upcoming generation of nVidia and AMD/ATI. I'm mostly interested in what its raytracing capabilities will be, since the code that runs on an x86 will run on LRB as well.
     Don't be so sure. Keep in mind each LRB core DOES have that (functionally complete!) 16-way SIMD in addition to the normal x86 functionality. You wouldn't take, say, Quake's renderer and run it on LRB just because Quake ran on an x86; you'd want to redesign it so that it really uses the SIMD LRB has, e.g. http://www.ddj.com/hpc-high-performance-computing/217200602.
     Personally, I actually think it will be competitive with NV and AMD when it comes out. Some aspects of it are slower, and NV/AMD's next chips will have higher raw FLOPS, but having the renderer behave completely differently (i.e. being a TBDR instead of an immediate-mode renderer) will probably give some very good performance numbers, especially if the rendering code of the game in question is tweaked (or reworked) to run better on LRB. Not only that, but it may turn out that in practical cases LRB's FLOPS will be "better" than NV/AMD's, due to things like a WAYYY higher cache size per LRB unit.
     Quote: That said, I still have to figure out how they plan to scale LRB in the future: GPUs get speed improvements by changing architecture, but an x86 CPU doesn't change too much, so the only way I see to improve performance every year or so is by increasing processing units or frequency, which leads to both heat and power-consumption issues...
     GPUs have been getting speed improvements from adding more processing units and increasing chip frequency all along. Heck, most of the time the architecture of a given series of chips is exactly the same, just with different numbers of units, clocks, or memory bandwidth. LRB will be no different in that regard, and just like GPUs, they'll be able to add more processing units by upgrading the manufacturing process every year or two.
  13. Quote: As for writing themselves into a hole, I believe the basic plot of Lost has been written since day one; the dialogue and exact content of each episode is all that remains to be written. There's no danger of them getting off track.
     Well, I remember way, way back when they were writing the pilot, they planned out about five seasons' worth of material. I'm not concerned about them getting off track; I'm concerned about them having to do a significant amount of hand-waving in the first ten minutes of season 6.
  14. To be honest, I hate Lost almost as much as I love it. This was me every time the commercials came on Wednesday night: The final minute was that times about a hundred, though. I really hope the writers didn't write themselves into a corner with the last couple of episodes this season. Most specifically: Faraday going on about free will had better be him taking stupid pills, and not actually part of the rules of time travel on the show.
  15. Quote: Original post by Andreas Persson — I am a DX user myself; how does OpenGL ES stand up to "ordinary" OpenGL and DX? Is it only for portables/phones, or is anyone using it for PC development? It's supported on the PS3, right?
     Yeah, it's supported on the PS3, but I imagine if you talked to any PS3 dev they'd tell you that they don't bother with it.