patw

Members
  • Content count
    73
  • Joined
  • Last visited

Community Reputation

223 Neutral

About patw

  • Rank
    Member
  1. Billboard polygon with a "light ray" texture on it. Fade the polygon to invisible with distance (there is a small fade sketch after this list).
  2. That is not at all the purpose of the DSF buffer. The DSF buffer is what allows you to use a lower-resolution light buffer/G-buffer than the back buffer. It also allows for up to four layers of transparency, by doing something akin to interlacing in order to render translucency. While it seems as though this information could be gathered via depth/normal discontinuities, you will find in practice that this has a high rate of error. I strongly recommend re-reading the paper; the DSF buffer is critical to the technique. (A rough filtering sketch follows this list.)
  3. Generate a unit vector in a random direction and multiply it by the radius of the sphere. If you want the particle to be generated anywhere within the sphere (as opposed to just on the surface), also multiply by a random number in the range 0..1. Then take that same direction unit vector, multiply it by your desired speed parameters (with some randomness), and you have the velocity. (See the emitter sketch after this list.)
  4. Yeah, the memory transfer is always the killer. Without a trivial blend, the memory transfer costs effectively eliminate the benefit of pre-pass over multi-buffer, and the performance cost noticeably scales up as the number of lights increases, which is the very problem deferred lighting tries to solve.
  5. One slight clarification: it does not let you store specular color, but it does allow you to reconstruct specular more accurately than RGB storage does. With RGB storage, you usually approximate luminance by extracting it from the RGB color; however, this saturates luminance at 1.0. With Luv, luminance is actually stored, so it allows you to blend lights in HDR, since for the purposes of lights (in Lambertian shading) luminance is: N dot L * attenuation. You are correct about the blend. The blend is most accurate when blending lights of similar luminance, since this value gets clamped (otherwise the color would extend beyond the two source colors, and that would also be bad). The ideal way to do this, in my opinion, is not to lerp but to use some kind of curve (probably something based on e^x) to blend the light colors. I would also like the function to be adjustable via parameters, to give the artists more control over the way lights blend. Ultimately I settled on a lerp because it was cheap (see the blend sketch after this list). Some day it would be cool to do Luv, and maybe the blend function, on SPUs, but I decided Luv was too expensive on the Xbox 360. (For pre-pass on SPUs, check out Parallelized Light Pre-Pass Rendering with the Cell Broadband Engine by Steven Tovey and Stephen McAuley, in GPU Pro.)
  6. If you think about it, bilinear filtering does not invalidate the data of the normal map. It does, however, produce a vector which is no longer unit length. The vector is still valid and should still point in a sensible direction (this will depend on the specific values/UVs), but you need to re-normalize it before using it in lighting equations (see the sketch after this list).
  7. Matrix multiplication is not commutative (A * B != B * A). However, for an orthonormal matrix such as the rotation part of a view matrix, the inverse is just the transpose, and multiplying a vector on the other side of the matrix (v * M instead of M * v) applies that transpose. So you can save yourself a matrix inversion by switching your multiplication order, provided you know the input data is going to be kosher (which it should be, for a view matrix). (A small worked example follows this list.)
  8. Inferred lighting is a specialization of pre-pass lighting which expands the data stored in the G-buffer to allow for the transparency, etc. Besides this, it is performed exactly like pre-pass lighting. n00body: I have poked at a few effects using post-processed light buffers, but I haven't come up with anything I'm ready to write about yet. One of the main issues with post-processing the light buffer alone is that you have no albedo, and this can drastically change the values for which one would use the light buffer. So it really depends on what you are doing, because you may simply not have the data you need. Sorry that doesn't really answer the question.
  9. I would hold off until it is clear what the Mac will support. We're still waiting on most GL 2.0 features on that platform, so you've got a bit of time before 3.x stuff is relevant.
  10. Also suggest taking a look here: https://mollyrocket.com/forums/viewforum.php?f=21
  11. A few years back I was doing an NVIDIA demo for the GDC launch of the FX cards (shader model 2.0! Woo! We were excited.) and was trying to get proper shader support into the material system. At one point I excitedly called the marketing guy over and exclaimed, "It's a shader!" to which he said, "It's... a black triangle." I replied that I could make it red with just a few keystrokes. He was still not impressed.
  12. Your pre-pass pixel shader should output the linear depth to the color channel(s), not to the depth output. The fragment's depth output is not altered; it is the perspective depth that comes through from the position in the vertex shader. This is why you do not lose hi-Z. (A tiny sketch follows this list.)
  13. zedz, if you are not already doing so with your deferred rendering, I recommend reserving a stencil bit for "opaque"; the shot you show has a ton of alpha-blended/clipped pixels because very little of the viewport is occupied by opaque objects. (See the stencil sketch after this list.)
  14. The hardware will use a Z-buffer if you tell it to; you will need to create and bind that target yourself. You want this Z-buffer because otherwise you would need to submit perfectly sorted polygons in order to not get errors in your shadow map. You could also enable blending and use the MAX blend function, but I don't recommend it for this :) (A GL-flavored setup sketch follows this list.)
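
Rough sketch for #1, just the distance fade (fadeStart/fadeEnd are placeholder tuning values, not anything specific):

```cpp
#include <algorithm>

// Alpha for the camera-facing "light ray" billboard: fully visible up close,
// fading linearly until it is invisible at fadeEnd.
float LightRayAlpha(float distanceToCamera, float fadeStart, float fadeEnd)
{
    float t = (distanceToCamera - fadeStart) / (fadeEnd - fadeStart);
    return 1.0f - std::clamp(t, 0.0f, 1.0f);   // 1 = opaque, 0 = fully faded
}
```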
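
Rough sketch for #2 of how DSF-weighted filtering of the low-resolution light buffer works when the material pass runs at full resolution. This is only a paraphrase of the idea; the struct layout and names are placeholders, so go by the paper for the real details.

```cpp
#include <cstdint>

struct LightSample { float r, g, b; std::uint16_t dsfId; };  // low-res light-buffer texel plus the ID written with it
struct Color3      { float r, g, b; };

// Filter one 2x2 neighborhood of the low-res light buffer for a full-resolution pixel
// whose surface wrote `surfaceId` during the geometry pass.
Color3 FilterLightBuffer(const LightSample s[4], const float bilinearWeight[4],
                         std::uint16_t surfaceId)
{
    Color3 result = { 0.0f, 0.0f, 0.0f };
    float totalWeight = 0.0f;

    for (int i = 0; i < 4; ++i)
    {
        // Keep only samples that belong to the same surface, instead of trying to
        // detect edges from depth/normal discontinuities alone.
        float w = (s[i].dsfId == surfaceId) ? bilinearWeight[i] : 0.0f;
        result.r += s[i].r * w;
        result.g += s[i].g * w;
        result.b += s[i].b * w;
        totalWeight += w;
    }

    if (totalWeight > 0.0f)
    {
        result.r /= totalWeight;
        result.g /= totalWeight;
        result.b /= totalWeight;
    }
    return result;   // falls back to black if nothing matched; the paper handles this case more carefully
}
```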
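
Sketch for #3, a CPU-side emitter that follows that recipe (the RNG, the speed range, and the rejection sampling for the random direction are my own choices):

```cpp
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };
struct Particle { Vec3 position; Vec3 velocity; };

// Emit a particle on (or inside) a sphere of the given radius around `center`:
// random unit direction, scaled by the radius (times a random 0..1 factor for
// "anywhere inside"), and the same direction scaled by a speed for the velocity.
Particle EmitInSphere(std::mt19937& rng, Vec3 center, float radius,
                      float minSpeed, float maxSpeed, bool surfaceOnly)
{
    std::uniform_real_distribution<float> signedUnit(-1.0f, 1.0f);
    std::uniform_real_distribution<float> zeroToOne(0.0f, 1.0f);

    // Random unit direction: pick points in the cube and reject until one lands
    // inside the unit sphere, then normalize (avoids favoring the cube corners).
    Vec3 dir;
    float len2;
    do {
        dir  = { signedUnit(rng), signedUnit(rng), signedUnit(rng) };
        len2 = dir.x*dir.x + dir.y*dir.y + dir.z*dir.z;
    } while (len2 < 1e-6f || len2 > 1.0f);
    float invLen = 1.0f / std::sqrt(len2);
    dir = { dir.x*invLen, dir.y*invLen, dir.z*invLen };

    // Note: a plain 0..1 factor biases particles toward the center; use
    // std::cbrt(zeroToOne(rng)) instead if you want a uniform fill of the volume.
    float r     = surfaceOnly ? radius : radius * zeroToOne(rng);
    float speed = minSpeed + (maxSpeed - minSpeed) * zeroToOne(rng);

    Particle p;
    p.position = { center.x + dir.x*r, center.y + dir.y*r, center.z + dir.z*r };
    p.velocity = { dir.x*speed, dir.y*speed, dir.z*speed };
    return p;
}
```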
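
Sketch for #5 of the cheap lerp-style blend: luminance accumulates (so HDR values survive), and chromaticity is lerped with a weight based on how much the incoming light contributes. This illustrates the idea rather than an exact blend state.

```cpp
struct LuvLight
{
    float u, v;   // CIE (u', v') chromaticity of the light already in the buffer
    float lum;    // accumulated diffuse luminance: N dot L * attenuation, summed per light
};

// Blend one more light into what the light buffer already holds for this pixel.
LuvLight BlendLight(const LuvLight& dst, const LuvLight& src)
{
    LuvLight out;
    out.lum = dst.lum + src.lum;                           // luminance just adds
    float w = (out.lum > 0.0f) ? (src.lum / out.lum) : 0.0f;
    out.u = dst.u + (src.u - dst.u) * w;                   // lerp(dst.u, src.u, w)
    out.v = dst.v + (src.v - dst.v) * w;
    return out;
}
```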
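
Sketch for #6, re-normalizing a filtered normal-map tap before it is used for lighting:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// A bilinearly filtered tap from a normal map is a valid direction, but after the
// hardware averages four texels it is generally shorter than unit length, so
// re-normalize before plugging it into N dot L or anything similar.
Vec3 NormalizeTap(Vec3 n)
{
    float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    if (len > 1e-6f)
    {
        n.x /= len; n.y /= len; n.z /= len;
    }
    return n;
}

// Typical use after decoding a [0,1] tap:
//   Vec3 n = NormalizeTap({ tap.r*2.0f - 1.0f, tap.g*2.0f - 1.0f, tap.b*2.0f - 1.0f });
```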
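
Small worked example for #7: rotate a vector, then undo the rotation by switching the multiplication order (which applies the transpose), with no explicit inverse anywhere.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };   // row-major 3x3, e.g. the rotation part of a view matrix

// Column-vector convention: M * v
Vec3 Mul(const Mat3& M, const Vec3& v)
{
    return { M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z,
             M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z,
             M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z };
}

// Switched order: v * M (row-vector convention), which is the same as M-transpose * v.
Vec3 MulRow(const Vec3& v, const Mat3& M)
{
    return { v.x*M.m[0][0] + v.y*M.m[1][0] + v.z*M.m[2][0],
             v.x*M.m[0][1] + v.y*M.m[1][1] + v.z*M.m[2][1],
             v.x*M.m[0][2] + v.y*M.m[1][2] + v.z*M.m[2][2] };
}

int main()
{
    // ~30 degree rotation about Z: orthonormal, so its transpose is its inverse.
    float c = std::cos(0.5236f), s = std::sin(0.5236f);
    Mat3 R = {{{ c, -s, 0.0f }, { s, c, 0.0f }, { 0.0f, 0.0f, 1.0f }}};

    Vec3 v = { 1.0f, 2.0f, 3.0f };
    Vec3 a = MulRow(Mul(R, v), R);   // rotate, then "un-rotate" by switching the order

    // We get the original vector back without ever building R-inverse.
    assert(std::fabs(a.x - v.x) < 1e-4f &&
           std::fabs(a.y - v.y) < 1e-4f &&
           std::fabs(a.z - v.z) < 1e-4f);
    return 0;
}
```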
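
Tiny sketch for #12, just to show which value goes where (viewSpaceZ and farPlane are my names for it):

```cpp
// What the pre-pass fragment effectively writes out.
struct PrePassOutput
{
    float color;   // linear depth goes to the color target; there is no depth output,
                   // so the perspective z/w from the vertex shader is untouched and
                   // hierarchical-Z / early-Z keeps working.
};

PrePassOutput WriteLinearDepth(float viewSpaceZ, float farPlane)
{
    PrePassOutput out;
    out.color = viewSpaceZ / farPlane;   // 0..1 linear depth for later reconstruction
    return out;
}
```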
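
Sketch for #13, reserving one stencil bit for "opaque" (GL-flavored; the bit choice is arbitrary and a valid context is assumed):

```cpp
#include <GL/gl.h>   // glStencilFunc/glStencilOp/glStencilMask are core GL

static const GLuint kOpaqueBit = 0x80;   // the reserved "opaque" bit

// Before rendering opaque geometry into the G-buffer: set the bit wherever we draw.
void BeginOpaquePass()
{
    glEnable(GL_STENCIL_TEST);
    glStencilMask(kOpaqueBit);                        // only touch the reserved bit
    glStencilFunc(GL_ALWAYS, kOpaqueBit, kOpaqueBit);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
}

// Before the lighting pass: shade only pixels with the bit set, so the large
// alpha-blended/empty areas of the viewport are skipped entirely.
void BeginLightingPass()
{
    glStencilMask(0x00);                              // stencil is read-only here
    glStencilFunc(GL_EQUAL, kOpaqueBit, kOpaqueBit);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
}
```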
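
Setup sketch for #14, creating and binding a depth-only render target for the shadow map (assumes a GL 3.x context with an extension loader already in place; the size and format are just reasonable defaults):

```cpp
#include <glad/glad.h>   // or any loader that exposes the GL 3.x framebuffer entry points

// Returns the FBO; the depth texture you will later sample is returned via outDepthTex.
GLuint CreateShadowMapTarget(int size, GLuint* outDepthTex)
{
    GLuint fbo = 0, depthTex = 0;

    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, size, size, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
    glDrawBuffer(GL_NONE);   // depth-only pass: no color attachment at all
    glReadBuffer(GL_NONE);

    // With this bound and GL_DEPTH_TEST enabled during the shadow pass, polygon
    // submission order no longer matters.
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    {
        // handle the error in whatever way fits the engine
    }

    *outDepthTex = depthTex;
    return fbo;
}
```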