

Community Reputation

1888 Excellent

About Bummel

  1. I have trouble understanding where your problem lies, then. You basically use the iFFT to efficiently sum a large number of sine waves. That's it. The paper mainly covers how to calculate the complex coefficients of those waves. The Phillips spectrum really only provides a weight for each wave depending on its frequency/wavelength. Therefore, you get a radially symmetric frequency-space texture at this point (considering the weights/coefficients of the low-frequency waves are towards the middle of the texture; a 1D slice of the Phillips spectrum looks something like this:  __...----==._.==----...__ ). Next you add some Gaussian noise, since nature is never perfectly regular. You might also want to multiply with a directional weight to give the resulting waves a primary propagation direction. Up to this point you have calculated the magnitude of the complex coefficients - the height of each sine wave. But you also want the waves to actually move (I assume). For that you have to alter the phase of each wave by rotating its complex coefficient depending on a time variable. Here you want to consider the dispersion relation, which describes how propagation speed relates to wavelength (low-frequency waves travel faster). Also, adding a constant random offset to the phase angle might be a good idea (to break up regularities). Now perform an iFFT and you have your height field. If instead of plain sine waves you want to sum Gerstner waves (which you totally want), you have some extra work to do. Basically you need two more textures + iFFTs to compute a horizontal offset vector in addition to the scalar height offset you already calculated (if I remember correctly the paper mentions a multiplication with i in this context - that's really just a 90° rotation of the phase angle, nothing fancy). 
To ensure your FFT works correctly you could try to reproduce the low-pass filter results from the Fourier Transform section of this talk: http://www.slideshare.net/TexelTiny/phase-preserving-denoising-of-images It might also help if you could show us your frequency space results. 
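The pipeline described above (Phillips weights, Gaussian noise, time-dependent phase rotation via the dispersion relation, then an iFFT) can be sketched in a few lines of numpy. The grid resolution, patch size, wind parameters, and amplitude scale below are placeholder values, and for brevity the sketch omits the conjugate term h0*(-k)·e^(-iωt) that the full Tessendorf formulation adds to make the inverse transform exactly real - here we just take the real part:

```python
import numpy as np

# placeholder parameters: grid resolution, patch size (m), gravity, wind, amplitude
N, L, g = 64, 100.0, 9.81
wind_speed = 10.0
wind_dir = np.array([1.0, 0.0])
A = 1e-3

# wave vector for every FFT bin
k1 = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky = np.meshgrid(k1, k1, indexing="ij")
k = np.sqrt(kx**2 + ky**2)
k[0, 0] = 1e-6  # avoid division by zero at the DC bin

# Phillips spectrum: a weight per wave depending on its wavelength,
# times a directional term |k_hat . w_hat|^2 for a primary direction
L_ = wind_speed**2 / g
direction = ((kx * wind_dir[0] + ky * wind_dir[1]) / k) ** 2
P = A * np.exp(-1.0 / (k * L_) ** 2) / k**4 * direction
P[0, 0] = 0.0

# Gaussian noise breaks up the perfect regularity
rng = np.random.default_rng(0)
noise = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
h0 = noise * np.sqrt(P / 2.0)

# deep-water dispersion relation: omega(k) = sqrt(g*k), so long
# (low-frequency) waves travel faster
omega = np.sqrt(g * k)

def heightfield(t):
    # moving the waves = rotating each coefficient's phase over time
    h = h0 * np.exp(1j * omega * t)
    return np.real(np.fft.ifft2(h))

H = heightfield(1.0)  # N x N grid of height values
```

The Gerstner-wave horizontal offsets would be two more arrays built from i·k_hat·h (the 90° phase rotation mentioned above), each pushed through its own ifft2.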
  2. [url=http://www.opserver.de/ubb7/ubbthreads.php?ubb=showflat&Number=406621#Post406621]Papers about Rendering Techniques[/url]
  3. If you decide to try Voxel Cone Tracing, do it on (nested) dense grids. A Sparse Voxel Octree, besides not really being affordable performance-wise in a setting where you do other stuff beside dynamic GI, is also a pain in the butt to implement and maintain. Trust me, I have been there. :\
  4. Because radiance is flux differentiated with respect to two quantities (projected solid angle and surface area), which gives you two differential operators in the denominator and therefore two in the numerator as well. Radiometry isn't really that hard once you get it, but it can take some time and practice(!) depending on your foreknowledge and basic intelligence. :)
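Written out, the two differentiations the post refers to look like this:

```latex
L = \frac{\mathrm{d}^2 \Phi}{\mathrm{d}A \,\mathrm{d}\omega \,\cos\theta}
```

where Φ is the flux, dA the surface area, and dω cosθ the projected solid angle (θ being the angle between the surface normal and the direction of dω). Irradiance E = dΦ/dA differentiates only once, which is why it carries a single operator while radiance carries two.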
  5. To be physically plausible, Bloom/Glare/Glow should be added before tonemapping, since it simulates scattering due to diffraction effects in the eye/camera lens, which are wavelength-dependent but not intensity-dependent. (Usually the wavelength dependency is neglected too, though.) Tonemapping happens as soon as the photons hit the sensor (or later, as a post-processing step, if the sensor supports a high dynamic range, as is the case with our virtual sensors) and therefore after the scattering events.
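A minimal numpy sketch of why the order matters, using a crude box blur as a stand-in for a real bloom kernel and a simple Reinhard operator; the image, kernel radius, and bloom strength are made-up values:

```python
import numpy as np

def box_blur(img, r=2):
    # crude separable box blur standing in for a real bloom kernel
    out = img.copy()
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for s in range(-r, r + 1):
            acc += np.roll(out, s, axis=axis)
        out = acc / (2 * r + 1)
    return out

def tonemap(x):
    return x / (1.0 + x)  # simple Reinhard operator

hdr = np.zeros((32, 32))
hdr[16, 16] = 100.0  # one very bright pixel

# plausible: scatter (bloom) in linear HDR, then tonemap
good = tonemap(hdr + 0.1 * box_blur(hdr))

# wrong order: tonemapping first crushes the highlight below 1,
# so almost nothing is left to scatter around it
bad = tonemap(hdr) + 0.1 * box_blur(tonemap(hdr))
```

With bloom applied first, the pixels around the highlight keep a visible halo; applied after tonemapping, the halo all but vanishes because the 100.0 highlight has already been compressed to just under 1.0.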
  6. I might be wrong here, but isn't Hi-Z about culling whole triangles before they are even passed to the rasterizer? In other words, based on the depth values of the three vertices of a triangle? But for intersecting triangles that doesn't work... or does it somehow?
  7. I'm by no means an expert with regard to SHs, but I think what you are doing is correct. A directional light is basically described by a delta function, which you cannot reproduce with a finite number of SH bands. Intuitively speaking, the best you can do with just two bands is to set the 'vector' part (2nd band, index 1) to the direction of the delta function and use the constant SH term (1st band, index 0) to account for the clamping in the negative direction. This is basically the same thing you do to represent a clamped cosine lobe with SHs (I'm not sure whether the weights are exactly the same, though). All you could therefore do to increase the quality is to increase the number of SH bands used. I think.
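The intuition above - the 'vector' band pointing along the light and the constant band soaking up the clamp - falls out of projecting a delta function onto SH, which simply evaluates the basis at the light direction. A small numpy sketch with a made-up light direction:

```python
import numpy as np

def sh2(d):
    # real spherical harmonics basis, first two bands (4 coefficients)
    x, y, z = d
    c1 = np.sqrt(3.0 / (4.0 * np.pi))
    return np.array([
        0.5 * np.sqrt(1.0 / np.pi),  # Y_0^0  (constant band)
        c1 * y,                      # Y_1^-1 (
        c1 * z,                      # Y_1^0    'vector' band
        c1 * x,                      # Y_1^1  )
    ])

light_dir = np.array([0.0, 0.0, 1.0])  # made-up direction

# projecting a delta function onto SH = evaluating the basis at
# the light direction
coeffs = sh2(light_dir)

def reconstruct(d):
    return float(np.dot(coeffs, sh2(d)))

# the 2-band reconstruction peaks along the light direction and
# goes negative behind it - the best two bands can do
along = reconstruct(np.array([0.0, 0.0, 1.0]))    # 1/pi, the peak
behind = reconstruct(np.array([0.0, 0.0, -1.0]))  # negative
```

The band-1 coefficients are exactly the light direction scaled by sqrt(3/4π), while the band-0 term lifts the whole lobe so the negative side isn't any deeper than it has to be - the same shape you get when fitting a clamped cosine.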
  8.   I'm searching msdn but can't find any source I could cite in my thesis that explicitly states this. Can anyone help me? Thx.
  9. Stream Out from geometry shader is ordered as far as I remember.
  10. About the physics behind alpha: https://www.cs.duke.edu/courses/cps296.8/spring03/papers/max95opticalModelsForDirectVolumeRendering.pdf
  11. Is there an API call to do that? Or at least an elegant hack? Besides unanswered questions, I wasn't able to find anything on that topic. Thanks. EDIT: One idea I have would be to issue an indirect draw call which produces the appropriate number of vertices, rendered as points, so that I can decrement the counter in the pixel shader by 1 for every point (I'm working with 11.0, so I have no access to UAVs in the vertex shader).
  12. @MJP: Do you use a single curvature map regardless of facial animations?
  13. If you calculate your curvature map using screen-space derivatives of lerped normals, then it is only to be expected that the curvature values are constant over individual polygons. Normalizing your normals beforehand could help a bit; I'm not sure how much, though. EDIT: Ok, the problem is that you have to consider the change in position too, which is constant over triangles anyway.