


About jameszhao00

  1. Ah, thanks. I think I know why I came to the wrong conclusion with my earlier tests: I was both reading from and sampling the same texture in one kernel, so either that's undefined behavior or the reads got optimized out.
  2. Have you tried anisotropic diffusion? In my experience it's fast, gives a constant-time-per-pixel, variable-size blur, and isn't too hard to implement. You will need to manually add in bokeh shapes. I've also wanted to try out this adaptive-manifold bilateral filter approximation; I think it could work for DoF: http://www.inf.ufrgs.br/~eslgastal/AdaptiveManifolds/
  3. Thank you for the tip. I set CUDA to use unnormalized texture coordinates, though. CUDA lets you use normalized coordinates [0.0, 1.0 - 1/N] or unnormalized coordinates [0, N-1]. CUDA's normalized coordinates seem a bit different from the OpenGL/DX ones anyway: to purely sample the 2nd pixel of a 2x1 image using normalized coordinates, we hit coordinate (0.5, 0), as anything higher goes into border behavior. Edit: I should probably test this, as there might be some undocumented intricacies.
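For comparison, the usual OpenGL/D3D convention puts the center of texel i at (i + 0.5)/N, so nearest-filtered sampling at that coordinate returns exactly texel i. A minimal sketch (texel_center is an illustrative name, not from either API):

```cpp
#include <cassert>

// Convert an integer texel index to a normalized texture coordinate under
// the usual OpenGL/D3D convention: texel i's center sits at (i + 0.5) / N.
float texel_center(int i, int n) {
    return (static_cast<float>(i) + 0.5f) / static_cast<float>(n);
}
```

For the 2x1 image above, this convention would put the 2nd pixel's center at 0.75 rather than 0.5, which is where the difference from CUDA's addressing would show up.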
  4. The previous frame is indexed using floating-point pixel coordinates like (300, 400). I thought about this a bit more, and I think it's some other issue on my end: a purely positive imprecision like +0.00001 shouldn't make the image drift toward the bottom right, as a pixel only pulls from content to its right/bottom. The screen uses integer coordinates; I derive the floating-point read locations from the integer screen coordinates.
  5. I'm doing this reprojection in CUDA with CUDA textures (i.e., CUDA arrays). I'm pretty sure it's not the half-pixel offset issue, because if I use the original screen positions (rather than the screen -> world -> screen computed ones) the picture is stable.
  6. Has anyone experienced major precision issues with a reprojection cache? I'm currently caching indirect lighting computations temporally, and going from screen -> world -> screen always introduces minuscule precision errors in xy (around +0.0001 in pixel coordinates). I tested this by leaving the scene alone and not moving the camera: in that situation, using 100% of the cached values causes the rendered image to drift toward the bottom right. Has anyone run into this before? If so, how did you mitigate it?
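A stripped-down illustration of the round trip in question, using a hypothetical 1D pinhole camera (all names illustrative, not from the poster's code). The round-trip error is tiny but can be nonzero, which is exactly what accumulates frame over frame when 100% of the cache is reused:

```cpp
#include <cassert>
#include <cmath>

// 1D pinhole camera: f = focal length in pixels, cx = principal point.
// screen x -> view-space ray slope
float unproject(float f, float cx, float sx) { return (sx - cx) / f; }
// view-space ray slope -> screen x
float project(float f, float cx, float slope) { return slope * f + cx; }

// screen -> view -> screen round trip; in float precision this generally
// does not return sx exactly, mirroring the +0.0001-pixel error described.
float round_trip(float f, float cx, float sx) {
    return project(f, cx, unproject(f, cx, sx));
}
```

A common mitigation (under the assumption the error is sub-pixel) is to snap the cache-read coordinate to the nearest texel center, e.g. floor(x) + 0.5f, so a consistent texel is fetched despite the jitter.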
  7. I wrote some test code in OpenCL, and it emits 64-bit multiplies on my GPU. I went and read the HLSL assembly reference, and the instruction umul (http://msdn.microsoft.com/en-us/library/windows/desktop/hh447250(v=vs.85).aspx) does exactly what I want. Is there a way to 'get at' this instruction from HLSL?
  8. Can you do something similar to a multiply-high in HLSL? (mul_hi in OpenCL.) What it is: http://stackoverflow.com/questions/3234875/question-about-multiply-high-signed
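For reference, on the CPU side a 32x32 multiply-high is easy to express with a 64-bit intermediate. A sketch of what OpenCL's mul_hi computes for unsigned operands (mul_hi32 is an illustrative name):

```cpp
#include <cassert>
#include <cstdint>

// High 32 bits of the full 64-bit product of two unsigned 32-bit values,
// i.e. what OpenCL's mul_hi(uint, uint) returns.
uint32_t mul_hi32(uint32_t a, uint32_t b) {
    return static_cast<uint32_t>((static_cast<uint64_t>(a) * b) >> 32);
}
```

The question in the post is whether the shader compiler can be coaxed into emitting a single umul-style instruction for this pattern instead of a full 64-bit multiply.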
  9. What you're looking for is 3D DDA. I implemented this a while ago; be warned that this type of thing is extremely hard to implement well due to precision/logic errors and debugging difficulties. Essentially, we walk the grid while keeping track of the distance to the next grid cell boundary in each dimension. Say that in a grid of 1x1 cells we're at (0, 0.5) (cell (0, 0)) with direction <1, 1>. The distance to the next cell in the X direction (DX) is sqrt(2), while the distance to the next cell in the Y direction (DY) is sqrt(1/2). Because DY is smaller, we go up one cell to cell (0, 1) (entering it at (0.5, 1)) and recalculate DX and DY. After we've gone up one cell, DX is now smaller, so we visit the next cell in the X direction, and so forth. You can optimize this by precomputing a whole set of values. I have an implementation here: https://github.com/jameszhao00/lightway/blob/master/sln/lightway/lightway/uniformgrid.cpp (dda_vals precomputes values; dda_next walks to the next cell given the current cell/info). Again, note that this kind of thing is really frustrating and difficult to develop (my implementation is not correct in all cases), so use a preexisting intersection library if at all possible (Embree is great).
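The walk described above can be sketched in 2D in the style of the Amanatides & Woo grid traversal. Names are illustrative, and this assumes 1x1 cells and positive direction components (a robust implementation must handle signs, zero components, and bounds):

```cpp
#include <cassert>
#include <cmath>

struct Walker {
    int cx, cy;             // current cell
    float tMaxX, tMaxY;     // ray parameter t at the next X / Y boundary
    float tDeltaX, tDeltaY; // t advance per full cell in X / Y
};

// Set up the traversal from a start position and (positive) direction.
Walker dda_init(float px, float py, float dx, float dy) {
    Walker w;
    w.cx = static_cast<int>(std::floor(px));
    w.cy = static_cast<int>(std::floor(py));
    w.tMaxX = (std::floor(px) + 1.0f - px) / dx;
    w.tMaxY = (std::floor(py) + 1.0f - py) / dy;
    w.tDeltaX = 1.0f / dx;
    w.tDeltaY = 1.0f / dy;
    return w;
}

// Step to the next cell: advance along whichever axis has the nearer boundary.
void dda_next(Walker& w) {
    if (w.tMaxX < w.tMaxY) { w.cx += 1; w.tMaxX += w.tDeltaX; }
    else                   { w.cy += 1; w.tMaxY += w.tDeltaY; }
}
```

Starting at (0, 0.5) with direction (1, 1), this visits cells (0,0), (0,1), (1,1), (1,2), matching the worked example in the post (the tMax values are the same distances DX/DY, just measured in ray parameter t rather than Euclidean length, which preserves the ordering).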
  10. DX11

    [quote name='MrOMGWTF' timestamp='1348208565' post='4982249'] ... I mean that, there is a white wall, a green wall, and a blue wall occluding green wall. The green wall will be still illuminationg the white wall, but it shouldn't. Because the blue wall is occluding the green wall. Shouldn't you stop tracing at the first intersection you find? Also, you do cone tracing for each pixel, yeah? [/quote] Cone tracing voxel mipmaps means you progressively look at higher and higher mipmap levels. A higher mipmap level stores an occlusion distribution built from its child voxels (not concrete occluders). In his case, I think he's just storing an average occlusion for the voxel, not something that varies by direction/position/etc.
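The occlusion handling being described can be sketched as classic front-to-back accumulation along the cone: each sample's averaged occlusion (alpha) attenuates everything behind it, so a fully occluding blue wall does stop the green wall's contribution. This is an illustrative sketch, not the poster's actual code:

```cpp
#include <cassert>

// One pre-filtered voxel sample: averaged radiance plus averaged occlusion.
struct Sample { float r, g, b, a; };

// Front-to-back compositing: add the sample's radiance weighted by how much
// light still gets through (transmittance), then reduce the transmittance
// by the sample's occlusion.
void accumulate(float& cr, float& cg, float& cb, float& transmittance,
                const Sample& s) {
    cr += transmittance * s.a * s.r;
    cg += transmittance * s.a * s.g;
    cb += transmittance * s.a * s.b;
    transmittance *= (1.0f - s.a);
}
```

With a fully opaque blue sample in front of a green one, transmittance drops to zero after the first step and the green sample contributes nothing, which answers the "shouldn't you stop tracing" question without an explicit first-hit test.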
  11. DX11

    This looks pretty cool!
    - How are you choosing your grid bounds?
    - Is the voxelization done at runtime? I assume not? ("voxelization is performed on the CPU with raytracing")
    - The intersection generated by the +X ray is injected into the -X 3D texture?
    - During the cone trace, how are you dealing with occlusion?
    - "which makes them look displaced from the scene even for glossy reflections." What does this mean? Shouldn't Eye <-> Glossy <-> Diffuse <-> Light work?
    Also, is there SSAO in those screenshots? Awesome stuff!
  12. By rays, do you mean a path? So are you generating only 1 path per pixel (i.e., 1 sample per pixel)? Also, a key point of RR is:
[code]
float3 color = /* path throughput so far */;
float r = 0.3; // survival probability (or some other ratio)
if (rand() < r)
    color /= r; // <-- key: compensates for terminated paths
else
    /* terminate the path */;
// ...next bounce...
[/code]
Unless I'm missing some context, your weighting scheme is what everyone does as part of the standard lighting calculation.
  13. [quote name='MrOMGWTF' timestamp='1344152867' post='4966291'] Well, I was trying hard, but my 13 years old brain can't handle cone tracing. Do you know any other, easy to implement, real time global illumination techniques? [/quote] I suggest you attempt to write a mirror/shadow-only ray tracer. Nothing beyond high-school (or college, depending on your POV) math is required. It isn't true GI, but it's a very good start.
  14. Paper's at http://maverick.inria.fr/Publications/2011/CNSGE11b/index.php Seems extremely tricky to implement well.
  15. Hmm, but the point of RR is unbiased results... whereas your weighting scheme (by the way, that doesn't look like a weighting scheme, just a simplified normal lighting bounce calculation) stops at N bounces. What about N+1 bounces? [quote]Russian roulette needs at least as many rays as there are depths[/quote] This statement doesn't quite make sense: RR terminates paths probabilistically and boosts the survivors, so it needs no minimum number of rays to stay unbiased.
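The unbiasedness point is easy to check numerically: a path surviving roulette with probability r has its contribution divided by r, so the expected value is unchanged regardless of where termination happens. A sketch with illustrative names:

```cpp
#include <cassert>
#include <cmath>
#include <random>

// Estimate a known quantity `value` through Russian roulette termination:
// with probability r the sample survives and is boosted by 1/r, otherwise
// it contributes 0. The average converges to `value`, demonstrating that
// the estimator stays unbiased despite terminating most samples.
double rr_estimate(double value, double r, int samples, unsigned seed) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    double sum = 0.0;
    for (int i = 0; i < samples; ++i) {
        if (u(gen) < r)
            sum += value / r; // survivor: boost by 1/r
        // else: terminated, contributes 0
    }
    return sum / samples;
}
```

With r = 0.3 and a million samples, the estimate lands close to the true value even though 70% of the paths were killed, which is why a fixed N-bounce cutoff (bias) and RR (variance) are not interchangeable.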