Buzzy

Members
  • Content count: 256
  • Joined
  • Last visited

Community Reputation

312 Neutral

About Buzzy

  • Rank
    Member

Personal Information

  • Location
    Canada
  1. How about a Diablo-like action RPG? You could procedurally generate the dungeon layouts, as well as things like the enemies, the loot, and even the textures if you wanted (see the sketch below for the simplest version of the layout part). It also lends itself well to a semi-persistent multiplayer world. I'm not sure there's anything you could put into the design to really take advantage of parallel programming, aside from moving things like physics and audio processing to their own threads.
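For the layout generation, even something as simple as a random walk gives you serviceable cave-style layouts. A minimal sketch (the function and its parameters are invented here purely for illustration):

[code]
#include <cstdlib>
#include <vector>

// Carve a cave-style dungeon into a grid with a random walk
// ("drunkard's walk"). true = open floor, false = solid rock.
std::vector<std::vector<bool>> CarveDungeon(int w, int h, int steps, unsigned seed)
{
    std::vector<std::vector<bool>> open(h, std::vector<bool>(w, false));
    std::srand(seed);
    int x = w / 2, y = h / 2;            // start in the middle
    for (int i = 0; i < steps; ++i)
    {
        open[y][x] = true;               // carve out the current cell
        switch (std::rand() % 4)         // wander in a random direction
        {
            case 0: if (x > 0)     --x; break;
            case 1: if (x < w - 1) ++x; break;
            case 2: if (y > 0)     --y; break;
            case 3: if (y < h - 1) ++y; break;
        }
    }
    return open;
}
[/code]

--Buzzy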
  2. So a while ago I saw a video for a 4D puzzle game ([url="http://marctenbosch.com/miegakure/"]Miegakure[/url]). I thought it was really neat, but it got me thinking: what would an actual 4D renderer look like? What's the best way to represent the fourth dimension? I thought about using 4D tetrahedral models, rendered with a shader to select the current 3D "slice", but that seemed too unwieldy. The most straightforward way, to my mind, was to take the "ray-casting 3D voxels" concept and just add a fourth dimension.

My program uses a 4D sparse voxel octree (I call it a hypertree) which acts exactly the way you'd expect: each dimension splits in two, so a node has up to 16 four-dimensional child volumes. I copied the ray-casting algorithm from the [url="http://www.tml.tkk.fi/~samuli/publications/laine2010tr1_paper.pdf"]Laine and Karras SVO paper[/url] (minus the contours) and added an extra dimension to everything. To visualize the fourth dimension (W), I leave Z as up and down, but rotate the viewer's other three dimensions so that W replaces X or Y. Mathematically it works quite nicely, and doesn't look too bad.

One of the biggest issues I had is that a 4D hypertree can get very big very quickly. Since every node can have 16 children, if I were to store all the leaf nodes I'd only be able to work with relatively shallow trees (e.g. at 4 bytes per node, seven levels is 1 GB). Since it's a sparse tree I don't store all of this, but the potential is there.

I also came up with two other solutions to this size problem. The first is portal nodes, which store a transformation matrix to teleport viewing rays, or object positions, from that node to some other node (and orientation). So even if the entire world is only 128 leaf nodes on a side, you can make larger environments by hijacking other (unused) dimensions seamlessly. The portal transformation does incur a performance hit, though, for every ray-portal intersection.

My second solution is to not store unique geometry at the bottom of the tree. Using a palette of premade leaf-node "tiles", you can give the environment more detail without having to store it all uniquely. Or at least that's how it would work... I haven't actually implemented it yet. I got the idea from watching that [url="http://www.youtube.com/watch?v=00gAbgBu8R4"]Unlimited Detail[/url] video, which looks like it uses a similar idea with 3D tile nodes.

My other issue with a 4D renderer is that generating interesting content is difficult without an editor. I stopped working on it about the time I realized I'd need to make an editor to get the full potential out of the concept. I'll probably pick it up again one of these days, though. So that's my experience with "voxels". If anyone wants me to go into more detail about anything, I can, but I don't want to post the program right now.
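Roughly, the 16-way child indexing and the size arithmetic work like this (a simplified illustration, not the actual program code):

[code]
#include <cstdint>
#include <cstdio>

// One plausible child indexing for the hypertree described above:
// one bit per axis (X, Y, Z, W) gives 2^4 = 16 children per node.
inline int ChildIndex(bool xHigh, bool yHigh, bool zHigh, bool wHigh)
{
    return (xHigh ? 1 : 0) | (yHigh ? 2 : 0) | (zHigh ? 4 : 0) | (wHigh ? 8 : 0);
}

int main()
{
    // The size blow-up mentioned above: a full (non-sparse) tree of depth d
    // holds 16^d leaves, so at 4 bytes per node, depth 7 is exactly 1 GiB.
    std::uint64_t leaves = 1;
    for (int d = 0; d < 7; ++d) leaves *= 16;
    std::printf("%llu leaves, %.1f GiB\n",
                (unsigned long long)leaves,
                leaves * 4.0 / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
[/code]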
  3. You could take the difference between the coefficients of the two, then integrate over the sphere with these new coefficients, but using the absolute value of the function, to get the L1 distance. To integrate, I'd probably just do a basic Monte Carlo integration with a set of a few dozen or so points on the unit sphere that you can plug into the resulting difference SH function.

This works because the L1 distance between two functions is [eqn]\iint_S | \hat{f}(s) - \hat{g}(s) | \, ds[/eqn] where [eqn]\hat{f}(s) = \sum_{i} c_i y_i(s)[/eqn] and [eqn]\hat{g}(s) = \sum_{i} d_i y_i(s)[/eqn] [eqn]\Rightarrow \hat{f}(s) - \hat{g}(s) = \sum_{i}(c_i - d_i) y_i(s)[/eqn] Here, [i]c[/i] and [i]d[/i] are your coefficients, and the [i]y[/i]'s are the SH basis functions.

So a Monte Carlo integration would be something like [eqn]\frac{1}{N} \sum_{j=1}^{N} \Big| \sum_{i}(c_i - d_i) y_i(x_j) \Big| \, w(x_j)[/eqn] for a set of N points uniformly distributed on the unit sphere. Here [i]w[/i](x) is a weight function, equal to 4π if you use a uniform distribution on the sphere. You could also replace the absolute value with a squared difference (and take a square root at the end) to get the L2 distance. I think I got all that right... hope that helps.
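In code, the estimator might look like this (a minimal sketch; EvalSH here only covers SH bands 0 and 1, so swap in your own basis evaluation for higher orders):

[code]
#include <cmath>
#include <cstdlib>
#include <vector>

// First four real SH basis functions (bands 0 and 1), enough for a demo.
double EvalSH(int i, double x, double y, double z)
{
    switch (i)
    {
        case 0:  return 0.282095;       // Y_0^0
        case 1:  return 0.488603 * y;   // Y_1^-1
        case 2:  return 0.488603 * z;   // Y_1^0
        case 3:  return 0.488603 * x;   // Y_1^1
        default: return 0.0;
    }
}

// Monte Carlo estimate of the L1 distance between two SH-projected
// functions with coefficient vectors c and d.
double SHDistanceL1(const std::vector<double>& c, const std::vector<double>& d,
                    int numSamples)
{
    const double pi = 3.14159265358979323846;
    double sum = 0.0;
    for (int j = 0; j < numSamples; ++j)
    {
        // Uniform random point on the unit sphere
        double z   = 2.0 * std::rand() / RAND_MAX - 1.0;
        double phi = 2.0 * pi * std::rand() / RAND_MAX;
        double r   = std::sqrt(1.0 - z * z);
        double x = r * std::cos(phi), y = r * std::sin(phi);

        // | sum_i (c_i - d_i) y_i(x_j) |
        double diff = 0.0;
        for (int i = 0; i < (int)c.size(); ++i)
            diff += (c[i] - d[i]) * EvalSH(i, x, y, z);
        sum += std::fabs(diff);
    }
    return sum / numSamples * 4.0 * pi;  // w(x) = 4*pi for uniform sampling
}
[/code]

--Buzzy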
  4. You might find these interesting:

Stochastic Transparency: http://www.nvidia.com/object/nvidia_research_pub_016.html
Colored Stochastic Shadow Maps: http://research.nvidia.com/publication/hardware-accelerated-colored-stochastic-shadow-maps

The first is about doing screen-door transparency at a sub-pixel scale, using multisampling hardware but randomizing the pattern. The second extends that for use in shadow maps. You might also find other techniques in their related-work sections. It sounds like your algorithm will be very useful, and I'm looking forward to reading about it.
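The core trick in the first paper, roughly: turn a fragment's alpha into a random per-sample coverage mask, so the screen-door pattern averages out to the right blend. A toy CPU-side sketch of just that idea (the real technique runs on MSAA hardware):

[code]
#include <cstdlib>

// A fragment with opacity 'alpha' covers each of the N multisample
// positions with probability alpha, so on average the randomized
// screen-door pattern reproduces the correct blend.
unsigned CoverageMask(float alpha, int numSamples)
{
    unsigned mask = 0;
    for (int s = 0; s < numSamples; ++s)
        if ((float)std::rand() / RAND_MAX < alpha)
            mask |= 1u << s;   // this sample passes
    return mask;
}
[/code]

--Buzzy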
  5. So I started to look at other means of checking whether I'm setting things up correctly. While the framebuffer status says it's OK, and nothing seems to be causing any GL errors, I started checking all the framebuffer attachment parameters ([font="Courier New"]glGetFramebufferAttachmentParameteriv()[/font]). All the parameters check out except the one I want, [font="Courier New"]GL_FRAMEBUFFER_ATTACHMENT_LAYERED[/font]: it throws a [font="Courier New"]GL_INVALID_ENUM[/font] error even though it should work. With a quick internet search I found [url="http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=289008"]this[/url] and the follow-up [url="http://lwjgl.org/forum/index.php?topic=3691.0"]LWJGL bug report[/url]. The poster's setup is virtually identical to mine (Win7 64-bit, Radeon 4850), so I strongly suspect it's a driver problem; it's just not dealing with layered textures correctly. In the meantime I suppose I'll use karwosts' idea and just manually fill the texture with a solid color in a simple shader.
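For reference, the failing query looks like this (a sketch, using the FBO from the test program in the next post):

[code]
// Per the GL 3.2 spec this should return GL_TRUE for a framebuffer with
// all layers of a 3D texture attached; here it raises GL_INVALID_ENUM.
GLint layered = GL_FALSE;
glBindFramebuffer(GL_FRAMEBUFFER, _volumeFBO);
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                      GL_FRAMEBUFFER_ATTACHMENT_LAYERED,
                                      &layered);
GLenum err = glGetError();  // GL_INVALID_ENUM points at the driver
[/code]

--Alan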
  6. I'm working on a project in which I want to render stuff into a 3D texture. Once I got everything set up and started testing, it looked as if the texture wasn't clearing properly. My shaders seemed to be drawing into it correctly, and I of course want to clear it every frame before using it, but only the first layer of the texture gets cleared, leaving the rest as is. To prove to myself that I wasn't just missing something in my messy project code, I wrote a little test program with GLUT to see if I could narrow down the problem. The problem persists, and I'm not sure what is going wrong.

Full program (under 300 lines, needs GLEW and GLUT): [url="http://dl.dropbox.com/u/23346083/3dTexTest.cpp"]3dTexTest.cpp[/url]

Framebuffer and 3D texture setup:

[code]
glGenTextures(1, &_volumeTex);
glBindTexture(GL_TEXTURE_3D, _volumeTex);

// Fill texture with some initial colors
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, VOL_DIM, VOL_DIM, VOL_DIM, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid*)_3dTexu);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

glGenFramebuffers(1, &_volumeFBO);
glBindFramebuffer(GL_FRAMEBUFFER, _volumeFBO);

// Let glFramebufferTexture attach all layers of _volumeTex to the FBO,
// on color attachment 0.
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, _volumeTex, 0);
if (FboStatus()) { Deinit(); return 1; }

glBindTexture(GL_TEXTURE_3D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
[/code]

Every frame I do this, then draw a box with 3D tex coords:

[code]
glPushAttrib(GL_VIEWPORT_BIT);
glBindFramebuffer(GL_FRAMEBUFFER, _volumeFBO);
glViewport(0, 0, VOL_DIM, VOL_DIM);

glClearColor(1.0f, 1.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);

glPopAttrib();
glBindFramebuffer(GL_FRAMEBUFFER, 0);

glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// Draw the box...
[/code]

If I comment out the clear of the FBO, the 3D texture stays the same as what I initially put in it (of course). However, when I do clear it to yellow here, only the bottom (i.e. far) layer of the texture is cleared:

[img]http://dl.dropbox.com/u/23346083/3dTexTest.png[/img]

Am I setting something up wrong? Is this a problem with my driver (AMD Radeon 4850, Catalyst 11.2 on Windows 7)? I've checked the FBO status and for GL errors along the way, and nothing shows up. As I said, in my actual project I have a shader that draws into the texture using gl_Layer in the geometry shader, and that works; it's just this clearing thing. I just don't get it. Hopefully someone can see what's wrong.
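One workaround that might be worth trying (an untested sketch): give up on the single layered clear and clear each layer individually.

[code]
// Untested workaround sketch: clear the 3D texture one layer at a time by
// attaching individual layers instead of the whole layered texture.
glBindFramebuffer(GL_FRAMEBUFFER, _volumeFBO);
glClearColor(1.0f, 1.0f, 0.0f, 1.0f);
for (int layer = 0; layer < VOL_DIM; ++layer)
{
    glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              _volumeTex, 0, layer);
    glClear(GL_COLOR_BUFFER_BIT);
}
// Re-attach the full layered texture before rendering into it again
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, _volumeTex, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
[/code]

--Buzzy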
  7. Here are some researchers who've released papers about this sort of thing recently:

Chris Wyman: http://www.cs.uiowa.edu/~cwyman/pubs.html
Tobias Ritschel: http://www.uni-koblenz.de/~ritschel/

I'd recommend checking out some of their papers, then checking the references in those papers, then using Google Scholar to find papers that cite them. Also, Ke-Sen Huang's Resource for Computer Graphics is fantastic. If you find any SIGGRAPH papers that require a login to view, your university probably has an ACM account. Hope that helps, and good luck with the project! --Buzzy
  8. If you're interested in learning linear algebra, the MIT OpenCourseWare site has an intro course on linear algebra (and matrix theory) with video lectures. Link. Some of the textbook chapters are posted too. It's rather academic, and not focused on game development, but it should give you a good introduction. --Buzzy
  9. Quote: Original post by Promit: "Here's a hint: nobody wants to play on a 19" LCD monitor when there's a 52" HDTV in the next room."

Actually, a 19" monitor two or three feet away takes up about the same area of your field of view as a 52" HDTV six to eight feet away. And resolutions are comparable (consoles that run in 1080p have a slight advantage at the moment, but PCs use antialiasing). Just thought I'd point that out. :)
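To put some numbers on that (assuming 16:9 panels and viewing distances of 30 in. for the monitor and 84 in. for the TV, within the ranges above): the horizontal angle subtended by a screen of width [i]w[/i] at distance [i]d[/i] is

[eqn]\theta = 2\arctan\left(\frac{w}{2d}\right)[/eqn]

A 19" 16:9 screen is about 16.6" wide, so θ = 2 arctan(16.6/60) ≈ 31°; a 52" screen is about 45.3" wide, so θ = 2 arctan(45.3/168) ≈ 30°. Practically identical.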
  10. Hmmm... If you're dealing with (mostly) convex models, you could use that fact to check whether you have the 'right' normal. For example, if you're working with a sphere, pick a vertex from each triangle, make an 'out' vector from the center to that vertex (just subtract the center position from the vertex position), then see if the dot product of the normal and 'out' is positive. If it isn't, negate the normal. But that only works in general for convex shapes, which aren't too common. You could, I suppose, subdivide the model into convex hulls, but that seems like more work than necessary. I would probably just make sure that the vertices of each triangle follow a counter-clockwise ordering (at least to denote which side is out), and use that to determine which edges to use when calculating your normals. I can't think of any other easy way of doing it, I'm afraid.
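The convex-case check, as a sketch (the Vec3 type and its helpers are invented here for illustration):

[code]
struct Vec3 { float x, y, z; };

static Vec3  Sub(Vec3 a, Vec3 b) { return Vec3{ a.x - b.x, a.y - b.y, a.z - b.z }; }
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// For a convex shape: if the normal points against the center-to-vertex
// direction, it is facing inward, so flip it.
Vec3 OrientOutward(Vec3 normal, Vec3 vertex, Vec3 center)
{
    Vec3 out = Sub(vertex, center);
    if (Dot(normal, out) < 0.0f)
        return Vec3{ -normal.x, -normal.y, -normal.z };
    return normal;
}
[/code]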
  11. You can figure out which edges to use based on the winding order of your vertices. In a right-handed coordinate system, if your front faces are drawn with a counter-clockwise vertex order (the standard in OpenGL; I forget what D3D uses), then for triangle 1-2-3 the normal is edge 1-2 × edge 1-3. Then you'll probably want to normalize it. Does that make sense?
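In code (a minimal sketch, with a made-up Vec3 type):

[code]
struct Vec3 { float x, y, z; };

static Vec3 Sub(Vec3 a, Vec3 b)   { return Vec3{ a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 Cross(Vec3 a, Vec3 b) { return Vec3{ a.y * b.z - a.z * b.y,
                                                 a.z * b.x - a.x * b.z,
                                                 a.x * b.y - a.y * b.x }; }

// For a counter-clockwise triangle (v1, v2, v3) in a right-handed
// coordinate system, (v2 - v1) x (v3 - v1) points out of the front face.
Vec3 FaceNormal(Vec3 v1, Vec3 v2, Vec3 v3)
{
    return Cross(Sub(v2, v1), Sub(v3, v1));  // normalize afterwards if needed
}
[/code]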
  12. Epic is listed as a key partner on the Geomerics website, and their licensing page says, "Enlighten is also available for licensing for use in Unreal Engine 3." I'd imagine there will be something out within the next year or so with real-time GI/radiosity.
  13. I ran across this site a while back. It's not as detailed as the Gang of Four book, but it's rather handy for looking up and learning about patterns, and it covers more patterns than the GoF does too. Chain of Responsibility and Observer are, I believe, the two you described.
  14. I'd recommend Java (it's very close to the C++ and C# you already know, I presume). And I think taking a look at some scripting languages couldn't hurt either: Perl, Python, and/or Ruby would all be good to at least have a peek at, and Bash is really useful if you ever use a *nix environment. A functional language like Haskell or Scheme would be a good investment too. Prolog and SQL would round out the list nicely. :)
  15. OK, I'll take a stab at this. I've never implemented HDR before, so I may be a little off, but I'll do my best.

From what I've read, med_lumi is the average luminance of all pixels on the screen. The DirectX SDK sample I have here uses that equation to average four pixels' luminance together, downscaling the render texture by 0.25 x 0.25 every pass. They do it four times, so starting from a 256x256 texture it ends up with the average of all pixels' luminance in a 1x1 texture. Since each pixel of the original texture can have any value in the HDR range, this average can be a big number.

Next, image_key is supposed to be the _desired_ luminance of the scene. So if the scene is really bright with med_lumi = 1000.0, but you only want a luminance of 5, then exposure is the ratio between them (0.005). I don't know what value would be good for image_key, but I'm guessing this may be your problem. Try playing with it and see what you get.

Now, for every pixel drawn, the shader scales that pixel's luminance by this exposure to get scaled_lumi, then tone-maps it into final_lumi. This is where it gets squashed into the [0,1] range. If you plug numbers into that little equation with a calculator, you can see that it does what you want. Some examples: 0.0 => 0.0, 0.1 => 0.09, 0.25 => 0.2, 0.5 => 0.333, 1 => 0.5, 2 => 0.667, 10 => 0.909, 100 => 0.99, etc. Now that you have a nice LDR luminance for the pixel, multiply the color by it and display. Repeat for the next pixel.

So like I said, I think your desired average luminance (image_key) may be the problem. The other candidate is the calculation of the actual average luminance (med_lumi); double-check that it's giving the right number. Hope that helps. If I made a mistake somewhere, let me know. :)

Edit: I just went and ran the DX sample, and a few things became clear. The image_key should be between 0 and 1, which makes sense because you probably want a luminance in the LDR range. Assuming you have that, I'd make extra sure med_lumi is doing what it's supposed to. You could try doing it the slow way: send the luminance texture back to the CPU, loop over all the pixels, and find the average that way, then compare it to the shader's med_lumi. Slow, yes, but it may reveal an error in how the shader computes it. Aside from that, I can't imagine what else might be going wrong.
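For reference, here's the whole per-pixel chain in plain C (a sketch; the names follow the post, and the tone curve is the x/(1+x) one that matches the examples above):

[code]
// Sketch of the tone-mapping chain described above, per pixel.
// medLumi comes from the luminance downsampling passes.
float ToneMapLuminance(float pixelLumi, float medLumi, float imageKey)
{
    float exposure   = imageKey / medLumi;     // e.g. 5.0 / 1000.0 = 0.005
    float scaledLumi = pixelLumi * exposure;   // scale into the desired range
    return scaledLumi / (1.0f + scaledLumi);   // squash into [0, 1)
}
[/code]

[Edited by - Buzzy on November 16, 2007 6:43:49 PM]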