# Buzzy

1. ## CS Honors Project

How about a Diablo-like action RPG? You could procedurally generate the dungeon layouts, as well as things like the enemies, the loot, and even the textures if you wanted. It also lends itself well to a semi-persistent multiplayer world. I'm not sure there's anything you could put into the design to really take advantage of parallel programming, aside from moving things like physics and audio processing to their own threads. --Buzzy
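If it helps, the "procedurally generate the dungeon layouts" part can start very small. Here's a minimal sketch (all names are mine, not from any engine) using a drunkard's-walk carver, a common starting point for Diablo-style maps:

```cpp
#include <cassert>
#include <random>
#include <string>
#include <vector>

// Carve 'steps' floor tiles ('.') into a width x height grid of walls ('#'),
// starting from the centre and wandering randomly.
std::vector<std::string> carveDungeon(int width, int height, int steps,
                                      unsigned seed) {
    std::vector<std::string> grid(height, std::string(width, '#'));
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> dir(0, 3);
    int x = width / 2, y = height / 2;
    const int dx[4] = {1, -1, 0, 0};
    const int dy[4] = {0, 0, 1, -1};
    for (int i = 0; i < steps; ++i) {
        grid[y][x] = '.';                      // carve a floor tile
        int d = dir(rng);
        // Stay one tile inside the border so the map keeps an outer wall.
        int nx = x + dx[d], ny = y + dy[d];
        if (nx > 0 && nx < width - 1)  x = nx;
        if (ny > 0 && ny < height - 1) y = ny;
    }
    return grid;
}
```

The same seeded-RNG approach extends naturally to rooms, loot tables, and enemy placement.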
2. ## Don't start yet another voxel project

So a while ago I saw a video for a 4D puzzle game ([url="http://marctenbosch.com/miegakure/"]Miegakure[/url]). I thought it was really neat, but it got me thinking: what would an actual 4D renderer look like? What's the best way to represent the fourth dimension? I thought about using 4D tetrahedral models, rendered with a shader to select the current 3D "slice", but that seemed too unwieldy. The most straightforward way, in my mind, was to take the "ray-casting 3D voxels" concept and just add a fourth dimension.

My program uses a 4D sparse voxel octree (I call it a hypertree) which acts exactly the way you'd expect: each dimension splits into two, which means that a node has up to 16 four-dimensional child volumes. I copied the ray-casting algorithm from the [url="http://www.tml.tkk.fi/~samuli/publications/laine2010tr1_paper.pdf"]Laine and Karras SVO paper[/url] (minus the contours), and added an extra dimension to everything. To visualize the fourth dimension (W), I leave Z as up and down, but rotate the viewer's other three dimensions so that W replaces X or Y. Mathematically it works quite nicely, and doesn't look too bad.

One of the biggest issues I had is that a 4D hypertree can get very big very quickly. Since every node can have 16 children, if I were to store all the leaf nodes I'd only be able to work with relatively shallow trees (e.g. at 4 bytes per node, seven levels is 1 GB). Since it's a sparse tree I don't actually store all of this, but the potential is there. I also came up with two other solutions to this size problem. The first is portal nodes, which store a transformation matrix to teleport viewing rays, or object positions, from that node to some other node (and orientation). So even if the entire world is only 128 leaf nodes on a side, you can make larger environments by seamlessly hijacking other (unused) dimensions. The portal transformation does incur a performance hit for every ray-portal intersection, though.

My second solution to the size problem is to not store unique geometry at the bottom of the tree. Using a palette of premade leaf-node "tiles", you can give the environment more detail without having to store it all uniquely. Or at least that's how it would work... I haven't actually implemented it yet. I got the idea from watching that [url="http://www.youtube.com/watch?v=00gAbgBu8R4"]Unlimited Detail[/url] video, which looks like it uses a similar idea with 3D tile nodes.

My other issue with a 4D renderer is that generating interesting content is difficult without an editor. I stopped working on it about the time I realized that I'd need to make an editor to get the full potential out of the concept. I'll probably pick it up again one of these days though. So that's my experience with "voxels". If anyone wants me to go into more detail about anything I can, but I don't want to post the program right now.
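The size arithmetic is easy to sanity-check: a full 16-way tree has 16^7 leaves at seven levels, and at 4 bytes per node that is exactly 1 GiB. A back-of-envelope check (my own names, not code from the renderer):

```cpp
#include <cassert>
#include <cstdint>

// Leaf count of a *full* 16-way ("hypertree") tree: 16^levels.
std::uint64_t hypertreeLeaves(int levels) {
    std::uint64_t n = 1;
    for (int i = 0; i < levels; ++i) n *= 16;  // each node splits into 2^4 = 16 children
    return n;
}

// Bytes needed to store every leaf at a given per-node size.
std::uint64_t leafStorageBytes(int levels, std::uint64_t bytesPerNode) {
    return hypertreeLeaves(levels) * bytesPerNode;
}
```

16^7 × 4 bytes is 2^30 bytes, which is why sparseness (and the portal/tile tricks) matters so much in 4D.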
3. ## Spherical Harmonics comparison

You could take the difference between the coefficients of the two, then integrate over the sphere using these new coefficients, but with the absolute value of the function, to get the L1 distance. To integrate, I'd say just do a basic Monte Carlo integration: take a set of a few dozen or so points on the unit sphere and plug them into the resulting difference SH function. This works because the L1 distance between two functions is [eqn]\iint_S | \hat{f}(s) - \hat{g}(s) | ds[/eqn] where [eqn]\hat{f}(s) = \sum_{i} c_i y_i(s)[/eqn] and [eqn]\hat{g}(s) = \sum_{i} d_i y_i(s)[/eqn] [eqn]\Rightarrow \hat{f}(s) - \hat{g}(s) = \sum_{i}(c_i - d_i) y_i(s)[/eqn] Here, [i]c[/i] and [i]d[/i] are your coefficients, and the [i]y[/i]'s are the SH basis functions. So a Monte Carlo integration would be something like [eqn]\frac{1}{N} \sum_{j}^{N} |\sum_{i}(c_i - d_i) y_i(x_j)| w(x_j)[/eqn] for a set of N points uniformly distributed on the unit sphere. Here [i]w[/i](x) is a weight function, which is equal to 4[i]pi[/i] if you use a uniform distribution on the sphere. You could also square the difference instead of taking the absolute value (and take the square root at the end) to get the L2 distance. I think I got all that right... hope that helps. --Buzzy
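Here's a minimal sketch of that estimator for just the first four real SH basis functions (bands 0 and 1). The 0.282095 and 0.488603 factors are the standard band-0/1 normalization constants; the function names are mine:

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <random>

// First four real spherical harmonic basis functions at unit direction (x, y, z).
std::array<double, 4> shBasis(double x, double y, double z) {
    return {0.282095,        // Y_0^0
            0.488603 * y,    // Y_1^-1
            0.488603 * z,    // Y_1^0
            0.488603 * x};   // Y_1^1
}

// Monte Carlo L1 distance between two SH-projected functions with
// coefficients c and d: (1/N) * sum_j |sum_i (c_i - d_i) y_i(x_j)| * 4*pi,
// with the x_j uniformly distributed on the unit sphere.
double shL1Distance(const std::array<double, 4>& c,
                    const std::array<double, 4>& d, int samples) {
    const double kFourPi = 4.0 * 3.141592653589793;  // weight for uniform sampling
    std::mt19937 rng(1234);
    std::normal_distribution<double> gauss(0.0, 1.0);
    double sum = 0.0;
    for (int j = 0; j < samples; ++j) {
        // Uniform direction on the sphere via a normalized Gaussian vector.
        double x = gauss(rng), y = gauss(rng), z = gauss(rng);
        double len = std::sqrt(x * x + y * y + z * z);
        auto basis = shBasis(x / len, y / len, z / len);
        double diff = 0.0;
        for (int i = 0; i < 4; ++i) diff += (c[i] - d[i]) * basis[i];
        sum += std::fabs(diff);
    }
    return sum / samples * kFourPi;
}
```

A nice sanity check: if the two sets of coefficients differ only in the constant term, the integrand is constant, so the estimate is exact regardless of sample count.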
4. ## Order-Independent Transparency

You might find these interesting:
Stochastic Transparency: [url="http://www.nvidia.com/object/nvidia_research_pub_016.html"]http://www.nvidia.com/object/nvidia_research_pub_016.html[/url]
Colored Stochastic Shadow Maps: [url="http://research.nvidia.com/publication/hardware-accelerated-colored-stochastic-shadow-maps"]http://research.nvidia.com/publication/hardware-accelerated-colored-stochastic-shadow-maps[/url]
The first is about doing screen-door transparency at a sub-pixel scale, using multisampling hardware but randomizing the pattern. The second extends that for use in shadow maps. You might also find some other techniques in their related-work sections. It sounds like your algorithm will be very useful, and I'm looking forward to reading about it. --Buzzy
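The core idea of that randomized screen-door approach, sketched on the CPU rather than with multisampling hardware (names are mine, not from the papers): a fragment with opacity alpha covers each sample with probability alpha, so the resolved coverage converges to ordinary alpha blending without any depth sorting.

```cpp
#include <cassert>
#include <cmath>
#include <random>

// For one pixel: each of 'samples' sub-pixel samples keeps the fragment if a
// random draw falls below its opacity (a screen-door with a random pattern).
// The averaged result approaches the true alpha as samples increase.
double stochasticCoverage(double alpha, int samples, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    int covered = 0;
    for (int s = 0; s < samples; ++s)
        if (u(rng) < alpha) ++covered;
    return static_cast<double>(covered) / samples;
}
```

In the real technique the random pattern lives in the MSAA coverage mask, which is what makes it order-independent per sample.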
5. ## Problem with clearing a 3D texture in FBO

So I started to look at other means of checking whether or not I'm setting things up correctly. While the framebuffer status says it's OK, and nothing seems to be causing any GL errors, I started checking all the framebuffer attachment parameters ([font="Courier New"]glGetFramebufferAttachmentParameteriv()[/font]). All parameters seem to check out except the one I want, [font="Courier New"]GL_FRAMEBUFFER_ATTACHMENT_LAYERED[/font]. It throws a [font="Courier New"]GL_INVALID_ENUM[/font] error even though it should work. With a quick internet search I found [url="http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=289008"]this[/url] and the follow-up [url="http://lwjgl.org/forum/index.php?topic=3691.0"]LWJGL bug report[/url]. The poster's setup is virtually identical to mine (Win7 64-bit, Radeon 4850), so I strongly suspect that it's a driver problem; it's just not dealing with layered textures correctly. In the meantime I suppose I'll use karwosts' idea and just manually fill the texture with a solid color in a simple shader. --Alan
6. ## Problem with clearing a 3D texture in FBO

I'm working on a project in which I want to render stuff into a 3D texture. Once I got everything set up and started testing it, it looked as if it wasn't clearing properly. My shaders seemed to be drawing correctly into the texture, but of course I want to clear it every frame before using it, and it only seems to be clearing the first layer of the texture, leaving the rest as is. To prove to myself that I wasn't just missing something in my messy project code, I wrote a little test program with GLUT to see if I could narrow down the problem. The problem persists, and I'm not sure what is going wrong. Full program (under 300 lines, needs GLEW and GLUT): [url="http://dl.dropbox.com/u/23346083/3dTexTest.cpp"]3dTexTest.cpp[/url]

Framebuffer and 3D texture setup:

[code]
glGenTextures(1, &_volumeTex);
glBindTexture(GL_TEXTURE_3D, _volumeTex);
// Fill texture with some initial colors
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, VOL_DIM, VOL_DIM, VOL_DIM, 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid*)_3dTexu);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

glGenFramebuffers(1, &_volumeFBO);
glBindFramebuffer(GL_FRAMEBUFFER, _volumeFBO);
// Let glFramebufferTexture attach all layers of _volumeTex to FBO, on color attachment 0.
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, _volumeTex, 0);
if (FboStatus()) { Deinit(); return 1; }
glBindTexture(GL_TEXTURE_3D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
[/code]

Every frame I do this, then draw a box with 3D tex coords:

[code]
glPushAttrib(GL_VIEWPORT_BIT);
glBindFramebuffer(GL_FRAMEBUFFER, _volumeFBO);
glViewport(0, 0, VOL_DIM, VOL_DIM);
glClearColor(1.0f, 1.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glPopAttrib();
glBindFramebuffer(GL_FRAMEBUFFER, 0);

glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Draw the box...
[/code]

If I comment out the clear of the FBO, the 3D texture stays the same as what I initially stuck in it (of course). However, when I do clear it here to yellow, only the bottom (i.e. far) layer of the texture is cleared: [img]http://dl.dropbox.com/u/23346083/3dTexTest.png[/img]

Am I setting something up wrong? Is this a problem with my driver (AMD Radeon 4850, Catalyst 11.2 on Windows 7)? I've checked the FBO status and for any GL errors along the way, and nothing shows up. As I said, in my actual project I have a shader that draws into the texture using gl_Layer in the geometry shader, and that works; it's just this clearing thing that I don't get. Hopefully someone can see what's wrong. --Buzzy
7. ## Global Illumination Honors project!

Here are some researchers who've released papers about this sort of thing recently:
Chris Wyman: http://www.cs.uiowa.edu/~cwyman/pubs.html
Tobias Ritschel: http://www.uni-koblenz.de/~ritschel/
I'd recommend checking out some of their papers, then following the references in those papers, then using Google Scholar to find papers that cite them. Also, Ke-Sen Huang's Resource for Computer Graphics is a fantastic resource. Your university probably has an ACM account, in case you find any SIGGRAPH papers that require a login to view. Hope that helps, and good luck with the project! --Buzzy
8. ## Started DX10 - I'm hopeless at the maths!

If you're interested in learning linear algebra, the MIT OpenCourseWare site has an intro course on linear algebra (and matrix theory) with video lectures. Link. Some of the textbook chapters are posted too. It's rather academic, and not focused on game development, but it should give you a good introduction. --Buzzy
9. ## Crytek: pc gaming not worth it

Quote:Original post by Promit
Here's a hint: nobody wants to play on a 19" LCD monitor when there's a 52" HDTV in the next room.

Actually, a 19" monitor at two or three feet away takes up about the same area of your field of view as a 52" HDTV at six to eight feet away. And resolutions are comparable (consoles that run in 1080p have a slight advantage atm, but PCs use antialiasing). Just thought I'd point that out. :)
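To put rough numbers on that (my own arithmetic, treating both sizes as diagonals [i]s[/i] viewed from distance [i]d[/i]), the visual angle subtended by a screen is [eqn]\theta = 2 \arctan\left(\frac{s}{2d}\right)[/eqn] A 19" screen at 30" (two and a half feet) gives 2 arctan(19/60) ≈ 35°, while a 52" screen at 84" (seven feet) gives 2 arctan(52/168) ≈ 34°, so the two really are nearly identical.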
10. ## Triangulation and normal vectors

Hmmm... If you're dealing with (mostly) convex models, you could use that fact to see if you have the 'right' normal. For example, if you're working with a sphere, pick a vertex from every triangle, make an 'out' vector from the center to that vertex (just subtract the center position from the vertex position), then see if the dot product of the normal and 'out' is positive. If it's not, negate the normal. But that only works in general for convex shapes, which aren't too common. You could, I suppose, subdivide the model into convex hulls, but that seems like more work than necessary. I would probably just try to make sure that the vertices for each triangle follow a counter-clockwise ordering (at least to denote which side is out), and use that to determine which edges to use when calculating your normals. I can't think of any other easy way of doing it, I'm afraid.
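A small sketch of that convex-case check (the types and names here are mine, just for illustration):

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}
static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// For a convex model: if the normal points against the centre-to-vertex
// "out" direction, flip it so it faces outward.
Vec3 orientOutward(const Vec3& normal, const Vec3& vertex, const Vec3& centre) {
    Vec3 out = sub(vertex, centre);      // centre -> vertex
    if (dot(normal, out) < 0.0)          // pointing inward?
        return {-normal.x, -normal.y, -normal.z};
    return normal;
}
```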
11. ## Triangulation and normal vectors

You can figure out which edges to use based on the winding order of your vertices. In a right-handed coordinate system, if your front faces are drawn in counter-clockwise order (the standard in OpenGL; I forget what D3D uses), then the normal for triangle 1-2-3 will be edge 1-2 x edge 1-3. Then you probably want to normalize it. Does that make sense?
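In code, that cross-product rule looks something like this (struct and function names are mine):

```cpp
#include <cassert>
#include <cmath>

struct V3 { double x, y, z; };

// Face normal of triangle (a, b, c) with counter-clockwise winding in a
// right-handed coordinate system: normalize(cross(b - a, c - a)).
V3 faceNormal(const V3& a, const V3& b, const V3& c) {
    V3 e1 = {b.x - a.x, b.y - a.y, b.z - a.z};  // edge 1-2
    V3 e2 = {c.x - a.x, c.y - a.y, c.z - a.z};  // edge 1-3
    V3 n = {e1.y * e2.z - e1.z * e2.y,          // cross product e1 x e2
            e1.z * e2.x - e1.x * e2.z,
            e1.x * e2.y - e1.y * e2.x};
    double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return {n.x / len, n.y / len, n.z / len};
}
```

For a CCW triangle in the XY plane this gives +Z, i.e. the normal faces the viewer.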
12. ## Any game that has radiosity?

Epic is listed as a key partner on the Geomerics website, and their licensing page says:

Quote:Enlighten is also available for licensing for use in Unreal Engine 3.

I'd imagine there will be something out within the next year or so with real-time GI/radiosity.
13. ## Terminology for message/event passing system

I ran across this site a while back. It doesn't go into as much detail as the Gang of Four book, but it's rather handy for looking up and learning about patterns, and it covers more of them than the GoF does too. Chain of Responsibility and Observer are, I believe, the two you described.
14. ## Programming Languages and Technologies to Learn

I'd recommend Java (since, I presume, it's very close to the C++ and C# you know). And I think that taking a look at some scripting languages couldn't hurt either. Perl, Python, and/or Ruby would all be good to at least have a peek at, and Bash is really useful if you ever use a *nix environment. A functional language would be a good investment, like Haskell or Scheme. Prolog and SQL would be good too, to round out the list. :)
15. ## Tone Mapping problem for HDRR

OK, I'll try and take a stab at this. I've never implemented HDR before, so I may be a little off, but I'll do my best.

From what I've read, the med_lumi is the average luminance of all pixels on the screen. The DirectX SDK sample I have here uses that equation to average four pixels' luminance together, downscaling the render texture by 0.25x0.25 every pass. They do it four times, so since they started with a 256x256 texture, it basically finds the average of all pixels' luminance and puts it in a 1x1 texture. Since each pixel of the original texture can have any value in the HDR range, this average probably has a big number in it.

Next, the image_key is supposed to be the _desired_ luminance of the scene. So if the scene is really bright with med_lumi = 1000.0, but you only want a luminance of 5, then exposure is the ratio of the two (0.005). I don't know what value would be good for the image_key, but I'm guessing this may be your problem. Try playing with it and see what you can get.

Now, for every pixel drawn, it scales the luminance of that pixel by this exposure to get scaled_lumi, then tone-maps it into final_lumi. This is where it gets squashed to the [0,1] range. If you try putting numbers into that little equation with a calculator, you see that it does what you want. Here are some examples: 0.0 => 0.0, 0.1 => 0.09, 0.25 => 0.2, 0.5 => 0.333, 1 => 0.5, 2 => 0.667, 10 => 0.909, 100 => 0.99, etc. So now that you have a nice LDR luminance for that pixel, multiply the color by it and display. Repeat for the next pixel.

So like I said, I think your desired average luminance (image_key) may be the problem. The other problem I can see may be in finding what the actual average luminance is (med_lumi). Double-check that to make sure it's giving the right number. Hope that helps. If I made a mistake somewhere, let me know. :)

Edit: I just went and ran the DX sample and a few things became clear to me. The image_key should be between 0 and 1, which makes sense, because you probably want a luminance in the LDR range. Assuming you have that, I'd make extra sure your med_lumi is doing what it's supposed to. You could try doing it the slow way: send the luminance texture back to the CPU, linearly go through all the pixels and find the average that way, then compare it to the shader's med_lumi. Slow, yes, but it may show an error in the way the shader is doing it. Aside from that, I can't imagine what else might be going wrong. [Edited by - Buzzy on November 16, 2007 6:43:49 PM]
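The pipeline described above fits in a few lines on the CPU; this is my reading of the post's variables (exposure as image_key over med_lumi, and an x/(1+x) squashing curve), not code from the SDK sample:

```cpp
#include <cassert>
#include <cmath>

// exposure = desired scene luminance (image_key) / measured average (med_lumi).
double exposure(double imageKey, double medLumi) {
    return imageKey / medLumi;
}

// Reinhard-style tone map: squashes any non-negative luminance into [0, 1).
double toneMap(double pixelLumi, double exposure) {
    double scaled = pixelLumi * exposure;  // scaled_lumi
    return scaled / (1.0 + scaled);        // final_lumi
}
```

Plugging in the examples from the post: toneMap(1, 1) gives 0.5, toneMap(2, 1) gives 0.667, and toneMap(100, 1) gives 0.99, matching the calculator values above.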