How about a Diablo-like action RPG? You could procedurally generate the dungeon layouts, as well as things like the enemies, the loot, and even the textures if you wanted. It also lends itself well to a semi-persistent multiplayer world. I'm not sure there's anything you could put into the design to really take advantage of parallel programming, aside from moving things like physics and audio processing to their own threads.
Buzzy
Member Since: 30 Jan 2000 — Offline — Last Active: Jul 20 2015, 02:30 PM
Community Stats
Group: Members
Active Posts: 256
Profile Views: 1,880
Member Title: Member

Gender: Male
Location: Canada
Posts I've Made
In Topic: CS Honors Project
13 December 2012 - 08:03 PM
In Topic: Don't start yet another voxel project
26 June 2012 - 03:25 PM
So a while ago I saw a video for a 4D puzzle game (Miegakure). I thought it was really neat, but it got me thinking: what would an actual 4D renderer look like? What's the best way to represent the fourth dimension? I thought about using 4D tetrahedral models, rendered with a shader to select the current 3D "slice", but that seemed too unwieldy. The most straightforward way, in my mind, was to take the "raycasting 3D voxels" concept and just add a fourth dimension.
My program uses a 4D sparse voxel octree (I call it a hypertree) which acts exactly the way you'd expect: each dimension splits in two, which means a node has up to 16 four-dimensional child volumes. I copied the ray casting algorithm from the Laine and Karras SVO paper (minus the contours) and added an extra dimension to everything. To visualize the fourth dimension (W), I leave Z as up and down, but rotate the viewer's other three dimensions so that W replaces X or Y. Mathematically it works quite nicely, and doesn't look too bad.
One of the biggest issues I had with it is that a 4D hypertree can get very big very quickly. Since every node can have 16 children, if I were to store all the leaf nodes I'd only be able to work with relatively shallow trees (e.g. at 4 bytes per node, seven levels is 1 GB). Since it's a sparse tree I don't store all of this, but the potential is there. I also came up with two other solutions to this size problem. The first is to have portal nodes, which store a transformation matrix that teleports viewing rays (or object positions) from that node to some other node and orientation. So even if the entire world is only 128 leaf nodes on a side, you can make larger environments by seamlessly hijacking other (unused) dimensions. The portal transformation does incur a performance hit for every ray-portal intersection, though.
My second solution to the size problem is to not store unique geometry at the bottom of the tree. Using a palette of premade leaf node "tiles", you can give the environment more detail without having to store it all uniquely. Or at least that's how it would work; I haven't actually implemented it yet. I got the idea from watching that Unlimited Detail video, which looks like it uses a similar idea with 3D tile nodes.
My other issue with a 4D renderer is that generating interesting content is difficult to do without an editor. I stopped working on it about the time I realized that I'd need to make an editor to get the full potential out of it as a concept. I'll probably pick it up again one of these days though.
So that's my experience with "voxels". If anyone wants me to go into more detail about anything I can, but I don't want to post the program right now.
In Topic: Spherical Harmonics comparison
23 March 2012 - 07:25 PM
You could take the difference between the coefficients of the two, then integrate over the sphere with these new coefficients, but using the absolute value of the function, to get the L1 distance. To integrate, I'd say probably just do a basic Monte Carlo integration with a set of a few dozen or so points on the unit sphere that you can plug into the resulting difference SH function. This should work because the L1 distance between two functions is something like

d_1(f, g) = \int_{S^2} \left| f(x) - g(x) \right| \, dx,

where

f(x) = \sum_i c_i \, y_i(x) \quad \text{and} \quad g(x) = \sum_i d_i \, y_i(x).

Here, c and d are your coefficients, and the y's are the SH basis functions. So a Monte Carlo integration would be something like

d_1(f, g) \approx \frac{1}{N} \sum_{k=1}^{N} w(x_k) \left| \sum_i (c_i - d_i) \, y_i(x_k) \right|

for a set of N points x_k (uniformly distributed) on the unit sphere. Here w(x) is a weight function, which would be equal to 4\pi if you use a uniform distribution on the sphere.
You could also replace taking the absolute value with an L2 norm (square the difference inside the integral, then take the square root at the end) to get the L2 distance. I think I got all that right... hope that helps.
Buzzy
In Topic: Order-Independent Transparency
23 March 2012 - 04:10 AM
You might find these interesting:
Stochastic Transparency: http://www.nvidia.com/object/nvidia_research_pub_016.html
Colored Stochastic Shadow Maps: http://research.nvidia.com/publication/hardwareacceleratedcoloredstochasticshadowmaps
The first is about doing screen-door transparency at a sub-pixel scale, using the multisampling hardware but randomizing the pattern. The second extends that for use in shadow maps. You might also find some other techniques in their related-work sections.
It sounds like your algorithm will be very useful, and I'm looking forward to reading about it.
Buzzy
In Topic: Problem with clearing a 3D texture in FBO
23 March 2011 - 05:55 PM
So I started to look at other means of checking whether or not I'm setting things up correctly. While the framebuffer status says it's OK, and nothing seems to be causing any GL errors, I started checking all the framebuffer attachment parameters (glGetFramebufferAttachmentParameteriv()). All parameters seem to check out except the one I want, GL_FRAMEBUFFER_ATTACHMENT_LAYERED. Querying it throws a GL_INVALID_ENUM error even though it should work. With a quick internet search I found this and the follow-up LWJGL bug report. The poster's setup is virtually identical to mine (Win7 64-bit, Radeon 4850), so I strongly suspect it's a driver problem; the driver is just not dealing with layered textures correctly.
In the meantime I suppose I'll use karwosts' idea and just manually fill the texture with a solid color in a simple shader.
Alan