
# Reflexus

Member Since 21 Oct 2011
Online Last Active Today, 06:45 PM

### #5062410 does this code look ok?

Posted by on 16 May 2013 - 05:23 PM

I love how the screenshot has nothing to do with this topic, or even the code. hahaha the window title says "Direct11" and all they needed to tell us was "invalid operation"

You have the anti-virus update reminder shit overlapping the screenshot, and ... both IE and Firefox in your taskbar ...

Just hilarious.

### #5062328 Sparks Effect

Posted by on 16 May 2013 - 11:29 AM

When I do that I don't see the lines.

Where are their positions defined anyway? If you're just using identity, then you should make sure they're in that small limited frustum which is tangent to the XY plane (i.e. screen space), and make sure your Z-clipping fits. lol, otherwise, actually use the right matrices. I'll try to explain below. This is the code you showed me:

```cpp
D3DXMATRIX mWorld;
D3DXMatrixIdentity(&mWorld);

mWorldProjection = camera->getProjectionMatrix();

effect->SetMatrix( "mWV", &mWorld );
effect->SetMatrix( "mWVP", &mWorldProjection );
```

mWorld should be world * view, and mWorldProjection should be mWorld * your projection matrix. What is the name of your view-space matrix? You don't seem to have any clue what your own matrices are or what they do. I think you need to go back and properly understand how transformation matrices are used to render 3D scenes.

Do you understand what a world matrix really is, and what a view matrix is?

WVP = world * view * projection. In simpleton terms: world is how individual content is transformed, view is how the virtual camera is positioned/oriented, and projection is how things are projected onto the screen plane. D3D9 abstracts these, but eventually it just concatenates them into a single matrix and multiplies geometry by it to get its final position in clip space (pretty much screen space; the difference is somewhat trivial). So here, the effect asks for two matrices, WV and WVP. WV is world * view, and WVP is just WV * projection. Got it?

By the way, I may have thought of a good and fast way to do real volumetric lines, but I'll need to try it for myself before I start spewing nonsense.

### #5061700 Sparks Effect

Posted by on 14 May 2013 - 12:46 AM

Well, are you? Don't downvote me. I'm trying to help. Come on...

### #5061698 Explosion

Posted by on 14 May 2013 - 12:32 AM

I'd write a volumetric renderer directly into your game's engine which caches into a texture atlas and re-pieces these billboard textures together during realtime rendering. I bet you can even bake proper normal maps for the volumes' textures (I mean, appropriately retaining light scattering/propagation effects). Your atlas will probably need a two-dimensional structure: one axis for variety and one for sequence. Investigate ways to handle occlusion and blending effects (for the flowing layers of fire vs. smoke).

I have the following properties:

- Velocity

- Wind

- Gravity

- Size

- Texture UV

WTF. The top three can all be unified into a single property, with acceleration controlled by a single parameter and any combination of forces handled by vector addition (just let the acceleration be an open input which may be manipulated however desired). 'Size' is pretty ambiguous. Texture UV... wtf, well idk how that's useful.

By the way, why are you trying to generalize? You're trying to make an explosion. I promise that you won't get it looking as good as you want it to look if you start with a senseless set of maybe-good parameters. Hardcode it, foo. There's no point in generalizing it. I promise. Don't. Don't. Don't. Don't. Just generalize a point-billboard system. That's as far as you need.

### #5061697 Sparks Effect

Posted by on 14 May 2013 - 12:31 AM

Are you using point-sprite billboards? lol

### #5058435 What causes light scattering and absorption?

Posted by on 01 May 2013 - 03:54 PM

Just start with the fact that you can't get rug burns by standing on carpet. If you skid, it can be painful... unless it's very fine fabric rather than carpet... or you'll tear the material if it's too thin... *wink*

(Tip: Don't think about this kinetically, think about the stimulus of pain and destruction! xD)

Haha, or you can think about spraying various liquids onto surfaces such as a boulder, a bowling ball, solid foam, flat marble and so forth... (consider soft water in comparison to Genuine Canadian Maple Syrup, Satisfaction Guaranteed?)

Some rhetorical questions:

Why do you suppose that diagram of Mie scattering *small particle* and *large particle* demonstrates more focused scattering when a photon hits a large particle, in comparison to the small particle?

Why does a peach get highlighted white around its contour, yet pink in the center?

If you can answer those questions, then you'll be ready to think about these phenomena intuitively.

What kind of interaction between light and matter leads to wavelength-dependent absorption?

Understand the meaning of 'wavelength' (magnitude) and your question will be answered. Remember that 'wave' describes distribution, not form.

### #5029638 Realtime non-raytraced curved mirrors

Posted by on 06 February 2013 - 11:17 PM

Rasterization-based non-linear beam tracing. Unfortunately the first bone-head to have implemented it filed a patent at Microsoft...

https://duckduckgo.com/?q=nonlinear+beam+tracing

Their implementation isn't that great anyway. Microsoft blows.

### #5024137 Is Clustered Forward Shading worth implementing?

Posted by on 21 January 2013 - 07:59 PM

Haha, whoops. I don't know why I wrote BDRF. Here's a reminder (saw this on Twitter):

BeaRDed F!

### #5001078 Game Design Crysis!

Posted by on 14 November 2012 - 08:26 PM

Spelling crisis!

### #4985622 Yet another voxel terrain system: presentation of my work and ideas

Posted by on 30 September 2012 - 08:35 PM

In a way, sampling is used to interpret implicit information (with a lot of ambiguity to worry about) from an incomplete data set, i.e. spatial information but nothing else; hence the need to use a just-in-time generative approach based on the spatial information. A procedural model will describe additional features by inserting their associated details on top of or between the initial form of data. So it's not even necessary, for that matter, to specify a material for each face of every voxel (which would be extremely memory-consuming). I believe there's a huge variety of techniques you could conceive to describe materials. To elaborate this idea, look at this illustration and think about the way these cliffs work:

Although this works by the idea of isolines bridging between hierarchical planes of terrain, it's quite similar to Frenetic Pony's solution (a scalar map of grass density), which may be multichannel to support multiple planes. I'm guessing that an efficient extension of this concept may require some form of spatial hashing, because I'm not really sure how you would integrate this with voxels. As many of you may know, I hate voxels. Though it is in fact their very advantage, the structures' strict Euclidean uniformity heavily sacrifices the data's sense of any spatial character foreign to Euclidean regularity, and especially non-spatial properties. I believe it's very possible to accomplish the same features voxels are often utilized for with novel alternatives that may perform at least as well as or better than voxels. In other words, it's possible to imagine structures which remain as uniform, predictable, and thereby as efficient mediums of spatial dynamics as voxels, without sacrificing the extensibility yielded by procedural definition. If you align procedural definition with procedural execution, then you have a model perfect for your purposes.

Think about the way 3D modeling programs represent triangle meshes. Not only are they capable of storing attributes per vertex, edge, face etc., but this structure is also optimal for manipulation. Think about applying normal smoothing onto a plain list of vertices. Here are the steps required:

1. For each vertex, find other vertices which have the exact same position.
2. Average their normals (sum the normals and then normalize).
3. Go back through all of these vertices and find where you need to apply this average.

... without any stacks or intermediate storage regarding the mesh. Now that's just completely stupid... but it's an extreme example of the lacking approaches people often take.

For your case, I recommend you just describe enough for the sampling to know what's appropriate, i.e. a single scalar code for each voxel which corresponds to a six-sided set of materials. Examples: some material sets may consist entirely of the cliff texture, sometimes a mix of messy grass at the top and cliff on the sides, dirt on top and red rock on the sides, or chalky dirt with weeds on top and cobble on the sides etc.

### #4967220 Don't start yet another voxel project

Posted by on 07 August 2012 - 08:35 PM

Listing the potential of voxels is almost like listing the potential for games which use 8-bit indexed color graphics, tilesets, and extremely small display resolutions. From the aspect of game mechanics, this field of possibilities is nearly identical to voxels' -- even more congruent than to traditional 3D graphics ('polygons') -- yet bounded by 2-dimensional space (and crappy colors). I certainly do have the imagination necessary to understand the potential of voxels when applied to games! Personally, I think the way we've figured to utilize voxels is somewhat idiotic, so far. They still have their place in computer graphics, although I don't think anyone currently understands, comprehensively, where they belong. Neither do I have the interest now to correctly elaborate how voxels might appropriately become useful in the future.

### #4958170 Why Game Programming?

Posted by on 11 July 2012 - 03:23 PM

When we look at Unity or UDK, they exactly do whatever we want.

You must have some sick desires... but seriously, I strongly disagree with that.
You can name a lot of features they might have that apparently suit your purposes, but they could be largely different from how you want them. I've seen many games made with these tools (Unity, UDK, C4 etc.), and judging the quality of their features as applied across a variety of games, I'm very happy to roll my own customized utility and core software. Apparently, many aspiring game developers who utilize these tools do not have the maturity in game development to deeply comprehend the quality of a certain implementation as reflected in a certain application. I'm never really pleased when I see an engine re-purposed like that. I heard the UE4 framework will be much more C++ oriented, and I'd say this is the reason.

### #4952762 Don't start yet another voxel project

Posted by on 25 June 2012 - 01:17 PM

http://publications....attlefield3.pdf

Clearly the terrain is precomputed to some extent, to fit the requirements of the game (e.g. a flat space here and there to place a building). Obviously the finer details can be procedurally generated via any method to simplify the artist's work but ultimately there is still an artist behind each map.

Is it just me, or are you . . .

First you do hello world when you enter programming as a beginner, and then you do a chess program when you transition to intermediate level programming. This is how it was for me in college and voxel projects are analogous to chess programs in my eyes. It's probably the easiest project that you can take on that covers such a broad range of topics while also having a decent amount of complexity. I also firmly believe that everyone interested in game programming should do one when they get the basics of 3D down. If I were teaching a game programming class there definitely would be a voxel project in the syllabus.

A decent game programmer should know how to import/export a variety of assets, including common formats, and custom/specialized data (maps, saving the game's state etc). A voxel project seems like a good reason to avoid that, and other essentials. Perhaps they can do that "to get the basics of 3d down," because I don't know how it could be considered much more. I mean, as a student project... like what you've made: http://blog.neumont....en/infinecraft/

That is essentially the bare "basics of 3D." I can't see much learning value in such a project. If the students were constructing much more sophisticated voxel projects, well, that would be a little too specific. It would be a good reason to neglect many other common skills used in real game programming. Even if dynamic/interpolated voxel terrain systems make a strong debut in a few future titles (I wouldn't doubt it), it's likely they would remain too limiting, in a number of aspects, to be adopted by much of the industry. It would be like a "parallax mapping" project. There are a few cases where it works well and adds some value, but in the most common implementations, it looks like shit.

### #4952393 Don't start yet another voxel project

Posted by on 24 June 2012 - 12:52 PM

@Bacterius

I think the main attraction with voxels (setting the Minecraft hype apart for now) is the simplicity of generating procedural content with them rather than with triangles

No. There's always simplicity in procedurally affecting content, where the nature of its containment always has particular advantages. Both triangle-mesh and voxel based substances have exclusive advantages. Skeletal animation is an equivalent for triangle meshes, just like the simplicity you've noted in voxels. But with voxels, the basis of content has always (in practice) been generated by atomic functions (random noise functions etc.). Later in the pipeline, after the basis of content is generated, reiterated operations can be applied to enhance the existing content (which certainly is a procedural process, but not procedural generation). Voxels only have a single, static basis of content (the homogeneous volume), which makes them problematic for the purposes of procedural generation, contrasted with polygons. "Procedural generation" isn't generic for "generation," and neither is "generation" generic for "procedural."

Obviously it's difficult to generate a highly structured object procedurally, like a sculpture or something

Yes, it would be difficult. I can imagine using triangles to make something such as a seashell, a pillar, a simple water fountain, a pot, or a rock that doesn't look like a blob (like with voxels). But if you actually wanted to craft a procedure for generating a human statue (I'm assuming a human statue), you could either go by a roughly user-defined procedure (i.e. replicating the steps an artist uses to model a human), or you would require an insanely sophisticated mechanism for encapsulating the nature of a human's shape, to an extent. We can begin to imagine how this would work, and how it might determine the final surface's triangulation (if using triangles), but it's not worth discussing here (let's stay on topic).

This means programmers from all over the world no longer have to painstakingly hire artists to design high quality terrain meshes and textures

The original Marching Cubes algorithm had a patent (but it has expired), and there's an ever-growing crapload of new patents regarding voxel techniques. Users still need high-quality textures, unless they generate them (but voxels aren't for that anyway). I don't think very many (non-voxel) algorithms which can apply to generating terrain meshes (e.g. height-maps) have been patented. And since when was voxel terrain higher quality than any other generative technique? I've marked "low-resolution" in bold:

Obviously if you are going to render raw voxels, no level of resolution will fix the blocky look. But if you interpolate voxels (e.g. marching cubes) you can get away with a surprisingly low-resolution mesh thanks to normal/parallax mapping.

As far as I can tell, normal/parallax mapping hardly relates to interpolating voxels. That interpolation can also often be too blobby.

You can't do that with precomputed terrain.

Plenty of games don't use either "precomputed" terrain or voxel terrain. Take BF3 for example.

### #4951920 Don't start yet another voxel project

Posted by on 22 June 2012 - 10:39 PM

Unless you have theorized something to add to whatever has been done before (I doubt it), I recommend you don't start yet another voxel project so vaguely and senselessly.

Discuss:
• Why do you want to, anyway? Do you even know how to render anything decently credible, using traditional methods?
• Unless at an extremely fine and memory-consuming resolution, aren't voxels too homogeneous to bear much arbitrary definition? Do you concretely understand how much memory voxel data might consume, given theoretical circumstances? If you are experienced with voxels, how much memory consumption have you been able to mitigate (not theoretically!), and by which techniques?
• If you plan to work with voxels, or already have (preferably), in which kind of way?
  • Direct Rendering
    • Raycasting
    • Other
  • Polygonal Construction
    • Perfect Cubes (Minecraft-like)
    • Marching Cubes
    • Other
  • Simulative Application
    • Electrodynamic Propagation
    • Fluid
    • Other
  • Other