
# JTippetts

Member Since 04 Jul 2003
Offline Last Active Oct 14 2016 12:47 PM

### #5289965 How do I create tileable 3D Perlin/Simplex noise?

Posted on 03 May 2016 - 04:38 PM

If you don't need it to tile in all 3 dimensions, then 5D noise can do it. If you need it to tile in all 3 dimensions, then 6D will work.

I wrote a journal entry on this topic some time back: http://www.gamedev.net/blog/33/entry-2138456-seamless-noise/

If you don't want to use higher orders of noise, then you can use the blending method, or you can use lattice wrapping, as one of the answers in that StackExchange thread suggests. This assumes your noise is a simple, vanilla Perlin or simplex fractal, without anything more complex such as domain rotations.

The Accidental Noise Library ( https://github.com/JTippetts/accidental-noise-library ) uses the 6D method, if you care to check it out; the implementations of the various seamless 3D noise mapping modes are in that repository.

Note that the seamless method using higher orders of noise operates via an interesting domain transformation (a later poster to my journal identified it as a Clifford torus ( https://en.wikipedia.org/wiki/Clifford_torus ); you learn something new every day), so it can have consequences if you are, for example, using repeating patterns as part of your pipeline. For cloud noise it works well, but the domain transformation will skew any pattern functions beyond recognition.
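To make the idea concrete, here is a minimal Python sketch of the mapping for the 2D case (function names are my own, and the noise function itself is left out). Each tiling axis is mapped onto a circle, so the 2D input becomes a 4D point; sampling any 4D noise function at that point produces 2D noise that tiles automatically, because the mapping is exactly periodic:

```python
import math

def torus_map(x, y, period_x=1.0, period_y=1.0):
    """Map a 2D point onto two circles, giving a 4D point on a
    Clifford-torus-like surface. Sampling any 4D noise function at this
    point yields 2D noise that tiles with the given periods."""
    ax = 2.0 * math.pi * x / period_x
    ay = 2.0 * math.pi * y / period_y
    # Radius chosen so each circle's circumference is 1, keeping the
    # feature scale of the noise roughly comparable to the input space.
    r = 1.0 / (2.0 * math.pi)
    return (r * math.cos(ax), r * math.sin(ax),
            r * math.cos(ay), r * math.sin(ay))

# x and x + period land on the same 4D point, so the noise values there
# are identical and the tile edge is seamless.
p0 = torus_map(0.0, 0.3)
p1 = torus_map(1.0, 0.3)
assert all(abs(a - b) < 1e-9 for a, b in zip(p0, p1))
```

The 3D seamless case works the same way, with three circles giving the 6D domain mentioned above.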

### #5289798 Creating the game world.

Posted on 02 May 2016 - 05:45 PM

Okay, thanks for telling me how. But for real: I'm only 14, so I have time; I don't see why we shouldn't shoot for perfect. We have time. I want to figure out how to join the process to make this a more realistic goal, because I think once we can replicate every speck of dust, we will be able to hack into the brain at that point. Let me pitch one more way, and tell me if it is at all different from what I have been saying. I want a machine that can copy the evolution of everything from the beginning of the universe. Now of course it would need too much processing power, but by computer magic I meant the concept of magic. Instead of each cell going through every process, magic can replace stuff to make things faster. Then a magical game would be able to use that world.

a) You don't have as much time as you think you do.
b) It's difficult to perfectly simulate something that you have an imperfect understanding of. The reason scientists are still digging in the dirt and smashing particles together underground is because they still do not yet fully understand the universe, the evolution of things, or the essence of the most basic processes that drive everything we see. All that stuff you want to simulate, you have to have the math for it, and right now our models are imperfect at best.
c) If you want to simulate all the atoms in the universe, you would need enough storage to store information about all the atoms in the universe. Technically, that's impossible. You would need more atoms than are in existence to provide the computational framework.
d) If dust simulations are your thing, you could certainly model a fairly realistic simulation on a smaller scale. Can't say how interesting it would be, but you could definitely do it.
e) They're already hacking the brain.
f) Magic is not a magical answer to magically doing anything.
g) There is a philosophical theory that posits that we ARE a simulation. Theoretically, if you could simulate the behavior of each and every particle and sub-atomic particle, using a consistent set of math, you could do it one instruction at a time. A single "frame" of universe time might take, for you subjectively, a thousand thousand thousand years, but for the entities being modeled it would be the tiniest increment of time. It's a fairly windy theory, suitable mostly for alcohol- or drug-fueled navel-gazing at best, imo, and to the people outside of the simulation (assuming they live long enough to see any significant chunk of it) it's probably not all that interesting most of the time.

### #5288989 directx volume terrain

Posted on 27 April 2016 - 02:45 PM

Wow, nice idea.  Maybe I don't have to recalculate all the triangles, but I still have to create a dynamic buffer for each chunk :/

Probably, yes.

And I need to store a lot of volume data. What is the best way to increase speed? Binary files?

Well, binary files usually load more quickly than text files, yes. Other than that, you'll need to profile to find where your bottlenecks are, and apply optimizations as needed.

### #5288958 directx volume terrain

Posted on 27 April 2016 - 12:45 PM

You will need to recalculate triangles only when a value is changed. You can also reduce the amount of recalculation that is performed by splitting your volume up into 'chunks', where each chunk is some arbitrary dimension of cells, and running marching cubes on a per-chunk basis. Then you only need to regenerate the chunks that are actually affected by the edit, rather than the entire landscape mesh.
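As a sketch of the bookkeeping involved (the chunk size and function names here are my own, purely illustrative): a grid cell on a chunk boundary serves as a corner for cubes in the neighboring chunk too, so an edit there has to dirty more than one chunk:

```python
CHUNK_SIZE = 16  # cells per chunk along each axis (arbitrary choice)

def chunks_touched_by_edit(cx, cy, cz, chunk_size=CHUNK_SIZE):
    """Return the set of chunk coordinates whose meshes must be rebuilt
    after the grid value at cell (cx, cy, cz) changes. The cell is a
    corner for cubes whose minimum corner lies at (cx-1..cx, ...), so
    an edit on a chunk boundary can dirty up to 8 chunks."""
    chunks = set()
    for dx in (-1, 0):
        for dy in (-1, 0):
            for dz in (-1, 0):
                chunks.add(((cx + dx) // chunk_size,
                            (cy + dy) // chunk_size,
                            (cz + dz) // chunk_size))
    return chunks
```

An interior edit dirties a single chunk; only edits on chunk seams fan out to the neighbors, so most edits stay cheap.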

### #5288917 directx volume terrain

Posted on 27 April 2016 - 08:54 AM

If you store your terrain as a 3D grid of values, then you can edit the terrain by changing the value stored at a particular location. You can generate your mesh using marching cubes, using the 3D grid as the density function. Each cell of the 3D grid would be interpreted as a corner point for a set of cubes. Editing operations could be either binary (set a particular array location to either 1 or 0) or smooth (add or subtract a small increment). Smooth editing would result in smoother mesh generation.

As for the isolevel, you pretty much just pick it when you start. Set it at, say, 0.5, and you don't usually change it again. It's simply the arbitrarily-chosen value at which you locate your isosurface.
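A minimal sketch of such a grid and a smooth edit, in Python (the names and storage layout are illustrative, not from any particular engine):

```python
def make_grid(nx, ny, nz, fill=1.0):
    """A dense 3D density grid stored as a flat list; here 1.0 is fully
    solid and 0.0 fully open, with the isosurface at an isolevel of 0.5."""
    return {"nx": nx, "ny": ny, "nz": nz,
            "data": [fill] * (nx * ny * nz)}

def index(grid, x, y, z):
    # Flat index for cell (x, y, z), x varying fastest.
    return (z * grid["ny"] + y) * grid["nx"] + x

def smooth_edit(grid, x, y, z, amount):
    """Add or subtract a small increment, clamped to [0, 1]. Repeated
    small subtractions 'dig' gradually; a binary edit would simply
    assign 0.0 or 1.0 instead."""
    i = index(grid, x, y, z)
    grid["data"][i] = min(1.0, max(0.0, grid["data"][i] + amount))

g = make_grid(8, 8, 8)
smooth_edit(g, 4, 4, 4, -0.2)   # dig a little at one cell
```

After an edit like this, only the chunks containing the affected cell need their marching-cubes mesh regenerated.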

### #5288785 directx volume terrain

Posted on 26 April 2016 - 12:09 PM

but I can't imagine how exactly it works :/

Any examples in directx?

Marching cubes is this black cube in the video, but what is an isosurface?

The black cube in the video is not "marching cubes". That's just the guy's editing cursor, a visual marker showing the location he's looking at to edit.

Marching cubes is an algorithm, or set of steps, that can convert a density field function into a mesh.

An isosurface can be described as the set of all points where the density function is equal to some value, this value being the "isolevel".

To start with, you need to understand that your volume terrain is basically going to be a mathematical function that, for some input (x,y,z), returns a value, or density, at that location. This output value is typically a floating point value in the range of 0 to 1. So any given coordinate location within the bounds of your world or level has a corresponding density value. The isolevel parameter determines where the boundary between "solid" and "open" lies. If you set the isolevel to 0.5, then any (x,y,z) location whose density value is less than or equal to 0.5 is considered "solid", while everywhere else is considered open.

The tricky part in this type of thing is generating a mesh that follows the isosurface of the density function at the isolevel threshold. The Marching Cubes algorithm is one such technique. It operates by splitting the volume up into discrete cubical cells, and evaluating the density function at each corner of each cell. Cells where some of the corners are "solid" and others are "open" are considered to be part of the isosurface, and the algorithm generates a small bit of mesh geometry for each such cell, representing the divide between the solid and open regions. Once all surface cells are evaluated, the resulting pieces of geometry are consolidated to form the surface mesh of the volume.

The term "marching cubes" comes from the mental metaphor of cubes "marching" across the surface of the volume, since an optimization in the algorithm includes starting at a known surface cell and recursing out to neighbors of that cell that are ALSO surface cells.
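The per-cell classification step can be sketched like this (a toy Python fragment; the corner ordering is arbitrary here, and the edge/triangle lookup tables a real implementation needs are omitted):

```python
def cell_case_index(corner_densities, isolevel=0.5):
    """Classify one marching-cubes cell. corner_densities holds the
    density sampled at the cell's 8 corners, in a fixed corner order.
    Each solid corner sets one bit, giving a case index 0..255 that a
    full implementation uses to look up edge/triangle tables."""
    case = 0
    for bit, d in enumerate(corner_densities):
        if d <= isolevel:        # the "solid" side of the isosurface
            case |= 1 << bit
    return case

def is_surface_cell(corner_densities, isolevel=0.5):
    """A cell intersects the isosurface when it has both solid and open
    corners: case index neither 0 (all open) nor 255 (all solid)."""
    case = cell_case_index(corner_densities, isolevel)
    return case != 0 and case != 255
```

Only cells for which `is_surface_cell` is true contribute geometry; the other 2 cases (fully solid, fully open) are skipped entirely, which is why the chunk-based regeneration above stays fast.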

### #5287328 Trying to understand normalize() and tex2D() with normal maps

Posted on 17 April 2016 - 11:33 AM

why is ( 0, 0.7071, 0 ) not unit length? Is it because this value is < 1?

Linearly interpolating unit-length vectors (which is what the pixel color interpolation is doing) rarely results in a unit-length vector. Consider the following image:

Say you have the normals Green and Blue, and you want to interpolate the normal halfway between them. Linearly interpolating them will result in the portion of the cyan vector that ends at the white line, since linear interpolation interpolates the values along the white line between the endpoints of the two vectors. In order to obtain a unit-length vector, you have to re-normalize it, which restores it to the full unit length it needs to be for proper lighting calculation.
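A quick numeric check of this, in Python (illustrative helper functions, not shader code):

```python
import math

def normalize(v):
    """Scale a vector to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lerp(a, b, t):
    """Component-wise linear interpolation between two vectors."""
    return tuple(ax + (bx - ax) * t for ax, bx in zip(a, b))

green = (0.0, 1.0, 0.0)
blue = (0.0, 0.0, 1.0)

# Lerping two unit vectors cuts across the chord between their tips...
halfway = lerp(green, blue, 0.5)          # (0, 0.5, 0.5)
length = math.sqrt(sum(c * c for c in halfway))
# ...so its length is about 0.7071, not 1.

# Renormalizing restores unit length for correct lighting math.
restored = normalize(halfway)             # approximately (0, 0.7071, 0.7071)
```

This is exactly why the shader wraps the interpolated normal-map sample in a normalize() call.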

### #5287248 Trying to understand normalize() and tex2D() with normal maps

Posted on 16 April 2016 - 08:20 PM

tex2D(NormalSampler, input.mUV) samples a color from the normal map texture. NormalSampler is the sampler associated with the texture, which provides the sampling state; input.mUV provides the texture coordinates to sample from.

A normal stored in a normal map must be "decoded" before it can be used. The R and G channels encode the X and Y values of the normal (tangent to the face), while the blue channel encodes the Z value and is typically set to the same value (255) everywhere for simplicity, hence the nice cool blue color of a normal map texture.
A color channel holds values from 0 to 1, but the X and Y components of the normal can range from -1 to 1, hence the need to decode. Inside the call to normalize, the normal sample is multiplied by 2, then 1 is subtracted from it. This has the effect of converting the channels from the range [0,1] to [-1,1].

Since most normal map generation software uses a constant color for the blue channel, the resulting vector is not of unit length. A proper normal needs to be unit length (meaning that sqrt(x^2 + y^2 + z^2) = 1). That is what normalize() does: it takes a vector of arbitrary length and converts it to a vector of unit length.
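The whole decode-and-normalize step, sketched in Python rather than HLSL for clarity (the function name is mine):

```python
import math

def decode_normal(r, g, b):
    """Decode an 8-bit normal-map texel (0..255 per channel) into a
    unit-length tangent-space normal, mirroring the shader's
    normalize(sample * 2 - 1)."""
    # Remap each channel from [0, 1] to [-1, 1].
    n = tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))
    # Renormalize, since the constant blue channel leaves the vector
    # slightly off unit length.
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

# A "flat" texel (128, 128, 255) decodes to approximately (0, 0, 1),
# i.e. a normal pointing straight out of the surface.
flat = decode_normal(128, 128, 255)
```
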

### #5286344 Game concept

Posted on 11 April 2016 - 12:49 PM

Wait, wait, let me guess.

Shooter McGuy is a grizzled veteran of the Ubiquitous Space Marines, Shooting Division. Aliens have invaded! Shooter must don his space suit and do what he does best in order to save the galaxy. After years in retirement he must attempt to match up at least 3 of the same color of gem in a row, in order to destroy those gems and make room for the ever-falling cascade of gems from above. He can destroy special gems in order to gain access to combos and score additional points.

No, wait. I think it's more like this: Blink is a young boy from the far-off mystical land of Spherule. One day, his uncle gives him a sword and, with his dying breath, pulls the young boy closer and says, "Blink. You must go forth and save the land. You can do so by collecting cards that grant your units powerful attacks and spells, but you can only move 12 times per day. But Blink, there is a secret! If you pull up the Shop screen, and enter your dad's credit card number, you can purchase Droopies that allow you to extend your daily turn limit beyond 12. Blink, to save the kingdom, you NEED MORE DROOPIES!"

Although, when I see the road with the fence, I think it might be more like: Cleve is a miner, lost in a hostile world of block-shaped chunks of stone and dirt, cows made out of cubes, and a layer of burning hot lava lurking deep below. In order to survive, Cleve must delve and craft. And craft, and craft some more. He must grind 4000 linen bandages, gathering the cloth from an unending supply of mobs haunting the Cubic Castle ruins. He must gather 36 piles of steaming spider guts to make Red-rash Goulash. And if he raises his Tailoring skill high enough, he can even craft a frozen-weave burlap sack of glamour to wear on raids.

### #5286093 Event-Listener with Lua and C++

Posted on 09 April 2016 - 11:05 PM

For an interesting take on how one project has integrated Lua and C++, I suggest you take a look at Urho3D. Urho3D provides a component-based framework with event passing, and provides a structure such that a component can be added to an object that wraps a Lua object. The component handles the sending and receiving of events, and the Lua script code can subscribe to listen for events or send events as needed. It's quite an elegant system, but its underpinnings are a little complex. The GitHub repo is here; you can navigate to the Source/Urho3D/LuaScript folder to get a peek at how scripting is done.

As an example from my own game, I have a Lua class called FloatingCombatText. This is a class implemented in Lua, which subscribes to listen for certain events, such as Damage Taken, Low Life, and so on: events that happen to an object during the course of combat. In response to the events it is listening for, it creates floating combat text objects and queues them to a list, and these objects are then displayed as numbers or alerts animating above the entity's head. Simply by adding this script object component to any combat-enabled object, that object will display floating combat text. A truncated version of the FloatingCombatText code:

```lua
FloatingCombatText=ScriptObject()

function FloatingCombatText:Start()
    self:SubscribeToEvent("Update", "FloatingCombatText:HandleUpdate")
    self:SubscribeToEvent(self.node, "SpendResources", "FloatingCombatText:HandleSpendResources")
    self:SubscribeToEvent(self.node, "DamageTaken", "FloatingCombatText:HandleDamageTaken")
    self:SubscribeToEvent(self.node, "HealingTaken", "FloatingCombatText:HandleHealingTaken")
    self:SubscribeToEvent(self.node, "LifeLow", "FloatingCombatText:HandleLifeLow")
    self:SubscribeToEvent(self.node, "PrepareToDie", "FloatingCombatText:HandlePrepareToDie")
    self.offsetheight=1
    self.timetoupdate=0
    self.updaterate=0.25
    self.velocity=2
    self.ttl=1
end

function FloatingCombatText:HandleLifeLow(eventType, eventData)
    self.list:push_back({text="Life low!!", color={r=1,g=0,b=0}})
end

function FloatingCombatText:HandlePrepareToDie(eventType, eventData)
    self.list:push_back({text="Dying!!!", color={r=1,g=1,b=0}})
end

function FloatingCombatText:HandleDamageTaken(eventType, eventData)
    local total=eventData["total"]:Get()
    if total>0 then
        self.list:push_back({text=tostring(total), color={r=1,g=0.15, b=0.15}})
    end
end

function FloatingCombatText:HandleHealingTaken(eventType, eventData)
    local total=eventData["healing"]:Get()
    if total>0 then
        self.list:push_back({text=tostring(total), color={r=0.15,g=0.15, b=1}})
    end
end
```
The Urho3D framework provides ScriptObject as a base 'class' from which all script classes inherit. The Start method is called when the object is created, and it subscribes to the various events from its owning node (to distinguish them from events originating with another node).

The Urho3D framework provides the ability to work with Lua how you see fit. You can write the majority of your game in C++, implementing only certain scriptable components in Lua, or you can implement the entirety of your game code in Lua, using the bound API to implement scriptable components as well as base game logic.

Sorry for the pimping post, it's just that I really do look at Urho3D as an interesting and elegant way in which Lua can be bound to your framework, going beyond the simple call-for-call API binding you typically see.

### #5285681 Game frameworks and engines that aren't Unity, Cryengine, or UE4?

Posted on 07 April 2016 - 06:53 PM

Urho3D is written in C++, has bindings for Lua and AngelScript, supports D3D11 and GL, and provides 2D, 3D, GUI, Bullet physics, networking, sound, navmesh path finding with crowds, etc.

http://urho3d.github.io/

Wow, this is amazing. Also found Atomic Game Engine, which is a fork of it. Not sure which I want to use. Probably just stick with Urho, but the fact the site hasn't been updated in a while is a bit concerning.

Now I need to brush up on my C++...

Urho3D is under active development; the last commit was three hours ago. The site doesn't get updated a whole lot, but there is much activity in the repo itself.

### #5285523 RPG effect resolution flow.

Posted on 06 April 2016 - 08:30 PM

Using events for this kind of thing can work, you just need to be careful about going too deeply into it. Work from a few key events, and for the in-between stuff work locally.

For example, an object receives an Apply Damage event. This event may originate outside the entity (from another mob, from a trap, etc) or from within the entity (from a Damage over Time tick or other debuff, etc...) This event should hold all the information the entity needs to know: damage amount, damage type, etc. Once this event is received, the entity would put out a call (a broadcast event of some sort) letting anyone who is interested know that an Apply Damage event has been received by the entity and, if they are interested in responding, call back with their response. Then the entity sits and waits for all of the interested parties to register their interest in responding. The reactions need to be categorized according to specified rules, and applied according to the specified order. After all reactions are processed, the entity would then send an event, say Take Damage. This event would have the processed, modified and repackaged damage data, and would be sent to the entity itself. Any interested parties can respond to this event (floating combat text to show damage numbers above the head, combat log to show combat results as text, vital stats to reduce life and potentially trigger additional life-related events).

This event->broadcast for reactions->response event structure allows for decent flexibility. As an example:

An environment hazard generates a hit for 16 Fire damage against a player character. The hazard sends the Event "ApplyDamage -16 Fire Enviro" to the player.

The player's damage handling component receives this event, and sends out a call for responses by passing on the event with additional tagging data if needed.

1) Fire Shield (local buff spell) wants to absorb 4 Fire damage.
2) Fire Resistance (local character stat) wants to mitigate 12% of Fire damage
3) Immolation Aura (area debuff) wants to increase Fire damage taken by 20%
4) Firewatcher (NPC AI mobile unit, faction-tied to player) wants to extinguish the source of the flames to protect its master

In this case, the responses that involve numerical adjustment of the incoming damage value are applied in an order proper to the game ruleset; i.e., apply +/- X% adjustments first, then apply +/- hard value adjustments after. Additional responses don't necessarily need to be sent as reply events; i.e., the Firewatcher can simply initiate its Extinguish Fires AI routine in response to the broadcast event.

After all of the numeric adjustments are applied, a TakeDamage event with the final adjusted total is packaged up and sent locally to the entity. At this point, any local components can listen for this event to respond appropriately. The CombatLogger component will write a message to the log ("Player takes X fire damage."), the FloatingCombatText component will read the event and respond by scrolling a number above the unit's head. The VitalStats component will reduce Life by the specified amount, and potentially trigger the sending of LifeLow or LifeDepleted events as needed.
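The ordering rule for the numeric responses might be sketched like this (a toy Python fragment; summing the percentage modifiers against the base value before applying flat modifiers is just one possible ruleset choice, matching the ordering described above, and the names are mine):

```python
def resolve_damage(base, percent_mods, flat_mods):
    """Apply responses in the rule-defined order: percentage
    adjustments first, then flat adjustments. Percentages here are
    summed against the base value; a different ruleset might compound
    them instead."""
    total = base
    for pct in percent_mods:          # e.g. +0.20 aura, -0.12 resistance
        total += base * pct
    for flat in flat_mods:            # e.g. -4 absorbed by Fire Shield
        total += flat
    return max(0, total)              # damage never goes negative

# The Fire example above: 16 base, +20% from Immolation Aura,
# -12% from Fire Resistance, then 4 absorbed by Fire Shield.
final = resolve_damage(16, [0.20, -0.12], [-4])
```

Whatever the exact ruleset, the point is that the ordering lives in one place, so every response handler can stay ignorant of every other handler.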

### #5285177 Game frameworks and engines that aren't Unity, Cryengine, or UE4?

Posted on 04 April 2016 - 11:02 PM

Urho3D is written in C++, has bindings for Lua and AngelScript, supports D3D11 and GL, and provides 2D, 3D, GUI, Bullet physics, networking, sound, navmesh path finding with crowds, etc.

http://urho3d.github.io/

### #5284661 Isometric art projected on 3D shapes

Posted on 01 April 2016 - 03:35 PM

If you're choosing to use this particular abstraction (renders projected onto impostor geometry), you're probably going to be doing it for 2 main reasons: Z buffering and lighting. There are other benefits, to be sure, but these are the big ones. The Z buffering gets past some of the interesting sprite draw-order problems that have been highlighted over 30 years of isometric games, and the lighting just makes it look juicy.

But for both of these, you really want to have geometry that fits the rendered sprite that is projected on it. Cubes can work for many elements, as long as those elements are essentially cubic in nature. Since you are still going to try to avoid cases of intersection or clipping between objects, the occasional weird cube-like clipping artifact won't be too bad. But a mismatch between the rendered object and its impostor geometry is going to be very noticeable once you toss dynamic lighting into the mix.

So, no, there's really no one-size-fits-all solution for impostor geometry. For this reason, I'd say that if you are looking for a paradigm to reduce your workload, this isn't it. Just go with a standard 2D sprite stacking approach instead, and deal with the edge cases as best you can. This technique actually ends up being more work, because after you have finished the intricate modeling of your rendered objects, you still have to do more modeling to obtain a good-fit impostor to project them onto. That second step can be skipped in a traditional 2D approach, at the cost of all the tradeoffs you have to make.

If you are willing to accept some lighting quirks, though, you can settle for cubes or portions of cubes for everything. It'll show up in lighting, but maybe you can finesse it so it's not that bad.

### #5284525 Isometric art projected on 3D shapes

Posted on 31 March 2016 - 02:04 PM

Yes, it's mostly for lighting. Cubes typically work well enough for z buffering, unless the shape is concave. They don't light very well, though.
