
# LorenzoGatti

Member Since 25 Nov 2005
Offline Last Active Today, 12:43 PM

### #5112711 Math terms for NURBS

Posted by on 28 November 2013 - 05:36 AM

I have an application that uses Bézier triangle patches to define a hexagonal game board. I am finding that in certain situations the lack of local control provided by Bézier surfaces is creating unacceptable visual artifacts. I am now hoping to experiment with triangular NURBS surfaces to see if I can solve some of these issues.

I found a paper that seems rather thorough on the subject here, but I'm having trouble understanding some of the math terminology, given my lack of formal math training. Here is a list of things from section 2.2 of that paper that I don't understand.

Congratulations for picking an egregiously terse and badly explained paper with unusual notation. Don't blame your lack of training too much.

In the first paragraph of section 2.2 it says "Let T = {(delta?)(i) = [r,s,t] | i = (i0,i1,i2) (thing that looks like an E) Z3+". I assume that r,s,t are the triangle vertices and that the index i is a number combo like 300, 030, 003 etc. If anyone can explain some of these symbols I would be grateful. I don't know what the vertical line means either.

T is a set of triangles; i is an index consisting of 3 non-negative integers; r,s and t are vertices of the triangulation.

The vertical bar inside the braces of a set definition, here and everywhere else, means "such that": T is the set of triangles such that their indices are three non-negative integers, abusing for a stealth definition what should be a notation for predicates.

Other vertical bars include projection onto some subspace (used along with the set-definition vertical bar for Ax in section 2.1) and separation between the main function argument and "parameters" such as spline control points (used for M all over section 2.1).

After that there is a knot sequence, which I understand is what is called a knot vector elsewhere, and is just a sequence of increasing numbers like {0,0,1,1,2,2} etc.

No, the knots are points. For each vertex v in the triangulation, we choose n knots (one of which is v).

Then it says we define a convex hull, and the beta symbol seems to be sort of like the index, I don't know what the 'absolute value' lines mean in this context but that symbol seems to have 3 components that add up to the 'order' of the triangle. I am unsure how the index and this beta symbol should be combined to get the desired number sequence.

Here the bars are a completely superfluous absolute value, and there is no "number sequence": V is the convex hull of n knots associated with the vertices of triangle i.

Then it defines a multivariate simplex spline M(u | V_i,β). The u is a two-component coordinate (u,v), but I don't know what the line means.

M is the spline that has a 2D variable u (we are restricting ourselves to a surface, but similar definitions would hold for higher-dimensional or lower-dimensional manifolds) and V as its set of n knots, adapted from section 2.1.

Then it defines the basis function with that line again, then a lower case d with the triangle vertices (without the indices but with the beta thingies) in parentheses, 'lined' against the M(u | V_i,β). What does that line mean anyway? The lower case d is also unexplained.

Actually, d is one of the few simple and commonplace definitions: the area of a triangle with the three given vertices. A factor of 2 and an absolute value appear as artifacts of practical computation (for 3D points, computing the area becomes a matrix determinant).
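
As a quick sketch (function and variable names are mine, not the paper's), for 2D knots that quantity reduces to a 2×2 determinant:

```python
def d(r, s, t):
    """Twice the signed area of the triangle (r, s, t) for 2D points.

    This is the determinant
        | s.x - r.x   t.x - r.x |
        | s.y - r.y   t.y - r.y |
    the paper's d corresponds to its absolute value.
    """
    return (s[0] - r[0]) * (t[1] - r[1]) - (t[0] - r[0]) * (s[1] - r[1])

# Unit right triangle has area 1/2, so twice the area is 1.
print(abs(d((0, 0), (1, 0), (0, 1))))
```

Note the sign flips if the vertices are listed clockwise instead of counterclockwise, hence the absolute value in the paper.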

Then we have the unweighted B-spline surface. The Sigma operator is something I always have trouble with, but I interpret this as the sum over every index combo of the point multiplied by the basis function, multiplied again by the sum over all the beta combos of the point and basis function. However, I do not understand what is meant exactly by P_i,|beta|. P_i would be P300, P030, P003 etc., but I don't know what the beta subscript does to that interpretation.

The sum is over all triangles and over all choices of beta for that triangle, i.e. over all basis functions N for the whole triangulation T. Indexing of the arbitrary control points p matches that of the N functions.

Finally, there are arbitrary weights that can be set, and once I understand the indexing system, I think I understand this part as a weighted mean of the above.

Yes, it's the usual weighting and normalization of any kind of NURBS.

If anyone can help me make sense of this I would be super grateful.

I've found a less general but vastly clearer basic explanation of triangular NURBS surfaces in http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.5361 and there should be other better references.

### #5112430 Industrial revolution strategy (post mortem and ideas)

Posted by on 27 November 2013 - 06:03 AM

The central stockpile can be entirely virtual, a mere abstraction for manual and automated management purposes. Actual resource movement can be done between actual places and on the actual road, rail and channel network with explicit trains, ships etc., and automated in the guise of minimum-cost network flow problems (given resource producers, consumers and stockpiles, minimize the cost of moving resources to the appropriate destinations along the edges of a graph representing places and transport between them).

This kind of automation can remove the tedium of ordering transport of materials, collect statistics (how much does it cost to transport a certain resource, on average? Which long or expensive routes are contributing?) and signal to the player that a certain road, rail trunk, channel, river etc. is saturated (rejecting transport of low-priority goods). Details of a certain road etc. can show what travels on it and between what places.

There should be a place for both global resource decisions (e.g. I built all possible mines but I need more iron quickly for warships: buy it from abroad or recycle old ships?) and local transport decisions (e.g. Paris needs so much fish that it has to arrive fresh from Marseille: which railroads should I build in the middle of France?); what's important is making them interesting, nontrivial decisions and cutting the boring or useless details.
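
The automated transport layer described above is a textbook min-cost flow computation. A minimal successive-shortest-paths sketch (all names and the toy network are mine, not from any particular game):

```python
from collections import defaultdict

def min_cost_flow(n, edges, source, sink, amount):
    """Ship `amount` units from source to sink at minimum total cost.

    n: number of nodes (places); edges: list of (u, v, capacity, cost)
    transport links. Returns the total cost, or None if the network
    cannot carry that much (the "route saturated" signal to the player).
    """
    graph = defaultdict(list)       # node -> indices into the edge arrays
    to, cap, cost = [], [], []
    def add(u, v, c, w):            # edge + residual reverse edge
        graph[u].append(len(to)); to.append(v); cap.append(c); cost.append(w)
        graph[v].append(len(to)); to.append(u); cap.append(0); cost.append(-w)
    for u, v, c, w in edges:
        add(u, v, c, w)
    total = 0
    while amount > 0:
        # Bellman-Ford: cheapest augmenting path in the residual graph
        dist = [float('inf')] * n
        prev_edge = [-1] * n
        dist[source] = 0
        for _ in range(n - 1):
            for u in range(n):
                if dist[u] == float('inf'):
                    continue
                for e in graph[u]:
                    if cap[e] > 0 and dist[u] + cost[e] < dist[to[e]]:
                        dist[to[e]] = dist[u] + cost[e]
                        prev_edge[to[e]] = e
        if dist[sink] == float('inf'):
            return None             # capacity exhausted
        push = amount               # bottleneck capacity along the path
        v = sink
        while v != source:
            e = prev_edge[v]
            push = min(push, cap[e])
            v = to[e ^ 1]           # paired reverse edge points back
        v = sink
        while v != source:
            e = prev_edge[v]
            cap[e] -= push
            cap[e ^ 1] += push
            v = to[e ^ 1]
        total += push * dist[sink]
        amount -= push
    return total

# Toy network: 0 -> 1 -> 2 is cheap but indirect, 0 -> 2 is direct but costly.
links = [(0, 1, 10, 1), (1, 2, 10, 1), (0, 2, 5, 3)]
print(min_cost_flow(3, links, 0, 2, 12))
```

The per-unit costs found along the way are exactly the transport statistics mentioned above, and a `None` result is the cue to reject low-priority goods.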

### #5112414 Industrial revolution strategy (post mortem and ideas)

Posted by on 27 November 2013 - 05:03 AM

The easy way round this seems to be to vary the time it takes to produce a single resource. For example, if a fully staffed mine produces one unit of coal every 10 seconds, then an understaffed mine could produce only one unit of coal every 15 seconds.

It's turn-based strategy. There are no seconds.
It doesn't matter: you are simply measuring mine output in coal units per turn rather than coal units per second.

The same 2/3 ratio between the production rate of a fully staffed mine and an understaffed mine can be realized as 1 coal every turn vs skipping production every third turn, 1 coal every 6 turns vs 1 coal every 9 turns, 15 coal per turn vs 10 coal per turn, etc.
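
A sketch in code (the rates are made up; integer arithmetic keeps production deterministic and avoids rounding drift):

```python
def mine_output(num, den, turns):
    """Total whole coal units produced over `turns` turns at a
    production rate of num/den units per turn.

    A fractional remainder carries over between turns, so a rate of
    2/3 really does skip production every third turn.
    """
    stock = 0       # fractional progress, in 1/den units
    produced = 0
    for _ in range(turns):
        stock += num
        produced += stock // den   # emit whole units
        stock %= den               # keep the remainder
    return produced

full = mine_output(1, 1, 9)    # fully staffed: 1 coal per turn
under = mine_output(2, 3, 9)   # understaffed: 2/3 coal per turn
print(full, under)
```

The same function covers the other framings: `mine_output(15, 1, 1)` is 15 coal per turn, `mine_output(1, 6, 18)` is 1 coal every 6 turns.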

### #5112410 AMD's Mantle API

Posted by on 27 November 2013 - 04:51 AM


What EA does is relevant because they are a large, publicly-traded company whose livelihood hinges on the games they release and the technology used to make them. You may not like them, but you can be damn sure that they act according to what's in their best interest as a company. For them, an engineering decision of this level has the potential to cost tens of millions of dollars (or more) over a few years' time.

This kind of reasoning invites conspiracy theories. Maybe EA is so large and full of cash that their commitment to use Mantle is a ploy to pull competitors along with them into a technological and/or commercial dead end: the same number of wasted millions (the cost of developing and adopting a Mantle-based game engine) can be an acceptable "war expense" for a large company that's also pursuing other technology it actually stands behind, but for a smaller company forced to put too many eggs into one basket it is a drain on limited resources and a threat to long-term survival.

Just gratuitous speculation, of course. Personally, I view Mantle as an experiment in novel graphics API design and possibly as an architectural improvement for the actually important DirectX, OpenGL and OpenCL drivers; the parts that prove useful are going to become OpenGL extensions sufficiently fast.

### #5111837 How to run Notepad++ script in CMD

Posted by on 25 November 2013 - 08:20 AM

PowerShell comes bundled with many Windows versions. Try executing
`powershell`
in a command prompt window.

### #5111260 About fixed time step update catch up

Posted by on 22 November 2013 - 05:30 AM

If it's a single-player game and it's constantly very slow, I think the way out would be to switch down to a fixed number of fewer logical ticks with comparably bigger steps, when the game estimates it will not catch up. Though that would require the foresight to have the logic programmed for variable timesteps, even if in the normal case on a fast computer it would always use the same fixed timestep.

This approach throws away deterministic simulation; in many game types the slightest variations in numerical values (unavoidable after a single increased timestep update, even with the best computation quality) can lead to large differences in game state later on.
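
The common determinism-preserving alternative is to keep the timestep fixed and drop excess ticks instead, letting the game visibly slow down. A minimal sketch (callback names and the 30 Hz tick rate are my own assumptions):

```python
import time

def ticks_to_run(accumulator, fixed_dt, max_ticks):
    """How many fixed ticks to run this frame, and the leftover time.

    If the backlog exceeds max_ticks, the excess is dropped: the game
    slows down, but every tick still uses the same dt, so the
    simulation stays bit-for-bit deterministic.
    """
    ticks = min(int(accumulator // fixed_dt), max_ticks)
    if ticks == max_ticks:
        return ticks, 0.0                      # give up catching up
    return ticks, accumulator - ticks * fixed_dt

def game_loop(update, render, running, fixed_dt=1.0 / 30.0, max_ticks=5):
    """Fixed-timestep loop; update/render/running are hypothetical callbacks."""
    accumulator = 0.0
    previous = time.perf_counter()
    while running():
        now = time.perf_counter()
        accumulator += now - previous
        previous = now
        ticks, accumulator = ticks_to_run(accumulator, fixed_dt, max_ticks)
        for _ in range(ticks):
            update(fixed_dt)                   # always the same dt
        render()
```

Here a machine that falls behind runs at most `max_ticks` updates per frame and discards the rest, rather than ever feeding `update` a larger dt.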

### #5110711 CCG patterns and antipatterns?

Posted by on 20 November 2013 - 02:59 AM

On that track, I do still think it holds true that in a battlefield/combat-based game, a varied or dynamic start is something that can help balance certain deck-and-play styles without affecting others that might be in less compromising positions. Altering the base stats of a certain commander (or granting indirect nerfs to certain cards in a commander's deck) as needed would ensure that the deck is still fun to play without being overwhelming.

I hope I don't understand what you mean to do, because all I can see in asymmetrical and nontrivial starting conditions is a world of pain without discernible benefits.
Apart from difficult self-inflicted balance problems (e.g. X units on the battlefield at the beginning are worth Y extra cards in the starting hand: find X and Y and ensure they are a balanced choice in all conditions), starting with something on the battlefield helps highly noninteractive rush strategies of various types: play a large horde and exploit numerical superiority to engage all enemy units allowing others to attack some objective undisturbed, play some kind of "artillery" and exploit long range attacks to exterminate arbitrary enemy units before they become a threat, play wizards and exploit early and excessive buffing to do something ordinarily impossible, and so on.

Can you give some examples of how varying setups can be an adaptation to the enemy Commander? What do you gain by letting players have their appropriate responses on the battlefield from the beginning instead of playing them gradually? Given a choice between deploying the specialized pieces of a rush attack or deploying things that are useful in a long game, what makes the latter more appealing? What makes adapting to the enemy Commander more useful than exploiting synergies with your own Commander?

### #5110517 CCG patterns and antipatterns?

Posted by on 19 November 2013 - 11:41 AM

An issue I have with MtG's 20-life, 7-card start in eternal formats is that it doesn't encourage player interaction. It's not as much of an issue in the ever-expensive T2 format, but if you look at even Modern decks with great winrates (Zoo, UW Control, RDW) you begin to see a pattern - none of the decks actually care about what your opponent does.

The less they care about their opponent's actions, the better chance they have of getting some wincon on the field. This is because, as you mentioned, each deck can be tailored to a specific goal: getting 20 damage out; dealing with 7+turns cards (no cards = no options); decking an opponent; gaining 40 life with a creature on the field.

No, the effectiveness of "non-interactive" decks in Magic: the Gathering is a consequence of being able to win by directly messing with the opponent rather than by grinding through creatures and other permanents (as would be the case in any seriously battle-oriented CCG).

Varied starting conditions would only make some of the boring strategies more effective. More life? You can pay more life as a cost. More cards? You might be able to run the opponent out of cards, or to rush a lot of spells in unusually few turns. Free lands? Cards with higher casting costs become practical. Scrying or the like? Fewer copies of certain cards can be enough. Strategies would change a bit and remain equally aggressive.

In particular, suppose Magic: the Gathering rules were changed to begin the first turn with, say, 20 permanents on the battlefield: the game would become even less interactive, with an ample choice of disgustingly bulletproof overkill combos (likely guaranteeing victory for the starting player during the upkeep phase, without even attacking; figure out how, it's fun but off topic).

Decks that minimize player interaction severely limit the amount of fun your opponents get out of the game, and definitely detract from the social aspect. I'm hoping to mitigate this by using a dynamic battlefield, with win conditions you can affect without having direct counters for them. An example could be Dul'Nam, the Dark Lich, who has a ritual that will turn all living creatures that die into undead under his control. His opponent could disrupt the ritual with a fast harrying unit, or power through his losses and stab at Dul'Nam directly, forcing him to divert resources from the ritual into faster sorceries.
This example is representative of critical aspects in which a battle-oriented card game should do the opposite of a magic-duel card game like M:tG. Major actions (like the necromantic ritual) take many turns (as opposed to being a single spell, ability or attack that must be dealt with immediately), and just about any unit can easily attack the Commander or troublesome targets in general (as opposed to the general inability of M:tG permanents to hurt each other).
There should be enough time to obtain more units after the ritual begins and get them to attack the necromancer before the ritual is completed (provided they fight their way beyond enemy lines).

### #5110411 Create infinite curve

Posted by on 19 November 2013 - 03:46 AM

### #5110406 Simple rotation question (hopefully)

Posted by on 19 November 2013 - 03:18 AM

The magnetic vector is effectively the sensor's own world axis.

No, it's just a vector pointing north that you want to use to find a plane and then compute projections and angles.

I need to be able to get the local yaw of the sensor regardless of its orientation. To do that I felt I needed to go into the sensor's local coordinate space but now I'm not so sure.

If I understand correctly, you want a cartesian coordinate system tied to a certain point of the surface, in which axis z points "up" along the direction of gravity at that point (or more simply along the surface normal, treating Earth as a sphere).

Choice of the x and y axes in the tangent plane at that point has one degree of freedom, which can be used either to place axis y in the plane spanned by the z axis and the north vector (so that it points towards magnetic north) or to place axis y along a geographical meridian (so that the projection of the magnetometer reading on the xy plane works like a compass).

All angles you mention are measured between the x,y or z axes and the north vector or its projections on the xy, yz or xz planes, all of which are easy to find.
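
A minimal sketch of that construction in code (my own helper names; it assumes `up` is a unit vector along the surface normal and measures yaw clockwise from magnetic north):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def heading(up, mag, facing):
    """Yaw (radians, clockwise from magnetic north) of `facing`
    projected onto the tangent (xy) plane defined by `up`.

    up: unit vector opposite to gravity / along the surface normal.
    mag: raw magnetometer reading (north plus a vertical dip component).
    facing: the sensor direction whose yaw we want.
    """
    east = normalize(cross(mag, up))   # mag's vertical part drops out here
    north = cross(up, east)            # completes the tangent-plane frame
    return math.atan2(dot(facing, east), dot(facing, north))
```

For example, with `up = (0, 0, 1)` and a dipped north vector `mag = (0, 1, -1)`, a sensor facing `(1, 0, 0)` yields a heading of pi/2 (due east), regardless of the magnitude of the dip.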

### #5110142 What can you do with a map (strategy)

Posted by on 18 November 2013 - 05:03 AM

The factions could coexist in the same map sectors and try to assimilate or drive out "enemy" units without killing them.

A potentially grim setting: political parties, with leaders, elected officers and groups of activists and supporters as "units", electoral districts within one country as map sectors, player moves that don't include assassination or civil war (e.g. bribery, blackmail, aggressions, essays against leaders, public speeches and propaganda, lawmaking, terrorism against the masses) and relevant events (e.g. elections and important parliamentary votes).

A more lighthearted setting: tribes of clams colonizing some rocks, who try to persuade larval-stage mobile clams to join their tribe and settle on a certain rock, try to persuade "enemy" clams to leave rocks and go elsewhere by appropriately silly means like team-based insult duels, and win by "controlling" rocks.

### #5110133 CCG patterns and antipatterns?

Posted by on 18 November 2013 - 04:13 AM

The overall outline for the game is that you've got a commander or leader of some kind that determines a lot of your base stats for the game - health, toughness, strength, resource income, certain spells or abilities, etc. The rest of your deck then gives you the ability to 'cast' various allies or creatures, spells, equipment, castles, and battlefield terrain. Battles would be fought on a grid reminiscent of Heroes of Might and Magic, with your Leader

This seems extremely incoherent.

- If you want a card game in which the Commander deploys units and does useful things by playing cards, both sides should start out on an equal footing, with nothing on the battlefield but identical Commanders differing only in the contents of their card deck. Commander experience should give only cards, not troublesome advantages.

This is the Magic: the Gathering approach: everybody begins with 20 life, 7 cards and nothing else; any difference between players would be an almost unheard-of handicap match, and a disadvantage of 1 or 2 cards often means defeat regardless of decks and player skill.

Fixed starting conditions allow players to optimize decks for a single scenario, rather than being forced to do everything (badly). For example, suppose you want to make an all-in aggressive deck (i.e. one that forfeits defense and runs out of resources to kill the opponent with cheap cards before he does much): should it reliably do 15 damage by turn 2, 20 damage by turn 3 or 30 damage by turn 4? Starting at 20 life means that the first deck is too weak, the second hits the right target, and the third is too slow; starting anywhere between 15 and 30 life means that all-in aggressive decks cannot operate properly against most opponents.

- If you want a grid-based, turn-based tactical combat game in which "health, toughness, strength, resource income, certain spells or abilities" of the Commander matter, the Commander is only one unit in the player's army and there's no reason not to begin the battle with more units, their equipment, already-cast spells, etc. Cards could still have a role, for example to determine which special combat actions are available, but the CCG aspect would be limited.

A recent, particularly pure example: Chaos in the Old World, a strategic game in which Chaos Gods conquer and corrupt regions of the Old World. Every turn, each player spends resources to place units in some region and to play cards from a god-specific deck, which have local effects (one region, or even specific units in that region) and are aligned with that god's specialty and victory conditions (e.g. Khorne, god of slaughter, who scores points for killing others' minions, has many cards to fight more or to win fights).

There is no "commander" (major demons are merely middle-late game expensive units, not leaders) and no deckbuilding (the four gods are already very specialized, while selection and placement of units is the backbone of strategy); cards cause uncertainty and offer an opportunity to make the difference with dirty tricks, putting the right amount of Chaos into what would otherwise be a small scale hybrid of Risk and Diplomacy.

### #5110126 Some questions regarding texture generation

Posted by on 18 November 2013 - 02:57 AM

- Why don't you like downscaled normal maps? If the 8K ones are normalized, why should they be normalized again?

- What's a cavity map? Are you sure it's directly related to the normal map?

- If you don't like a specular or roughness map, what prevents you from retouching it with an image editor?


### #5108940 Version Control and programming with a team

Posted by on 13 November 2013 - 05:28 AM

Wouldn't the size problem be reduced if you avoid putting continuously updated precompiled binary files into git/mercurial and don't add all kinds of external stuff you depend on into a single repository, but instead have some sub-repositories?
You could also split it up to put the source art files, the compiled art files and the compiled code into 3 svn repositories, to reduce the load on the dvcs further by only having source code inside. Or add some custom merge tool for some binary files, or convert them when storing them into the repository to make them mergeable.
Maybe on bigger projects people already have a build server that compiles everything for testing commits; it could also update the separate svn repositories for compiled things, so artists avoid having to compile code and programmers avoid converting art files themselves?

Compiled binary files have no reason to be in SOURCE control. Important ones, such as public packages of release or beta test versions, should be archived to Dropbox, network shared drives, or other types of "dumb" storage by build managers to let others download them as needed.
Old compiled binaries (as opposed to the ability to compile the game themselves) are needed by developers only for unusual reference purposes (e.g. testing that building the appropriate revision of the sources reproduces what's been released), not to be routinely updated, compared, and copied to or from their personal workspaces like sources.

Correctly managed binary source assets are unlikely to be troublesome: they should be available to everybody (enabling all developers to build the game), which is a good reason to make all revisions easily available, and they should change only rarely and in meaningful fine-grained increments (for example, repainting a 3D model should change that model's texture maps, not a big texture atlas containing lots of unrelated images; the big texture atlas can be kept outside source control and rebuilt automatically).
A sane organization of assets and build tools is an opportunity, not a cost incurred because of source control; setting up an easy, automated and effective workflow using source control should be compared to skipping the initial effort, sinking into progressive complication and confusion, and throwing in the towel (or wasting a lot of time) because of errors in manual builds.

### #5108929 Version Control and programming with a team

Posted by on 13 November 2013 - 04:45 AM

You can run Git on your web server, with authentication to restrict repository access to your colleagues. It isn't difficult; I set up Git as a CGI (using Apache on Windows) with SSPI authentication (Windows network domain) and a whitelist of allowed Windows users without any trouble.
