Edit: I should say that this is for something open source. The code I'm sharing here is from a version of DOT that I won't be releasing until January/February.
Once again, this method can be extremely crude. It's just meant to give the AI a general path that it can fix up at runtime.
All these methods sound interesting, but the conditions have changed a bit now. Islands carry both negative and positive effects, and entities can jump backwards.
So jumping forward to an island may give 3 gold and 4 water per turn, but it takes away 2 food.
If a resource ever hits zero, the entity dies.
Entities can backtrack as far as they want in order to stay alive, so they need to remember what is behind them.
After working with what Andy said, I derived quite a nice method. The entity looks at the straight-line path between where it is right now and its final objective, and from there it sums all of the costs (positive and negative).
Working backwards, it figures out where blockages may be (that is to say, islands it can't jump between with the attributes it would have at the time) and saves those as critical points. Blockages are found by comparing the positive sum from Start to Point X against the negative sum from End to Point X.
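My reading of that prefix/suffix-sum comparison, as a self-contained sketch (names and the per-island `Deltas` representation are my own, not from the engine): an island X is a candidate blockage when the gains accumulated from Start up to X can't cover the drains remaining from X to End.

```cpp
#include <vector>

//Hypothetical sketch: Deltas[i] is island i's net resource effect along the
//straight-line path (positive = gain, negative = drain).
static std::vector<int> FindCriticalPoints(const std::vector<float> &Deltas)
{
    std::vector<int> Critical;
    //Suffix sums: total drain from each island to the end of the path
    std::vector<float> DrainAfter(Deltas.size() + 1, 0.0f);
    for (int i = (int)Deltas.size() - 1; i >= 0; i--)
    {
        float d = Deltas[i] < 0.0f ? -Deltas[i] : 0.0f;
        DrainAfter[i] = DrainAfter[i + 1] + d;
    }
    float GainSoFar = 0.0f;
    for (int i = 0; i < (int)Deltas.size(); i++)
    {
        if (Deltas[i] > 0.0f)
            GainSoFar += Deltas[i];
        //Blockage: positive sum Start -> X is below the negative sum X -> End
        if (GainSoFar < DrainAfter[i + 1])
            Critical.push_back(i);
    }
    return Critical;
}
```

Two passes over the path, so finding the critical points stays linear in the number of islands.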
PositiveScore is basically the A* score function.
The first iteration returns the critical points between where the entity is now and where it wants to be. Every iteration after that is computed over the critical points.
Start -> Objective
Start -> Critical -> Critical -> Objective
Start -> Critical_2 -> Critical_2 -> Critical_1 -> Critical_2 -> Critical_1 -> Objective
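The expansion pattern above can be sketched as a refinement loop (again hypothetical: `RefineRoute` and the injected `FindCriticals` callback are illustrative names, and waypoints are reduced to plain indices):

```cpp
#include <functional>
#include <vector>

//Hypothetical sketch: each pass splices the critical points found between
//consecutive waypoints into the route from the previous pass.
static std::vector<int> RefineRoute(
    int Start, int Objective, int Passes,
    const std::function<std::vector<int>(int, int)> &FindCriticals)
{
    std::vector<int> Route = { Start, Objective };
    for (int p = 0; p < Passes; p++)
    {
        std::vector<int> Next;
        for (size_t i = 0; i + 1 < Route.size(); i++)
        {
            Next.push_back(Route[i]);
            //Insert this segment's critical points between its endpoints
            for (int c : FindCriticals(Route[i], Route[i + 1]))
                Next.push_back(c);
        }
        Next.push_back(Route.back());
        Route = Next;
    }
    return Route;
}
```

Each pass only re-plans the sub-segments, which matches the Start -> Critical -> ... -> Objective chains above.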
It's still just normal A*, slightly twisted. Some source code:
TContainer<IndirectAd> IEntityGroup::Plan(short Index, AdvertisementBase *Goal, TContainer<UNibble> ReferenceTableSum)
{
    TContainer<IndirectAd> SolutionSet;
    //The summation of all advertisements we've confirmed so far
    TContainer<CostBase> NeededAttributes = TContainer<CostBase>(ReferenceTableSum.size());
    int m = 0;
    for (int i = BlockSize * Index; i < BlockSize * (Index + 1); i++)
    {
        NeededAttributes[m].Value = RuntimeBody[i].Value;
        m++;
    }
    //TODO: Get local islands from the world structure (in this case, the islands along the path)
    TContainer<AdvertisementBase*> LocalAds;
    bool Finished = false; //When this is true, all of the needs for my objective have been met.
    while (!Finished)
    {
        int i = 0;
        //The score cannot get greater than 1
        float Lowest = 1;
        UNibble index = 0;
        //Cycle through all the local goals and see which score is the lowest from after this advertisement -> destination. Basic pathfinding.
        for (AdvertisementBase *CurrentAd : LocalAds)
        {
            float AverageScore = 0;
            auto NeededTemp = NeededAttributes;
            //Go through every positive cost. Don't worry about negative effects right now; that's managed as error correction.
            for (const CostBase &CurrentCost : CurrentAd->GetPositiveEffects())
            {
                //How much does this ad increase our attributes by?
                NeededTemp[ReferenceTableSum[CurrentCost.GlobalAdr]].Value += CurrentCost.Value;
            }
            //Does this ad let us meet the requirements?
            for (const CostBase &DestinationCost : Goal->GetNegativeEffects())
            {
                Gradient grad = TemplateBody[ReferenceTableSum[DestinationCost.GlobalAdr]];
                grad.Q3 = DestinationCost.Value; //Anything that would throw us over the value is rated as negative
                //Find the score between our destination cost and our temporary attribute value
                AverageScore += Scoring::PositiveScore_Linear((float)NeededTemp[ReferenceTableSum[DestinationCost.GlobalAdr]].Value, (float)DestinationCost.Value, grad);
            }
            //Divide by the full size
            AverageScore /= Goal->GetNegativeEffects().size();
            if (AverageScore < Lowest)
            {
                //It's our new lowest score, so change the value of Lowest
                Lowest = AverageScore;
                index = i;
            }
            if (AverageScore <= 0)
            {
                //We've finished the chain. More than half of the needs are met; the rest are computed at runtime to allow for some dynamic response.
                index = i;
                Finished = true;
                break;
            }
            i++;
        }
        /*Queue is not an order-dependent list. In fact, after running the plan function, it is suggested that Queue be run through a more extensive
        scheduling function in order to retain realism.*/
        IndirectAd ad;
        ad.Address = LocalAds[index]->ReturnAddress();
        ad.TimeConstant = LocalAds[index]->TimeConstant;
        SolutionSet.push_back(ad);
        //Make sure that our summation is kept up to date
        for (auto Cost : LocalAds[index]->GetPositiveEffects())
        {
            NeededAttributes[ReferenceTableSum[Cost.GlobalAdr]].Value += Cost.Value;
        }
    }
    return SolutionSet;
}
//Note that this approximation is VERY VERY crude. It's used for distant scheduling among AIs
static float PositiveScore_Linear(float CurrentValue, float FutureValue, Gradient grad)
{
    float Q3_Factor = (float)grad.Q3 / (float)grad.Max;
    Q3_Factor *= 2;
    CurrentValue /= (float)grad.Max; //Scale the number to a 0 - 2 range
    CurrentValue *= 2;
    FutureValue /= (float)grad.Max; //Scale the number to a 0 - 2 range
    FutureValue *= 2;
    return (CurrentValue + FutureValue) - Q3_Factor;
}
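For anyone wanting to poke at the scorer in isolation, here's a standalone copy with a worked example. The `Gradient` here is a minimal stand-in carrying only the two fields the scorer reads (an assumption on my part; the engine struct surely has more):

```cpp
#include <cmath>

//Minimal assumed Gradient: just the fields PositiveScore_Linear touches
struct Gradient { float Q3; float Max; };

static float PositiveScore_Linear(float CurrentValue, float FutureValue, Gradient grad)
{
    float Q3_Factor = grad.Q3 / grad.Max * 2.0f;   //Threshold scaled to the 0 - 2 range
    CurrentValue = CurrentValue / grad.Max * 2.0f; //Current attribute, scaled to 0 - 2
    FutureValue = FutureValue / grad.Max * 2.0f;   //Required value, scaled to 0 - 2
    return (CurrentValue + FutureValue) - Q3_Factor;
}
```

With Max = 10 and Q3 = 5 (the requirement), a current attribute of 3 scores (0.6 + 1.0) - 1.0 = 0.6, still above the Finished threshold of 0 in Plan(); a current attribute of 0 scores exactly 0.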
Any opinions?