Some reference on utility systems... my lectures from GDC 2010 and 2012 (with Kevin Dill)
Sounds like you need to break your AI logic into two parts:
1) a goal selection system that analyses the current situation and picks the highest-priority goal + solution (the evaluation here is often the hardest part to design, since you need comparable metrics for disparate situational factors)
2) an executor for that goal+solution, which is specific to the solution; you could continue to use your Behavior Tree method here, with one 'tree' per specific solution
Effectively you break up your one ginormous tree into multiple special-purpose trees, with only the one matching the active goal+solution running (this gets rid of excess logic outside each tree's own domain).
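A minimal sketch of part (1) in Python. All the names here (Goal, the scoring lambdas, the situation dict) are hypothetical placeholders, not a real API; the point is just that each goal scores itself against the situation and the highest score wins, with a separate executor registered per goal+solution.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Goal:
    name: str
    score: Callable[[dict], float]  # utility of this goal in the current situation

def select_goal(goals: List[Goal], situation: dict) -> Goal:
    """Analyse the situation and pick the highest-priority goal."""
    return max(goals, key=lambda g: g.score(situation))

# Toy situational scorers -- the hard part in practice is making these comparable.
goals = [
    Goal("flee",   lambda s: 100 if s["health"] < 25 else 0),
    Goal("attack", lambda s: 50 if s["enemy_visible"] else 0),
    Goal("patrol", lambda s: 10),
]

# One executor (e.g. one behavior tree) per goal+solution, stubbed as strings here.
executors: Dict[str, Callable[[], str]] = {
    "flee":   lambda: "running flee tree",
    "attack": lambda: "running attack tree",
    "patrol": lambda: "running patrol tree",
}

situation = {"health": 80, "enemy_visible": True}
goal = select_goal(goals, situation)
print(goal.name)               # attack (score 50 beats patrol's 10)
print(executors[goal.name]())
```

The dispatch table keeps each tree ignorant of the others: adding a new goal+solution means adding one scorer and one executor, not growing a monolithic tree.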
The analysis + goal selection in (1) does not have to run every cycle, though you may need to run it when certain significant events happen (triggering a general re-evaluation).
If the same goal+solution is selected again, it should be resumed, so the progress already made is restored.
Each BT should have 'cut' checks that are re-evaluated frequently to confirm the current solution is still valid (i.e. the situation hasn't changed); when such a check fails, execution is pushed back to the goal selection stage (1).
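The resume + cut-check behavior above can be sketched as a small executor class. This is an illustrative skeleton, not a real BT implementation: the steps list stands in for the tree, `still_valid` is the 'cut' check, and the saved index is the progress that survives a suspension.

```python
class GoalExecutor:
    """Runs one goal's behavior step by step; remembers progress so the
    same goal+solution can be resumed if it is selected again."""

    def __init__(self, steps, still_valid):
        self.steps = steps              # ordered actions standing in for a BT
        self.still_valid = still_valid  # the 'cut' check
        self.index = 0                  # progress, kept across suspensions

    def tick(self, situation) -> str:
        # Cut check first: if the situation changed, hand control
        # back to goal selection instead of continuing blindly.
        if not self.still_valid(situation):
            return "reselect"
        if self.index >= len(self.steps):
            return "done"
        self.steps[self.index](situation)
        self.index += 1
        return "running"

# Hypothetical attack executor: valid only while the enemy is visible.
exec_attack = GoalExecutor(
    steps=[lambda s: s.setdefault("log", []).append("aim"),
           lambda s: s["log"].append("shoot")],
    still_valid=lambda s: s["enemy_visible"],
)

s = {"enemy_visible": True}
print(exec_attack.tick(s))   # running (step 1 executes)
s["enemy_visible"] = False
print(exec_attack.tick(s))   # reselect -> back to goal selection, progress kept
s["enemy_visible"] = True
print(exec_attack.tick(s))   # running (resumed at step 2, not restarted)
```

Note the failed cut check does not reset `index`; if goal selection picks the same goal again, the executor carries on from where it left off, which is exactly the resume behavior described above.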