Afterglow implements its "AI" the same way Novarunner did: through pluggable "brains", which control the linked actor during an update (and also when receiving various stimuli: pain, enemy proximity, etc). I have a basic skeleton framework for stimulus response and idle updates, and a sample "brain" that doesn't really do anything except run on the spot. As development progresses, I'll update the sample brain to use more of its facilities, then begin implementing the subclass brains for each enemy type.
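For the curious, the shape of that architecture might look something like the sketch below. This is my own guess at the structure, not Afterglow's actual code; all the class and method names (`Brain`, `on_update`, `on_pain`, `RunOnSpotBrain`, etc.) are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of a pluggable "brain" architecture: the actor
# delegates its per-frame update and stimulus events to whatever brain
# is attached. None of these names come from Afterglow itself.

@dataclass
class Actor:
    x: float = 0.0
    y: float = 0.0
    health: int = 100
    brain: "Brain | None" = None

    def update(self, dt: float) -> None:
        # Idle/per-frame update is forwarded to the attached brain.
        if self.brain:
            self.brain.on_update(self, dt)

    def take_damage(self, amount: int) -> None:
        # A stimulus (pain) also reaches the brain, so it can react.
        self.health -= amount
        if self.brain:
            self.brain.on_pain(self, amount)

class Brain:
    """Base class: per-frame updates plus stimulus callbacks."""
    def on_update(self, actor: Actor, dt: float) -> None: ...
    def on_pain(self, actor: Actor, amount: int) -> None: ...
    def on_enemy_sighted(self, actor: Actor, enemy: Actor) -> None: ...

class RunOnSpotBrain(Brain):
    """Sample brain: burns update time without going anywhere."""
    def __init__(self) -> None:
        self.animation_time = 0.0

    def on_update(self, actor: Actor, dt: float) -> None:
        # Advance the run animation; the actor's position never changes.
        self.animation_time += dt
```

Subclassing a base brain like this keeps each enemy type's behaviour in one place while the actor itself stays dumb: `Actor(brain=RunOnSpotBrain())` is all the wiring needed.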
One of the big problems I've been dreading is tackling pathfinding. While I had a reasonable A* implementation going in Glow (albeit with a sloppy tendency to cut corners), I will have to spend some actual time thinking about the problem in order to provide pathfinding through my extremely sparse, convex levels. One thing I've thought of is explicitly placing waypoints for the AI to follow to get from point to point, but this might be prone to failure and will mean more fiddly "content maintenance" than I really should be doing. We'll see how it goes.
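If I do go the hand-placed-waypoint route, the search itself stays cheap: the waypoints form a small graph, and plain A* with a straight-line heuristic finds routes through it. Here's a minimal sketch of that idea; the graph layout and function names are illustrative assumptions, not anything from Glow or Afterglow.

```python
import heapq
import math

def a_star(waypoints, edges, start, goal):
    """Find a path over hand-placed waypoints (a hypothetical sketch).

    waypoints: id -> (x, y) position
    edges:     id -> list of neighbouring waypoint ids
    Returns the list of waypoint ids from start to goal, or None.
    """
    def dist(a, b):
        # Straight-line distance doubles as edge cost and A* heuristic.
        (ax, ay), (bx, by) = waypoints[a], waypoints[b]
        return math.hypot(ax - bx, ay - by)

    open_set = [(dist(start, goal), start)]  # (f-score, node)
    g = {start: 0.0}                         # best known cost so far
    came_from = {}
    while open_set:
        _, current = heapq.heappop(open_set)
        if current == goal:
            # Walk the breadcrumb trail back to the start.
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for n in edges.get(current, []):
            tentative = g[current] + dist(current, n)
            if tentative < g.get(n, math.inf):
                g[n] = tentative
                came_from[n] = current
                heapq.heappush(open_set, (tentative + dist(n, goal), n))
    return None  # goal unreachable from start
```

The "content maintenance" worry is about the inputs to this function, not the function itself: someone has to place the waypoints and keep `edges` honest whenever the level geometry changes, which is exactly the fiddly work I'd rather avoid.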