
Henrik Luus


Community Reputation

132 Neutral

Personal Information

  • Role
    Game Designer
    Level Designer

  1. Hello! I've seen a lot of threads on these forums in which people suggest using FSMs, Behavior Trees, or some of the newer approaches out there like Utility AI (thanks, Dave). These are excellent and powerful tools with long records of success. But in 11 years of AAA development, every project I've worked on has used its own custom AI solution. Some were similar to these well-established ones, but some were quite different. Most of these custom solutions, being proprietary, never see the light of day. Some are very optimized, specialized solutions, only useful in the niche game for which they were designed. But some are very general-purpose, and re-used from title to title. I've often thought that, like BTs and Utility AI, some of these custom studio solutions could compete with FSMs, BTs, etc. as widely used, general-purpose approaches if they were made public. So I'm wondering: without giving confidential specifics, how many of you have worked on teams that used custom, general-purpose AI solutions? Is it as ubiquitous as it's been in my career? Thanks!
  2. Henrik Luus

    How to improve FLEE in RPG videogames ??

    If you were to choose something like #1, I would suggest making your chance of success relative to your enemy's level. Compare your character's level to the enemies you're fighting; the difference between your level and theirs would weight the roll to successfully flee. That way, the design scales over the course of the game. Your enemies could have additional traits/status effects which could further weight the roll. For example, a flying enemy with super-speed may be harder to flee from. Sounds like this is similar to some of what's already been proposed. GL with the system.
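    A minimal sketch of that level-weighted flee roll. All names and constants here are illustrative, not from any particular game; the important part is that the level difference shifts the probability, and trait modifiers shift it further:

```python
import random

def flee_chance(player_level, enemy_level, base=0.5, per_level=0.05, modifier=0.0):
    """Chance to successfully flee, weighted by the level difference.

    `modifier` lets enemy traits or status effects (e.g. a flying enemy
    with super-speed) make fleeing harder. Constants are illustrative.
    """
    chance = base + (player_level - enemy_level) * per_level - modifier
    return max(0.05, min(0.95, chance))  # clamp: always leave some uncertainty

def try_flee(player_level, enemy_level, modifier=0.0, rng=random.random):
    """Roll against the weighted chance; `rng` is injectable for testing."""
    return rng() < flee_chance(player_level, enemy_level, modifier=modifier)
```

    Because the roll is driven by the level gap rather than a flat percentage, the same formula stays meaningful from the early game to the endgame.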
  3. Have you ever used psychology in your game design work? Has psych ever helped you make a game design decision? Please share any experiences you may have had, below. :]
  4. I would respectfully question this approach. A few reasons to consider:

    Organizationally, it may be strange to think of an 'event' as state data for an entity that raises it. In a strict ECS model, all logic takes place in systems. This would mean that if an event is raised, it's a system which does the raising, and not an entity. A system may raise a single event in response to the actions of many entities. In that case, for which single entity would the event be considered to hold state?

    I would argue it makes more sense to think of events as their own entities, with their own state data in the form of their own components. Then it doesn't matter what raised them, or under what circumstances. And there's an existing framework in place for how to think about them (the same way we think of other entities).

    This is a personal preference, but if events are components, and you can raise an undefined number of them during a game, then you may be setting yourself up for a memory management headache. You may have to use a dynamic, growing collection of some kind to store references to your components, which could be costly. And if you're using an environment with a garbage collector, you'll also have to watch out for those costs.
  5. Fwiw, @SomeoneRichards, I implement game events as entities in my ECS engine, just like you've described. Each of my game events is a normal entity. Each one has a "PropertySet" component, which works as a blackboard of sorts, holding data relating to the entity. I use this Component for non-game-event entities too; it's just a general-case Component type for storing persistent entity data. My process of raising a game event goes like this:

    1. Grab the PropertySet Component associated with my game event's entity ID from a central component database.
    2. Populate it with any data relating to the event (for example, the "enemy" in an "Enemy Detected" event).
    3. 'Raise' the game event's entity ID with my event system.
    4. Anything listening for that event receives a callback with the entity ID I raised. From there, it can grab the event's PropertySet Component from the central database, and read data from it as needed.

    These game events end up being super light as entities go, so I instantiate as many as I'll possibly need, and pool them at startup.
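    A toy sketch of the event-as-entity approach described above. The names (PropertySet, EventSystem) follow the post's description, but every API detail here is invented; it only shows the shape of the idea, including the pooling of event entities at the end:

```python
from collections import defaultdict

class PropertySet:
    """General-case component: a blackboard of persistent per-entity data."""
    def __init__(self):
        self.data = {}

class EventSystem:
    def __init__(self):
        self.components = {}                 # entity ID -> PropertySet ("central database")
        self.listeners = defaultdict(list)   # event name -> callbacks
        self.pool = []                       # recycled event-entity IDs
        self.next_id = 0

    def create_event_entity(self):
        """Event entities are normal entities with a PropertySet component."""
        eid = self.next_id
        self.next_id += 1
        self.components[eid] = PropertySet()
        return eid

    def subscribe(self, event_name, callback):
        self.listeners[event_name].append(callback)

    def raise_event(self, event_name, **event_data):
        # 1. Grab (or pool-allocate) an event entity and its PropertySet.
        eid = self.pool.pop() if self.pool else self.create_event_entity()
        props = self.components[eid]
        # 2. Populate it with data relating to the event.
        props.data.clear()
        props.data.update(event_data)
        # 3./4. Raise the entity ID; listeners get a callback with that ID
        # and can read the PropertySet from the central database.
        for callback in self.listeners[event_name]:
            callback(eid)
        self.pool.append(eid)  # event entities are light, so recycle them
        return eid
```

    A listener for "Enemy Detected" would look up `components[eid].data["enemy"]` inside its callback, exactly as step 4 describes.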
  6. Henrik Luus

    Looking for Critiques

    Thanks, @IADaveMark. You’re absolutely right. The problem you described is one of the reasons I've spun task execution into its own system, which operates independently of whatever’s doing the planning. In my code, this system is really the engine which drives everything else agent-AI-related. If anyone’s willing, I would appreciate your feedback on the approach. It’s been doing a good job, but I’m not sure how similar it is to existing solutions out there which might be better:

    1. To handle task execution, I use a pattern I’ve been calling ‘behavior queues’. A behavior queue is a queue structure (first in, first out) which can hold a sequence of one or more agent behaviors. (Skip the rest of this explanation if you'd rather see a visual version, below.)
    2. Only the oldest behavior in the queue is active at one time. When the active behavior notifies the queue that it’s complete, that behavior is popped from the queue, and the next behavior in line is activated. By this process, an agent performs behaviors.
    3. An agent can own multiple behavior queues, and this is where the more complex problem solving comes in: each of an agent’s queues has an Id, and these Ids can be themed to different, specific purposes.
    4. For example: one queue, “Listen for Enemies”, could have a length of 1, and run a single behavior which never completes. This behavior would listen for enemy game events, for example, “Enemy within range”. Upon catching such an event, it could ask a separate planner system for an appropriate response, in the form of a sequence of behaviors. For example, this response might look like: [“Move to Enemy”, “Kill Enemy”]
    5. The “Listen for Enemies” queue could then take that sequence of ‘response’ behaviors and add them to a second, “Respond to Enemies” behavior queue. That sequence of behaviors would then immediately begin executing.
    6. Crucially, the original “Listen for Enemies” behavior would still be active, back in its own queue. If another, bigger enemy appeared, it could ask the planner for a new set of response behaviors, clear out any remaining behaviors in the “Response” queue, and replace them with the new sequence.
    7. This “Listen for Enemies” behavior could also listen for enemy death events, and clear the “Response” queue if the target enemy has died. That approach is how I’ve generally avoided the “agent whacking away at nothing” problem you described in your comment.

    A few more notes about behavior queues, for posterity: In my implementation, each behavior queue has its own blackboard structure, which its active behavior can use to store some state data. The agent itself also has a blackboard, which multiple behaviors can use to communicate with each other. Each behavior queue can also have a set of tags. This makes it simple to do things like “pause all behavior queues dealing with movement”, for example. Beyond clearing a behavior queue, a new behavior can be inserted at any index in a queue. So, for example, an agent with a plan to interact with ‘thing A’, and then ‘thing B’, could have new behaviors inserted in the middle of the queue to give it a third stop along the way.

    Do you have any thoughts on this pattern? How does this compare to what's already out there for handling task execution work? Regardless, thank you for reading such a long post. (Apologies for the image spam below. I don't know how to make these appear smaller. 😕)
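    A minimal sketch of the behavior-queue pattern the post describes: FIFO ordering, only the oldest behavior active, pop-on-complete, plus the per-queue blackboard, tags, and insert-at-index features. All class and method names are invented for illustration:

```python
from collections import deque

class Behavior:
    """Base behavior; a subclass calls self.complete() when it's finished."""
    def __init__(self, name):
        self.name = name
        self.done = False
    def complete(self):
        self.done = True
    def update(self):
        pass  # subclasses do their per-tick work here

class BehaviorQueue:
    def __init__(self, queue_id, tags=()):
        self.id = queue_id          # e.g. "Respond to Enemies"
        self.tags = set(tags)       # e.g. {"combat"}; lets you pause by theme
        self.behaviors = deque()    # first in, first out
        self.blackboard = {}        # per-queue state for the active behavior
        self.paused = False

    def push(self, behavior):
        self.behaviors.append(behavior)

    def insert(self, index, behavior):
        # e.g. add a third stop between 'thing A' and 'thing B'
        items = list(self.behaviors)
        items.insert(index, behavior)
        self.behaviors = deque(items)

    def clear(self):
        self.behaviors.clear()

    def update(self):
        if self.paused or not self.behaviors:
            return
        active = self.behaviors[0]   # only the oldest behavior is active
        active.update()
        if active.done:
            self.behaviors.popleft()  # next behavior in line activates next tick
```

    The "Listen for Enemies" arrangement would then be one queue holding a single never-completing behavior, which calls `clear()` and `push()` on a second "Respond to Enemies" queue whenever the planner hands back a new sequence.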
  7. Henrik Luus

    Looking for Critiques

    Kind of! What I was describing there is just the function of a planner AI- to build a sequence of tasks which can accomplish some goal.
  8. Henrik Luus

    Looking for Critiques

    Hey Ninja. My apologies for making a misleading diagram. It doesn't depict a state machine; the arrows don't represent an agent transitioning between states. Kylotan has it right: this shows a contextual model of how AI problems are divided into different domains, and how those different domains relate to each other to govern different parts of an agent's AI. Let me see if I can do a better job here. As an example, let's say you have an NPC agent:

    All of the agent's behaviors are carried out by a 'Task Execution' system. Maybe that agent is executing a behavior which listens for an event ("Enemy within range", let's say), then decides on a response. In order to respond, it pings a separate 'Goal Forming' system to choose a new goal for the agent, based on the current game state. This goal is then returned to the behavior. The goal is actually a target game state which the agent would like to achieve. For example: one in which the enemy's Health == 0.

    The behavior then needs a sequence of behaviors which will actually form the response. It passes the goal game state to a Planner system, which looks through a wide space of available behaviors, and returns a sequence of them which can transform the current game state into the target (goal) state. Example: ["Run to Enemy", "Kill Enemy"] The behavior then hands this plan of behaviors back to the Execution system, which sets about performing those behaviors.

    The last point is that these systems (Execution, Goal Forming, Task Planning) are abstracted from the specific AI techniques used to accomplish them. In the diagram, a Utility AI is being mapped to the Goal Forming domain. But in another project, a different technique might be used to choose goals. In that case, the AI technique may have changed, but the 'Goal Forming' purpose of the domain would not. Any better?
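    The flow those paragraphs describe can be sketched in a few lines. Everything here is a stub invented for illustration: Goal Forming picks a target game state, Task Planning turns it into a behavior sequence, and Task Execution triggers both and runs the result. Any concrete technique (a utility AI for goals, GOAP-style search for planning) could be swapped in behind the same three seams:

```python
def form_goal(game_state):
    """Goal Forming: choose a target game state. A utility AI could live
    here; this stub hardcodes a single rule for illustration."""
    if game_state.get("enemy_in_range"):
        return {"enemy_health": 0}   # target state: the enemy is dead
    return None

def plan(goal, game_state):
    """Task Planning: find a behavior sequence that transforms the current
    state into the goal state. A real planner would search; this is a stub."""
    if goal == {"enemy_health": 0}:
        return ["Run to Enemy", "Kill Enemy"]
    return []

def respond(game_state):
    """Task Execution side: on an event, ask for a goal, then a plan,
    then hand the plan back to be performed."""
    goal = form_goal(game_state)
    return plan(goal, game_state) if goal else []
```

    The point of the model is exactly that `respond` never knows which technique sits behind `form_goal` or `plan`; only the purpose of each domain is fixed.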
  9. Henrik Luus

    Looking for Critiques

    Here's a simple model for Game AI that I've come to use. I would appreciate hearing your thoughts on it, positive or negative. I like that it scales nicely between different types of projects. It also abstracts the idea of the AI problem being solved from the specific technique used to solve it. How does this map to the way you approach AI in your projects? Thank you for any input.
  10. Henrik Luus

    How does the environment spatially affect the player?

    I can't find it so far, but I would love to see that.
  11. Henrik Luus

    How does the environment spatially affect the player?

    That 'real life motivations vs. game motivations' question is a great convo to have. Obvious statement here, but by understanding the different motivators present in each world, you can better adjust your game stimuli to encourage the equivalent real-life behavior. There's thankfully some good, pre-existing research on the subject, and I hope that more game design stems from this idea in the future.

    Quote: "The catch is that the rain must be an actual interactive force (i.e. must inflict damage). I am thinking that, otherwise, the player would not perceive a difference between inside or outside a cave."

    I'm not sure about this. Although I'd agree that the player could be conditioned to have no aversion to light, I wonder if a non-gamer wouldn't start with a (very) slight bias towards the cave (due to instincts). I don't know; I don't think I'd want to assume a conclusion to this one based on reason alone. I wonder if there have been relevant studies on this. Even if it isn't a big enough influence to affect decision making, I still find myself reacting with cave-like symptoms when I'm in a concert hall. Mainly, I just get really relaxed, even if there's no show, or I'm just walking through. I guess it's evidence, but not enough to form a conclusion.
  12. Henrik Luus

    How does the environment spatially affect the player?

    Thanks, Wai. I gotta say, I agree with everything you wrote, and thanks for the long response. The direction-of-interactive-force idea you suggested seems useful (I couldn't find it online; am I missing it?). The rough cave approximation may have been too rough for its intended purpose. I'm curious to know if (or how much), regardless of context, being in a cave-like place triggers any cave = safety instincts. One example was with something I hoped would have minimal context (the grey geometry), and another was a cave-like space with an unrelated context (the concert hall). I'd like to know how much of a resemblance it takes to trigger the player's instinctual (and therefore more predictable) behaviors in this one example. I think you're right; we jump to light and shading very quickly to determine our surroundings spatially. Unfortunately, I can't think of a good way to remove that element from a visual experiment, since without light there's no contrast, and no way to tell between objects. Still, blind people use spatial reasoning skills, so light can't be a fundamental part. I wish I could run an experiment with sound to show distance, but who has a research lab these days, right? [Edited by - orionburcham on January 10, 2009 11:54:16 PM]
  13. Henrik Luus

    How does the environment spatially affect the player?

    I am researching studies on this subject, and if you know of any good ones, please let me know. Thanks! Btw, if I gave the impression I wanted to choose something from these images, I don't. I'm trying to understand the psychology behind how we perceive and react to different spatial environments. In the first two images I used a regular interval between same-length objects to make the space behind the viewer easier to predict. I left out things like texture or color on purpose, to isolate the idea of spatial awareness, something different in the brain from visuals (blind people also use it). The third image is meant to be a very rough approximation of the cave photo. Thanks- -O [Edited by - orionburcham on January 10, 2009 6:43:04 PM]
  14. (Howdy, I'm pretty new to these forums. If this topic has been brought up in detail before, please let me know.) Everything here is an attempt to answer this question: how can we predict how our environment designs will affect the player? Or, how can we know when to use this scene: vs. this: If we could predict how a player's spatial surroundings would affect his experience and decision making, environment design would be a lot easier, and a lot more effective. If we could devise a sort of rulebook (of guidelines) for what affects what, then level and environment design could be reduced to a paint-by-numbers affair, while having a greater impact on our audience.

    Lucky for us, we can draw on previous study in this area, such as cinematography, architecture, and civil engineering. A closely related field is music, which is also the longest body of recorded study about how to emotionally affect an audience. Commonly, people leaving a concert will describe feeling similar emotions from the same moment in a song. This is exactly what we're trying to do, but aurally. So why do we interpret music emotionally, and similarly? There's evidence to suggest that when we hear music, we're automatically comparing it to voice. As the theory goes, the part of the brain that listens and triggers emotions from music doesn't really know what music is, and we interpret it by the same system we use to judge voice. Since we use listening as a way to guess the emotions of a person speaking, when we hear melodies that remind us of speech (like the pitch changes of a question) we interpret them by the same rules. It's worth noting that we're set up to process a lot more than voices. If a sound reminds us more of a waterfall, for example, we're more likely to respond to it based on the rules we use to judge those.

    Another interesting thing is that some phrasing in language crosses borders, and seems to be instinctual. For instance, the phrasing of a question sounds similar in several romance and non-romance languages. Another good example would be the way anger sounds in all languages. If this theory is true, it tells us a lot of things: since the phrasings/melodies of language are highly cultural, so is the way we interpret music. It also means that we're interpreting the music based on the closest thing our brains think it might be, based on instinctual systems. And all of this can possibly tell us big things about how we interpret visuals.

    Here's where I run out of specific information. I want to find out more about:
    - Exactly how does the brain process spatial information?
    - What systems might we revert to when analyzing our world spatially? (For instance, how much does this: equal this: in our brains?)
    - How much of a cultural influence is there in the way we interpret things spatially? As in: when we're here: how much do we still respond as if we're here?:

    I think this is a really useful subject, and I'd love to know your thoughts about it if you have any. Thanks- -Orion [Edited by - orionburcham on January 10, 2009 6:12:08 PM]
  15. Sorry to bring this up here, but I may have missed the support section if it's there. Is there any way to embed images inside your text in these forums? [img][/img] doesn't seem to work. Thanks a lot.