About midix

  1. Oh, I see, yes, I can fall back to boolean inputs for considerations in cases where I clearly need opt-in / opt-out. I guess my brain switched into curves mode and failed to think in simple booleans; or maybe I felt that boolean considerations are evil in utility AI because they are too strict and break its "fuzzy logic" feeling.
  2. Thanks, yes, that makes sense. So one solution for unique targets might be to treat their behaviors as standalone, because there is no need to evaluate GoToWork for other targets if the agent has only one unique workplace to go to.
  3. Let's say I have a bunch of NPCs that need to go to work every weekday. There will be additional considerations for cases when they would choose to stay home (low health, etc.). So the basic logic is as follows: the behavior "GoToWork" should be considered only if it's Monday to Friday AND only if the clock time is after 6:00 AM, applying a random shift of up to 3 hours to prevent all NPCs rushing out at the same time. The world clock and calendar are in the knowledge base, and I can map clock time (minutes) to an input range. But days and random shifting are not that straightforward. I could map the minutes of the entire week as a single input and build a custom piecewise curve with sharp transitions (because it's essentially a yes/no decision: I don't want to consider it at night or on Sundays at all) and use that as a second consideration for GoToWork. But I'm still not sure how this is supposed to be implemented "the right way" with utility AI. Which condition checks belong where, and how do I add that random offset for each NPC separately?
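For what it's worth, since the weekday/time check is essentially a yes/no gate, one way to sketch it is a boolean consideration plus a per-NPC offset derived from a stable hash of the NPC id, so the same agent always gets the same shift on every evaluation. All names here (GoToWorkScore, StableOffsetMinutes) are made up for illustration, not part of any particular utility AI framework:

```cpp
#include <cstdint>

// minutesIntoWeek: 0 = Monday 00:00, range [0, 7 * 24 * 60).
constexpr int kMinutesPerDay = 24 * 60;

// Deterministic per-agent offset: a cheap integer hash of the NPC id,
// reduced to [0, maxMinutes]. Any stable per-agent hash works here.
int StableOffsetMinutes(std::uint32_t npcId, int maxMinutes) {
    std::uint32_t h = npcId * 2654435761u; // Knuth multiplicative hash
    return static_cast<int>(h % static_cast<std::uint32_t>(maxMinutes + 1));
}

// Boolean consideration: 0 outside working days/hours (hard veto), 1 otherwise.
double GoToWorkScore(int minutesIntoWeek, std::uint32_t npcId) {
    int day = minutesIntoWeek / kMinutesPerDay;      // 0 = Monday
    int minuteOfDay = minutesIntoWeek % kMinutesPerDay;
    if (day > 4) return 0.0;                         // Saturday/Sunday
    int start = 6 * 60 + StableOffsetMinutes(npcId, 180); // 6:00 + up to 3 h
    return (minuteOfDay >= start) ? 1.0 : 0.0;
}
```

The final utility would be this gate multiplied by the other (curve-based) considerations, so the veto zeroes out the whole behavior.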
  4. I was experimenting with Apoch's Curvature project and everything made sense, until I got to the point where I had to think about how it automatically chooses the target for a behavior. The code fragment in question is the ChooseBehavior method here: That got me thinking whether that's what I want and how to deal with cases when automatic target choice is not appropriate.

On one hand, it makes sense to iterate over all possible targets, evaluate scores, and then pick not only the winning behavior but also the winning target for that behavior. On the other hand, there might be a huge number of possible targets even for a simple "MoveTo" behavior. I definitely would want to filter them somehow, but the filter criteria cannot always be strictly defined. Sometimes I might want an NPC to consider MoveTo for visible targets only. Sometimes I might want it to consider moving to its home location, which might happen to be on the other side of the game world. And sometimes a behavior might be executable without any target at all (for example, some Idle activity), but Curvature in this case gets stuck with a "Stalled" message because it expects to always receive the winning context from ChooseBehavior.

For example, if I try to implement a GoHome behavior as a MoveTo behavior using some IsHome flag as an input, it would be very inefficient to run the MoveTo behavior on every target just to find the one marked as IsHome for the NPC. After all, the NPC might have a HomeLocation property which could be used immediately as an input for MoveTo. But there is no way to pass it as an input for a consideration (and it makes no sense anyway: HomeLocation is a vector, not some numeric value to use for consideration and scoring). So my guess is that an abstract MoveTo with automatic target choice might be appropriate when choosing among a multitude of similar items.

But in cases when I want my NPC to navigate to some special or unique object, I have to create a very specific behavior (GoHome instead of MoveTo) which contains the logic for extracting the HomeLocation from the NPC agent. Or is there any other elegant way to solve target (context) filtering for behaviors without getting too specific and creating lots of behaviors like GoHome, GoToWork, GoToEnemy ... but also without evaluating every possible item in the world?
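One possible shape for that filtering (purely illustrative, not Curvature's actual API): let each behavior carry a target-provider callback, so GoHome becomes a MoveTo whose provider returns just the single HomeLocation, while a generic MoveTo gets the perception list. Scoring then only ever iterates the filtered candidates:

```cpp
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Hypothetical target record; a real one would hold a location/entity handle.
struct Target { std::string name; float distance = 0.f; };

// A behavior is parameterized with a provider (the filter) and a scorer.
struct Behavior {
    std::string name;
    std::function<std::vector<Target>()> provideTargets; // filtered candidates only
    std::function<double(const Target&)> score;          // utility of one candidate
};

// Score only the provided candidates and return the winner with its score.
std::pair<double, Target> ChooseTarget(const Behavior& b) {
    double bestScore = -1.0;
    Target best;
    for (const Target& t : b.provideTargets()) {
        double s = b.score(t);
        if (s > bestScore) { bestScore = s; best = t; }
    }
    return {bestScore, best};
}
```

With this shape, "GoHome" is just configuration: a MoveTo behavior whose provider returns one item, so no world-wide scan ever happens.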
  5. Thanks. So I guess Apoch's Curvature is the best I can get at this moment. I started experimenting with it and it works nicely; still lots to learn to comprehend its possibilities and integrate it into Unreal.
  6. I guess I secretly hoped for some lazy option, like a GitHub project in C++ with full documentation and examples that can be dropped into Unreal Engine (or any other engine). A quick GitHub search found only C# projects of good quality (some of them more or less tied to Unity); for C++, not so much luck. Anyway, I could port something from C# to C++, unless the original code is too "C#-specific" and architecturally complex. One of them seems like a really good starting point.
  7. Oh, thanks, I somehow missed that one. Looking at the Decision Maker Packages section now, it seems totally relevant to what I was talking about, and it even mentions "situational packages" on the slide. Except that in the video it is more about pushing additional decisions onto the stack of currently available decisions (and popping them off when not needed) instead of limiting the full list. But in the end it achieves the same purpose: optimization through limiting the available decisions. Which means that my reasoning about that "situational stuff" was not as bad as I thought. So there's just one crucial piece to implement: decisions for pushing/popping new decisions (almost getting recursive here) when reacting to a situation. And how to wire it all into Unreal Engine...
  8. I'm playing around in Unreal Engine with the intention to create some kind of small-town-scale Sims-like simulation. I don't need as much micro-managing as in Sims. At first, I want to start with a bunch of simple NPCs that walk their route (using NavMesh, most probably) between work and home, based on time of day. But, depending on different factors, NPCs might choose to visit a shop or a cafe while on their way. Also, there are different jobs that require some flexible behaviors for cases when I closely follow an NPC or interact with it. Unreal Engine has Behavior Trees for AI, but I find them somewhat limiting for my purposes, so I started looking into utility AI. Currently I'm watching the GDC lecture videos about utility AI and trying to figure out how to implement it in Unreal and apply it to my sandbox-like simulation. By the way, are there any utility AI code projects that could be plugged into UE4?

As my town will have many citizens (hundreds? thousands? the more, the better, as long as my PC can handle it) and I want to track them all on a world map (just dots), I would like to make it work efficiently. Of course, I will turn off animations and meshes (with their collisions and physics) when an NPC goes offscreen. But what about AI? How do I switch to a narrower set of parameters and actions to choose from when an NPC is offscreen? And does it really matter in practice? Should I leave the AI evaluating all the possible action scores and apply a check only inside the action itself (if the NPC is offscreen, do simple waiting for some sensible time instead of the full simulation)? These thoughts led me to some other questions and ideas that might be useful for optimization, but I'm not sure how they would work in practice. So I hope somebody else has already thought about this, tried it out, and can share the experience.

In essence, the idea is a new abstraction I call Situation. It could also be considered just some kind of context, but that word is so overused; that's why "situation" was my choice. Basically, a Situation is something like a filter that contains a predefined (at design time) set of parameters and actions that are available, adequate, and make sense in that particular Situation. For example, if an NPC enters a cafe, there is no need to continue evaluating the entire list of utility actions for being on a street or at a workplace. I don't want an NPC to decide that it should start running home as soon as possible (because of some not-so-carefully tweaked parameters) while waiting for an order in a cafe. Instead, I would want to make such an action unavailable in that particular situation. Fewer actions to choose from = fewer parameters to tweak to prevent an NPC from doing something totally weird, right?

I guess this could also be visualized as a tree of utilities. Or it could turn into a hybrid of utility AI (for high-level actions, like "Where should I go next?") and behavior trees ("What sequence of predictable, adequate actions should I attempt to perform while being in this situation?"). But I would like to avoid mixing multiple systems, just to keep it less confusing. Or am I totally wrong here, and I shouldn't attempt to use utility AI for cases when an NPC should be very predictable, almost following a well-known scenario?

Next, I was wondering how an NPC would enter/exit the situations. How do we, humans, limit ourselves to some subset of activities to avoid doing something stupid "just because we can" or "I really wanted to do that at that very moment, even if it was highly inappropriate"? I guess we do that based on our awareness of the environment we are in. "Hush, you are in a church now," a mother reminds again and again a child who starts to become too active.

The rough idea is as follows. Unreal Engine has a good environment query system and I can leverage that. Although, for performance reasons, I'll try to do it in reverse of the human sense. If a human being sees a cafe and enters it, he/she is doing the query: "Am I in a cafe now?" In my case, it might be the cafe itself (through a trigger volume) triggering an event on the NPC: "You are in a cafe now," and the NPC has the logic: "I should limit my activities to something I can normally do in a cafe and completely forget actions that are not sensible in a cafe." And when the NPC exits the cafe, it again receives a trigger volume "exit" event, which in turn removes the restrictions and allows evaluation of the full set of actions.

If we assume that an NPC can be in only one situation at once, it looks like a simple stack with push/pop. Enter the cafe? Push the list of cafe actions and run utility AI evaluations on them. Exit the cafe? Pop the action list and now you are where you were before (with possibly some updated state parameters: you spent some money in the cafe and you reduced your hunger). But is it good enough? Can there be a combination of situations where I need a set of actions for multiple situations at once? In that case my idea breaks down, because a stack is not enough anymore. Trying to combine multiple situations can get messy really fast, and I'd better go with evaluating the entire list of utility functions instead of trying to combine all those multiple situations. Still, I could use this approach in cases when I know 100% that I need a limited set of actions.

What do you think about this rambling? Am I overcomplicating things (actually, because of trying to simplify them by not evaluating the entire set of utility actions all the time)?
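The push/pop idea above could look roughly like this (a minimal sketch with made-up names; the utility evaluator would only ever iterate ActiveActions()):

```cpp
#include <string>
#include <utility>
#include <vector>

// A stack of action sets. The bottom entry is the NPC's base (full) set;
// entering a situation pushes a restricted set on top, exiting pops it.
class SituationStack {
public:
    explicit SituationStack(std::vector<std::string> baseActions) {
        stack_.push_back(std::move(baseActions));
    }

    // Entering a situation (e.g. a cafe trigger volume) pushes its action set.
    void Push(std::vector<std::string> actions) {
        stack_.push_back(std::move(actions));
    }

    // Exiting pops it; the base set at the bottom is never removed.
    void Pop() {
        if (stack_.size() > 1) stack_.pop_back();
    }

    // The utility evaluator only ever scores the top-of-stack set.
    const std::vector<std::string>& ActiveActions() const { return stack_.back(); }

private:
    std::vector<std::vector<std::string>> stack_;
};
```

As noted above, this only works cleanly while situations nest; overlapping situations would need something like a union of tagged sets instead of a plain stack.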
  9. Thanks for your response; that avatar implementation example seems a good one. About FRP: I gave a link to the article in my post, although I should have stated more clearly that I meant functional reactive programming. I had no idea that FRP has so many meanings, oh those abbreviations... Anyway, while looking for some implementations, I see that FRP would be overkill for me: too new, not so many materials to learn from, especially on how to implement it in C++.

Regarding "create the components more like single-instance stateless functions": I just came across an article which states that components MUST be data only, with no behaviour at all. Behaviour is implemented in some kind of XyzSystem (like AnimationSystem, CollisionDetectionSystem, MovingSystem). Those systems are stateless instances which work on the data of entities and components. Now I wonder if this is better than having behaviour components attached to each entity (like having an Entity with MovableComponent, AnimationComponent). I guess each approach has its pros and cons.
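The data-only style described in that article can be sketched in a few lines (illustrative names only; real ECS implementations add entity ids, component storage, and system scheduling):

```cpp
#include <cstddef>
#include <vector>

// Components are pure data: no methods, no behaviour.
struct PositionComponent { float x, y; };
struct VelocityComponent { float dx, dy; };

// Behaviour lives in a stateless system that iterates over component arrays.
struct MovementSystem {
    static void Update(std::vector<PositionComponent>& pos,
                       const std::vector<VelocityComponent>& vel, float dt) {
        for (std::size_t i = 0; i < pos.size(); ++i) {
            pos[i].x += vel[i].dx * dt;
            pos[i].y += vel[i].dy * dt;
        }
    }
};
```

A nice side effect of this layout is that the tight per-array loop is cache-friendly, which is one of the usual arguments for data-only components over per-entity behaviour objects.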
  10. I am reading a lot lately about various ways to implement a game engine architecture. As I am doing this just for fun and to develop my programming skills, I would like to try out some new concepts and see how they work. I am eager to make a simple virtual networked sandbox. Now I have come to a point where I have chosen a rendering engine, a networking engine, and a physics engine, and I have to tie them all together in some architecture. As far as I understand, component-based architecture is a great idea. For now it is a bit unclear to me how I would even implement a component system to make it scalable and data-driven; I still feel new to all this.

I have seen an article where all the component properties are stored on the entity itself, although the entity does not know what to do with those properties, only the components do. And I thought: this could actually mean that I can create completely stateless components, so they could be shared between all the entities in the system. I mean, I could have a single instance of PositionComponent which does not have its own x, y, z. The entity is like a big property-bag and pointers-to-components-bag. At construction of the entity I can load the list of needed components, and each component could have a method InitContainerEntity(ent). This method adds all the needed properties (x, y, z) to the entity. So if I want to change the position of the entity, I just find the pointer to the component instance for this entity (if there is any) and say cmp->Update(ent).

One of the problems is networking. Let's say I have various kinds of game entities: some are local only, some are server-side with proxies on the client. Does this mean that I should have some NetworkComponent and add it to all the entities on the server (I guess there is no need for local-only entities on the server) and to proxy entities on the client? So basically the components become more like global functions of time.

And then maybe it is time to think about FRP, like this article describes: As I understand it, FRP means that properties of the entity are synchronized "automagically" all over the system; if I say a = 1; b = a + 1; then b will always automatically be equal to a + 1, no matter how and where a changes over time. But does this mean that I could also drop the network part into my FRP implementation? I mean: at some moment the server changes a = 2, and all the clients are automatically synchronized and use the new value of a, and b again stays equal to a + 1. Is this how FRP should work over a network?

What about the following situation: the user is editing his avatar. Until he clicks the Apply button, all the changes should be local only. How do I tell the NetworkComponent or my FRP implementation that changes should not be synchronized with the server until I click Apply?

What do you think about all this? Is it wise to create the components more like single-instance stateless functions? Will the overhead of accessing each needed property of the entity become too high? And what is the FRP role in such a networked, data-driven system? Has anyone tried FRP in C++ yet?
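The a = 1; b = a + 1 behaviour can be approximated without a full FRP library by a property that pushes changes to its observers; a networking layer could subscribe the same way. This is only a sketch with made-up names, not an FRP implementation:

```cpp
#include <functional>
#include <utility>
#include <vector>

// A value that notifies dependents whenever it changes, so derived values
// (or a network replication hook) stay in sync automatically.
class Property {
public:
    explicit Property(int v) : value_(v) {}

    int Get() const { return value_; }

    void Set(int v) {
        value_ = v;
        for (auto& f : observers_) f(value_); // push the change downstream
    }

    void Observe(std::function<void(int)> f) {
        observers_.push_back(std::move(f));
    }

private:
    int value_;
    std::vector<std::function<void(int)>> observers_;
};
```

The "don't sync until Apply" case would then just be an observer that buffers changes locally and only forwards them to the server when Apply fires.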
  11. Facial rig for games - bones or morphs?

OK, I found some facial bone rig tutorials, but most of them are really complex, oriented more toward rendering than toward efficient realtime use in games. I guess I have to search harder and experiment with some rigs.
  12. My planned "game" is more like just another 3D virtual world, and I'll need some acceptable facial animations for avatars and NPCs (maybe not really natural-looking, just something that is not completely awkward). The game engine of my choice allows me to use and mix both skeletal and morph animations. As all the body animations are skeletal, at first I thought it would be easier to deal only with skeletal animations, so I could create a facial rig with bones. But after some exploring I found out that people mostly use morph animations for the face. It's hard to find any tutorials about how to create a facial rig with bones and without morphs.

Now I am wondering which approach would be more logical and more resource-efficient, considering that there may be many different avatars and NPCs running facial animations in the same scene (smiling, crying, talking). Should I spend some time and create (or find) a facial rig with bones, or just go with the flow and do the morphs?

Another factor is that I would like to allow users to change some features of their avatar's face. I guess I could create morphs and expose control sliders for the user. But here is the problem: if a user has modified his avatar with morphs, how will the bone animations work? After the user's modifications, the vertices will be in some other positions and not the ones which the bone animations need as zero positions. I would be grateful for any links to good tutorials about creating facial animations with bones.
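On the "morphs break bone animations" worry: if the user's edits are stored as deltas applied in bind pose, before skinning, the bone animations still operate on a consistent rest shape. A toy illustration of that ordering (one bone, translation only, all names made up; real skinning uses weighted bone matrices):

```cpp
struct Vec3 { float x, y, z; };

Vec3 Add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 Scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }

// Apply the customization morph in bind pose first, then "skin" the result.
// Because the delta is defined relative to the bind pose, any bone animation
// still works on top of the customized face.
Vec3 SkinnedPosition(Vec3 bindPose, Vec3 morphDelta, float morphWeight,
                     Vec3 boneTranslation) {
    Vec3 rest = Add(bindPose, Scale(morphDelta, morphWeight)); // morph first
    return Add(rest, boneTranslation);                         // then skin
}
```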
  13. I have read a lot of info about DirectSound buffers, but some articles are outdated and MSDN is tricky to navigate to get the overall structure clear in my head.

Let's start with Windows XP and DirectX 9. Some old articles confused me by saying that only static buffers live in hardware, but MSDN says: "On PCI cards, static buffers and streaming buffers are effectively the same." So I assume I can get a streaming buffer in hardware without problems, great. According to MSDN, I should not mess with the primary buffer; it is small and hard to manage. So I'll use a secondary streaming buffer and pass the flag to put it in hardware, if possible (that is also the default behavior of DS). But what about the primary buffer? As far as I understand, the primary buffer is where all the secondary buffers are mixed (all of what? all DirectSound secondary buffers, all of my application's secondary buffers, or all of Windows' secondary buffers?). But if I have a secondary buffer in hardware, how will the mixing be done and who will do it: the audio card (I would want that), DirectSound, or the KMixer? And if a secondary buffer is in hardware, is the primary buffer also in hardware? If I have an integrated audio chip (Realtek, CMedia), does that mean that even "hardware mixing" is done on the CPU? And are hardware buffers available on those integrated chips at all? I have read that the primary buffer's audio format should match my secondary buffer's format to minimize mixing, but I am not sure what happens if my secondary buffer is in hardware. If I want to get the maximum out of my sound card and have minimal latency with one secondary streaming buffer in hardware, how do I avoid any mixing (to skip the KMixer)? What about the capture buffer: how do I make it efficient as well and skip mixing/resampling?

Now let's look at Windows Vista/7, where DirectSound does not have access to hardware buffers at all. So what should I use there, XAudio2 or WASAPI, to get the same DirectSound-like straightforward implementation that efficiently accesses the hardware buffer of my sound card? What I am trying to achieve is a simple C++ audio wrapper DLL which would choose the best technology based on where I install the application: XP or Vista/7. This DLL is meant for use in a scientific audio application where latency is really important. And what almost kills my idea is the following sentence from MSDN: "Hardware buffers are not supported in 64-bit operating systems." Completely no way at all to get hardware acceleration on 64-bit Windows? Not on XP, not on Vista, not on Win 7?

Some implementation questions: I have not found what the optimal way is to manage the playback buffer. I know that I need to be able to fill data fast enough to avoid dropouts, and I should never write on the sector where the play/write cursors are (dropouts guaranteed). But I have no idea into how many parts it is advisable to divide the buffer. If I have a 1 sec buffer, is it OK to use only two parts of 0.5 sec each? Or is it better to use four 0.5 sec parts and a 2 sec buffer? If I let the user adjust latency, what should I allow them to adjust: the total length of the buffer or the number of sectors in the buffer? Microsoft prefers using buffer notifications, but some people say notifications get screwed up if many applications are using them simultaneously. So is polling a better choice? And finally, the play/write cursors on the secondary buffer: if the buffer is a software buffer, what does the play cursor show, the sample which is being played or the sample which is being sent to the KMixer (which will introduce about a 30 ms delay, according to MSDN)?

I know, too many questions in one post, but I'll be glad if you could help clear things up. P.S. Merry Christmas!
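The sector bookkeeping part of this is plain ring-buffer math and can be sketched without any DirectSound calls (sizes illustrative): a sector is safe to refill only when neither the play cursor nor the write cursor is currently inside it:

```cpp
// Illustrative sizes: a 32 KiB ring buffer split into 4 equal sectors.
constexpr int kBufferBytes = 32768;
constexpr int kSectors = 4;
constexpr int kSectorBytes = kBufferBytes / kSectors;

// Which sector does a byte offset (as returned by a GetCurrentPosition-style
// query) fall into? Offsets wrap around the ring buffer.
int SectorOf(int byteOffset) {
    return (byteOffset % kBufferBytes) / kSectorBytes;
}

// A sector may be rewritten only if neither cursor is inside it; writing
// under a cursor is exactly the dropout case described above.
bool SafeToFill(int sector, int playCursor, int writeCursor) {
    return sector != SectorOf(playCursor) && sector != SectorOf(writeCursor);
}
```

A polling loop would then periodically query the cursors and fill every sector for which SafeToFill returns true, in ring order.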
  14. Thanks for the suggestions. I have already seen some of those models, but there are also some new ones. I guess I'll try to get some parts and combine them to get something complete.
  15. The engine of my choice, Ogre3D, has bone blend masks, so I can create separate animations for the skeleton and mix (not blend) them together, so game characters can run and read a book at the same time :) But the question is: is this an appropriate way to do facial animations as well? So I have two obvious choices:

1. Bone masks: then I can do a single "animation pass" for face and body if I use hardware skinning. It also seems easier to manage and design a single type of animation; I am not a really good 3D designer, so I'll use some existing rigs anyway. I know I'll then need bone masks for separate details of the face (eyes, mouth) so I can mix them to get various animations, but as I do not need really precise animations, it is achievable, I think.

2. Mix pose (vertex) animation and skeletal animation. Disadvantages: I'll have to manage two kinds of animations, and two passes will be needed to mix them. Advantage: using XSI, creating various facial expressions with shape keys is easier than with bones.

Also, which kind of animation would be easier to adapt to mesh changes if I want to allow the user to make a character's face skinnier, then save his customized mesh but still be able to use the same animations? Or maybe I should not store and reload the customized mesh, but create new poses for the user's changes, so I could still share the mesh data between characters. But this approach seems more complicated, so I'll leave it for some later time anyway.

What are your experiences with creating facial animation? Which way is easier / more effective? Thanks for any suggestions.