help me understand this properly [Discussion of Utility AI]

  • February 28, 2017 at 07:05 #18802

    Hey, I’m new to this script and this concept. I’m having trouble understanding how to design my utility functions for a simulation not unlike RimWorld or Dwarf Fortress, but for survival elements.

    Pardon my meanderings, but I need someone to help me understand the flaws in my thinking. I don’t think I understand how to use Utility theory properly.


    Let’s just say I have a “hunger” function that helps AIs manage their food supply.

    On a basic level, hunger gets low enough (high enough?) then the character should eat from their food supply.

    But what if they have no food? Obviously they should look for some. But you’d be pretty dumb if you waited until you had absolutely no food before you started looking.

    The answer is that the AI should look for food BEFORE they’re out of food.

    So now my utility function isn’t hunger. My character is going to eat regularly and this can be an automated (unintelligent) behavior. My utility function should be food.

    My utility function for food: as my food supply goes down, the utility of food goes up. At a certain threshold, it pushes a character to look for food.

    But is that dumb as well? The real key is if I have enough food to make a safe journey.

    So I design the food utility as: only look for food if I don’t have enough to make the journey.

    But this sounds a lot more like a very static behavior tree than a utility function. The agent isn’t really figuring anything out so much as I’m figuring it out for him, and hard coding it.

    What am I missing?


    Another example I’m having trouble with…

    I create a utility function for retreating from battle. On a basic level, it compares the defending NPC’s hitpoints to the attacking NPC’s damage per second. If the threat of death is high enough, then flee.

    But an NPC shouldn’t flee if it can kill the enemy before the enemy kills them. So I factor that into the function. Compare the NPC’s “time to die” against another NPC’s “time to die”. Maybe scale the function so the difference matters more when my time to die is small, but allows more uncertainty and risk taking when my time to die is big.

    But then I need to factor in the other idiots on the battlefield. Being surrounded by a friendly militia is no reason to flee, whereas I’d feel differently if I saw an enemy army coming over a hill.

    What happens if alliances and relationships can change? I suppose an NPC could walk into a trap, thinking they are safe around their friends, when actually it’s a mutiny waiting to happen.

    But some smarter NPCs should be able to estimate the threat by looking at the souring mood of their estranged allies and say “I’m going to get out of here before it gets really ugly”.

    Does this make my utility function for retreat so convoluted that it no longer represents anything? I’m not sure I understand the best way to do this.

    • This topic was modified 2 years, 2 months ago by priyesh.
    February 28, 2017 at 10:36 #18805

    Hi Priyesh,

    Thanks for your post.

    It sounds like you are going through a lot of the right thought processes. Designing good AI in general is challenging and requires you to consider many circumstances, of which some can be incredibly rare, as also exemplified by your ‘mutiny’ scenario.

    Generally, there is no universally right solution – the right solution depends on a number of factors, all more or less specific to your game, e.g. the AI performance budget, development time/budget for AI, experience and skill in designing AI, complexity of the game, number of use cases that need to be covered, etc.

    Utility AI Theory affords somewhat emergent behaviour by utilizing the scoring of options. So, in your hunger example, the utility for whether to go for food (eating/searching) could be a combination of e.g.: time since last eaten, hunger level, length of impending journey, current food supplies, perhaps even body size (larger people need more food?). Each of these utilities can be written as an individual scorer in Apex Utility AI, giving you a flexible and modularized approach that favors exploration.
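    To make the idea concrete, here is a rough Python sketch of summing individual scorers into one utility – this is not the actual Apex Utility AI API, and all names, scales and weights are made up for illustration:

```python
# Hypothetical sketch: each scorer contributes an independent score,
# and the option's total utility is simply their sum.

def hunger_scorer(ctx):
    # More hungry -> higher score (0..100 scale assumed).
    return ctx["hunger"]

def supply_scorer(ctx):
    # Less food on hand -> higher score (arbitrary weight of 5 per unit).
    return max(0, 50 - ctx["food_supply"] * 5)

def journey_scorer(ctx):
    # A longer impending journey -> higher score.
    return ctx["journey_days"] * 10

SCORERS = [hunger_scorer, supply_scorer, journey_scorer]

def seek_food_utility(ctx):
    return sum(scorer(ctx) for scorer in SCORERS)
```

    With this shape, a well-stocked character with no journey ahead scores low even when mildly hungry, while a character about to travel on a nearly empty larder scores high before hunger ever kicks in – which is exactly the "look for food before you're out" behaviour, without hard-coding a threshold.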

    When we design AI, we often write a number of possibly relevant scorers and then simply play around with them in-game (scorers can be enabled/disabled at runtime and scores can be changed at runtime too) until the desired behaviour begins to emerge.

    So, instead of trying to squeeze all considerations into a single scorer or qualifier that feels static and scripted, try combining a number of option scorers and tweak them until you start seeing what you want. Sometimes you will realize that you need to add more scorers to account for scenarios you had not initially thought of, or that some scorers you thought were needed are in fact not.

    The flee example can be solved in a similar way. Start storing the number of enemies in view, and if that number grows the flee utility should get fairly high. Oh, what about allies? Well, start scoring allies in view and use that to ‘counteract’ the flee score from seeing enemies. Add health, ammunition, cover positions or any other relevant factors as individual scorers which can be tweaked, enabled or disabled – all at runtime.
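    A minimal sketch of that counteracting idea in Python (again hypothetical – the weights and the 0..100 health scale are assumptions, not anything from the plugin):

```python
# Hypothetical flee scoring: enemies in view push the score up,
# allies in view counteract it, and low health adds urgency.

def flee_score(enemies_in_view, allies_in_view, health_pct):
    score = 0.0
    score += enemies_in_view * 20        # each visible enemy raises urgency
    score -= allies_in_view * 15         # each visible ally lowers it
    score += (100 - health_pct) * 0.5    # missing health adds up to +50
    return max(0.0, score)               # never below zero
```

    A wounded NPC facing five enemies alone scores far higher than a healthy one facing the same five with a friendly militia around – the "surrounded by friends" case falls out of the arithmetic rather than a special-case branch.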

    Those two-faced allies wanting to lure the NPC into a trap are a somewhat harder problem. How would the NPC be able to evaluate "the souring mood"? Do all NPCs have a "mood" variable that can be queried? If so, build that into a scorer! If not, perhaps you have other relevant variables – e.g. how close are we as allies? Do we trust them? How many of them are there? How are they standing (in a threatening way)? What are my quick fleeing options?

    So basically, compile a list of variables that you want to influence a particular decision. Compile those variables into tweakable scorers (make sure to expose all relevant variables for each scorer). Then start tweaking in-game at runtime until it starts looking good.

    I hope this helps, otherwise feel free to post back here again.

    February 28, 2017 at 15:29 #18813

    That was a really helpful post. If I understand correctly: even if my game revolves around (say) five different utility scores (when to eat, when to flee, when to heal, when to fight, when to build relationships)… each of those scores might actually be the aggregate of several more utility scores?

    This sounds like it could get messy, but if you’re telling me it’s very normal, I can probably wrap my head around it.

    February 28, 2017 at 16:27 #18815

    Hey Priyesh,

    It shouldn’t get messy at all. You’ll still only have 1 qualifier for each of your e.g. 5 different utilities. Each utility can be made up of any number of option scorers. This way, each option scorer class is small, specific to its function and has no redundancies.

    You probably don’t need to do custom qualifiers, as Sum of Children is most likely what you want anyway.
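    A toy Python sketch of what a "sum of children" qualifier amounts to (purely illustrative – not the Apex class structure):

```python
# Hypothetical sketch: a qualifier's score is the sum of its child
# scorers, and the AI selects the qualifier with the highest total.

class SumOfChildrenQualifier:
    def __init__(self, name, scorers):
        self.name = name
        self.scorers = scorers

    def score(self, ctx):
        return sum(scorer(ctx) for scorer in self.scorers)

def select(qualifiers, ctx):
    # Pick the behaviour whose summed child scores are highest.
    return max(qualifiers, key=lambda q: q.score(ctx))
```

    Each of your five utilities would be one such qualifier, each holding its own small, single-purpose child scorers.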

    February 28, 2017 at 20:36 #18831

    Ahhh that makes sense. Sum of Children would keep things clean, unless a child would need multiple parents. For example, a “retreat” score that ties into several functions — a utility of leaving the area, a utility of finding cover, a utility of calling for help. This tool supports using the same score in multiple functions, yes?

    Also / alternatively… what if I wanted to tie a utility function in with a behavior tree. Say, the “retreat” score triggers a behavior tree that chooses between fleeing / taking cover / calling help?

    February 28, 2017 at 21:03 #18840

    Hey Priyesh,

    You can reuse scorers between qualifiers, yes. That is one of the benefits of designing scorers in a modular way.

    Your Utility AI actions can trigger anything, they can simply call any method you want. Thus, if you have a separate behaviour tree implementation, you can simply trigger its execution in the Utility AI action.

    However, note that you can easily reproduce a basic behaviour tree in the Utility AI Editor, and doing it in the Utility AI lets you still use utility-based scoring when relevant or needed. There is nothing prohibiting you from designing your Utility AI in a behaviour-tree-like structure, to the extent that makes sense to you.

    February 28, 2017 at 21:30 #18848

    That’s fascinating and useful to know. Thank you very much!

    March 6, 2017 at 05:44 #18957

    Yeah, mostly you'll want to focus on identifying the variables that are important enough to be worth tracking – the things you'll really need to know to make decisions in your game.

    So hunger, yes, and journey distance, likelihood of death, etc.

    At that point you've identified all the worthwhile variables.

    Now it's just a matter of taking those variables and deciding their importance in the decision function.

    Their value probably won't be static, though – it won't just be "2 hunger gets 2 score". Their score will likely grow non-linearly: exponentially, or with some other growth pattern.

    For example, thirst is really unimportant – near zero – if you just drank and it's raining.
    But it's basically the most important thing in the world once you reach the point of dying without it.
    If 100% thirst kills you and is represented as 100 thirst, you don't want linear scoring, because 20 thirst isn't worth 1/5th of what 100 thirst is worth. 100% thirst means death, which is the most important thing to avoid, so 100 thirst is far more than five times as valuable as 20. Its value varies.

    So first you find these variables, then you find their value based on an algorithm that normally looks roughly like:

    Y = Mx + b

    B is essentially the base value. Take collecting gold, for example: because money is so useful to have, even if you already have quite a bit, it's still at least a little worthwhile to go collect more. It has a certain base value.

    X is the variable itself, say thirst.

    M is the modifier. M can be something simple like 2, but that's really just a linear graph.

    The more interesting options are multiplying by a log, squaring/cubing, taking a square root, etc.

    If you square thirst, for example, 1 thirst is worth 1 score and 100 thirst is worth 10,000 score – 10,000 times as much. For thirst, something like this might be useful so that once thirst hits, say, 95%, it'll be worth so many points that your AI will flee a battle it's winning to get a drink, because if it stays to fight it'll die of thirst.
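    The two curve shapes side by side, as a small Python sketch (the 0–100 thirst scale and the multiplier 2 are just the examples from this thread):

```python
# Linear vs. squared scoring for thirst on a 0..100 scale.

def linear_thirst_score(thirst):
    return 2 * thirst    # Y = Mx + b with M = 2, b = 0

def squared_thirst_score(thirst):
    return thirst ** 2   # grows slowly at first, explodes near 100
```

    The point is the ratio between low and high values: on the linear curve, 100 thirst is only 5x as urgent as 20 thirst; on the squared curve it is 25x, so near-death thirst can outbid almost anything else.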

    Square roots are the opposite, basically. Even a high value tends to be worth only a little more than a low value. This is useful for things that aren't worth much, or that fall off hard after you get some – things you can have enough of. If you're an archer, for example, your first 50 arrows might have a lot of priority in your inventory, but beyond that they're nice but less useful, because you're unlikely to ever really "need" that 51st arrow. As such, your equation might be:

    Score value = min(arrows, 50) // so the first 50 are worth a point each, plus the square root of any arrows beyond 50

    or something like that. The fifty-first arrow isn't worth nothing, but it isn't worth much.
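    That diminishing-returns formula as runnable Python (note it uses min() rather than the modulus operator – modulus would wrap the score back to zero at 50 arrows; the cap of 50 and the square-root falloff are just this thread's example numbers):

```python
import math

# First 50 arrows are worth a point each; arrows beyond 50
# fall off with a square root (diminishing returns).

def arrow_score(arrows):
    base = min(arrows, 50)
    surplus = math.sqrt(max(0, arrows - 50))
    return base + surplus
```

    So 50 arrows score 50, while 150 arrows score only 60 – the 100 surplus arrows together add just 10 points.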

    Likewise you can use logs and such. What you're doing with all this is trying to find the equation for a particular graph. That graph is basically: if I have X amount of, say, hunger, it's Y amount of important that I deal with it. If it's a simple linear issue, where every 1 point of hunger makes you 1 more likely to want to fix it, fine. But most decision variables are very unimportant at some values and very, very important at others. Like thirst, which when you're not at all thirsty is so unimportant you could do just about anything instead of getting a drink of water, and so important when you're dying of thirst that you'd abandon a chest of gold being danced on by naked ladies to get a glass.

    So all you're doing is tweaking these variables' algorithms to find a graph that approximates real life. If you find that for everything, your AI will score things in the same order of importance a human would, and act as a human would – ideally.

    It'll have the right graph so that in the rare case where it's about to win a major battle, but staying to win means letting its lover die, it'll flee and save its lover – AS LONG AS you have the right algorithm for the lover-safety score.

    The important thing to realize, though, is that you're not really writing, say, a battle AI function and a food-eating AI function.

    You're just saying: this is how important it is to stay alive in battle. This is how important it is to eat.

    If at any point, based on the algorithm you gave it for scoring food, the number of points it gets for eating is higher than for fighting – if food is more important right then – it stops fighting and starts chowing down. It doesn't matter that it's mid-battle: if eating scores higher, then by your algorithm it's presumably more likely to die from not eating than from its enemy.

    So all you're doing is writing algorithms for scoring different variables, and the decision of what to do arises organically from whatever has the highest score. Your character can head off to get food, spot a chest of gold on the way and stop to grab it, then get charged by someone so that the "fight" score rises above the "get gold" score and it defends itself, and then mid-battle its thirst climbs so high that the "drink water" score beats even the "fight" score and it flees to drink.

    You didn't write a complicated script of endless if statements to make that happen. You just told it how important things were, and the moment something became more important than what it was currently doing, it started doing that instead.

    Identify the variables, identify the algorithms that graph the proper value of each variable across its range, and if you do that right the behaviour basically writes itself.
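    That whole loop – score everything, do whatever scores highest – fits in a few lines of Python. A hypothetical sketch (all action names, weights and context fields invented for illustration):

```python
# Each tick: score every available action, pick the highest.

def choose_action(ctx, scorers):
    # scorers: dict mapping action name -> scoring function of ctx
    return max(scorers, key=lambda name: scorers[name](ctx))

SCORERS = {
    "fight": lambda c: c["threat"],
    "drink": lambda c: c["thirst"] ** 2 / 100,  # nonlinear: urgent when high
    "loot":  lambda c: 20 if c["gold_nearby"] else 0,
}
```

    With moderate thirst the NPC fights; once thirst crosses into the danger zone, the squared curve pushes "drink" past "fight" and it flees for water – no if-statement script involved.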

    March 6, 2017 at 12:33 #18960

    Hi Sparkz,

    Thank you very much for your elaborate post. I'm sure it will help the OP and others searching for this topic.

    Only one note I want to add: curves, as you extensively describe and discuss, are supported out of the box in Apex Utility AI through Unity's Animation Curves, which means there is a convenient visual editor that makes it easy even for a non-programmer to set up curve scoring.

    September 8, 2017 at 04:24 #21890
    Moodie Younis

    All this was very very helpful and interesting :)
