The AI's Intelligence

edited October 2014 in AI
So, being a major AI buff, the AI is of course the first thing I think about. I'm wondering where exactly the AI of Voxel Quest is planned to fall on the spectrum.

Is it...
1. NPCs are allowed to do very little so as to keep them from doing anything stupid, and their final decisions are essentially random.
2. Follows simple if/then/else statements.
3. Follows simple if/then/else statements, but is smart enough to remember locations, danger zones, things that are bad ideas, good ideas, things that provide good outcomes, etc.
4. Weighs multiple factors when making decisions rather than following simple if/then/else statements - for example, "I'm low on health, almost out of items, far from town, and there's a friendly party nearby, so I'll rest instead of using my last health potions" instead of "I'm low on health, so I'll use my health potions or go to town."

There's also the question of, "Will the AI be able to teach itself, or will it all be pre-programmed by hand?"

Obviously most of this applies primarily to combat situations, but it can work in other areas too: for instance, having the AI remember that they were attacked and forced to flee when they crossed a certain mountain pass, or that Godherb Sunflowers grow on a particular overhang to the south of Feldspar Inn. Other instances might be how they handle other NPCs and the player character - determining whether they feel they can "trust" that player, or whether or not they're willing to take on a deal, etc.

Story elements might be a bit more difficult to maintain with complex AI - although that could be handled as well by simple personality traits, like Josh Parnell's ACEGILS model: Aggressiveness, Creativity, Exploration, Greed, Intelligence, Lawlessness, Socialization. That would add a decent level of complexity to the mix, and would feel gamey... if that information was made readily available to the player. If personalities were included in such a way, it would be pretty enjoyable (in my opinion) to learn about the personalities of the people you're with. On the other hand, that's a heck of a lot of dialogue work to deal with.

As for whether the AI will be pre-programmed or capable of teaching itself: I'm a major fan of self-taught AI (no, not like Terminator), though it can be difficult to implement well. There's more thought involved, but it would have a better outcome in the end. It works something like this: "I've learned through trial and error that using my sword while the enemy is protected by warding magic isn't a good idea," or, "Stepping on fire runes hurts a bit, so I'm not doing that anymore."

Anyway, obviously AI is a pretty big thing for me... :P Hopefully this post made some degree of sense, and I'm interested to hear everyone else's thoughts on it.

Comments

  • The core functionality of the AI is driven by needs and wants - called motivations. Most of these motivations actually boil down to very few common human desires - to stay alive, and to reproduce. To reproduce, you need to find an ideal mate. To do that, you must appear "alpha", which means you must be beautiful or wealthy or something. To stay alive you need to not die of hunger. To do that, you need food. To get food you need money. And so on. These are all handled by backward chaining, as I describe in my other AI post. Each need is weighted under the current circumstances - if you are starving, food becomes much more valuable/important. Each turn, the AI calculates the move which generates the highest score - which action will result in the best events (i.e. which action won't kill me? :) ).

    There is no if-then. There are no state machines or diagrams. There is just a set of specified rules, and each NPC calculates a potential score based on a given action, given the rules. The action with the highest score is the one they take. It can be hard to soak in at first, but once I show the rule system it should be fairly easy to see how things work. :)
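
    A minimal sketch of what that needs-weighted scoring could look like, assuming a flat "need -> urgency" mapping and actions that declare how much they satisfy each need. All names here are illustrative, not the actual Voxel Quest rule system:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Action:
        name: str
        satisfies: dict = field(default_factory=dict)   # need -> how much this action helps

    def score_action(action, needs):
        """Weight each need by its current urgency: a starving NPC values food-related actions highly."""
        return sum(urgency * action.satisfies.get(need, 0.0)
                   for need, urgency in needs.items())

    def choose_action(actions, needs):
        """No if-then: every action is scored against the same rules; the highest score wins."""
        return max(actions, key=lambda a: score_action(a, needs))

    # Example: a starving NPC prefers finding food over courting a mate.
    needs = {"hunger": 0.9, "reproduce": 0.3, "stay_alive": 0.8}
    actions = [
        Action("eat_bread",   {"hunger": 1.0, "stay_alive": 0.2}),
        Action("court_mate",  {"reproduce": 1.0}),
        Action("flee_danger", {"stay_alive": 1.0}),
    ]
    print(choose_action(actions, needs).name)   # -> eat_bread
    ```
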
  • Also, I should add, actions are calculated within proximity of the player. The farther an NPC is from you, the more dormant its AI is, responding only to major events.
  • gavanw said:

    Also, I should add, actions are calculated within proximity of the player. The farther an NPC is from you, the more dormant its AI is, responding only to major events.

    Will there be anything to produce the illusion that they have been active? Or is everyone far away basically in stasis?
  • edited October 2014
    I really like the motivation bits, but that only determines their underlying goals, really - it doesn't say "I want food, so therefore, do I travel to this spot I know of where strawberries are growing, attack the next traveler I see and hope they have something, or go to the distant town to the north?"

    As to the second post... Ack. e.e That could use a bit of help, and it would be easy to fix. Here's my proposed solution:

    You of course understand LoD (level of detail). Apply that to the AI:
    1. Simulate at more and more basic levels the farther they are away - and at longer intervals.
    2. If far enough away, have the AI stop functioning until you come within range again - and then simulate for the amount of time it was dormant.

    It won't be exact, of course, but it would give you more of a sense of being in a truly dynamic, living, breathing world. If done correctly, you could also stagger the updates so the LoD-affected AI isn't all simulated at once, making the impact on framerate hardly noticeable.
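
    A rough sketch of what that AI LoD could look like, assuming the game loop tracks a tick counter and each NPC knows its distance to the player. The thresholds, fast_forward(), and take_turn() are placeholders, not engine code:

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class NPC:
        name: str
        distance_to_player: float
        dormant_since: Optional[int] = None

        def take_turn(self):
            print(f"{self.name} acts")

    def fast_forward(npc, missed_ticks):
        # Placeholder: approximate what the NPC would have done while dormant.
        print(f"{npc.name} catches up on {missed_ticks} ticks")

    NEAR, FAR, STASIS = 40, 200, 1000   # hypothetical distance thresholds

    def update_interval(distance):
        """Farther NPCs are simulated at coarser levels and longer intervals."""
        if distance < NEAR:
            return 1        # full simulation every tick
        if distance < FAR:
            return 10       # coarse simulation every 10 ticks
        if distance < STASIS:
            return 100      # reacts only to major events
        return None         # effectively in stasis until the player returns

    def maybe_update(npc, tick):
        interval = update_interval(npc.distance_to_player)
        if interval is None:
            if npc.dormant_since is None:
                npc.dormant_since = tick            # remember when it went dormant
            return
        if npc.dormant_since is not None:
            fast_forward(npc, tick - npc.dormant_since)   # catch up on missed time
            npc.dormant_since = None
        if tick % interval == 0:
            npc.take_turn()

    farmer = NPC("farmer", distance_to_player=1500)
    for t in range(5):
        maybe_update(farmer, t)     # dormant: nothing happens
    farmer.distance_to_player = 30  # player walks back into range
    maybe_update(farmer, 5)         # catches up on 5 ticks, then acts
    ```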


    Edit: Also curious... Will personality have any impact on whether or not someone falls in love with you? And how much? It seems like something debatable. I'm personally for some non-shallow AI from time to time. :)
  • It is worth considering how much effort it takes to simulate the underlying data without rendering everything or introducing any delays. Especially if such simulation can be done during spare processor cycles: if the game is asynchronous, with the player taking discrete turns, there will often be times when they are idle, or otherwise giving the computer more time than it needs to simulate the relevant things. Having a low-priority thread do the background simulations could help keep the rest of the world breathing.
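
    A minimal sketch of that low-priority background thread idea, assuming a simple job queue of "simulate this region" callables. Python threads don't expose OS priorities, so this one just throttles itself; a real engine would likely use its own job system:

    ```python
    import queue
    import threading
    import time

    sim_jobs = queue.Queue()   # units of background work: "simulate this region"

    def background_simulator(stop_event):
        """Low-priority worker: chew through simulation jobs while the player idles."""
        while not stop_event.is_set():
            try:
                job = sim_jobs.get(timeout=0.1)
            except queue.Empty:
                continue
            job()
            time.sleep(0.01)   # yield aggressively so the main thread stays smooth

    stop = threading.Event()
    threading.Thread(target=background_simulator, args=(stop,), daemon=True).start()

    # While the player thinks about their turn, queue up cheap catch-up work.
    sim_jobs.put(lambda: print("simulating the distant farming village"))
    time.sleep(0.5)   # give the worker a moment before the script exits
    stop.set()
    ```
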
  • Talvieno said:

    I really like the motivation bits, but that only determines their underlying goals, really - it doesn't say "I want food, so therefore, do I travel to this spot I know of where strawberries are [growing], attack the next traveler I see and hope they have something, or go to the distant town to the north?"

    But wouldn't the "full emotional profile and temperament" of the character help to solve the example dilemma you've posited (by adjusting the score of the various available get_food actions)?

    Is this how the score-maximizing algorithm works? In the case of a high-score 'tie', is the tie then resolved by a dice roll?

    See:
    gavanw said:

    Each turn, the AI runs a score maximization algorithm. [...] the [character] predicts, by facts in the system [...] The [character] would explore available actions to change the predicted course of action [...] All of these facts are evaluated against proximity, availability (is a hero around?), etc. [...]

  • Right! Lol, sorry, I misread. Yeah, that's a good way to do it. Okay, works fine by me. :) That particular AI model is actually really good for self-teaching... hmmmmm....
  • In theory, the get_food goal will have many possible solutions, and those solutions all have weights based on how feasible each will be to achieve, the cost of doing it, and other factors like the NPC's own temperament, which together lead to their decision. A violent man may go out to rob people as soon as he gets hungry and has no food; a kind man may only resort to it out of desperation, when he has no other options.
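
    A hedged sketch of how a single get_food goal might fan out into weighted candidate solutions, where the score depends on feasibility (payoff), cost, and a violent-temperament trait. The numbers and names are made up for illustration:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Solution:
        name: str
        payoff: float    # expected chance of actually getting food
        cost: float      # travel time, effort, danger
        violence: float  # how strongly the act conflicts with a peaceful temperament

    def score(solution, desperation, violent_temperament):
        # A kind NPC pays a heavy moral penalty for violent options; a violent one barely notices it.
        moral_penalty = solution.violence * (1.0 - violent_temperament)
        return desperation * solution.payoff - solution.cost - moral_penalty

    candidates = [
        Solution("forage_known_strawberry_patch", payoff=0.6, cost=0.2, violence=0.0),
        Solution("rob_next_traveler",             payoff=0.8, cost=0.1, violence=1.0),
        Solution("walk_to_northern_town",         payoff=0.9, cost=0.8, violence=0.0),
    ]

    # With these toy weights, the kind, mildly hungry NPC forages,
    # while the violent, starving one robs the next traveler.
    for desperation, temperament in [(0.5, 0.1), (0.9, 0.9)]:
        best = max(candidates, key=lambda s: score(s, desperation, temperament))
        print(f"desperation={desperation}, violent_temperament={temperament} -> {best.name}")
    ```
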
  • Mystify said:

    A violent man may go out to rob people as soon as he gets hungry and has no food; a kind man may only resort to it out of desperation, when he has no other options.

    Reminds me of a funny story. The AI in The Elder Scrolls IV: Oblivion was driven by a similar system, but apparently there wasn't enough weight put on not killing people at times. There were NPCs who would run off and kill someone else over a broom with which to sweep their floor.

    So... I guess the moral is make sure "don't murder" is weighted higher than "sweep floor."
  • edited October 2014
    Mystify said:


    Will there be anything to produce the illusion that they have been active? Or is everyone far away basically in stasis?

    AI will probably never be completely in stasis - it will just have its actions update much less frequently. I still have to benchmark these things on a larger scale. Right now the game world is probably way too huge for its own good - you could never easily cover it all. It will likely be shrunk. Quality > Quantity

    Edit: this applies to Talvieno's comments as well. :)

  • Grent said:

    So... I guess the moral is make sure "don't murder" is weighted higher than "sweep floor."

    Funny you mention this - I pointed out in another thread (I think on RPG Codex) what happens if the motivation is not weighted properly. Example: you are really tired, but your house is burning down. Of course, the sensible thing is just to sleep in your neighbor's bed while your house burns down, riiiiight? ;)

  • Keep in mind that you could generate the world as needed, keeping it to the minimal necessary size at any given time without having firm borders.
  • gavanw said:

    The core functionality of the AI is driven by needs and wants - called motivations. Most of these motivations actually boil down to very few common human desires - to stay alive, and to reproduce. To reproduce, you need to find an ideal mate. To do that, you must appear "alpha", which means you must be beautiful or wealthy or something. To stay alive you need to not die of hunger. To do that, you need food. To get food you need money. And so on. These are all handled by backward chaining, as I describe in my other AI post. Each need is weighted under the current circumstances - if you are starving, food becomes much more valuable/important. Each turn, the AI calculates the move which generates the highest score - which action will result in the best events (i.e. which action won't kill me? :) ).

    There is no if-then. There are no state machines or diagrams. There is just a set of specified rules, and each NPC calculates a potential score based on a given action, given the rules. The action with the highest score is the one they take. It can be hard to soak in at first, but once I show the rule system it should be fairly easy to see how things work. :)

    I do love the idea of the backward-chaining AI the way you describe it.

    It seems like such a simple system, yet powerful in its ability to create emergent behaviours.
  • gavanw said:

    [...] As with chess, you can adjust how deep the system makes predictions in order to speed up computation, at the cost of slightly less effective AI. In this case, even one level of prediction makes all the difference.

    It would be neat if different NPCs had different depths. A farmer may have a simple AI that doesn't look very deep, while a select few, like kings, villains, etc., are devious and explore much deeper to hatch more complex plots. By having only a few NPCs with such depth, it should be easier to afford the computation cycles, and it allows for richer interactions.
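
    A toy sketch of role-dependent prediction depth, in the spirit of chess lookahead: expand() and evaluate() stand in for whatever the real rule system provides, and the role/depth table is purely hypothetical:

    ```python
    PLAN_DEPTH = {"farmer": 1, "merchant": 2, "king": 4, "villain": 5}  # hypothetical roles

    def best_plan(state, depth, expand, evaluate):
        """Depth-limited lookahead: return (score, action list) for the best sequence found."""
        best = (evaluate(state), [])          # "do nothing further" is always an option
        if depth == 0:
            return best
        for action, next_state in expand(state):
            score, tail = best_plan(next_state, depth - 1, expand, evaluate)
            if score > best[0]:
                best = (score, [action] + tail)
        return best

    def plan_for(role, state, expand, evaluate):
        return best_plan(state, PLAN_DEPTH.get(role, 1), expand, evaluate)

    # Toy usage: states are gold totals, actions are trades that change them.
    trades = lambda gold: [(f"trade_{delta:+d}", gold + delta) for delta in (-2, 1, 3)]
    print(plan_for("farmer", 0, trades, evaluate=lambda gold: gold))  # shallow, one-move plan
    print(plan_for("king",   0, trades, evaluate=lambda gold: gold))  # plots four moves deep
    ```
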
  • So you're an elitist then...you think these poor farmers should be denied their compute cycles. ;)

    As mentioned before, Quality > Quantity. I don't care if I have to simulate a small world; I would rather have 100 super-intelligent NPCs than 1,000,000 dumb ones. Of course, with some trickery, you can have both, but initially, as I tend to do, I am not going to optimize much. We can scale the number of NPCs over time; the first thing that should be done is getting it working properly :)
  • I think there are diminishing returns to consider. A bunch of hyper-intelligent farmers adds little value over reasonably intelligent farmers, but a hyper-intelligent grand vizier offers a lot of value over a reasonably intelligent grand vizier. The best quality for a farmer is not a scheming mastermind; their needs, goals, and ambitions are relatively simple, and hence don't take much effort to accomplish, so making them smarter doesn't help that much. In contrast, an NPC that needs to plot, scheme, lie, undermine, and manipulate everyone to achieve his goals needs enough capability to pull it off well.
    The way I was picturing it was that it is tied to an intelligence attribute of some kind - let's call it "ability to plan" - so a less intelligent NPC doesn't think as many layers ahead as a more intelligent one. The wily NPCs need a much higher ability to plan to fulfill their goals, and so it is the NPCs with a higher ability to plan that fill those roles. A physicist is smarter than a farmer, not because being a physicist makes you smarter, but because you need to be smart to be a physicist.
  • Not only that, but a farmer could have a few farmhands to delegate to, while a king could have hundreds of NPCs at his disposal. So that should factor into decision-making too.
  • @gavanw:
    Josh Parnell actually ran up against the exact same problem. As he was dealing with numbers in the tens of thousands, he ultimately went with exactly what @Mystify is suggesting: In the Limit Theory community, it's known as the "NPC/Worker split". Some NPCs retained full AI capabilities, while others became "dumb", only doing what was required of them.

    Depending on how large you want your world to grow... a mix could be useful. If we imagine that there are 20 people in a town, that's a maximum of five towns for the entire game world, right? That's a bit slim... but still interesting, at least, if you know that the person became a bartender not because bartending was their assigned job, but rather because they're good at it and it was a job that was in demand and lucrative. I suppose it could be optimized later... but it's all right for now.
  • Mystify said:

    I think there are diminishing returns to consider. A bunch of hyper-intelligent farmers adds little value over reasonably intelligent farmers, but a hyper-intelligent grand vizier offers a lot of value over a reasonably intelligent grand vizier. The best quality for a farmer is not a scheming mastermind; their needs, goals, and ambitions are relatively simple, and hence don't take much effort to accomplish, so making them smarter doesn't help that much. In contrast, an NPC that needs to plot, scheme, lie, undermine, and manipulate everyone to achieve his goals needs enough capability to pull it off well.

    Instead of depending on an arbitrary intelligence score (I mean, I'm fairly sure even a medieval-era farmer would still be more intelligent than the planning depth we're talking about here), I think it'd be more useful to rely on availability of means.

    Let's say your farmer has a son who just became captain of the local town garrison. This farmer suddenly has a lot more possibilities than your typical farmer. Imagine he starts a revolution - it'd make for a great event.

    On the other hand, I'm concerned about the computing power a complete needs-and-motivation optimization would require. This looks a lot like a huge graph search, which the game certainly can't afford every frame. Moreover, if everybody makes the same optimal choices for "get_food", wait until your [dumb] farmer sees 30 famished people coming to him (probably ready to murder him) to get to his supplies. Because, you know, the merchant's prices were just a tad too high.

    All this to say: do you really want optimal decisions? Probably not. Do you manually inject randomness into their choices, or just rely on the fact that your optimization algorithm might give suboptimal results? Or, even better, use stochastic optimization: it's faster, but approximate in the short term.

    I guess it all comes down to the scope of the world and the target hardware VQ is expected to run on.

  • Whatever the means to accomplish it, I think it's true that extra AI computation can be very useful in shaping events and laying out schemes, but giving that computation to every NPC would limit the depth of the world and outweigh the benefits, so a mechanism needs to exist to apply the extra computation where it is most beneficial.
  • Anyone ever see the Princess Bride? A farm boy changes the fate of the entire kingdom. :D (ducks shoe)

    But seriously, these are some of the things that make for great stories - unpredictable things, characters acting outside of the nature of their role, etc.

    (Cue trailer voice: In a world...where evil reigns...one farm boy...must fight against all odds...to claim his true destiny.)

    Anyhow, yes, anyone higher up the chain of command will probably, by necessity, need more compute cycles, since they will likely evaluate the motives and inclinations of both those they command and those they ally or go to war with.
  • edited October 2014
    Okay, consider this. Maybe we're looking at it from the wrong angle. Perhaps, instead of arbitrarily deciding who or what should have "more cycles", maybe we should flip it.

    Set a small number of NPCs to be high-level AI, like mayors and adventurers, and a greater number of NPCs to be low-level AI, like farmers, bartenders, and shopkeepers. Don't assign their jobs - let them choose which job is best for them and will make them the most money. The advantage with AI is that unlike the player, they can't get bored of tedious jobs like bartending. Simply make it a viable path, and if they think it's what's safest and what they're best suited for, they'll go for it.

    In addition, flip the switching between high and low level AI work around too: instead of making AI dumber when keeping it high-level is unnecessary, make it smarter - when necessary. This will work just as well, with the added bonus of not having to decide when smart AI isn't needed. It's a lot simpler to decide when it is needed.
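
    A small sketch of that "promote when needed" idea: every NPC starts on the cheap tier, and explicit triggers temporarily upgrade them, so nothing has to guess when it is safe to downgrade. The triggers and fields are illustrative:

    ```python
    from dataclasses import dataclass

    LOW, HIGH = "low", "high"

    @dataclass
    class NPC:
        name: str
        tier: str = LOW
        hp: float = 1.0
        in_danger: bool = False
        has_grand_goal: bool = False   # plotting, ruling, adventuring, etc.

    def needs_high_tier(npc):
        # Promote on demand; demote automatically once no trigger applies.
        return npc.has_grand_goal or npc.in_danger or npc.hp < 0.25

    def update_tier(npc):
        npc.tier = HIGH if needs_high_tier(npc) else LOW

    farmer = NPC("farmer")
    update_tier(farmer)
    print(farmer.tier)          # "low" while life is uneventful
    farmer.in_danger = True     # a dragon attacks the village
    update_tier(farmer)
    print(farmer.tier)          # "high" until the threat passes
    ```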


    And yes, I think everyone has seen Princess Bride. :P It's safe to say it's a classic fantasy story.
  • I like that. Then, the next problem is to find a way to measure the necessity of higher intelligence.
  • I have some more thoughts on that matter.
    The AI shouldn't have to recompute every possibility from scratch every timestep - that's just a lot of wasted, repeated effort. This lends itself to two phases of AI: planning and updating. Planning works on longer-term goals and lays out an overall course of action. Updating is a combination of verifying the ongoing feasibility of the plan and tweaking it to fit changes in circumstance, or delaying it to deal with more pressing concerns.
    Planning can be uneven, so NPCs with grander goals can get more time to plan, and planning time can also be asynchronous, occurring whenever there is free processing time and applied where plans are needed, while updating is more even and done by everyone. Under such a scheme, a farmer has simple goals and can put together a plan to accomplish them quickly, so he doesn't need much planning time, while the grand vizier has much harder goals and needs more planning time to accomplish them. This will still allow the farmer to put together a complex scheme if his circumstances warrant it, while being economical with the additional processing time.
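
    A rough sketch of that planning/updating split, assuming hypothetical NPC objects that expose plan_still_valid(), advance_plan(), replan(), and a goal_difficulty value - none of which are real Voxel Quest API:

    ```python
    import heapq

    def update_all(npcs):
        """Cheap per-tick pass: keep every NPC's current plan ticking along."""
        for npc in npcs:
            if npc.plan and not npc.plan_still_valid():
                npc.needs_replan = True
            else:
                npc.advance_plan()

    def plan_some(npcs, time_budget_ms, plan_cost_ms=5):
        """Spend spare frame time replanning, hardest goals first."""
        queue = [(-npc.goal_difficulty, i, npc)
                 for i, npc in enumerate(npcs) if npc.needs_replan]
        heapq.heapify(queue)
        spent = 0
        while queue and spent + plan_cost_ms <= time_budget_ms:
            _, _, npc = heapq.heappop(queue)
            npc.replan()              # the grand vizier gets most of these slots
            npc.needs_replan = False
            spent += plan_cost_ms
    ```
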
  • The way I imagine it will work is explained in detail here, but that's on a different forum.

    Mystify is right - things don't need to be recalculated every timestep. The key to optimization is figuring out when a value changes, and then recomputing only those values. To do this, let's say we have value A, which is B * C. Instead of recomputing B * C every step, simply keep in memory what B and C were before, and if one of them changes, recompute.
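
    A tiny sketch of that dirty-flag idea with A = B * C; real dependency graphs get more involved, but the principle is the same:

    ```python
    class Cached:
        def __init__(self, b, c):
            self.b, self.c = b, c
            self._a = None
            self._dirty = True

        def set_b(self, value):
            if value != self.b:
                self.b, self._dirty = value, True

        def set_c(self, value):
            if value != self.c:
                self.c, self._dirty = value, True

        @property
        def a(self):
            if self._dirty:               # recompute only when B or C changed
                self._a = self.b * self.c
                self._dirty = False
            return self._a

    v = Cached(3, 4)
    print(v.a)      # computes 12
    print(v.a)      # reuses the cached 12
    v.set_b(5)
    print(v.a)      # recomputes -> 20
    ```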

    As to determining when higher intelligence is necessary:

    Gavan has stated that the absolute goal for any NPC, be it animal, human, elf or monster, is to survive. Therefore, deciding when higher intelligence is necessary is easy: when the NPC is in danger of death, you up the intelligence levels (within reason). Instead of just wandering around in the same area and doing the same thing, the NPC may start to talk to people, search the surrounding area, or, if desperate, may even switch to banditry or thievery in an attempt to keep itself alive. Likewise, if someone has exceedingly low hitpoints and is near death, they suddenly become more intelligent - this can be considered a "fight or flight" response, or an "adrenaline rush". High threat levels, such as a town being attacked by a dragon, could do this to the inhabitants too.



    This is what I'd consider the different levels of intelligence for any VQ NPC:

    Search ability
    1. The NPC follows another NPC around for as long as it suits them: party members, for instance. The comfort statistic could make them leave if they grow too uncomfortable.
    2. The NPC stays in as comfortable a position as it knows. Most animals/monsters would use this.
    3. The NPC stays in as comfortable a position as it knows, while still trying to fulfill the secondary goal of acquisition of wealth. Most human NPCs would use this.
    4. The NPC is capable of searching near areas where it has been in an attempt to acquire knowledge. Any desperate NPC would use this.
    5. The NPC is capable of using knowledge it has acquired to explore the countryside on its own, or with others. NPCs with secondary goals such as "acquiring wealth" might do this by default - these are other adventurers.
    Can increase from 1 to 3, from 3 to 4, and from 4 to 5. Can also increase from 2 to 4 for animals. (This feels like it needs work.)

    Acquisition of knowledge
    1. The NPC is incapable of speech and draws from a pre-learned knowledge base.
    2. The NPC is incapable of speech but can learn through observation.
    3. The NPC is capable of using speech, but only says scripted things such as "Good morning, John!"
    4. The NPC is capable of using speech and queries other intelligent NPCs around it for information.
    Can increase from 1 to 2 in dire situations for animals, or from 3 to 4 for intelligent creatures.

    Altruism
    1. The NPC is concerned only with its own survival. Some animals like chickens might do this.
    2. The NPC is concerned about itself and a few other NPCs deemed "important": party members, family members. Most humans and dogs would do this.
    3. The NPC is concerned about everyone it knows that lives nearby: townsfolk. Highest level, and would be used primarily for governing types.
    Can decrease from 3, to 2, to 1 in dire situations - this is actually cheaper to calculate.
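
    One possible way to encode those tiers and their allowed transitions as plain data, so a promotion or demotion becomes a table lookup. The structure is just a suggestion mirroring the lists above:

    ```python
    # level -> set of levels it may shift to, taken from the transition notes above
    SEARCH_TRANSITIONS    = {1: {3}, 2: {4}, 3: {4}, 4: {5}}
    KNOWLEDGE_TRANSITIONS = {1: {2}, 3: {4}}     # animals 1->2, intelligent creatures 3->4
    ALTRUISM_TRANSITIONS  = {3: {2}, 2: {1}}     # narrows step by step in dire situations

    def can_shift(transitions, current, target):
        return target in transitions.get(current, set())

    print(can_shift(SEARCH_TRANSITIONS, 3, 4))   # True: a desperate NPC starts searching
    print(can_shift(SEARCH_TRANSITIONS, 1, 5))   # False: no direct jump to explorer
    ```
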
  • When they are at risk, then more intelligence is warranted, but that doesn't seem sufficient as the only trigger. Some people need more computation not because they are at risk, but because they are dealing with a more complex goal. They are the ones plotting and scheming and manipulating people, and that requires more effort than going about their daily lives.
  • edited October 2014
    @Mystify @Talvieno -

    Yes, not planning to recompute each time step. Rather, there is sort of an event listener system.

    Say some fact gets changed: "the king is alive" becomes "the king is dead". This changes a lot of things.

    Any AI rule which depended on "the king is alive" (or indirectly chained to it) gets invalidated and recomputed.

    So, say some NPC had the goal: "assassinate the king" - well, looking into the ruleset: "required(assassinate,X:isAlive)" (the assassinate function/goal requires that the variable ("X") in question's isAlive property evaluates to true). Again, this is rough pseudocode but you get the idea (in reality, the isAlive test would probably be embedded in the definition of the assassinate function/goal)
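
    A hedged sketch of that fact-driven invalidation: goals register which facts they depend on, and flipping a fact only re-evaluates the goals that were listening to it. The assassinate/isAlive example follows the pseudocode above; everything else is made up for illustration:

    ```python
    from collections import defaultdict

    facts = {("king", "isAlive"): True}
    listeners = defaultdict(set)            # fact -> goals that depend on it

    class Goal:
        def __init__(self, name, required_facts):
            self.name, self.required = name, required_facts
            self.valid = None
            for fact in required_facts:
                listeners[fact].add(self)   # register as a listener on each fact

        def reevaluate(self):
            self.valid = all(facts.get(f, False) for f in self.required)

    def set_fact(fact, value):
        if facts.get(fact) != value:
            facts[fact] = value
            for goal in listeners[fact]:    # only dependent goals get recomputed
                goal.reevaluate()

    plot = Goal("assassinate_king", [("king", "isAlive")])
    plot.reevaluate()
    print(plot.valid)                       # True while the king lives
    set_fact(("king", "isAlive"), False)    # "the king is dead"
    print(plot.valid)                       # False: the goal has been invalidated
    ```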

    By the way, every string gets translated into a unique ID, a single 64-bit integer (just as a compiler would do with a variable). Thus many computations are relatively fast, since there is no string processing - just processing relationships between single integers. You can redefine strings at runtime and it will create new IDs if needed. The lookup table goes both ways if needed - you can turn a string into an integer and vice versa, which is mostly useful for debugging.
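
    A small sketch of that two-way string-to-integer interning. Python integers aren't fixed at 64 bits, but the idea is the same: rules compare small integers instead of strings, and the table can translate back for debugging:

    ```python
    class SymbolTable:
        def __init__(self):
            self._to_id = {}
            self._to_str = []

        def intern(self, name):
            """Return the existing id for name, or assign the next one."""
            if name not in self._to_id:
                self._to_id[name] = len(self._to_str)
                self._to_str.append(name)
            return self._to_id[name]

        def lookup(self, symbol_id):
            """Reverse mapping, mostly useful for debugging output."""
            return self._to_str[symbol_id]

    symbols = SymbolTable()
    king = symbols.intern("king")
    assert symbols.intern("king") == king       # same string, same id
    print(symbols.lookup(king))                 # -> "king"
    ```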

  • Goal-generation is something Josh needed to deal with as well. (I promise not to do too many of these "Josh" references; I make this one only because it's interesting, not necessarily "better.")

    In his devlog of December 13, 2013, Josh described his unification of bi-directional reasoning. In addition to breaking down super-goals into sub-goals, which gives you the list of things to be done, he implemented forward-chaining that allows NPCs to generate plans. This lets them work out both "what should I do" and "how do I do it."

    Again, I'm not suggesting the current system for VQ either should or doesn't do that. It was just a very interesting comment from another developer who also wants to have NPC AI that behaves in ways that will feel right for that game.

    Here's a question: In VQ, will it be possible to ask NPCs why they did something? (That assumes we'll be able to interact with NPCs in a personal way, of course.)
  • Goal-generation is something Josh needed to deal with as well. (I promise not to do too many of these "Josh" references; I make this one only because it's interesting, not necessarily "better.")

    In his devlog of December 13, 2013, Josh described his unification of bi-directional reasoning. In addition to breaking down super-goals into sub-goals, which gives you the list of things to be done, he implemented forward-chaining that allows NPCs to generate plans. This lets them work out both "what should I do" and "how do I do it."

    Again, I'm not suggesting the current system for VQ either should or doesn't do that. It was just a very interesting comment from another developer who also wants to have NPC AI that behaves in ways that will feel right for that game.

    Here's a question: In VQ, will it be possible to ask NPCs why they did something? (That assumes we'll be able to interact with NPCs in a personal way, of course.)

    Interesting - I'm sure Josh has run into a lot of problems that I have not yet anticipated.

    Anyhow, yes, it is actually essential that you can ask NPCs about their motivations (which they might lie about, if I can make the AI that strong). Not only that, but I have debated having the chain of thought visible, to show why NPCs are doing something, using sort of Sims-style thought bubbles. Why is that NPC baking bread at 1:00 AM? Oh, he is hungry. :) The downside of this, of course, is that it could destroy a lot of the mystery in certain scenarios.