Quest Generation AI?

Has anyone read the doc on how F.E.A.R.'s AI was built?
http://www.media.mit.edu/~jorkin/gdc2006_orkin_jeff_fear.doc

Could a system like that be styled for quest generation too?
Assuming you've procedurally generated the world and then the NPCs' motives and personality traits, you could build an ends-and-means evaluation system that creates quests from NPCs: it takes in an NPC's goal, method (honest, asshole, greedy, generous), and means (money on hand, skill training to offer, etc.), then figures out a task they want done, what they're able to ask for and offer, and whether they follow through given all of that.
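As a rough sketch of that ends-and-means idea (everything here is hypothetical - the NPC fields, the method modifiers, and the reward formula are just assumptions to make the shape concrete):

```python
from dataclasses import dataclass

@dataclass
class NPC:
    name: str
    goal: str      # what the NPC wants done
    method: str    # "honest", "greedy", "generous", ...
    gold: int      # means: money on hand
    skills: list   # means: training they could offer instead of gold

def make_quest(npc: NPC) -> dict:
    """Derive a quest offer from an NPC's goal, method, and means."""
    # A greedy NPC lowballs the reward; a generous one pays more.
    modifier = {"greedy": 0.5, "honest": 1.0, "generous": 1.5}.get(npc.method, 1.0)
    reward = int(npc.gold * 0.2 * modifier)
    # If the NPC can't pay enough gold, fall back to offering skill training.
    if reward < 10 and npc.skills:
        return {"task": npc.goal, "giver": npc.name, "reward": ("training", npc.skills[0])}
    return {"task": npc.goal, "giver": npc.name, "reward": ("gold", reward)}

quest = make_quest(NPC("Aldric", "recover stolen heirloom", "greedy", 200, ["smithing"]))
```

The interesting part is the fallback: the same goal produces different quests depending on the NPC's means, so "follow through or not" can also be derived from whether the promised reward is actually affordable.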

Comments

  • I saw this post recently on a model of human emotions.

    http://markpneyer.me/2014/10/19/a-model-of-emotion/

    It leaves many technical details unspecified, but I imagine it could help NPC AI correspond well with quest and narrative elements. For example, stealing something makes a character mad, or another one sad, but those don't have to be hard-coded reactions. Instead, a low-level system could know which events are bad for characters with some forward-projection of likely futures, and there could be a general map from 'current-status + future-status-distribution' to emotional states.
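A minimal sketch of that 'current-status + future-status-distribution' → emotion map (the utility scale, thresholds, and emotion names are my own assumptions, not from the linked model):

```python
def emotion(current_utility, expected_future_utility):
    """Map (current status, projected future status) to a coarse emotional state.

    Utilities are signed numbers: negative means things are bad for the character.
    """
    delta = expected_future_utility - current_utility
    if current_utility < 0 and delta < 0:
        return "despair"   # things are bad and the projection says they get worse
    if current_utility < 0:
        return "hope"      # bad now, but the outlook improves
    if delta < 0:
        return "fear"      # fine now, but a bad future looms
    return "content"

# Theft lowers the victim's projected utility, so no hard-coded
# "theft makes X sad" rule is needed: the emotion falls out of the projection.
```

The point is that only the event → utility effects need authoring; the emotional reactions are all derived from the one mapping.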
  • I'll post from two other threads I dropped this in (RPG Codex and Limit Theory), just as a small intro to how AI works in VQ:

    Dialogue is very crude: not English sentences like you might expect, but English tokens (words) combined to form "facts" (which can in fact be "lies"). It's up to the NPC to figure out, using standard logical deduction techniques, what is true, what is false, and what cannot be determined; in the last case it comes down to the trustworthiness of the NPC in question and how trusting the party receiving the info is. This is not new science - it is based heavily on something called logic programming, the flagship language being Prolog. These techniques have been used for all sorts of AI, just never successfully in games to my knowledge. I've never been much of an academic, but I studied under one of the people who built the first autonomous vehicles for DARPA, and during that time my vision of AI changed greatly. I suspect many programmers never even get the chance to delve into types of AI beyond the most common things like pathfinding. Anyhow (sorry for the tangent) - you can construct sentences with these tokens, even with autosuggest for which words the system will accept after the ones you've input, so there is no ambiguity in the grammar or syntax.

    So, you present facts into the system - let's see how this works (I'm going to write pseudocode here, but anyone should be able to understand it from the context of the English words). Let's make some facts (pseudocode followed by my comments in parentheses):

    apple:red (this means that "an apple is red" evaluates to true - it is a fact)
    apple:fruit (an apple is fruit... the amazing thing here is that the computer does not need to know what these words actually mean - you are just forming relationships between arbitrary strings of letters)
    banana:fruit
    banana:yellow
    fruit:food (fruit is food)
    is(food)? (here we are running a query to the system - what is food? and it would return the following list of results:)
    [apple, banana]
    is(red,food)? (what is red and is food?)
    apple
    Now, I never specified directly that an apple is food. The system applies a process called "backward chaining" (http://en.wikipedia.org/wiki/Backward_chaining) to determine that an apple is food - this rests on a standard rule of logic called modus ponens: apple -> fruit -> food.
    This simple rule is actually very powerful and is the foundation of logical inference.
    These are simple rules and operators, but more complex ones can be used or defined - from another example I used on reddit:
    protects(shepherds,sheep) (a shepherd protects their sheep)
    kills(dragons,sheep) (a dragon kills (eats) sheep)
    killsOnPaymentOf(dragons,hero, 20 gold) (a hero kills a dragon for 20 gold - these rules are simplified, but they give you the idea. The functions that define these rules can be defined either explicitly within the AI system or recursively, using the grammar of existing words, functions, and phrases)
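The backward-chaining queries above can be sketched in a few lines of Python (a toy re-implementation for illustration, not the actual VQ/Prolog engine):

```python
# Fact base: each pair (A, B) encodes the pseudocode fact A:B.
FACTS = {("apple", "red"), ("apple", "fruit"), ("banana", "fruit"),
         ("banana", "yellow"), ("fruit", "food")}

def holds(subject, category):
    """True if subject:category is a fact, directly or via a chain of facts."""
    if (subject, category) in FACTS:
        return True
    # Backward chaining (modus ponens): subject:mid and mid:category
    # imply subject:category. No cycle detection - fine for acyclic facts.
    return any(holds(mid, category) for s, mid in FACTS if s == subject)

def is_(category):
    """The is(food)? style query: every known subject that chains into category."""
    return sorted({s for s, _ in FACTS if holds(s, category)})

print(is_("food"))                                  # ['apple', 'banana', 'fruit']
print([s for s in is_("food") if holds(s, "red")])  # ['apple']
```

Note that unlike the listing above, this also reports "fruit" itself as food (since fruit:food is a fact) - which is what a real Prolog query would do too.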

    Each turn, the AI runs a score-maximization algorithm. Every NPC, monster, whatever tries to maximize its score by fulfilling as many goals as it can (the highest-scoring goal is almost always to stay alive, but not always - sometimes they might sacrifice their life, e.g. to protect their children).
    So, here the shepherd predicts, from facts in the system, that a dragon will kill his sheep, which would lower his score since his goal is to protect the sheep. The shepherd then explores available actions to change the predicted course of events - in this case a dead dragon can't eat sheep, so he hires a hero to kill the dragon. All of these facts are evaluated against proximity, availability (is a hero around?), etc. It sounds complex, and even CPU-intensive, but it runs fast for a few iterations of prediction. As with chess, you can adjust how deep the system predicts in order to speed up computation, at the cost of slightly less effective AI. In this case, even one level of prediction makes all the difference. Of course, if you don't specify all the rules correctly, hilarity ensues. There will be a lot of WTF moments early on, I assure you. :)
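The shepherd's reasoning can be sketched as one-step lookahead over a tiny world state (the state fields, scores, and costs below are illustrative assumptions, not the actual VQ rules):

```python
def predict(world):
    """Apply known rules one step forward: kills(dragons, sheep)."""
    w = dict(world)
    if w["dragon_alive"] and w["sheep_alive"]:
        w["sheep_alive"] = False
    return w

def score(world):
    # The shepherd's goals: stay alive (weighted higher) and protect the sheep.
    return (10 if world["shepherd_alive"] else 0) + (5 if world["sheep_alive"] else 0)

def hire_hero(world):
    """killsOnPaymentOf(dragons, hero, 20 gold): pay the hero, the dragon dies."""
    w = dict(world)
    if w["hero_nearby"] and w["gold"] >= 20:
        w["gold"] -= 20
        w["dragon_alive"] = False
    return w

world = {"shepherd_alive": True, "sheep_alive": True, "dragon_alive": True,
         "hero_nearby": True, "gold": 30}

do_nothing = score(predict(world))       # predicted: sheep dies
act = score(predict(hire_hero(world)))   # predicted: dragon dead, sheep lives
best = "hire_hero" if act > do_nothing else "do_nothing"
```

Even this single prediction step is enough to make the shepherd act before the sheep are eaten; deeper search just extends `predict` over more turns.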

    So, when you construct dialogue, you really just present new "facts" into the system, or query existing facts: (you,job)? (what is your job?). NPCs can decide whether it works in their favor to lie about something, if they have the type of personality that might lie. Similarly, you can lie to NPCs. Want to impress a love interest? (me,wealthy). :)
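The lie-or-tell-the-truth decision might look something like this sketch (the honesty trait, the gain parameter, and the decision rule are all hypothetical, not from VQ):

```python
import random

def answer(npc, truth, lie, gain_from_lie):
    """Return the NPC's stated fact for a query like (you, job)?

    A fully honest NPC always tells the truth; others lie when it pays off,
    with a chance that scales with how dishonest they are.
    """
    if gain_from_lie > 0 and random.random() > npc["honesty"]:
        return lie
    return truth

garrick = {"name": "Garrick", "honesty": 1.0}
said = answer(garrick, ("me", "blacksmith"), ("me", "wealthy"), gain_from_lie=5)
```

The listener then runs the same deduction machinery over `said` as over any other fact, which is what lets contradictions expose the lie later.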

  • Do you know some good resources (books or otherwise) on this? I intend to google around later but was wondering about your take on the best way to introduce yourself to it. This is the kind of topic that makes me excited to read about and try. My own experience with AI coding is terribly ignorant and amateurish, yet it's probably one of the areas I WANT to be better at the most.

    I'll google: "prolog" later and see what I can deduce from there to try and keep up.
  • You could probably include an emotion system with the logical structure. Events can be labeled as good or bad for an entity, and based on how many good or bad events they experience (maybe with a weighting), they get an emotion flag set. This could interact with other decisions: how easily they can be convinced to do things, how likely they are to take protective measures, etc. A paranoid emotion might be triggered by many personally bad things happening, which makes them act, well, paranoid - keeping an eye on people, locking their doors, hiring guards, etc.
    It would be great to come across a town where everyone is terrified - staying inside, avoiding strangers, locking their doors - because of some external event; say, the town is haunted, so ghosts wander around. You put a stop to it, everyone's emotions return to normal, and they cheer up.
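A weighted-event emotion flag like that could be sketched as follows (the event names, weights, and threshold are invented for illustration):

```python
PARANOIA_THRESHOLD = 2.0

def paranoia_level(events):
    """Weighted sum of the bad things that have happened to this NPC lately."""
    weights = {"robbed": 1.5, "threatened": 1.0, "ghost_sighting": 2.0}
    return sum(weights.get(e, 0.0) for e in events)

def behaviors(events):
    """The emotion flag gates which behaviors the NPC picks from."""
    if paranoia_level(events) >= PARANOIA_THRESHOLD:
        return ["stay_inside", "lock_doors", "avoid_strangers"]
    return ["normal_routine"]

haunted = behaviors(["ghost_sighting", "threatened"])  # the terrified town
cleared = behaviors([])                                # after you banish the ghosts
```

Clearing the haunting empties the recent-event list, so the whole town's behavior reverts without any scripted "quest complete" state change.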
  • edited October 2014
    Sarophym said:

    I'll google: "prolog" later and see what I can deduce from there to try and keep up.

    Yeah, I would google Prolog, but you may be turned off by many of the tutorials - it is hard to find good ones. Still, if you can really get a handle on how Prolog works, it will make you a 10x better AI programmer. We typically program in imperative languages, where we specify the answer; Prolog is really just about phrasing the "question" correctly (or defining the facts that make the question answerable).
  • Mystify said:

    You could probably include an emotion system with the logical structure. Events can be labeled as good or bad for an entity, and based on how many good or bad events they experience (maybe with a weighting), they get an emotion flag set. This could interact with other decisions: how easily they can be convinced to do things, how likely they are to take protective measures, etc. A paranoid emotion might be triggered by many personally bad things happening, which makes them act, well, paranoid - keeping an eye on people, locking their doors, hiring guards, etc.
    It would be great to come across a town where everyone is terrified - staying inside, avoiding strangers, locking their doors - because of some external event; say, the town is haunted, so ghosts wander around. You put a stop to it, everyone's emotions return to normal, and they cheer up.

    Yep - there is no explicit concept of "good" or "bad", but rather weighted characteristics in the range 0.0 to 1.0. You can add, multiply, and average these to make various calculations. So, for example, rather than asking whether this pirate is "good" or "bad", you ask how trustworthy he is (on a scale of 0.0 to 1.0), how loyal he is, how aggressive he is, etc. You can define rules that make a judgement from these qualities recursively. Someone with loyalty X is likely to kill you with Y percent chance - a crude example, but you get the idea.
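One way to sketch that kind of trait-based judgement (the trait weights and the particular combination rule here are assumptions for illustration):

```python
def betrayal_chance(trustworthiness, loyalty, aggression):
    """Chance (0.0-1.0) this pirate turns on you, from weighted 0.0-1.0 traits."""
    # Weighted average: low trust and loyalty raise the risk, aggression adds to it.
    risk = 0.5 * (1.0 - trustworthiness) + 0.3 * (1.0 - loyalty) + 0.2 * aggression
    return max(0.0, min(1.0, risk))

shady_pirate = betrayal_chance(trustworthiness=0.2, loyalty=0.3, aggression=0.8)
old_friend   = betrayal_chance(trustworthiness=0.9, loyalty=0.9, aggression=0.2)
```

Because the output is itself a 0.0-1.0 value, it can feed into further rules the same way the raw traits do - that's the recursive part.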