
There are many papers about ranged-combat artificial intelligence, like Killzone's (see this paper) or Halo's. But I've not been able to find much about a fighting AI except for this work, which uses neural networks to learn how to fight, which is not exactly what I'm looking for.

Western AI in games seems heavily focused on FPS! Does anyone know which techniques are used to implement a decent fighting AI? Hierarchical finite state machines? Decision trees? Those could end up being pretty predictable.
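To make the question concrete, here is a minimal finite-state-machine sketch of the kind of fighting AI I mean. The states, distance thresholds, and transition rules are all made up for illustration; the small random roll is one cheap way to blunt the predictability problem:

```python
import random

class FighterFSM:
    """Toy fighting-AI state machine (hypothetical states and thresholds)."""

    def __init__(self):
        self.state = "approach"

    def update(self, distance, opponent_attacking):
        # Transition rules: purely illustrative.
        if self.state == "approach":
            if distance < 1.5:
                self.state = "attack"
        elif self.state == "attack":
            if opponent_attacking:
                self.state = "defend"
            elif distance >= 1.5:
                self.state = "approach"
        elif self.state == "defend":
            if not opponent_attacking:
                # A random counter/retreat split keeps it less predictable.
                self.state = "attack" if random.random() < 0.7 else "approach"
        return self.state

fsm = FighterFSM()
print(fsm.update(distance=3.0, opponent_attacking=False))  # approach
print(fsm.update(distance=1.0, opponent_attacking=False))  # attack
print(fsm.update(distance=1.0, opponent_attacking=True))   # defend
```

Even with the random roll, the structure is transparent once you've fought it a few times — hence the question about better techniques.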



  1. In our research lab, we are using AI planning technology for games. AI planning is used by NASA to build semi-autonomous robots. Planning can produce less predictable behavior than state machines, but it is computationally expensive: solving general planning problems is intractable in the worst case.

    AI planning is an old but interesting field. In games in particular, people have only recently started using planners to drive their engines. Expressiveness is still limited in current implementations, but in theory it is limited "only by our imagination".

    Russell and Norvig devote four chapters to AI planning in their book on artificial intelligence. Other related terms you might be interested in are Markov decision processes and Bayesian networks; those topics also get solid coverage in the same book.

    If you are looking for a ready-made engine that's easy to start with, I suspect AI planning would be gross overkill. I don't know of any AI planning engine for games, but we are developing one. If you are interested in the long term, we can talk separately about it.
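    To show what "planning" means here, below is a tiny STRIPS-style forward planner sketch. The action set (`close_distance`, `feint`, `heavy_attack`), the proposition names, and the breadth-first search are all hypothetical toy choices, not any real engine's design:

    ```python
    # Each action: (name, preconditions, add effects, delete effects),
    # all expressed as sets of proposition strings.
    ACTIONS = [
        ("close_distance", {"far"},             {"near"},    {"far"}),
        ("feint",          {"near"},            {"opening"}, set()),
        ("heavy_attack",   {"near", "opening"}, {"hit"},     {"opening"}),
    ]

    def plan(state, goal):
        # Breadth-first search over world states; fine for toy problems,
        # but this is exactly where planning's complexity bites at scale.
        frontier = [(frozenset(state), [])]
        seen = {frozenset(state)}
        while frontier:
            current, steps = frontier.pop(0)
            if goal <= current:
                return steps
            for name, pre, add, delete in ACTIONS:
                if pre <= current:
                    nxt = frozenset((current - delete) | add)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, steps + [name]))
        return None

    print(plan({"far"}, {"hit"}))
    # → ['close_distance', 'feint', 'heavy_attack']
    ```

    The appeal for a fighting game is that you state the goal ("land a hit") and the planner chains the actions itself, so behavior adapts when the world state changes instead of following a fixed script.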

  2. You seem to already know the techniques for planning and executing. Another thing you need to do is predict the opponent's next move and maximize the expected reward of your response. I wrote a blog article about this. The game I consider is very simple, but I think the main ideas from Bayesian decision theory might be useful for your project.
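    A minimal sketch of that idea: estimate a distribution over the opponent's next move from observed frequencies, then pick the response with the highest expected reward. The move names and the payoff table are hypothetical; a real model would want a prior or smoothing rather than raw frequencies:

    ```python
    from collections import Counter

    history = ["punch", "punch", "kick", "punch", "throw", "punch"]

    # PAYOFF[my_move][their_move]: reward for my response to their move.
    PAYOFF = {
        "block":   {"punch": 1, "kick": 1,  "throw": -2},
        "jump":    {"punch": 0, "kick": 2,  "throw": 2},
        "counter": {"punch": 2, "kick": -1, "throw": -1},
    }

    def best_response(history):
        counts = Counter(history)
        total = sum(counts.values())
        # Simple frequency estimate of the opponent's next move.
        probs = {move: c / total for move, c in counts.items()}

        def expected_reward(mine):
            return sum(p * PAYOFF[mine].get(theirs, 0)
                       for theirs, p in probs.items())

        return max(PAYOFF, key=expected_reward)

    print(best_response(history))  # → counter (punch is most likely, counter beats it)
    ```

    The same loop extends naturally: condition the prediction on context (distance, recent moves) instead of the whole history, and you get a contextual opponent model.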

  3. Another route to consider is the so-called Ghost AI, as described here & here. As the name suggests, you basically extract rules from actual game play; the first paper does it offline, and the second extends the methodology to online, real-time learning.

    Also check out the author's webpage; there are a number of other interesting papers on fighting games there.
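    The core of the offline version can be sketched in a few lines: record (situation, action) pairs from human play, then replay the most frequent action whenever a matching situation recurs. The situation encoding (distance bucket plus opponent state) and the log format here are hypothetical simplifications:

    ```python
    from collections import defaultdict, Counter

    def situation(distance, opponent_state):
        # Coarse situation key; real systems use richer features.
        bucket = "near" if distance < 2.0 else "far"
        return (bucket, opponent_state)

    # Recorded human play: (distance, opponent_state, action_taken).
    replay_log = [
        (1.0, "idle",      "punch"),
        (1.2, "idle",      "punch"),
        (1.1, "idle",      "throw"),
        (4.0, "idle",      "walk_forward"),
        (1.3, "attacking", "block"),
    ]

    # Offline rule extraction: tally actions per situation.
    rules = defaultdict(Counter)
    for dist, opp, action in replay_log:
        rules[situation(dist, opp)][action] += 1

    def ghost_act(distance, opponent_state, default="walk_forward"):
        counts = rules.get(situation(distance, opponent_state))
        return counts.most_common(1)[0][0] if counts else default

    print(ghost_act(1.1, "idle"))       # → punch (most frequent in that situation)
    print(ghost_act(5.0, "attacking"))  # → walk_forward (unseen situation, default)
    ```

    The online variant from the second paper amounts to updating `rules` during the match instead of from a pre-recorded log.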

  4. I have reverse-engineered the routines related to the AI subsystem in the Street Fighter II series of games. It does not incorporate any of the techniques mentioned above: it is entirely reactive and involves no planning, learning, or goals. Interestingly, there is no "technique weight" system of the kind you mention, either; they don't use global weights to decide, for example, the frequency of attacking versus blocking. When taking apart the routines behind how "difficulty" is made to seem to increase, I did expect to find something like that. Instead, it comes down to a number of smaller decisions that can affect those ratios in an emergent way.
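    For contrast with the techniques above, here is what a purely reactive scheme in that spirit looks like: condition-to-action rules evaluated every frame, no state carried over. The specific conditions, actions, and probabilities are hypothetical, not the actual SF2 logic:

    ```python
    import random

    def reactive_decision(distance, opponent_move, rng=random.random):
        """One frame of reactive decision-making (illustrative rules only)."""
        if opponent_move == "jumping" and distance < 2.0:
            return "anti_air"
        if opponent_move == "attacking" and distance < 1.5:
            # Independent per-situation rolls like this can make the overall
            # block/attack ratio emerge rather than come from a global weight.
            return "block" if rng() < 0.6 else "counter"
        if distance >= 3.0:
            return "fireball"
        return "walk_forward"

    print(reactive_decision(1.0, "jumping"))  # → anti_air
    print(reactive_decision(4.0, "idle"))     # → fireball
    ```

    Tuning "difficulty" in such a system means nudging many of these small per-rule probabilities and thresholds, which matches the emergent behavior described above.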
