
I'm trying to program an AI for a Pac-Man-like game, where the AI would be the Pac-Man and move according to two simple rules:

  • move towards bonuses
  • avoid monsters and being killed

I read that one can create a "fake AI" with Dijkstra maps (a.k.a. heat maps) such as the ones described in the very good article Flow Field Pathfinding for Tower Defense.

Dijkstra map example

So, on each turn of my game, I build these heat maps according to the aforementioned rules:

  • Flee map (a sketch of these steps follows this list)
  1. Add each monster as a source and compute the Dijkstra map
  2. Multiply by a negative factor (-1.2) to make the sources repulsive instead of attractive
  3. Rescan the cells and compute a new Dijkstra map (so the player avoids being cornered)
  • Bonus map
  1. Add each bonus pill as a source and compute the Dijkstra map
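
For reference, here is a minimal sketch of how these maps can be computed on a uniform-cost grid. The grid representation (a 2D list of booleans), the function names, and the helper structure are my own assumptions; only the -1.2 factor and the rescan step come from the list above.

    from collections import deque

    INF = float("inf")

    def dijkstra_map(passable, sources):
        """Uniform-cost Dijkstra map: BFS distance to the nearest source.
        `passable` is a 2D list of booleans (True = walkable), `sources`
        a list of (row, col) cells; both names are illustrative."""
        rows, cols = len(passable), len(passable[0])
        dist = [[INF] * cols for _ in range(rows)]
        queue = deque()
        for r, c in sources:
            dist[r][c] = 0
            queue.append((r, c))
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and passable[nr][nc] and dist[nr][nc] == INF):
                    dist[nr][nc] = dist[r][c] + 1
                    queue.append((nr, nc))
        return dist

    def flee_map(passable, monsters, factor=-1.2):
        """Steps 1-3 of the flee map: distance to monsters, negated, rescanned."""
        dist = dijkstra_map(passable, monsters)
        # Step 2: flip the sign so that monsters repel instead of attract.
        value = [[d * factor if d != INF else INF for d in row] for row in dist]
        rows, cols = len(passable), len(passable[0])
        # Step 3 ("rescan"): lower each cell to (cheapest neighbour + 1) until
        # stable, so the gradient follows the maze instead of pointing straight
        # away from the monsters and into dead ends.
        changed = True
        while changed:
            changed = False
            for r in range(rows):
                for c in range(cols):
                    if not passable[r][c]:
                        continue
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols and passable[nr][nc]
                                and value[nr][nc] + 1 < value[r][c]):
                            value[r][c] = value[nr][nc] + 1
                            changed = True
        return value

    # bonus_map = dijkstra_map(passable, pill_positions)   # the bonus map is just step 1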

The two maps seem fairly correct on their own; however, I can't manage to combine them, because doing so creates cases where the player (Pac-Man) either gets stuck or doesn't go in the right direction. For example, I tried the following, but the results are not very good:

movement_map = 0.9 * flee_map + 0.1 * bonus_map

I also tried a single map with positive and negative weights for monsters and pills, but this does not seem to work either.

So my questions are:

  • Are Dijkstra maps appropriate when trying to respect opposing rules (go to bonus pills AND avoid monsters)?
  • If not, which algorithm should I use instead? If they are, how can I combine these Dijkstra maps, or at least make them work together?
  • Build a single map: start with the pill-hunting map, then overwrite it with the appropriate flee field wherever fleeing should occur. Anything far from a monster will hunt pills; anything close will run away. (Commented Oct 14, 2021 at 10:46)
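
In code, that suggestion could look roughly like this; the danger radius, the names, and the assumption that the three maps share the same grid are all illustrative, not from the comment:

    DANGER_RADIUS = 4   # illustrative: how close a monster must be to trigger fleeing

    def combined_map(bonus_map, flee_map, monster_dist):
        """Pill-hunting map everywhere, overwritten by the flee field near monsters."""
        rows, cols = len(bonus_map), len(bonus_map[0])
        combined = [row[:] for row in bonus_map]            # default: hunt pills
        for r in range(rows):
            for c in range(cols):
                if monster_dist[r][c] <= DANGER_RADIUS:     # close to a monster:
                    combined[r][c] = flee_map[r][c]         # use the flee field
        return combined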

3 Answers


How about combining your Dijkstra map logic with a fuzzy-logic state machine for your AI-driven character, one that helps the AI prioritize which goal it is pursuing?

The idea is that once the AI is, say, fleeing, it continues to "flee" (using the Dijkstra map for flight) until it has sufficiently "fled" (this is where the fuzzy logic comes in). The same goes for pursuing pills: it pursues until it gobbles the pill, the pill has "eluded" it, or the monster threat has reached a threshold (fuzzy logic again).
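
A minimal sketch of that idea, assuming the distance to the nearest monster is available each turn (the state names, thresholds, and threat function below are illustrative, not prescribed):

    SEEK, FLEE = "seek", "flee"

    def threat(monster_dist, max_dist=10.0):
        """Fuzzy membership of "in danger": 1 next to a monster, 0 far away."""
        return max(0.0, min(1.0, 1.0 - monster_dist / max_dist))

    def next_state(state, monster_dist):
        t = threat(monster_dist)
        if state == SEEK and t > 0.7:    # danger got high enough: start fleeing
            return FLEE
        if state == FLEE and t < 0.3:    # only stop once we have sufficiently "fled"
            return SEEK
        return state                     # hysteresis: otherwise keep the current goal

    def choose_map(state, bonus_map, flee_map):
        # The state machine only decides which Dijkstra map to descend this turn.
        return flee_map if state == FLEE else bonus_map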

state machine diagram


Use Collaborative Diffusion. It is also a hill-climbing approach, though a little more modernised.

  • It has already been successfully implemented for Pac-Man.
  • It combines attraction & avoidance maps into one (for each AI side - in your case, the ghosts).
  • It supports multiple entities avoiding each other by each creating its own repulsion / avoidance field within the global map.
  • There is no need for head-scratching heuristics as in A* and its derivative algorithms, and it works well for small games with a limited number of agents / entities.

The original paper is here, and is fairly easy to follow.
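
If it helps, a compact sketch of the technique might look like the following (the grid representation, constants, and names are my assumptions): goals emit a "scent" that diffuses through walkable cells, walls and avoided agents absorb it, and each agent simply hill-climbs the scent.

    DIFFUSION = 0.25    # fraction of the neighbours' average scent a cell picks up
    ITERATIONS = 50     # relaxation passes per game turn (tune to the map size)

    def diffuse(passable, goals, blockers):
        """goals: {(r, c): strength}; blockers: set of cells that absorb scent."""
        rows, cols = len(passable), len(passable[0])
        scent = [[0.0] * cols for _ in range(rows)]
        for _ in range(ITERATIONS):
            nxt = [[0.0] * cols for _ in range(rows)]
            for r in range(rows):
                for c in range(cols):
                    if not passable[r][c] or (r, c) in blockers:
                        continue                      # walls / avoided agents stay at 0
                    if (r, c) in goals:
                        nxt[r][c] = goals[(r, c)]     # goals keep emitting scent
                        continue
                    neighbours = [scent[r + dr][c + dc]
                                  for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                                  if 0 <= r + dr < rows and 0 <= c + dc < cols]
                    nxt[r][c] = DIFFUSION * sum(neighbours) / max(len(neighbours), 1)
            scent = nxt
        return scent

    def step(scent, passable, pos):
        """Hill-climb: move to the walkable neighbour with the highest scent."""
        r, c = pos
        rows, cols = len(scent), len(scent[0])
        options = [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if 0 <= r + dr < rows and 0 <= c + dc < cols and passable[r + dr][c + dc]]
        return max(options, key=lambda p: scent[p[0]][p[1]], default=pos)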


This might be a matter of how you design your influence maps.

A nearby enemy is a very high priority, but an enemy further away isn't. So the influence map of enemies should start with a very high value, but then fall off rather quickly.

A bonus pill is an objective which is worth working towards even if it is further away, but not worth dying for. So it should start with a low value, but fall off very slowly.

Another thing you can do is modify the value of a power-pill depending on the proximity of threats. A power-pill with an enemy nearby might be too dangerous to pick up, while an unguarded one is a great target of opportunity. So you could scale the influence of a power-pill with the distance from it to its nearest enemy: the farther away the closest enemy is, the more attractive the pill becomes.
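
To make the difference concrete, here is one possible shape for those curves; every constant below is an illustrative tuning knob, and the per-cell distances are assumed to come from Dijkstra maps like the ones in the question:

    import math

    def enemy_influence(dist_to_enemy):
        """Very strong close up, but falls off quickly (exponential decay)."""
        return -100.0 * math.exp(-dist_to_enemy / 2.0)   # negative: repulsive

    def pill_influence(dist_to_pill):
        """Modest value that decays slowly, so distant pills still attract."""
        return 10.0 / (1.0 + 0.1 * dist_to_pill)

    def power_pill_influence(dist_to_pill, enemy_dist_at_pill):
        """Scale a power pill down when an enemy is camping right next to it."""
        guarded = 1.0 / (1.0 + enemy_dist_at_pill)       # ~1 if guarded, ~0 if free
        return 3.0 * pill_influence(dist_to_pill) * (1.0 - guarded)

    def cell_score(dist_to_enemy, dist_to_pill):
        # The player simply steps onto the neighbouring cell with the highest score.
        return enemy_influence(dist_to_enemy) + pill_influence(dist_to_pill)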

  • A power pill is arguably worth more if there are enemies nearby (so long as you are nearer to the pill than your enemies are). (Commented Oct 14, 2021 at 10:51)
