Let's start with a simplified case and work up from there: Just one AI agent exploring the graph.

There is no known algorithm that calculates the perfect route to explore a whole graph in acceptable time (the problem is closely related to the traveling salesman problem). So we shouldn't even try to find a perfect solution and instead aim for a simple solution which provides "good enough" results.

One such algorithm is the "greedy" algorithm: Always walk to the closest unexplored node.
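As a minimal sketch of that greedy step (the `distance` helper is a placeholder for whatever path-distance function your graph provides, e.g. a BFS or A* path length):

```python
def pick_next_node(agent_pos, unexplored, distance):
    # Greedy step: always walk to the closest unexplored node.
    return min(unexplored, key=lambda node: distance(agent_pos, node))
```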

This algorithm is very easy to implement and will cover a good amount of area in the short term. But it has an unfortunate side effect: it might lead the AI agent on a journey far away from the starting location before it has fully explored its vicinity. You would usually expect an agent to search close to its starting location first and only expand further out when it doesn't find anything nearby.

A good way to counteract that is to integrate the starting location into the rating function for picking the next node. When you rate the nodes by priority, don't just take the node's distance to the agent's current location into account, but also its distance to the starting location. How you weight these two criteria in relation to each other is more of an art than a science. Just experiment until you get a result you like.
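For instance, a sketch of such a two-criteria rating (the weights are arbitrary starting points to tune):

```python
def rate_node(node, agent_pos, origin, distance, a=1.0, b=0.5):
    # Lower rating is better: balance closeness to the agent
    # against closeness to the starting location.
    return distance(agent_pos, node) * a + distance(origin, node) * b
```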

Now how do we extend this to multiple cooperating agents? When they all run the same algorithm, they will all walk to the same tile. They will likely stick close together and not make any use of their superior numbers.

In order to get them to divide and spread out, add a third criterion to your rating function: the proximity between the node and the nearest other agent. But assign a negative weight to this criterion. That means agents will avoid tiles which are close to other agents and will instead focus on those areas of the graph no other agent is currently attending to.

So the final node rating formula is:

node_rating = distance_agent * a + distance_origin * b - distance_closest_other_agent * c

Adjust the values of the constants a, b and c to taste. Lowest rating wins.
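Extending the earlier sketch with the third criterion (again, the weights and the `distance` helper are placeholders; with no other agents around, the last term simply drops out):

```python
def rate_node(node, agent_pos, origin, other_agent_positions, distance,
              a=1.0, b=0.5, c=0.8):
    distance_agent = distance(agent_pos, node)
    distance_origin = distance(origin, node)
    # Distance from the node to the nearest other agent; contributes
    # nothing when the agent is alone.
    distance_closest_other = (min(distance(pos, node) for pos in other_agent_positions)
                              if other_agent_positions else 0.0)
    return distance_agent * a + distance_origin * b - distance_closest_other * c

def pick_next_node(unexplored, agent_pos, origin, others, distance):
    # Lowest rating wins.
    return min(unexplored, key=lambda n: rate_node(n, agent_pos, origin, others, distance))
```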

Now this algorithm might seem a bit too perfect. You said you wanted an "element of randomness and loss". To make the AI agents more fallible, you could screw with their lists of visited nodes. While the AI agents wander the forest, randomly mark previously visited nodes as unvisited, so the agents explore them again. Give each AI agent a separate list of forgotten nodes, so they won't all simultaneously run towards the same node. That way you will also observe the behaviour that they occasionally don't trust each other and re-check a node another agent checked previously.

How often to do this is another thing you might have to experiment with. Things I would experiment with are how frequently to forget nodes in each phase of the search, and whether to prefer forgetting nodes closer to or further away from the agent.
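A minimal sketch of that forgetting step, assuming each agent keeps its own personal `visited` set (the forget chance is a placeholder to experiment with):

```python
import random

def maybe_forget(visited, forget_chance=0.01):
    # Randomly move some of this agent's visited nodes back to
    # "unvisited"; since every agent forgets independently, they
    # won't all rush back to the same node.
    forgotten = {node for node in visited if random.random() < forget_chance}
    visited -= forgotten
    return forgotten  # these now count as unexplored again for this agent
```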
