Tune Task 6: Better safe than sorry
orzechow committed Nov 19, 2024
1 parent 9ad53e0 commit 625accd
Showing 1 changed file with 15 additions and 19 deletions: docs/tasks/6_verification.md
Execute only safe commands and add a fallback strategy.
## Context

The arbitration graph is now complete and PacMan is eating dots like a pro.
But there is one last topic we want to talk about: **safety and robustness**.

Depending on your application, you might only want to execute commands that you know meet certain criteria.
These requirements could be anything from physical constraints to safety requirements.
In our case, we only want to execute commands where PacMan does not run into walls.

We can ensure that commands obey these requirements by adding a verifier to the arbitrators.
The arbitrator will then run the **verification step** and only choose commands that pass this step.
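
To make this a bit more tangible: the verifier is just a small class that knows the environment model and can judge a single command via an `analyze()` method. The skeleton below sketches that shape; the `isOk()` accessor, the `ConstPtr` alias and the constructor are illustrative assumptions, only the `analyze()` signature matches the snippet further down.

```cpp
// Sketch of the verifier's shape; member types and isOk() are illustrative assumptions.
struct VerificationResult {
    bool isOk() const { return isOk_; }
    bool isOk_{false};
};

class Verifier {
public:
    explicit Verifier(EnvironmentModel::ConstPtr environmentModel)
            : environmentModel_{std::move(environmentModel)} {}

    // Called by the arbitrators for every candidate command;
    // only commands whose result is ok may be selected.
    VerificationResult analyze(const Time /*time*/, const Command& command) const;

private:
    EnvironmentModel::ConstPtr environmentModel_;
};
```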

This leads us to another issue.
What should we do if the command we want to execute does not pass the verification step?

Glad you asked!
The first thing that happens out of the box is that the arbitrator will simply choose the next best option that passes verification.
E.g., if `EatClosestDot` is not safe, the `EatDot` arbitrator will just return the `ChangeDotCluster` command to the root arbitrator,
provided that `ChangeDotCluster` is both applicable and itself passes verification.
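
To illustrate what "choose the next best option" means, here is a rough sketch of how a priority-based arbitrator could pick its command once a verifier is attached. This is not the library's actual implementation; `options_`, `verifier_` and the result's `isOk()` are placeholders for illustration.

```cpp
// Simplified illustration of priority-based selection with verification (requires <optional>).
// Not the library's actual code; names are placeholders.
std::optional<Command> getBestVerifiedCommand(const Time& time) {
    for (const auto& option : options_) {                 // options_ is sorted by priority
        if (!option.behavior->checkInvocationCondition(time)) {
            continue;                                      // skip options that are not applicable
        }
        Command command = option.behavior->getCommand(time);
        if (verifier_.analyze(time, command).isOk()) {
            return command;                                // first applicable option that passes verification wins
        }
        // otherwise fall through to the next-best option
    }
    return std::nullopt;                                   // nothing passed verification
}
```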

If that's not the case though, we can think about adding additional behavior components as fallback layers to enable **graceful degradation** of the system.
The first one is already there: `MoveRandomly` is something we probably don't really want to do under normal circumstances.
But if we run out of ideas, it is still a valid option.
It might also give our main behavior components a chance to recover or to solve deadlock situations.

Finally, it is a good idea to add a **last resort** fallback layer.
This behavior component should be a simple implementation that is always applicable and does not require a lot of context knowledge.
If the system is in a failing state, that context knowledge might not be available.
We can mark a behavior component as a last resort fallback layer in order to exclude it from verification.
After all, it is our last straw, and executing it is better than doing nothing.
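
In code, marking a component as a last resort typically happens when the option is added to the arbitrator. The sketch below shows how this could look; the `FALLBACK` flag name and the constructor arguments are assumptions, so check the library documentation for the exact spelling.

```cpp
// Sketch: add StayInPlace as a last resort option that is exempt from verification.
// The FALLBACK flag name and the constructor argument are assumptions.
const StayInPlace::Ptr stayInPlaceBehavior = std::make_shared<StayInPlace>(environmentModel);

rootArbitrator->addOption(stayInPlaceBehavior, PriorityArbitrator::Option::Flags::FALLBACK);
```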

In our case, we will add a `StayInPlace` behavior component.
PacMan is not actually able to stop, so he will just keep moving back and forth.
Probably not an ideal strategy to win the game, but we can be sure to have a comprehensible command at all times.
Also, PacMan will never run into a wall with this behavior component.
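
One possible way to implement such a behavior is to simply reverse the previous direction on every call, so PacMan shuffles between two neighboring cells. The sketch below illustrates the idea; it is not the demo's actual implementation, and the `Direction`/`Command` details are assumptions.

```cpp
// Sketch of a StayInPlace-style behavior (standalone for illustration; in the demo it would
// derive from the library's behavior base class). Direction and Command details are assumptions.
class StayInPlaceSketch {
public:
    // A last resort must always be applicable.
    bool checkInvocationCondition() const { return true; }

    // Reverse the previous direction so PacMan shuffles back and forth between two cells.
    Command getCommand() {
        lastDirection_ = opposite(lastDirection_);
        return Command{lastDirection_};
    }

private:
    static Direction opposite(Direction direction) {
        switch (direction) {
        case Direction::LEFT: return Direction::RIGHT;
        case Direction::RIGHT: return Direction::LEFT;
        case Direction::UP: return Direction::DOWN;
        default: return Direction::UP;
        }
    }
    Direction lastDirection_{Direction::LEFT};
};
```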

Phew, that was a long read. Time to get our hands dirty!
Add the `StayInPlace` behavior component as a last resort fallback layer.
- Add an instance of the `Verifier` to the `PacmanAgent` class and initialize it in the constructor.
- Pass the `Verifier` instance to the constructors of the arbitrators.
(Hint: You'll need to adjust the template parameters of the arbitrators.)
- Add the `StayInPlace` behavior component analogously to the other behavior components.
- Mark the `StayInPlace` behavior component as a last resort fallback layer.
- Try breaking a behavior component on purpose and see how the system reacts.
(Try throwing an exception in the `getCommand()` method of a behavior component or returning a command that will lead to a collision with a wall, as sketched below.)
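
For the last point, the quickest experiment is to make one behavior component fail on purpose and watch the arbitration graph fall back to the next option. The snippet below is one way to do that, assuming the `getCommand()` signature from the earlier tasks; it is meant for experimenting only.

```cpp
// Deliberately broken behavior for testing the fallback chain; revert afterwards.
// Requires <stdexcept>; the getCommand() signature follows the earlier tasks.
Command EatClosestDot::getCommand(const Time& /*time*/) {
    throw std::runtime_error("EatClosestDot is broken on purpose to test the fallback layers");
}
```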

```cpp
VerificationResult analyze(const Time /*time*/, const Command& command) const {
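    // nextMove is the next step encoded in the command (its derivation is omitted in this excerpt)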
    Position nextPosition = environmentModel_->pacmanPosition() + nextMove.deltaPosition;

    // The command is considered safe if the next position is in bounds and not a wall
    return VerificationResult{environmentModel_->isPassableCell(nextPosition)};
}
```
