Merged PR 401: 2628 update benchmark script

## Summary
Updated the benchmark script so that it is compatible with PrimAITE v3.0.0.

Chris also ran the benchmark and included the results.

The script now also works via the CLI (`python ./benchmark/primaite_benchmark.py`).

## Test process
N/A

## Checklist
- [X] PR is linked to a **work item**
- [X] **acceptance criteria** of linked ticket are met
- [X] performed **self-review** of the code
- [ ] written **tests** for any new functionality added with this PR
- [ ] updated the **documentation** if this PR changes or adds functionality
- [ ] written/updated **design docs** if this PR implements new functionality
- [ ] updated the **change log**
- [X] ran **pre-commit** checks for code style
- [ ] attended to any **TO-DOs** left in the code

Related work items: #2628
Author: Czar Echavez
Date: 2024-06-06 18:46:50 +00:00
12 changed files with 7937 additions and 375 deletions


```diff
@@ -1 +1 @@
-3.0.0b9
+3.0.0
```


```diff
@@ -37,6 +37,8 @@ class PrimaiteGymEnv(gymnasium.Env):
         """Name of the RL agent. Since there should only be one RL agent we can just pull the first and only key."""
         self.episode_counter: int = 0
         """Current episode number."""
+        self.average_reward_per_episode: Dict[int, float] = {}
+        """Average rewards of agents per episode."""

     @property
     def agent(self) -> ProxyAgent:
@@ -89,6 +91,8 @@ class PrimaiteGymEnv(gymnasium.Env):
             f"Resetting environment, episode {self.episode_counter}, "
             f"avg. reward: {self.agent.reward_function.total_reward}"
         )
+        self.average_reward_per_episode[self.episode_counter] = self.agent.reward_function.total_reward
+
         if self.io.settings.save_agent_actions:
             all_agent_actions = {name: agent.history for name, agent in self.game.agents.items()}
             self.io.write_agent_log(agent_actions=all_agent_actions, episode=self.episode_counter)
```
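The pattern introduced by this diff (recording each episode's total reward in a dict keyed by episode number, written at reset time) can be sketched in isolation. `RewardTracker`, `end_episode`, and `overall_average` below are hypothetical names for illustration, not part of the PrimAITE API:

```python
from typing import Dict


class RewardTracker:
    """Minimal stand-in for the bookkeeping added to PrimaiteGymEnv:
    store each finished episode's total reward keyed by episode number."""

    def __init__(self) -> None:
        self.episode_counter: int = 0
        self.average_reward_per_episode: Dict[int, float] = {}

    def end_episode(self, total_reward: float) -> None:
        # Record the finished episode's reward, then advance the counter,
        # mirroring what the diff does on environment reset.
        self.average_reward_per_episode[self.episode_counter] = total_reward
        self.episode_counter += 1

    def overall_average(self) -> float:
        # Aggregate across episodes, e.g. for a benchmark summary.
        rewards = self.average_reward_per_episode.values()
        return sum(rewards) / len(rewards) if rewards else 0.0
```

Keying by episode number (rather than appending to a list) keeps the mapping unambiguous even if some episodes are skipped or logged out of order.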