- All agent training demo notebooks now reference UC2.
- Terminal-Processing notebook now includes a few extra markdown cells for extra context. Additionally, YAML snippets have been updated to reflect the 4.0.0 schema.
- Request-and-Response notebook now includes a few more markdown cells for extra context, as well as updated software names.
- General notebook cell clean-up and tidying.
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Train a Multi-Agent System Using RLlib\n",
    "\n",
    "© Crown-owned copyright 2025, Defence Science and Technology Laboratory UK\n",
    "\n",
    "This notebook demonstrates how to use the `PrimaiteRayMARLEnv` to train a very basic system with two PPO agents on the [UC2 scenario](./Data-Manipulation-E2E-Demonstration.ipynb)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### First, set up PrimAITE, import packages, and read our config file."
   ]
  },
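  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `primaite setup` command performs first-time setup of the PrimAITE user directories, so that the example config file loaded below is available."
   ]
  },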
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "!primaite setup"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import ray\n",
    "import yaml\n",
    "from ray.rllib.algorithms.ppo import PPOConfig\n",
    "\n",
    "from primaite import PRIMAITE_PATHS\n",
    "from primaite.session.ray_envs import PrimaiteRayMARLEnv"
   ]
  },
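  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`PrimaiteRayMARLEnv` is the PrimAITE environment class that exposes the simulation to RLlib as a multi-agent environment; it is passed directly to the algorithm config below."
   ]
  },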
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load the example multi-agent config shipped with PrimAITE\n",
    "with open(PRIMAITE_PATHS.user_config_path / 'example_config/data_manipulation_marl.yaml', 'r') as f:\n",
    "    cfg = yaml.safe_load(f)\n",
    "\n",
    "# Run Ray in local mode so everything executes in a single process\n",
    "ray.init(local_mode=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Create a Ray algorithm config that accepts our two agents"
   ]
  },
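  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `policy_mapping_fn` below maps each agent ID directly to a policy of the same name, so `defender_1` and `defender_2` each train their own independent PPO policy. The small `train_batch_size` keeps this demo fast rather than aiming for a well-trained system."
   ]
  },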
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "config = (\n",
    "    PPOConfig()\n",
    "    .multi_agent(\n",
    "        policies={'defender_1', 'defender_2'},  # These names are the same as the agents defined in the example config.\n",
    "        policy_mapping_fn=lambda agent_id, episode, worker, **kw: agent_id,\n",
    "    )\n",
    "    .environment(env=PrimaiteRayMARLEnv, env_config=cfg)\n",
    "    .env_runners(num_env_runners=0)\n",
    "    .training(train_batch_size=128)\n",
    "    .evaluation(evaluation_duration=1)\n",
    ")"
   ]
  },
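  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Setting `num_env_runners=0` keeps environment sampling in the main process, which makes the demo simpler to run and debug, and `evaluation_duration=1` limits each evaluation to a single episode."
   ]
  },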
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Start the training\n",
    "This example will save outputs to a default Ray directory and use mostly default settings."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Build the PPO algorithm and run one training iteration\n",
    "algo = config.build()\n",
    "results = algo.train()"
   ]
  },
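  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Each call to `algo.train()` runs a single training iteration. As a minimal sketch, further iterations can be run in a loop (the iteration count below is arbitrary):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run a few more training iterations (count is illustrative only)\n",
    "for _ in range(4):\n",
    "    results = algo.train()"
   ]
  },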
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Evaluate the results"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Run evaluation episodes with the trained policies\n",
    "eval_results = algo.evaluate()"
   ]
  },
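  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Since we called `ray.init` earlier, we can shut Ray down once we are finished to release its resources:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Stop the Ray runtime started by ray.init above\n",
    "ray.shutdown()"
   ]
  }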
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}