csle_agents.agents.ppo package
Submodules
csle_agents.agents.ppo.ppo_agent module
- class csle_agents.agents.ppo.ppo_agent.PPOAgent(simulation_env_config: csle_common.dao.simulation_config.simulation_env_config.SimulationEnvConfig, emulation_env_config: Union[None, csle_common.dao.emulation_config.emulation_env_config.EmulationEnvConfig], experiment_config: csle_common.dao.training.experiment_config.ExperimentConfig, training_job: Optional[csle_common.dao.jobs.training_job_config.TrainingJobConfig] = None, save_to_metastore: bool = True)
Bases: csle_agents.agents.base.base_agent.BaseAgent
A PPO agent using the PPO implementation from Stable Baselines3.
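A minimal usage sketch is shown below. It assumes that a SimulationEnvConfig and an ExperimentConfig have already been constructed (e.g., loaded from the metastore), and that train() is the training entry point inherited from BaseAgent; the configuration objects themselves are placeholders, not values from this documentation.

```python
# Hypothetical usage sketch: simulation_env_config and experiment_config
# are assumed to be built elsewhere.
from csle_agents.agents.ppo.ppo_agent import PPOAgent

agent = PPOAgent(
    simulation_env_config=simulation_env_config,  # simulation environment to train in
    emulation_env_config=None,                    # optional emulation config
    experiment_config=experiment_config,          # hyperparameters, seeds, output dir
    save_to_metastore=False,                      # skip persisting results to the metastore
)
experiment_execution = agent.train()  # runs PPO training and returns the experiment results
```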
- class csle_agents.agents.ppo.ppo_agent.PPOTrainingCallback(exp_result: csle_common.dao.training.experiment_result.ExperimentResult, seed: int, random_seeds: List[int], training_job: csle_common.dao.jobs.training_job_config.TrainingJobConfig, exp_execution: csle_common.dao.training.experiment_execution.ExperimentExecution, max_steps: int, simulation_name: str, start: float, states: List[csle_common.dao.simulation_config.state.State], actions: List[csle_common.dao.simulation_config.action.Action], player_type: csle_common.dao.training.player_type.PlayerType, env: csle_common.dao.simulation_config.base_env.BaseEnv, experiment_config: csle_common.dao.training.experiment_config.ExperimentConfig, verbose=0, eval_every: int = 100, eval_batch_size: int = 10, save_every: int = 10, save_dir: str = '', L: int = 3, gym_env_name: str = '', save_to_metastore: bool = False)
Bases: stable_baselines3.common.callbacks.BaseCallback
Callback for monitoring PPO training; periodically evaluates and saves the policy (controlled by the eval_every and save_every parameters).
- logger: stable_baselines3.common.logger.Logger
- model: stable_baselines3.common.base_class.BaseAlgorithm
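The sketch below illustrates how a callback of this kind hooks into Stable Baselines3 training via model.learn(callback=...). The MonitoringCallback class and its evaluation/saving logic are illustrative stand-ins mirroring the eval_every, eval_batch_size, save_every, and save_dir parameters above, not CSLE's actual implementation.

```python
# Illustrative sketch of an SB3 monitoring callback; not CSLE's actual code.
import gymnasium as gym
from stable_baselines3 import PPO
from stable_baselines3.common.callbacks import BaseCallback
from stable_baselines3.common.evaluation import evaluate_policy

class MonitoringCallback(BaseCallback):
    """Periodically evaluates and checkpoints the policy during training."""

    def __init__(self, eval_env, eval_every: int = 100, eval_batch_size: int = 10,
                 save_every: int = 10, save_dir: str = "", verbose: int = 0):
        super().__init__(verbose)
        self.eval_env = eval_env
        self.eval_every = eval_every
        self.eval_batch_size = eval_batch_size
        self.save_every = save_every
        self.save_dir = save_dir

    def _on_step(self) -> bool:
        # self.n_calls counts how many times the callback has been invoked
        if self.n_calls % self.eval_every == 0:
            mean_reward, _ = evaluate_policy(
                self.model, self.eval_env, n_eval_episodes=self.eval_batch_size)
            self.logger.record("eval/mean_reward", mean_reward)
        if self.save_dir and self.n_calls % self.save_every == 0:
            self.model.save(f"{self.save_dir}/ppo_model_{self.n_calls}")
        return True  # returning False would abort training

env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000,
            callback=MonitoringCallback(eval_env=env, eval_every=1_000,
                                        save_every=5_000, save_dir="/tmp"))
```

The logger and model attributes listed above are inherited from BaseCallback and are populated by Stable Baselines3 when training starts, which is why the callback body can reference self.model and self.logger without setting them itself.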