Manual Installation of CSLE
The recommended way to install CSLE is to use Ansible, as described above. This section describes an alternative, manual way of installing CSLE. A manual installation can be suitable if you want to customize the installation.
The installation of CSLE can be divided into four main steps (see Fig. 27). The first step is “Installation setup”, which comprises installation configuration and installation of build tools. In the second step, the metastore and the simulation system are installed. In the third and fourth steps, the emulation system and the management system are installed, respectively.
Installation Setup
In this step of the installation, the source code of CSLE is downloaded, configuration parameters are set, and tools required in later steps of the installation are installed.
Start with installing the necessary build tools and version management tools by running the following commands:
sudo apt install build-essential
sudo apt install make
sudo apt install git
sudo apt install bzip2
wget https://repo.anaconda.com/archive/Anaconda3-5.0.0-Linux-x86_64.sh
chmod u+rwx Anaconda3-5.0.0-Linux-x86_64.sh
./Anaconda3-5.0.0-Linux-x86_64.sh
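You can verify that the Anaconda installation succeeded by checking the conda version (this assumes conda was added to your PATH during the installation):
conda --version # prints the installed conda version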
Next, clone the CSLE repository and set up environment variables by running the commands:
git clone https://github.com/Limmen/csle
export CSLE_HOME=/path/to/csle/ # for bash
set -gx CSLE_HOME "/path/to/csle" # for fish
Further, add the following line to .bashrc to set the environment variable CSLE_HOME permanently:
export CSLE_HOME=/path/to/csle/
Similarly, add the following line to the fish configuration file to set the environment variable CSLE_HOME permanently in the fish shell:
set -gx CSLE_HOME "/path/to/csle"
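You can check that the environment variable is set in your current shell by running:
echo $CSLE_HOME # should print the path to the CSLE repository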
After performing the steps above, you should have the directory layout shown in Fig. 28.
Next, create a directory to store PID files by running the following commands (change my_user to your username):
sudo mkdir /var/log/csle
sudo chmod -R u+rw /var/log/csle
sudo chown -R my_user /var/log/csle
Similarly, create a directory to store log files by running the following commands (change my_user to your username):
mkdir /tmp/csle
sudo chmod -R u+rw /tmp/csle
sudo chown -R my_user /tmp/csle
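You can verify that both directories exist and are owned by your user by running:
ls -ld /var/log/csle /tmp/csle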
Next, add the following line to the sudoers file using visudo (change my_user to your username):
WARNING: take care when editing the sudoers file. If the sudoers file becomes corrupted, it can make the system unusable.
my_user ALL = NOPASSWD: /usr/sbin/service docker stop, /usr/sbin/service docker start, /usr/sbin/service docker restart, /usr/sbin/service nginx stop, /usr/sbin/service nginx start, /usr/sbin/service nginx restart, /usr/sbin/service postgresql start, /usr/sbin/service postgresql stop, /usr/sbin/service postgresql restart, /bin/kill, /usr/bin/journalctl -u docker.service -n 100 --no-pager -e
By adding the above line to the sudoers file, CSLE will be able to view logs and start and stop management services without requiring a password to be entered. (Note that the exact paths used above may differ on your system; verify the paths by running the commands whereis service, whereis journalctl, etc.)
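You can verify that the sudoers entry is active by listing your sudo privileges:
sudo -l # the services listed above should appear under the NOPASSWD entries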
Next, set up SSH keys so that all servers (leader and workers) have SSH access to each other without requiring a password. To do this, generate an SSH key pair with the command ssh-keygen on each server and copy the public key (e.g., id_rsa.pub) to the file .ssh/authorized_keys on the other servers.
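As an example, the following commands generate a key pair on one server and copy the public key to another server (replace my_user and worker-ip with the actual username and server IP):
ssh-keygen -t rsa # generate a key pair; accept the default file location
ssh-copy-id my_user@worker-ip # append the public key to .ssh/authorized_keys on the remote server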
Lastly, define the default username and password of the management system by editing the file csle/config.json.
Installing the Metastore
The metastore is based on PostgreSQL and Citus. Installing the metastore thus corresponds to installing and configuring PostgreSQL and Citus.
To install PostgreSQL v15 and the Citus extension v11.2, run the following commands:
curl https://install.citusdata.com/community/deb.sh | sudo bash
sudo apt-get -y install postgresql-15-citus-11.2
sudo pg_conftool 15 main set shared_preload_libraries citus
sudo pg_conftool 15 main set listen_addresses '*'
Verify the installed version of PostgreSQL by running the command
psql --version
Next, set up a password for the postgres user by running the commands:
sudo -u postgres psql # start psql session
psql> \password postgres # set postgres password
Next, set up password authentication for the postgres user and allow remote connections by performing the following steps:
- Open the file /etc/postgresql/<YOUR_VERSION>/main/pg_hba.conf and replace the existing content to match your IP addresses and desired security level:
local all postgres md5
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
host all all 172.31.212.0/24 trust
- Restart PostgreSQL with the command:
sudo service postgresql restart
- Run the following command to have PostgreSQL restarted automatically when the server is restarted:
sudo update-rc.d postgresql enable
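Note: on systems where update-rc.d is not available, enabling the service through systemd achieves the same effect (assuming a systemd-based distribution):
sudo systemctl enable postgresql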
After completing the steps above, create the CSLE database and set up the Citus extension by running the following command:
cd metastore; make db
Next, edit the file csle/metastore/create_cluster.sql and configure the IP addresses of the worker servers and of the leader. Then, on the leader, run the following commands to set up the Citus cluster and create the tables:
cd metastore; make cluster
cd metastore; make tables
Next, update the variable HOST in the class METADATA_STORE in the file csle/simulation-system/libs/csle-common/src/csle_common/constants/constants.py.
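To locate the variable to edit, you can, for example, search for it in the file:
grep -n "HOST" $CSLE_HOME/simulation-system/libs/csle-common/src/csle_common/constants/constants.py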
Next, define the IPs of the cluster nodes and the metastore leader by editing the file csle/config.json.
Lastly, make the PostgreSQL log files readable by your user by running the commands:
sudo chmod -R u+rw /var/log/postgresql
sudo chown -R my_user /var/log/postgresql
Installing the Simulation System
The simulation system consists of a set of Python libraries and a set of configuration files. To install the simulation system, the Python libraries need to be installed and the configuration files need to be inserted into the metastore.
If you do not have Python 3.9 or later in your base environment, start by installing Python 3.9 by running the commands:
conda create -n py39 python=3.9
conda activate py39 # alternatively, "source activate py39" for old versions of conda
The simulation system includes 17 Python libraries: csle-base, csle-collector, csle-ryu, csle-common, csle-attacker, csle-defender, csle-system-identification, gym-csle-stopping-game, gym-csle-apt-game, gym-csle-cyborg, csle-agents, csle-rest-api, csle-cli, csle-cluster, gym-csle-intrusion-response-game, csle-tolerance, and csle-attack-profiler.
These libraries can either be installed from PyPi or directly from source.
To install all libraries at once from PyPi, run the command:
pip install csle-base csle-collector csle-ryu csle-common csle-attacker csle-defender csle-system-identification gym-csle-stopping-game csle-agents csle-rest-api csle-cli csle-cluster gym-csle-intrusion-response-game csle-tolerance gym-csle-apt-game gym-csle-cyborg csle-attack-profiler
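After the installation completes, you can verify that the libraries are installed by inspecting one of them with pip, for example:
pip show csle-common # prints the installed version and location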
To install the libraries one by one rather than all at once, follow the instructions below.
Install csle-base
from PyPi by running the command:
pip install csle-base
Alternatively, install csle-base
from source by running the commands:
cd simulation-system/libs/csle-base/
pip install -e .
cd ../../../
Next, install csle-collector
from PyPi by running the command:
pip install csle-collector
Alternatively, install csle-collector
from source by running the commands:
cd simulation-system/libs/csle-collector/
pip install -e .
cd ../../../
Next, install csle-ryu
from PyPi by running the command:
pip install csle-ryu
Alternatively, install csle-ryu
from source by running the commands:
cd simulation-system/libs/csle-ryu/
pip install -e .
cd ../../../
Next, install csle-common
from PyPi by running the command:
pip install csle-common
Alternatively, install csle-common
from source by running the commands:
cd simulation-system/libs/csle-common/
pip install -e .
cd ../../../
Next, install csle-attacker
from PyPi by running the command:
pip install csle-attacker
Alternatively, install csle-attacker
from source by running the commands:
cd simulation-system/libs/csle-attacker/
pip install -e .
cd ../../../
Next, install csle-defender
from PyPi by running the command:
pip install csle-defender
Alternatively, install csle-defender
from source by running the commands:
cd simulation-system/libs/csle-defender/
pip install -e .
cd ../../../
Next, install csle-system-identification
from PyPi by running the command:
pip install csle-system-identification
Alternatively, install csle-system-identification
from source by running the commands:
cd simulation-system/libs/csle-system-identification/
pip install -e .
cd ../../../
Next, install gym-csle-stopping-game
from PyPi by running the command:
pip install gym-csle-stopping-game
Alternatively, install gym-csle-stopping-game
from source by running the commands:
cd simulation-system/libs/gym-csle-stopping-game/
pip install -e .
cd ../../../
Next, install csle-agents
from PyPi by running the command:
pip install csle-agents
Alternatively, install csle-agents
from source by running the commands:
cd simulation-system/libs/csle-agents/
pip install -e .
cd ../../../
Next, install csle-rest-api
from PyPi by running the command:
pip install csle-rest-api
Alternatively, install csle-rest-api
from source by running the commands:
cd simulation-system/libs/csle-rest-api/
pip install -e .
cd ../../../
Next, install csle-cli
from PyPi by running the command:
pip install csle-cli
Alternatively, install csle-cli
from source by running the commands:
cd simulation-system/libs/csle-cli/
pip install -e .
cd ../../../
Next, install csle-cluster
from PyPi by running the command:
pip install csle-cluster
Alternatively, install csle-cluster
from source by running the commands:
cd simulation-system/libs/csle-cluster/
pip install -e .
cd ../../../
Next, install gym-csle-intrusion-response-game
from PyPi by running the command:
pip install gym-csle-intrusion-response-game
Alternatively, install gym-csle-intrusion-response-game
from source by running the commands:
cd simulation-system/libs/gym-csle-intrusion-response-game/
pip install -e .
cd ../../../
Next, install csle-tolerance
from PyPi by running the command:
pip install csle-tolerance
Alternatively, install csle-tolerance
from source by running the commands:
cd simulation-system/libs/csle-tolerance/
pip install -e .
cd ../../../
Next, install csle-attack-profiler
from PyPi by running the command:
pip install csle-attack-profiler
Alternatively, install csle-attack-profiler
from source by running the commands:
cd simulation-system/libs/csle-attack-profiler/
pip install -e .
cd ../../../
Next, install gym-csle-apt-game
from PyPi by running the command:
pip install gym-csle-apt-game
Alternatively, install gym-csle-apt-game
from source by running the commands:
cd simulation-system/libs/gym-csle-apt-game/
pip install -e .
cd ../../../
Next, install gym-csle-cyborg
from PyPi by running the command:
pip install gym-csle-cyborg
Alternatively, install gym-csle-cyborg
from source by running the commands:
cd simulation-system/libs/gym-csle-cyborg/
pip install -e .
cd ../../../
Finally, on the leader node only, insert the simulation configurations into the metastore by running the commands:
cd simulation-system/envs
make install
cd ../../
Installing the Emulation System
The emulation system consists of a set of configuration files and a set of Docker images, which are divided into a set of “base images” and a set of “derived images”. The base images contain common functionality required by all images in CSLE whereas the derived images add specific configurations to the base images, e.g., specific vulnerabilities. To install the emulation system, the configuration files must be inserted into the metastore and the Docker images must be built or downloaded.
Start by adding Docker’s official GPG key to Ubuntu’s package manager by running the commands (you can also follow the instructions for installing Docker at https://docs.docker.com/engine/install/ubuntu/):
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Next, install Docker and openvswitch
by running the commands:
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io openvswitch-switch
sudo groupadd docker
sudo usermod -aG docker $USER
After running the commands above, start a new shell for the changes to take effect.
Next, set up a Docker swarm by running the following command on the leader:
docker swarm init --advertise-addr <ip address of the leader>
After running the above command, a secret token will be returned. Use this token to run the following command on each worker to add it to the swarm:
docker swarm join --token <my_token> <leader_ip>:2377
Note: If you forget the swarm token, you can display it by running the following command on the leader: docker swarm join-token worker.
You can verify the Docker swarm configuration by running docker node ls.
After completing the Docker installation, pull the base images of CSLE from DockerHub by running the commands:
cd emulation-system/base_images
make pull
cd ../../
Alternatively, you can build the base images locally (this takes several hours) by running the commands:
cd emulation-system/base_images
make build
cd ../../
Next, pull the derived images of CSLE from DockerHub by running the commands:
cd emulation-system/derived_images
make pull
cd ../../
Alternatively, you can build the derived images locally by running the commands:
cd emulation-system/derived_images
make build
cd ../../
Next, insert the emulation configurations into the metastore by running the commands on the leader node only:
cd emulation-system/envs
make install
cd ../../
Alternatively, you can install the base images, the derived images, and the emulation configurations all at once by running the commands:
cd emulation-system
make build
cd ../
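You can check which CSLE images are available locally by listing the Docker images (assuming the image names contain the string csle):
docker images | grep csle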
A few kernel configuration parameters need to be updated to be able to execute emulations. In particular, the configuration variables max_map_count and max_user_watches need to be updated.
Update max_map_count by editing the file /etc/sysctl.conf and adding the following line:
vm.max_map_count=262144
Alternatively, for a non-persistent configuration, run the command:
sysctl -w vm.max_map_count=262144
You can check the configuration by running the command:
sysctl vm.max_map_count
Finally, update max_user_watches
by running the command:
echo fs.inotify.max_user_watches=524288 | \
sudo tee -a /etc/sysctl.conf && \
sudo sysctl -p
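You can verify that both kernel parameters have the expected values by running:
sysctl vm.max_map_count fs.inotify.max_user_watches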
Installing the Management System
The management system consists of the monitoring systems (i.e., Grafana, Prometheus, Node exporter, and cAdvisor) and the web application that implements the web interface, which is based on node.js. Hence, installing the management system corresponds to installing these services and applications.
Start by installing node.js, its version manager nvm, and its package manager npm on the leader only by running the commands:
curl https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.2/install.sh --output nvm.sh
chmod u+rwx nvm.sh
./nvm.sh
Then set up the nvm environment variables by adding the following lines to .bashrc:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
Then you can install node.js and npm using the commands (again on the leader only):
nvm -v # Verify nvm installation
nvm install node # Install node
npm install -g npm # Update npm
node -v # Verify version of node
npm -v # Verify version of npm
Next, install and build the web application of the management system by running the following commands:
cd csle/management-system/csle-mgmt-webapp
npm install
npm run build
Note: when you run the command npm install, you may need to add the flag --legacy-peer-deps. Further, if you have an old operating system, you may need to run the command export NODE_OPTIONS=--openssl-legacy-provider before running npm run build.
Next, install and start pgadmin
on the leader by running the following commands:
docker pull dpage/pgadmin4
docker run -p 7778:80 -e "PGADMIN_DEFAULT_EMAIL=user@domain.com" -e "PGADMIN_DEFAULT_PASSWORD=SuperSecret" -d dpage/pgadmin4
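You can check that the pgadmin container is running with:
docker ps | grep pgadmin # the container should be listed with port 7778 mapped to port 80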
Next, configure Nginx on the leader by editing the file:
/etc/nginx/sites-available/default
Replace the current configuration with the following:
server {
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name _;
location /pgadmin {
proxy_set_header X-Script-Name /pgadmin;
proxy_set_header Host $host;
proxy_pass http://localhost:7778/;
proxy_redirect off;
}
location / {
proxy_pass http://localhost:7777/;
proxy_buffering off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
}
}
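Before restarting Nginx, you can validate the configuration syntax by running:
sudo nginx -t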
Restart Nginx on the leader by running the command:
sudo service nginx restart
If you have HTTPS enabled on the REST API and have certificates, you can configure them in Nginx on the leader by editing the file /etc/nginx/sites-available/default as follows:
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name _;
return 301 https://$host$request_uri;
location /pgadmin {
proxy_set_header X-Script-Name /pgadmin;
proxy_set_header Host $host;
proxy_pass http://localhost:7778/;
proxy_redirect off;
}
}
server {
listen 443 ssl default_server;
listen [::]:443 ssl default_server;
ssl_certificate /var/log/csle/certs/csle.dev.crt;
ssl_certificate_key /var/log/csle/certs/csle_private.key;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name csle.dev;
location / {
proxy_pass http://localhost:7777/;
proxy_buffering off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
}
}
Next, configure Nginx on the workers by editing the following file:
/etc/nginx/sites-available/default
Open the file on each worker and replace the current configuration with the following (replace leader-ip with the actual IP):
server {
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/html;
index index.html index.htm index.nginx-debian.html;
server_name _;
location /pgadmin {
proxy_set_header X-Script-Name /pgadmin;
proxy_set_header Host $host;
proxy_pass http://leader-ip:7778/;
proxy_redirect off;
}
location / {
proxy_pass http://leader-ip:7777/;
proxy_buffering off;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Port $server_port;
}
}
Next, make the Nginx log files readable by your user by running the commands (change my_user to your username):
sudo chmod -R u+rw /var/log/nginx
sudo chown -R my_user /var/log/nginx
Lastly, restart Nginx on each worker and on the leader by running the command:
sudo service nginx restart
After completing the steps above, install the web application and the monitoring services by running the commands:
cd management-system
chmod u+x install.sh
./install.sh
Next, configure the IP of the leader by editing the following file on the leader:
csle/management-system/csle-mgmt-webapp/src/components/Common/serverIp.js
Next, configure the port of the web interface on the leader by editing the file:
csle/management-system/csle-mgmt-webapp/src/components/Common/serverPort.js
To start and stop the monitoring systems using the CSLE CLI, their binaries need to be added to the system path.
Add the Prometheus binary to the system path by adding the following line to .bashrc on all nodes:
export PATH=/path/to/csle/management-system/prometheus/:$PATH
If you have fish shell instead of bash, add the following line to the configuration file of fish:
fish_add_path /path/to/csle/management-system/prometheus/
Similarly, to add the Node exporter binary to the path, add the following line to .bashrc
on all nodes:
export PATH=/path/to/csle/management-system/node_exporter/:$PATH
If you have fish shell instead of bash, add the following line to the configuration file of fish:
fish_add_path /path/to/csle/management-system/node_exporter/
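You can verify that the binaries are found on the path by running, for example (this assumes the binaries have the default names prometheus and node_exporter):
which prometheus node_exporter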
Finally, start the CSLE daemons and set up the management user account with administrator privileges by running the following command on all nodes:
csle init