An emulation capability for Industrial Control Systems
The SCEPTRE platform is a combination of COTS hardware, software, and Sandia-developed tools. Installation can be local (one computer) or distributed (multiple computers).
For the best performance, install SCEPTRE using the distributed installation guide.
For a local SCEPTRE installation, a single computer will act as both headnode and compute node.
Check “Prerequisites”
sudo su
Install required packages
apt install -y curl git make docker.io
mkdir -p /usr/local/lib/docker/cli-plugins
VERSION=$(GIT_SSL_NO_VERIFY=true git ls-remote https://github.com/docker/compose | grep refs/tags | grep -oP '[0-9]+\.[0-9][0-9]+\.[0-9]+$' | sort | tail -n 1)
curl -kL "https://github.com/docker/compose/releases/download/v${VERSION}/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/lib/docker/cli-plugins/docker-compose
chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
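As a quick check, confirm the Compose plugin is visible to docker (the printed version should match the release downloaded above):
docker compose version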
Optional - If you are behind a proxy server, add the proxy info to your docker config:
mkdir /etc/systemd/system/docker.service.d/
cat <<EOF | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="NO_PROXY=*.example.com"
Environment="HTTP_PROXY=http://proxy.example.com:8080/"
Environment="HTTPS_PROXY=https://proxy.example.com:8080/"
Environment="no_proxy=*.example.com"
Environment="http_proxy=http://proxy.example.com:8080/"
Environment="https_proxy=https://proxy.example.com:8080/"
EOF
systemctl daemon-reload
systemctl restart docker
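To verify the daemon picked up the proxy settings, the configured values should appear in the daemon info:
docker info | grep -i proxy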
Install topologies and base images
mkdir -p /phenix
cd /phenix
git clone https://github.com/sandialabs/sceptre-phenix-topologies.git topologies
git clone https://github.com/sandialabs/sceptre-phenix-images.git vmdb2
Install phēnix source files
mkdir -p /opt
cd /opt
git clone https://github.com/sandialabs/sceptre-phenix.git phenix
Install docker images
Pull pre-built docker containers. Useful for users of SCEPTRE.
docker pull ghcr.io/sandialabs/sceptre-phenix/phenix:main
docker pull ghcr.io/sandia-minimega/minimega/minimega:master
Alternatively, build the docker containers from source. Useful for developers of SCEPTRE.
cd phenix/docker
docker compose build
Tip - If behind a proxy, you must add http_proxy and https_proxy build args to the build command (e.g. --build-arg http_proxy=http://proxy.example.com:8080). Additionally, INSTALL_CERTS build args may be required for custom certificates.
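For example, a build command assuming the same example proxy address used above:
docker compose build --build-arg http_proxy=http://proxy.example.com:8080 --build-arg https_proxy=https://proxy.example.com:8080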
Set the CONTEXT environment variable and start up the SCEPTRE docker containers
echo "export CONTEXT=$(hostname)" >> ~/.rc && source ~/.rc
cd /opt/phenix/docker
docker compose up -d
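To confirm the containers are up, list them from the same directory; the phenix and minimega containers should show as running:
docker compose ps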
Optional Add a few convenience aliases to your shell
cat <<EOF >> ~/._aliases
alias phenix='docker exec -it phenix phenix'
alias mm='docker exec -it minimega minimega -e'
alias mminfo='mm .columns name,state,ip,snapshot,cc_active vm info'
alias ovs-vsctl='docker exec -it minimega ovs-vsctl'
EOF
source ~/._aliases
Access the phēnix web GUI at port 3000 on the host (e.g. http://localhost:3000, or the host's IP address).
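As a quick check from the host, the GUI should answer HTTP requests on port 3000 (curl was installed earlier):
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:3000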
A distributed SCEPTRE installation requires one headnode computer and one or more compute nodes.
Check “Prerequisites”
sudo su
Headnode Install - The headnode is the computer where experiment management tools are installed. Virtual machines do not run on this machine. For hardware requirements, see Headnode Requirements
Install required packages
apt install -y curl git make docker.io
mkdir -p /usr/local/lib/docker/cli-plugins
VERSION=$(GIT_SSL_NO_VERIFY=true git ls-remote https://github.com/docker/compose | grep refs/tags | grep -oP '[0-9]+\.[0-9][0-9]+\.[0-9]+$' | sort | tail -n 1)
curl -kL "https://github.com/docker/compose/releases/download/v${VERSION}/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/lib/docker/cli-plugins/docker-compose
chmod +x /usr/local/lib/docker/cli-plugins/docker-compose
Optional - If you are behind a proxy server, add the proxy info to your docker config:
sudo mkdir /etc/systemd/system/docker.service.d/
cat <<EOF | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="NO_PROXY=*.example.com"
Environment="HTTP_PROXY=http://proxy.example.com:8080/"
Environment="HTTPS_PROXY=https://proxy.example.com:8080/"
Environment="no_proxy=*.example.com"
Environment="http_proxy=http://proxy.example.com:8080/"
Environment="https_proxy=https://proxy.example.com:8080/"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
Install topologies and base images
mkdir -p /phenix
cd /phenix
git clone https://github.com/sandialabs/sceptre-phenix-topologies.git topologies
git clone https://github.com/sandialabs/sceptre-phenix-images.git vmdb2
Install phēnix source files
mkdir -p /opt
cd /opt
git clone https://github.com/sandialabs/sceptre-phenix.git phenix
Install docker images
Pull pre-built docker containers. Useful for users of SCEPTRE.
docker pull ghcr.io/sandialabs/sceptre-phenix/phenix:main
docker pull ghcr.io/sandia-minimega/minimega/minimega:master
Alternatively, build the docker containers from source. Useful for developers of SCEPTRE.
cd phenix/docker
docker compose build
Tip - If behind a proxy, you must add http_proxy and https_proxy build args to the build command (e.g. --build-arg http_proxy=http://proxy.example.com:8080). Additionally, INSTALL_CERTS build args may be required for custom certificates.
Configure NFS share
Setting up a Network File System (NFS) share allows the base KVM images to be shared across multiple nodes.
echo '/phenix/images *(rw,sync,no_subtree_check)' >> /etc/exports
service nfs-kernel-server restart
Tip - This is much more efficient than copying large base KVM images to each node individually
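To verify the share is exported, list the active exports on the headnode (this assumes nfs-kernel-server is already installed, as the restart above implies):
exportfs -v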
Set the CONTEXT environment variable and start up the SCEPTRE docker containers
echo "export CONTEXT=$(hostname)" >> ~/.rc && source ~/.rc
cd /opt/phenix/docker
docker compose up -d
Optional Add a few convenience aliases to your shell
cat <<EOF >> ~/._aliases
alias phenix='docker exec -it phenix phenix'
alias mm='docker exec -it minimega minimega -e'
alias mminfo='mm .columns name,state,ip,snapshot,cc_active vm info'
alias ovs-vsctl='docker exec -it minimega ovs-vsctl'
EOF
source ~/._aliases
Access the phēnix web GUI at port 3000 on the headnode (e.g. http://<Headnode IP address>:3000).
“Compute Node” Install - The compute node is the computer where virtual machines run. For hardware requirements, see Compute Node Requirements
Install required packages
apt install -y nfs-common openvswitch-switch qemu-kvm tmux vim
Mount NFS share
Replace X.X.X.X with the IP address of the headnode:
mkdir -p /phenix/images
echo 'X.X.X.X:/phenix/images /phenix/images nfs auto,rw 0 0' >> /etc/fstab
mount -a
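To confirm the share mounted, check that /phenix/images shows up as an NFS mount:
df -hT /phenix/images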
Build the required backing image (ubuntu.qc2) via the CLI using the following commands on the headnode:
phenix image create -T /phenix/vmdb2/scripts/ubuntu --format qcow2 --release focal -c ubuntu
phenix image build ubuntu -o /phenix -c -x
mv /phenix/ubuntu.qc2 /phenix/images
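As a sanity check, the backing image should now exist in the shared images directory:
ls -lh /phenix/images/ubuntu.qc2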
Access phenix web
The phēnix web interface allows for creating, configuring, and managing SCEPTRE experiments. First open up http://<Headnode IP address>:3000 in your browser, and you'll see the home page displayed:
Upload topology
You must first upload the topology file for phēnix to ingest. From the home page, click on the Configs tab to navigate to the configurations page. Next click the button and drag/drop the helloworld.yaml file into the dialog box to upload it:
Alternatively, you can upload the topology via the CLI using the following command on the headnode:
phenix config create /phenix/topologies/helloworld.yaml
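To confirm the upload, you can also list known configs from the CLI (using the phenix alias defined earlier); the helloworld topology should appear in the output:
phenix config list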
You should now see the helloworld topology in the configs table:
Create Experiment
From the Experiments tab, click the button to create a new experiment. Fill out the dialog as shown (leaving everything else blank) and then click the button:
Alternatively, you can create the experiment via the CLI using the following command on the headnode:
phenix exp create my_first_experiment -t helloworld
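To confirm the experiment was created, you can list experiments from the CLI (assuming your phenix build provides the list subcommand):
phenix exp list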
Deploy Experiment
Your newly created experiment will appear in the experiments table:
To start the experiment, click the button and then confirm:
Alternatively, you can deploy the experiment via the CLI using the following command on the headnode:
phenix exp start my_first_experiment
Once your experiment starts up, its status will be updated in the experiments table. Click on the name of the experiment, and phēnix will switch to the experiment info page:
Tip - Click on the State of Health button to see a network topology map, and click the Go Back button to return to the Experiment Info page.
Test
From here you can interact with individual Virtual Machines (VMs) by clicking on the respective screenshot, which will open a new browser tab for that VM:
Login as the ubuntu user (with password ubuntu) on either of the VMs and try pinging the other VM's IP address:
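For example, from the console of one VM (substitute the other VM's IP address from the helloworld topology; the value below is only a placeholder):
ping -c 4 <other VM IP>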
Now that you can run the basic helloworld topology, we are ready to run a topology of a notional ICS. This topology, called SCEPTRE-on-a-Platter (SOAP), models a notional SCADA system for a 300-bus microgrid. The model uses PyPower to simulate the physical process, Ignition for the SCADA software, and includes the ControlThings.io environment to provide a testing suite for the ICS environment.
phenix image create -O /phenix/vmdb2/overlays/bennu,/phenix/vmdb2/overlays/brash -T /phenix/vmdb2/scripts/aptly,/phenix/vmdb2/scripts/bennu --format qcow2 --release focal -c bennu
phenix image build bennu -o /phenix -c -x
If you need additional resources for this topology, email wg-sceptre-core@sandia.gov with your request. Upload the soap topology, then create an experiment using it, selecting the soap scenario file under the "Experiment Scenario" dropdown. Alternatively, create the experiment via the CLI using the following command on the headnode:
phenix exp create my_soap_experiment -t soap -s soap
To get help with SCEPTRE, contact us at wg-sceptre@sandia.gov.