5. Operate¶
PEAT’s primary interface is a command-line program with sub-commands for each function.
- scan: carefully discover supported devices on a network
- pull: acquire artifacts from a device, such as process logic, configuration, or firmware
- parse: parse artifacts to extract useful and human-readable logic or configuration
- push: push firmware, logic, or configuration to a device
- pillage: search for OT device-specific configuration and project files on a host machine
- heat: extract and parse device artifacts from network traffic captures (PCAPs)
5.1. Basics¶
Note
Refer to the system requirements and installation documentation for details on setup and installation
Note
Refer to Reference Documents for documentation of the available command line arguments
# Display the command line usage for PEAT and its commands
# --help and -h both work to display help, as well as no arguments
peat --help
peat scan --help
peat pull -h
peat parse -h
peat push -h
peat pillage -h
# Examples
peat scan --examples
peat pull --examples
peat parse --examples
peat push --examples
peat pillage --examples
peat heat --examples
# The standard PEAT Linux install has a man page available
man peat
# Scanning
peat scan -i 192.0.2.0/24
# Pulling
peat pull -i 192.0.2.0/24
# Parsing
peat parse <path>
peat parse <path1> <path2> ...
peat parse *.ext
# -- is required before any path arguments if arguments
# like "-d" that have multiple values are used.
peat parse -d <device-type> -- <path>
# The run name is the name of the folder in ./peat_results containing PEAT data
# Either "--run-name" or "-R" can be used
peat scan -i 192.0.2.0/24 --run-name scan_example
peat pull -i 192.0.2.0/24 -R pull_example
peat parse -d selrtac --run-name parse_example -- ./examples/
# Pushing
peat push -d <device-type> -i <ip> -- <filename>
peat push -d <device-type> -i <ip> -t <push-type> <filename>
# Pillage
# NOTE: currently (04/20/2023) a configuration file with a "pillage" section is required.
# NOTE: pillage MUST be run as root, using sudo or by su'ing to root ("sudo su")
peat pillage -c ./examples/peat-config.yaml -P <raw-image-file>
peat pillage -c ./examples/peat-config.yaml -P <local-filesystem-path>
# List available device modules
# NOTE: currently this shows ALL modules, including ones not supported for the command (e.g. scan)
# A future version of PEAT will only show modules that support the command (e.g. scan)
peat scan --list-modules
peat pull --list-modules
peat parse --list-modules
peat push --list-modules
peat pillage --list-modules
# Dry run where no commands will be executed
# Useful for understanding PEAT's behavior and experimenting with configuration
# options before packets are sent to OT devices.
peat scan --dry-run -c ./examples/peat-config.yaml -vV -i 192.0.2.0/24
5.1.1. Note about Windows usage¶
We recommend running PEAT in an Administrator-level PowerShell terminal or script. Running as a standard user will slightly reduce functionality, since certain Windows networking APIs are restricted; this affects some network features, such as the ability to do ICMP pings, ARP pings, or network sniffing. PEAT will run fine in a CMD terminal, but some terminal functionality may not work as well (e.g. terminal output colors and formatting).
5.2. Output¶
PEAT will output a number of files, depending on the subcommand used (e.g. scan). By default, these files are saved to the directory ./peat_results/ in the current working directory. The directory that files are saved to can be changed with the -o <dirname> command line argument, OUT_DIR in a configuration file, or by setting the PEAT_OUT_DIR environment variable.
Device-specific output will be in <out-dir>/<run-dir>/devices/<device-id>/. For example, the output from running peat pull -R example_pull -i 192.0.2.20 will be in the ./peat_results/example_pull/devices/192.0.2.20/ directory.
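As a sketch, assuming a hypothetical output directory /data/peat_out, these three methods are equivalent:
# Command line argument
peat scan -i 192.0.2.0/24 -o /data/peat_out
# Environment variable
PEAT_OUT_DIR=/data/peat_out peat scan -i 192.0.2.0/24
# Configuration file containing the line "OUT_DIR: /data/peat_out"
peat scan -i 192.0.2.0/24 -c peat-config.yaml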
5.2.1. Run directory and run name¶
Every time PEAT is run, a new sub-directory of ./peat_results/ is created. This is the “run dir”, and contains all of the data for a run. The name of the directory can be configured using the -R (--run-name) argument. If the run name isn’t specified, it will be auto-generated with the following format: <peat-command>_<config-name>_<timestamp>_<run-id>. The run directory can also be set directly with --run-dir, which bypasses peat_results/.
Examples:
- peat scan --run-name example_run -i 127.0.0.1 results in ./peat_results/example_run/
- peat pull -c ./examples/peat-config-sceptre-testing.yaml -i 127.0.0.1 results in ./peat_results/pull_sceptre-test-config_2022-06-17_165532013980/
- peat scan -i 127.0.0.1 results in ./peat_results/scan_default-config_2022-09-27_165532013980/
- peat scan --run-dir example_run_dir -i 127.0.0.1 results in ./example_run_dir/
5.2.2. Directory structure¶
The location and names of directories are configurable, refer to the Configure section for details on how to do this.
- devices/: All output for devices, with subdirectories for each device by device ID. The device ID is typically the IP address in the case of pulls, but can be other identifying information if the IP isn’t known, such as name, serial port, name of source file, or other identifiers.
- elastic_data/: Copies of documents pushed to Elasticsearch. These can be used to rebuild the Elasticsearch data if you only have the files or don’t have an Elasticsearch server available when running PEAT. This is only created if Elasticsearch is in use (the -e argument).
  - mappings/: Elasticsearch type mappings for the PEAT indices
- heat_artifacts/: Output from HEAT (peat heat <args>)
- logs/: Records of PEAT’s command output, protocol logs, and other information that’s useful for debugging or knowing what PEAT did. These include protocol- and module-specific log files (e.g. Telnet logs, ENIP logs).
- peat_metadata/: Files related to PEAT itself, including JSON- and YAML-formatted dumps of PEAT’s configuration and internal state.
- summaries/: Summary results of a command as JSON files, e.g. Scan summary, Pull summary, or Parse summary. These include metadata about the operation (e.g., how many files were parsed), as well as a combined set of device summaries (most of the data, but some fields are excluded, like events, memory, blobs, etc.). To view the complete results for devices, look in the devices/ directory.
- temp/: Temporary files, used by PEAT during a run to stage files before they are moved elsewhere.
5.2.2.1. Typical output structure¶
NOTE: the file structure below will differ if any of the *_DIR
variables were configured, e.g. OUT_DIR, ELASTIC_DIR or LOG_DIR.
... represents “miscellaneous files”.
The output directory structure generally looks like this:
./peat_results/
README.md
<command>_<config-name>_<timestamp>_<run_id>/
devices/
<device-id>/
device-data-summary.json
device-data-full.json
...
elastic_data/
mappings/
...
...
heat_artifacts/
...
logs/
enip/
...
peat.log
json-log.jsonl
debug-info.txt
elasticsearch.log
telnet.log
...
peat_metadata/
peat_configuration.yaml
peat_state.json
peat_state.yaml
summaries/
scan-summary.json
pull-summary.json
parse-summary.json
temp/
...
5.2.3. Viewing the results¶
Examples and helpful commands for inspecting the file results.
peat pull --run-name example_pull -i 192.0.2.0/24 10.0.0.5-10 172.16.17.18
# … wait a while …
# View scan results
cat peat_results/example_pull/summaries/scan-summary.json
# Use with the "jq" command for color-highlighted output (https://stedolan.github.io/jq/)
cat peat_results/example_pull/summaries/scan-summary.json | jq .
# View listing of all the files pulled (requires the "tree" command, install using "sudo apt install tree")
tree -arv peat_results/example_pull/devices/
# Filtering memory and event entries from device results for 192.168.3.200 using 'jq'
cat peat_results/example_pull/devices/192.168.3.200/device-data-full.json | jq 'del(.memory,.event)'
5.2.4. Device-specific results¶
Warning
These lists are not exhaustive
5.2.4.1. Schneider Modicon M340¶
| Type of file | File extension | Description |
|---|---|---|
| project | apx | Raw project file pulled from the device (‘peat parse’ can be run on this) |
| parsed-config | txt | Configuration and metadata extracted from device and/or project file |
| tc6 | xml | TC6 format usable by the PLCOpen editor and compilable to Structured Text or executable C code emulating the logic. Only written if logic and/or variables are successfully extracted. |
| logic | st | Structured Text extracted from the project file. Only written if logic is successfully extracted. |
| text-dump | txt | Debugging dump created if logic extraction fails |
| blob-packets | txt | Raw dump of the bytes transferred when downloading a project file |
| umas-packets | json | Metadata and contents of UMAS packets transferred when downloading a project file |
5.2.4.2. Allen-Bradley ControlLogix¶
| Type of file | File extension | Description |
|---|---|---|
| parsed-logic | txt | Decompiled ladder logic in a human-readable form |
| parsed-logic | json | Extracted values from the ladder logic in machine-readable form |
| raw-logic | json | The raw tags and values pulled from the device |
5.2.4.3. SEL Relays¶
| Type of file | File extension | Description |
|---|---|---|
| SET_ALL | TXT | Text file containing all relay settings in one file |
| CFG | TXT | Text file containing a list of all config files resident inside the relay |
| SET_* | TXT | Individual configuration files for the SEL relay; varies by relay model |
5.3. Scanning for systems on a network¶
PEAT’s scanning functionality is essentially a lightweight Nmap specialized for OT devices, with a focus on minimizing or eliminating impacts to field devices and processes. It can discover supported OT devices on a network, determine their type, and retrieve basic information about them.
IP subnets can be scanned by specifying the subnet in CIDR prefix notation. For example, running peat scan -i 192.0.2.0/24 will scan all 254 host addresses in the range 192.0.2.1 to 192.0.2.254.
The results will be written to a file named ./peat_results/the_run_name/summaries/scan-summary.json. Terminal output can additionally be enabled using the argument -E (--print-results). If Elasticsearch output is enabled, then scan results will also be saved to the peat-scan-summaries index in Elasticsearch.
Detailed information can be collected via a pull (peat pull). Note that peat pull will implicitly perform a scan, and only pull from the devices positively identified by the scan. If you know you want to collect detailed information in addition to device discovery, then just run a pull via peat pull instead of performing a scan followed by a pull.
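For example, the two approaches below should yield equivalent results; the second avoids scanning the network twice (a sketch using the piping pattern shown in the usage examples later in this chapter):
# Two steps: scan, then feed the scan results into a pull
peat scan -q -E -i 192.0.2.0/24 | peat pull -f -
# One step: pull performs the discovery scan implicitly
peat pull -i 192.0.2.0/24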
5.3.1. Broadcast scanning¶
Network broadcasts can be used to discover devices on a network in a more efficient and less intrusive manner. This method will send a single packet (or set of packets) to the broadcast address for a subnet (e.g., 192.0.2.255 for the subnet 192.0.2.0/24) and wait for devices to respond. Any devices that respond will then be interrogated further using the normal unicast IP methods available. Note that only IP (OSI layer 3) broadcasts are currently supported. Layer 2 (MAC) broadcast support may be added at a later date.
5.3.1.1. Benefits¶
Reduced load on the network (fewer packets sent/received)
Only devices expecting the traffic respond
Reduced risk of causing issues with unrelated devices
Can efficiently discover and query devices in extremely large networks (e.g. a full class B subnet with 65,534 host IPs)
5.3.1.2. Supported devices¶
ControlLogix (using the CIP protocol)
5.3.1.3. Running as container¶
Warning
The --network "host" argument to Docker/Podman is required. This is because PEAT must be in the same broadcast domain(s) as the network(s) being scanned, and container network isolation prevents that.
Refer to the Docker arguments section below for a walkthrough of the arguments used in these commands.
# Podman
podman run --rm -v $(pwd)/peat_results:/peat_results --network "host" -i --privileged ghcr.io/sandialabs/peat scan -b 192.0.2.255
# Docker
docker run --rm -v $(pwd)/peat_results:/peat_results --network "host" -i --privileged ghcr.io/sandialabs/peat scan -b 192.0.2.255
5.3.1.4. Examples¶
# Discover devices on a network using IP broadcasts
peat scan -b 192.0.2.0/24
# Broadcast using an interface
peat scan -b eth1
# Broadcast from a file
peat scan -b examples/broadcast_targets.txt
# Broadcast combinations
peat scan -b 192.0.3.0/24 192.168.2.255 192.0.2.0/25 eth1 examples/broadcast_targets.txt
# Pull all devices discovered using IP broadcasts
peat pull -b 192.0.2.0/24
5.4. Usage examples¶
5.4.1. Scan¶
# Scan a single host
peat scan -i 192.0.2.1
# Discover devices on a network (scan a subnet) using Unicast IP
peat scan -i 192.0.2.0/24
# Discover devices on a network using IP broadcasts
peat scan -b 192.0.2.0/24
# Search for M340 and ControlLogix devices
peat scan -d m340 controllogix -i 192.0.2.0/24
# Search for PLCs
peat scan -d plc -i 192.0.2.0/24
# Discover devices on multiple networks, with verbose output
peat scan -v -i 192.168.200.0/24 192.0.2.0/24
# Scan a specific range of IP addresses (192.0.2.200 - 192.0.2.205)
peat scan -i 192.0.2.200-205
# Multiple subnets, specific IP, and only SCEPTRE and SEL devices
peat scan -d sceptre sel -i 192.0.2.0/24 192.0.0.0/24 192.168.0.10
# Multiple ranges of host arguments
# This combination resolves to 755 unique IPs
peat scan -i 172.16-30.80-90.12-14 192.0.2.19-23 localhost 10.0.9.0/24
# Broadcast using an interface
peat scan -b eth1
# Broadcast from a file
peat scan -b examples/broadcast_targets.txt
# Broadcast combinations
peat scan -b 192.0.2.0/24 192.0.0.255 192.168.0.0/25 eth1 examples/broadcast_targets.txt
# Use the results of a previous scan
peat scan -f examples/example-scan-summary.json
# Same as above, but with scan results piped to standard input (stdin)
cat examples/example-scan-summary.json | peat scan -f -
# Use a text file with hosts to target separated by newlines
peat scan -i examples/target_hosts.txt
# Use a JSON file with hosts to target as a JSON array
peat scan -i examples/target_hosts.json
# Use a combination of files and host strings
peat scan -i examples/target_hosts.txt examples/target_hosts.json 172.16.3.0/24 10.0.0.1
# Pipe results of one scan to another (note the "-q/--quiet" argument)
peat scan -q --print-results -d clx -i 192.0.2.0/24 | peat scan -f -
# Another example of piping results of one scan to another
# Note that '-E' is shorthand for '--print-results'
peat scan -q -E --sweep -i 192.0.2.0/24 | peat scan -f -
# Assume the host is online and skip the online check (similar to nmap -Pn)
# NOTE: this significantly increases the scan time when scanning more than one host!
peat scan --assume-online -i 192.0.2.0/24
# Use previous results and skip online check
# Note: "-Y" is a shorthand for "--assume-online"
peat scan -Y -f examples/example-scan-summary.json
# Just find what hosts are online (similar to nmap -sn or -sS)
peat scan --sweep -i 192.0.2.0/24
# Upload results to an Elasticsearch server listening on localhost
peat scan -d selrelay -i 192.0.2.0/24 -e
# Send results to a Malcolm instance running on localhost
# Malcolm uses OpenSearch instead of Elasticsearch
peat scan -d selrelay -i 192.0.2.0/24 -e https://user:pass@localhost/mapi/opensearch
# Upload results to a remote Elasticsearch server
peat scan -d selrelay -i 192.0.2.0/24 -e http://192.0.2.20:9200
# Search for devices on serial ports 0 through 4 (COM0-4 on Windows or /dev/ttyS0-4 on Linux)
peat scan -s 0-4
# Scan for serial devices on /dev/ttyUSB0 and /dev/ttyS1
peat scan -s /dev/ttyUSB0 /dev/ttyS1
# Only use baud rate of 19200 when checking ports
peat scan -s 0-4 --baudrates 19200
# Enumerate active serial ports on a host
# On Windows, this would be COM0 - COM9
# On Linux, this would be /dev/ttyS0 - /dev/ttyS9 and /dev/ttyUSB0 - /dev/ttyUSB9
peat scan -s 0-9 --sweep
# Scan for SEL relays connected to serial devices on serial ports COM4 and COM6
peat scan -d selrelay -s COM4 COM6
# Same as above, but only attempt baud rate of 9600
peat scan -d selrelay -s COM4 COM6 --baudrates 9600
# Force identification checks of all ports during scanning, regardless of
# the status of the port and including closed ports. This takes significantly
# longer and generates much more traffic and load on devices. Only use if
# you aren't worried about potential performance impacts to field devices!
peat scan -d controllogix -i 192.0.2.0/24 --intensive-scan
# Name the run "scan_example". This will put results in ./peat_results/scan_example/
peat scan --run-name scan_example -d clx -i 192.0.2.0/24
# Use PEAT configuration settings from a YAML file (preferred method)
peat scan -d clx -i 192.0.2.0/24 -c peat-config.yaml
# List available modules
peat scan --list-modules
# List aliases
peat scan --list-aliases
# List mappings of aliases to PEAT module(s)
peat scan --list-alias-mappings
# List all modules, aliases, and alias to module mappings
peat scan --list-all
# Dry run, no scan will be executed
# Useful for verifying configuration options before pulling the
# metaphorical trigger on a scan.
peat scan --dry-run -d clx -i 192.0.2.0/24 -c peat-config.yaml
5.4.2. Pull¶
# Pull artifacts from a single device
peat pull -i 192.0.2.1
# Pull artifacts from all devices on a subnet
peat pull -i 192.0.2.0/24
# Pull all devices discovered using IP broadcasts
peat pull -b 192.0.2.0/24
# Pull from an AB ControlLogix PLC
peat pull -d controllogix -i 192.0.2.1
# Pull from all RTUs on a subnet
peat pull -d rtu -i 192.0.2.0/24
# Pull from a single M340 PLC, with a 1-second timeout
peat pull -d m340 -i 192.0.2.41 -T 1.0
# Pull from any M340 and ControlLogix PLCs with
# IPs in the range 192.0.2.1 to 192.0.2.5
peat pull -d m340 controllogix -i 192.0.2.1-5
# Pull from multiple subnets and a specific IP
peat pull -i 192.0.2.0/24 192.0.0.0/24 192.168.0.10
# Only output the results from the pull, no logs (-q is equivalent to --quiet)
peat pull -q --print-results -d m340 -i 192.0.2.1
# Pull from all M340 PLCs and upload results to a local Elasticsearch server
peat pull -d m340 -i 192.0.2.0/24 -e
# Send results to a Malcolm instance running on localhost
# Malcolm uses OpenSearch instead of Elasticsearch
peat pull -d m340 -i 192.0.2.0/24 -e https://user:pass@localhost/mapi/opensearch
# Upload pull results to a remote Elasticsearch server running on 192.0.0.33
# NOTE: utilize a PEAT YAML config file to further customize Elasticsearch settings (e.g. index names)
peat pull -d m340 -i 192.0.2.0/24 -e http://192.0.0.33:9200
# Pull logic and config from M340 and upload results to a local Elasticsearch server
peat pull -v -d m340 -i 192.0.2.0/24 -e
# Use the results of a previous scan, pull, or push
peat pull -f examples/example-scan-summary.json
cat examples/example-scan-summary.json | peat pull -f -
# Use a text file with hosts to target separated by newlines
peat pull -i examples/target_hosts.txt
# Assume the host is online and skip the online check (similar to nmap -Pn)
peat pull -d sceptre --assume-online -i 192.0.2.35
# "-Y" is a shorthand for "--assume-online"
peat pull -d sceptre -Y -i 192.0.2.35
# Use previous results and skip online check
peat pull --assume-online -f examples/example-scan-summary.json
# Pull from a Woodward 2301E on serial port 0 (COM0 on Windows or /dev/ttyS0 on Linux)
peat pull -d 2301e -s 0
# Colorize and format the results of a pull using 'jq'
# Note that '-E' is shorthand for '--print-results'
peat pull -q -E -d clx -i 192.0.2.0/24 | jq .
# Name the run "pull_example". This will put results in ./peat_results/pull_example/
peat pull --run-name pull_example -d clx -i 192.0.2.0/24
# YAML configuration file with PEAT settings
# This enables fine-grained configuration, including login credentials
peat pull -d clx -i 192.0.2.0/24 -c peat-config.yaml
# Dry run, no pull will be executed
# Useful for verifying configuration options before pulling the
# metaphorical trigger on a pull.
peat pull --dry-run -d clx -i 192.0.2.0/24
5.4.3. Parse¶
# Run on a saved Schneider M340 project file (aka "Station.apx" in Unity)
peat parse -d m340 ./project-file.apx
# Grab the first .apx file found in the directory
peat parse -d m340 ./folder/
# Parse a Schneider Unity project file on Windows (e.g. on an engineering workstation)
peat parse -d m340 'C:\Projects\Station.apx'
# Parse a SET_ALL.txt file from a SEL relay
peat parse -d selrelay ./SET_ALL.TXT
# Parse configuration from a SEL QuickSet database (*.rdb file)
peat parse -d sel breaker-1.rdb
peat parse -d sel ./*.rdb
# Multiple file path arguments. PEAT will automatically select the
# appropriate module to use based on the file names, in this case SELRelay.
peat parse ./set_all.txt ./751_001.rdb
# Parse piped input (Linux and MacOS)
cat ./SET_ALL.TXT | peat parse -d selrelay
# Parse input via file redirection
peat parse -d m340 < ./project-file.apx
# Piping in Windows PowerShell
# Note: Get-Content won't work with binary blobs
Get-Content .\set_all.txt | peat parse -d selrelay
# Process parse results using 'jq' to extract the IP address
peat parse -q --print-results -d m340 ./project-file.apx | jq '.["M340"][]["ip"]'
# Count number of events using 'jq'
# '-E' is shorthand for '--print-results'
peat pull -q -E -d selrtac -i 192.0.2.2 | jq '.event | length'
# Upload results to a Elasticsearch server running on localhost
peat parse -e -d selrelay ./SET_ALL.TXT
# Send results to a Malcolm instance running on localhost
# Malcolm uses OpenSearch instead of Elasticsearch
peat parse -d selrelay -e https://user:pass@localhost/mapi/opensearch ./SET_ALL.TXT
# Upload results to a remote Elasticsearch server at 192.0.2.5
peat parse -e http://192.0.2.5:9200 -d selrelay ./SET_ALL.TXT
# Parse out of a directory (NOTE: this recursively searches for files!)
peat parse -d m340 ./m340_files/
# Name the run "parse_example"
# This will put results in './peat_results/parse_example/'
peat parse --run-name parse_example -d selrelay ./SET_ALL.TXT
# YAML configuration file with PEAT settings
# This enables fine-grained configuration, including login credentials
peat parse -c peat-config.yaml -d sel ./SET_ALL.TXT
5.4.4. Push¶
# !!! NOTE !!!
# Due to a Python quirk, a '--' is required between optional
# arguments (such as device types or hosts) and the positional
# argument (the push filepath). Otherwise, it will error.
# Push firmware to an Allen-Bradley ControlLogix 1756 PLC
peat push -d controllogix -i 192.0.2.1 -- ./1756.011.dmk
# Push a single configuration file to a SEL relay
peat push -d selrelay -i 192.0.2.1 -- SET_1.TXT
# Push a directory containing configuration files to a SEL 451 Relay
peat push -d selrelay -i 192.0.2.1 -- ./SETTINGS/
# NOTE: currently, only a single file or directory can be specified for a push,
# multiple files cannot be specified. A workaround is to create a new directory,
# copy the config files to be pushed to the new directory, then specify that
# directory in the push command.
mkdir ./custom_configs/
cp ./SET_1.TXT ./SET_6.TXT ./custom_configs/
peat push -d selrelay -i 192.0.2.1 -- ./custom_configs/
# Update the config of all SEL relays on multiple subnets
peat push -d selrelay -i 192.0.2.0/24 192.0.0.0/24 -- ./SET_1.TXT
# Skip the scan and verification step before performing a push.
# This also implicitly skips the online check, implying '--assume-online'.
peat push --push-skip-scan -d selrelay -i 192.0.2.22 -- ./SET_1.TXT
# Use PEAT configuration settings from a YAML file
peat push -d selrelay -i 192.0.2.222 -c peat-config.yaml -- ./examples/devices
# Dry run, no push will be executed.
# Useful for verifying configuration options before pulling the
# metaphorical trigger on a push.
peat push --dry-run -d selrelay -i 192.0.2.21 -c peat-config.yaml -- ./examples/devices
5.5. Summaries¶
The output of several commands is in a structure known as a “summary”. This can be a JSON file, an Elasticsearch document, or Python dict representing the results of a given command (e.g. scan).
5.5.1. Scan summary¶
The scan summary represents the results of device discovery and verification, such as during a scan, pull, push, or other related network operations. Scan summaries are stored as JSON in the directory configured in the SUMMARIES_DIR configuration option (defaults to peat_results/summaries/), printed to the terminal (stdout) as JSON when running a scan using peat scan, or returned as a dict when calling peat.api.scan_api.scan().
| Field | Type | Example(s) | Description |
|---|---|---|---|
| peat_version | keyword | 2.0.1.20210930 | Version of PEAT that performed the scan, if applicable |
| peat_run_id | keyword | 162493555659 | ID of the PEAT run for this scan. This is the value of |
| scan_duration | double | 5.114307479001582 | Approximate time the scan took, in seconds |
| scan_modules | keyword | SELRelay | PEAT device modules used to perform the scan |
| scan_type | keyword | unicast_ip | Type of scan |
| scan_targets | keyword | 192.0.2.0/24 | Targets used by the scan. These are the actual targets used after PEAT has parsed them into usable targets and removed duplicates. |
| scan_original_targets | keyword |  | Original arguments used for the scan. These are the targets as provided by the user, prior to PEAT parsing them into usable targets and removing duplicates. |
| num_hosts_active | long | 1 | Number of hosts that responded, regardless of verification status |
| num_hosts_online | long | 1 | Number of hosts that responded but were not verified |
| num_hosts_verified | long | 1 | Number of hosts that were verified |
| hosts_online | ip | 192.0.2.36 | IDs of hosts that responded. Note: this does NOT include verified hosts! |
| hosts_verified | nested | {…} | Information from hosts that were successfully verified |
5.5.1.1. Example¶
{
"peat_version": "2.2.0.20221220",
"peat_run_id": "168201502118",
"scan_duration": 0.6260044369846582,
"scan_modules": [
"ION"
],
"scan_type": "unicast_ip",
"scan_targets": [
"192.0.2.55"
],
"scan_original_targets": [
"192.0.2.55"
],
"num_hosts_active": 1,
"num_hosts_online": 0,
"num_hosts_verified": 1,
"hosts_online": [],
"hosts_verified": [
{
"architecture": "ppc32",
"description": {
"brand": "PowerLogic ION",
"full": "Schneider Electric PowerLogic ION 8650",
"model": "8650",
"product": "PowerLogic ION 8650",
"vendor": {
"id": "Schneider",
"name": "Schneider Electric"
}
},
"id": "192.0.2.55",
"ip": "192.0.2.55",
"mac": "00:60:78:00:00:00",
"type": "Power Meter",
"os": {
"full": "KADAK AMX RTOS for Motorola PowerPC PPC32",
"name": "AMX",
"vendor": {
"id": "KADAK",
"name": "KADAK Products Ltd."
},
"version": "1.05a"
},
"interface": [
{
"type": "ethernet",
"mac": "00:60:78:00:00:00",
"ip": "192.0.2.55",
"services": [
{
"port": 23,
"protocol": "telnet",
"status": "open",
"transport": "tcp"
},
{
"port": 80,
"protocol": "http",
"status": "verified",
"transport": "tcp"
}
]
}
],
"service": [
{
"port": 23,
"protocol": "telnet",
"status": "open",
"transport": "tcp"
},
{
"port": 80,
"protocol": "http",
"status": "verified",
"transport": "tcp"
}
],
"related": {
"files": [
"index.html"
],
"ip": [
"192.0.2.55"
],
"mac": [
"00:60:78:00:00:00"
],
"ports": [
23,
80
],
"protocols": [
"http",
"telnet"
]
},
"peat_module": "ION"
}
]
}
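Since summaries are plain JSON, jq (introduced in Viewing the results) works well for quick checks against them. Two illustrative queries against a scan summary like the one above (the run name is a placeholder):
# How many hosts were verified?
jq '.num_hosts_verified' peat_results/<run-name>/summaries/scan-summary.json
# ID and module of each verified host
jq '.hosts_verified[] | {id, peat_module}' peat_results/<run-name>/summaries/scan-summary.json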
5.5.2. Pull summary¶
The pull summary is a summary of device pulls. Pull summaries are stored as JSON in the directory configured in the SUMMARIES_DIR configuration option (defaults to peat_results/summaries/) or returned as a dict when calling peat.api.pull_api.pull(). Only the results of the pull (the list of device data) are printed to the terminal (stdout) as JSON when running a pull via peat pull.
| Field | Type | Example(s) | Description |
|---|---|---|---|
| peat_version | keyword | 2.0.1.20210930 | Version of PEAT that performed the pull, if applicable |
| peat_run_id | keyword | 162493555659 | ID of the PEAT run for this pull. This is the value of |
| pull_duration | double | 5.114307479001582 | Approximate time the pull took, in seconds |
| pull_modules | keyword | SELRelay | PEAT device modules used to perform the pull |
| pull_targets | keyword | 192.0.2.0/24 | Targets used by the pull. These are the actual targets used after PEAT has parsed them into usable targets and removed duplicates. |
| pull_original_targets | keyword |  | Original arguments used for the pull. These are the targets as provided by the user, prior to PEAT parsing them into usable targets and removing duplicates. |
| pull_devices | keyword | 192.0.2.36 | Devices that were pulled from. Note that this may differ from |
| pull_comm_type | keyword | unicast_ip | Type of pull |
| num_pull_results | long | 1 | Number of hosts that were pulled from |
| pull_results | nested | {…} | Data of hosts that were pulled, with some fields excluded (such as binary blobs) |
5.5.2.1. Example¶
{
"peat_version": "2.0.1.20210930",
"peat_run_id": "164141725302",
"pull_duration": 14.823716692160815,
"pull_modules": [
"ION"
],
"pull_targets": [
"192.0.2.55"
],
"pull_original_targets": [
"192.0.2.55"
],
"pull_devices": [
"192.0.2.55"
],
"pull_comm_type": "unicast_ip",
"num_pull_results": 1,
"pull_results": [
{
"description": {
"brand": "PowerLogic ION",
"full": "Schneider Electric PowerLogic ION 8650",
"model": "8650",
"product": "PowerLogic ION 8650",
"vendor": {
"id": "Schneider",
"name": "Schneider Electric"
}
},
"firmware": {
"last_updated": "2018-05-23 16:19:59",
"version": "004.030.000"
},
"id": "192.0.2.55",
"ip": "192.0.2.55",
"mac": "00:60:78:00:00:00",
"type": "Power Meter",
"serial_number": "LW-1111A111-11",
"interface": [
{
"type": "ethernet",
"mac": "00:60:78:00:00:00",
"ip": "192.0.2.55",
"services": [
{
"port": 23,
"protocol": "telnet",
"status": "open",
"transport": "tcp"
},
{
"port": 80,
"protocol": "http",
"status": "verified",
"transport": "tcp"
}
]
}
],
"service": [
{
"port": 80,
"protocol": "http",
"status": "verified",
"transport": "tcp"
},
{
"port": 23,
"protocol": "telnet",
"status": "open",
"transport": "tcp"
}
],
"related": {
"ip": [
"192.0.2.55"
],
"mac": [
"00:60:78:00:00:00"
]
},
"extra": {
"note": "additional data removed from this example for conciseness"
}
}
]
}
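Likewise, jq can answer quick questions about a pull summary like the one above (the run name is a placeholder):
# Firmware version reported by each pulled device
jq '.pull_results[].firmware.version' peat_results/<run-name>/summaries/pull-summary.json
# Which devices were pulled from?
jq '.pull_devices' peat_results/<run-name>/summaries/pull-summary.json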
5.5.3. Parse summary¶
The parse summary represents the results of parsing device artifacts using peat parse. Parse summaries are stored as JSON in the directory configured in the SUMMARIES_DIR configuration option (defaults to peat_results/summaries/), printed to the terminal (stdout) as JSON when running a parse using peat parse, or returned as a dict when calling peat.api.parse_api.parse().
| Field | Type | Example(s) | Description |
|---|---|---|---|
| peat_version | keyword | 2.0.1.20210930 | Version of PEAT that performed the parse, if applicable |
| peat_run_id | keyword | 162493555659 | ID of the PEAT run for this parse. This is the value of |
| parse_duration | double | 5.114307479001582 | Approximate time the parse took, in seconds |
| parse_modules | keyword | SELRelay | PEAT device modules used to perform the parse |
| input_path | keyword | /devices/ | Original argument to the parse |
| files_parsed | keyword | /devices/set_all.txt | Files that were parsed |
| num_files_parsed | long | 1 | Count of the files that were parsed |
| num_parse_successes | long | 1 | Count of successful parses |
| num_parse_failures | long | 1 | Count of failed parses |
| parse_failures | nested | {…} | Results of failed parses |
| parse_results | nested | {…} | Results of successful parses |
5.5.3.1. Example¶
{
"peat_version": "2.2.0.20221220",
"peat_run_id": "168201430248",
"parse_duration": 0.012006660108454525,
"parse_modules": [
"AwesomeTool"
],
"input_paths": [
"./examples/example_peat_module/awesome_output.json"
],
"files_parsed": [
"/home/cegoes/peat/examples/example_peat_module/awesome_output.json"
],
"num_files_parsed": 1,
"num_parse_successes": 1,
"num_parse_failures": 0,
"parse_failures": [],
"parse_results": [
{
"name": "awesome_output.json",
"path": "/home/cegoes/peat/examples/example_peat_module/awesome_output.json",
"module": "AwesomeTool",
"results": {
"description": {
"vendor": {
"id": "ACME",
"name": "ACME, Inc."
}
},
"id": "192.0.2.20",
"ip": "192.0.2.20",
"name": "SomeDevice",
"type": "PLC",
"os": {
"full": "Ubuntu 19.10",
"name": "Ubuntu",
"version": "19.10"
},
"interface": [
{
"type": "ethernet",
"ip": "192.0.2.20",
"subnet_mask": "255.255.255.0",
"gateway": "192.0.2.1",
"services": [
{
"enabled": true,
"port": 21,
"protocol": "telnet",
"transport": "tcp"
}
]
}
],
"service": [
{
"enabled": true,
"port": 21,
"protocol": "telnet",
"transport": "tcp"
}
],
"related": {
"ip": [
"192.0.2.1",
"192.0.2.20"
],
"ports": [
21
],
"protocols": [
"telnet"
]
}
}
}
]
}
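A similar jq check against a parse summary (the run name is a placeholder) is handy for spotting failed parses in a large batch:
# Failure count and the list of files that were parsed
jq '{failures: .num_parse_failures, parsed: .files_parsed}' peat_results/<run-name>/summaries/parse-summary.json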
5.6. Containers (Docker/Podman)¶
Note
When using Podman on Red Hat Enterprise Linux (RHEL), replace docker with podman in commands. Podman is similar to Docker and has a nearly identical interface, and therefore most aspects of this guide are still applicable. However, there may be slight differences in lesser-used arguments as well as differences in behavior. Refer to the official Podman documentation for further details.
Note
This document was written with the assumption that Docker is running on Linux and is installed as directed by the official Docker documentation. Your environment will likely differ slightly and so there may be differences in output and commands (for example, filesystem paths or arguments used). Refer to the Docker documentation for your platform for further details.
Warning
The sudo command is required before all docker ... commands unless you have configured the docker group as directed by the Docker Linux setup guide. It is omitted from the commands in this guide for brevity and because it’s a common configuration.
5.6.1. Docker arguments¶
Note
Take note of the arguments to docker run when reading the examples. All command line arguments after “docker run” and before the image name (“ghcr.io/sandialabs/peat”) are arguments to Docker, and any after the image name are arguments to PEAT.
Warning
Results will NOT be saved unless the output directory is mounted in the container! Ensure -v $(pwd)/peat_results:/peat_results is always included in the arguments to docker run.
Docker arguments of note:
- --network "host": removes Docker’s network isolation and provides PEAT access to the local network interfaces. If missing, scans will be less reliable, MAC addresses of devices will not be resolved, broadcast scanning will not work, and PEAT will not be able to push results to an Elasticsearch server listening on localhost.
- -i: “interactive”, which enables STDIN and is necessary for the PEAT CLI.
- -v /local/system/path:/path/in/container/: makes a local filesystem directory available inside the container.
- --privileged: gives PEAT full root access to the local system, which can be helpful for certain scans.
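Tying these together (and per the note above, everything before the image name is a Docker argument; everything after it is a PEAT argument):
# Docker arguments come first, then the image name, then PEAT arguments
docker run --rm --network "host" -i -v $(pwd)/peat_results:/peat_results ghcr.io/sandialabs/peat scan -i 192.0.2.0/24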
5.6.2. Container usage¶
This is the standard PEAT command line interface, bundled up as a container.
Note
If PEAT is run as a container, the file paths logged in the container will differ from those on the host. This affects logging messages and anywhere else paths are noted or logged.
Warning
Currently, PEAT Pillage when used as a container WILL NOT work with Windows disk images, and MAY NOT work reliably with Linux disk images. File systems WILL work if used with a volume mount, e.g. using -v with docker run (see the examples below for details). In the meantime, if you need Pillage functionality we recommend using the Linux or Windows executable version of PEAT instead of the container.
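The warning above notes that volume-mounted file systems do work with Pillage in a container. A minimal sketch of pillaging a local directory that way (directory and config file paths are illustrative):
# Mount the directory to pillage and the config file into the container
docker run --rm --privileged -i -v "$(pwd)/pillage_this":"/pillage_this" -v "$(pwd)/peat-config.yaml":"/peat-config.yaml" -v $(pwd)/peat_results:/peat_results ghcr.io/sandialabs/peat pillage -c /peat-config.yaml -P /pillage_this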
5.6.2.1. View the command line help¶
# Podman
podman run --rm -i ghcr.io/sandialabs/peat --help
podman run --rm -i ghcr.io/sandialabs/peat scan --help
# Docker
docker run --rm -i ghcr.io/sandialabs/peat --help
docker run --rm -i ghcr.io/sandialabs/peat scan --help
5.6.2.2. Parsing files and directories¶
Warning
File paths on the host system (the system running docker) cannot be used directly due to Docker’s filesystem isolation. Instead, pipe the file in and use the -i argument to docker run (for a single file), or mount a volume into the container (for multiple files or a directory). Examples of both are below.
To parse a single file, use a pipe (|), or use redirection to STDIN (<)
# Using cat
cat examples/devices/sel/sel_351/set_all.txt | docker run --rm -i ghcr.io/sandialabs/peat parse -d selrelay
# Using file redirection
docker run --rm -i ghcr.io/sandialabs/peat parse -d selrelay < examples/devices/sel/sel_351/set_all.txt
To parse data from a directory, mount it as a volume
# General usage. "/dirname" is the name of the directory you want to parse.
docker run --rm -v "$(pwd)/dirname":"/dirname" -v $(pwd)/peat_results:/peat_results -i ghcr.io/sandialabs/peat parse -v -d ion -- "/dirname"
# Push the parse results to an Elasticsearch server listening on localhost
docker run --rm -v $(pwd)/peat_results:/peat_results --network "host" -i ghcr.io/sandialabs/peat parse -e -v -d selrelay -- "/peat_results/*/devices/"
# Another concrete example of parsing a directory. Note the absolute path to /examples.
docker run --rm -v "$(pwd)/examples":"/examples" -v $(pwd)/peat_results:/peat_results --network "host" -i --privileged ghcr.io/sandialabs/peat parse -VV -e http://localhost:9200 -d selrelay /examples/devices/sel/*/*.rdb
5.6.2.3. Pulling data from devices¶
Running a basic pull
docker run --rm -v $(pwd)/peat_results:/peat_results --network "host" -i ghcr.io/sandialabs/peat pull -i 192.0.2.0/24
To improve scanning capabilities, run as root using --privileged. This requires root privileges on the host system running Docker.
docker run --rm --privileged -v $(pwd)/peat_results:/peat_results --network "host" -i ghcr.io/sandialabs/peat pull -i 192.0.2.0/24
Pull from a SEL relay, and export the results to Elasticsearch
docker run --rm -v $(pwd)/peat_results:/peat_results --network "host" -i ghcr.io/sandialabs/peat pull -vV -e -d selrelay -i 192.0.2.22
Pull from three relays on two independent networks, and export the results to Elasticsearch
docker run --rm -v $(pwd)/peat_results:/peat_results --network "host" -i ghcr.io/sandialabs/peat pull -vV -e -d selrelay -i 192.0.3.44-55 192.0.2.22-33
5.6.2.4. Pushing data to devices¶
Pushing a set of configuration files to a SEL relay
docker run --rm -v $(pwd)/peat_results:/peat_results --network "host" -i ghcr.io/sandialabs/peat push -vV -d selrelay -i 192.0.2.22 -- "/relay_configs/"
5.6.2.5. Development and Debugging¶
Testing inside of the container and saving the changes
docker run --name "peat_dev" -v $(pwd)/peat_results:/peat_results -i -t --entrypoint "/bin/sh" ghcr.io/sandialabs/peat
docker commit peat_dev
Attach to an existing container (including a running one)
docker ps -a
docker exec -it <container-name> "/bin/sh"
5.6.3. General Docker usage and reference¶
Note
Images and Containers are distinct terms that are easy to confuse. Images refer to the “image” that is built (e.g. PEAT) and used to create containers, created using docker build. Containers are instances, created when using docker run.
5.6.3.1. Images¶
# Load an image
docker load -i image.tar
# List installed images
docker images
docker images -a
# Delete an image
docker rmi <image-id>
# Cleanup residual images and layers (e.g. leftover from builds)
docker image prune
5.6.3.2. Logs¶
docker logs -f <container>
docker logs --since 4h <container>
docker logs <container> 2>&1 | head -n 10 # Container that writes to stderr
# Monitor status of containers and view images using "lazydocker"
# ("q" to quit, bottom of screen has usage)
# Install from: https://github.com/jesseduffield/lazydocker
lazydocker
5.6.3.3. Containers¶
# View RUNNING containers
docker ps
# View RUNNING and STOPPED containers
docker ps -a
# Delete a container
docker rm -f <container>
# Delete all STOPPED containers
docker container prune
# Cleanup images, containers, networks, and volumes
docker system prune
5.6.3.4. Further reading¶
5.7. Pillage¶
5.7.1. Description¶
Pillage is a sub-command of PEAT that searches for relevant ICS/OT project files to import into PEAT for further analysis and comparison to project files retrieved elsewhere. It can search through a specific directory on the host system, a directory that is connected/mounted to the host system, or a raw disk image for possible files.
The search criteria defined in the configuration file are used to determine whether a file should be considered valid for copying.
When a valid file is found, it will be copied into a ./pillage_results/ directory located in the current working directory of PEAT. The valid files are sorted into sub-directories based on the brands (plus DEFAULT) defined in the pillage configuration file. If a file fits multiple brands, it will be copied into a MULTIPLE sub-directory for the user to determine which specific brand it belongs to. For more details regarding which brands apply to a file and why it was copied, review the PEAT logs after the run.
Before a file is copied into a results sub-directory, Pillage checks whether a file with the same name already exists. If it does, the new file is copied to the results sub-directory and renamed with an integer added. For example, if set_all.txt is found but a file with that same name already exists in ./pillage_results/SEL/, then the new file will be renamed to set_all.1.txt.
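With a SEL brand defined in the pillage configuration, the resulting layout might look like this (an illustrative sketch; directory names depend on your configured brands):
./pillage_results/
    SEL/
        set_all.txt
        set_all.1.txt
    MULTIPLE/
        ...
    DEFAULT/
        ...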
Refer to Pillage config reference for the available options and examples.
5.7.2. Requirements¶
- Must be run on a Linux host system (currently only tested on Ubuntu 18+)
- Must be run as root. This is specifically needed if mounting an image.
- qemu-nbd, part of the qemu-utils package. Installation: sudo apt install qemu-utils
- The kmodpy Python package. This should be automatically installed with PEAT.
- If pillaging a raw disk image, the host system must support the filesystem. The filesystems supported by the host can be found by:
  - Opening the file /proc/filesystems on the host system
  - Running ls -1 /lib/modules/$(uname -r)/kernel/fs on the host system
5.7.3. Running Pillage¶
5.7.3.1. Example commands¶
Pillage files from a raw disk image
peat pillage -c examples/peat-config.yaml -P raw_disk.img
Pillage files from VM images
peat pillage -c examples/peat-config.yaml -P eng_vm.qcow2
peat pillage -c examples/peat-config.yaml -P SomeVM.vmdk
Pillage files from a mounted drive or local directory
peat pillage -c examples/peat-config.yaml -P /home/peat/pillage_this
Results can also be pushed to Elasticsearch
peat pillage -e http://192.0.2.21:9200 -c examples/peat-config.yaml -P /home/peat/pillage_this
5.7.3.2. Command line arguments¶
- -c: The PEAT configuration file to use. Refer to Pillage config reference.
- -P: The source image or directory to search
5.7.3.3. Examples¶
# See the "Pillage" section in the PEAT documentation
# for a detailed explanation of the PILLAGE config,
# or refer to the example PEAT config YAML file.
# NOTE: the pillage output and extracted files
# are copied to the "pillage_results/" directory.
# Pillage files from a raw disk image
peat pillage -c peat-config.json -P raw_disk.img
# Pillage files from a mounted drive or local directory
peat pillage -c peat-config.json -P /home/user/pillage_this
# Pillage files from a VMDK image and upload results to a local Elasticsearch server
peat pillage -c peat-config.json -P raw_disk.vmdk -e
# Pillage files from a qcow2 image and upload results to a remote Elasticsearch server
peat pillage -c peat-config.json -P raw_disk.qcow2 -e http://192.0.2.33:9200/
5.7.4. When things go wrong¶
If there is a critical failure and PEAT is unable to clean up, run the following commands to clean up pillage_temp:
sudo umount pillage_temp
sudo rm -rf pillage_temp
sudo qemu-nbd -d /dev/nbd1
sudo rmmod nbd
5.7.5. Notes¶
- If a disk image is used as the source it must be a raw disk image; Pillage does not support any other image formats. To use an unsupported image with Pillage, either convert it to a raw disk image or mount it to the host filesystem manually and provide the mount point as the input source to Pillage.
- If a disk image is used as the source, Pillage will mount it (read-only) to a directory named pillage_temp prior to searching. This directory will be located in the current working directory when Pillage is run. Once Pillage completes, the image will be unmounted and the directory removed.
- There have been times during development and testing when the host OS would not mount the image, but if the same Pillage command was tried again after waiting a few seconds, it would mount just fine.
If running in a VMware Workstation VM, pillage can run on a disk image or file system loaded in a shared folder. See the VMware documentation for details on how to set this up. If you run into issues, this askubuntu answer may be helpful: How do I mount shared folders in Ubuntu using VMware tools?
To manually mount an image in the same manner as pillage:
sudo modprobe nbd
sudo mkdir /mnt/myimage
sudo qemu-nbd -r -c /dev/nbd1 /path/to/disk/image.vmdk
sudo mount -o ro /dev/nbd1p1 /mnt/myimage
# Cleanup
sudo umount /mnt/myimage
sudo qemu-nbd -d /dev/nbd1
sudo rmmod nbd
5.8. HEAT: High-fidelity Extraction of Artifacts from Traffic¶
HEAT reconstructs artifacts (device files, e.g. configuration, logic, firmware, logs, etc.) from data in a network traffic capture and parses those artifacts using PEAT. Examples of data extracted include process logic, register mappings, protocol and network service configurations, I/O points, device types and roles, vendor and model, and more.
5.8.1. Protocols supported¶
These can be listed by running peat heat --list-heat-protocols.
5.8.2. Usage¶
Network traffic must either be parsed by ingest-tshark and available in Elasticsearch, or be in a PCAP file (.pcap / .pcapng), depending on the HEAT protocol plugin. For example, the FTP Extractor uses Zeek to process PCAP files directly.
5.8.2.1. HEAT FTP Extractor¶
The FTPExtractor plugin for HEAT uses the Zeek network monitoring tool to parse PCAP files. A .pcap file must be present locally to use this plugin. The location of the PCAP file can be specified using the --pcaps argument when calling HEAT.
It’s strongly recommended to use the Docker container version of PEAT, as it bundles the correct version of Zeek (6.0) and its dependencies. If you are unable to use the container, ensure Zeek 6.0 is installed on your host and available on the system PATH (or in /opt/zeek/).
# This example will process all PCAP files in "./pcaps", and save the results to "./peat_results"
docker run --rm -i --network host -v "$(pwd)/pcaps":/pcaps -v $(pwd)/peat_results:/peat_results ghcr.io/sandialabs/peat:latest heat -vVV -e http://heat-elastic:9200 --pcaps /pcaps --heat-file-only --heat-protocols FTPExtractor
5.8.2.2. Examples¶
# HEAT: High-fidelity Extraction of Artifacts from Traffic
# List protocols available for use with HEAT
peat heat --list-heat-protocols
# Process packet data from the 'heat-elastic' Elasticsearch server and
# store the results in 'results-elastic' Elasticsearch server.
# NOTE: if '--heat-elastic-server' isn't specified then the value
# of '-e'/'--elastic-server' is used instead.
peat heat -e http://results-elastic:9200/ --heat-elastic-server http://heat-elastic:9200/
# Limit data to only Elasticsearch indices beginning with "packetbeat-2017."
# NOTE: '-e' with no argument defaults to 'http://localhost:9200/'
peat heat -e --heat-index-names "packetbeat-2017.*"
# Only output the files that were extracted and exit.
# The results will not be parsed by PEAT and will not be stored in Elasticsearch.
# These files will be in ./peat_results/<run-dir>/heat_artifacts/ (by default).
# This location is configurable using HEAT_ARTIFACTS_DIR or --heat-artifacts-dir.
peat heat -e --heat-file-only
# Exclude any results with an IP address of 192.0.2.10 or 192.0.2.20
# as the source or destination. Subnet ranges can also be used here.
peat heat -e --heat-exclude-ips 192.0.2.10 192.0.2.20
# Exclude any results from the subnet 192.0.2.0/24 (192.0.2.1 - 192.0.2.254)
peat heat -e --heat-exclude-ips 192.0.2.0/24
# Only include results with an IP from the subnet 192.0.2.0/24
# as the source or destination.
peat heat -e --heat-only-ips 192.0.2.0/24
# Limit search to a specific time range
peat heat -e --heat-date-range "2021-07-15T00:00:00.000 - 2021-07-16T12:34:12.143"
# Use PEAT configuration settings from a YAML file
peat heat -e -c peat-config.yaml
5.9. Elasticsearch¶
Note
Refer to PEAT Elasticsearch indices reference for a table of the Elasticsearch indices used by PEAT
5.9.1. Introduction¶
PEAT has the ability to push artifacts from runs to an Elasticsearch server, such as scan results, logs, and device configurations. It uses multiple Elasticsearch indices to store data. Indices are described in detail here: Database Schema
PEAT data is not saved to Elasticsearch by default. To do so, use the -e command line argument or the ELASTIC_SERVER configuration option and specify the server to export data to. Examples of usage can be found in the command line examples earlier in this chapter.
5.9.2. Configuration and notes¶
- Binary blobs or large data fields (e.g. firmware images or raw configuration files) are NOT saved to Elasticsearch by default! To enable saving of large data, use the --elastic-save-blobs command line argument or the ELASTIC_SAVE_BLOBS configuration option.
- Indices are “split” by date, so a new index is created for each day. Format: <index-name>-<year>.<month>.<day>. Timestamps are in the UTC timezone, not the host’s timezone. Example: ot-device-hosts-timeseries-2023.04.21 for all host data collected on April 21st, 2023. This behavior can be disabled by setting ELASTIC_DISABLE_DATED_INDICES to true or by setting the PEAT_ELASTIC_DISABLE_DATED_INDICES environment variable to true (see the sketch after this list). This results in only the base names of indices being used and no timestamped indices being created, e.g. all host data will be written to the index named ot-device-hosts-timeseries instead of ot-device-hosts-timeseries-2022.04.29 and so on.
- PEAT’s logging events and dumps of its configuration and state are stored in Elasticsearch by default. This behavior can be disabled via the following configuration options: ELASTIC_SAVE_LOGS, ELASTIC_SAVE_CONFIG, and ELASTIC_SAVE_STATE.
- The timeout for PEAT to connect to the Elasticsearch server can be configured via the ELASTIC_TIMEOUT configuration option or the --elastic-timeout command line argument.
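For example, dated indices can be disabled via the environment variable named above (a sketch; the subnet is illustrative):
# All documents are written to base index names, e.g. "ot-device-hosts-timeseries"
PEAT_ELASTIC_DISABLE_DATED_INDICES=true peat pull -i 192.0.2.0/24 -e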
5.9.3. JSON file copies of Elasticsearch exports¶
Most data sent to Elasticsearch by PEAT is also saved locally as JSON files. These files can be used to rebuild the indices in case of an issue with the server, or if you want to import the data into another server and have lost access to the server the data was originally exported to. By default, these files are saved to peat_results/the_run_name/elastic_data/. This location can be configured via the ELASTIC_DIR configuration option in a config file or the PEAT_ELASTIC_DIR environment variable.
5.10. Third-party device modules¶
PEAT uses a modular architecture: the functionality for a particular device (for example, a SEL relay) is bundled up as a semi-standalone “PEAT device module” (in this case, peat.modules.sel.sel_relay.SELRelay). While PEAT includes a large selection of modules, additional modules can be imported and used at runtime, with no changes to PEAT’s code. These modules are generally referred to as “PEAT device modules”, “PEAT modules”, “third-party modules”, or “runtime modules”. Use cases for runtime-loaded modules include modules that can’t be open-sourced due to sensitivities and modules implemented by a user (like you!).
Implementing a module to support a new device is simple and only requires a text editor and ability to write Python code. Refer to the Module developer guide for details on implementing a module.
5.10.1. Usage example¶
Simple example performing a peat parse using the AwesomeTool PEAT device module, which parses the output of the fictional awesome-tool:
# "-d AwesomeTool" : specify what PEAT module to use, in this case the "AwesomeTool" module you created
# "-I awesome_module.py" : import the middleware module so it's usable by PEAT
# "-- awesome_output.json" : the file to parse, in this case the output of running "awesome-tool"
peat parse -d AwesomeTool -I ./examples/example_peat_module/awesome_module.py -- ./examples/example_peat_module/awesome_output.json
5.11. Troubleshooting¶
5.11.1. Getting troubleshooting data (logs)¶
Run PEAT with the “verbose” flag (-v) to see more events on the terminal. These events are always written to the log file in peat_results/the_run_name/logs/, as well as to Elasticsearch if it’s configured.
Additional information can be generated by enabling “Debugging” mode. There are multiple debugging levels ranging from 1 to 4. These can be set via command line arguments, e.g. -V for level 1 or -VVV for level 3, or by setting the DEBUG configuration option to the desired debugging level.
peat pull -VVV -R example_tshoot -d selrelay -i 192.0.2.0/24
# View the log files generated
ls -lAht peat_results/example_tshoot/logs/
# View the log file with the "less" command
less peat_results/example_tshoot/logs/peat.log
5.11.2. Log files¶
Note
Much of the data listed below is also stored in Elasticsearch, if configured. Refer to PEAT Elasticsearch indices reference.
| Name | Description | Default file path |
|---|---|---|
| Logs | Primary log file for PEAT. Human-readable text file that contains most logging events generated by PEAT, as well as some metadata generated at startup. | <run-dir>/logs/peat.log |
| Configs | Configuration dump, in YAML format. This contains the configuration PEAT used for the run. | <run-dir>/peat_metadata/peat_configuration.yaml |
| State | State dump, in JSON format. This contains the internal state of PEAT as of the end of the run. | <run-dir>/peat_metadata/peat_state.json |
| JSON logs | PEAT logs, in JSON format. Each line in the log file is a JSON-formatted log record, following the | <run-dir>/logs/json-log.jsonl |
| Elasticsearch logs | Logging events generated by PEAT’s Elasticsearch module, in a human-readable text format. Useful if you’re troubleshooting issues with Elasticsearch. | <run-dir>/logs/elasticsearch.log |
| Telnet logs | Raw Telnet protocol events, in a human-readable text format. Useful for troubleshooting issues with PEAT modules that use Telnet. | <run-dir>/logs/telnet.log |
5.11.3. Troubleshooting issues with Elasticsearch¶
Logging events from PEAT’s Elasticsearch internals are NOT written to the normal places PEAT logs are saved. Instead, they’re written to a special log file named elasticsearch.log, which by default is located in peat_results/run_name/logs/.
Common issues with Elasticsearch include bad type mappings. If this occurs, delete the index, then re-attempt the push. Ensure any important data is saved BEFORE deleting the index! Data can be exported using elasticsearch-dump. To delete an index, use curl with the -XDELETE option: curl -XDELETE localhost:9200/ot-device-hosts-*.
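A sketch of that recovery flow, assuming the elasticdump CLI from the elasticsearch-dump project and a server on localhost (index names are illustrative):
# 1. Export the index data before deleting anything
elasticdump --input=http://localhost:9200/ot-device-hosts-timeseries --output=hosts-backup.json
# 2. Delete the problematic indices
curl -XDELETE localhost:9200/ot-device-hosts-*
# 3. Re-run the PEAT command that performed the original export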
5.12. Limitations¶
General limitations of PEAT that aren’t bugs. Refer to Known Issues for a list of known issues with PEAT (bugs).
MAC addresses of devices will not be resolved during a scan or pull if the device is behind a router or gateway (e.g. in a different subnet than the device performing the scan).
The ability to check host online status using ARP or ICMP requests requires root (Linux) or Administrator (Windows) permissions on the host running PEAT. If PEAT is unable to use these protocols, it falls back to using TCP SYN requests. These requests are less reliable and may be blocked by firewalls.
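In practice, run PEAT elevated when possible. A minimal sketch on Linux:
# With root, ARP/ICMP online checks are available
sudo peat scan -i 192.0.2.0/24
# Without root, PEAT falls back to TCP SYN-based online checks
peat scan -i 192.0.2.0/24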