FortiSandbox
Article Id 196703
Description
This article describes what an HA cluster is, and how to configure and verify an HA cluster on FortiSandbox.

Scope
FortiSandbox 2.4, 2.5


Solution

To handle the scanning of a high number of files concurrently, multiple FortiSandbox devices can be used together in a load-balancing high availability (HA) cluster.

Roles
There are three types of nodes in a cluster: Master, Primary Slave, and Regular Slave.

Master node roles:
  -  manages the HA-Cluster,
  -  distributes jobs and gathers the results,
  -  interacts with clients/admin,
  -  can also perform normal file scans.
All scan-related configuration should be done on the Master node; it is then broadcast from the Master to the other nodes. Any scan-related configuration that has been set on a slave will be overwritten.

It is advised to use a FortiSandbox 3000D or higher model for the Master and Primary Slave roles.

Primary Slave node roles:
  -  HA support
  -  normal file scans
It monitors the master's condition and, if the master node fails, the primary slave will assume the role of master. The former master will then become a primary slave.

The Primary Slave node must be the same model as the Master node (so, per the advice above, a 3000D or higher model).

Slave node roles:
  -  perform normal file scans and report results back to the Master and Primary Slave,
  -  they can also store detailed job information.
Each Slave node keeps its own network settings and VM image settings.

Slave nodes in a cluster do not need to be the same model.

Requirements and Failover description

Requirements to configure a HA Cluster

  -  the scan environment on all cluster nodes should be the same (for example, the same set of Windows VMs should be installed on all nodes so that the same scan profile can be used),
  -  port3 on all nodes should be connected to the Internet separately,
  -  all nodes should be on the same firmware build (a quick check is shown after this list),
  -  each node should have a dedicated network port for internal cluster communication (heartbeat port).

Internal cluster communication includes:
  a) job dispatch
  b) job result reply
  c) setting synchronization
  d) cluster topology broadcasting
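
Since all nodes must run the same firmware build, verify it on each unit before forming the cluster. A minimal check, assuming the FortiSandbox 'status' CLI command (the fields and values below are illustrative placeholders only):

> status
Version: v2.4.x,buildNNNN (GA)
Serial-Number: FSA-VM0000000123

If the builds differ, upgrade all nodes to the same firmware first.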

Failover Description
The Master node and Primary Slave nodes send heartbeats to each other to detect whether their peers are alive. If something goes wrong (such as a Master reboot or a network issue), failover is triggered in one of two possible ways:

  -  Objective node available:
     The objective node is a Slave (either Primary or Regular) that can confirm the new Master. After a Primary Slave node takes over the Master role, and the new role is accepted by the objective node, the original Master node will accept the decision when it is back online.
     After the original Master is back online, it becomes a Primary Slave node.
  -  No objective node available:
     This occurs when the cluster's internal communication is down. For example, if the internal cluster communication is down due to a failed switch, all Slave nodes become Masters (more than one Master unit).
     When the network is back online, the unit with the largest serial number keeps the Master role and the other unit(s) return to the Primary Slave role.

When the new Master is decided, it will:
  -  restart the main controller to rebuild the scan environment,
  -  apply all the settings synchronized from the original Master, except the port3 IP and the internal cluster IP of the original Master.

When the original Master becomes the Primary Slave node, it will:
  -  keep its original port3 IP and internal cluster communication IP,
  -  shut down all other interface ports.
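
Once a failover has completed, the current roles can be verified with the hc-status command described in the next section. A hypothetical listing after the Master role has moved to the Primary Slave, reusing the example units configured later in this article (exact output may vary by firmware build):

> hc-status -l
Status of master and primary slave units in cluster: TT
--------------------------------------------------------------------------------
SN                   Type            Name                 IP                   Active
FSA-VM0000000456     Master          FSA2                 10.139.11.113        1 second(s) ago
FSA-VM0000000123     Primary Slave   FSA1                 10.139.9.40          3 second(s) ago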

HA-Cluster on CLI and GUI

Main HA-Cluster CLI Commands
HA-Cluster configuration can be done on the CLI only; the GUI is used to monitor the cluster.

hc-settings   Configure the unit as an HA-Cluster unit and set the cluster fail-over IP.
hc-status     List the status of the HA-Cluster units.
hc-slave      Add, update, or remove a slave unit to or from the HA-Cluster.
hc-master     Turn the file scan on the Master node on/off and adjust the Master's scan power.
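
The -l option lists the current state of each of these subsystems; it is used repeatedly in the walk-through below:

> hc-settings -l (show this unit's cluster role, name, cluster name, and heartbeat interface)
> hc-status -l (list the cluster units and when each was last seen)
> hc-master -l (show whether file scan is enabled on the Master and its processing capacity)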

Main HA-Cluster GUI on Master
Go to HA-Cluster > Status to check all the nodes with their S/N, type (role), name, IP (internal heartbeat port), and status (active or inactive).
Go to HA-Cluster > Job Summary to see job statistics for each node, by S/N, with Pending, Malicious, Suspicious, Clean, and Other states.
Go to HA-Cluster > Health Check to set up a ping server that verifies the network between client devices and the FortiSandbox is always up. If it is not, failover will be triggered.
Go to HA-Cluster > [Serial Number] to navigate to the GUI of a Primary Slave or Regular Slave from the Master.

Example configuration
This example shows the steps for setting up an HA cluster using two FortiSandbox 3000E units and one FortiSandbox VM.

A minimum of 3 subnets is needed:
  -  on port1, set management access and make sure the unit can reach FDN for license checks and FortiGuard updates (10.5.16.0/20),
  -  on port2, set the internal cluster communication subnet; port2 is the heartbeat port (10.139.0.0/20),
  -  on port3, set the outgoing port on each unit (10.138.0.0/20).
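
For reference, this is the address plan used in the steps below (all values are taken from the example configuration that follows):

Unit   Role            port1 (management)   port2 (heartbeat)   port3 (outgoing)
FSA1   Master          10.5.25.40/20        10.139.9.40/20      10.138.9.40/20
FSA2   Primary Slave   10.5.27.113/20       10.139.11.113/20    10.138.11.113/20
FSA3   Regular Slave   10.5.27.160/20       10.139.11.160/20    10.138.11.160/20

All units use the default gateway 10.5.31.254, and the cluster fail-over IP 10.5.25.41/20 is set on port1 of the Master.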

Master configuration
IP ports configuration:
> set port1-ip 10.5.25.40/20
> set default-gw 10.5.31.254
> set port2-ip 10.139.9.40/20
> set port3-ip 10.138.9.40/20


IP ports verification:
> show
Configured parameters:
Port 1  IPv4 IP: 10.5.25.40/20  MAC: 00:62:6F:73:28:01
Port 2  IPv4 IP: 10.139.9.40/20         MAC: 00:62:6F:73:28:02
Port 3  IPv4 IP: 10.138.9.40/20         MAC: 00:62:6F:73:28:03
Port 4  IPv4 IP: 192.168.3.99/24        MAC: 00:62:6F:73:28:04
Port 5  IPv4 IP: 192.168.4.99/24        MAC: 00:62:6F:73:28:05
Port 6  IPv4 IP: 192.168.5.99/24        MAC: 00:62:6F:73:28:06
IPv4 Default Gateway: 10.5.31.254

HC-setting configuration:
> hc-settings -sc -tM -nFSA1 -cTT -pfortinet -iport2 (-sc sets the cluster role, -tM defines the device role (Master), -n the device name, -c the cluster name, -p the cluster password, -i the heartbeat port)
The unit was successfully configured.

> hc-settings -si -iport1 -a10.5.25.41/20 (-si sets the cluster external fail-over IP, -i the interface it is assigned to, -a the external cluster IP)

HC-setting verification:
> hc-settings -l
SN: FSA-VM0000000123
Type: Master
Name: FSA1
HC-Name: TT
Authentication Code: fortinet
Interface: port2


Cluster Interfaces:
        port1: 10.5.25.41/255.255.240.0
> hc-master -l
File scan is enabled with 50 processing capacity


Primary Slave configuration
IP ports configuration:
> set port1-ip 10.5.27.113/20
> set default-gw 10.5.31.254
> set port2-ip 10.139.11.113/20
> set port3-ip 10.138.11.113/20


IP ports verification:

> show
Configured parameters:
Port 1  IPv4 IP: 10.5.27.113/20         MAC: 00:71:75:61:0D:01
Port 2  IPv4 IP: 10.139.11.113/20       MAC: 00:71:75:61:0D:02
Port 3  IPv4 IP: 10.138.11.113/20       MAC: 00:71:75:61:0D:03
Port 4  IPv4 IP: 192.168.3.99/24        MAC: 00:71:75:61:0D:04
Port 5  IPv4 IP: 192.168.4.99/24        MAC: 00:71:75:61:0D:05
Port 6  IPv4 IP: 192.168.5.99/24        MAC: 00:71:75:61:0D:06
IPv4 Default Gateway: 10.5.31.254


HC-setting configuration:
> hc-settings -sc -tP -nFSA2 -iport2 (-sc sets the cluster role, -tP defines the device role (Primary Slave), -n the device name, -i the heartbeat port)
The unit was successfully configured.

Warning:
        Primary slave unit may take over the master role of the cluster if the original master is down, you have to make sure it has the same network environment settings as master unit.
For example:
         *) configure same subnet for port1 on master and primary slaves
         *) configure same subnet for port3 on master and primary slaves
         *) configure route table on master and primary slaves

> hc-slave -a -s10.139.9.40 -pfortinet (-a adds the unit into the cluster, -s defines the Master's heartbeat port IP, -p the cluster password)
The unit was successfully configured


HC-setting verification:
> hc-settings -l
SN: FSA-VM0000000456
Type: Primary Slave
Name: FSA2
Interface: port2
> hc-status -l
Status of master and primary slave units in cluster: TT
--------------------------------------------------------------------------------
SN                   Type            Name                 IP                   Active
FSA-VM0000000123     Master          FSA1                 10.139.9.40          1 second(s) ago
FSA-VM0000000456     Primary Slave   FSA2                 10.139.11.113        1 second(s) ago


Regular Slave configuration

IP ports configuration:

> set port1-ip 10.5.27.160/20
> set default-gw 10.5.31.254
> set port2-ip 10.139.11.160/20
> set port3-ip 10.138.11.160/20


IP ports verification:
> show
Configured parameters:
Port 1  IPv4 IP: 10.5.27.160/20         MAC: 00:71:75:61:3C:01
Port 2  IPv4 IP: 10.139.11.160/20       MAC: 00:71:75:61:3C:02
Port 3  IPv4 IP: 10.138.11.160/20       MAC: 00:71:75:61:3C:03
Port 4  IPv4 IP: 192.168.3.99/24        MAC: 00:71:75:61:3C:04
Port 5  IPv4 IP: 192.168.4.99/24        MAC: 00:71:75:61:3C:05
Port 6  IPv4 IP: 192.168.5.99/24        MAC: 00:71:75:61:3C:06
IPv4 Default Gateway: 10.5.31.254


HC-setting configuration:
> hc-settings -sc -tR -nFSA3 -iport2 (-sc sets the cluster role, -tR defines the device role (Regular Slave), -n the device name, -i the heartbeat port)
The unit was successfully configured.

> hc-slave -a -s10.139.9.40 -pfortinet (-a adds the unit into the cluster, -s defines the Master's heartbeat port IP, -p the cluster password)
The unit was successfully configured


HC-setting verification:
> hc-settings -l
SN: FSA-VM0000000789
Type: Regular Slave
Name: FSA3
Interface: port2
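
At this point the cluster is complete. A final check from the Master should list all three units; the listing below is a hypothetical composite of the outputs above (whether Regular Slaves appear in hc-status -l, and the exact headers, may vary by firmware build):

> hc-status -l
Status of master and primary slave units in cluster: TT
--------------------------------------------------------------------------------
SN                   Type            Name                 IP                   Active
FSA-VM0000000123     Master          FSA1                 10.139.9.40          1 second(s) ago
FSA-VM0000000456     Primary Slave   FSA2                 10.139.11.113        1 second(s) ago
FSA-VM0000000789     Regular Slave   FSA3                 10.139.11.160        2 second(s) ago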

More details are available in the FortiSandbox Administration Guide at https://docs.fortinet.com/.

