FortiGate
adebeer_FTNT
Staff
Article Id 230162
Description This article lists useful parameters for the configuration of a FGSP cluster.
Scope FortiGate 6.4, 7.0.
Solution

Suggested configuration parameters for a standalone cluster

 

# config system standalone-cluster

set standalone-group-id 11

set group-member-id 1

set layer2-connection available

set session-sync-dev "port1" "port2"

end
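For context, a second cluster member would mirror this block with only the member ID changed (the port names and IDs below are illustrative, carried over from the example above):

# config system standalone-cluster
set standalone-group-id 11
set group-member-id 2
set layer2-connection available
set session-sync-dev "port1" "port2"
end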

 

Parameter explanations

 

set standalone-group-id 11
Each FGSP cluster needs a unique group ID, and it must be the same on all devices in the cluster. Change this ID to a non-default value.

 

set group-member-id 1
The cluster member ID. Each member in the cluster needs a different ID.

 

set layer2-connection available
This parameter indicates whether a layer 2 connection is available between the cluster members. When it is, cluster sync can be performed over kernel-space connections between the members using the dedicated session-sync-dev interfaces. With this configuration, the session-sync process is offloaded from the daemon (user space) to the kernel.

This is typically much more efficient. The process uses EtherType 8892 to broadcast session sync information to the peer.

Note: session-sync-dev ONLY supports physical ports. It cannot be used with virtual interfaces such as VLANs.
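To check that synchronization is actually taking place, the following diagnostic command can be run on either member (the exact output format varies between FortiOS releases, so treat the fields as version-dependent):

diagnose sys session sync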

 

set session-sync-dev "port1" "port2"

The previous configuration settings offloaded the session-sync process to the kernel, so it is necessary to define the connected interfaces that the sync will be performed on. If one interface becomes disconnected, its state changes to a standalone probe and the remaining interfaces are used for session synchronization. Session-sync packets are load-balanced between the two interfaces.

 

Consider that large amounts of session synchronization traffic can increase network congestion. It is recommended to keep this traffic off the main network by using dedicated connections for it.


Each session-sync-dev interface sends heartbeats and ensures heartbeat packets are received over layer 2. When the peers are discovered over IP connectivity through the configured peer IP address, the cluster members sync their full session tables in bulk over a TCP connection to the peer. Afterwards, if L3 connectivity is the only option, they sync any subsequent session create/update/delete messages for new, existing, and/or old sessions over UDP port 708.


If L2 is available between peers, direct sync from the kernel can be configured over the session-sync-dev ports (this is recommended for high-CPS networks).

 

Suggested configuration parameters for cluster-sync

 

# config system cluster-sync

edit 5

set peerip 10.10.0.2

set syncvd test root

set down-intfs-before-sess-sync "port1" "port2"

set hb-interval 2

set hb-lost-threshold 10

next

end

 

Parameter explanations

 

set peerip
This is an interface IP address of the peer, used for user-space session sync. The peer interface should be in the same VDOM and be the same interface on all peers. This peer connection provides the 'fallback' sync method.
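As an illustration, the peer unit would carry the same entry pointing back at this unit's interface address (10.10.0.1 is assumed here purely for the example):

# config system cluster-sync
edit 5
set peerip 10.10.0.1
set syncvd test root
next
end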

 

set syncvd test root
This parameter defines which VDOMs will be synchronized with this session synchronization configuration, whether the synchronization is performed over user-space or kernel.

 

set down-intfs-before-sess-sync "port1" "port2"
The list of interfaces that should remain DOWN until the session synchronization finishes.

 

set hb-interval 2
The heartbeat interval. 2 is the default and should not be reduced. If false-positive failovers occur, increase this value depending on the network between the two peers. The value is expressed in units of 100 milliseconds: for example, an hb-interval of 2 equals a timeout of 200 milliseconds.

 

set hb-lost-threshold 10
This is the lost-heartbeat threshold; in this case, the default was used. Depending on the size and speed of the network between the devices, confirm whether any false-positive failovers are occurring, and increase the threshold to prevent them.
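Taken together, the two heartbeat parameters give a rough failure-detection time (assuming failover is declared after hb-lost-threshold consecutive missed heartbeats):

detection time ~ hb-interval x 100 ms x hb-lost-threshold
             = 2 x 100 ms x 10
             = 2000 ms (about 2 seconds)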

 

Configuration parameters for all High Availability clusters

 

# config system ha

set session-pickup enable

set session-pickup-connectionless enable

set session-pickup-expectation enable

set session-pickup-delay enable

set session-pickup-nat enable

set override disable

end

 

Parameter explanations


set session-pickup enable
FGSP enforces firewall policies for asymmetric traffic, including cases where the TCP 3-way handshake is split between the cluster members. For example, FortiGate-A receives the TCP-SYN, FortiGate-B receives the TCP-SYN-ACK, and FortiGate-C receives the TCP-ACK. Under ordinary circumstances (if auxiliary sessions are not configured), a firewall will drop this connection, since the 3-way handshake was not overseen by the same firewall. In this case, the FortiGates with FGSP configured can pass this traffic because the firewall sessions are synchronized.

 

set session-pickup-connectionless enable
This parameter will ensure that UDP and ICMP sessions are synchronized. By default, ONLY TCP sessions will be synchronized.

 

set session-pickup-expectation enable
For some protocols, a control session is established between server and client to negotiate the ports and protocols that will be used for data communications. The FortiGate opens these pinhole ports as negotiated in the control session. Session helpers (such as FTP or SIP) create expectation sessions (pinholes) through the FortiGate for the ports and protocols negotiated by the control session.

 

set session-pickup-delay enable
To avoid synchronizing ALL sessions, and to reduce the memory that synced sessions consume in the session table, enable this option so that ONLY sessions lasting more than 30 seconds are synchronized.

set session-pickup-nat enable
Consider the devices in the cluster when NAT is used in the policies. After a failover, sessions that were NATed to an interface IP address of the failed FortiGate have nowhere to go, since the IP addresses of the failed unit are no longer known to the FortiGate that takes over (FGSP units have different IP addresses). It is therefore recommended not to configure NAT to use the destination interface IP address. To avoid this issue, use IP pools with the type set to 'overload' (which is the default IP pool type).
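A minimal sketch of such an IP pool follows (the pool name and addresses are examples only); the pool is then referenced from the firewall policy with 'set ippool enable' and 'set poolname':

# config firewall ippool
edit "fgsp-snat-pool"
set type overload
set startip 192.0.2.10
set endip 192.0.2.10
next
end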