Article Id 198601
Description

FortiOS 5.0 can accelerate inter-VDOM traffic when the FortiGate device contains one or more NP4 network processors.


Scope

FortiOS 5.0, FortiGate with NP4 processor


Solution
A FortiGate running FortiOS 5.0 with VDOMs enabled in NAT mode lists a new type of inter-VDOM link.

The new NPU-based inter-VDOM links are named npuX-vlink0/npuX-vlink1, where X is the index of the NP4 processor on the device.

For example:

- npu0-vlink0, npu0-vlink1: inter-VDOM link endpoints attached to the first NP4 processor.
- npu1-vlink0, npu1-vlink1: inter-VDOM link endpoints attached to the second NP4 processor (if present).


Viewing the new inter-VDOM links
FGT-3950 (global)# get hardware nic
The following NICs are available:
[....]
npu0-vlink0
npu0-vlink1
npu1-vlink0
npu1-vlink1
How to verify on which NP4 processor the inter-VDOM links are mapped.

The following CLI command shows all inter-VDOM links mapped to the existing NP4 processors:
FGT-3950 (global)# diagnose npu np4 list
ID   Model      Slot    Interface
0    On-board           port1 port2 port3 port4
                        port5 port6 npu0-vlink0 npu0-vlink1
1    FMC-C20    FMC3    fmc3/1 fmc3/2 fmc3/3 fmc3/4
                        fmc3/5 fmc3/6 fmc3/7 fmc3/8
                        fmc3/9 fmc3/10 fmc3/11 fmc3/12
                        fmc3/13 fmc3/14 fmc3/15 fmc3/16
                        fmc3/17 fmc3/18 fmc3/19 fmc3/20
                        npu1-vlink0 npu1-vlink1
How to create multiple accelerated inter-VDOM links.

To configure an accelerated inter-VDOM link using the default interfaces, assign the two endpoints of a pair to the two VDOMs to be connected, just like regular inter-VDOM links.

To add multiple inter-VDOM links, create VLAN interfaces on top of the inter-VDOM interface pair.

For example:
config system interface
    edit "npu0-vlink0"
        set vdom "root"
        set type physical
    next
    edit "npu0-vlink1"
        set vdom "root"
        set type physical
    next
    edit "npu1-vlink0"
        set vdom "root"
        set type physical
    next
    edit "npu1-vlink1"
        set vdom "VDOM2"
        set type physical
    next
    edit "IVL-VLAN1_ROOT"
        set vdom "root"
        set ip 10.20.20.1 255.255.255.0
        set allowaccess ping
        set interface "npu0-vlink0"
        set vlanid 1
    next
    edit "IVL-VLAN1_VDOM1"
        set vdom "VDOM1"
        set ip 10.20.20.2 255.255.255.0
        set allowaccess ping
        set interface "npu0-vlink1"
        set vlanid 1
    next
end
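
Note that the accelerated links behave like any other inter-VDOM link: traffic will only flow once matching static routes and firewall policies exist in each VDOM. The following is a minimal sketch consistent with the session output shown further below; the VDOM names (root, VDOM1), the policy IDs, the 10.74.0.0/22 subnet behind port2, and the NAT setting on the root policy are assumptions taken from that example and will differ in other configurations.

config vdom
    edit root
        config router static
            edit 1
                set dst 10.74.0.0 255.255.252.0
                set device "IVL-VLAN1_ROOT"
                set gateway 10.20.20.2
            next
        end
        config firewall policy
            edit 4
                set srcintf "IVL-VLAN1_ROOT"
                set dstintf "port1"
                set srcaddr "all"
                set dstaddr "all"
                set action accept
                set schedule "always"
                set service "ALL"
                set nat enable
            next
        end
    next
    edit VDOM1
        config router static
            edit 1
                set device "IVL-VLAN1_VDOM1"
                set gateway 10.20.20.1
            next
        end
        config firewall policy
            edit 3
                set srcintf "port2"
                set dstintf "IVL-VLAN1_VDOM1"
                set srcaddr "all"
                set dstaddr "all"
                set action accept
                set schedule "always"
                set service "ALL"
            next
        end
    next
end

With routes and policies of this kind in place, traffic entering port2 in VDOM1 and destined for a network reachable through root (10.2.2.2 in the example below) should create one session per VDOM, both offloaded to the NP4, as shown in the verification output.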
Verify that traffic is accelerated.

Use the following CLI command to obtain the interface indexes, and then correlate them with the session entries. In the following example, traffic was flowing between the new accelerated inter-VDOM links and the physical ports port1 and port2, which are also attached to the NP4 processor. In the session output, the 'npu' flag in the state field and the non-zero offload values in the 'npu info' line indicate that the sessions are offloaded to the NP4.
FGT-3950 (global)# diagnose ip address list
IP=172.31.17.76->172.31.17.76/255.255.252.0 index=5 devname=port1
IP=10.74.1.76->10.74.1.76/255.255.252.0 index=6 devname=port2
IP=10.20.20.1->10.20.20.1/255.255.255.0 index=55 devname=IVL-VLAN1_ROOT
IP=10.20.20.2->10.20.20.2/255.255.255.0 index=56 devname=IVL-VLAN1_VDOM1


FGT-3950(VDOM1) # diagnose sys session list
session info: proto=1 proto_state=00 duration=282 expire=24 timeout=0 flags=00000000 sockflag=00000000 sockport=0 av_idx=0 use=3
origin-shaper=
reply-shaper=
per_ip_shaper=
ha_id=0 policy_dir=0 tunnel=/
state=may_dirty npu
statistic(bytes/packets/allow_err): org=180/3/1 reply=120/2/1 tuples=2
orgin->sink: org pre->post, reply pre->post dev=55->5/5->55 gwy=172.31.19.254/10.20.20.2
hook=post dir=org act=snat 10.74.2.87:768->10.2.2.2:8(172.31.17.76:62464)
hook=pre dir=reply act=dnat 10.2.2.2:62464->172.31.17.76:0(10.74.2.87:768)
misc=0 policy_id=4 id_policy_id=0 auth_info=0 chk_client_info=0 vd=0
serial=0000004e tos=ff/ff ips_view=0 app_list=0 app=0
dd_type=0 dd_mode=0
per_ip_bandwidth meter: addr=10.74.2.87, bps=880
npu_state=00000000
npu info: flag=0x81/0x81, offload=4/4, ips_offload=0/0, epid=160/218, ipid=218/160, vlan=32769/0


session info: proto=1 proto_state=00 duration=124 expire=20 timeout=0 flags=00000000 sockflag=00000000 sockport=0 av_idx=0 use=3
origin-shaper=
reply-shaper=
per_ip_shaper=
ha_id=0 policy_dir=0 tunnel=/
state=may_dirty npu
statistic(bytes/packets/allow_err): org=180/3/1 reply=120/2/1 tuples=2
orgin->sink: org pre->post, reply pre->post dev=6->56/56->6 gwy=10.20.20.1/10.74.2.87
hook=pre dir=org act=noop 10.74.2.87:768->10.2.2.2:8(0.0.0.0:0)
hook=post dir=reply act=noop 10.2.2.2:768->10.74.2.87:0(0.0.0.0:0)
misc=0 policy_id=3 id_policy_id=0 auth_info=0 chk_client_info=0 vd=1
serial=0000004d tos=ff/ff ips_view=0 app_list=0 app=0
dd_type=0 dd_mode=0
per_ip_bandwidth meter: addr=10.74.2.87, bps=880
npu_state=00000000
npu info: flag=0x81/0x81, offload=4/4, ips_offload=0/0, epid=219/161, ipid=161/219, vlan=0/32769
total session 2
The document "Hardware FortiOS Handbook v3 for FortiOS 4.0 MR3" provides conditions for traffic to be accelerated in the section "Session fast path requirements".

