FortiSOAR Knowledge Base
Article Id 193113
Description

Overview

This document details the performance benchmark tests conducted in CyberSponse labs on CyOPs™ version 5.0.0 Build 866.

Objective

The objective of this performance test is to measure the time taken to create alerts in CyOPs™ and to complete the execution of the corresponding playbooks on the created alerts, both on a single-node CyOPs™ appliance and on a CyOPs™ cluster setup.

The data from this benchmark test can help you determine the scaling requirements for a CyOPs™ instance to handle the expected workload in your environment.


Solution

Environment

CyOPs™ Virtual Appliance Specifications

Component | Specifications
CPU | 8 CPUs
Memory | 32 GB
Storage | 250 GB virtual disk running on top of a Samsung SSD 360 Pro model attached to a VMware ESX server

Operating System Specifications

Operating System | Kernel Version
CentOS 7 | 3.10.0-957.5.1.el7.x86_64

Pre-test Conditions

At the start of each test run:

  • The test environment contained zero alerts.
  • The test environment contained only the CyOPs™ built-in connectors such as IMAP, Utilities, etc.
  • The system playbooks were deactivated and there were no running playbooks.
  • The playbook execution logs were purged.

Details of the CyOPs™ Performance Benchmarking Test

The test was executed using an automated test bed that initiated HTTPS calls per clock tick (x alerts ingested per second), creating alerts in CyOPs™ and then triggering a playbook for each alert created. The steps are as follows:

  1. Alerts were created using JMeter to simulate parallel invocation of the API ‘/api/3/alerts/’ (see the first sketch below).
  2. When an alert was created, a post-create playbook was triggered that performed the following steps: 
    • Declare variables using the Set Variable step.
    • Extract artifacts from the source data of the alert using the “CyOPs: Extract Artifacts from String” action of the CyOPs Utilities connector (see the second sketch below).
    • Add the extracted artifacts to the “Closure Notes” field of the alert.
    • Update the status of the alert to “Closed”.
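
For reference, the parallel alert-creation step can be approximated outside of JMeter with a short script. The following is a minimal sketch only: the ‘/api/3/alerts/’ endpoint comes from the test description above, while the host name, authentication header, and alert payload fields are placeholder assumptions that must be adapted to an actual CyOPs™/FortiSOAR deployment (which may require token-based or HMAC-signed authentication).

    # Minimal sketch of parallel alert creation against the '/api/3/alerts/' endpoint.
    # The host, credentials, and payload field names are placeholders, not the exact
    # values used in the benchmark.
    import requests
    from concurrent.futures import ThreadPoolExecutor

    BASE_URL = "https://cyops.example.com"      # placeholder appliance address
    HEADERS = {
        "Authorization": "Bearer <API_TOKEN>",  # placeholder; real deployments may use HMAC or session tokens
        "Content-Type": "application/json",
    }

    def create_alert(i):
        # Example payload; the actual alert fields depend on the CyOPs data model.
        payload = {
            "name": f"Benchmark alert {i}",
            "source": "Performance test",
            "sourcedata": f"Suspicious activity from 10.0.0.{i % 254 + 1}",
        }
        # verify=False because appliances commonly use self-signed certificates.
        resp = requests.post(f"{BASE_URL}/api/3/alerts/", json=payload,
                             headers=HEADERS, verify=False, timeout=30)
        resp.raise_for_status()
        return resp.status_code

    # Fire one "clock tick" of x parallel alert creations (here x = 25).
    with ThreadPoolExecutor(max_workers=25) as pool:
        results = list(pool.map(create_alert, range(25)))
    print(f"Created {len(results)} alerts")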
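
The artifact extraction performed by the “CyOPs: Extract Artifacts from String” action can be pictured with a simple regex-based approximation. This is not the connector’s implementation; it is only a conceptual sketch of pulling common indicator types (IP addresses, URLs, file hashes) out of an alert’s source data.

    # Conceptual approximation of extracting artifacts (indicators) from alert source data.
    # The real CyOps Utilities connector action is more thorough; this only shows the idea.
    import re

    PATTERNS = {
        "ip": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
        "url": r"https?://[^\s\"']+",
        "md5": r"\b[a-fA-F0-9]{32}\b",
        "sha256": r"\b[a-fA-F0-9]{64}\b",
    }

    def extract_artifacts(source_data: str) -> dict:
        # Return the unique matches for each indicator type.
        return {kind: sorted(set(re.findall(pattern, source_data)))
                for kind, pattern in PATTERNS.items()}

    sample = ("Beacon to http://malicious.example/payload.exe from 10.0.0.12, "
              "hash d41d8cd98f00b204e9800998ecf8427e")
    print(extract_artifacts(sample))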

Observations

The data in the following tables outlines the number of alerts ingested in a clock tick, the total time taken to ingest those alerts, and the total time taken for all of the triggered playbooks to finish execution.

Single Invocation Test run on a single-node CyOPs™ appliance 

Number of alerts created in CyOPs™ | Total time taken to create all alerts in CyOPs™ (in seconds) | Total time taken to execute all playbooks (in seconds)
25 | 6 | 10.755
50 | 11 | 23.729
100 | 23 | 47.240
150 | 27 | 70.388
170 | 37 | 79.673

Single Invocation Test run on a two-node Active-Active CyOPs™ cluster 

Number of alerts created in CyOPs™ | Total time taken to create all alerts in CyOPs™ (in seconds) | Total time taken to execute all playbooks (in seconds)
25 | 4 | 7.111
50 | 7 | 13.656
100 | 16 | 26.456
150 | 22 | 37.307
170 | 24 | 44.557
200 | 26 | 53.43
300 | 39 | 77.537
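
As a quick way to compare the two setups, the tabulated figures can be reduced to approximate throughput (alerts processed per second of playbook execution time). The short sketch below only re-uses the numbers from the two tables above and makes no assumptions beyond them.

    # Approximate throughput derived from the single-invocation tables above.
    # Each tuple: (alerts created, seconds to create them, seconds for all playbooks to finish).
    single_node = [(25, 6, 10.755), (50, 11, 23.729), (100, 23, 47.240),
                   (150, 27, 70.388), (170, 37, 79.673)]
    two_node_cluster = [(25, 4, 7.111), (50, 7, 13.656), (100, 16, 26.456),
                        (150, 22, 37.307), (170, 24, 44.557),
                        (200, 26, 53.43), (300, 39, 77.537)]

    def playbook_throughput(rows):
        # Alerts completed per second of total playbook execution time.
        return [round(alerts / playbook_secs, 2) for alerts, _, playbook_secs in rows]

    print("Single node     :", playbook_throughput(single_node))
    print("Two-node cluster:", playbook_throughput(two_node_cluster))

On these numbers, the two-node cluster sustains roughly 3.5 to 4 alerts per second of playbook execution, compared with roughly 2.1 to 2.3 alerts per second on the single node.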

Sustained Invocation Test

In the sustained test conducted on a two-node Active-Active CyOPs™ cluster, we ingested 100 alerts every 30 seconds over 24 hours and observed that 176904 alerts were generated and the corresponding playbooks completed successfully.

In the sustained test conducted on a single-node machine, we ingested 100 alerts every 30 seconds over 24 hours and observed that 15017 alerts were generated and the corresponding playbooks completed successfully.

* The number of alerts ingested in the system is the same as the number of alerts generated by the performance tool.

Conclusion

Based on this test, we conclude that CyOPs™ could process an average of 6250 alerts per hour on a single node and 7084 alerts per hour on a two-node Active-Active CyOPs™ cluster. This includes the creation of alerts and the execution of the corresponding playbooks that process them.

Notes

In a production environment, the following factors might vary, which could affect the observations:

  1. The size of the alert data.
  2. The number of playbooks that are executed in parallel for each alert (for example, a system playbook for notification, or triage/investigation playbooks).
  3. The number of steps in each playbook.
  4. The network bandwidth, especially for outbound connections to applications such as VirusTotal.
