
Installation of OpenCTI with helm-managed backends

Info

Use this installation method when you cannot provide the high-availability backends/databases required by OpenCTI through managed services or in-house deployments run by an experienced team, or when you prefer to manage the backends yourself using operators and Helm. Note that in that case, you will need to manage the operators separately and ensure the required software is up to date and configured correctly. The Helm chart provided by Filigran cannot do everything on its own.

Elasticsearch configuration

The following snippet will deploy a three-node Elasticsearch cluster. It will also deploy Kibana, allowing you to explore and interact with your data locally.

elasticsearch:
  enabled: true
  replicas: 3
  storageSize: 200Gi
  dataNodeJavaOpts: -Xms16g -Xmx16g -Xlog:disable -Xlog:all=warning:stderr:utctime,level,tags -Xlog:gc=debug:stderr:utctime
  kibana:
    http:
      tls:
        selfSignedCertificate:
          disabled: true

Please note that two options are set by default on the cluster:

  • allowMmap

allowMmap enables memory-mapped file access (mmap), which Elasticsearch recommends for large deployments; on production environments it is highly recommended to keep it set to true.
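Note that when mmap is allowed, Elasticsearch also requires the kernel setting vm.max_map_count to be at least 262144 on every node that runs a data pod; the ECK operator does not raise it for you by default. A minimal host-level sketch, assuming you can edit /etc/sysctl.conf on the nodes (a privileged init container or DaemonSet is a common alternative):

```
# /etc/sysctl.conf on each Kubernetes node (Elasticsearch minimum for mmap)
vm.max_map_count=262144
```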

  • threadPoolSearchQueueSize

Regarding threadPoolSearchQueueSize, this value should be high enough to accommodate OpenCTI's heavy usage of Elasticsearch and avoid performance issues. It is set to 5000 by default.
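If you ever need to change these defaults, both options can be set under the elasticsearch block; the key names below are taken from the option names quoted above, so double-check the chart's values.yaml for their exact placement:

```yaml
elasticsearch:
  enabled: true
  # highly recommended on production environments
  allowMmap: true
  # chart default; raise only if searches are being rejected under load
  threadPoolSearchQueueSize: 5000
```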

RabbitMQ configuration

The following snippet will deploy a three-node RabbitMQ cluster.

rabbitmq:
  enabled: true
  replicas: 3
  additionalConfig: |
    # keep at least 1.5 GiB of disk free before blocking publishers
    disk_free_limit.absolute = 1500MiB
    # maximum message size: 512 MiB
    max_message_size = 536870912
    # consumer acknowledgement timeout: 24 hours, in milliseconds
    consumer_timeout = 86400000
    management.disable_stats = false
    log.console.level = error
  additionalPlugins:
    - rabbitmq_management

Info

For more information about the RabbitMQ configuration, please refer to the RabbitMQ operator documentation: RabbitMQ Additional Configuration, RabbitMQ Additional Plugins.

Redis configuration

This chart allows you to deploy either a standalone Redis data node (not highly available) or a highly available cluster (recommended!) using the Redis Sentinel pattern.

Info

It is recommended to always set your storage size to at least twice the value of your RAM request. This ensures Redis always has enough space to dump its data to an .rdb file if needed.

Standalone

redis:
  enabled: true
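Applying the sizing rule above to a standalone node, a 4Gi memory request would call for at least 8Gi of storage. A sketch, assuming standalone mode accepts the same storageSize and resources keys shown in the sentinel example below:

```yaml
redis:
  enabled: true
  # at least 2x the memory request, so Redis can always dump its .rdb file
  storageSize: 8Gi
  resources:
    requests:
      memory: 4Gi
```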

Sentinel

redis:
  enabled: true
  mode: sentinel
  storageSize: 30Gi
  resources:
    limits:
      cpu: 2
      memory: 8Gi
    requests:
      cpu: 1
      memory: 4Gi

Info

A Sentinel Redis setup requires at least 3 sentinel replicas and 2 data nodes. OpenCTI does not need more, and we recommend sticking to these default values.

MinIO configuration

This chart will automatically deploy an HA, production-like setup for your MinIO tenant. This choice reflects the distributed nature of MinIO and the operator's efficiency in managing such scenarios.

minio:
  enabled: true
  standardPool:
    resources:
      requests:
        memory: 512Mi
      limits:
        memory: 4Gi
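The resources block uses standard Kubernetes resource syntax, so you can bound CPU in the same way if needed; the cpu values below are illustrative, not chart defaults:

```yaml
minio:
  enabled: true
  standardPool:
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
      limits:
        cpu: "2"
        memory: 4Gi
```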

Deploy!

  • Log in to the Filigran registry with the credentials provided by Filigran:

    helm registry login -u your_user -p your_password filigran.jfrog.io
    

You can deploy the chart directly on your cluster with the commands below.

  • Initial deployment (if no version is specified, the latest is used; to pin one, add --version x.x.x):

    helm install opencti oci://filigran.jfrog.io/filigran-ee/opencti -f values.yaml -n opencti --create-namespace
    
  • Or, on configuration change or upgrade (if no version is specified, the latest is used; to pin one, add --version x.x.x):

    helm upgrade opencti oci://filigran.jfrog.io/filigran-ee/opencti --install -f values.yaml -n opencti
    

If you wish to pull the chart beforehand and install it later, you can run the following commands (note that Helm has no apply subcommand; install the pulled archive with helm install):

helm pull oci://filigran.jfrog.io/filigran-ee/opencti --version x.x.x
helm install opencti ./opencti-x.x.x.tgz -f values.yaml -n opencti --create-namespace