
Breaking Changes

1.3.0

With the bump of the default OpenCTI version to v7, a new mandatory configuration parameter is required to run the platform: the APP__ENCRYPTION_KEY environment variable. OpenCTI uses this key to encrypt sensitive data at rest and in transit, such as credentials and tokens.

This parameter must be included in your configuration, as in the example below.

Generate the key:

openssl rand -base64 32

Then update your values file:

opencti:
  env:
    APP__ENCRYPTION_KEY: "ChangeMe"

More info on this change: https://docs.opencti.io/latest/deployment/breaking-changes/?h=encryption#mandatory-cryptography-configuration
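As a sketch, the two steps above can be combined so the generated key never has to be pasted by hand. The file name encryption-values.yaml is arbitrary, and the opencti.env path simply mirrors the values example above; adjust both to your setup:

```shell
# Generate a random 32-byte key, base64-encoded (44 characters with padding).
APP_KEY=$(openssl rand -base64 32)

# Write it into a small values overlay instead of editing the main values file.
cat > encryption-values.yaml <<EOF
opencti:
  env:
    APP__ENCRYPTION_KEY: "$APP_KEY"
EOF
```

You can then apply the overlay with something like `helm upgrade <release> filigran/opencti -n <namespace> --reuse-values -f encryption-values.yaml`, where the release name and namespace are placeholders for your deployment.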

1.2.7

Redis definitions split between data and sentinel nodes

To provide more flexibility in Redis Sentinel mode, a distinction is now made between the Redis data (a.k.a. replication) nodes and the Redis Sentinel nodes for most of the configuration.

If you're deploying in Standalone mode, the Redis container is now also configured through the data section, just like the Redis replication nodes in Sentinel mode.

The changes introduced in this version mainly affect the Redis data node configuration. If you customized the storage size, storage class name, or resources, please check the before and after below to understand the required changes in your values.

Before

redis:
  affinity: ...
  storageSize: 8Gi
  storageClassName: abc-class
  resources: ...

After

redis:
  data:
    affinity: ...
    storageSize: 8Gi
    storageClassName: abc-class
    resources: ...
    nodeSelector: ...
    tolerations: ...
  sentinel: # if you're using sentinel
    affinity: ...
    resources: ...
    nodeSelector: ...
    tolerations: ...

Removal of the nodeSelector value for RabbitMQ

The chart initially provided the option to customize the nodeSelector attribute for RabbitMQ. RabbitMQ does not actually support this parameter, so it has been removed.

Instead, you can specify a nodeSelector through the override attribute. See Reference.
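As a hedged sketch, assuming the chart forwards the override attribute to the RabbitMQ Cluster Operator's RabbitmqCluster spec.override (check the chart's reference documentation for the exact path), a nodeSelector could look like this; the disktype: ssd label is a placeholder for your own node labels:

```yaml
rabbitmq:
  override:
    statefulSet:
      spec:
        template:
          spec:
            nodeSelector:
              disktype: ssd  # example label, adjust to your nodes
```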

MinIO antiAffinity configuration

The chart initially provided a single boolean, enableHostAffinity: false, to configure anti-affinity for the MinIO pods. You can now configure a standard affinity rule as per the Kubernetes API definition. See reference.
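As a sketch, an affinity rule reproducing the old host anti-affinity behavior (one MinIO pod per node) could look like the following; the app.kubernetes.io/name: minio label is an assumption, so verify it against the labels on your deployed MinIO pods:

```yaml
minio:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/name: minio
          topologyKey: kubernetes.io/hostname
```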

1.2.0

Elastic Node sets topology

To better follow Elastic recommendations regarding production and HA deployments, the Elasticsearch topology has evolved to deploy two node sets: one for master nodes and one for data nodes.

This introduces a breaking change in the way Elasticsearch values are set in your configuration.

Before

elasticsearch:
  ...
  # -- Number of data nodes to deploy
  replicas: 2
  ...

After

elasticsearch:
  ...
  masterNodes:
    # -- Number of master nodes replicas (must be odd or 0)
    replicas: 1

    # -- Default size of your ES master nodes
    storageSize: 10Gi

    # -- Storage Class Name to use in your volumes
    storageClassName: ""

    # -- Affinity for ES master nodes assignment
    # <br> Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
    affinity: { }

    # -- Tolerations for ES master nodes assignment
    # <br> Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
    tolerations: [ ]

    # -- The resources limits and requests for ES master nodes
    # <br> Ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
    resources:
      limits:
        memory: 4Gi
      requests:
        cpu: 0.5
        memory: 2Gi

    # -- Node labels for ES master nodes assignment
    # <br> Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
    nodeSelector: { }

    # -- Configure the priority class for your ES master nodes
    # <br> Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass
    priorityClass:

      # -- Enable or disable the priority class configuration
      enabled: false

      # -- Priority class name to apply on your ES master nodes
      name: "default"

    # -- Java Options to set on your ES master nodes
    javaOpts: -Xms500m -Xmx500m -Xlog:disable -Xlog:all=warning:stderr:utctime,level,tags -Xlog:gc=debug:stderr:utctime

  dataNodes:
    # -- Number of data node replicas
    replicas: 2

    # -- Default size of your ES data nodes
    storageSize: 100Gi

    # -- Storage Class Name to use in your volumes
    storageClassName: ""

    # -- Affinity for ES data nodes assignment
    # <br> Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
    affinity: { }

    # -- Tolerations for ES data nodes assignment
    # <br> Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
    tolerations: [ ]

    # -- The resources limits and requests for ES data nodes
    # <br> Ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
    resources:
      limits:
        memory: 6Gi
      requests:
        cpu: 1
        memory: 4Gi

    # -- Node labels for ES data nodes assignment
    # <br> Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
    nodeSelector: { }

    # -- Configure the priority class for your ES data nodes
    # <br> Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass
    priorityClass:

      # -- Enable or disable the priority class configuration
      enabled: false

      # -- Priority class name to apply on your ES data nodes
      name: "default"

    # -- Java Options to set on your ES data nodes
    javaOpts: -Xms2g -Xmx2g -Xlog:disable -Xlog:all=warning:stderr:utctime,level,tags -Xlog:gc=debug:stderr:utctime

    # -- Sets the default Queue Size for the Search thread pool. This is a value recommended by Filigran
    # <br> Ref: https://docs.opencti.io/latest/deployment/installation#configure-the-environment
    threadPoolSearchQueueSize: 5000

  ...

Redis service headless

  • Impact: Helm upgrades fail without an obvious cause because of the Redis services
  • Reason: compatibility with native Kubernetes clusters (the previous configuration worked with Cilium but not with default Kubernetes CNI providers); see the Kubernetes official docs
  • Actions required:
    • Delete the existing Redis headless services
    • Recreate them with clusterIP set to None

Configuration changes breakdown

Old implementation

# redis-replication-headless
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: redis-replication
    app.kubernetes.io/instance: "redis-{{ $.Release.Namespace }}"
    app.kubernetes.io/component: redis
    app.kubernetes.io/part-of: {{ .Release.Name }}
    app.kubernetes.io/role: replication
  name: redis-replication-headless
spec:
  ports:
  - name: redis-client
    port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app.kubernetes.io/name: redis-replication
    app.kubernetes.io/role: replication
  type: ClusterIP
  clusterIP: x.x.x.x
---
# redis-sentinel-headless
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis-sentinel
    role: sentinel
  name: redis-sentinel-headless
spec:
  ports:
  - name: sentinel-client
    port: 26379
    protocol: TCP
    targetPort: 26379
  selector:
    app.kubernetes.io/name: redis-sentinel
    app.kubernetes.io/role: sentinel
  type: ClusterIP
  clusterIP: x.x.x.x

New implementation

# redis-replication-headless
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: redis-replication
    app.kubernetes.io/instance: "redis-{{ $.Release.Namespace }}"
    app.kubernetes.io/component: redis
    app.kubernetes.io/part-of: {{ .Release.Name }}
    app.kubernetes.io/role: replication
  name: redis-replication-headless
spec:
  ports:
  - name: redis-client
    port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app.kubernetes.io/name: redis-replication
    app.kubernetes.io/role: replication
  type: ClusterIP
  clusterIP: None
---
# redis-sentinel-headless
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis-sentinel
    role: sentinel
  name: redis-sentinel-headless
spec:
  ports:
  - name: sentinel-client
    port: 26379
    protocol: TCP
    targetPort: 26379
  selector:
    app.kubernetes.io/name: redis-sentinel
    app.kubernetes.io/role: sentinel
  type: ClusterIP
  clusterIP: None

Commands

kubectl delete service -n <namespace> redis-sentinel-headless redis-replication-headless
helm upgrade --reuse-values -n <namespace> <release> filigran/opencti --version 1.2.0

Impact

The operation should have no impact on a running instance, since existing connections remain established.