Breaking Changes

1.2.0

Elastic Node sets topology

To better follow Elastic recommendations for production and high-availability deployments, the Elasticsearch topology now deploys two node sets: one for master nodes and one for data nodes.

This introduces a breaking change in the way Elasticsearch values are set in your configuration.

Before

elasticsearch:
  ...
  # -- Number of data nodes to deploy
  replicas: 2
  ...

After

elasticsearch:
  ...
  masterNodes:
    # -- Number of master node replicas (must be odd or 0)
    replicas: 1

    # -- Default size of your ES master nodes
    storageSize: 10Gi

    # -- Storage Class Name to use in your volumes
    storageClassName: ""

    # -- Affinity for ES master nodes assignment
    # <br> Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
    affinity: { }

    # -- Tolerations for ES master nodes assignment
    # <br> Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
    tolerations: [ ]

    # -- The resources limits and requests for ES master nodes
    # <br> Ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
    resources:
      limits:
        memory: 4Gi
      requests:
        cpu: 0.5
        memory: 2Gi

    # -- Node labels for ES master nodes assignment
    # <br> Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
    nodeSelector: { }

    # -- Configure the priority class for your ES master nodes
    # <br> Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass
    priorityClass:

      # -- Enable or disable the priority class configuration
      enabled: false

      # -- Priority class name to apply on your ES master nodes
      name: "default"

    # -- Java options to set on your ES master nodes
    javaOpts: -Xms500m -Xmx500m -Xlog:disable -Xlog:all=warning:stderr:utctime,level,tags -Xlog:gc=debug:stderr:utctime

  dataNodes:
    # -- Number of data node replicas
    replicas: 2

    # -- Default size of your ES data nodes
    storageSize: 100Gi

    # -- Storage Class Name to use in your volumes
    storageClassName: ""

    # -- Affinity for ES data nodes assignment
    # <br> Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
    affinity: { }

    # -- Tolerations for ES data nodes assignment
    # <br> Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
    tolerations: [ ]

    # -- The resources limits and requests for ES data nodes
    # <br> Ref: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
    resources:
      limits:
        memory: 6Gi
      requests:
        cpu: 1
        memory: 4Gi

    # -- Node labels for ES data nodes assignment
    # <br> Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
    nodeSelector: { }

    # -- Configure the priority class for your ES data nodes
    # <br> Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass
    priorityClass:

      # -- Enable or disable the priority class configuration
      enabled: false

      # -- Priority class name to apply on your ES data nodes
      name: "default"

    # -- Java options to set on your ES data nodes
    javaOpts: -Xms2g -Xmx2g -Xlog:disable -Xlog:all=warning:stderr:utctime,level,tags -Xlog:gc=debug:stderr:utctime

    # -- Sets the default Queue Size for the Search thread pool. This is a value recommended by Filigran
    # <br> Ref: https://docs.opencti.io/latest/deployment/installation#configure-the-environment
    threadPoolSearchQueueSize: 5000

  ...
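
In practice, the old single replicas value moves under the new dataNodes node set. A minimal migration sketch of a values override, using only keys shown above (the counts and sizes are illustrative, not recommendations):

elasticsearch:
  masterNodes:
    # Must be odd or 0, per the chart comment above
    replicas: 1
    storageSize: 10Gi
  dataNodes:
    # Was elasticsearch.replicas before 1.2.0
    replicas: 2
    storageSize: 100Gi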

Redis service headless

  • Impact: helm upgrade fails on the Redis services without an obvious cause
  • Reason: compatibility with native Kubernetes clusters (the current configuration works with Cilium but not with default Kubernetes CNI providers); see the Kubernetes official docs and the DNS check after this list
  • Actions required:
    • Delete the existing Redis headless services
    • Recreate them with clusterIP set to None
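
A quick way to see the difference: with clusterIP set to None, the service DNS name resolves directly to the pod IPs instead of a single virtual IP. The check below is a sketch, assuming the chart is deployed in <namespace> and an image with nslookup is available:

kubectl run -n <namespace> dns-test --rm -it --restart=Never --image=busybox \
  -- nslookup redis-replication-headless
# Expected after the upgrade: one A record per Redis replication pod,
# not a single cluster IP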

Configuration changes breakdown

Old implementation

# redis-replication-headless
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: redis-replication
    app.kubernetes.io/instance: "redis-{{ $.Release.Namespace }}"
    app.kubernetes.io/component: redis
    app.kubernetes.io/part-of: {{ .Release.Name }}
    app.kubernetes.io/role: replication
  name: redis-replication-headless
spec:
  ports:
  - name: redis-client
    port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app.kubernetes.io/name: redis-replication
    app.kubernetes.io/role: replication
  type: ClusterIP
  clusterIP: x.x.x.x
---
# redis-sentinel-headless
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis-sentinel
    role: sentinel
  name: redis-sentinel-headless
spec:
  ports:
  - name: sentinel-client
    port: 26379
    protocol: TCP
    targetPort: 26379
  selector:
    app.kubernetes.io/name: redis-sentinel
    app.kubernetes.io/role: sentinel
  type: ClusterIP
  clusterIP: x.x.x.x

New implementation

# redis-replication-headless
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: redis-replication
    app.kubernetes.io/instance: "redis-{{ $.Release.Namespace }}"
    app.kubernetes.io/component: redis
    app.kubernetes.io/part-of: {{ .Release.Name }}
    app.kubernetes.io/role: replication
  name: redis-replication-headless
spec:
  ports:
  - name: redis-client
    port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app.kubernetes.io/name: redis-replication
    app.kubernetes.io/role: replication
  type: ClusterIP
  clusterIP: None
---
# redis-sentinel-headless
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis-sentinel
    role: sentinel
  name: redis-sentinel-headless
spec:
  ports:
  - name: sentinel-client
    port: 26379
    protocol: TCP
    targetPort: 26379
  selector:
    app.kubernetes.io/name: redis-sentinel
    app.kubernetes.io/role: sentinel
  type: ClusterIP
  clusterIP: None

Commands

kubectl delete service -n <namespace> redis-sentinel-headless redis-replication-headless
helm upgrade --reuse-values -n <namespace> <release> filigran/opencti --version 1.2.0
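
To confirm the services were recreated as headless, check that their cluster IP is now None (a quick sketch, assuming kubectl access to the namespace):

kubectl get service -n <namespace> redis-replication-headless redis-sentinel-headless \
  -o custom-columns=NAME:.metadata.name,CLUSTER-IP:.spec.clusterIP
# Both services should report None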

Impact

The operation should have no impact on a running instance, since existing connections are already established.