
Prometheus 2.x

Prometheus version 2.x has made changes to Alertmanager, storage, and recording rules. Check out the Prometheus migration guide for details.

Users of this chart will need to update their alerting rules to the new format before they can upgrade.

Upgrading from previous chart versions

As of version 5.0, this chart uses Prometheus 2.1. This version of Prometheus introduces a new data format and is not compatible with Prometheus 1.x. It is recommended to install this chart as a new release, as updating existing releases will not work. See the Prometheus docs for instructions on retaining your old data.

Example migration

Assume you have an existing release of the prometheus chart named prometheus-old. To update to Prometheus 2.1 while keeping your old data, do the following:

  1. Update the prometheus-old release. Disable scraping on every component besides the prometheus server, similar to the configuration below:

    alertmanager:
      enabled: false
    alertmanagerFiles:
      alertmanager.yml: ""
    kubeStateMetrics:
      enabled: false
    nodeExporter:
      enabled: false
    pushgateway:
      enabled: false
    server:
      extraArgs:
        storage.local.retention: 720h
    serverFiles:
      alerts: ""
      prometheus.yml: ""
      rules: ""
    
  2. Deploy a new release of the chart with version 5.0+ using prometheus 2.x. In the values.yaml set the scrape config as usual, and also add the prometheus-old instance as a remote-read target.

       prometheus.yml:
         ...
         remote_read:
         - url: http://prometheus-old/api/v1/read
         ...
    

    Old data will be available when you query the new prometheus instance.
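The two steps above boil down to a pair of Helm commands. A minimal sketch, assuming the disable-everything overrides from step 1 are saved in old-values.yaml and the step 2 values (scrape config plus the remote_read entry) in new-values.yaml; the file names, the new release name prometheus-new, and the version pins are illustrative:

```shell
# Step 1: reduce the old release to a bare Prometheus 1.x server
# (old-values.yaml holds the "enabled: false" overrides shown above)
helm upgrade prometheus-old stable/prometheus \
    --version <last pre-5.0 chart version> -f old-values.yaml

# Step 2: install the 2.x chart as a NEW release; new-values.yaml adds the
# remote_read entry pointing at the prometheus-old service
helm install stable/prometheus --name prometheus-new \
    --version 5.0.0 -f new-values.yaml
```

This uses the Helm 2 `--name` flag, consistent with the install examples later in this document.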

Configuration

The following table lists the configurable parameters of the Prometheus chart and their default values.

| Parameter | Description | Default |
| --- | --- | --- |
| basic.namespace.server | GIN Server Namespace | gin-server |
| basic.namespace.stateless | GIN Stateless Namespace | gin-stateless |
| basic.runVersion | Set test version | stable |
| basic.runController | Set test job | all |
| basic.gatewayPort | Set port | 8080 |
| basic.runDebugMode | Enable debug | false |
| basic.dockerTag | Set test revision | 4.8.3-0 |
| basic.disablePostgres | If true, tests will be run locally | true |
| basic.repository | Set repository | nexus.iqser.com:5001/ |
| nextcloud.username | Set username | *** |
| nextcloud.password | Set password | *** |
| nextcloud.url | Set url | https://cloud.iqser.com/remote.php/webdav |
| test.config.content_processing_timeout | Set content processing timeout | 10 |
| test.config.warmup_wait_time | Set warmup wait time | 0 |
| test.config.discovery.retries | Set number of retries for discovery | 20 |
| test.config.discovery.timeout | Set timeout for discovery | 20 |
| test.config.loadtest.users | Set number of load test users | 10 |
| test.config.loadtest.duration | Set load test duration | 10 |

| Parameter | Description | Default |
| --- | --- | --- |
| alertmanager.persistentVolume.storageClass | alertmanager data Persistent Volume Storage Class | unset |
| alertmanager.persistentVolume.subPath | Subdirectory of alertmanager data Persistent Volume to mount | "" |
| alertmanager.podAnnotations | annotations to be added to alertmanager pods | {} |
| alertmanager.replicaCount | desired number of alertmanager pods | 1 |
| alertmanager.resources | alertmanager pod resource requests & limits | {} |
| alertmanager.service.annotations | annotations for alertmanager service | {} |
| alertmanager.service.clusterIP | internal alertmanager cluster service IP | "" |
| alertmanager.service.externalIPs | alertmanager service external IP addresses | [] |
| alertmanager.service.loadBalancerIP | IP address to assign to load balancer (if supported) | "" |
| alertmanager.service.loadBalancerSourceRanges | list of IP CIDRs allowed access to load balancer (if supported) | [] |
| alertmanager.service.servicePort | alertmanager service port | 80 |
| alertmanager.service.type | type of alertmanager service to create | ClusterIP |
| alertmanagerFiles.alertmanager.yml | Prometheus alertmanager configuration | example configuration |
| configmapReload.name | configmap-reload container name | configmap-reload |
| configmapReload.image.repository | configmap-reload container image repository | jimmidyson/configmap-reload |
| configmapReload.image.tag | configmap-reload container image tag | v0.1 |
| configmapReload.image.pullPolicy | configmap-reload container image pull policy | IfNotPresent |
| configmapReload.extraArgs | Additional configmap-reload container arguments | {} |
| configmapReload.extraConfigmapMounts | Additional configmap-reload configMap mounts | [] |
| configmapReload.resources | configmap-reload pod resource requests & limits | {} |
| initChownData.enabled | If false, don't reset data ownership at startup | true |
| initChownData.name | init-chown-data container name | init-chown-data |
| initChownData.image.repository | init-chown-data container image repository | busybox |
| initChownData.image.tag | init-chown-data container image tag | latest |
| initChownData.image.pullPolicy | init-chown-data container image pull policy | IfNotPresent |
| initChownData.resources | init-chown-data pod resource requests & limits | {} |
| kubeStateMetrics.enabled | If true, create kube-state-metrics | true |
| kubeStateMetrics.name | kube-state-metrics container name | kube-state-metrics |
| kubeStateMetrics.image.repository | kube-state-metrics container image repository | k8s.gcr.io/kube-state-metrics |
| kubeStateMetrics.image.tag | kube-state-metrics container image tag | v1.1.0 |
| kubeStateMetrics.image.pullPolicy | kube-state-metrics container image pull policy | IfNotPresent |
| kubeStateMetrics.args | kube-state-metrics container arguments | {} |
| kubeStateMetrics.nodeSelector | node labels for kube-state-metrics pod assignment | {} |
| kubeStateMetrics.podAnnotations | annotations to be added to kube-state-metrics pods | {} |
| kubeStateMetrics.tolerations | node taints to tolerate (requires Kubernetes >=1.6) | [] |
| kubeStateMetrics.replicaCount | desired number of kube-state-metrics pods | 1 |
| kubeStateMetrics.resources | kube-state-metrics resource requests and limits (YAML) | {} |
| kubeStateMetrics.service.annotations | annotations for kube-state-metrics service | {prometheus.io/scrape: "true"} |
| kubeStateMetrics.service.clusterIP | internal kube-state-metrics cluster service IP | None |
| kubeStateMetrics.service.externalIPs | kube-state-metrics service external IP addresses | [] |
| kubeStateMetrics.service.loadBalancerIP | IP address to assign to load balancer (if supported) | "" |
| kubeStateMetrics.service.loadBalancerSourceRanges | list of IP CIDRs allowed access to load balancer (if supported) | [] |
| kubeStateMetrics.service.servicePort | kube-state-metrics service port | 80 |
| kubeStateMetrics.service.type | type of kube-state-metrics service to create | ClusterIP |
| nodeExporter.enabled | If true, create node-exporter | true |
| nodeExporter.name | node-exporter container name | node-exporter |
| nodeExporter.image.repository | node-exporter container image repository | prom/node-exporter |
| nodeExporter.image.tag | node-exporter container image tag | v0.15.2 |
| nodeExporter.image.pullPolicy | node-exporter container image pull policy | IfNotPresent |
| nodeExporter.extraArgs | Additional node-exporter container arguments | {} |
| nodeExporter.extraHostPathMounts | Additional node-exporter hostPath mounts | [] |
| nodeExporter.extraConfigmapMounts | Additional node-exporter configMap mounts | [] |
| nodeExporter.nodeSelector | node labels for node-exporter pod assignment | {} |
| nodeExporter.podAnnotations | annotations to be added to node-exporter pods | {} |
| nodeExporter.tolerations | node taints to tolerate (requires Kubernetes >=1.6) | [] |
| nodeExporter.resources | node-exporter resource requests and limits (YAML) | {} |
| nodeExporter.securityContext | securityContext for containers in pod | {} |
| nodeExporter.service.annotations | annotations for node-exporter service | {prometheus.io/scrape: "true"} |
| nodeExporter.service.clusterIP | internal node-exporter cluster service IP | None |
| nodeExporter.service.externalIPs | node-exporter service external IP addresses | [] |
| nodeExporter.service.loadBalancerIP | IP address to assign to load balancer (if supported) | "" |
| nodeExporter.service.loadBalancerSourceRanges | list of IP CIDRs allowed access to load balancer (if supported) | [] |
| nodeExporter.service.servicePort | node-exporter service port | 9100 |
| nodeExporter.service.type | type of node-exporter service to create | ClusterIP |
| pushgateway.enabled | If true, create pushgateway | true |
| pushgateway.name | pushgateway container name | pushgateway |
| pushgateway.image.repository | pushgateway container image repository | prom/pushgateway |
| pushgateway.image.tag | pushgateway container image tag | v0.4.0 |
| pushgateway.image.pullPolicy | pushgateway container image pull policy | IfNotPresent |
| pushgateway.extraArgs | Additional pushgateway container arguments | {} |
| pushgateway.ingress.enabled | If true, pushgateway Ingress will be created | false |
| pushgateway.ingress.annotations | pushgateway Ingress annotations | {} |
| pushgateway.ingress.hosts | pushgateway Ingress hostnames | [] |
| pushgateway.ingress.tls | pushgateway Ingress TLS configuration (YAML) | [] |
| pushgateway.nodeSelector | node labels for pushgateway pod assignment | {} |
| pushgateway.podAnnotations | annotations to be added to pushgateway pods | {} |
| pushgateway.tolerations | node taints to tolerate (requires Kubernetes >=1.6) | [] |
| pushgateway.replicaCount | desired number of pushgateway pods | 1 |
| pushgateway.resources | pushgateway pod resource requests & limits | {} |
| pushgateway.service.annotations | annotations for pushgateway service | {} |
| pushgateway.service.clusterIP | internal pushgateway cluster service IP | "" |
| pushgateway.service.externalIPs | pushgateway service external IP addresses | [] |
| pushgateway.service.loadBalancerIP | IP address to assign to load balancer (if supported) | "" |
| pushgateway.service.loadBalancerSourceRanges | list of IP CIDRs allowed access to load balancer (if supported) | [] |
| pushgateway.service.servicePort | pushgateway service port | 9091 |
| pushgateway.service.type | type of pushgateway service to create | ClusterIP |
| rbac.create | If true, create & use RBAC resources | true |
| server.name | Prometheus server container name | server |
| server.image.repository | Prometheus server container image repository | prom/prometheus |
| server.image.tag | Prometheus server container image tag | v2.1.0 |
| server.image.pullPolicy | Prometheus server container image pull policy | IfNotPresent |
| server.extraArgs | Additional Prometheus server container arguments | {} |
| server.prefixURL | The prefix slug at which the server can be accessed | |
| server.baseURL | The external url at which the server can be accessed | |
| server.extraHostPathMounts | Additional Prometheus server hostPath mounts | [] |
| server.extraConfigmapMounts | Additional Prometheus server configMap mounts | [] |
| server.extraSecretMounts | Additional Prometheus server Secret mounts | [] |
| server.configMapOverrideName | Prometheus server ConfigMap override where full-name is {{.Release.Name}}-{{.Values.server.configMapOverrideName}}; setting this value will prevent the default server ConfigMap from being generated | "" |
| server.ingress.enabled | If true, Prometheus server Ingress will be created | false |
| server.ingress.annotations | Prometheus server Ingress annotations | [] |
| server.ingress.hosts | Prometheus server Ingress hostnames | [] |
| server.ingress.tls | Prometheus server Ingress TLS configuration (YAML) | [] |
| server.nodeSelector | node labels for Prometheus server pod assignment | {} |
| server.tolerations | node taints to tolerate (requires Kubernetes >=1.6) | [] |
| server.persistentVolume.enabled | If true, Prometheus server will create a Persistent Volume Claim | true |
| server.persistentVolume.accessModes | Prometheus server data Persistent Volume access modes | [ReadWriteOnce] |
| server.persistentVolume.annotations | Prometheus server data Persistent Volume annotations | {} |
| server.persistentVolume.existingClaim | Prometheus server data Persistent Volume existing claim name | "" |
| server.persistentVolume.mountPath | Prometheus server data Persistent Volume mount root path | /data |
| server.persistentVolume.size | Prometheus server data Persistent Volume size | 8Gi |
| server.persistentVolume.storageClass | Prometheus server data Persistent Volume Storage Class | unset |
| server.persistentVolume.subPath | Subdirectory of Prometheus server data Persistent Volume to mount | "" |
| server.podAnnotations | annotations to be added to Prometheus server pods | {} |
| server.replicaCount | desired number of Prometheus server pods | 1 |
| server.resources | Prometheus server resource requests and limits | {} |
| server.service.annotations | annotations for Prometheus server service | {} |
| server.service.clusterIP | internal Prometheus server cluster service IP | "" |
| server.service.externalIPs | Prometheus server service external IP addresses | [] |
| server.service.loadBalancerIP | IP address to assign to load balancer (if supported) | "" |
| server.service.loadBalancerSourceRanges | list of IP CIDRs allowed access to load balancer (if supported) | [] |
| server.service.nodePort | Port to be used as the service NodePort (ignored if server.service.type is not NodePort) | 0 |
| server.service.servicePort | Prometheus server service port | 80 |
| server.service.type | type of Prometheus server service to create | ClusterIP |
| serviceAccounts.alertmanager.create | If true, create the alertmanager service account | true |
| serviceAccounts.alertmanager.name | name of the alertmanager service account to use or create | {{ prometheus.alertmanager.fullname }} |
| serviceAccounts.kubeStateMetrics.create | If true, create the kubeStateMetrics service account | true |
| serviceAccounts.kubeStateMetrics.name | name of the kubeStateMetrics service account to use or create | {{ prometheus.kubeStateMetrics.fullname }} |
| serviceAccounts.nodeExporter.create | If true, create the nodeExporter service account | true |
| serviceAccounts.nodeExporter.name | name of the nodeExporter service account to use or create | {{ prometheus.nodeExporter.fullname }} |
| serviceAccounts.pushgateway.create | If true, create the pushgateway service account | true |
| serviceAccounts.pushgateway.name | name of the pushgateway service account to use or create | {{ prometheus.pushgateway.fullname }} |
| serviceAccounts.server.create | If true, create the server service account | true |
| serviceAccounts.server.name | name of the server service account to use or create | {{ prometheus.server.fullname }} |
| server.terminationGracePeriodSeconds | Prometheus server Pod termination grace period | 300 |
| server.retention | (optional) Prometheus data retention | "" |
| serverFiles.alerts | Prometheus server alerts configuration | {} |
| serverFiles.rules | Prometheus server rules configuration | {} |
| serverFiles.prometheus.yml | Prometheus server scrape configuration | example configuration |
| networkPolicy.enabled | Enable NetworkPolicy | false |

Specify each parameter using the --set key=value[,key=value] argument to helm install. For example,

$ helm install stable/prometheus --name my-release \
    --set server.terminationGracePeriodSeconds=360

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,

$ helm install stable/prometheus --name my-release -f values.yaml

Tip: You can use the default values.yaml.

RBAC Configuration

Role and RoleBinding resources will be created automatically for the server and kubeStateMetrics services.

To set up RBAC manually, set the parameter rbac.create=false and specify the service account to be used for each service by setting serviceAccounts.{{ component }}.create to false and serviceAccounts.{{ component }}.name to the name of a pre-existing service account.
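As an illustration, a values.yaml fragment for the server component might look like the following; the service account name my-prometheus-sa is illustrative and must already exist in the namespace:

```yaml
rbac:
  create: false

serviceAccounts:
  server:
    create: false
    # Pre-existing service account to run the Prometheus server under
    # (illustrative name)
    name: my-prometheus-sa
```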

Tip: You can refer to the default *-clusterrole.yaml and *-clusterrolebinding.yaml files in templates to customize your own.

ConfigMap Files

AlertManager is configured through alertmanager.yml. This file (and any others listed in alertmanagerFiles) will be mounted into the alertmanager pod.

Prometheus is configured through prometheus.yml. This file (and any others listed in serverFiles) will be mounted into the server pod.
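Any additional keys under these maps become extra files mounted alongside the main configuration. A hedged sketch of a values.yaml fragment that ships a notification template next to alertmanager.yml (the route, receiver, and template names are illustrative):

```yaml
alertmanagerFiles:
  alertmanager.yml: |
    route:
      receiver: default    # illustrative catch-all route
    receivers:
      - name: default
  # Extra file, mounted into the alertmanager pod alongside alertmanager.yml
  notifications.tmpl: |
    {{ define "custom.title" }}[ALERT] {{ .CommonLabels.alertname }}{{ end }}
```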

Ingress TLS

If your cluster allows automatic creation/retrieval of TLS certificates (e.g. kube-lego), please refer to the documentation for that mechanism.

To manually configure TLS, first create/retrieve a key & certificate pair for the address(es) you wish to protect. Then create a TLS secret in the namespace:

kubectl create secret tls prometheus-server-tls --cert=path/to/tls.cert --key=path/to/tls.key

Include the secret's name, along with the desired hostnames, in the alertmanager/server Ingress TLS section of your custom values.yaml file:

server:
  ingress:
    ## If true, Prometheus server Ingress will be created
    ##
    enabled: true

    ## Prometheus server Ingress hostnames
    ## Must be provided if Ingress is enabled
    ##
    hosts:
      - prometheus.domain.com

    ## Prometheus server Ingress TLS configuration
    ## Secrets must be manually created in the namespace
    ##
    tls:
      - secretName: prometheus-server-tls
        hosts:
          - prometheus.domain.com

NetworkPolicy

Enabling Network Policy for Prometheus will secure connections to Alert Manager and Kube State Metrics by only accepting connections from Prometheus Server. All inbound connections to Prometheus Server are still allowed.

To enable network policy for Prometheus, install a networking plugin that implements the Kubernetes NetworkPolicy spec, and set networkPolicy.enabled to true.

If NetworkPolicy is enabled for Prometheus' scrape targets, you may also need to manually create a NetworkPolicy which allows the scrape traffic.
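A hedged sketch of such a policy for a single scrape target, assuming the target pods carry the label app: my-app and the Prometheus server pods carry app: prometheus and component: server (all label values and the metrics port are illustrative and depend on your release):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-scrape
spec:
  podSelector:
    matchLabels:
      app: my-app            # the scrape target (illustrative)
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: prometheus
              component: server
      ports:
        - port: 9102         # the target's metrics port (illustrative)
```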
