
fix: add hash calculation #389


Open · wants to merge 1 commit into main

Conversation


@orca-security-us orca-security-us bot left a comment


Orca Security Scan Summary

| Check | Status | High | Medium | Low | Info |
| --- | --- | --- | --- | --- | --- |
| Infrastructure as Code | Passed | 0 | 0 | 0 | 0 |
| SAST | Passed | 0 | 0 | 0 | 0 |
| Secrets | Passed | 0 | 0 | 0 | 0 |
| Vulnerabilities | Passed | 0 | 0 | 0 | 0 |

@obs-gh-mattcotter (Collaborator)

@icco thanks for your pull request! Were you able to get this to work locally? When I try it, the subchart is unable to find the template since it looks for a relative path. Here's the error I see:

```
Error: INSTALLATION FAILED: template: agent/charts/prometheus-scraper/templates/deployment.yaml:35:12: executing "agent/charts/prometheus-scraper/templates/deployment.yaml" at <include "opentelemetry-collector.configTemplateChecksumAnnotation" .>: error calling include: template: agent/charts/cluster-events/templates/_helpers.tpl:239:22: executing "opentelemetry-collector.configTemplateChecksumAnnotation" at <include (print $.Template.BasePath "/" .Values.configMap.existingPath) .>: error calling include: template: no template "agent/charts/prometheus-scraper/templates/prometheus-scraper-configmap.yaml" associated with template "gotpl"
```

Seems like someone else has had this same issue when using the collector chart as a subchart:
open-telemetry/opentelemetry-helm-charts#1302 (comment)
If you have this working, let's figure out what is missing in my setup and we can get this feature enabled.
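For readers following the stack trace: the failing helper boils down to the standard Helm checksum-annotation idiom, reconstructed here as a sketch from the error message (the trailing `sha256sum` step is an assumption; the chart's real helper may differ):

```yaml
# Sketch of the checksum-annotation idiom implied by the error above.
# $.Template.BasePath resolves to the templates/ directory of the chart
# currently being rendered, so when this helper runs inside a subchart
# the constructed path apparently points under the subchart while the
# referenced ConfigMap template is registered elsewhere, and `include`
# finds no matching template.
annotations:
  checksum/config: {{ include (print $.Template.BasePath "/" .Values.configMap.existingPath) . | sha256sum }}
```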

@icco (Author) commented Jul 8, 2025

> @icco thanks for your pull request! Were you able to get this to work locally? When I try it, the subchart is unable to find the template since it looks for a relative path. Here's the error I see:
>
> Error: INSTALLATION FAILED: template: agent/charts/prometheus-scraper/templates/deployment.yaml:35:12: executing "agent/charts/prometheus-scraper/templates/deployment.yaml" at <include "opentelemetry-collector.configTemplateChecksumAnnotation" .>: error calling include: template: agent/charts/cluster-events/templates/_helpers.tpl:239:22: executing "opentelemetry-collector.configTemplateChecksumAnnotation" at <include (print $.Template.BasePath "/" .Values.configMap.existingPath) .>: error calling include: template: no template "agent/charts/prometheus-scraper/templates/prometheus-scraper-configmap.yaml" associated with template "gotpl"
>
> Seems like someone else has had this same issue when using the collector chart as a subchart: open-telemetry/opentelemetry-helm-charts#1302 (comment) If you have this working, let's figure out what is missing in my setup and we can get this feature enabled.

I didn't have a local test environment for this, so I didn't test it, but you're right: running `helm template`, I see the same error.

open-telemetry/opentelemetry-helm-charts#1737 also highlights this issue. I tried reading through https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#sharing-templates-with-subcharts and I'm not enough of a helm expert to understand how to take advantage of this. I wonder if we could make a fix to the otel helm chart to make this work.

Do you have a test env set up to test changes to the otel chart with your observe chart?

Or should we try the hash option that the issue I linked suggests as a workaround?
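For reference, one shape that hash option can take (a minimal sketch, not necessarily the linked issue's exact proposal): instead of `include`-ing a template by path, hash the values that feed the ConfigMap, which requires no cross-chart template lookup. The `.Values.config` key here is an assumption:

```yaml
# deployment.yaml (sketch): derive the restart-trigger checksum from the
# config values themselves rather than from a rendered template file.
# `.Values.config` is an assumed key; adapt it to the chart's layout.
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ .Values.config | toYaml | sha256sum }}
```

The trade-off: pods roll whenever those values change, but changes that reach the rendered ConfigMap from anywhere other than those values are missed.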

@obs-gh-mattcotter (Collaborator)

> I didn't have a local test environment for this, so I didn't test it, but you're right: running `helm template`, I see the same error.
>
> open-telemetry/opentelemetry-helm-charts#1737 also highlights this issue. I tried reading through https://helm.sh/docs/chart_template_guide/subcharts_and_globals/#sharing-templates-with-subcharts and I'm not enough of a helm expert to understand how to take advantage of this. I wonder if we could make a fix to the otel helm chart to make this work.
>
> Do you have a test env set up to test changes to the otel chart with your observe chart?
>
> Or should we try the hash option that the issue I linked suggests as a workaround?

@icco I looked into the possibility of sharing templates, and my most promising lead was that we could re-define opentelemetry-collector.configName and change the name of the template referenced in the pod volumes. However, I hit a similar problem of not being able to pass variables into a template definition. For example, something like this would be great:

```yaml
{{ $forwarderConfig := include "observe.daemonset.applyForwarderConfig" . }}

{{- define "opentelemetry-collector.configName" -}}
    {{- if eq .Chart.Name "forwarder" -}}
        forwarder-{{ print $forwarderConfig | sha256sum }}
    {{- else -}}
        {{ .Chart.Name }}
    {{- end -}}
{{- end }}
```

but the result is `Error: parse error at (agent/templates/_config.tpl:47): undefined variable "$forwarderConfig"`.
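For what it's worth, the usual way around that parse error is to pass the variable in through the template's context rather than reference it from the enclosing scope, since a `define` body only sees the context it is called with. A minimal sketch with an illustrative template name (the catch, as noted above, is that the subchart's own call sites still pass plain `.`, so this only helps for includes you control):

```yaml
{{/* Illustrative name; a define can't close over $forwarderConfig,
     but it can receive it via a dict context. */}}
{{- define "example.configName" -}}
{{- if eq .root.Chart.Name "forwarder" -}}
forwarder-{{ .forwarderConfig | sha256sum }}
{{- else -}}
{{ .root.Chart.Name }}
{{- end -}}
{{- end }}

{{/* Call site: build the context explicitly. */}}
{{- $forwarderConfig := include "observe.daemonset.applyForwarderConfig" . }}
{{ include "example.configName" (dict "root" $ "forwarderConfig" $forwarderConfig) }}
```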

The workaround of always restarting by adding a random annotation seems solid to me if that's the behavior you want. You can do that with our current helm chart by adding this to your values.yaml (or passing it as a separate values file):

```yaml
cluster-events:
  podAnnotations:
    randAlphaNum/autoRestartEverytime: "{{ randAlphaNum 5 }}"

prometheus-scraper:
  podAnnotations:
    randAlphaNum/autoRestartEverytime: "{{ randAlphaNum 5 }}"

cluster-metrics:
  podAnnotations:
    randAlphaNum/autoRestartEverytime: "{{ randAlphaNum 5 }}"

node-logs-metrics:
  podAnnotations:
    randAlphaNum/autoRestartEverytime: "{{ randAlphaNum 5 }}"

monitor:
  podAnnotations:
    randAlphaNum/autoRestartEverytime: "{{ randAlphaNum 5 }}"

forwarder:
  podAnnotations:
    randAlphaNum/autoRestartEverytime: "{{ randAlphaNum 5 }}"
```
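One design note on this workaround: `{{ randAlphaNum 5 }}` produces a new value on every render, so these annotations roll the pods on every `helm upgrade`, whether or not the config actually changed.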

I was also able to find a workaround using ConfigMap hashing when deploying with Kustomize, but that's a completely separate tool and wouldn't help unless you happen to be using it.
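For anyone who is on Kustomize, the relevant built-in is `configMapGenerator`, which appends a hash of the data to the ConfigMap name and rewrites references to it, so pods roll exactly when the data changes. A minimal sketch with assumed file names:

```yaml
# kustomization.yaml (sketch). The generated ConfigMap gets a
# content-hash suffix (e.g. forwarder-config-abc123), and Kustomize
# updates workloads that reference it by name.
configMapGenerator:
  - name: forwarder-config
    files:
      - relay.yaml
```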
