make honorTimestamps configurable for linkerd-proxy PodMonitor #14905

@bezarsnba

Description

What problem are you trying to solve?

When scraping linkerd-proxy metrics with Prometheus, samples are sometimes dropped due to duplicate timestamps with different values. This results in WARN logs like:

Error on ingesting samples with different value but same timestamp scrape_pool=podMonitor/linkerd/linkerd-proxy

Although this is expected Prometheus behavior, it creates operational noise, alert fatigue, and confusion during incident analysis, especially in large clusters with frequent pod restarts and short scrape intervals.

How should the problem be solved?

Allow users to optionally configure honorTimestamps for the linkerd-proxy PodMonitor.

For example:

podMonitor:
  honorTimestamps: false

or per endpoint:

podMonitor:
  metrics:
    honorTimestamps: false

This would let Prometheus use the scrape time instead of exporter-provided timestamps, preventing duplicate-timestamp drops while keeping the current behavior as the default.
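For reference, honorTimestamps is a per-endpoint field in the Prometheus Operator's PodMonitor CRD, so the generated resource could forward the value onto each scrape endpoint. A rough sketch (the port and path here are illustrative, not taken from the actual chart templates):

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: linkerd-proxy
  namespace: linkerd
spec:
  podMetricsEndpoints:
    - port: linkerd-admin        # illustrative proxy admin port name
      path: /metrics
      honorTimestamps: false     # proposed: rendered from Helm values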

Any alternatives you've considered?

Handling this entirely on the Prometheus side by manually patching the generated PodMonitor, but manual patches are lost whenever the resource is re-rendered.

How would users interact with this feature?

Users would enable the behavior via Helm values or configuration flags when installing Linkerd, for example:

podMonitor:
  honorTimestamps: false

This keeps backward compatibility and allows users to opt in only if they observe the issue.
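As a concrete (hypothetical) invocation, assuming the podMonitor values live in the linkerd-control-plane chart and the value path matches the snippet above:

```shell
# Hypothetical flag path; the exact chart and value name would follow
# from the implementation of this feature request.
helm upgrade linkerd-control-plane linkerd/linkerd-control-plane \
  --namespace linkerd \
  --reuse-values \
  --set podMonitor.honorTimestamps=false
```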

Would you like to work on this feature?

yes
