Releases: linkerd/linkerd2
v18.8.4
v18.8.3
- Web UI
  - Improved Kubernetes resource navigation in the sidebar
  - Improved resource detail pages:
    - New live request view
    - New success rate graphs
- CLI
  - `tap` and `top` have been improved to sample up to 100 RPS
- Control plane
  - Injected proxy containers now have readiness and liveness probes enabled
Special thanks to @sourishkrout for contributing a web readability fix!
v18.8.2
- CLI
  - New `linkerd top` command has been added, which displays live traffic stats (usage examples follow this list)
  - `linkerd check` has been updated with additional checks, and now supports a `--pre` flag for running pre-install checks
  - `linkerd check` and `linkerd dashboard` now support a `--wait` flag that tells the CLI to wait for the control plane to become ready
  - `linkerd tap` now supports a `--output` flag to display output in a wide format that includes src and dst resources and namespaces
  - `linkerd stat` includes additional validation for command line inputs
  - All commands that talk to the Linkerd API now show better error messages when the control plane is unavailable
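As a quick sketch of how the new commands and flags compose (resource names like `deployment/web` are placeholders, and exact output will vary):

```sh
# Verify a cluster before installing, then wait for the control plane
linkerd check --pre
linkerd check --wait

# Watch live traffic stats, and tap requests with the wide output format
linkerd top deployment
linkerd tap deployment/web --output wide
```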
- Web UI
  - New: individual resources can now be viewed on a resource detail page, which includes stats for the resource itself and its nearest neighbors
  - Experimental web-based Top interface accessible at `/top`, aggregates tap data in real time to display live traffic stats
  - The `/tap` page has multiple improvements, including displaying additional src/dst metadata, improved form controls, and better latency formatting
  - All resource tables have been updated to display meshed pod counts, as well as an icon linking to the resource's Grafana dashboard if it is meshed
  - The UI now shows more useful information when server errors are encountered
- Proxy
  - The `h2` crate fixed an HTTP/2 window management bug
  - The `rustls` crate fixed a bug that could improperly fail TLS streams
- Control Plane
  - The tap server now hydrates metadata for both sources and destinations
v18.8.1
- Web UI
  - New Tap UI makes it possible to query & inspect requests from the browser!
- Proxy
  - New: automatic, transparent HTTP/2 multiplexing of HTTP/1 traffic reduces the cost of short-lived HTTP/1 connections
- Control Plane
  - Improved: `linkerd inject` now supports injecting all resources in a folder (see the example after this list)
  - Fixed: `linkerd tap` no longer crashes when there are many pods
  - New: Prometheus now only scrapes proxies belonging to its own linkerd install
  - Fixed: Prometheus metrics collection for clusters with >100 pods
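As an illustration of the folder-injection improvement (the `./k8s/` directory is a placeholder for wherever your manifests live):

```sh
# Inject the proxy into every resource defined under ./k8s/ and apply them
linkerd inject ./k8s/ | kubectl apply -f -
```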
Special thanks to @ihcsim for contributing the inject improvement!
v18.7.3
Linkerd2 v18.7.3 completes the rebranding from Conduit to Linkerd2, and improves
overall performance and stability.
- Proxy
  - Improved CPU utilization by ~20%
- Web UI
  - Experimental `/tap` page now supports additional filters
- Control Plane
  - Updated all k8s.io dependencies to 1.11.1
v18.7.2
Linkerd2 v18.7.2 introduces new stability features as we work toward production readiness.
You can easily install this release (and others!). Simply:

```sh
curl https://run.conduit.io/install\?v18.7.2 | sh
linkerd install | kubectl apply -f -
linkerd dashboard
```
Release notes:
- Control Plane
  - Breaking change: injected pod labels have been renamed to be more consistent with Kubernetes; previously injected pods must be re-injected with the new version of the linkerd CLI in order to work with the updated control plane (see the example after this list)
  - The "ca-bundle-distributor" deployment has been renamed to "ca"
- Proxy
  - Fixed: HTTP/1.1 connections were not properly reused, leading to elevated latencies and CPU load
  - Fixed: the `process_cpu_seconds_total` metric was calculated incorrectly
- Web UI
  - New per-namespace application topology graph
  - Experimental web-based Tap interface accessible at `/tap`
  - Updated favicon to the Linkerd logo
v18.7.1
Linkerd2 v18.7.1 is the first release of Linkerd2, which was formerly hosted at https://github.com/runconduit/conduit.
This is a beta release. It is the first of many as we work towards a GA release. See the blog post for more details on where this is all going.
The artifacts here are the CLI binaries. To install Linkerd2 on your Kubernetes cluster, download the appropriate binary, rename it to `linkerd`, and run `linkerd install | kubectl apply -f -`.
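For example, on Linux the steps might look like the following (the exact binary name depends on the artifact you download for your platform):

```sh
# Download the CLI binary for your platform, then rename and mark it executable
mv linkerd2-cli-v18.7.1-linux linkerd
chmod +x linkerd

# Install the control plane into the cluster your kubectl context points at
./linkerd install | kubectl apply -f -
```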
- Packaging
  - Introduce new date-based versioning scheme, `vYY.M.n`
  - Move all Docker images to the `gcr.io/linkerd-io` repo
- User Interface
  - Update branding to reference Linkerd throughout
  - The CLI is now called `linkerd`
- Production Readiness
  - Fix issue with Destination service sending back incomplete pod metadata
  - Fix high CPU usage during proxy shutdown
  - ClusterRoles are now unique per Linkerd install, allowing multiple instances to be installed in the same Kubernetes cluster
v0.5.0
Conduit v0.5.0 introduces a new, experimental feature that automatically
enables Transport Layer Security between Conduit proxies to secure
application traffic. It also adds support for HTTP protocol upgrades, so
applications that use WebSockets can now benefit from Conduit.
- Security
  - New: `conduit install --tls=optional` enables automatic, opportunistic TLS. See the docs for more info. (A usage sketch follows this list.)
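A minimal sketch of enabling opportunistic TLS at install time, assuming the standard install-and-apply flow shown in earlier releases:

```sh
# Install Conduit with automatic, opportunistic TLS between proxies
conduit install --tls=optional | kubectl apply -f -
```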
- Production Readiness
  - The proxy now transparently supports HTTP protocol upgrades to support, for instance, WebSockets.
  - The proxy now seamlessly forwards HTTP `CONNECT` streams.
  - Controller services are now configured with liveness and readiness probes.
- User Interface
  - `conduit stat` now supports a virtual `authority` resource that aggregates traffic by the `:authority` (or `Host`) header of an HTTP request (see the example after this list).
  - `dashboard`, `stat`, and `tap` have been updated to describe TLS state for traffic.
  - `conduit tap` now has more detailed information, including the direction of each message (outbound or inbound).
  - `conduit stat` now more accurately records histograms for low-latency services.
  - `conduit dashboard` now includes error messages when a Conduit-enabled pod fails.
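For illustration, querying the new virtual resource might look like this (the namespace name is a placeholder):

```sh
# Show traffic stats aggregated by the :authority (or Host) header
conduit stat authority -n emojivoto
```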
- Internals
  - Prometheus has been upgraded to v2.3.1.
  - A potential live-lock has been fixed in HTTP/2 servers.
  - `conduit tap` could crash due to a null-pointer access. This has been fixed.
v0.4.4
Conduit v0.4.4 continues to improve production suitability and sets up internals for the
upcoming v0.5.0 release.
- Production Readiness
  - The destination service has been mostly rewritten to improve safety and correctness, especially during controller initialization.
  - Readiness and liveness checks have been added for some controller components.
  - RBAC settings have been expanded so that Prometheus can access node-level metrics.
- User Interface
  - Ad blockers like uBlock prevented the Conduit dashboard from fetching API data. This has been fixed.
  - The UI now highlights pods that have failed to start a proxy.
- Internals
  - Various dependency upgrades, including Rust 1.26.2.
  - TLS testing continues to bear fruit, precipitating stability improvements to dependencies like Rustls.
Special thanks to @alenkacz for improving docker build times!
v0.4.3
Conduit v0.4.3 continues progress towards production readiness. It features a new
latency-aware load balancer.
- Production Readiness
  - The proxy now uses a latency-aware load balancer for outbound requests. This implementation is based on Finagle's Peak-EWMA balancer, which has been proven to significantly reduce tail latencies. This is the same load balancing strategy used by Linkerd. (A rough sketch of the idea follows this list.)
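As a rough, hedged sketch of the Peak-EWMA idea (not the proxy's exact code): each endpoint keeps a latency estimate that jumps to observed peaks immediately and decays exponentially between observations, and the balancer prefers the endpoint with the lowest estimated cost. All symbols below are illustrative:

```latex
% r: observed RTT, \Delta t: time since last observation, \tau: decay constant
L \leftarrow
  \begin{cases}
    r & \text{if } r > L \text{ (jump to peaks immediately)} \\
    e^{-\Delta t/\tau} L + (1 - e^{-\Delta t/\tau})\, r & \text{otherwise (decay between peaks)}
  \end{cases}
\qquad
\mathrm{cost} = L \cdot (\mathrm{pending} + 1)
```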
- User Interface
  - `conduit stat` is now slightly more predictable in the way it outputs things, especially for commands like `watch conduit stat all --all-namespaces`.
  - Failed and completed pods are no longer shown in stat summary results.
- Internals
  - The proxy now supports some TLS configuration, though these features remain disabled and undocumented pending further testing and instrumentation.
Special thanks to @ihcsim for contributing his first PR to the project and to @roanta for
discussing the Peak-EWMA load balancing algorithm with us.