Releases: linkerd/linkerd2
edge-23.1.1
This edge release fixes a caching issue in the destination controller, converts
deprecated policy resources, and introduces several changes to how the proxy
works.
A bug in the destination controller that could potentially lead to stale pods
being considered in the load balancer has been fixed.
Several Linkerd extensions were still using the now deprecated
ServerAuthorization resource. These instances have now been converted to using
AuthorizationPolicy. Additionally, removed several policy resources that
authenticated probes, since probes are now authenticated by default.
As part of ongoing policy work, there are several changes with how the proxy
works. Routes are now lazily initialized so that service profile routes will
not show up in metrics until the route is used. Furthermore, the proxy’s
traffic splitting behavior has changed so that only available resources are
used, resulting in fewer failfast errors.
Finally, this edge release contains a number of fixes and improvements from our
contributors.
- Converted `ServerAuthorization` resources to `AuthorizationPolicy` resources
  in Linkerd extensions
- Removed policy resources bound to admin servers in extensions (previously
  these resources were used to authorize probes, which are now authorized by
  default)
- Added a `resources` field to the linkerd-cni chart (thanks @jcogilvie!)
- Fixed an issue in the CLI where `--identity-external-ca` would set an
  incorrect field (thanks @anoxape!)
- Fixed an issue in the destination controller's cache that could result in
  stale endpoints when using EndpointSlice objects
- Added namespace to namespace-metadata resources in Helm (thanks @joebowbeer!)
- Added support for Pod Security Admission (Pod Security Policy resources are
  still supported but disabled by default)
- Changed routes to be initialized lazily: Service Profile routes will no
  longer show up in metrics until the route is used (default routes are always
  available when no Service Profile is defined for a service)
- Changed the proxy's behavior when traffic splitting so that only services
  that are not in failfast are used; this enables the proxy to manage
  failover without external coordination
- Updated tokio (the async runtime) in the proxy, which should reduce CPU
  usage, especially for the proxy's pod-local (i.e. same network namespace)
  communication
- Fixed an issue where `linkerd viz tap` would display wrong latency/duration
  values (thanks @olegy2008!)
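The `ServerAuthorization`-to-`AuthorizationPolicy` conversion mentioned above roughly follows this shape. This is a hypothetical sketch: the resource names and the `web-http` Server are illustrative, not taken from the actual extension manifests.

```yaml
# Deprecated form: authorizes meshed clients against a Server directly.
apiVersion: policy.linkerd.io/v1beta1
kind: ServerAuthorization
metadata:
  name: web-authz          # illustrative name
spec:
  server:
    name: web-http         # illustrative Server
  client:
    meshTLS:
      identities: ["*"]
---
# Replacement: an AuthorizationPolicy targeting the same Server, with the
# client requirement factored out into a MeshTLSAuthentication.
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: web-authz
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: web-http
  requiredAuthenticationRefs:
    - group: policy.linkerd.io
      kind: MeshTLSAuthentication
      name: any-meshed
---
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  name: any-meshed
spec:
  identities: ["*"]
```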
stable-2.12.3
This stable release is packed with fixes to both the core Linkerd
controllers and the extensions.
- CLI
  - Fixed `linkerd check` failing when the cluster had services of type
    ExternalName
  - Fixed `linkerd multicluster install` not honoring the `gateway.UID` setting
  - Fixed the `linkerd upgrade --from-manifests` flag
- Destination Controller
  - Fixed race condition in the destination controller
  - Fixed issue in the destination controller where `hostPort` mappings were
    being ignored
- linkerd-proxy-init
  - Set the `noop` init container user to be the same as `proxy-init`'s to
    avoid errors when the security context disallows running as root
  - Introduced the `proxyInit.privileged` setting to allow running
    `linkerd-proxy-init` without restrictions when required
  - Added port 6443 to default skipped ports to bypass the proxy when eBPF
    CNIs override the API Server packet destination
- Extensions
  - Removed unnecessary `proxyProtocol` restriction in the multicluster
    gateway Server (thanks @psmit!)
  - Added "Exists" toleration to the `linkerd-cni` DaemonSet to have it
    installed by default on tainted nodes
  - Made dashboard loading more robust in the presence of browser plugins
    injecting script tags (thanks @junnplus!)
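The "Exists" toleration added to the `linkerd-cni` DaemonSet matches every taint, which is what lets the DaemonSet schedule onto tainted nodes by default. In a pod spec, the minimal form is:

```yaml
# A toleration with operator Exists and no key tolerates all taints,
# so the DaemonSet's pods can be scheduled on any node in the cluster.
tolerations:
  - operator: Exists
```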
edge-22.12.1
This edge release introduces static and dynamic port overrides for CNI eBPF
socket-level load balancing. In certain installations when CNI plugins run in
eBPF mode, socket-level load balancing rewrites packet destinations to port
6443; as with 443 already, this port is now skipped as well on control plane
components so that they can communicate with the Kubernetes API before their
proxies are running.
Additionally, a potential panic and false warning have been fixed in the
destination controller.
- Updated linkerd-jaeger's collector to expose port 4318 in order to support
  HTTP alongside gRPC (thanks @uralsemih!)
- Added a `proxyInit.privileged` setting to control whether the `proxy-init`
  initContainer runs as a privileged process
- Fixed a potential panic in the destination controller caused by concurrent
  writes when dealing with Endpoint updates
- Fixed false warning when looking up HostPort mappings on Pods
- Added static and dynamic port overrides for CNI eBPF to work with
  socket-level load balancing
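The new `proxyInit.privileged` setting can be supplied via Helm values. A minimal sketch, assuming a Helm-based install (the values-file layout shown is only the fragment relevant to this setting):

```yaml
# values.yaml: run proxy-init as a fully privileged container, e.g. on
# clusters whose runtime rejects the default restricted capability set.
proxyInit:
  privileged: true
```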
edge-22.11.3
This edge release fixes connection errors to pods that use hostPort
configurations. The CNI network-validator init container features
improved error logging, and the default linkerd-cni DaemonSet
configuration is updated to tolerate all node taints so that the CNI
runs on all nodes in a cluster.
- Fixed the `destination` service to properly discover targets using a
  `hostPort` different than their `containerPort`, which was causing 502
  errors
- Upgraded the `network-validator` with better logging, allowing users to
  determine whether failures occur as a result of their environment or the
  tool itself
- Added default `Exists` toleration to the `linkerd-cni` DaemonSet, allowing
  it to be deployed on all nodes by default, regardless of taints
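The `hostPort` fix above concerns pods whose host-side port differs from the container port, as in this hypothetical pod-spec fragment; previously the destination service could resolve such targets to the wrong port, producing 502s:

```yaml
# The container listens on 80 but is exposed on the node at 8080;
# discovery must return the hostPort mapping, not the containerPort.
ports:
  - containerPort: 80
    hostPort: 8080
```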
edge-22.11.2
This edge release introduces the use of the Kubernetes metadata API in the
proxy-injector and tap-injector components. This can reduce the IO and memory
footprint for those components as they now only need to track the metadata for
certain resources, rather than the entire resource itself. Similar changes will
be made for the destination component in an upcoming release.
- Bumped HTTP dependencies to fix a potential deadlock in HTTP/2 clients
- Changed the proxy-injector and tap-injector components to use the metadata API
which should result in less memory consumption
edge-22.11.1
This edge release ships a few fixes in Linkerd's dashboard and the
multicluster extension. Additionally, a regression has been fixed in the CLI
that blocked upgrades from versions older than 2.12.0 due to missing CRDs
(even if the CRDs were present in-cluster). Finally, the release includes
changes to the Helm charts to allow for arbitrary (user-provided) labels on
Linkerd workloads.
- Fixed an issue in the CLI where upgrades from any version prior to
  stable-2.12.0 would fail when using the `--from-manifests` flag
- Removed un-injectable namespaces, such as kube-system, from unmeshed
  resource notifications in the dashboard (thanks @MoSattler!)
- Fixed an issue where the dashboard would respond to requests with 404 due
  to wrong root paths in the HTML script (thanks @junnplus!)
- Removed the `proxyProtocol` field in the multicluster gateway policy; this
  has the effect of changing the protocol from 'HTTP/1.1' to 'unknown'
  (thanks @psmit!)
- Fixed the multicluster gateway UID when installing through the CLI; prior
  to this change the `runAsUser` field would be empty
- Changed the Helm chart for the control plane and all extensions to support
  arbitrary labels on resources (thanks @bastienbosser!)
edge-22.10.3
This edge release adds `network-validator`, a new init container to be used
when CNI is enabled. `network-validator` ensures that local iptables rules
are working as expected, and validates this before `linkerd-proxy` starts.
It replaces the `noop` container, runs as `nobody`, and drops all
capabilities before starting.
- Validate CNI `iptables` configuration during pod startup
- Fix the "cluster networks contains all services" check failing for services
  with no ClusterIP
- Remove kubectl version check from `linkerd check` (thanks @ziollek!)
- Set `readOnlyRootFilesystem: true` in the viz chart (thanks @mikutas!)
- Fix `linkerd multicluster install` by re-adding the `pause` container image
  in the chart
- Fix hardcoded image value in the linkerd-viz `namespace-metadata.yml`
  template (thanks @bastienbosser!)
stable-2.12.2
This stable release fixes an issue with CNI chaining that was preventing the
Linkerd CNI plugin from working with other CNI plugins such as Cilium. It also
fixes some sections of the Viz dashboard appearing blank, and adds an optional
PodMonitor resource to the Helm chart to enable easier integration with the
Prometheus Operator. Several other fixes are included.
- Proxy
  - Fixed proxies emitting some duplicate inbound metrics
- Control Plane
  - Fixed handling of `.conf` files in the CNI plugin so that the Linkerd CNI
    plugin can be used alongside other CNI plugins such as Cilium
  - Added a noop init container to injected pods when the CNI plugin is
    enabled to prevent certain scenarios where a pod can get stuck without an
    IP address
  - Fixed the `NotIn` label selector operator in the policy resources being
    erroneously treated as `In`
  - Fixed a bug where the `config.linkerd.io/proxy-version` annotation could
    be empty
- CLI
  - Added a `linkerd diagnostics policy` command to inspect Linkerd policy
    state
  - Added a check that ClusterIP services are in the cluster networks
  - Expanded the `linkerd authz` command to display AuthorizationPolicy
    resources that target namespaces (thanks @aatarasoff!)
  - Fixed warning logic in the "linkerd-viz ClusterRoles exist" and
    "linkerd-viz ClusterRoleBindings exist" checks in `linkerd viz check`
  - Fixed the CLI ignoring the `--api-addr` flag (thanks @mikutas!)
- Helm
  - Added an optional PodMonitor resource to the main Helm chart (thanks
    @jaygridley!)
- Dashboard
  - Fixed the dashboard sections Tap, Top, and Routes appearing blank (thanks
    @MoSattler!)
  - Updated Grafana dashboards to use variable duration parameter so that
    they can be used when Prometheus has a longer scrape interval (thanks
    @TarekAS)
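The optional PodMonitor added to the Helm chart lets the Prometheus Operator discover and scrape meshed pods. A generic sketch of what such a resource looks like; the selector labels and port name here are illustrative assumptions, not the chart's actual values:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: linkerd-proxy            # illustrative name
  namespace: linkerd
spec:
  selector:
    matchLabels:
      linkerd.io/control-plane-ns: linkerd   # illustrative selector
  podMetricsEndpoints:
    - port: linkerd-admin                    # illustrative port name
```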
edge-22.10.2
This edge release fixes an issue with CNI chaining that was preventing the
Linkerd CNI plugin from working with other CNI plugins such as Cilium. It also
includes several other fixes.
- Updated Grafana dashboards to use variable duration parameter so that they
  can be used when Prometheus has a longer scrape interval (thanks @TarekAS)
- Fixed handling of `.conf` files in the CNI plugin so that the Linkerd CNI
  plugin can be used alongside other CNI plugins such as Cilium
- Added a `linkerd diagnostics policy` command to inspect Linkerd policy state
- Added a check that ClusterIP services are in the cluster networks
- Added a noop init container to injected pods when the CNI plugin is enabled
  to prevent certain scenarios where a pod can get stuck without an IP address
- Fixed a bug where the `config.linkerd.io/proxy-version` annotation could be
  empty
edge-22.10.1
This edge release fixes some sections of the Viz dashboard appearing blank, and
adds an optional PodMonitor resource to the Helm chart to enable easier
integration with the Prometheus Operator. It also includes many fixes submitted
by our contributors.
- Fixed the dashboard sections Tap, Top, and Routes appearing blank (thanks
  @MoSattler!)
- Added an optional PodMonitor resource to the main Helm chart (thanks
  @jaygridley!)
- Fixed the CLI ignoring the `--api-addr` flag (thanks @mikutas!)
- Expanded the `linkerd authz` command to display AuthorizationPolicy
  resources that target namespaces (thanks @aatarasoff!)
- Fixed the `NotIn` label selector operator in the policy resources being
  erroneously treated as `In`
- Fixed warning logic around the "linkerd-viz ClusterRoles exist" and
  "linkerd-viz ClusterRoleBindings exist" checks in `linkerd viz check`
- Fixed proxies emitting some duplicate inbound metrics