Releases: linkerd/linkerd2
edge-22.7.2
This release adds support for per-route authorization policy using the
AuthorizationPolicy and HttpRoute resources. It also adds a configurable
shutdown grace period to the proxy which can be used to ensure that proxy
graceful shutdown completes within a certain time, even if there are outstanding
open connections.
- Removed kube-system exclusions from watchers to fix service discovery for
  workloads in the kube-system namespace (thanks @JacobHenner)
- Added annotations to allow Linkerd extension deployments to be evicted by the
  autoscaler when necessary
- Added missing port in the Linkerd viz chart documentation (thanks @haswalt)
- Added support for per-route policy by supporting AuthorizationPolicy resources
  which target HttpRoute resources
- Fixed the `linkerd check` command crashing when unexpected pods are found in
  a Linkerd namespace
- Added a `config.linkerd.io/shutdown-grace-period` annotation to configure the
  proxy's maximum grace period for graceful shutdown
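As an illustration of the new per-route policy and shutdown grace period features, the sketch below shows an HttpRoute attached to a Server, an AuthorizationPolicy targeting that route, and the new pod annotation. All names, namespaces, and the `v1alpha1` API versions are assumptions for illustration, not taken from this release's documentation:

```yaml
# Hypothetical sketch: a route attached to a Server resource.
apiVersion: policy.linkerd.io/v1alpha1
kind: HTTPRoute
metadata:
  name: books-get
  namespace: booksapp
spec:
  parentRefs:
    - group: policy.linkerd.io
      kind: Server
      name: books-server          # assumed Server in the same namespace
  rules:
    - matches:
        - path:
            value: /books
          method: GET
---
# Authorize only mesh-authenticated clients for that specific route.
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: books-get-authn
  namespace: booksapp
spec:
  targetRef:
    group: policy.linkerd.io
    kind: HTTPRoute
    name: books-get
  requiredAuthenticationRefs:
    - group: policy.linkerd.io
      kind: MeshTLSAuthentication
      name: all-meshed             # assumed MeshTLSAuthentication resource
---
# The new shutdown grace period is set via a pod annotation.
apiVersion: v1
kind: Pod
metadata:
  name: example
  annotations:
    config.linkerd.io/shutdown-grace-period: "30s"
spec:
  containers:
    - name: app
      image: example/app:latest    # hypothetical image
```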
stable-2.11.4
This release includes a security improvement. When a user manually specified the
`policyValidator.keyPEM` setting, the value was incorrectly included in the
`linkerd-config` ConfigMap, erroneously exposing the private key to
ServiceAccounts with read access to that ConfigMap. Practically, this means
that the Linkerd proxy-injector, identity, and heartbeat Pods could read this
value. This should not have exposed the private key to other unauthorized users
unless additional RoleBindings were added outside of Linkerd. Nevertheless, we
recommend that users who manually set control plane certificates update the
credentials for the policy validator after upgrading Linkerd.
Additionally, a PodSecurityPolicy fix is included for installations where PSP
is enabled and `proxyInit.runAsRoot` is set to `true`.
edge-22.7.1
This release includes a security improvement. When a user manually specified the
`policyValidator.keyPEM` setting, the value was incorrectly included in the
`linkerd-config` ConfigMap, erroneously exposing the private key to service
accounts with read access to that ConfigMap. Practically, this means that the
Linkerd proxy-injector, identity, and heartbeat pods could read this value.
This should not have exposed the private key to other unauthorized users unless
additional role bindings were added outside of Linkerd. Nevertheless, we
recommend that users who manually set control plane certificates update the
credentials for the policy validator after upgrading Linkerd.
Additionally, the linkerd-multicluster extension has several fixes related to
fail fast errors during link watch restarts, improper label matching for
mirrored services, and proper cleanup of mirrored endpoints in certain
situations.
Lastly, the proxy can now retry gRPC requests that have responses with a
TRAILERS frame. A fix to reduce redundant load balancer updates should also
result in less connection churn.
- Changed unit tests to use the newly introduced `prommatch` package for
  asserting expected metrics (thanks @krzysztofdrys!)
- Fixed the Docker container runtime check to only run during `linkerd install`
  rather than `linkerd check --pre`
- Changed linkerd-multicluster's remote cluster watcher to assume the gateway is
  alive when starting, fixing fail fast errors during restarts
  (thanks @chenaoxd!)
- Added `matchLabels` and `matchExpressions` to linkerd-multicluster's Link CRD
- Fixed linkerd-multicluster's label selector to properly select resources that
  match the expected label value, rather than just the presence of the label
- Fixed linkerd-multicluster's cluster watcher to properly clean up endpoints
  belonging to remote headless services that are no longer mirrored
- Added the HttpRoute CRD which will be used by future policy features
- Fixed CNI plugin event processing where file updates could sometimes be
  skipped, leading to the update not being acknowledged
- Fixed redundant load balancer updates in the proxy that could cause
  unnecessary connection churn
- Fixed gRPC request retries for responses that contain a TRAILERS frame
- Fixed the dashboard's `linkerd check` failing due to missing RBAC for listing
  pods in the cluster
- Fixed API check that ensures access to the Server CRD (thanks @aatarasoff!)
- Changed `linkerd authz` to match the labels of pre-fetched Pods rather than
  making multiple API calls, resulting in a significant speed-up (thanks
  @aatarasoff!)
- Unset `policyValidator.keyPEM` in the `linkerd-config` ConfigMap
stable-2.11.3
This release pulls in several control plane and proxy fixes from the main
development branch. The linkerd-multicluster extension has several fixes
regarding incorrect label matching and resource cleanup. Additionally, a
long-standing panic has been fixed in the proxy.
- Fixed an error in `linkerd multicluster allow` which resulted in broken YAML
  output
- Fixed a potential panic in the proxy's outbound load balancer that could be
  triggered when the balancer processes many service discovery updates in a
  short period of time
- Fixed a class of DNS errors by ensuring the proxy falls back to A records when
  SRV resolution fails
- Fixed an issue where the proxy would pass along illegal headers from CONNECT
  responses
- Fixed several Helm labels to follow the Helm standards recommendation, which
  sometimes resulted in chart generation errors
- Fixed an issue where `linkerd check` did not skip Pods with a `NodeShutdown`
  status, resulting in incorrect errors
- Fixed the Docker container runtime check to only occur during
  `linkerd install` rather than `linkerd check`
- Fixed a class of fail fast errors that were occurring with
  linkerd-multicluster due to delayed gateway liveness probes
- Fixed linkerd-multicluster Endpoints not being deleted when their remote
  Service was no longer mirrored
- Fixed linkerd-multicluster's label selector to properly match the value of
  `mirror.linkerd.io/exported` rather than just its presence
edge-22.6.2
This edge release bumps the minimum supported Kubernetes version from v1.20
to v1.21, introduces some new changes, and includes a few bug fixes. Most
notably, a bug has been fixed in the proxy's outbound load balancer that could
cause panics, especially when the balancer would process many service discovery
updates in a short period of time. This release also fixes a panic in the
proxy-injector, and introduces a change that will include HTTP probe ports in
the proxy's inbound ports configuration, to be used for policy discovery.
- Fixed a bug in the proxy's outbound load balancer that could cause panics
  when many discovery updates were processed in short time periods
- Added `runtimeClassName` options to Linkerd's Helm chart (thanks @jtcarnes!)
- Introduced a change in the proxy-injector that will configure the inbound
  ports proxy configuration with the pod's probe ports (HTTPGet)
- Added godoc links in the project README file (thanks @spacewander!)
- Increased minimum supported Kubernetes version to `v1.21` from `v1.20`
- Fixed an issue where the proxy-injector would not emit events for resources
  that receive annotation patches but are skipped for injection
- Refactored `PublicIPToString` to handle both IPv4 and IPv6 addresses
  consistently (thanks @zhlsunshine!)
- Replaced the usage of branch with tags, and pinned the `cosign-installer`
  action to `v1` (thanks @saschagrunert!)
- Fixed an issue where the proxy-injector would panic if resources have an
  unsupported owner kind
edge-22.6.1
This edge release fixes an issue where Linkerd-injected pods could not be
evicted by Cluster Autoscaler. It also adds the `--crds` flag to
`linkerd check`, which validates that the Linkerd CRDs have been installed with
the proper versions.
The previously noisy "cluster networks can be verified" check has been replaced
with one that now verifies each running Pod IP is contained within the current
`clusterNetworks` configuration value.
Additionally, linkerd-viz is no longer required for linkerd-multicluster's
`gateways` command, allowing the Gateways API to be marked as deprecated for
2.12.
Finally, several security issues have been patched in the Docker images now that
the builds are pinned only to minor, rather than patch, versions.
- Replaced manual IP address parsing with functions available in the Go standard
  library (thanks @zhlsunshine!)
- Removed linkerd-multicluster's `gateway` command dependency on the linkerd-viz
  extension
- Fixed an issue where Linkerd-injected pods were prevented from being evicted
  by Cluster Autoscaler
- Added the `dst_target_cluster` metric to linkerd-multicluster's service-mirror
  controller probe traffic
- Added the `--crds` flag to `linkerd check` which validates that the Linkerd
  CRDs have been installed
- Removed the Docker image's hardcoded patch versions so that builds pick up
  patch releases without manual intervention
- Replaced the "cluster networks can be verified" check with a "cluster
  networks contains all pods" check which ensures that all currently running Pod
  IPs are contained by the current `clusterNetworks` configuration
- Added IPv6 compatible IP address generation in certain control plane
  components that were only generating IPv4 (thanks @zhlsunshine!)
- Deprecated linkerd-viz's `Gateways` API, which is no longer used by
  linkerd-multicluster
- Added the `prommatch` package for making programmatic Prometheus assertions in
  tests (thanks @krzysztofdrys!)
- Added the `runAsUser` configuration to extensions to fix a PodSecurityPolicy
  violation when CNI is enabled
edge-22.5.3
This edge release fixes a few proxy issues, improves the upgrade process, and
introduces an `isRetryable` option to Service Profiles. Also included are
updates to the bash scripts to ensure that they follow best practices.
- Polished the shell scripts (thanks @joakimr-axis)
- Introduced retries to Service Profiles based on the idempotency option of the
  method by adding an `isRetryable` function to the proto definition
  (thanks @mahlunar)
- Fixed proxy responses to CONNECT requests by removing the `content-length`
  and/or `transfer-encoding` headers from the response
- Fixed DNS lookups in the proxy to consistently use A records when SRV records
  cannot be resolved
- Added dynamic policy discovery to the proxy by evaluating traffic on ports
  not included in the `LINKERD2_PROXY_INBOUND_PORTS` environment variable
- Added logic to require that the Linkerd CRDs are installed when running
  the `linkerd upgrade` command
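As a sketch of the new retryability option, a Service Profile marks an idempotent route as retryable on the route definition itself. The service name, namespace, and route below are hypothetical:

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  # Service Profiles are named <service>.<namespace>.svc.cluster.local
  name: books.booksapp.svc.cluster.local
  namespace: booksapp
spec:
  routes:
    - name: GET /books
      condition:
        method: GET
        pathRegex: /books
      # Mark this idempotent route as safe for the proxy to retry.
      isRetryable: true
```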
edge-22.5.2
This edge release ships a few changes to the chart values, a fix for
multicluster headless services, and notable proxy features. HA functionality,
such as PDBs, deployment strategies, and pod anti-affinity, has been split
from the HA values and is now configurable for the control plane. On the proxy
side, non-HTTP traffic will now be forwarded on the outbound side within the
cluster when the proxy runs in ingress mode.
- Updated `ingress-mode` proxies to forward non-HTTP traffic within the cluster
  (protocol detection will always be attempted for outbound connections)
- Added a new proxy metric, `process_uptime_seconds_total`, to keep track of the
  number of seconds since the proxy started
- Fixed an issue with multicluster headless service mirroring where exported
  endpoints would be mirrored with a delay, or changes to the export label
  would be ignored
- Split HA functionality, such as PodDisruptionBudgets, into multiple
  configurable values (thanks @evan-hines-firebolt for the initial work)
edge-22.5.1
This edge release adds more flexibility to the MeshTLSAuthentication and
AuthorizationPolicy policy resources by allowing them to target entire
namespaces. It also fixes a race condition that occurs when multiple CNI
plugins are installed together, along with a number of other bugs.
- Added support for MeshTLSAuthentication resources to target an entire
  namespace, authenticating all ServiceAccounts in that namespace
- Fixed a panic in `linkerd install` when the `--ignore-cluster` flag is passed
- Fixed an issue where pods would fail to start when `enablePSP` and
  `proxyInit.runAsRoot` are set
- Added support for AuthorizationPolicy resources to target namespaces, applying
  to all Servers in that namespace
- Fixed a race condition where the Linkerd CNI configuration could be
  overwritten when multiple CNI plugins are installed
- Added test for opaque ports using Service and Pod IPs (thanks @krzysztofdrys!)
- Fixed an error in the linkerd-viz Helm chart in HA mode
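For illustration, targeting an entire namespace with MeshTLSAuthentication might look like the sketch below. The namespace and resource names are hypothetical, and the `v1alpha1` API version is an assumption:

```yaml
# Sketch: authenticate every ServiceAccount in the "emojivoto" namespace.
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  name: emojivoto-accounts
  namespace: emojivoto
spec:
  identityRefs:
    # Referencing a Namespace matches all ServiceAccounts within it.
    - kind: Namespace
      name: emojivoto
```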
edge-22.4.1
In order to support having custom resources in the default Linkerd installation,
the CLI install flow is now always a two-step process: `linkerd install --crds`
must be run first to install only the CRDs, and then `linkerd install` is run
to install everything else. This more closely aligns the CLI install flow with
the Helm install flow, where the CRDs are a separate chart. This also applies to
`linkerd upgrade`. Also, the `config` and `control-plane` sub-commands have been
removed from both `linkerd install` and `linkerd upgrade`.
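The two-step flow described above can be sketched as follows (assuming the `linkerd` and `kubectl` binaries are on the PATH and a cluster is reachable):

```shell
# Step 1: install only the Linkerd CRDs
linkerd install --crds | kubectl apply -f -

# Step 2: install the rest of the control plane
linkerd install | kubectl apply -f -

# Upgrades follow the same two-step pattern
linkerd upgrade --crds | kubectl apply -f -
linkerd upgrade | kubectl apply -f -
```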
On the proxy side, this release fixes an issue where proxies would not honor the
cluster's opaqueness settings for non-pod/service addresses. This could cause
protocol detection to be performed, for instance, when using off-cluster
databases.
This release also disables the use of regexes in Linkerd log filters (i.e., as
set by `LINKERD2_PROXY_LOG`). Malformed log directives could, in theory, cause a
proxy to stop responding.
The `helm.sh/chart` label in some of the CRDs had its formatting fixed, which
avoids issues when installing/upgrading through external tools that make use of
it, such as recent versions of Flux.
- Added `--crds` flag to install/upgrade and removed the config/control-plane
  stages
- Allowed the `AuthorizationPolicy` CRD to have an empty
  `requiredAuthenticationRefs` entry, which allows all traffic
- Introduced `nodeAffinity` config in all the charts for enhanced control over
  pod scheduling (thanks @michalrom089!)
- Introduced `resources`, `nodeSelector` and `tolerations` configs in the
  `linkerd-multicluster-link` chart for enhanced control over the service
  mirror deployment (thanks @utay!)
- Fixed formatting of the `helm.sh/chart` label in CRDs
- Updated container base images from buster to bullseye
- Added support for spaces in the `config.linkerd.io/opaque-ports` annotation
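As a sketch of the relaxed annotation parsing, opaque ports may now be listed with spaces after the commas. The service and port numbers below are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: db
  annotations:
    # Spaces after commas are now accepted in the port list.
    config.linkerd.io/opaque-ports: "3306, 4444"
spec:
  selector:
    app: mysql
  ports:
    - port: 3306
```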