Transitioning from Endpoints to EndpointSlices in Kubernetes

Since the initial alpha release of EndpointSlices (KEP-752) in v1.16 and their general availability as of v1.21, Kubernetes networking has steadily moved away from the legacy `Endpoints` API. Modern service features such as dual-stack networking and advanced traffic distribution are enabled exclusively through EndpointSlices. As a result, service proxies, in-cluster gateways, and most custom controllers have been updated to consume `discovery.k8s.io/v1` `EndpointSlice` resources rather than `v1` `Endpoints`. Today, the `Endpoints` API exists primarily for backward compatibility with existing workloads and automation scripts.
With Kubernetes 1.33, the `Endpoints` API is officially deprecated. Any read or write operation against `v1` `Endpoints` now emits a deprecation warning from the API server, steering users towards the `EndpointSlice` API. Meanwhile, KEP-4974 lays out plans to remove the requirement for the Endpoints controller from the Kubernetes conformance tests, since most contemporary clusters no longer rely on it.
According to the Kubernetes deprecation policy, the `Endpoints` type may linger indefinitely in a read-only or warning-only mode. However, users with automation, CI/CD steps, or controllers that still query or generate `Endpoints` should migrate to EndpointSlices to avoid future breakage, take advantage of new network features, and reduce runtime overhead.
Notes on Migrating from Endpoints to EndpointSlices
Consuming EndpointSlices Rather Than Endpoints
The most significant shift when consuming EndpointSlices is that a single Service can map to multiple `EndpointSlice` objects. In contrast, every Service with a selector produces exactly one `Endpoints` object named identically to the Service.
```
$ kubectl get endpoints myservice
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME        ENDPOINTS         AGE
myservice   10.180.3.17:443   2h

$ kubectl get endpointslice -l kubernetes.io/service-name=myservice
NAME              ADDRESSTYPE   PORTS   ENDPOINTS         AGE
myservice-7vzhx   IPv4          443     10.180.3.17       35s
myservice-jcv8s   IPv6          443     2001:db8:123::5   35s
```
Above, the dual-stack Service produces two slices: one for IPv4 and one for IPv6. The old `Endpoints` object only shows addresses from the cluster's primary IP family. The EndpointSlice controller also splits slices by port definition and, by default, caps each slice at 100 endpoints to keep individual objects small. Because slice names are generated with unique suffixes, consumers should list slices by label:

```
kubectl get endpointslice -l kubernetes.io/service-name=myservice
```
Below are the key scenarios where multiple slices arise (a client-go listing sketch follows this list):

- IP Family Separation: Each `EndpointSlice` can contain only one address family. Dual-stack Services therefore yield separate slices for IPv4 and IPv6.
- Port Changes During Rollouts: If you roll out a change that modifies container ports (for example, port 80 to 8080), Kubernetes generates distinct slices for old-port and new-port endpoints until the rollout completes.
- Endpoint Count Limits: To prevent single large objects, the controller splits endpoints into multiple slices once a Service exceeds 100 endpoints per slice.
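
For programmatic consumers, the same label-based lookup applies. The following Go sketch is a minimal example using client-go; it assumes an in-cluster configuration and a Service named myservice in the default namespace, and simply prints the addresses whose ready condition is true across all slices:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the program runs inside the cluster; swap in a kubeconfig-based
	// setup for local testing.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A single Service can map to many EndpointSlices, so list by the
	// kubernetes.io/service-name label instead of fetching one object by name.
	slices, err := client.DiscoveryV1().EndpointSlices("default").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "kubernetes.io/service-name=myservice",
	})
	if err != nil {
		panic(err)
	}

	for _, slice := range slices.Items {
		for _, ep := range slice.Endpoints {
			// A nil Ready condition should be interpreted as ready, per the API docs.
			if ep.Conditions.Ready == nil || *ep.Conditions.Ready {
				fmt.Println(slice.AddressType, ep.Addresses)
			}
		}
	}
}
```

Listing by label rather than fetching a single named object keeps consumers correct when the controller adds or removes slices as endpoints scale.
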
Generating EndpointSlices Rather Than Endpoints
For controllers or YAML definitions that emit endpoint data, migrating to EndpointSlices is straightforward. The schema changes slightly, moving from `subsets` to a top-level `endpoints` array with per-endpoint `conditions.ready` flags, and it requires adding a label and an `addressType`. Example:
```yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  generateName: myservice-
  labels:
    kubernetes.io/service-name: myservice
addressType: IPv4
endpoints:
  - addresses: ["10.180.3.17"]
    nodeName: node-4
    conditions:
      ready: true
  - addresses: ["10.180.5.22"]
    nodeName: node-9
    conditions:
      ready: true
  - addresses: ["10.180.6.6"]
    nodeName: node-8
    conditions:
      ready: false
ports:
  - name: https
    protocol: TCP
    port: 443
```
Key differences:

- You must specify an `addressType` (`IPv4` or `IPv6`).
- The `kubernetes.io/service-name` label links the slice to its Service.
- Each `endpoints` entry includes per-endpoint conditions (ready/not-ready), replacing the separate `addresses` and `notReadyAddresses` arrays used by `Endpoints`.
- Large endpoint sets are split automatically into multiple slices by the built-in controller, so no manual splitting is needed unless you are implementing a custom controller (see the sketch after this list).
Performance Considerations
EndpointSlices reduce API server memory and bandwidth consumption by limiting object size. Benchmarks from the Kubernetes SIG-Network indicate up to a 50% reduction in watch event payloads in large clusters (10,000+ endpoints). Smaller JSON documents translate into less CPU usage on kube-apiserver and lower latency in kube-proxy and custom controllers that watch services at scale.
Experimental tests show that on a 5,000-node dual-stack cluster, EndpointSlice-based kube-proxy consumes roughly 30% less CPU than the legacy Endpoints watch, thanks to incremental updates and smaller list sizes. Note that the `EndpointSliceProxying` feature gate that once governed this behavior graduated to GA in v1.22 and was removed in v1.25, so on v1.33 clusters kube-proxy consumes EndpointSlices unconditionally; toggling the gate is only relevant on much older releases.
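
To benefit from the smaller objects and incremental watch events in your own components, consume EndpointSlices through a shared informer rather than periodic relists. A minimal sketch, assuming an in-cluster client and no label filtering, looks like this:

```go
package main

import (
	"fmt"
	"time"

	discoveryv1 "k8s.io/api/discovery/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The shared informer keeps a local cache and delivers incremental
	// add/update/delete events, so each slice change is a small payload
	// instead of a full relist of every endpoint behind the Service.
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	informer := factory.Discovery().V1().EndpointSlices().Informer()

	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			slice := newObj.(*discoveryv1.EndpointSlice)
			fmt.Printf("slice %s/%s now has %d endpoints\n",
				slice.Namespace, slice.Name, len(slice.Endpoints))
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)

	// Block forever; a real controller would wire this into its run loop.
	select {}
}
```
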
Migration Best Practices
- Audit Existing Controllers: Identify any custom controllers or scripts using `CoreV1().Endpoints()` and plan updates to `DiscoveryV1().EndpointSlices()`.
- Update CRDs & Tooling: If you manage custom resources that generate endpoints, add `EndpointSlice` support and test under v1.33+ with deprecation warnings enabled (a sketch for surfacing those warnings follows this list).
- Feature Gate Management: The `EndpointSlice` and `EndpointSliceProxying` feature gates graduated to GA and were removed in v1.25, so no gate configuration is needed on current clusters; only clusters older than that still need them enabled on the API server and controller-manager.
- Gradual Rollout: Use dual-stack or multi-port test Services to validate slice-splitting behavior and monitor metrics from kube-proxy and CNI plugins.
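
One practical way to catch lingering Endpoints usage during the audit is to surface the API server's deprecation warnings from your own clients and tests. The sketch below assumes client-go with an in-cluster config; the `logWarnings` handler type is illustrative, but it implements the real `rest.WarningHandler` interface so every warning the server returns is logged:

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// logWarnings is a hypothetical handler that records every HTTP Warning
// header returned by the API server, including the v1 Endpoints
// deprecation warning added in v1.33.
type logWarnings struct{}

func (logWarnings) HandleWarningHeader(code int, agent string, text string) {
	log.Printf("API warning (%d): %s", code, text)
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	// Route server-side warnings through our handler instead of the
	// default stderr writer, so CI can flag unexpected Endpoints use.
	cfg.WarningHandler = logWarnings{}

	client := kubernetes.NewForConfigOrDie(cfg)

	// Any remaining Endpoints call now logs the deprecation warning.
	_, err = client.CoreV1().Endpoints("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
}
```
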
Future Directions and Roadmap
KEP-4974 envisages the eventual removal of the Endpoints controller from conformance tests and its optional deactivation in kube-controller-manager. While the API object may remain indefinitely for compatibility, SIG-Network maintainers are considering an Endpoints removal mode as a hidden feature gate, further reducing controller-manager CPU load in large-scale clusters. Upcoming enhancements in `discovery.k8s.io/v1` include more detailed topology hints and richer endpoint metadata for service mesh integrations.
As Kubernetes adoption grows into edge and IoT use cases, EndpointSlices lay the foundation for efficient multi-cluster service discovery, enabling federated controllers to aggregate slices across regions with minimal overhead. Organizations planning future expansions should complete their migration to unlock these advanced networking capabilities.