
Conversation

MrFreezeex (Member)

  • One-line PR description: Define dual stack recommendations and fields
  • Other comments: This does three things: define an initial suggestion as to what an implementation may do to support dual-stack services, fix the max items for the ips field (which is already fixed in the actual CRD), and add an ipFamilies field matching the same field in Service, which implementations may use to reconcile this globally with an implementation-defined policy.

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory labels Apr 29, 2025
@k8s-ci-robot k8s-ci-robot requested review from JeremyOT and skitt April 29, 2025 12:59
@k8s-ci-robot k8s-ci-robot added sig/multicluster Categorizes an issue or PR as relevant to SIG Multicluster. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Apr 29, 2025
Comment on lines +662 to +652
ipFamilies:
- IPv4
Contributor

AFAIK, IPv4 and IPv6 formats are quite different; are the formats in the "ips" field alone not enough for the consumer to discern the family? I assume that the consumer knows which IP family it can support.

@MrFreezeex (Member, Author), Apr 29, 2025

This field is aimed more at the allocation of the IPs than at what happens after the IPs are actually allocated, a bit like the current type field, or how the ipFamilies field behaves on a regular Service.

For instance, for Cilium we would most likely want to take the intersection of all the ipFamilies in the exported Services, which makes this field as relevant to us as the other fields for "allocating" the IPs (meaning creating the derived Service, as that is how we do this).

Contributor

Oh, IIRC, this is for some controller to act on this field? Are we not in the process of moving the ServiceImport to either status or root?

@MrFreezeex (Member, Author)

Not entirely sure what you mean, but yes, our controller will use this to create the derived Service with the appropriate IP family.

@lauralorenz (Contributor)

Triage note: had some discussion with comments from @mikemorris at last SIG-MC; can you please add them to this PR so we can talk about them?

@MrFreezeex (Member, Author)

Hi @mikemorris, are you still planning to comment here about your concerns with this change?

@mikemorris (Member) left a comment

Trying to capture a high-level concern I have with this direction:

  1. Using a Service is just one possible (albeit common) implementation of MCS - alternative explorations such as ClusterIP Gateways are currently being developed and may be an option in the future. Service is a bloated resource, and in general I would have a strong preference for avoiding leaking what should be implementation details up into the actual spec resources.
  2. I don't think adding this field to ServiceImport should actually be necessary.
    1. On Service, it is optionally configured by a service owner in conjunction with .spec.ipFamilyPolicy to specify what IP family or families should be made available for a Service on a dual-stack cluster, and which family should be used for the legacy .spec.clusterIP field.
    2. In contrast to Service, on a ServiceImport no discretion or decision-making should be required - the available IP families will be determined by the IP families of Services exposed by ServiceExports, and may be constrained by the IP family stack configuration of the importing cluster. The exported Services may be on clusters with different dual-stack configurations (some IPv4-only, some IPv6-only, some dual-stack) and may have different configurations for which IPs are available for each Service in each cluster. I believe determining the appropriate dual-stack configuration should be possible by watching the associated exported Service resources (and their EndpointSlices) directly, from either a centralized controller or per-cluster decentralized API server watches.
  3. I don't see this field as being helpful for order-of-operations concerns in creating an implementation Service resource, because at any point the available endpoints for a ServiceImport may change: an IPv6-only cluster may go offline while an IPv4-only cluster remains available, and the ServiceImport in a dual-stack cluster should likely be updated to drop its IPv6 address from the ips field if the topology of an implementation requires direct/flat networking and no IPv6 endpoints are available (cleaning up and removing the ServiceImport entirely if no backends are routable from the importing cluster is a viable alternative too). Similarly, if an exported Service on a new IP family becomes available when it wasn't originally, the ServiceImport should likely be updated to publish an address in the ips field for the newly-available family when adding the backends.
  4. I think what may be helpful instead is clarifying expected behavior in the various scenarios @MrFreezeex had laid out in the presentation, and possibly encoding those in conformance tests and/or a status field indicating that a ServiceImport is "ready" (has backends available which are reachable from the importing cluster, which may have different constraints or meaning in centralized vs decentralized implementations), rather than expecting the ServiceImport and any supporting infra to be created (and destroyed) synchronously and be routable immediately at creation.

@MrFreezeex (Member, Author)

MrFreezeex commented Jun 24, 2025

Thanks! Quickly answering those points, but we can have a longer discussion in the SIG meeting if you are available there.

Trying to capture a high-level concern I have with this direction:

1. Using a Service is just one possible (albeit common) implementation of MCS - alternative explorations such as [ClusterIP Gateways](https://github.com/kubernetes-sigs/gateway-api/pull/3608) are currently being developed and may be an option in the future. Service is a bloated resource, and in general I would have a strong preference for avoiding leaking what should be implementation details up into the actual spec resources.

IIUC all the implementations work with a Service in one way or another; we cannot block a PR to ServiceImport by saying that, hypothetically, another alternative to Service, which is very much experimental and (AFAIK) doesn't even have consensus among SIG Network/Gateway API folks, is the way forward. If we were to do that, we could probably also forget about bumping the MCS API to v1beta1, for instance...

2. I don't think adding this field to ServiceImport should actually be necessary.
   
   1. On Service, it is optionally configured by a service owner in conjunction with [`.spec.ipFamilyPolicy`](https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services) to specify what IP family or families should be made available for a Service on a dual-stack cluster, and which family should be used for the legacy `.spec.clusterIP` field.
   2. In contrast to Service, on a ServiceImport no discretion or decision-making should be required - the available IP families will be determined by the IP families of Services exposed by ServiceExports, and may be constrained by the IP family stack configuration of the importing cluster. The exported Services _may_ be on clusters with different dual-stack configurations (some IPv4 only, some IPv6, some dual-stack) and _may_ have different configurations for which IPs are available for each Service in each cluster. I believe determining the appropriate dual-stack configuration should be possible by watching the associated exported Service resources (and their EndpointSlices) directly, from either a centralized controller _or_ per-cluster decentralized API server watches.

In Cilium at least we do not have this info in the controller creating the derived Service. This controller is intentionally not connected to all the other clusters; it only knows about the local ServiceImport and the (possibly not yet created) derived Service. We also do not always sync EndpointSlices from remote clusters, as an optimization (only if there is a specific annotation or the ServiceImport is headless). So I need this info already merged from all clusters/"reconciled" onto the ServiceImport resource directly.

3. I don't see this field as being helpful for order-of-operations concerns in creating an implementation Service resource, because at any point the available endpoints for a ServiceImport may _change_: an IPv6-only cluster may go offline while an IPv4-only cluster remains available, and the ServiceImport in a dual-stack cluster should likely be updated to drop its IPv6 address from the `ips` field if the topology of an implementation requires direct/flat networking and no IPv6 endpoints are available (cleaning up and removing the ServiceImport entirely if no backends are routable from the importing cluster is a viable alternative too). Similarly, if an exported Service on a new IP family becomes available when it wasn't originally, the ServiceImport should likely be updated to publish an address in the `ips` field for the newly-available family when adding the backends.

Yep, the ServiceImport ipFamilies may change, and we do plan to reflect that on the IPs. I am not sure how you see this as unhelpful, as you described pretty much what we are going to do... Also note that in our case we want some global consistency in what ends up in ipFamilies, so we will essentially take the intersection of all the exported Services' ipFamilies plus what is supported by the local cluster.

4. I think what may be helpful instead is clarifying expected behavior in the various scenarios @MrFreezeex had laid out in the presentation, and possibly encoding those in conformance tests and/or a `status` field indicating that a ServiceImport is "ready" (has backends available which are reachable from the importing cluster, which may have different constraints or meaning in centralized vs decentralized implementations), rather than expecting the ServiceImport and any supporting infra to be created (and destroyed) synchronously and be routable immediately at creation.

I was planning to do this in a second step/PR: while the initial use case might be tied to this PR, some possible conditions on the ServiceImport might be relevant for other things (like reporting any errors related to the import 🤷‍♂️). I am not sure that, beyond checking that the ServiceImport is ready, we can do much more in the conformance tests though.

I am not entirely sure what you mean about the behavior of "ready" and "to be created (and destroyed) synchronously and be routable immediately at creation"; to me it would be more a place to put any error state, whether something very generic like ready or some more specific implementation-defined errors if there's a need for that.

@mikemorris (Member)

In Cilium at least we do not have this info in the controller creating the derived Service. This controller is intentionally not connected to all the other clusters; it only knows about the local ServiceImport and the (possibly not yet created) derived Service. We also do not always sync EndpointSlices from remote clusters, as an optimization (only if there is a specific annotation or the ServiceImport is headless). So I need this info already merged from all clusters/"reconciled" onto the ServiceImport resource directly.

Okay this is the implementation detail/constraint I had not been familiar with, will need to think about this more.

@lauralorenz (Contributor)

Triage notes:

@tpantelis (Contributor)

In Cilium at least we do not have this info in the controller creating the derived Service. This controller is intentionally not connected to all the other clusters; it only knows about the local ServiceImport and the (possibly not yet created) derived Service. We also do not always sync EndpointSlices from remote clusters, as an optimization (only if there is a specific annotation or the ServiceImport is headless). So I need this info already merged from all clusters/"reconciled" onto the ServiceImport resource directly.

Okay this is the implementation detail/constraint I had not been familiar with, will need to think about this more.

Submariner could also make use of the ipFamilies field on the ServiceImport. As you're probably aware, Submariner does not have a centralized controller, and the constituent clusters do not have direct access to one another. So re: the statement above, "determining the appropriate dual-stack configuration should be possible by watching the associated exported Service resources", we cannot do that. Each cluster communicates its local service information to the other clusters via a ServiceImport published through a shared hub cluster. Each cluster then has all the constituent service information in order to check for conflicts, apply a condition on its local ServiceExport, and create a derived/aggregated ServiceImport (note that we don't create a derived Service).

@MrFreezeex MrFreezeex force-pushed the kep1645-dualstack branch 3 times, most recently from 5c6aff9 to 257df3b Compare July 31, 2025 17:06
@tpantelis (Contributor)

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jul 31, 2025
@mikemorris (Member)

mikemorris commented Aug 5, 2025

There's a handful of details/nuance to consider, but I think I'm largely convinced it makes sense to add ipFamilies.

  • "The .spec.ipFamilies field is conditionally mutable: you can add or remove a secondary IP address family, but you cannot change the primary IP address family of an existing Service."
  • For setting .spec.ipFamilyPolicy on a derived Service, I think implementations should prefer taking a looser approach, as this may differ across exporting clusters (as described in @MrFreezeex's presentation/diagram), and thus it may not make sense to directly propagate it and consider potential conflicts?
    • As an example, when exporting a Service with a RequireDualStack ipFamilyPolicy from a dual-stack cluster, it might make sense to be more flexible when creating a corresponding ServiceImport on a single-stack cluster (unless we really think this configuration should prevent creating a ServiceImport on single-stack clusters).

@lauralorenz (Contributor)

Triage notes:

  • Want to limit the "shoulds" when it comes to things like whether the inference should be from an intersection, from a union, ranked a certain way, etc...
  • Want to link to the Service dual stack docs because our overarching goal is to make MCS feel like an extension of Service
  • An edge case that is hard is when a clusterset has heterogeneous ipFamilies, and this is what we have to articulate ourselves. And we need to be explicit that the implementer needs to figure out what they are doing for that per their implementation.

@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Aug 5, 2025
@MrFreezeex (Member, Author)

MrFreezeex commented Aug 5, 2025

Thanks for the reviews @mikemorris!

There's a handful of details/nuance to consider, but I think I'm largely convinced it makes sense to add ipFamilies.

I tried to reference that more in my latest version let me know if this looks better now 👀

  • We should likely extend this constraint to ServiceImport.

As I was saying a bit on the call, we should probably keep this implementation-defined. For instance, as the result of merging IP families you might want to change the order of what's in the ipFamilies field, and I am not sure we should prevent that. For a derived-Service-style implementation you could just swap the order of the IPs on the ServiceImport while your derived Service stays as is, or even (not specific to derived-Service implementations) just express that the result of the merging across your clusters should be in this order, but you don't want to change the IPs order so you keep it as is.

  • I would expect that which family is set as primary on a ServiceImport should be inferred from the intersection of cluster configuration and available endpoints if applicable.

For us in Cilium we will most certainly do that, although some implementations might prefer a union too, or whatever the local cluster supports because there is some magic gateway that does intelligent things and does not care about the IP protocol somehow. A while ago we were saying in a SIG-MC meeting that we shouldn't dictate that in the KEP (and I recognize you are not saying here that we should; I am just saying it out loud).

  • For setting .spec.ipFamilyPolicy on a derived Service I think implementations should prefer taking a looser approach as this may differ across exporting clusters (as described in @MrFreezeex's presentation/diagram), and thus not make sense to directly propagate and consider potential conflicts?

Yes, I haven't implemented that yet, but I think what we would most likely do in Cilium is:

  • Start with the order of the ipFamilies of the ServiceImport at creation (which would be the intersection of all ipFamilies of the exported Services, with the order dictated by the oldest one if dual-stack)
  • If there's any change to the order on the ServiceImport, do not attempt to recreate the derived Service, but just change the order of the IPs in the ServiceImport in the logic that propagates IPs from the derived Service to the ServiceImport
  • If there's an IP protocol to add or remove, do that (we also know what the cluster supports since we are the CNI, but if you are not the CNI you could potentially ask users to pass that as a config to your project, I guess)
  • If the IP protocol to remove is the primary IP of the derived Service, I am not sure yet; maybe we will recreate the derived Service, since we do that for changing the headless-ness, but this might change / we might improve the logic for headless-ness too

EDIT: My latest thinking is that we would most likely add an annotation on the derived Service on our side to mark the "real IP protocol" we consider, which means we could remove a protocol in this annotation while not necessarily removing it from the true ipFamilies field. This is again implementation-defined territory, just to mention what Cilium might do with all of this!

  • As an example, when exporting a Service with a RequireDualStack ipFamilyPolicy from a dual-stack cluster, it might make sense to be more flexible when creating a corresponding ServiceImport on a single-stack cluster (unless we really think this configuration should prevent creating a ServiceImport on single-stack clusters).

Yep, this would be supported in our case. I am not sure of this yet, but maybe what we would end up doing is:

  • Keep the IP family not supported by the local cluster in the ServiceImport ipFamilies
  • If none of the families are supported by the local cluster, report that in the ready condition
  • Filter out ipFamilies not supported when creating the derived Service/reporting the IPs to the ServiceImport (and possibly add some kind of condition saying that it happened, but I am not sure whether we should do this or not)

@tpantelis (Contributor)

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Aug 5, 2025
@lauralorenz (Contributor)

Triage notes:

  • Mike will do a triple check since several changes are in reaction to his comments in the first place
  • But overall this and its partner PR in the mcs-api repo are in very good shape in terms of being ready for approval, according to us in the scrub cc @skitt @JeremyOT

@mikemorris (Member)

/lgtm

Appreciate the thoughtfulness in considering all those scenarios/edge cases @MrFreezeex, I think with that context this is in good shape to merge.

Comment on lines 600 to 583
IPs []string `json:"ips,omitempty"`
// +optional
IPFamilies []corev1.IPFamily `json:"ipFamilies,omitempty"`
// +optional
Member

I was not following this conversation, so apologies if this was already discussed, but this was very challenging to implement in Kube (cc @thockin, who did all the work), because ips and ipFamilies are related, and the defaulting and validation on updates got very tricky.

@MrFreezeex (Member, Author), Sep 16, 2025

In that case this field is not directly controlled by a user, so there's less validation logic needed at the API level, I believe (everything in ServiceImport is machine-controlled and usually mirrors most fields from Service). It's mostly implementation-defined how exactly this is used. This PR mostly defines the field (which is "internally" needed for Cilium and Submariner) and points to the Service docs to make sure implementations are aware of what ipFamilies does in regular Services.

For instance, in Cilium we would want to take the intersection of all linked exported Services' ipFamilies (user-controlled) and reconcile this with our controller into the ServiceImport resources; this ServiceImport (for us; using a derived Service is also implementation-defined) would then create a derived Service with the same ipFamilies field (+ some tricks to avoid having to recreate the derived Service if the ipFamilies change). There is a handful of viable things that could be done here, a union of ipFamilies for instance, or just whatever the local cluster supports/is configured for, which mainly explains why we don't want to be restrictive about how this gets defined...

Hope this helps!

Member

If the goal is to mirror the Service's fields, why don't you add a reference? Is it to avoid the indirection layer?

@MrFreezeex (Member, Author), Sep 16, 2025

Not all fields are relevant to ServiceImport either; for instance, the MCS-API type field is different and there's no targetPort in the ports arrays.

The rest of the fields are mostly not needed for MCS, although there is the trafficDistribution field that we haven't added yet, which would probably be the main remaining one. But besides that I am not aware of other things from Service that we should add here 🤔.

Contributor

Another factor is that ServiceImport is a distributed resource and referencing a resource in another cluster would be problematic.

Member

At the end, in the Kubernetes APIs we settled on:

  • it is single-stack, IPv4 OR IPv6 (only one IP, obviously)
  • it is dual-stack, [IPv4, IPv6] or [IPv6, IPv4] (order matters) (only two IPs, one of each IP family)

@MrFreezeex (Member, Author)

Ok, I see. I will try to add a sentence to make it explicit that implementations should at least set this coherently.

@MrFreezeex (Member, Author), Sep 17, 2025

I rebased the PR, so I now realize it's hard to find the diff, but I added this sentence:

If `ipFamilies` is set on the ServiceImport object, it must not have duplicated
families (for instance `ipFamilies: [IPv4, IPv4]` is not valid) and the IPs
should eventually be in the same order as what is defined in `ipFamilies`.

And I added MaxItems=2 on ipFamilies.

With that, it should mandate that implementations set this correctly :D.
Thanks for providing the context about how this is done for the Service!

@MrFreezeex (Member, Author)

Hi @aojea 👋, just wondering, does the above look good (or at least reasonable) to you? I am hoping to get this in good shape for the SIG-MC leads to look at in next week's meeting (30/09) so it can be merged; it would be awesome if you have time to confirm by then 🙏. If you have other suggestions I would of course be more than happy to follow up on them!

@k8s-ci-robot (Contributor)

New changes are detected. LGTM label has been removed.

@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 17, 2025
Signed-off-by: Arthur Outhenin-Chalandre <[email protected]>
@skitt (Member)

skitt commented Sep 30, 2025

/approve

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: MrFreezeex, skitt

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 30, 2025