Breaking Changes after 1.0 #923

Open
clux opened this issue May 29, 2022 · 9 comments
Assignees
Labels
question Direction unclear; possibly a bug, possibly could be improved.

Comments

@clux
Member

clux commented May 29, 2022

This is a long-term, philosophical issue carrying on from the stability policies PR. Go read that first.

Because of the challenges of releasing 1.0 without immediately having to bump a major, we should discuss some possible steps we could take post-1.0 to make our "stable releases" feel less fluctuating. These include:

  • Forcing more interface changes on stable features through deprecations (see the sketch after this list)
  • Extending a deprecation policy for something like 6 months
  • Marking unstable features to let us change the few things still in flux
  • Whitelist unavoidable changes like dependencies / security upgrades (probably a bad idea)
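
As a concrete illustration of the first point, deprecations let us steer users to a replacement with a compiler warning instead of an immediate break. A minimal sketch (all names here are hypothetical, not actual kube APIs):

#[derive(Default)]
pub struct RunConfig {
    pub concurrency: usize,
}

/// Old entry point, kept working for a deprecation window so existing users get a
/// warning rather than a breaking change.
#[deprecated(since = "1.1.0", note = "use `run_with_config` instead")]
pub fn run() {
    run_with_config(RunConfig::default())
}

/// New entry point that supersedes it.
pub fn run_with_config(config: RunConfig) {
    let _ = config.concurrency; // placeholder for the real logic
}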

Three major questions:

Q1. Can we actually follow Kubernetes versioning?

A: No

Reasoning

How do we break anything under Kubernetes versioning?

We depend on pre-1.0 libraries. Bumping any of these would be a breaking change to kube. We obviously need to be able to upgrade libraries, but we also cannot bump majors under Kubernetes versioning.

Hence "semver breaking" changes would have to be somewhat allowed under Kubernetes versioning. EDIT: but that's incompatible with Cargo's assumptions about semver.

So effectively it's impossible for us to follow Kubernetes versioning; pre-1.0 dependencies make it impossible for us to release a 1.0 safely.
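
To make that concrete (versions here are illustrative): suppose kube 1.0 publicly exposes types from a pre-1.0 dependency such as tower.

# kube's Cargo.toml (illustrative)
[dependencies]
tower = "0.4"   # pre-1.0, and surfaced in kube's public API

# Cargo treats a 0.4 -> 0.5 bump as semver-breaking, so upgrading tower changes
# kube's public API and would require a new kube major, which Kubernetes-style
# versioning never allows after 1.0.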

We could be very strict in other ways, such as:

  • deprecations and slow removal according to a strengthened deprecation policy where possible
  • limiting interface changes

Even so, it's not great, and it might actually make sense for us to keep making some amount of interface changes, because it is often the least surprising thing to do. Take, for instance, the controller patterns we have recently been evolving; if we find better controller patterns, do we suddenly duplicate the entire system to change the reconcile signature? Or do we mark large parts of the runtime as unstable? Either would be counter-intuitive; the runtime is one of our most used features (even though it changes infrequently).

Perhaps it is best to constrain our changes somewhat, and maintain policies on how to guide users on upgrades (as I've tried to do in https://github.com/kube-rs/website/pull/18/files)

Q2. If breaking changes are unavoidable, should we rush for a 1.0?

Given our numerous pre-1.0 dependencies, and the possibility of marking ourselves as stable (w.r.t. client requirements) without a 1.0 release (remember, the requirements are only that we document deprecation policies and how quickly interfaces change), maybe it's a better look to hold off on cutting a major version and instead document our policies better for now, until we finish off the major hurdles outlined in kube-rs/website#18?

Q3. If we want to mark code as unstable; how do we manage it?

A: Using unstable features

As first introduced in #1131

Points of comparison: compile flags vs. features

AFAICT there are basically two real ways (if we ignore abstractions on top of these, such as the stability crate).

3. a) Unstable Configuration

Following tokio's setup with an explicit RUSTFLAGS addition of --cfg kube_unstable.

This allows gating of features via:

#[cfg(all(kube_unstable, feature = "otherwise_stable_feature"))]

and dependency control via (like tokio's):

[target.'cfg(kube_unstable)'.dependencies]
unstable-dep = { version = "XXX", default-features = false, optional = true }

This approach is a high-threshold opt-in, since it requires specifying rustflags in a non-standard file, or via env vars unfamiliar to most users.
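
For illustration, opting in would look something like this (mirroring tokio's documented tokio_unstable setup, using the kube_unstable name proposed above):

# .cargo/config.toml in the application's workspace
[build]
rustflags = ["--cfg", "kube_unstable"]

or equivalently via the environment:

RUSTFLAGS="--cfg kube_unstable" cargo build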

The downsides of this approach are that it is more complicated, necessitates more complicated CI logic and rustdoc args, and is unintuitive to users. People have complained and expressed distaste for this setup in the tokio discord.

The main benefit of this approach is that it avoids the dependency feature summing problem:

Darksonn: if you have a dependency that enables the unstable feature, then you get it without being aware that you are opt-ing in to breakage.

In other words, it prevents libraries that depend on tokio from enabling unstable features on behalf of users of those libraries. Users have to explicitly opt in themselves.

The resulting rust thread on unstable / opt-in / non-transitive crate features is a good read for more context.

3. b) Unstable Features

A lighter approach can use a set of unstable feature flags (e.g. kube/unstable, kube-runtime/unstable, or maybe more specific ones such as kube/unstable-reflector). These would be default-disabled and enable simple opt-ins.

This allows gating of features via:

#[cfg(all(feature = "unstable", feature = "otherwise_stable_feature"))]

and dependency control via:

[features]
unstable = ["dep:unstable-dep", "stable-dep?/unstable-feature"]

With weak dependency features now stable as of Rust 1.60, this helps dependency control at our crate level somewhat.

The downside, of course, is that libraries using kube could still enable unstable features from under an application user (when combining multiple libraries that use kube). So the question becomes: is this an acceptable risk for kube?
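
To make the risk concrete, a sketch with a made-up intermediate crate:

# some-kube-helper/Cargo.toml (a hypothetical library built on kube)
[dependencies]
kube = { version = "1", features = ["unstable"] }

# the application's Cargo.toml only asks for stable kube
[dependencies]
kube = "1"
some-kube-helper = "0.1"

Because cargo unifies features across the dependency graph, the application ends up compiling kube with unstable enabled without ever asking for it.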

A few libraries are building on top of us to provide alternative abstractions for applications.

But really, how many of these would you have to mix and match before the sudden enabling of kube/unstable becomes unexpected? We are hardly a low-level library. If a library needs an unstable feature, I don't think it's on us to care about unstable policies used by libraries.

@clux clux added question Direction unclear; possibly a bug, possibly could be improved. client-stable stable client requirements labels May 29, 2022
@clux clux assigned kazk and nightkr May 29, 2022
@clux
Member Author

clux commented May 29, 2022

Have updated/resolved Q1 here after realizing it's actually impossible for us to follow Kubernetes versioning. This means it's down to the new Q2 above, i.e. is there a rush to release a 1.0 given the constraints?

...I realise this is now going to border on philosophical territory, because ultimately it's going to come down to a choice between:

  • a quickly bumping major version number
  • more minor releases until increasingly absurd thresholds of stability are met (e.g. no outstanding feature work, no pre-1.0 dependencies, etc)

but thoughts welcome. might be best to iron out the stability policy proposal first though.

@nightkr
Member

nightkr commented May 29, 2022

Personally, the main thing I'd love to see for 1.0 would be to decouple kube-rs breakage from k8s-openapi breakage.

Given that we mostly interact with k8s-openapi via traits, maybe it would make sense to extract those into a separate crate that k8s-openapi implements and kube-rs depends on? Ideally that should let users upgrade kube-rs and k8s-openapi independently, as long as those core traits remain stable. (Obviously this goal would require the approval of, and some coordination with, @Arnavion.)
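
A very rough sketch of the shape that could take (trait and item names here are illustrative, not the existing k8s-openapi definitions):

// hypothetical shared "k8s-traits" crate
pub trait Resource {
    const API_VERSION: &'static str;
    const KIND: &'static str;
}

pub trait Metadata: Resource {
    type Meta;
    fn metadata(&self) -> &Self::Meta;
}

// k8s-openapi (and eventually k8s-pb) would implement these for their generated
// types, while kube-rs would only depend on the trait crate for its generic code.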

@nightkr
Member

nightkr commented May 29, 2022

I think that's also kind of required if we want to approach parity between k8s-openapi and k8s-pb at some point.

@nightkr
Member

nightkr commented May 30, 2022

As an aside, for Q2, this only really concerns libraries that are part of our public API.

Currently I believe that would be backoff, tower, k8s-openapi (with the caveats above), and a few others. Purely internal libraries (that are never returned, taken as arguments, or reexported) should be fine to upgrade without doing a major bump.

@clux
Member Author

clux commented May 30, 2022

As an aside, for Q2, this only really concerns libraries that are part of our public API.

Currently I believe that would be backoff, tower, k8s-openapi (with the caveats above), and a few others. Purely internal libraries (that are never returned, taken as arguments, or reexported) should be fine to upgrade without doing a major bump.

There are quite a few of them if we are being pedantic though:

  • tower, backoff, k8s-openapi (mentioned)
  • http, json_patch (kube_core)
  • futures (Stream)
  • hyper, hyper-openssl, hyper-rustls, hyper-tls (ConfigExt for custom clients)
  • rustls, tame_oauth, base64, serde_yaml, chrono, tokio_native_tls (via errors)

most of them are only very lightly exposed, and some can be limited, but there's a fair few.

@clux
Member Author

clux commented May 30, 2022

Personally, the main thing I'd love to see for 1.0 would be to decouple kube-rs breakage from k8s-openapi breakage.

Some of that could be viable. The traits crate idea was also raised in kube-rs/k8s-pb#4. I'm not sure how viable a complete decoupling is, though, since we are writing abstractions on top of specific types (event recorder, leases), and we also need to depend on ObjectMeta - which is also moving in non-breaking ways. I'm more inclined to hard-depend on the latest version supported by either k8s-pb or k8s-openapi via kube and use the standard compatibility matrix approach instead.

I suspect it is possible to do a generated source switchover with features since the export paths are the same. But we can make a more targeted issue for dealing with k8s-pb elsewhere; we know that is something we should tackle prior to 1.0.
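
A hedged sketch of what such a switchover could look like (feature names are hypothetical, and this assumes k8s-pb keeps the same module paths, as noted above):

// in kube-core: feature-gated re-exports of the generated ObjectMeta type
#[cfg(feature = "openapi")]
pub use k8s_openapi::apimachinery::pkg::apis::meta::v1::ObjectMeta;

#[cfg(feature = "pb")]
pub use k8s_pb::apimachinery::pkg::apis::meta::v1::ObjectMeta;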

@nightkr
Member

nightkr commented May 30, 2022

I'm more inclined to hard depend on the latest version supported by either k8s-pb or k8s-openapi via kube and use the standard compatibility matrix approach instead.

That still won't help us support both k8s-openapi and k8s-pb though, since they will still have their own versions of ObjectMeta (just like everything else).

You could do the feature switcheroo, but that would also break all downstream libraries unless they reimplement the same logic as well.

@clux clux removed the client-stable stable client requirements label Mar 8, 2023
@Xuanwo

Xuanwo commented Sep 2, 2024

  • tower, backoff, k8s-openapi (mentioned)

Hi, I'm the maintainer of backon, a retry library that just released v1. What do you think about replacing backoff with backon? I'm willing to assist with the migration.

@clux
Member Author

clux commented Sep 4, 2024

Hi, I'm the maintainer of backon, a retry library that just released v1. What do you think about replacing backoff with backon? I'm willing to assist with the migration.

hey there, that is a nice library! i personally think that would be a good direction. backoff is pretty dead at this point, and we/nat have had prs against it for ages (e.g. ihrwein/backoff#50)

note that we have a lot of public use of backoff, but it's very peripheral stuff where this is exposed. i'd be happy to try to help review stuff!
