Breaking Changes after 1.0 #923
Comments
Have updated/resolved Q1 here after realizing it's actually impossible for us to follow Kubernetes versioning. This means it's down to the new Q2 above; i.e. I realise this is now going to border on philosophical territory because ultimately it is going to come down to a choice between:
but thoughts welcome. might be best to iron out the stability policy proposal first though.
Personally, the main thing I'd love to see for 1.0 would be to decouple kube-rs breakage from k8s-openapi breakage. Given that we mostly interact with k8s-openapi via traits, maybe it would make sense to extract those into a separate crate that k8s-openapi implements and kube-rs depends on? Ideally that should let users upgrade kube-rs and k8s-openapi independently, as long as those core traits remain stable. (Obviously this goal would require the approval of, and some coordination with, @Arnavion.)
I think that's also kind of required if we want to approach parity between k8s-openapi and k8s-pb at some point.
As an aside, for Q2, this only really concerns libraries that are part of our public API. Currently I believe that would be
There are quite a few of them if we are being pedantic though:
most of them are only very lightly exposed, and some can be limited, but there's a fair few.
Some of that could be viable. The traits crate idea was also lifted in kube-rs/k8s-pb#4.

Not sure how viable a complete decoupling is though, since we are writing abstractions on top of specific types (event recorder, leases), and we also need to depend on

I suspect it is possible to do a generated source switchover with features since the export paths are the same. But we can make a more targeted issue for dealing with
That still won't help us support both k8s-openapi and k8s-pb though, since they will still have their own versions of

You could do the feature switcheroo, but that would also break all downstream libraries unless they reimplement the same logic as well.
Hi, I'm the maintainer of
hey there, that is a nice library! i personally think that would be a good direction. backoff is pretty dead at this point, and we/nat have had prs against it for ages (e.g. ihrwein/backoff#50). note that we have a lot of public use of backoff, but it's very peripheral stuff where this is exposed. i'd be happy to try to help review stuff!
This is a long-term, philosophical issue carrying on from the stability policies PR. Go read that first.
Because of the challenge of releasing a 1.0 without immediately having to bump a major, we should discuss some possible steps we could take post-1.0 to make our "stable releases" feel less fluctuating. These include:
~~Three~~ One major questions:

Q1. Can we actually follow Kubernetes versioning?
A: No
Reasoning
How do we break anything under Kubernetes versioning?
We depend on pre-1.0 libraries. Bumping any of these would be a breaking change to kube. We obviously need to be able to upgrade libraries, but we also cannot bump majors under Kubernetes versioning.

Hence "semver breaking" changes would have to be somewhat allowed under Kubernetes versioning. EDIT: but that's incompatible with cargo's assumptions on semver.

So effectively it's impossible for us to follow Kubernetes versioning; pre-1.0 dependencies make it impossible for us to release 1.0 safely.
While we could be very strict in other ways, such as:
It's not great, and it might actually make sense for us to keep doing some amount of interface changes because it is often the least surprising thing to do. Take for instance the controller patterns that we have been recently evolving; if we find better controller patterns, do we suddenly duplicate the entire system to change the `reconcile` signature? Or do we mark large parts of the runtime as unstable? Either would be counter-intuitive; the runtime is one of our most used features (even though it changes infrequently).

Perhaps it is best to constrain our changes somewhat, and maintain policies on how to guide users on upgrades (as I've tried to do in https://github.com/kube-rs/website/pull/18/files).
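For context, this is roughly the reconciler shape in question (a sketch in the style of a recent kube-runtime API; the `Ctx` and `Error` types and the `ConfigMap` choice are placeholders, not anything we ship):

```rust
use std::{sync::Arc, time::Duration};

use k8s_openapi::api::core::v1::ConfigMap;
use kube::runtime::controller::Action;

// Placeholder context and error types purely for the sketch.
struct Ctx;

#[derive(Debug, thiserror::Error)]
#[error("reconcile failed")]
struct Error;

// The function every controller implements; evolving this signature is exactly
// the kind of interface change discussed above.
async fn reconcile(obj: Arc<ConfigMap>, _ctx: Arc<Ctx>) -> Result<Action, Error> {
    // ... compare observed vs desired state and apply changes here ...
    let _name = obj.metadata.name.clone();
    // then tell the controller when to requeue this object
    Ok(Action::requeue(Duration::from_secs(300)))
}
```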
Q2. If breaking changes are unavoidable, should we rush for a 1.0?
Given our numerous pre-1.0 dependencies, and the possibility of marking ourselves as stable (w.r.t. client requirements) without a 1.0 release (remember, the requirements are only that we document deprecation policies and how quickly interfaces change), maybe it's a better look to hold off on a major version and instead document our policies better for now, until we finish off the major hurdles outlined in kube-rs/website#18?
Q3. If we want to mark code as unstable, how do we manage it?
A: Using unstable features
As first introduced in #1131
Collapsed points of compile flags vs. features
AFAICT there are basically two real ways (if we ignore abstractions on top of these, such as the stability crate).
3. a) Unstable Configuration
Following tokio's setup with an explicit `RUSTFLAGS` addition of `--cfg kube_unstable`.

This allows gating of features via:

`#[cfg(all(kube_unstable, feature = "otherwise_stable_feature"))]`

and dependency control via (like tokio's):
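Roughly, for a downstream user this means something like the following (a sketch assuming the proposed `kube_unstable` cfg name; tokio's equivalent uses `tokio_unstable`):

```toml
# .cargo/config.toml in the *consuming* project (sketch).
# Equivalent one-off form: RUSTFLAGS="--cfg kube_unstable" cargo build
[build]
rustflags = ["--cfg", "kube_unstable"]
```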
This approach is a high-threshold opt-in due to having to specify `rustflags` in a non-standard file, or via env vars unfamiliar to most users.

The downsides of this approach are that it is more complicated, necessitates more complicated CI logic and rustdoc args, and is unintuitive to users. People have complained and expressed distaste for this setup in the tokio discord.
The main benefit of this approach is that it avoids the dependency feature summing problem:
In other words, it avoids a situation where libraries that depend on tokio end up enabling unstable features for users of those libraries. Users have to explicitly opt in themselves.
The resulting Rust thread on unstable / opt-in / non-transitive crate features is a good read for more context.
3. b) Unstable Features
A lighter approach can use a set of `unstable` feature flags (e.g. `kube/unstable`, `kube-runtime/unstable`, or maybe more specific ones such as `kube/unstable-reflector`). These would be default-disabled and enable simple opt-ins.

This allows gating of features via:

`#[cfg(all(feature = "unstable", feature = "otherwise_stable_feature"))]`

and dependency control via:
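A rough sketch of what that could look like in our manifest (feature names, versions, and layout here are illustrative, not the actual kube Cargo.toml):

```toml
# Hypothetical excerpt of a kube-style Cargo.toml (illustrative only).
[dependencies]
kube-runtime = { version = "0.0.0", optional = true }  # placeholder version

[features]
# Regular opt-in: pulls in the optional runtime dependency.
runtime = ["kube-runtime"]
# Weak feature syntax (stable since Rust 1.60): forwards `unstable` to
# kube-runtime only if kube-runtime is already enabled, without pulling it in.
unstable-runtime = ["kube-runtime?/unstable"]
```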
With weak dependency features now stable in Rust 1.60, this helps dependency control at our crate level somewhat.
The downside, of course, is that libraries using `kube` could still enable unstable features from under an application user (when combining multiple libraries using `kube`). So the question becomes: is this an acceptable risk for `kube`?
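To make that risk concrete (crate names and versions below are made up): Cargo unifies features across the dependency graph, so a library enabling `kube/unstable` flips it on for the application too:

```toml
# Hypothetical library layered on top of kube declares, in its own Cargo.toml:
#
#   [dependencies]
#   kube = { version = "0.0.0", features = ["unstable"] }
#
# The application's Cargo.toml never asks for unstable:
[dependencies]
kube = { version = "0.0.0", features = ["runtime"] }
some-operator-framework = "0.0.0"
# ...but because Cargo unifies features, the app's `kube` build now has
# `unstable` enabled as well.
```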
A few libraries are building on top of us to provide alternative abstractions for applications:
But really, how many of these would you have to mix and match before the sudden enabling of `kube/unstable` becomes unexpected? We are hardly a low-level library. If a library needs an unstable feature, I don't think it's on us to care about unstable policies used by libraries.