
CLOUDP-280230: Network peering controller and CRD #1930

Draft · wants to merge 7 commits into base: main
Conversation

@josvazg (Collaborator) commented Nov 14, 2024

Network peering support is sliced into the following logical changes:

  1. Fixes to the already merged translation layer.
  2. Adding the CRD types and manifest files.
  3. Adding the controller code and tests.
  4. 🚧 (WIP) Unit tests

All Submissions:

  • Have you signed our CLA?

@josvazg changed the base branch from main to CLOUDP-280230/net-peering-crd November 14, 2024 17:04
@josvazg marked this pull request as draft November 14, 2024 17:04
@josvazg force-pushed the CLOUDP-280230/net-peering-controller branch 2 times, most recently from 3e4d019 to a1ae865 November 15, 2024 14:40
@josvazg added the cloud-tests Run expensive Cloud Tests: Integration & E2E label Nov 15, 2024
github-actions bot commented Nov 15, 2024

@josvazg force-pushed the CLOUDP-280230/net-peering-controller branch from f2a35cc to 1d71818 November 15, 2024 20:59
)

func TestNetworkPeeringProjectRefCELValidations(t *testing.T) {
launchProjectRefCELTests(

q: what is being asserted here? launchProjectRefCELTests has no indication what test cases are being executed.

@josvazg (Collaborator, author) replied Jan 23, 2025:

This is interesting, because the cases have always been the same for all the structs using it. The only difference is which underlying CRD type carries the dual reference, but that does not change the behaviour or correctness of how the dual ref is handled.

We could fix this in a separate PR. I think we can have a single test for this in a single place and just keep adding underlying CRDs to test as carriers of the dual reference. WDYT?
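
A rough sketch of that consolidation, assuming a list of carrier CRD types and a helper in the shape of launchProjectRefCELTests (the helper's real signature in this PR may differ, the carrier list and import path are assumptions):

import (
	"fmt"
	"testing"

	"sigs.k8s.io/controller-runtime/pkg/client"

	akov2 "github.com/mongodb/mongodb-atlas-kubernetes/v2/api/v1" // import path assumed
)

// TestProjectDualRefCELValidations runs the same dual-reference CEL cases once,
// parameterized over every CRD type that carries the reference (list illustrative).
func TestProjectDualRefCELValidations(t *testing.T) {
	carriers := []client.Object{
		&akov2.AtlasNetworkPeering{},
		&akov2.AtlasNetworkContainer{},
	}
	for _, carrier := range carriers {
		t.Run(fmt.Sprintf("%T", carrier), func(t *testing.T) {
			// launchProjectRefCELTests is assumed here to accept the carrier object;
			// the actual helper signature may differ.
			launchProjectRefCELTests(t, carrier)
		})
	}
}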

metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`

Spec AtlasNetworkPeeringSpec `json:"spec,omitempty"`

I will double-check that when we surface this again, both for the NetworkContainer and the PeeringConnection CRDs.

@@ -219,6 +220,18 @@ func (b *Builder) Build(ctx context.Context) (cluster.Cluster, error) {
return nil, fmt.Errorf("unable to create indexers: %w", err)
}

npeReconciler := atlasnetworkpeering.NewAtlasNetworkPeeringsReconciler(

Right! Yep, I also need to fix this to comply with the new Cluster support.

fixed in next update

atlasPeer = peer
}
inAtlas := atlasPeer != nil
deleted := req.networkPeering.DeletionTimestamp != nil

blocking: this doesn't catch the case of the deletion timestamp being zero. Please use the IsZero() method, which covers both nil and the zero time.
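
For reference, a minimal sketch of the suggested check; the helper name is made up, only the IsZero call matters:

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isMarkedForDeletion reports whether the resource carries a deletion timestamp.
// (*metav1.Time).IsZero returns true for both a nil pointer and the zero time,
// so it also covers the case the plain nil comparison misses.
func isMarkedForDeletion(obj metav1.Object) bool {
	return !obj.GetDeletionTimestamp().IsZero()
}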

//+kubebuilder:rbac:groups=atlas.mongodb.com,resources=atlasnetworkpeerings/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=atlas.mongodb.com,resources=atlasnetworkpeerings/finalizers,verbs=update

// Reconcile is part of the main kubernetes reconciliation loop which aims to

blocking: this seems to be a scaffold comment, please fix ;-)

@@ -18,9 +18,9 @@ type Indexer interface {
// RegisterAll registers all known indexers to the given manager.
// It uses the given logger to create a new named "indexer" logger,
// passing that to each indexer.
-func RegisterAll(ctx context.Context, c cluster.Cluster, logger *zap.Logger) error {
+func RegisterAll(ctx context.Context, mgr manager.Manager, logger *zap.Logger) error {

blocking: undo this change, we need to retain c cluster.Cluster as a parameter.

Right! I missed this detail in the rebase, I will fix it.

reverted on next update

).ReconcileResult()
}

func (r *AtlasNetworkPeeringReconciler) sync(req *reconcileRequest, atlasPeer *networkpeering.NetworkPeer) ctrl.Result {

blocking: what is being synced here? As I read the code, updates to the spec are silently ignored. If Atlas doesn't support updates to network peers, we should talk with PM about whether it makes sense to make the whole spec immutable (guarded via CEL).
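
If the spec does turn out to be immutable on the Atlas side, one way to guard it is a CEL transition rule on the whole spec via a kubebuilder marker; a sketch below, with a placeholder field rather than the actual spec fields from this PR:

// A whole-spec immutability guard expressed as a CEL transition rule.
// The field is a placeholder; the real fields come from this PR's CRD types.
// +kubebuilder:validation:XValidation:rule="self == oldSelf",message="spec is immutable"
type AtlasNetworkPeeringSpec struct {
	// ContainerID is a placeholder field for illustration only.
	ContainerID string `json:"containerId,omitempty"`
}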

// Atlas Network Peering reasons
const (
NetworkPeeringConnectionCreating ConditionReason = "NetworkPeeringConnectionCreating"
NetworkPeeringConnectionResetting ConditionReason = "NetworkPeeringConnectionResetting"

blocking: this status condition is unused.

will remove

done in next update

@s-urbaniak (Collaborator) left a review comment:

a couple of comments, great work! 👏

@josvazg (Collaborator, author) commented Jan 23, 2025:

Making this a draft again, as we plan to re-design the relationship between the Peering Connection and the Network Container, making Networks a first-class CRD as well.

Still, I would like to address most of the comments, since I will be using this branch for continuing work and will need it to be rebase-able and as close to main as possible.

@josvazg marked this pull request as draft January 23, 2025 13:16
@josvazg force-pushed the CLOUDP-280230/net-peering-controller branch from c147102 to f3ac29e January 23, 2025 17:40
@josvazg added the cloud-tests Run expensive Cloud Tests: Integration & E2E label Jan 23, 2025