This repository has been archived by the owner on Jun 23, 2020. It is now read-only.

Use a single namespace via kube-namespace flag #56

Draft · wants to merge 1 commit into master

Conversation

metalmatze

This PR is still missing:

  • Clean up all created resources in the namespace after the pipeline finishes
  • Services need to be unique even with the same name?!

/cc @bradrydzewski @MOZGIII

@metalmatze metalmatze changed the title Use a single namespace via kube-namespace Use a single namespace via kube-namespace flag Mar 12, 2019
@galexrt

galexrt commented Apr 1, 2019

@metalmatze One point I would like to throw in here is that owner references could be used for the cleanup.
The Job object is created, and all other objects get an owner reference added that points to the Job (or to whichever other object makes sense). That would mean that when the Job is deleted, the other objects are deleted automatically as well (this can be done per object, AFAIK).

https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/

Though not 100% sure if this makes sense to use here. 🙂
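
A minimal sketch of what that could look like on a per-build object (the names and the UID are hypothetical; the `uid` must match the actual UID of the owning Job for garbage collection to work):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-42-env            # hypothetical per-build object
  namespace: drone
  ownerReferences:
    - apiVersion: batch/v1
      kind: Job
      name: app-42            # the build Job; deleting it garbage-collects this object
      uid: d9607e19-f88f-11e6-a518-42010a800195
```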

@metalmatze
Author

@galexrt, this makes a lot of sense and most of the things should be cleaned up automatically. But I agree that we should make sure it actually happens and the owner reference is perfect for that! 👍

I've given this quite a lot of thought in the last few weeks, especially for services that have the same name.
Currently, I think we can work around the restriction by templating the names of the services to something well known.

Let's say you want to start a Postgres Pod to use as a Drone service in your pipeline and want to reference that from your integration tests. It might happen that there are 2 (or more) concurrent pipelines for the same repository running and they both reference postgres.drone.svc.cluster.local. There's no way we can tell which pipeline should be routed to which Postgres Pod.
I propose to simply have app-42-postgres.drone.svc.cluster.local where app is the name of the repository and 42 is the build number. We basically namespace the Services by name on our own. 🤷‍♂️
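
A minimal Go sketch of that naming scheme (the function and the `drone` namespace are hypothetical, not part of the runner):

```go
package main

import (
	"fmt"
	"strings"
)

// serviceFQDN builds a per-build service DNS name so that concurrent
// pipelines for the same repository do not collide on the cluster DNS.
// repo and build effectively "namespace" the service by name.
func serviceFQDN(repo string, build int, service, namespace string) string {
	name := fmt.Sprintf("%s-%d-%s", strings.ToLower(repo), build, service)
	return fmt.Sprintf("%s.%s.svc.cluster.local", name, namespace)
}

func main() {
	// Two concurrent builds of "app" get distinct postgres endpoints.
	fmt.Println(serviceFQDN("app", 42, "postgres", "drone"))
	fmt.Println(serviceFQDN("app", 43, "postgres", "drone"))
}
```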
If I recall correctly there have been some heuristics around that for Docker in the past, right @bradrydzewski?

```go
	spec.Metadata.Namespace,
	&metav1.DeleteOptions{},
)
if e.namespace == "" {
```

!=?


Ok, nvm. In my opinion the code should be more self-descriptive or documented. Consider adding a bool useSingleNamespace := e.namespace != "" and relying on it in the checks.
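
A minimal Go sketch of that suggestion (the engine struct and field names are hypothetical stand-ins for the runner's actual types):

```go
package main

import "fmt"

// engine stands in for the runner's engine type; namespace holds the
// value of the --kube-namespace flag (empty means per-build namespaces).
type engine struct {
	namespace string
}

// useSingleNamespace makes the empty-string check self-descriptive,
// so call sites read as intent rather than as a string comparison.
func (e *engine) useSingleNamespace() bool {
	return e.namespace != ""
}

func main() {
	e := &engine{namespace: "drone"}
	fmt.Println(e.useSingleNamespace())
}
```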

@tboerger

tboerger commented Apr 1, 2019

Is it possible to just use subdomains with the cluster DNS and define a different search domain for the involved pods?

@galexrt

galexrt commented Apr 1, 2019

@tboerger Mhh, I haven't tried that, but it may work: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config (the search-domain part, at least).

That might raise the minimum supported Kubernetes version to around 1.10+.

@MOZGIII

MOZGIII commented Apr 1, 2019

So far I like where this is going. Using GC and owner references sounds like a great idea.
"Namespacing on our own" sounds good too; it's most likely the simplest solution in terms of relying on other parts of the cluster.
Namespacing via DNS subdomains looks sort of ideal. If we can pull it off, I'd say we could even reconsider making a single namespace the default again, since all the constraints I see that force using multiple namespaces would be fulfilled, and we'd keep the benefit of using standard RBAC for restricting access.

@MOZGIII

MOZGIII commented Apr 5, 2019

Please take a look at #53 - I think I found a solution to the name resolution issue.

@schmitch

@MOZGIII I think your conclusion is right. There is even an example here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-hostname-and-subdomain-fields
Basically the Pod needs hostname/subdomain values, i.e.

```yaml
hostname: postgres   # would break scaling of StatefulSets/Deployments, but that's a non-issue since there is always exactly one Pod per Set/Deployment for Drone services
subdomain: drone-123
```

a matching DNS search entry on the Pod:

```yaml
dnsConfig:
  searches:
    - drone-123.my-namespace.svc.cluster.local
```

plus a Service with the name drone-123, and maybe publishNotReadyAddresses: true inside the Service.

5 participants