Where to host cs.k8s.io #2182
@dims where is the original source code for cs.k8s.io? 👀
/wg k8s-infra
/sig contributor-experience
/assign @spiffxp
@nikhita You can find the config here: https://github.com/dims/k8s-code.appspot.com/
What's the argument against hosting it on AAA?
@BenTheElder nothing other than someone has to do it :) Oh, and I don't know how to wire the ingress/DNS stuff; I tried a long time ago :) #96
I would say lack of artifact destined for
@ameukam should this issue be migrated to the k/k8s.io repo?
@nikhita I'm not sure about the right place for this issue; I just wanted to put it on SIG ContribEx TLs' and chairs' radar.
It should be under k/k8s.io IMHO. I think we should host it on AAA, FWIW.
Moving to the k8s.io repo. Slack discussion: https://kubernetes.slack.com/archives/CCK68P2Q2/p1623300972130500
/sig contributor-experience
I took a stab at onboarding codesearch in #2513; @spiffxp could I get your input? I want to make sure I didn't miss anything. I could also work on adding the Docker build logic after, but I haven't worked in that repo yet, so I'll have to do some digging. cc @dims
/priority important-soon
What about using https://sourcegraph.com/kubernetes to minimize the maintenance burden here?
The choices are:
If I missed any other options, please feel free to chime in.
/unassign
/assign @SohamChakraborty
@ameukam: GitHub didn't allow me to assign the following users: SohamChakraborty. Note that only kubernetes members with read permissions, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I think this is now ready for migration from the bare-metal server to the aaa cluster. I spoke with Arnaud and he will decide on a path for migration.
We will need to raise this. We have time, and other pressing work, but we need to be tracking moving it. IMHO: while there are other options out there, we have a lot of links to this in issues etc., so if it's cheap to operate I think we should keep it for now and just lift it over to the AAA cluster or similar. What's left for this one?
https://cs-canary.k8s.io/ seems to have performance issues, but also maybe isn't running the same version:
https://cs.k8s.io/?q=NodeStageVolume&i=nope&files=&excludeFiles=&repos=
https://cs-canary.k8s.io/?q=NodeStageVolume&i=nope&files=&excludeFiles=&repos=
(note the response times)
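The response-time gap can be checked from a terminal as well. A rough sketch using curl's timing output; it assumes both hosts run Hound and expose its `/api/v1/search` endpoint, which is an assumption here, not something confirmed in the thread:

```shell
#!/bin/sh
# Rough sketch: compare end-to-end response times of prod vs canary for the
# same query. Assumes both hosts serve Hound's /api/v1/search API.
for host in cs.k8s.io cs-canary.k8s.io; do
  t=$(curl -s -o /dev/null -w '%{time_total}' \
    "https://${host}/api/v1/search?q=NodeStageVolume&i=nope&repos=*")
  echo "${host}: ${t}s"
done
```

Running it a few times in a row would also smooth over cold-cache effects on either side.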
Maybe run this inside a GCP MIG or an AWS ASG?
Re-upping this as it came up in conversation today on the sig-infra call, and because I am actively working on timelines and budgets for the Equinix Metal wind-down.
We should check whether the bottleneck is disk, CPU, or memory; we have options (like switching AAA to larger nodes, using a faster disk type, etc.), but that needs investigating.
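One hedged way to start that investigation from the cluster side; the namespace and label names below are guesses for illustration, not the real manifest values:

```shell
#!/bin/sh
# Sketch: look for resource pressure on the canary pods.
# The namespace "codesearch" and label "app=codesearch" are assumptions.
NS=codesearch
kubectl -n "$NS" top pods                                   # live CPU/memory usage (needs metrics-server)
kubectl -n "$NS" describe pods | grep -E -A3 'Limits|Requests'
kubectl -n "$NS" get events --field-selector reason=Evicted # memory-pressure evictions, if any
```

If CPU and memory look fine, the next suspect would be disk I/O on the volume holding the git checkouts.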
And then we should just rotate cs.k8s.io to point to the deployment at cs-canary.k8s.io and wind down the Equinix machine.
I tried the test in #2182 (comment) again; this time cs-canary.k8s.io eventually gave an error page. Looking at the pod logs (it runs in the aaa cluster in the kubernetes-public GCP project), there are a lot of git fetch errors, and one replica is not ready:
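The log excerpt itself was trimmed here, but a sketch of how the fetch errors and the unready replica can be surveyed looks roughly like this (namespace and label are again assumptions, not the real names):

```shell
#!/bin/sh
# Sketch: show replica readiness and count git fetch failures per pod.
# Namespace "codesearch" and label "app=codesearch" are assumptions.
NS=codesearch
kubectl -n "$NS" get pods -l app=codesearch   # READY column flags the broken replica
for pod in $(kubectl -n "$NS" get pods -l app=codesearch -o name); do
  errs=$(kubectl -n "$NS" logs "$pod" --since=24h | grep -c 'git fetch')
  echo "$pod: $errs 'git fetch' log lines in the last 24h"
done
```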
I probably won't be able to dig further for a bit, but clearly the canary deployment needs some work before we can switch. |
Specs from the current cs.k8s.io server:
- 16 CPU(s)
- 32 GB memory
Opened a PR to bump cs-canary to 4 CPU + 16 GB memory for now: #7695
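For reference, a bump like that lands in the Deployment's resource stanza roughly as follows. The values (4 CPU, 16Gi) come from the comment above; the surrounding structure is illustrative only, not the actual manifest from the PR:

```yaml
# Illustrative fragment only -- not the real manifest from the PR.
# Values (4 CPU, 16Gi) are the ones proposed in the comment above.
resources:
  requests:
    cpu: "4"
    memory: 16Gi
  limits:
    cpu: "4"
    memory: 16Gi
```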
https://cs.k8s.io is running on a bare-metal server provided by Equinix Metal (formerly Packet) under the CNCF budget and operated until now by @dims.
The question was asked whether we should host CodeSearch on the aaa cluster.
Ref: https://kubernetes.slack.com/archives/CCK68P2Q2/p1615204807111900?thread_ts=1615189697.108500&cid=CCK68P2Q2
This issue is open to track the discussion and consensus.