Akrobateo registry (https://registry.pharos.sh) is out of service making it non-operable #35
Hi,
I'm trying to operate Akrobateo as follows:
- Cloned the repo
- Moved to the "deploy" folder
- Ran the following command: `kubectl apply -k .`

According to the error I see, it seems the private Docker registry hosting the image for the Akrobateo pod is out of service, as I get the following error: `Get https://registry.pharos.sh/v2/: dial tcp 198.54.117.199:443: connect: connection refused`
Any advice would be very much appreciated.

Comments
Well, I was able to edit deploy/04_operator.yaml, replacing "image: registry.pharos.sh/kontenapharos/akrobateo:latest" with "image: kontenapharos/akrobateo:latest". This got Akrobateo running, but when I added a Service of type LoadBalancer I fell back into the original registry problem (and I don't see how to control it from the deployment files), this time on the newly created Akrobateo DaemonSet's pods — the same specific error for all of the akrobateo-lb pods. Is anyone able to operate the current version of Akrobateo?
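For anyone hitting the same wall, a minimal sketch of that edit (both image strings are from the comment above; the container name and surrounding structure are assumed, so match them against your copy of the file):

```yaml
# deploy/04_operator.yaml (fragment, assumed structure): point the
# operator image at Docker Hub instead of the dead registry.pharos.sh.
spec:
  template:
    spec:
      containers:
        - name: akrobateo          # assumed container name
          # was: image: registry.pharos.sh/kontenapharos/akrobateo:latest
          image: kontenapharos/akrobateo:latest
```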
Seems like this project is stale. I gave the source code a quick peek: setting the controller's LB_IMAGE environment variable to your own akrobateo-lb image will solve this issue. The Dockerfile for the akrobateo-lb can be found here.
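If anyone wants to try that, it would look something like this in the operator's Deployment (LB_IMAGE is the variable named above; the image value is a placeholder for wherever you push your own build of the akrobateo-lb Dockerfile):

```yaml
# Sketch: tell the controller which LB image to deploy for LoadBalancer
# services, instead of the default one on the dead registry.
spec:
  template:
    spec:
      containers:
        - name: akrobateo
          image: kontenapharos/akrobateo:latest
          env:
            - name: LB_IMAGE
              # placeholder -- your own build, pushed somewhere reachable
              value: docker.io/yourname/akrobateo-lb:latest
```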
Hi @toonsevrin - just saw your message. If I make progress in some way, I'll post back a comment to close the loop. Thanks again.
Yeah, I've also stopped using this project because it's stale. Within our project, we've just moved our ingress deployment to a DaemonSet and made all the ports hostPorts. Works like magic.
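For reference, a bare-bones sketch of that pattern (the image, names, and ports are illustrative, not any project's actual manifests):

```yaml
# Sketch: run the ingress as a DaemonSet with hostPorts, so every node
# accepts traffic on 80/443 directly -- no LoadBalancer service needed.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress
spec:
  selector:
    matchLabels:
      app: ingress
  template:
    metadata:
      labels:
        app: ingress
    spec:
      containers:
        - name: ingress
          image: traefik:1.7        # illustrative ingress image
          ports:
            - containerPort: 80
              hostPort: 80          # bind straight onto the node
            - containerPort: 443
              hostPort: 443
```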
Well - we thought about using this tool with our TCP-based services (actually SIP, so Ingress isn't suitable here), together with the ExternalDNS add-on to dynamically publish the designated IPs into a DNS server. It seems we'll need to find another alternative (this project would have done the job quite nicely :-()
How about spending a few $ on an actual load balancer? If you're on-prem you can also consider MetalLB. Otherwise, I still think running externalDNS + a DaemonSet (you could put your external-facing app in the set) can work out.
Actually, MetalLB was our previous solution, but it turns out it collides with our CNI, Calico.
That is very true indeed: as you are using hostPort, you are constrained to one ingress per machine. If this is a concern, you can have an autoscaling node pool and deploy the DaemonSet only on those nodes. This is pretty similar to having a second proxy cluster that forwards to your NodePort, if you had a NodePort service. But if these are your concerns, I would consider finding a way to make MetalLB not collide with Calico; that sounds like the cleaner solution at your scale.
What you are saying is true, but I wanted to give Akrobateo another try. Alas, I got stuck on the same problem as last time, which is fully described in: ip_forward is not enabled #31. Despite what is written there, I have Kubernetes 1.16.1 (which is higher than the 1.15.5 mentioned there) and I'm still getting this error. Any thoughts on how to bypass this continuing obstacle?
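(For anyone else stuck on that check: on nodes you control, the usual generic-Kubernetes workaround is to enable the sysctl on every node, e.g. with a small privileged DaemonSet like the sketch below — this is not from the Akrobateo docs, and it assumes the check really does reflect the host's net.ipv4.ip_forward setting.)

```yaml
# Sketch: flip net.ipv4.ip_forward=1 on every node via a privileged
# init container sharing the host network namespace, then park a
# pause container to keep the DaemonSet pod "running".
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: enable-ip-forward
spec:
  selector:
    matchLabels:
      app: enable-ip-forward
  template:
    metadata:
      labels:
        app: enable-ip-forward
    spec:
      hostNetwork: true
      initContainers:
        - name: sysctl
          image: busybox:1.31
          securityContext:
            privileged: true
          command: ["sysctl", "-w", "net.ipv4.ip_forward=1"]
      containers:
        - name: pause
          image: k8s.gcr.io/pause:3.1   # does nothing; keeps the pod alive
```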
No idea, but note that behind the scenes this project takes the same DaemonSet approach that you said was not applicable.
I don't mind that Akrobateo works as a DaemonSet; I just don't want to deploy my own microservice(s) as a DaemonSet, for the reasons mentioned above.
Oooh, of course — yeah, you just need your ingress to be a DaemonSet. Within my company we simply patch our Istio ingress-gateway deployment into a DaemonSet (with hostPorts) and it works like gold. All our services still work and route as they would by default. In case you don't have an ingress, you can set up a proxy pretty easily yourself, although using a Kubernetes framework is a lot nicer: we use Istio gateways and virtualservices to route all our traffic, which makes it ridiculously easy to add new endpoints, services, and routes.
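To make that concrete, a minimal sketch of the Gateway/VirtualService pair (hostnames and service names are made up):

```yaml
# Sketch: expose my-service on app.example.com through the Istio
# ingress gateway (which, per the comment above, runs as a hostPort
# DaemonSet instead of a Deployment).
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: web-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - app.example.com
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: web-routes
spec:
  hosts:
    - app.example.com
  gateways:
    - web-gateway
  http:
    - route:
        - destination:
            host: my-service     # plain Kubernetes Service name
            port:
              number: 8080
```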
What about:
Thanks, but I've already switched to these images. Currently I'm blocked by Akrobateo claiming IP forwarding is disabled on our cluster, which leaves it unable to run (couldn't solve this yet).
I got blocked by some weird Akrobateo behavior as well. Go back to DaemonSets: all I had to do was apply a few small patches to my ingress deployment, literally 8 lines. The idea of a LoadBalancer operator that does this automatically for you is really awesome, but sadly it doesn't seem to exist outside of this project. The solution above is your best bet. Good luck!
I'm not sure your suggestion will work for me, as in my case I'm externalizing a service dealing with UDP-based SIP (on port 5060), not http/https.
Well, as long as you can set up a proxy for that, you're golden! Note that this is also how Akrobateo works :)
This is what I'm currently doing as an alternative to Akrobateo.
Hey @toonsevrin, I'm also looking to achieve HA with an embedded DB in k3s.
Well, this is an easy problem! Just add a healthcheck endpoint to your daemonset service (e.g. /healthz) 🎉 If you have a DNS provider with health checks (and you don't mind manually adding new IPs), you are done here. If either of those is not the case, you'd want to write a small controller that syncs your DNS (using your DNS provider's API). This is considered very easy! Note that DNS updates take a while to propagate; I would not consider this truly HA until you use floating IPs (perhaps you are already doing this — I'm not sure what you meant with the MetalLB part).
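For the first half of that, the /healthz endpoint slots into the daemonset's pod template as a plain readiness probe (everything below is illustrative; adjust the path and port to your ingress):

```yaml
# Sketch (pod-template fragment): expose /healthz so both Kubernetes
# and an external DNS health checker can poll each node.
containers:
  - name: ingress
    image: traefik:1.7          # illustrative
    ports:
      - containerPort: 80
        hostPort: 80
    readinessProbe:
      httpGet:
        path: /healthz          # assumed health endpoint
        port: 80
      periodSeconds: 10
```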
Hey @toonsevrin Glad to know that this is considered easy 😄
The initial LoadBalancer IP will be
Hi @jawabuu,
Hey @tmeltser The IPs of the masters are important because this is where the ingress (in this case Traefik) is deployed.
Hey @jawabuu - we are using MetalLB for the same purpose, but we don't deploy anything on the master nodes (ingress controller or anything else), only on our worker nodes, so we don't need (nor do we want) to expose any of our masters' IPs; but it's your choice, of course.
I didn't know about externalDNS; seems like it does exactly what I was proposing (pretty sweet, I'll definitely use it in the future) 😅 I'll leave it to the other people to help you, as I don't have any experience with MetalLB. Good luck!
Thanks @toonsevrin. One question about Akrobateo: how can I get it to preserve the source IP?
@jawabuu If you're able to get it as metadata (e.g. a header), you'll be happiest — that way you don't have any networking challenges. hostPort DaemonSets (e.g. akrobateo) should receive packets with the host IP. I'm not certain what proxy akrobateo uses, but you should check whether it adds the source IP to some metadata field (or whether that is configurable). As with almost all reverse proxies, though, akrobateo will override the IP packet source. If you want your service to receive the raw packets, run your service as a hostPorted DaemonSet directly. Anyway, getting the source IP into metadata is usually your best bet if you need the IP for something. P.S.: I'm quite bad at network-related stuff, so please correct me if I'm wrong.
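One related, standard Kubernetes mechanism worth knowing here (not an akrobateo feature): on a NodePort or LoadBalancer Service, `externalTrafficPolicy: Local` skips the cross-node SNAT hop, so the backing pods see the real client IP. A sketch, reusing the SIP port from earlier in the thread:

```yaml
# Sketch: preserve the client source IP by only routing to pods on the
# node that received the traffic (no cross-node SNAT rewrite).
apiVersion: v1
kind: Service
metadata:
  name: sip-service              # illustrative name
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: sip
  ports:
    - name: sip
      protocol: UDP
      port: 5060                 # UDP SIP port mentioned above
      targetPort: 5060
```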
No worries. I was actually thinking maybe this is why we can't retrieve the host IP when using MetalLB with an ingress like Traefik. (Referenced snippet: akrobateo/lb-image/entrypoint.sh, lines 12 to 16 at commit f858444.)
For everyone's interest: I've noticed inlets, a project that seems similar to Akrobateo but is maintained.
Hey @toonsevrin
I can then apply it to my Service or MetalLB and have externalDNS update the records.
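For reference, hooking a Service into externalDNS is a single annotation (the hostname and names below are placeholders):

```yaml
# Sketch: externalDNS watches for this annotation and creates a DNS
# record pointing at the Service's external (e.g. MetalLB-assigned) IP.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    external-dns.alpha.kubernetes.io/hostname: app.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```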
Would be nice if someone picked this up. There are some forks, with https://github.com/alexfouche/akrobateo looking the most up to date. Or is klipper-lb now somehow usable on plain k8s? https://github.com/k3s-io/klipper-lb