feat(network): enable internal ingress network policy #208
base: main
Conversation
(force-pushed from 4fc35da to 7ddb61a)
When testing on OpenShift, I seem to be getting 503 errors when trying to access Cryostat through its Route. When the Network Policy is disabled, the route works fine. Not sure why
kind: NetworkPolicy
metadata:
  name: {{ .Release.Name }}-internal-ingress
  namespace: {{ .Release.Namespace }}
I think we can remove this since helm will just use the release namespace if not set?
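For illustration, the metadata block would then look roughly like this, relying on Helm to apply the rendered manifest into the release namespace:

metadata:
  name: {{ .Release.Name }}-internal-ingress
  # no namespace field: Helm installs the manifest into the release namespace by default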
  ingress:
    - from:
      - podSelector:
          matchLabels:
            {{- include "cryostat.selectorLabels" $ | nindent 12 }}
        namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: {{ .Release.Namespace }}
When testing on OpenShift, I seem to be getting 503 errors when trying to access Cryostat through its Route. When the Network Policy is disabled, the route works fine. Not sure why
I wonder if router configurations in different OpenShift clusters play a role here. I tried locally with CRC with OpenShift 4.13 (OpenShift SDN CNI) and OpenShift v4.15 (OVN-Kubernetes CNI) and was able to access Cryostat via route URL.
If we added an Ingress rule like the one below, would it help?
    - from:
      - namespaceSelector:
          matchLabels:
            policy-group.network.openshift.io/ingress: ""
Thanks for finding that @tthvo! That could be what we need. @andrewazores fwiw, this was using a ROSA cluster.
Thanks @tthvo, that looks like it did the trick. I was testing in crc and that was working, and after adding this additional rule it's also working on a ROSA cluster.
Oh nice! Glad that helped ^^
    - from:
      - podSelector:
          matchLabels:
            {{- include "cryostat.selectorLabels" $ | nindent 12 }}
        namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: {{ .Release.Namespace }}
On this same topic as #208 (comment), accessing Cryostat via Ingress times out. It does seem like the ingress pods are being rejected :((
I tested with minikube with Calico as CNI and Nginx Ingress. I think the default Kindnet is not actually implementing NetworkPolicies.
I wonder if it would make sense to allow all ingress to Cryostat main pod instead?
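For comparison, allowing all ingress to the main pod would reduce the rule to an empty allow-all entry, roughly like the sketch below (just to illustrate the trade-off being discussed, not a change proposed in this PR):

ingress:
  - {}   # an empty rule matches all sources on all ports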
I tested with minikube with Calico as CNI and Nginx Ingress. I think the default Kindnet is not actually implementing NetworkPolicies.
Yea, Kindnet doesn't implement them. That's supposed to mean they just get ignored and all traffic restrictions are lifted (or rather, are just not imposed)...
I wonder if it would make sense to allow all ingress to Cryostat main pod instead?
At that point, the policies may as well just be disabled. Maybe these should be turned off by default along with the egress policies, since they are just meant to be an additional level of protection.
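A rough sketch of what that default could look like in values.yaml; the networkPolicy.ingress.enabled key matches the flag used in the test instructions below, while the egress key is only a hypothetical placeholder:

networkPolicy:
  ingress:
    enabled: false   # opt-in rather than enabled by default
  egress:
    enabled: false   # hypothetical key; egress policies are not part of this PR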
Would it make sense to use the NetworkPolicies to restrict traffic only for certain ports? Like for Cryostat talking to the storage pod? If the port is meant to be accessible from outside the cluster, then it's probably okay to be accessed from inside the cluster too.
I'm not sure that I understand. Creating an Ingress NetworkPolicy basically blocks all incoming traffic, and then the various rules create an allowlist for traffic that is allowed through based on the origin of that traffic. Since there are no specifications on the ports in these rules currently, all ports are allowed so long as the origin Pod matches one of the rules. So I think it's already behaving like this: "If the port is meant to be accessible from outside the cluster, then it's probably okay to be accessed from inside the cluster too."
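For example, scoping a rule to a specific port would look roughly like the following; the label and port here are illustrative assumptions (8181 is the service port used in the test instructions below):

ingress:
  - from:
      - podSelector:
          matchLabels:
            app.kubernetes.io/name: cryostat   # assumed selector label, for illustration
    ports:
      - protocol: TCP
        port: 8181   # only this port would be reachable from the matched pods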
I was thinking something like having one or more NetworkPolicy that:
- Always allows traffic to the OAuth Proxy container, no matter the source
- Filters traffic to the storage container/pod to only Cryostat itself
Ah, I see. In that case I think that would mean the policies' selectors would only select the storage Pod, and not Cryostat's own Pod (which is what also contains the OAuth Proxy). That way all traffic to the Cryostat Pod is allowed, but traffic to storage would only be accepted if it originated from Cryostat.
Same thing would also be applied to the database.
Right, they are additive, but the default when none are present is "allow everything". The moment the first one is defined for a Pod then the default policy for that Pod becomes "allow nothing", and the additive rules build the allowlist for what can come through. I think what you're describing would be a policy for the storage, a policy for the database, and no policy on Cryostat itself.
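Sketching that out, the storage policy might look roughly like the manifest below (with a mirror of it for the database); the names, labels, and port are assumptions for illustration, not the chart's actual values:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cryostat-storage-internal-ingress   # hypothetical name
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: storage   # assumed label on the storage pod
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: cryostat   # assumed label on the Cryostat pod
      ports:
        - protocol: TCP
          port: 8333   # storage port used elsewhere in this PR's test instructions

Since the Cryostat pod itself is not selected by any policy, all ingress to it stays allowed.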
That would work. Not sure if there are any ports in the Cryostat pod we'd want to restrict.
On another note, one use case I was thinking of for the operator would be for the new nginx container. We could have a NetworkPolicy to only allow traffic to it from pods in the target namespaces.
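Very roughly, such a policy for the operator could select the allowed source namespaces by their kubernetes.io/metadata.name labels, something like the sketch below (the namespace names are placeholders):

ingress:
  - from:
      - namespaceSelector:
          matchExpressions:
            - key: kubernetes.io/metadata.name
              operator: In
              values:
                - target-namespace-1   # placeholder target namespaces
                - target-namespace-2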
I suppose we could still put a policy on the Cryostat Pod just for completeness, to make sure only the authproxy's HTTP(S) port(s) are externally visible.
I guess splitting netpol would also be helpful for fine-grained egress rules in #209 :D
(force-pushed from 12c39b4 to 735dd80)
Related to #205
To test on OpenShift:
1. helm install --set core.route.enabled=true --set authentication.openshift.enabled=true cryostat ./charts/cryostat
2. oc run runner --image=registry.access.redhat.com/ubi8/ubi --rm -it /bin/bash. This runs an additional Pod in the same installation namespace, but one which is not part of Cryostat and does not have the same selector labels. Once this comes up and you have a shell, run curl -v http://cryostat:8181 or curl -v http://cryostat-storage:8333. These should time out.
3. Open the Route URL (echo https://$(oc get route -n cryostat cryostat -o jsonpath="{.status.ingress[0].host}")) and ensure the Cryostat UI behaves as usual. Create a localhost:0 custom target, start and archive a recording, etc.
4. helm uninstall cryostat
5. helm install --set core.route.enabled=true --set authentication.openshift.enabled=true --set networkPolicy.ingress.enabled=false cryostat ./charts/cryostat
6. oc run runner --image=registry.access.redhat.com/ubi8/ubi --rm -it /bin/bash. Once this comes up and you have a shell, run curl -v http://cryostat:8181 or curl -v http://cryostat-storage:8333. These should succeed and return HTTP responses quickly.

Similar testing on other types of cluster (kind, minikube) should also work, with the usual adjustments (don't use core.route.enabled or authentication.openshift.enabled, etc.).

I also spent some time trying to define Egress policies. I thought this would be interesting along the same lines as cryostatio/cryostat-agent#242 and cryostatio/cryostat#323 - we could add network-level restrictions (firewall rules) to prevent Cryostat from being made to open network connections to unexpected destinations, i.e. namespaces outside of the admin's chosen list of target namespaces. However, I ran into issues where this interrupted Cryostat's ability to connect to its own database deployment, and I was also not completely sure of the right approach to allow Cryostat and the openshift-oauth-proxy to have traffic egress to the k8s API server (for doing RBAC checks, or Endpoints discovery). I'm holding off on that idea for now, but it may be worth following up on later.
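For reference, the kind of egress rules being described might look roughly like the sketch below; the database label and port and the API server port are assumptions, and the API-server piece is exactly the part that was unclear:

egress:
  - to:
      - podSelector:
          matchLabels:
            app.kubernetes.io/component: database   # assumed label on the database pod
    ports:
      - protocol: TCP
        port: 5432   # assumed PostgreSQL port
  - ports:
      - protocol: TCP
        port: 6443   # Kubernetes API server; no 'to' restriction, so any destination on this port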