Increase Memory Limit to 1Gi #40
base: main
Conversation
On a big cluster, with several patches, I get an average memory usage of 800Mi. With a limit at 500Mi, the manager gets OOMKilled.
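For reference, one way to confirm the OOMKill and watch actual usage (the pod and namespace names below are placeholders, not the project's actual names; the second command needs the metrics API available):

# Check why the manager container was last terminated (expect "OOMKilled"):
kubectl -n patch-operator get pod patch-operator-controller-manager-xxxxx \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'

# Observe live memory usage of the manager pods:
kubectl -n patch-operator top pod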
Please use this to configure the resources needed by your deployment:
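A minimal sketch of such an override, assuming the manager runs as a plain Deployment (the deployment and namespace names are placeholders; if the Deployment is managed by OLM or kustomize, the change should be made at that source instead, or it will be reverted):

# One-off override of the manager's memory request/limit;
# "patch-operator" and the deployment name are assumed placeholders.
kubectl -n patch-operator set resources deployment patch-operator-controller-manager \
  --requests=memory=800Mi --limits=memory=1Gi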
I have only 9 patches, which all target different resources (except one):

{
  "apiVersion": "config.openshift.io/v1",
  "kind": "OAuth",
  "name": "cluster"
}
{
  "apiVersion": "config.openshift.io/v1",
  "kind": "ClusterVersion",
  "name": "version"
}
{
  "apiVersion": "config.openshift.io/v1",
  "kind": "APIServer",
  "name": "cluster"
}
{
  "apiVersion": "pipelines.openshift.io/v1alpha1",
  "kind": "GitopsService",
  "name": "cluster"
}
{
  "apiVersion": "imageregistry.operator.openshift.io/v1",
  "kind": "Config",
  "name": "cluster"
}
{
  "apiVersion": "operator.openshift.io/v1",
  "kind": "IngressController",
  "name": "default",
  "namespace": "openshift-ingress-operator"
}
{
  "apiVersion": "operator.openshift.io/v1",
  "kind": "IngressController",
  "name": "default",
  "namespace": "openshift-ingress-operator"
}
{
  "apiVersion": "machineconfiguration.openshift.io/v1",
  "kind": "MachineConfigPool",
  "name": "worker"
}
{
  "apiVersion": "image.openshift.io/v1",
  "kind": "ImageStreamTag",
  "namespace": "openshift"
}

The only one that targets a lot of resources is the last one, targeting 511 objects.
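That last target has no "name", so it matches every ImageStreamTag in the namespace. One way to count what it matches, assuming the imagestreamtags API is reachable on the cluster:

# Count the ImageStreamTag objects the name-less target selects
# in the "openshift" namespace (511 on this cluster):
kubectl get imagestreamtags -n openshift --no-headers | wc -l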
Source objects also count; do you have any?
No, none.