[BUG] Increased memory usage in geographically distant Salt-Proxy server #67767

ntt-raraujo opened this issue Feb 24, 2025 · 0 comments

Description
We are experiencing a memory issue on a Salt-Proxy server, which seems to be related to the latency between the proxy and the master. We have three servers that are clones of each other but are located in different geographical regions. Due to this, the servers have the following latencies:

  • Europe salt-proxy server: <1 ms
  • America salt-proxy server: 90 ms
  • Asia salt-proxy server: 160 ms

The memory issue affects the salt-proxy and salt-minion processes on the Asia server only. While the memory usage of these processes on the America and Europe proxies varies between 100 MB and 200 MB, on the Asia proxy it gradually increases and, after a few days, ranges between 800 MB and 1.2 GB per process.

The devices managed by these proxies use the same set of templates and the same pillar format, and the proxies were created from the same cloned VM. The only difference is the geographical location relative to the salt-master.
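
For context, latencies like the ones listed above can be checked with a simple TCP connect test against the master's return port. This is only an illustrative sketch (the master hostname is a placeholder, and 4506 is Salt's default ret port), not the method used to obtain the figures above.

```python
# Illustrative only: average TCP connect time from a proxy host to the
# salt-master's default ret port (4506). Replace MASTER with your master.
import socket
import time

MASTER = "salt-master.example.com"  # placeholder hostname
PORT = 4506                         # Salt master's default ret port

def tcp_rtt_ms(host: str, port: int, samples: int = 5) -> float:
    """Return the average TCP connect time in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.monotonic()
        with socket.create_connection((host, port), timeout=5):
            pass
        total += (time.monotonic() - start) * 1000
    return total / samples

if __name__ == "__main__":
    print(f"avg connect time to {MASTER}:{PORT}: {tcp_rtt_ms(MASTER, PORT):.1f} ms")
```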

Setup

  • on-prem machine
  • VMware ESXI VM
  • VM running on a cloud service, please be explicit and add details
  • container (Kubernetes, Docker, containerd, etc. please specify)
  • or a combination, please be explicit
  • jails if it is FreeBSD
  • classic packaging
  • onedir packaging
  • used bootstrap to install

Steps to Reproduce the behavior

  1. Deploy three Salt-Proxy servers in different geographical locations (Europe, America, Asia).
  2. Monitor the memory usage of the salt-proxy and salt-minion processes over several days (a sampling sketch follows below).
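
A minimal sketch of the sampling in step 2, assuming psutil is available on the proxy hosts; the interval and the process-name matching are illustrative, not part of the original setup.

```python
# Illustrative sketch: periodically log the RSS of salt-proxy and salt-minion
# processes so the growth can be compared between regions over several days.
import time
import psutil

WATCHED = ("salt-proxy", "salt-minion")
INTERVAL = 300  # seconds between samples (arbitrary choice)

def sample() -> None:
    for proc in psutil.process_iter(["pid", "cmdline", "memory_info"]):
        cmdline = " ".join(proc.info["cmdline"] or [])
        if any(name in cmdline for name in WATCHED):
            rss_mb = proc.info["memory_info"].rss / (1024 * 1024)
            print(f"{time.strftime('%Y-%m-%d %H:%M:%S')} "
                  f"pid={proc.info['pid']} rss={rss_mb:.1f}MB cmd={cmdline[:60]}")

if __name__ == "__main__":
    while True:
        sample()
        time.sleep(INTERVAL)
```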

Expected behavior
Memory usage of salt-proxy and salt-minion processes should remain consistent across all proxies, regardless of geographical location.

Versions Report

salt --versions-report
Salt Version:
          Salt: 3006.7
 
Python Version:
        Python: 3.10.13 (main, Feb 19 2024, 03:31:20) [GCC 11.2.0]
 
Dependency Versions:
          cffi: 1.16.0
      cherrypy: unknown
      dateutil: 2.8.1
     docker-py: Not Installed
         gitdb: Not Installed
     gitpython: Not Installed
        Jinja2: 3.1.3
       libgit2: Not Installed
  looseversion: 1.0.2
      M2Crypto: Not Installed
          Mako: Not Installed
       msgpack: 1.0.2
  msgpack-pure: Not Installed
  mysql-python: Not Installed
     packaging: 22.0
     pycparser: 2.21
      pycrypto: Not Installed
  pycryptodome: 3.19.1
        pygit2: Not Installed
  python-gnupg: 0.4.8
        PyYAML: 6.0.1
         PyZMQ: 25.1.2
        relenv: 0.15.1
         smmap: Not Installed
       timelib: 0.2.4
       Tornado: 4.5.3
           ZMQ: 4.3.4
 
System Versions:
          dist: oracle 8.9 
        locale: utf-8
       machine: x86_64
       release: 5.4.17-2136.329.3.1.el8uek.x86_64
        system: Linux
       version: Oracle Linux Server 8.9 

Additional context
All salt-proxy servers were created from the same VM clone.
The same templates and pillar formats are used across all salt-proxies; the proxied devices are also similar (same vendors and models in all regions).
The problem affects both the salt-minion and salt-proxy processes.
The only variable is the geographical location and the resulting network latency.
The Salt master and proxies are running the same versions.
This issue did not occur with the 3004.x versions.
