
fail to start dify with podman; nginx exited; worker fails to connect to redis #16967

Open
5 tasks done
ZelinMa557 opened this issue Mar 27, 2025 · 3 comments
Labels
🐞 bug Something isn't working

Comments

@ZelinMa557

Self Checks

  • This is only for bug reports; if you would like to ask a question, please head to Discussions.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please submit issues in English, or they will be closed. Thank you! :)
  • Please do not modify this template :) and fill in all the required fields.

Dify version

0.15.3

Cloud or Self Hosted

Self Hosted (Docker)

Steps to reproduce

  1. git clone https://github.com/langgenius/dify.git --branch 0.15.3
  2. cd dify/docker
  3. cp .env.example .env
  4. append the following lines to the network config in docker-compose.yaml:
     default:
       driver: bridge
       internal: true
  5. podman-compose up -d
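For clarity, the appended lines are assumed to land under the existing top-level networks: key, so the effective block looks roughly like this (a sketch of the override, not the full shipped file):

```yaml
# Sketch of the modified top-level networks: section in docker-compose.yaml.
networks:
  default:
    driver: bridge
    internal: true   # blocks outbound traffic from the bridge; under podman
                     # an internal network can also lose the embedded DNS that
                     # resolves service names like "api" and "redis"
```

The DNS side effect noted in the comment is an assumption worth checking, since both observed failures ("host not found in upstream api" and "redis:6379 Name or service not known") are name-resolution errors.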

✔️ Expected Behavior

Dify starts successfully

❌ Actual Behavior

  1. nginx exited
     (screenshot of the exited container attached in the original issue)
  2. the log shows the worker fails to connect to Redis; full log:
[redis]      | 1:C 27 Mar 2025 12:50:19.691 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
[redis]      | 1:C 27 Mar 2025 12:50:19.691 # Redis version=6.2.17, bits=64, commit=00000000, modified=0, pid=1, just started
[redis]      | 1:C 27 Mar 2025 12:50:19.691 # Configuration loaded
[redis]      | 1:M 27 Mar 2025 12:50:19.695 * monotonic clock: POSIX clock_gettime
[redis]      | 1:M 27 Mar 2025 12:50:19.696 * Running mode=standalone, port=6379.
[redis]      | 1:M 27 Mar 2025 12:50:19.696 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
[redis]      | 1:M 27 Mar 2025 12:50:19.696 # Server initialized
[redis]      | 1:M 27 Mar 2025 12:50:19.696 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
[redis]      | 1:M 27 Mar 2025 12:50:19.696 * Loading RDB produced by version 6.2.17
[redis]      | 1:M 27 Mar 2025 12:50:19.696 * RDB age 79 seconds
[redis]      | 1:M 27 Mar 2025 12:50:19.696 * RDB memory usage when created 0.77 Mb
[redis]      | 1:M 27 Mar 2025 12:50:19.696 # Done loading RDB, keys loaded: 0, keys expired: 0.
[redis]      | 1:M 27 Mar 2025 12:50:19.696 * DB loaded from disk: 0.000 seconds
[redis]      | 1:M 27 Mar 2025 12:50:19.696 * Ready to accept connections
[sandbox]    | 2025/03/27 12:50:19 setup.go:29: [INFO]initializing nodejs runner environment...
[ssrf_proxy] | [ENTRYPOINT] re-create snakeoil self-signed certificate removed in the build process
[sandbox]    | 2025/03/27 12:50:20 setup.go:85: [INFO]nodejs runner environment initialized
[sandbox]    | 2025/03/27 12:50:20 setup.go:33: [INFO]initializing python runner environment...
[sandbox]    | 2025/03/27 12:50:20 config.go:129: [INFO]network has been enabled
[sandbox]    | 2025/03/27 12:50:20 config.go:145: [INFO]using https proxy: http://ssrf_proxy:3128
[sandbox]    | 2025/03/27 12:50:20 config.go:154: [INFO]using http proxy: http://ssrf_proxy:3128
[sandbox]    | 2025/03/27 12:50:20 server.go:20: [INFO]config init success
[sandbox]    | 2025/03/27 12:50:20 server.go:26: [INFO]runner dependencies init success
[sandbox]    | 2025/03/27 12:50:20 cocrrent.go:31: [INFO]setting max requests to 50
[sandbox]    | 2025/03/27 12:50:20 cocrrent.go:13: [INFO]setting max workers to 4
[sandbox]    | 2025/03/27 12:50:20 server.go:47: [INFO]installing python dependencies...
[sandbox]    | 2025/03/27 12:50:20 server.go:53: [INFO]python dependencies installed
[sandbox]    | 2025/03/27 12:50:20 server.go:55: [INFO]initializing python dependencies sandbox...
[db]         | 
[db]         | PostgreSQL Database directory appears to contain a database; Skipping initialization
[db]         | 
[db]         | 2025-03-27 12:50:20.462 UTC [1] LOG:  starting PostgreSQL 15.12 on x86_64-pc-linux-musl, compiled by gcc (Alpine 14.2.0) 14.2.0, 64-bit
[db]         | 2025-03-27 12:50:20.463 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
[db]         | 2025-03-27 12:50:20.463 UTC [1] LOG:  listening on IPv6 address "::", port 5432
[db]         | 2025-03-27 12:50:20.483 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
[db]         | 2025-03-27 12:50:20.505 UTC [29] LOG:  database system was shut down at 2025-03-27 12:49:00 UTC
[db]         | 2025-03-27 12:50:20.532 UTC [1] LOG:  database system is ready to accept connections
[ssrf_proxy] | [ENTRYPOINT] replacing environment variables in the template
[ssrf_proxy] | 2025/03/27 12:50:21| Processing Configuration File: /etc/squid/squid.conf (depth 0)
[ssrf_proxy] | 2025/03/27 12:50:21| Processing Configuration File: /etc/squid/conf.d/debian.conf (depth 1)
[ssrf_proxy] | 2025/03/27 12:50:21| Processing Configuration File: /etc/squid/conf.d/rock.conf (depth 1)
[ssrf_proxy] | 2025/03/27 12:50:21| Created PID file (/run/squid.pid)
[ssrf_proxy] | 2025/03/27 12:50:21| Set Current Directory to /var/spool/squid
[ssrf_proxy] | 2025/03/27 12:50:21| Creating missing swap directories
[ssrf_proxy] | 2025/03/27 12:50:21| No cache_dir stores are configured.
[ssrf_proxy] | 2025/03/27 12:50:21| Removing PID file (/run/squid.pid)
[ssrf_proxy] | [ENTRYPOINT] starting squid
[web]        | 
[web]        |                         -------------
[web]        | 
[web]        | __/\\\\\\\\\\\\\____/\\\\____________/\\\\____/\\\\\\\\\_____
[web]        |  _\/\\\/////////\\\_\/\\\\\\________/\\\\\\__/\\\///////\\\___
[web]        |   _\/\\\_______\/\\\_\/\\\//\\\____/\\\//\\\_\///______\//\\\__
[web]        |    _\/\\\\\\\\\\\\\/__\/\\\\///\\\/\\\/_\/\\\___________/\\\/___
[web]        |     _\/\\\/////////____\/\\\__\///\\\/___\/\\\________/\\\//_____
[web]        |      _\/\\\_____________\/\\\____\///_____\/\\\_____/\\\//________
[web]        |       _\/\\\_____________\/\\\_____________\/\\\___/\\\/___________
[web]        |        _\/\\\_____________\/\\\_____________\/\\\__/\\\\\\\\\\\\\\\_
[web]        |         _\///______________\///______________\///__\///////////////__
[web]        | 
[web]        | 
[web]        |                           Runtime Edition
[web]        | 
[web]        |         PM2 is a Production Process Manager for Node.js applications
[web]        |                      with a built-in Load Balancer.
[web]        | 
[web]        |                 Start and Daemonize any application:
[web]        |                 $ pm2 start app.js
[web]        | 
[web]        |                 Load Balance 4 instances of api.js:
[web]        |                 $ pm2 start api.js -i 4
[web]        | 
[web]        |                 Monitor in production:
[web]        |                 $ pm2 monitor
[web]        | 
[web]        |                 Make pm2 auto-boot at server restart:
[web]        |                 $ pm2 startup
[web]        | 
[web]        |                 To go further checkout:
[web]        |                 http://pm2.io/
[web]        | 
[web]        | 
[web]        |                         -------------
[web]        | 
[web]        | pm2 launched in no-daemon mode (you can add DEBUG="*" env variable to get more messages)
[worker]     | Running migrations
[api]        | Running migrations
[web]        | 2025-03-27T12:50:22: PM2 log: Launching in no daemon mode
[web]        | 2025-03-27T12:50:22: PM2 log: [PM2][WARN] Applications dify-web not running, starting...
[web]        | 2025-03-27T12:50:22: PM2 log: App [dify-web:0] starting in -cluster mode-
[web]        | 2025-03-27T12:50:22: PM2 log: App [dify-web:0] online
[web]        | 2025-03-27T12:50:22: PM2 log: App [dify-web:1] starting in -cluster mode-
[web]        | 2025-03-27T12:50:22: PM2 log: [PM2] This PM2 is not UP TO DATE
[web]        | 2025-03-27T12:50:22: PM2 log: [PM2] Upgrade to version 6.0.5
[web]        | 2025-03-27T12:50:22: PM2 log: App [dify-web:1] online
[web]        | 2025-03-27T12:50:22: PM2 log: [PM2] App [dify-web] launched (2 instances)
[web]        | 2025-03-27T12:50:22: PM2 log: ┌────┬─────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
[web]        | │ id │ name        │ namespace   │ version │ mode    │ pid      │ uptime │ ↺    │ status    │ cpu      │ mem      │ user     │ watching │
[web]        | ├────┼─────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
[web]        | │ 0  │ dify-web    │ default     │ 0.15.3  │ cluster │ 18       │ 0s     │ 0    │ online    │ 0%       │ 52.2mb   │ 1001     │ disabled │
[web]        | │ 1  │ dify-web    │ default     │ 0.15.3  │ cluster │ 25       │ 0s     │ 0    │ online    │ 0%       │ 49.1mb   │ 1001     │ disabled │
[web]        | └────┴─────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
[web]        | 2025-03-27T12:50:22: PM2 log: [--no-daemon] Continue to stream logs
[web]        | 2025-03-27T12:50:22: PM2 log: [--no-daemon] Exit on target PM2 exit pid=7
[ssrf_proxy] | 2025/03/27 12:50:21| Processing Configuration File: /etc/squid/squid.conf (depth 0)
[ssrf_proxy] | 2025/03/27 12:50:21| Processing Configuration File: /etc/squid/conf.d/debian.conf (depth 1)
[ssrf_proxy] | 2025/03/27 12:50:21| Processing Configuration File: /etc/squid/conf.d/rock.conf (depth 1)
[ssrf_proxy] | 2025/03/27 12:50:21| Created PID file (/run/squid.pid)
[ssrf_proxy] | 2025/03/27 12:50:21| Set Current Directory to /var/spool/squid
[ssrf_proxy] | 2025/03/27 12:50:21| Creating missing swap directories
[ssrf_proxy] | 2025/03/27 12:50:21| No cache_dir stores are configured.
[ssrf_proxy] | 2025/03/27 12:50:21| Removing PID file (/run/squid.pid)
[ssrf_proxy] | 2025/03/27 12:50:21| Processing Configuration File: /etc/squid/squid.conf (depth 0)
[ssrf_proxy] | 2025/03/27 12:50:21| Processing Configuration File: /etc/squid/conf.d/debian.conf (depth 1)
[ssrf_proxy] | 2025/03/27 12:50:21| Processing Configuration File: /etc/squid/conf.d/rock.conf (depth 1)
[ssrf_proxy] | 2025/03/27 12:50:21| Created PID file (/run/squid.pid)
[ssrf_proxy] | 2025/03/27 12:50:21| Set Current Directory to /var/spool/squid
[ssrf_proxy] | 2025/03/27 12:50:21| Starting Squid Cache version 6.10 for x86_64-pc-linux-gnu...
[ssrf_proxy] | 2025/03/27 12:50:21| Service Name: squid
[ssrf_proxy] | 2025/03/27 12:50:21| Process ID 45
[ssrf_proxy] | 2025/03/27 12:50:21| Process Roles: master worker
[ssrf_proxy] | 2025/03/27 12:50:21| With 1024 file descriptors available
[ssrf_proxy] | 2025/03/27 12:50:21| Initializing IP Cache...
[ssrf_proxy] | 2025/03/27 12:50:21| DNS IPv6 socket created at [::], FD 8
[ssrf_proxy] | 2025/03/27 12:50:21| DNS IPv4 socket created at 0.0.0.0, FD 9
[ssrf_proxy] | 2025/03/27 12:50:21| Adding nameserver 100.100.2.136 from /etc/resolv.conf
[ssrf_proxy] | 2025/03/27 12:50:21| Adding nameserver 100.100.2.138 from /etc/resolv.conf
[ssrf_proxy] | 2025/03/27 12:50:21| Logfile: opening log daemon:/var/log/squid/access.log
[ssrf_proxy] | 2025/03/27 12:50:21| Logfile Daemon: opening log /var/log/squid/access.log
[ssrf_proxy] | 2025/03/27 12:50:21| Local cache digest enabled; rebuild/rewrite every 3600/3600 sec
[ssrf_proxy] | 2025/03/27 12:50:21| Store logging disabled
[ssrf_proxy] | 2025/03/27 12:50:21| Swap maxSize 0 + 262144 KB, estimated 20164 objects
[ssrf_proxy] | 2025/03/27 12:50:21| Target number of buckets: 1008
[ssrf_proxy] | 2025/03/27 12:50:21| Using 8192 Store buckets
[ssrf_proxy] | 2025/03/27 12:50:21| Max Mem  size: 262144 KB
[ssrf_proxy] | 2025/03/27 12:50:21| Max Swap size: 0 KB
[ssrf_proxy] | 2025/03/27 12:50:21| Using Least Load store dir selection
[ssrf_proxy] | 2025/03/27 12:50:21| Set Current Directory to /var/spool/squid
[ssrf_proxy] | 2025/03/27 12:50:21| Finished loading MIME types and icons.
[ssrf_proxy] | 2025/03/27 12:50:21| HTCP Disabled.
[ssrf_proxy] | 2025/03/27 12:50:21| Pinger socket opened on FD 15
[ssrf_proxy] | 2025/03/27 12:50:21| Squid plugin modules loaded: 0
[ssrf_proxy] | 2025/03/27 12:50:21| Adaptation support is off.
[ssrf_proxy] | 2025/03/27 12:50:21| Accepting HTTP Socket connections at conn3 local=[::]:3128 remote=[::] FD 12 flags=9
[ssrf_proxy] |     listening port: 3128
[ssrf_proxy] | 2025/03/27 12:50:21| Accepting reverse-proxy HTTP Socket connections at conn5 local=[::]:8194 remote=[::] FD 13 flags=9
[ssrf_proxy] |     listening port: 8194
[ssrf_proxy] | 2025/03/27 12:50:21| Configuring Parent sandbox
[ssrf_proxy] | 2025/03/27 12:50:21| WARNING: DNS lookup for 'sandbox' failed!
[ssrf_proxy] | 2025/03/27 12:50:21 pinger| Initialising ICMP pinger ...
[ssrf_proxy] | 2025/03/27 12:50:21 pinger| ICMP socket opened.
[ssrf_proxy] | 2025/03/27 12:50:21 pinger| ICMPv6 socket opened
[ssrf_proxy] | 2025/03/27 12:50:22| storeLateRelease: released 0 objects
[nginx]      | 2025/03/27 12:50:23 [emerg] 11#11: host not found in upstream "api" in /etc/nginx/conf.d/default.conf:8
[nginx]      | nginx: [emerg] host not found in upstream "api" in /etc/nginx/conf.d/default.conf:8
[web]        | 12:50:24 0|dify-web  |   ▲ Next.js 14.2.17
[web]        | 12:50:24 0|dify-web  |   - Local:        http://12ebf6374d78:3000
[web]        | 12:50:24 0|dify-web  |   - Network:      http://10.88.0.51:3000
[web]        | 12:50:24 0|dify-web  |  ✓ Starting...
[web]        | 12:50:24 1|dify-web  |   ▲ Next.js 14.2.17
[web]        | 12:50:24 1|dify-web  |   - Local:        http://12ebf6374d78:3000
[web]        | 12:50:24 1|dify-web  |   - Network:      http://10.88.0.51:3000
[web]        | 12:50:24 1|dify-web  |  ✓ Starting...
[web]        | 12:50:24 0|dify-web  |  ✓ Ready in 734ms
[web]        | 12:50:25 1|dify-web  |  ✓ Ready in 675ms
[api]        | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
[worker]     | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
[worker]     | /app/api/.venv/lib/python3.12/site-packages/tencentcloud/hunyuan/v20230901/models.py:6087: SyntaxWarning: invalid escape sequence '\_'
[worker]     |   """function名称,只能包含a-z,A-Z,0-9,\_或-
[api]        | /app/api/.venv/lib/python3.12/site-packages/tencentcloud/hunyuan/v20230901/models.py:6087: SyntaxWarning: invalid escape sequence '\_'
[api]        |   """function名称,只能包含a-z,A-Z,0-9,\_或-
[worker]     | 2025-03-27 12:50:56.417 INFO [MainThread] [utils.py:160] - NumExpr defaulting to 4 threads.
[api]        | 2025-03-27 12:50:56.421 INFO [MainThread] [utils.py:160] - NumExpr defaulting to 4 threads.
[sandbox]    | 2025/03/27 12:50:57 env.go:30: [WARN]python lib path /usr/lib/python3.10 is not available
[sandbox]    | 2025/03/27 12:50:57 env.go:30: [WARN]python lib path /usr/lib/python3 is not available
[api]        | /app/api/.venv/lib/python3.12/site-packages/opik/evaluation/metrics/heuristics/regex_match.py:8: SyntaxWarning: invalid escape sequence '\d'
[api]        |   """
[worker]     | /app/api/.venv/lib/python3.12/site-packages/opik/evaluation/metrics/heuristics/regex_match.py:8: SyntaxWarning: invalid escape sequence '\d'
[worker]     |   """
[api]        | 2025-03-27 12:51:08.819 INFO [MainThread] [_client.py:1038] - HTTP Request: GET https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json "HTTP/1.1 200 OK"
[worker]     | 2025-03-27 12:51:08.892 INFO [MainThread] [_client.py:1038] - HTTP Request: GET https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json "HTTP/1.1 200 OK"
[sandbox]    | 2025/03/27 12:51:15 env.go:30: [WARN]python lib path /run/systemd/resolve/stub-resolv.conf is not available
[sandbox]    | 2025/03/27 12:51:15 env.go:30: [WARN]python lib path /run/resolvconf/resolv.conf is not available
[api]        | 2025-03-27 12:51:21.840 INFO [pre_load_builtin_providers_cache] [font_manager.py:1578] - generated new fontManager
[worker]     | 2025-03-27 12:51:21.957 INFO [pre_load_builtin_providers_cache] [font_manager.py:1578] - generated new fontManager
[sandbox]    | 2025/03/27 12:51:31 server.go:60: [INFO]python dependencies sandbox initialized
[api]        | Preparing database migration...
[api]        | Traceback (most recent call last):
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/redis/connection.py", line 277, in connect
[api]        |     sock = self.retry.call_with_retry(
[api]        |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/redis/retry.py", line 62, in call_with_retry
[api]        |     return do()
[api]        |            ^^^^
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/redis/connection.py", line 278, in <lambda>
[api]        |     lambda: self._connect(), lambda error: self.disconnect(error)
[api]        |             ^^^^^^^^^^^^^^^
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/redis/connection.py", line 607, in _connect
[api]        |     for res in socket.getaddrinfo(
[api]        |                ^^^^^^^^^^^^^^^^^^^
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/gevent/_socketcommon.py", line 221, in getaddrinfo
[api]        |     addrlist = get_hub().resolver.getaddrinfo(host, port, family, type, proto, flags)
[api]        |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/gevent/resolver/thread.py", line 63, in getaddrinfo
[api]        |     return self.pool.apply(_socket.getaddrinfo, args, kwargs)
[api]        |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/gevent/pool.py", line 161, in apply
[api]        |     return self.spawn(func, *args, **kwds).get()
[api]        |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[api]        |   File "src/gevent/event.py", line 330, in gevent._gevent_cevent.AsyncResult.get
[api]        |   File "src/gevent/event.py", line 360, in gevent._gevent_cevent.AsyncResult.get
[api]        |   File "src/gevent/event.py", line 348, in gevent._gevent_cevent.AsyncResult.get
[api]        |   File "src/gevent/event.py", line 328, in gevent._gevent_cevent.AsyncResult._raise_exception
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/gevent/_compat.py", line 50, in reraise
[api]        |     raise value.with_traceback(tb)
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/gevent/threadpool.py", line 173, in __run_task
[api]        |     thread_result.set(func(*args, **kwargs))
[api]        | ^^^^^^^^^^^^^^^
[api]        | socket.gaierror: [Errno -2] Name or service not known
[api]        | 
[api]        | During handling of the above exception, another exception occurred:
[api]        | 
[api]        | Traceback (most recent call last):
[api]        |   File "/app/api/.venv/bin/flask", line 8, in <module>
[api]        |     sys.exit(main())
[api]        |              ^^^^^^
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/flask/cli.py", line 1129, in main
[api]        |     cli.main()
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 1082, in main
[api]        |     rv = self.invoke(ctx)
[api]        |          ^^^^^^^^^^^^^^^^
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 1697, in invoke
[api]        |     return _process_result(sub_ctx.command.invoke(sub_ctx))
[api]        |                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 1443, in invoke
[api]        |     return ctx.invoke(self.callback, **ctx.params)
[api]        |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 788, in invoke
[api]        |     return __callback(*args, **kwargs)
[api]        |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[api]        |   File "/app/api/commands.py", line 585, in upgrade_db
[api]        |     if lock.acquire(blocking=False):
[api]        |        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/redis/lock.py", line 210, in acquire
[api]        |     if self.do_acquire(token):
[api]        |        ^^^^^^^^^^^^^^^^^^^^^^
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/redis/lock.py", line 226, in do_acquire
[api]        |     if self.redis.set(self.name, token, nx=True, px=timeout):
[api]        |        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/redis/commands/core.py", line 2333, in set
[api]        |     return self.execute_command("SET", *pieces, **options)
[api]        |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/redis/client.py", line 545, in execute_command
[api]        |     conn = self.connection or pool.get_connection(command_name, **options)
[api]        |                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/redis/connection.py", line 1074, in get_connection
[api]        |     connection.connect()
[api]        |   File "/app/api/.venv/lib/python3.12/site-packages/redis/connection.py", line 283, in connect
[api]        |     raise ConnectionError(self._error_message(e))
[api]        | redis.exceptions.ConnectionError: Error -2 connecting to redis:6379. Name or service not known.
[worker]     | Preparing database migration...
[worker]     | Traceback (most recent call last):
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/redis/connection.py", line 277, in connect
[worker]     |     sock = self.retry.call_with_retry(
[worker]     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/redis/retry.py", line 62, in call_with_retry
[worker]     |     return do()
[worker]     |            ^^^^
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/redis/connection.py", line 278, in <lambda>
[worker]     |     lambda: self._connect(), lambda error: self.disconnect(error)
[worker]     |             ^^^^^^^^^^^^^^^
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/redis/connection.py", line 607, in _connect
[worker]     |     for res in socket.getaddrinfo(
[worker]     |                ^^^^^^^^^^^^^^^^^^^
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/gevent/_socketcommon.py", line 221, in getaddrinfo
[worker]     |     addrlist = get_hub().resolver.getaddrinfo(host, port, family, type, proto, flags)
[worker]     |                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/gevent/resolver/thread.py", line 63, in getaddrinfo
[worker]     |     return self.pool.apply(_socket.getaddrinfo, args, kwargs)
[worker]     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/gevent/pool.py", line 161, in apply
[worker]     |     return self.spawn(func, *args, **kwds).get()
[worker]     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker]     |   File "src/gevent/event.py", line 330, in gevent._gevent_cevent.AsyncResult.get
[worker]     |   File "src/gevent/event.py", line 360, in gevent._gevent_cevent.AsyncResult.get
[worker]     |   File "src/gevent/event.py", line 348, in gevent._gevent_cevent.AsyncResult.get
[worker]     |   File "src/gevent/event.py", line 328, in gevent._gevent_cevent.AsyncResult._raise_exception
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/gevent/_compat.py", line 50, in reraise
[worker]     |     raise value.with_traceback(tb)
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/gevent/threadpool.py", line 173, in __run_task
[worker]     |     thread_result.set(func(*args, **kwargs))
[worker]     | ^^^^^^^^^^^^^^^
[worker]     | socket.gaierror: [Errno -2] Name or service not known
[worker]     | 
[worker]     | During handling of the above exception, another exception occurred:
[worker]     | 
[worker]     | Traceback (most recent call last):
[worker]     |   File "/app/api/.venv/bin/flask", line 8, in <module>
[worker]     |     sys.exit(main())
[worker]     |              ^^^^^^
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/flask/cli.py", line 1129, in main
[worker]     |     cli.main()
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 1082, in main
[worker]     |     rv = self.invoke(ctx)
[worker]     |          ^^^^^^^^^^^^^^^^
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 1697, in invoke
[worker]     |     return _process_result(sub_ctx.command.invoke(sub_ctx))
[worker]     |                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 1443, in invoke
[worker]     |     return ctx.invoke(self.callback, **ctx.params)
[worker]     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/click/core.py", line 788, in invoke
[worker]     |     return __callback(*args, **kwargs)
[worker]     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker]     |   File "/app/api/commands.py", line 585, in upgrade_db
[worker]     |     if lock.acquire(blocking=False):
[worker]     |        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/redis/lock.py", line 210, in acquire
[worker]     |     if self.do_acquire(token):
[worker]     |        ^^^^^^^^^^^^^^^^^^^^^^
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/redis/lock.py", line 226, in do_acquire
[worker]     |     if self.redis.set(self.name, token, nx=True, px=timeout):
[worker]     |        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/redis/commands/core.py", line 2333, in set
[worker]     |     return self.execute_command("SET", *pieces, **options)
[worker]     |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/redis/client.py", line 545, in execute_command
[worker]     |     conn = self.connection or pool.get_connection(command_name, **options)
[worker]     |                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/redis/connection.py", line 1074, in get_connection
[worker]     |     connection.connect()
[worker]     |   File "/app/api/.venv/lib/python3.12/site-packages/redis/connection.py", line 283, in connect
[worker]     |     raise ConnectionError(self._error_message(e))
[worker]     | redis.exceptions.ConnectionError: Error -2 connecting to redis:6379. Name or service not known.
[db]         | 2025-03-27 12:55:20.608 UTC [27] LOG:  checkpoint starting: time
[db]         | 2025-03-27 12:55:20.620 UTC [27] LOG:  checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.004 s, total=0.012 s; sync files=2, longest=0.003 s, average=0.002 s; distance=0 kB, estimate=0 kB
@dosubot dosubot bot added the 🐞 bug Something isn't working label Mar 27, 2025

dosubot bot commented Mar 27, 2025

Hello, @ZelinMa557! I'm here to help you with your Dify issue. I can assist with bug fixes, answer questions, and guide you on becoming a contributor.

I found a similar closed issue that might help with your Redis connection problem. The suggested solution is to ensure that the Redis container is running properly. Here are some steps you can follow:

  1. Verify that all Docker containers, including Redis, are running:

    docker compose ps
  2. Check the docker-compose.yaml file to ensure that the Redis service is correctly configured. Here is an example configuration for the Redis service:

    redis:
      image: redis:6-alpine
      restart: always
      volumes:
        - ./volumes/redis/data:/data
      command: redis-server --requirepass ${REDIS_PASSWORD:-difyai123456}
      healthcheck:
        test: [ "CMD", "redis-cli", "ping" ]
      # uncomment to expose redis port to host
      # ports:
      #   - "6379:6379"
  3. Make sure the REDIS_PASSWORD environment variable is set correctly in your .env file or Docker Compose environment variables.

  4. If you have exposed the Redis port to the host, ensure that the port mapping is correct and not conflicting with other services [1].
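As a quick reference for step 3, these are the relevant entries as they appear in docker/.env (values recalled from .env.example; verify against your own copy):

```shell
# docker/.env (fragment) — defaults as shipped in .env.example
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD=difyai123456
CELERY_BROKER_URL=redis://:difyai123456@redis:6379/1
```

Note that both the api and worker containers connect to the host name "redis", which matches the failing lookup in the traceback ("connecting to redis:6379. Name or service not known").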

Additionally, for the 'host not found in upstream' error in Nginx, ensure you are using Dify version 0.15.3 or later. Verify that the target container is running by executing:

docker ps -q | xargs -n 1 docker inspect --format '{{ .Name }}: {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'

Look for the lines corresponding to your web and API services, and update the Nginx configuration with the correct IP addresses. Restart the Nginx container or reload the configuration if necessary [2].


@ZelinMa557 ZelinMa557 changed the title fail to start dify with podman fail to start dify with podman; nginx exited; worker fails to connect to redis Mar 27, 2025
@ZelinMa557
Author

@dosu I think the worker fails to connect to redis:6379 because nginx exited. Nginx logs:

[nginx]      | 2025/03/27 12:50:23 [emerg] 11#11: host not found in upstream "api" in /etc/nginx/conf.d/default.conf:8
[nginx]      | nginx: [emerg] host not found in upstream "api" in /etc/nginx/conf.d/default.conf:8

This might explain why the worker fails to connect to Redis.
Do you have any idea?
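For context, the error points at the proxy_pass target that refers to the api service by its compose service name. The generated config looks roughly like this (a sketch from memory of Dify's nginx template, not the exact shipped file; the upstream ports are assumptions):

```nginx
# /etc/nginx/conf.d/default.conf (sketch; the real template may differ)
server {
    listen 80;
    location /console/api {
        proxy_pass http://api:5001;   # "api" must resolve via container DNS
    }
    location / {
        proxy_pass http://web:3000;
    }
}
```

nginx resolves static proxy_pass host names at configuration load time, so if "api" cannot be resolved it exits immediately with the [emerg] error seen above rather than retrying.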


dosubot bot commented Mar 28, 2025

The Nginx error indicates that it cannot resolve the hostname "api" in its configuration. This is likely due to network settings or service discovery issues. Here are steps to resolve this:

  1. Check Network Configuration: Ensure that the services are on the same Docker network. In your docker-compose.yaml, verify that the api service is connected to the same network as Nginx.

  2. Service Discovery: Docker services can communicate using their service names as hostnames if they are on the same network. Ensure that the api service is running and accessible. You can check this by running:

    docker compose ps
  3. Update Nginx Configuration: If the service name "api" is not resolving, you might need to use the actual IP address of the api container. You can find the IP address by running:

    docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_id>

    Replace <container_id> with the ID of the api container.

  4. Restart Nginx: After updating the configuration, restart the Nginx service to apply changes:

    docker compose restart nginx

These steps should help resolve the Nginx issue, which might also be affecting the worker's ability to connect to Redis [1].
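Steps 1 and 2 above can be captured in compose form; a minimal sketch (service and network names are illustrative assumptions) of nginx and api sharing one explicit network, which is what lets the name "api" resolve:

```yaml
# Sketch: both services on the same named network so "api" resolves by name.
services:
  api:
    networks: [dify]
  nginx:
    networks: [dify]
    depends_on:
      - api          # start ordering only; does not wait for api readiness
networks:
  dify:
    driver: bridge   # avoid internal: true here if container DNS misbehaves
```

If name resolution still fails under podman even on a shared network, that points at the runtime's DNS (e.g. aardvark-dns) rather than the compose file.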


@crazywoola crazywoola marked this as a duplicate of #17004 Mar 28, 2025