Fix ChatQnA UI issue in set_env.sh on Xeon #1754


Status: Closed
louie-tsai wants to merge 2 commits from the chatqna_ui_fix branch.

Conversation

louie-tsai (Collaborator) commented Apr 3, 2025

Description

set_env.sh sets the wrong port. Remove the port setting and direct users to access the UI through nginx at http://IP:80 instead of port 5173. A minimal sketch of the change is below.
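
For context, a minimal sketch of what the change amounts to (the port numbers come from this PR; the exact patch wording is assumed):

```bash
# set_env.sh (sketch): drop the redundant UI/nginx port export;
# nginx already serves the UI on the default HTTP port 80
# export NGINX_PORT=80    # removed as redundant

# README (sketch): point users at the nginx front end
#   before: http://${host_ip}:5173   (UI dev-server port)
#   after:  http://${host_ip}        (nginx on port 80)
```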

Issues

Issue #1658.

Type of change

  • Bug fix (non-breaking change which fixes an issue)

Dependencies

N/A

Tests

Tested on Xeon.

github-actions bot commented Apr 3, 2025

Dependency Review

✅ No vulnerabilities or license issues found.

Scanned Files

None

By default, the UI runs on port 5173 internally.

If you choose conversational UI, use this URL: `http://{host_ip}:5174`
To access the frontend, open the following URL in your browser: `http://{host_ip}`
Contributor:

This should be set to the NGINX_PORT. You can tell the user to check it in the compose.yaml file, or have them set it as an environment variable, as the current README does:

export NGINX_PORT=80

@xiguiw also mentioned the port may change to 8080, so we want to keep it fluid to avoid misleading the reader. A sketch of the configurable form is below.
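
A sketch of that configurable form (the NGINX_PORT name is from the comment above; the surrounding README wording is assumed):

```bash
# keep the nginx port configurable, defaulting to 80 when unset
export NGINX_PORT=${NGINX_PORT:-80}

# then direct users to:
#   http://${host_ip}:${NGINX_PORT}
```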

louie-tsai (Collaborator, Author):

My two cents: better to stay with the default HTTP port 80 to keep things simple. All UIs should use the default port 80, and users only need to figure out the IP info; there's no need to complicate things by setting a different port for the UI. I also removed the redundant export NGINX_PORT=80 in set_env.sh. Why would we change it to 8080? Port 80 is straightforward.

louie-tsai (Collaborator, Author):

Left some feedback.

Contributor:

[slightly off-topic]

> my two cents, good to just stay with default http port 80 to make thing simple. all UI should use default port 80, and users just need to figure out the IP info. don't need to make thing complicated by setting a different port for UI.

Hmm... why just for the UI? Couldn't OPEA standardize on using the standard HTTP port for all services by default (as e.g. LLM engines do), and drop the need for all those random SERVICE_PORT variables (unless the user explicitly wants to configure them to non-default values)? Something like the sketch below.
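
To illustrate the suggestion, a hypothetical compose fragment (the service names are taken from the no_proxy list in this PR's diff; the host-port defaults are assumptions, not OPEA's actual values):

```yaml
services:
  tei-embedding-service:
    ports:
      - "${EMBEDDING_HOST_PORT:-6006}:80"   # container side always the standard HTTP port 80
  tei-reranking-service:
    ports:
      - "${RERANK_HOST_PORT:-8808}:80"      # only the host-side port stays configurable
```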

louie-tsai (Collaborator, Author):

Ditto on that. @xiguiw, hope that makes sense to you.

xiguiw (Collaborator) commented Apr 5, 2025:

@louie-tsai No. Look into #1382; it's a requirement from Kubernetes (K8S), where you have no permission to use port 80 at all.

eero-t (Contributor) commented Apr 7, 2025:

> @louie-tsai No. Look into #1382, it's requirement from K8S, where you have no permission to set port 80 at all.

@xiguiw I don't see any mention of Kubernetes in that ticket, and in Kubernetes it's completely fine for a service to use port 80 (also for service ingress from outside the cluster).

alexsin368 (Contributor) left a comment:

Comment on the NGINX port.

@@ -22,7 +22,6 @@ export TELEMETRY_ENDPOINT=http://$JAEGER_IP:4318/v1/traces
export no_proxy="$no_proxy,chatqna-xeon-ui-server,chatqna-xeon-backend-server,dataprep-redis-service,tei-embedding-service,retriever,tei-reranking-service,tgi-service,vllm-service,jaeger,prometheus,grafana,node-exporter,$JAEGER_IP"

export LLM_ENDPOINT_PORT=8010
xiguiw (Collaborator):

LLM_SERVER_PORT is used in compose.yaml

  • LLM_SERVER_PORT=${LLM_SERVER_PORT:-80}

louie-tsai (Collaborator, Author):

@xiguiw this line means "if LLM_SERVER_PORT is not set, use 80". In short, just using the default port 80 is good enough; there's no need to set LLM_SERVER_PORT again. See the expansion demo below.
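
For reference, how the `${VAR:-default}` expansion behaves in shell (a standalone demo, not project code):

```bash
unset LLM_SERVER_PORT
echo "${LLM_SERVER_PORT:-80}"    # prints 80: the variable is unset, so the default applies

export LLM_SERVER_PORT=9000
echo "${LLM_SERVER_PORT:-80}"    # prints 9000: a set value overrides the default
```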

xiguiw (Collaborator) commented Apr 4, 2025:

There are two separate networks: host and container.

80 is the container port; you can see four ports set to 80 (nginx, embedding, rerank, llm).

LLM_SERVER_PORT maps the llm container port (80) to the LLM_SERVER_PORT host port; we need it and should make it configurable here.

# either vLLM or TGI service
curl http://${host_ip}:${LLM_SERVER_PORT}/v1/chat/completions \
  -X POST \
  -d '{"model": "meta-llama/Meta-Llama-3-8B-Instruct", "messages": [{"role": "user", "content": "What is Deep Learning?"}], "max_tokens":17}' \
  -H 'Content-Type: application/json'

The host port should be configurable in compose.yaml, but it is fixed at 9009 there.

I'll submit a PR on Monday to make it configurable and map it to the right host port, roughly the shape sketched below.
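
A sketch of the configurable mapping described here (the 9009 default comes from this thread; the service name and exact compose.yaml layout are assumptions):

```yaml
services:
  vllm-service:
    ports:
      # host_port:container_port - the container listens on 80,
      # while the host side defaults to 9009 and can be overridden
      - "${LLM_SERVER_PORT:-9009}:80"
```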

louie-tsai (Collaborator, Author):

@xiguiw I still don't agree. All ports inside compose.yaml should be the ports of the docker instances, because you want to stay with a fixed IP and port, which are managed as the docker name and its port.

The curl command SHOULD NOT use LLM_SERVER_PORT, which is the docker port; you should have an LLM_HOST_PORT instead. Don't mix the docker port and the host port in the same variable (see the sketch below).

9009, which is a host port, should not be in compose.yaml.
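
A sketch of the naming separation proposed above (LLM_HOST_PORT is the hypothetical variable suggested in this comment; the values are illustrative):

```yaml
services:
  vllm-service:
    ports:
      # LLM_HOST_PORT: exposed on the host, used by curl from outside;
      # 80: the fixed port the service binds inside its container
      - "${LLM_HOST_PORT:-9009}:80"
```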

xiguiw (Collaborator) commented Apr 5, 2025:

> @xiguiw I still don't agree on that. all port inside compose.yaml should be the port of docker instances. because you want to stay will fixed IP and port which are managed as docker name and its port.
>
> the curl command SHOULD NOT use the LLM_SERVER_PORT which is the docker_port. you should have LLM_HOST_POST instead. don't mess up between docker port and host port in same variable.

It depends on how you define LLM_SERVER_PORT. Why would we set docker container ports on the host? We need only the host port.

> 9009 which is host port should not be in compose.yaml

What I'll do is separate the host port from the docker container port; they're mixed now.

> all port inside compose.yaml should be the port of docker instances

No. Host ports are in compose.yaml, too. If they're not there, how do you map the container port to the host?

We need to make the docker-container-to-host port mapping configurable in compose.yaml in case of host port conflicts.

@louie-tsai louie-tsai requested review from alexsin368 and xiguiw April 3, 2025 23:28
@louie-tsai louie-tsai force-pushed the chatqna_ui_fix branch 4 times, most recently from abac608 to a5f11c1 Compare April 4, 2025 17:09
@@ -22,12 +22,10 @@ export TELEMETRY_ENDPOINT=http://$JAEGER_IP:4318/v1/traces
export no_proxy="$no_proxy,chatqna-xeon-ui-server,chatqna-xeon-backend-server,dataprep-redis-service,tei-embedding-service,retriever,tei-reranking-service,tgi-service,vllm-service,jaeger,prometheus,grafana,node-exporter,$JAEGER_IP"

export LLM_ENDPOINT_PORT=8010
export LLM_SERVER_PORT=9000
xiguiw (Collaborator) commented Apr 7, 2025:

Please hold this PR.

We will solve this issue completely in PR #1763.

Please refer to #1739.

louie-tsai (Collaborator, Author):

You might still need to change the README and set_env.sh; #1763 doesn't cover the changes in this PR.

xiguiw (Collaborator) commented Apr 8, 2025:

export LLM_SERVER_PORT=9000 is for FaqGen; removing it breaks FaqGen.

Yes, the documentation is a problem, and even the FaqGen flow has changed. Maybe a new PR to fix the documentation?

louie-tsai (Collaborator, Author):

Added a modification to separate the FAQGen LLM port from ChatQnA's; a sketch of the idea is below.
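
A hypothetical sketch of that separation in set_env.sh (the FAQGEN_LLM_SERVER_PORT name is invented for illustration; only LLM_SERVER_PORT=9000 appears in this thread):

```bash
# keep the port FaqGen depends on, under a dedicated (hypothetical) name
export FAQGEN_LLM_SERVER_PORT=9000
# ChatQnA no longer exports its own LLM server port;
# compose.yaml falls back to its default when the variable is unset
```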

xiguiw (Collaborator) commented Apr 7, 2025:

Please hold this PR and refer to #1739.

It's not a UI connection issue, but an issue with the macro-service accessing the micro-service. The setting should be considered for both FaqGen and ChatQnA.

@joshuayao joshuayao linked an issue Apr 8, 2025 that may be closed by this pull request
@joshuayao joshuayao modified the milestone: v1.3 Apr 8, 2025
@louie-tsai louie-tsai requested a review from xiguiw April 8, 2025 22:01
louie-tsai (Collaborator, Author):
Closed this since it is fixed.

@louie-tsai louie-tsai closed this Apr 15, 2025
Linked issue: [Bug] ChatQnA UI server connection issue
5 participants