Commit
set default max_tokens to 1024
mattf committed May 8, 2024
1 parent 014e87b commit a791549
Showing 1 changed file with 1 addition and 1 deletion.
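For context on what the one-line change does: before the commit, `max_tokens` had no default (an unset `Optional` field is effectively `None` unless the caller supplies a value); afterwards it defaults to 1024. The behavior of such a field default can be sketched with a plain dataclass — a stand-in for illustration only; the real `ChatNVIDIA` class uses Pydantic's `Field`:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ChatParams:
    # Stand-in for the Pydantic model: after the commit, max_tokens
    # defaults to 1024 instead of being left unset (None).
    max_tokens: Optional[int] = 1024


print(ChatParams().max_tokens)                # 1024 when the caller omits it
print(ChatParams(max_tokens=256).max_tokens)  # 256: an explicit value wins
```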
```diff
@@ -134,7 +134,7 @@ class ChatNVIDIA(nvidia_ai_endpoints._NVIDIAClient, BaseChatModel):
     infer_endpoint: str = Field("{base_url}/chat/completions")
     model: str = Field(_default_model, description="Name of the model to invoke")
     temperature: Optional[float] = Field(description="Sampling temperature in [0, 1]")
-    max_tokens: Optional[int] = Field(description="Maximum # of tokens to generate")
+    max_tokens: Optional[int] = Field(1024, description="Maximum # of tokens to generate")
     top_p: Optional[float] = Field(description="Top-p for distribution sampling")
     seed: Optional[int] = Field(description="The seed for deterministic results")
    bad: Optional[Sequence[str]] = Field(description="Bad words to avoid (cased)")
```

Check failure on line 137 in libs/ai-endpoints/langchain_nvidia_ai_endpoints/chat_models.py (GitHub Actions / cd libs/ai-endpoints / make lint #3.11):

    Ruff (E501): langchain_nvidia_ai_endpoints/chat_models.py:137:89: E501 Line too long (90 > 88)
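The E501 failure comes from the new line being 90 characters once the 4-space class-body indentation is counted, over Ruff's default 88-character limit. A minimal sketch of one way it could be resolved — wrapping the `Field(...)` call — with the line lengths checked in plain Python; this is an illustration, not necessarily the fix that actually landed:

```python
# The flagged line, including its 4-space class-body indent, is 90 chars,
# which exceeds Ruff's default 88-character limit (E501).
flagged = (
    "    max_tokens: Optional[int] = "
    'Field(1024, description="Maximum # of tokens to generate")'
)

# Wrapping the Field(...) call keeps every resulting line under the limit.
wrapped = (
    "    max_tokens: Optional[int] = Field(\n"
    '        1024, description="Maximum # of tokens to generate"\n'
    "    )"
)

assert len(flagged) == 90  # matches Ruff's report: "Line too long (90 > 88)"
assert all(len(line) <= 88 for line in wrapped.splitlines())
print(wrapped)
```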
