
Commit 760b946
Merge pull request #4821 from pathwaycom/berke/feat-apply-hf-template
feat: local model prompt formatting in HF
GitOrigin-RevId: ad198be40cd5373dddf7049bf0ad597b338d5854
berkecanrizai authored and Manul from Pathway committed Oct 20, 2023
1 parent dc1f11e commit 760b946
Showing 1 changed file with 10 additions and 1 deletion.
llm_app/model_wrappers/huggingface_wrapper/pipelines.py (10 additions, 1 deletion)

@@ -111,5 +111,14 @@ def __call__(self, text, **kwargs):

         max_new_tokens = kwargs.pop("max_new_tokens", self.max_new_tokens)
 
-        output = self.pipeline(text, max_new_tokens=max_new_tokens, **kwargs)
+        messages = [
+            {
+                "role": "system",
+                "content": "You are a helpful virtual assistant that only responds in english clearly and precisely.",
+            },
+            {"role": "user", "content": text},
+        ]
+        prompt = self.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+
+        output = self.pipeline(prompt, max_new_tokens=max_new_tokens, **kwargs)
         return output[0]["generated_text"]
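
The change swaps raw-string prompting for the tokenizer's chat template: instead of passing the user's text straight to the text-generation pipeline, the wrapper now wraps it in a system/user message list and renders it with apply_chat_template, so chat-tuned local models receive the role markers and special tokens they were fine-tuned on. The snippet below is a minimal standalone sketch of the same pattern outside the wrapper class, assuming a transformers version that provides tokenizer.apply_chat_template; the model id is an illustrative assumption, not taken from the commit, and any model whose tokenizer ships a chat template should behave similarly.

# Minimal sketch of chat-template prompting with Hugging Face transformers.
# The model id below is a hypothetical choice for illustration only.
from transformers import AutoTokenizer, pipeline

model_id = "HuggingFaceH4/zephyr-7b-beta"  # assumption: any chat-templated model works
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a helpful virtual assistant."},
    {"role": "user", "content": "Summarize what a chat template does."},
]

# tokenize=False returns the rendered prompt as a string;
# add_generation_prompt=True appends the tokens that cue the model
# to begin its reply (e.g. "<|assistant|>" for Zephyr-style templates).
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

generator = pipeline("text-generation", model=model_id, tokenizer=tokenizer)
output = generator(prompt, max_new_tokens=64)
print(output[0]["generated_text"])

Rendering through the template rather than hand-concatenating role tags keeps the wrapper model-agnostic: each tokenizer carries its own template, so the same code serves differently formatted chat models.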
