
Local LLM chat incl. history using PromptTemplates. Key errors, formatting question & best practice #26974

Closed · Answered by mxmahmoud
mxmahmoud asked this question in Q&A

This is how I did it for now:


        # Load the model; model_type selects the matching tokenizer entry below.
        self.llm, self.model_type, self.config = self.load_llm(
            model_path=model_path,
            model_dict_key=model_dict_key,
            config_json_path=config_json_path,
            dtype=dtype,
        )

        # Read the chat-template delimiter tokens for this model family from
        # the config (Llama 3 style tokens shown in the inline comments).
        self.system_prompt = self.config["LLM"]["system_prompt"]
        self.tokenizer_config = self.config["LLM"]["tokenizer"][self.model_type]
        self.sys_beg = self.tokenizer_config["sys_beg"]       # "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        self.sys_end = self.tokenizer_config["sys_end"]       # "<|eot_id|>"
        self.ai_n_beg = self.tokenizer_config["ai_n_beg"]     # "<|start_header_id|>assis…
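
Below is a minimal sketch of how those delimiter tokens could be plugged into a LangChain PromptTemplate to render a Llama-3-style prompt that carries chat history. The USR_BEG, USR_END and AI_END values (and the config keys they would come from) are assumptions that mirror the sys_*/ai_n_* naming above, since the snippet is truncated before showing them; adapt them to your own config.

from langchain_core.prompts import PromptTemplate

# Delimiter tokens as in the config above; in practice these would be read
# from tokenizer_config. The user/assistant-end tokens are assumed here.
SYS_BEG = "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
SYS_END = "<|eot_id|>"
USR_BEG = "<|start_header_id|>user<|end_header_id|>\n\n"        # assumed key: usr_beg
USR_END = "<|eot_id|>"                                          # assumed key: usr_end
AI_BEG  = "<|start_header_id|>assistant<|end_header_id|>\n\n"   # ai_n_beg
AI_END  = "<|eot_id|>"                                          # assumed key: ai_n_end

# One template for the whole turn: system prompt, rendered history, new
# question, then the assistant header so the model continues from there.
prompt = PromptTemplate.from_template(
    "{sys_beg}{system_prompt}{sys_end}{history}{usr_beg}{question}{usr_end}{ai_beg}"
)

def render_history(turns):
    """Render past (user, assistant) pairs with the same delimiters."""
    parts = []
    for user_msg, ai_msg in turns:
        parts.append(f"{USR_BEG}{user_msg}{USR_END}")
        parts.append(f"{AI_BEG}{ai_msg}{AI_END}")
    return "".join(parts)

text = prompt.format(
    sys_beg=SYS_BEG,
    system_prompt="You are a helpful assistant.",
    sys_end=SYS_END,
    history=render_history([("Hi!", "Hello! How can I help?")]),
    usr_beg=USR_BEG,
    question="Summarize our conversation so far.",
    usr_end=USR_END,
    ai_beg=AI_BEG,
)
# `text` is the fully formatted prompt string to send to the local LLM;
# append each new (question, answer) pair to the history list afterwards.

Keeping the delimiters in the config and the template string generic means swapping model families only requires changing the tokenizer entry, not the prompt-building code.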

Answer selected by mxmahmoud