Summary Evaluator not showing in Langsmith UI #23665
Replies: 1 comment
Hello @whatever-afk! I'm here to help you with any bugs, questions, or contributions. Let's tackle this issue together while we wait for a human maintainer to assist us.

To ensure the `precision` metric appears, configure the summary evaluator:

```python
from langchain.smith.evaluation.config import RunEvalConfig, EvaluatorType

# Create a configuration for the summary evaluator
summary_evaluator_config = RunEvalConfig(
    evaluators=[
        {
            "evaluator_type": EvaluatorType.SUMMARY,
            "metrics": ["precision"]
        }
    ]
)

# Use this configuration in your evaluate function
results = evaluate(
    lambda inputs: label_query(inputs["text"]),
    data=dataset_name,
    evaluators=[correct_label],
    summary_evaluators=[summary_evaluator_config],
    experiment_prefix="Toxic Queries",
)
```

This configuration specifies that the summary evaluator should report the `precision` metric.

Regarding the `embed_query` method, it is defined as:

```python
def embed_query(self, text: str) -> List[float]:
    if self.query_instruction:
        text = self.query_instruction + text
    return self._embed_text([text])[0]
```

The `embed_query` method prepends `query_instruction` to the query text before embedding, when an instruction is set. Make sure your configuration and function names are correctly set up to see the desired metrics in the LangSmith UI.
Checked other resources
Commit to Help
Example Code
Description
I am trying to replicate the example in the documentation for the `summary_evaluator`. I can only see the `accuracy` metric, not `precision`, in the LangSmith UI. Also, `label_query` from the documentation is not found; there is only a `label_text` function. Am I missing something?
System Info
System Information
Package Information
Packages not installed (Not Necessarily a Problem)
The following packages were not found: