
Add evaluate RAG with LlamaIndex #253

Open

wants to merge 2 commits into main
Conversation

shilpakancharla
Collaborator

No description provided.


@github-actions bot added the status:awaiting review (PR awaiting review from a maintainer) and component:examples (Issues/PR referencing examples folder) labels on Aug 13, 2024
@@ -0,0 +1,550 @@
@markmcd (Member) commented on Aug 15, 2024

High-level comment: Cool notebook! I like the eval part especially. It'd be awesome to deep-dive on that (as another task, another day).



@markmcd (Member) commented on Aug 15, 2024

Is it worth adding a link or two here that can point users off to relevant pre-reading? e.g. make LlamaIndex link to their site, and make RAG link to something with background info on RAG?



@markmcd (Member) commented on Aug 15, 2024

Clear the outputs for the pip install cell. Also, for llama-index specifically, can you pin the version? It's <1.0, so they can (and have) made breaking API changes; pinning would help keep the guide working.
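A hedged sketch of what a pinned install could look like (the exact version number below is illustrative, not a recommendation; pin whichever version the notebook was actually tested against):

```shell
# Pin llama-index so pre-1.0 breaking API changes don't silently break the guide.
# The version shown is illustrative only.
pip install -q "llama-index==0.10.68"
```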



@markmcd (Member) commented on Aug 15, 2024

Line #6: `"Response": [response.response],`

According to the type annotations, `response` is a string, so it has no `.response` property. Ditto for `response.source_nodes`.
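A minimal sketch of the mismatch this comment points at, using an illustrative stand-in dataclass (the real object comes from LlamaIndex's query engine; all names here are hypothetical):

```python
# If the helper is annotated `-> str`, callers cannot use .response or
# .source_nodes. Returning the structured object keeps both available.
from dataclasses import dataclass, field

@dataclass
class Response:  # illustrative stand-in for LlamaIndex's response object
    response: str                  # the answer text
    source_nodes: list = field(default_factory=list)  # retrieved context

def query_engine_stub(question: str) -> Response:
    # Placeholder for the notebook's real query_engine.query(question) call.
    return Response(response=f"Answer to: {question}")

resp = query_engine_stub("What is RAG?")
print(resp.response)            # prints "Answer to: What is RAG?"
print(len(resp.source_nodes))   # prints 0
```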



@markmcd (Member) commented on Aug 15, 2024

Line #8: `"Evaluation Result": [eval_result.feedback],`

`eval_result` is a string too, so it has no `.feedback` property.
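For context, a hedged sketch of why `.feedback` fails on a plain string: LlamaIndex evaluators generally hand back a structured evaluation result rather than a bare string, modeled here with an illustrative stand-in dataclass (names are assumptions, not the library's API):

```python
# A string-typed eval_result loses the structured fields; an object keeps them.
from dataclasses import dataclass

@dataclass
class EvaluationResult:  # illustrative stand-in, not the real LlamaIndex class
    passing: bool
    score: float
    feedback: str

def evaluate_stub(response_text: str) -> EvaluationResult:
    # Placeholder for the notebook's real evaluator call.
    return EvaluationResult(passing=True, score=1.0,
                            feedback="Grounded in retrieved context.")

result = evaluate_stub("some answer")
print(result.feedback)  # works because result is an object, not a str
```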



@markmcd (Member) commented on Aug 15, 2024

Line #12: `eval_df = eval_df.style.set_properties(`

Could you use the Colab formatter instead?

```python
from google import colab
colab.data_table.enable_dataframe_formatter()
```


@markmcd (Member) commented on Aug 15, 2024

Line #9: `display_eval_df(question, llm_response, eval_result)`

I don't think we want to create 1-row dataframes in a loop; this would look increasingly strange as N gets bigger. Can you assemble a single dataframe instead?
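The suggestion above can be sketched as follows (the column names mirror the notebook's; the data is placeholder, and the real loop would call the query engine and evaluator):

```python
# Collect one row per question, then build a single DataFrame at the end,
# instead of displaying a 1-row DataFrame on every loop iteration.
import pandas as pd

# Placeholder data standing in for the notebook's questions/answers/verdicts.
questions = ["Q1", "Q2", "Q3"]
answers = ["A1", "A2", "A3"]
verdicts = ["Pass", "Pass", "Fail"]

rows = []
for q, a, v in zip(questions, answers, verdicts):
    rows.append({"Query": q, "Response": a, "Evaluation Result": v})

eval_df = pd.DataFrame(rows)  # one DataFrame covering all N rows
print(eval_df.shape)  # prints (3, 3)
```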



@markmcd added the status:awaiting response (Awaiting a response from the author) label and removed the status:awaiting review (PR awaiting review from a maintainer) label on Sep 12, 2024
Labels
component:examples (Issues/PR referencing examples folder)
status:awaiting response (Awaiting a response from the author)
2 participants