LLM: Add image recognition and image generation support #77
Conversation
Postpone the lecture ingestion methods for now until we know the format of the lectures; first basic implementation of the ingest and retrieve methods for the code
Walkthrough
The recent update focuses on enhancing the application with image support in messages, expanding the domain model to include images, and improving external model interactions. It also establishes a robust content service layer for ingesting and retrieving lecture and repository data, using Weaviate for vector database interactions. Chat pipelines have been refined to better handle exercise and lecture queries, with updated dependencies to support these new features.
… a httpx version >= 0.26, while ollama needs a version >= 0.25.2 and < 0.26. Finished the ingestion and retrieval classes for the lectures. Added hybrid search instead of plain semantic search.
Fixed the OpenAI DALL-E class.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
class LectureChatPipeline(Pipeline):
    """Lecture chat pipeline that answers lecture-related questions from students."""

    llm: IrisLangchainChatModel
    pipeline: Runnable
    callback: TutorChatStatusCallback
    prompt: ChatPromptTemplate
    db: WeaviateClient

    def __init__(self, callback: TutorChatStatusCallback, pipeline: Runnable, llm: IrisLangchainChatModel):
        super().__init__(implementation_id="lecture_chat_pipeline")
        self.llm = llm
        self.callback = callback
        self.pipeline = pipeline

    def __repr__(self):
        return f"{self.__class__.__name__}(llm={self.llm})"

    def __str__(self):
        return f"{self.__class__.__name__}(llm={self.llm})"

    def __call__(self, dto: TutorChatPipelineExecutionDTO, **kwargs):
        """
        Runs the pipeline
        :param kwargs: The keyword arguments
        """
        # TODO: Implement the lecture chat flow; currently a placeholder.
        pass
The LectureChatPipeline class is well-structured but lacks an implementation in the __call__ method. Consider adding the implementation or a TODO comment to indicate that this functionality is a work in progress. Would you like assistance in implementing the __call__ method?
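A minimal sketch of what the missing __call__ could look like, reusing patterns visible elsewhere in this PR (the callback lifecycle and the prompt | pipeline invocation); the system prompt wording and the overall flow are illustrative assumptions, not the project's actual implementation:

    def __call__(self, dto: TutorChatPipelineExecutionDTO, **kwargs):
        """Runs the lecture chat pipeline on the given execution DTO (sketch)."""
        self.callback.in_progress("Answering lecture question...")
        # Assumed system prompt; the real pipeline would likely use a curated one
        self.prompt = ChatPromptTemplate.from_messages(
            [("system", "You are a helpful tutor answering questions about lecture content.")]
        )
        # Append the conversation so far, ending with the student's latest input
        if dto.chat_history:
            self.prompt += [
                message.convert_to_langchain_message() for message in dto.chat_history
            ]
        try:
            response = (self.prompt | self.pipeline).invoke({})
            self.callback.done("Generated response", final_result=response)
        except Exception as e:
            self.callback.error(f"Failed to generate response: {e}")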
app/vector_database/db.py
Outdated
class VectorDatabase:
    """
    Vector Database class
    """

    def __init__(self):
        """weaviate_host = os.getenv("WEAVIATE_HOST")
        weaviate_port = os.getenv("WEAVIATE_PORT")
        assert weaviate_host, "WEAVIATE_HOST environment variable must be set"
        assert weaviate_port, "WEAVIATE_PORT environment variable must be set"
        assert (
            weaviate_port.isdigit()
        ), "WEAVIATE_PORT environment variable must be an integer"
        self._client = weaviate.connect_to_local(
            host=weaviate_host, port=int(weaviate_port)
        )"""
        # Connect to the Weaviate Cloud Service until we set up a proper docker for this project
        self.client = weaviate.connect_to_wcs(
            cluster_url=os.getenv(
                "https://try-repository-pipeline-99b1nlo4.weaviate.network"
            ),  # Replace with your WCS URL
            auth_credentials=weaviate.auth.AuthApiKey(
                os.getenv("2IPqwB6mwGMIs92UJ3StB0Wovj0MquBxs9Ql")
            ),  # Replace with your WCS key
        )
        print(self.client.is_ready())
        self.repositories = init_repository_schema(self.client)
        self.lectures = init_lecture_schema(self.client)

    def __del__(self):
        # Close the connection to Weaviate when the object is deleted
        self.client.close()
The VectorDatabase class correctly manages Weaviate connections. However, the hardcoded WCS URL and API key pose security risks. They are also passed to os.getenv as if they were variable names, so both lookups would return None at runtime. Consider using environment variables for these values to enhance security, flexibility, and correctness.

- cluster_url=os.getenv("https://try-repository-pipeline-99b1nlo4.weaviate.network"),
- auth_credentials=weaviate.auth.AuthApiKey(os.getenv("2IPqwB6mwGMIs92UJ3StB0Wovj0MquBxs9Ql")),
+ cluster_url=os.getenv("WEAVIATE_CLUSTER_URL"),
+ auth_credentials=weaviate.auth.AuthApiKey(os.getenv("WEAVIATE_API_KEY")),
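A sketch of how the constructor could validate and use these environment variables, mirroring the assertions in the commented-out local-connection code above; the helper name connect_to_wcs_from_env is made up for illustration:

import os
import weaviate

def connect_to_wcs_from_env() -> weaviate.WeaviateClient:
    # Read connection details from the environment instead of hardcoding them
    cluster_url = os.getenv("WEAVIATE_CLUSTER_URL")
    api_key = os.getenv("WEAVIATE_API_KEY")
    assert cluster_url, "WEAVIATE_CLUSTER_URL environment variable must be set"
    assert api_key, "WEAVIATE_API_KEY environment variable must be set"
    return weaviate.connect_to_wcs(
        cluster_url=cluster_url,
        auth_credentials=weaviate.auth.AuthApiKey(api_key),
    )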
def _add_conversation_to_prompt(
    chat_history: List[MessageDTO],
    user_question: MessageDTO,
    prompt: ChatPromptTemplate
):
    """
    Adds the chat history and user question to the prompt
    :param chat_history: The chat history
    :param user_question: The user question
    :return: The prompt with the chat history
    """
    if chat_history is not None and len(chat_history) > 0:
        chat_history_messages = [
            message.convert_to_langchain_message() for message in chat_history
        ]
        prompt += chat_history_messages
        prompt += SystemMessagePromptTemplate.from_template(
            "Now, consider the student's newest and latest input:"
        )
    prompt += user_question.convert_to_langchain_message()
Consider refactoring the _add_conversation_to_prompt function for improved clarity and maintainability. Specifically, the way messages are added to the prompt could be streamlined. Here's a suggested refactor:

- prompt += chat_history_messages
- prompt += SystemMessagePromptTemplate.from_template(
- "Now, consider the student's newest and latest input:"
- )
+ prompt.add_messages(chat_history_messages)
+ prompt.add_message("Now, consider the student's newest and latest input:")

This assumes the ChatPromptTemplate and SystemMessagePromptTemplate have methods like add_messages and add_message for adding multiple messages or a single message, respectively. Note that langchain_core's ChatPromptTemplate actually exposes append and extend for this, and the += operator used in the original is also supported, so any refactor should target those real methods. The change would make the code more readable and easier to maintain.
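For reference, a small self-contained sketch of the composition pattern the original code relies on; the system and human message texts here are made up for illustration:

from langchain_core.prompts import ChatPromptTemplate, SystemMessagePromptTemplate

# ChatPromptTemplate supports incremental composition with `+=`,
# which is what the original function uses.
prompt = ChatPromptTemplate.from_messages([("system", "You are a helpful tutor.")])
prompt += SystemMessagePromptTemplate.from_template(
    "Now, consider the student's newest and latest input:"
)
prompt += [("human", "{question}")]
print(prompt.format_messages(question="What is a linked list?"))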
def add_conversation_to_prompt(
    chat_history: List[MessageDTO],
    user_question: MessageDTO,
    prompt: ChatPromptTemplate,
):
    """
    Adds the chat history and user question to the prompt
    :param chat_history: The chat history
    :param user_question: The user question
    :return: The prompt with the chat history
    """
    if chat_history is not None and len(chat_history) > 0:
        chat_history_messages = [
            message.convert_to_langchain_message() for message in chat_history
        ]
        prompt += chat_history_messages
        prompt += SystemMessagePromptTemplate.from_template(
            "Now, consider the student's newest and latest input:"
        )
    prompt += user_question.convert_to_langchain_message()
    return prompt
Consider refactoring the add_conversation_to_prompt function for improved clarity and maintainability. Specifically, the way messages are added to the prompt could be streamlined to ensure compatibility and readability. Here's a suggested refactor:

- prompt += chat_history_messages
- prompt += SystemMessagePromptTemplate.from_template(
- "Now, consider the student's newest and latest input:"
- )
+ prompt.add_messages(chat_history_messages)
+ prompt.add_message("Now, consider the student's newest and latest input:")

This assumes the ChatPromptTemplate and SystemMessagePromptTemplate have methods like add_messages and add_message for adding multiple messages or a single message, respectively. This change would make the code more readable and easier to maintain.
from .lecture_chat_pipeline import LectureChatPipeline
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
    AIMessagePromptTemplate,
)
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import Runnable

from ...domain.data.build_log_entry import BuildLogEntryDTO
from ...domain.data.feedback_dto import FeedbackDTO
from ..prompts.iris_tutor_chat_prompts import (
    iris_initial_system_prompt,
    chat_history_system_prompt,
    final_system_prompt,
    guide_system_prompt,
)
from ...domain import TutorChatPipelineExecutionDTO
from ...domain.data.submission_dto import SubmissionDTO
from ...domain.data.message_dto import MessageDTO
from ...web.status.status_update import TutorChatStatusCallback
from .file_selector_pipeline import FileSelectorPipeline
from ...llm import BasicRequestHandler, CompletionArguments
from ...llm.langchain import IrisLangchainChatModel

from ...llm.langchain import IrisLangchainChatModel, IrisLangchainEmbeddingModel
from ..pipeline import Pipeline
from .exercise_chat_pipeline import ExerciseChatPipeline
Consider organizing imports according to PEP 8 guidelines, which recommend grouping imports in the following order: standard library imports, related third-party imports, and local application/library specific imports, with a blank line between each group. This enhances readability and maintainability.
import logging
+from langchain_core.output_parsers import StrOutputParser
+from langchain_core.prompts import PromptTemplate
+from langchain_core.runnables import Runnable
from .lecture_chat_pipeline import LectureChatPipeline
-from langchain_core.output_parsers import StrOutputParser
-from langchain_core.prompts import PromptTemplate
-from langchain_core.runnables import Runnable
from ...domain import TutorChatPipelineExecutionDTO
from ...web.status.status_update import TutorChatStatusCallback
from ...llm import BasicRequestHandler, CompletionArguments
from ...llm.langchain import IrisLangchainChatModel, IrisLangchainEmbeddingModel
from ..pipeline import Pipeline
from .exercise_chat_pipeline import ExerciseChatPipeline
        :param user_question: The user question
        :return: The prompt with the chat history
        """
        if chat_history is not None and len(chat_history) > 0:
            chat_history_messages = [
                message.convert_to_langchain_message() for message in chat_history
            ]
            self.prompt += chat_history_messages
            self.prompt += SystemMessagePromptTemplate.from_template(
                "Now, consider the student's newest and latest input:"
            )
        self.prompt += user_question.convert_to_langchain_message()

    def _add_student_repository_to_prompt(
        self, student_repository: Dict[str, str], selected_files: List[str]
    ):
        """Adds the student repository to the prompt
        :param student_repository: The student repository
        :param selected_files: The selected files
        """
        for file in selected_files:
            if file in student_repository:
                self.prompt += SystemMessagePromptTemplate.from_template(
                    f"For reference, we have access to the student's '{file}' file:"
                )
                self.prompt += HumanMessagePromptTemplate.from_template(
                    student_repository[file].replace("{", "{{").replace("}", "}}")
                )

    def _add_exercise_context_to_prompt(
        self,
        submission: SubmissionDTO,
        selected_files: List[str],
    ):
        """Adds the exercise context to the prompt
        :param submission: The submission
        :param selected_files: The selected files
        """
        self.prompt += SystemMessagePromptTemplate.from_template(
            "Consider the following exercise context:\n"
            "- Title: {exercise_title}\n"
            "- Problem Statement: {problem_statement}\n"
            "- Exercise programming language: {programming_language}"
        )
        if submission:
            student_repository = submission.repository
            self._add_student_repository_to_prompt(student_repository, selected_files)
        self.prompt += SystemMessagePromptTemplate.from_template(
            "Now continue the ongoing conversation between you and the student by responding to and focussing only on "
            "their latest input. Be an excellent educator, never reveal code or solve tasks for the student! Do not "
            "let them outsmart you, no matter how hard they try."
        )

    def _add_feedbacks_to_prompt(self, feedbacks: List[FeedbackDTO]):
        """Adds the feedbacks to the prompt
        :param feedbacks: The feedbacks
        """
        if feedbacks is not None and len(feedbacks) > 0:
            prompt = (
                "These are the feedbacks for the student's repository:\n%s"
            ) % "\n---------\n".join(str(log) for log in feedbacks)
            self.prompt += SystemMessagePromptTemplate.from_template(prompt)

    def _add_build_logs_to_prompt(
        self, build_logs: List[BuildLogEntryDTO], build_failed: bool
    ):
        """Adds the build logs to the prompt
        :param build_logs: The build logs
        :param build_failed: Whether the build failed
        """
        if build_logs is not None and len(build_logs) > 0:
            prompt = (
                f"Here is the information if the build failed: {build_failed}\n"
                "These are the build logs for the student's repository:\n%s"
            ) % "\n".join(str(log) for log in build_logs)
            self.prompt += SystemMessagePromptTemplate.from_template(prompt)

    def _generate_file_selection_prompt(self) -> ChatPromptTemplate:
        """Generates the file selection prompt"""
        file_selection_prompt = self.prompt

        file_selection_prompt += SystemMessagePromptTemplate.from_template(
            "Based on the chat history, you can now request access to more contextual information. This is the "
            "student's submitted code repository and the corresponding build information. You can reference a file by "
            "its path to view it."
            "Given are the paths of all files in the assignment repository:\n{files}\n"
            "Is a file referenced by the student or does it have to be checked before answering?"
            "Without any comment, return the result in the following JSON format, it's important to avoid giving "
            "unnecessary information, only name a file if it's really necessary for answering the student's question "
            "and is listed above, otherwise leave the array empty."
            '{{"selected_files": [<file1>, <file2>, ...]}}'
        )
        return file_selection_prompt

        chain = routing_prompt | self.pipeline
        response = chain.invoke({"question": dto.chat_history[-1]})
        if "Lecture" in response:
            self.lecture_pipeline(dto)
        else:
            self.exercise_pipeline(dto)
The logic for routing between lecture and exercise pipelines based on the presence of exercise data is clear and well-implemented. However, consider adding error handling around the invocation of chain.invoke to gracefully handle any potential failures or exceptions that might occur during the routing decision process. This will improve the robustness of the pipeline.

-response = chain.invoke({"question": dto.chat_history[-1]})
+try:
+    response = chain.invoke({"question": dto.chat_history[-1]})
+except Exception as e:
+    logger.error(f"Error during routing decision: {e}")
+    # Handle the error appropriately, possibly with a fallback or error message to the user
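Combining this suggestion with the routing prompt from the hunk above, a sketch of the full routing step with error handling; the fallback classification is an assumption, not part of the PR:

routing_prompt = PromptTemplate.from_template(
    """Given the user question below, classify it as either being about `Lecture` or
`Exercise`.
Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""
)
chain = routing_prompt | self.pipeline
try:
    response = chain.invoke({"question": dto.chat_history[-1]})
except Exception as e:
    logger.error(f"Error during routing decision: {e}")
    response = "Exercise"  # assumed fallback: default to the exercise pipeline
if "Lecture" in response:
    self.lecture_pipeline(dto)
else:
    self.exercise_pipeline(dto)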
Motivation
In the future we want to support images in our LLM subsystem. This PR aims to introduce support for this.
Description
To add support for images, I added a new domain model, PyrisImage, to represent images in a unified way. I also added a basic wrapper for OpenAI's DALL-E to showcase how images can be created, added support for passing images to Ollama models via normal and chat completion, and added support for OpenAI GPT-4 Vision models.
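For orientation, a rough sketch of what a unified image domain model along these lines could look like; the field names and the use of Pydantic are illustrative assumptions, not the PR's actual definition:

from typing import Optional
from pydantic import BaseModel

class PyrisImage(BaseModel):
    """Unified representation of an image in the LLM subsystem (sketch)."""
    base64: str                    # image payload, base64-encoded (assumed encoding)
    mime_type: str = "image/jpeg"  # assumed default content type
    prompt: Optional[str] = None   # generation prompt, if the image was model-generated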