
Commit

Merge branch 'release/v4.1.0' into ft_process_docs
surapuramakhil authored Nov 22, 2024
2 parents 53ecb0a + 3772d6b commit 9c33f99
Showing 7 changed files with 106 additions and 17 deletions.
40 changes: 40 additions & 0 deletions .github/pull_request_template.md
@@ -0,0 +1,40 @@
- **Title**: [Descriptive title of the changes]
- **Description**: [Provide a clear description of the changes and their purpose]
- **Related Issues**: #[issue number]
- **Type**:
- [ ] Feature
- [ ] Bug Fix
- [ ] Refactor
- [ ] Documentation
- [ ] Other:

## Implementation Details

- [ ] Changes are focused and solve the stated problem
- [ ] Code follows project style guides
- [ ] Complex logic is documented
- [ ] No unnecessary complexity introduced

## Testing

- [ ] Unit tests added/updated
- [ ] Integration tests added/updated
- [ ] Manual testing completed
- [ ] All tests passing

## Documentation & Quality

- [ ] Project documentation updated
- [ ] Code reviewed for clarity
- [ ] Breaking changes clearly marked
- [ ] Dependencies documented

## Deployment Impact

- [ ] Database migrations required? [Yes/No]
- [ ] Configuration changes needed? [Yes/No]
- [ ] Breaking changes? [Yes/No]

## Additional Notes

[Add any other context or notes for reviewers]
18 changes: 13 additions & 5 deletions README.md
@@ -178,7 +178,7 @@ Auto_Jobs_Applier_AIHawk steps in as a game-changing solution to these challenges

This file contains sensitive information. Never share or commit this file to version control.

- `llm_api_key: [Your OpenAI or Ollama API key or Gemini API key]`
- `llm_api_key: [Your OpenAI or Ollama API key or Gemini API key or Groq API key or AI/ML API key]`
- Replace with your OpenAI API key for GPT integration
- To obtain an API key, follow the tutorial at: <https://medium.com/@lorenzozar/how-to-get-your-own-openai-api-key-f4d44e60c327>
- Note: You need to add credit to your OpenAI account to use the API. You can add credit by visiting the [OpenAI billing dashboard](https://platform.openai.com/account/billing).
@@ -188,6 +188,8 @@ This file contains sensitive information. Never share or commit this file to ver
OpenAI will update your account automatically, but it might take some time, ranging from a couple of hours to a few days.
You can find more about your organization limits on the [official page](https://platform.openai.com/settings/organization/limits).
- To obtain a Gemini API key, visit [Google AI for Devs](https://ai.google.dev/gemini-api/docs/api-key)
- To obtain a Groq API key, visit [Groq API](https://api.groq.com/v1)
- To obtain an AI/ML API key, visit [AI/ML API](https://aimlapi.com/app/)

### 2. work_preferences.yaml

@@ -265,19 +267,23 @@ This file defines your job search parameters and bot behavior. Each section cont
#### 2.1 config.py - Customize LLM model endpoint

- `LLM_MODEL_TYPE`:
- Choose the model type, supported: openai / ollama / claude / gemini
- Choose the model type, supported: openai / ollama / claude / gemini / groq / aiml
- `LLM_MODEL`:
- Choose the LLM model, currently supported:
- openai: gpt-4o
- ollama: llama2, mistral:v0.3
- claude: any model
- gemini: any model
- `LLM_API_URL`:
- Link of the API endpoint for the LLM model
- openai: <https://api.pawan.krd/cosmosrp/v1>
- groq: llama3-groq-70b-8192-tool-use-preview, llama3-groq-8b-8192-tool-use-preview, llama-3.1-70b-versatile, llama-3.1-8b-instant, llama-3.2-3b-preview, llama3-70b-8192, llama3-8b-8192, mixtral-8x7b-32768
- aiml: any model

- `llm_api_url`:
- Link of the API endpoint for the LLM model (only required for ollama)
- ollama: <http://127.0.0.1:11434/>
- claude: <https://api.anthropic.com/v1>
- gemini: <https://aistudio.google.com/app/apikey>
- groq: <https://api.groq.com/v1>
- aiml: <https://api.aimlapi.com/v2>
- Note: To run local Ollama, follow the guidelines here: [Guide to Ollama deployment](https://github.com/ollama/ollama)
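
A minimal sketch of the `config.py` values described above, assuming the option names from this README; the provider and model shown are placeholders, not recommendations:

```python
# Illustrative config.py values -- the option names mirror the README
# section above; the provider/model chosen here are placeholders.
LLM_MODEL_TYPE = "groq"                  # one of: openai / ollama / claude / gemini / groq / aiml
LLM_MODEL = "llama-3.1-8b-instant"       # must be a model the chosen provider serves
llm_api_url = "http://127.0.0.1:11434/"  # only consulted for ollama; ignored otherwise
```

With `LLM_MODEL_TYPE` set to anything other than `ollama`, the endpoint URL is resolved by the provider client itself.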

### 3. plain_text_resume.yaml
@@ -719,6 +725,8 @@ For further assistance, please create an issue on the [GitHub repository](https:
- Written by Rushi, [Linkedin](https://www.linkedin.com/in/rushichaganti/), support him by following.

- [OpenAI API Documentation](https://platform.openai.com/docs/)

- [AI/ML API Documentation](https://docs.aimlapi.com/)

### For Developers

2 changes: 2 additions & 0 deletions constants.py
@@ -64,8 +64,10 @@
LLM_API_URL = "llm_api_url"
LLM_MODEL = "llm_model"
OPENAI = "openai"
AIML = "aiml"
CLAUDE = "claude"
OLLAMA = "ollama"
GEMINI = "gemini"
GROQ = "groq"
HUGGINGFACE = "huggingface"
PERPLEXITY = "perplexity"
1 change: 1 addition & 0 deletions requirements.txt
@@ -6,6 +6,7 @@ jsonschema==4.23.0
jsonschema-specifications==2023.12.1
langchain==0.2.11
langchain-anthropic
langchain-groq==0.1.9
langchain-huggingface
langchain-community==0.2.10
langchain-core==0.2.36
5 changes: 4 additions & 1 deletion src/ai_hawk/job_manager.py
@@ -5,6 +5,7 @@
from itertools import product
from pathlib import Path
from turtle import color
from datetime import datetime

from inputimeout import inputimeout, TimeoutOccurred
from selenium.common.exceptions import NoSuchElementException
@@ -400,13 +401,15 @@ def write_to_file(self, job : Job, file_name, reason=None):
logger.debug(f"Writing job application result to file: {file_name}")
pdf_path = Path(job.resume_path).resolve()
pdf_path = pdf_path.as_uri()
current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
data = {
"company": job.company,
"job_title": job.title,
"link": job.link,
"job_recruiter": job.recruiter_link,
"job_location": job.location,
"pdf_path": pdf_path
"pdf_path": pdf_path,
"time": current_time
}

if reason:
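The change above stamps each record with a formatted timestamp. A self-contained sketch of the resulting record, with made-up job fields standing in for the real `Job` attributes; only the new `"time"` key reflects this commit:

```python
from datetime import datetime

# Stand-in for the record write_to_file now emits -- the company/title/
# link values are invented for illustration.
current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
data = {
    "company": "Acme Corp",
    "job_title": "Backend Engineer",
    "link": "https://example.com/jobs/123",
    "pdf_path": "file:///tmp/resume.pdf",
    "time": current_time,  # new field added by this commit
}
```

The `"%Y-%m-%d %H:%M:%S"` format yields a fixed-width, lexicographically sortable timestamp, which keeps the output files easy to sort and diff.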
43 changes: 39 additions & 4 deletions src/ai_hawk/llm/llm_manager.py
@@ -16,6 +16,7 @@
from langchain_core.prompt_values import StringPromptValue
from langchain_core.prompts import ChatPromptTemplate
from Levenshtein import distance
from loguru import logger

import ai_hawk.llm.prompts as prompts
from config import JOB_SUITABILITY_SCORE
@@ -30,6 +31,7 @@
EXPERIENCE_DETAILS,
FINISH_REASON,
GEMINI,
GROQ,
HUGGINGFACE,
ID,
INPUT_TOKENS,
@@ -38,7 +40,6 @@
JOB_DESCRIPTION,
LANGUAGES,
LEGAL_AUTHORIZATION,
LLM_MODEL_TYPE,
LOGPROBS,
MODEL,
MODEL_NAME,
@@ -69,9 +70,9 @@
TOTAL_TOKENS,
USAGE_METADATA,
WORK_PREFERENCES,
AIML,
)
from src.job import Job
from src.logging import logger
import config as cfg

load_dotenv()
@@ -82,6 +83,16 @@ class AIModel(ABC):
def invoke(self, prompt: str) -> str:
pass

class GroqAIModel(AIModel):
def __init__(self, api_key: str, llm_model: str):
from langchain_groq import ChatGroq
self.model = ChatGroq(model=llm_model, api_key=api_key,
temperature=0.4)

def invoke(self, prompt: str) -> BaseMessage:
logger.debug("Invoking Groq API")
response = self.model.invoke(prompt)
return response

class OpenAIModel(AIModel):
def __init__(self, api_key: str, llm_model: str):
@@ -97,6 +108,24 @@ def invoke(self, prompt: str) -> BaseMessage:
return response


class AIMLModel(AIModel):
def __init__(self, api_key: str, llm_model: str):
from langchain_openai import ChatOpenAI

self.base_url = "https://api.aimlapi.com/v2"
self.model = ChatOpenAI(
model_name=llm_model,
openai_api_key=api_key,
temperature=0.7,
base_url=self.base_url,
)

def invoke(self, prompt: str) -> BaseMessage:
logger.debug("Invoking AIML API")
response = self.model.invoke(prompt)
return response


class ClaudeModel(AIModel):
def __init__(self, api_key: str, llm_model: str):
from langchain_anthropic import ChatAnthropic
@@ -195,12 +224,16 @@ def _create_model(self, config: dict, api_key: str) -> AIModel:

if llm_model_type == OPENAI:
return OpenAIModel(api_key, llm_model)
elif llm_model_type == AIML:
return AIMLModel(api_key, llm_model)
elif llm_model_type == CLAUDE:
return ClaudeModel(api_key, llm_model)
elif llm_model_type == OLLAMA:
return OllamaModel(llm_model, llm_api_url)
elif llm_model_type == GEMINI:
return GeminiModel(api_key, llm_model)
elif llm_model_type == GROQ:
return GroqAIModel(api_key, llm_model)
elif llm_model_type == HUGGINGFACE:
return HuggingFaceModel(api_key, llm_model)
elif llm_model_type == PERPLEXITY:
@@ -213,7 +246,8 @@ def invoke(self, prompt: str) -> str:


class LLMLogger:
def __init__(self, llm: Union[OpenAIModel, OllamaModel, ClaudeModel, GeminiModel]):

def __init__(self, llm: AIModel):
self.llm = llm
logger.debug(f"LLMLogger successfully initialized with LLM: {llm}")

@@ -325,7 +359,8 @@ def log_request(prompts, parsed_reply: Dict[str, Dict]):


class LoggerChatModel:
def __init__(self, llm: Union[OpenAIModel, OllamaModel, ClaudeModel, GeminiModel]):

def __init__(self, llm: AIModel):
self.llm = llm
logger.debug(f"LoggerChatModel successfully initialized with LLM: {llm}")

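The `_create_model` dispatch extended by this commit can be mirrored in isolation with a registry dict instead of the if/elif chain. Everything below is a hypothetical, self-contained sketch: `FakeModel` stands in for the real provider wrappers and is not part of the repository:

```python
# Hypothetical mirror of the _create_model dispatch: map the
# llm_model_type string to a constructor via a registry dict.
class FakeModel:
    def __init__(self, api_key: str, llm_model: str):
        self.api_key = api_key
        self.llm_model = llm_model

MODEL_REGISTRY = {
    name: FakeModel
    for name in ("openai", "aiml", "claude", "ollama", "gemini", "groq")
}

def create_model(llm_model_type: str, api_key: str, llm_model: str) -> FakeModel:
    try:
        return MODEL_REGISTRY[llm_model_type](api_key, llm_model)
    except KeyError:
        raise ValueError(f"Unsupported model type: {llm_model_type}")

model = create_model("groq", "dummy-key", "llama-3.1-8b-instant")
```

A registry keeps adding a provider (as this commit does for groq and aiml) to a one-line change, and pairs naturally with the `AIModel`-typed constructor parameters introduced in `LLMLogger` and `LoggerChatModel` above.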
14 changes: 7 additions & 7 deletions src/ai_hawk/llm/prompts.py
@@ -262,17 +262,17 @@
- Do not include any introductions, explanations, or additional information.
- The letter should be formatted into paragraphs.
## My resume:
```
{resume}
```
## Company Name:
{company}
## Job Description:
```
{job_description}
```
## My resume:
```
{resume}
```
"""

numeric_question_template = """
@@ -432,10 +432,10 @@
is_relavant_position_template = """
Evaluate whether the provided resume meets the requirements outlined in the job description. Determine if the candidate is suitable for the job based on the information provided.
Job Description: {job_description}
Resume: {resume}
Job Description: {job_description}
Instructions:
1. Extract the key requirements from the job description, identifying hard requirements (must-haves) and soft requirements (nice-to-haves).
2. Identify the relevant qualifications from the resume.
