- Problem Statement
- Solution
- Technical Implementation
- Security & Scalability Features
- User Interface
- Arabic Language Support
- Future Improvements
- Development Setup
- Testing
- Problem Statement
- Solution
- Architecture
- Core Workflow Components
- Technical Implementation
- Security & Scalability Features
- Required Setup
- Future Development Areas
- Development Notes
- Throughout, we use Llama models hosted on Groq for performance reasons; this can be switched to an on-premises deployment (see the sketch below).
- The open-source models execute everything, including tool calling.
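A minimal initialization sketch, assuming the `langchain-groq` integration; the model name is illustrative and can be swapped for any Groq-hosted Llama model or an on-premises equivalent:

```python
from langchain_groq import ChatGroq

# Requires GROQ_API_KEY in the environment (see the .env note in the setup section).
# The model name is an example; any Groq-hosted Llama model can be substituted.
llm = ChatGroq(model="llama3-70b-8192", temperature=0)

print(llm.invoke("Summarize this opinion request in one sentence.").content)
```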
Government departments currently handle Internal Opinion Requests through a manual, fragmented process that lacks:
- Standardized analysis workflows
- Context preservation across requests
- Efficient document retrieval
- Consistent evaluation criteria
- Cross-department knowledge sharing
Problem.1.Demo.3.mov
```bash
# Clone the repository
git clone https://github.com/tofaramususa/Aqlan.git

# Upgrade pip to the latest version
pip install --upgrade pip

# Uninstall all packages or create a new environment
pip uninstall -r requirements.txt -y

# Install dependencies
pip install -r requirements.txt

# Set up environment variables:
# the .env file already contains the API keys; they will be deleted after 24 hours

# Go into "1. Aqlan Request Agent - Internal Opinion Request" and run
streamlit run agentWorkflow.py
```
We've developed a context-aware system that implements an intelligent workflow for processing opinion requests using LangGraph and LangChain frameworks.
```mermaid
stateDiagram-v2
    [*] --> Router
    Router --> VectorDataStore: Context Search
    Router --> WebSearch: External Research
    VectorDataStore --> DocumentGrader
    WebSearch --> DocumentGrader
    DocumentGrader --> Generator
    Generator --> HallucinationCheck
    HallucinationCheck --> CriteriaGrader
    CriteriaGrader --> [*]: Meets Criteria
    CriteriaGrader --> Generator: Retry Needed
```
```python
router_instructions = """
Determine routing based on request context:
- Vectorstore: Internal documents and policies
- Web search: Supporting external research
Return JSON with datasource decision
"""
```
- Vector Store Implementation:

```python
from langchain_community.vectorstores import SKLearnVectorStore
from langchain_nomic.embeddings import NomicEmbeddings

vectorstore = SKLearnVectorStore.from_documents(
    documents=doc_splits,
    embedding=NomicEmbeddings(
        model="nomic-embed-text-v1.5",
        inference_mode="local",
    ),
)
retriever = vectorstore.as_retriever(k=3)
```
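A brief usage sketch; the query string is illustrative:

```python
# Illustrative query: fetch the top-3 most similar chunks (k=3 above).
docs = retriever.invoke("What is the policy on cross-department data sharing?")
for doc in docs:
    print(doc.page_content[:200])
```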
```python
# Document relevance assessment
doc_grader_instructions = """
Grade document relevance based on:
- Keyword matching
- Semantic relevance
- Context alignment
"""

# Hallucination prevention
hallucination_grader_instructions = """
Verify generation is grounded in:
- Retrieved documents
- Department context
- Historical decisions
"""

# Criteria-based evaluation
answer_grader_instructions = """
Evaluate response against:
- Question relevance
- Supporting evidence
- Format compliance
"""
```
```python
workflow = StateGraph(GraphState)

# Node definition
workflow.add_node("retrieve", retrieve)
workflow.add_node("grade_documents", grade_documents)
workflow.add_node("generate", generate)

# Edge configuration
workflow.add_conditional_edges(
    "grade_documents",
    decide_to_generate,
    {
        "websearch": "websearch",
        "generate": "generate",
    },
)
```
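The snippet above omits the entry point, the remaining edges, and the compile step; a hedged sketch of that wiring (node and edge names beyond those shown are assumptions):

```python
from langgraph.graph import END

# Assumed wiring: the conditional edge above references a "websearch" node,
# so one is registered here; web_search is a node function like the others.
workflow.add_node("websearch", web_search)
workflow.set_entry_point("retrieve")
workflow.add_edge("retrieve", "grade_documents")
workflow.add_edge("websearch", "generate")
workflow.add_edge("generate", END)

graph = workflow.compile()
result = graph.invoke({"question": "Opinion request on data retention", "max_retries": 3})
```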
```text
langchain-community
langchain-nomic
langgraph
scikit-learn
streamlit
tiktoken
tavily-python
```
```python
from typing import List
from typing_extensions import TypedDict

class GraphState(TypedDict):
    question: str         # Input request
    generation: str       # Generated response
    web_search: str       # Search decision
    max_retries: int      # Generation attempts
    answers: int          # Response count
    loop_step: int        # Workflow position
    documents: List[str]  # Retrieved contexts
```
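Each node reads this state and returns only the keys it updates, which LangGraph merges back into the shared state; a minimal sketch of the `retrieve` node under that assumption, reusing the retriever defined earlier:

```python
def retrieve(state: GraphState) -> dict:
    # Sketch of a node body: fetch context for the current question and
    # return a partial dict for LangGraph to merge into GraphState.
    docs = retriever.invoke(state["question"])
    return {"documents": [d.page_content for d in docs]}
```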
- Local model deployment via Ollama (see the sketch after this list)
- Controlled API access for web searches
- On-premises vector store
- Document access controls
- Graph-based workflow architecture
- Modular component design
- Pluggable model support
- Extensible tool integration
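A minimal sketch of the Ollama option noted above, assuming the chat integration from `langchain-community` and a locally pulled model:

```python
from langchain_community.chat_models import ChatOllama

# Fully on-premises inference: no request data leaves the machine running
# the Ollama server. The model name is illustrative.
local_llm = ChatOllama(model="llama3", temperature=0)
print(local_llm.invoke("Reply with OK if you are reachable.").content)
```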
Built with Streamlit for practical deployment:
```python
import streamlit as st

def main():
    st.title("Opinion Request Processing System")

    # Input section
    request_text = st.text_area("Enter Opinion Request")
    department = st.selectbox("Select Department",
                              ["Legal", "Finance", "Strategic"])

    if st.button("Process Request"):
        # Initialize workflow
        workflow = initialize_graph()

        # Process with progress tracking
        with st.spinner("Processing..."):
            result = workflow.run({
                "question": request_text,
                "max_retries": 3,
                "department": department,
            })

        # Display results
        st.subheader("Analysis Summary")
        st.write(result.summary)
        st.subheader("Supporting Documents")
        st.write(result.documents)
        st.subheader("Recommendations")
        st.write(result.recommendations)
```
- Integration with Arabic-compatible embeddings
- Bilingual document processing
- RTL interface support
- Enhanced Arabic NLP capabilities
- Additional document format support
- Extended model options
- Improved visualization tools
- Advanced routing logic
- Extended criteria frameworks
- Enhanced context retrieval
- Improved feedback loops
```python
# Example test case
inputs = {
    "question": "Analysis request for educational technology adoption",
    "max_retries": 3,
    "department": "Strategic",
}

# Run workflow
results = graph.invoke(inputs)
```
This implementation focuses on:
- Context-aware processing
- Criteria-based evaluation
- Secure document handling
- Scalable architecture
- Arabic language support
- Practical deployment
The system is designed to be extended with additional tools and capabilities while maintaining security and performance requirements.
Government departments face a critical challenge in research efficiency:
- Teams spend excessive time manually researching key topics
- Research efforts are often duplicated across departments
- Information must be gathered from multiple sources (reports, statistics, benchmarks)
- Current processes lack systematic analysis of existing documentation
- Need for secure, context-aware analysis that leverages existing knowledge
We've developed a research agent that streamlines document analysis and recommendation generation through a structured workflow. The system emphasizes context awareness and criteria-based analysis while maintaining security through open-source components.
Problem.2.Demo.2.mov
```bash
# Clone the repository
git clone https://github.com/tofaramususa/Aqlan.git

# Upgrade pip to the latest version
pip install --upgrade pip

# Uninstall all packages or create a new environment
pip uninstall -r requirements.txt -y

# Install dependencies
pip install -r requirements.txt

# Set up environment variables:
# the .env file already contains the API keys; they will be deleted after 24 hours

# Go into "2. Aqlan Research Agent - Research and Benchmarking" and run
streamlit run researchUI.py
```
```mermaid
graph TD
    A[User Query] --> B[Oracle/Router]
    B --> C[RAG Search]
    B --> D[Web Search]
    B --> E[ArXiv Fetch]
    C --> F[Analysis Loop]
    D --> F
    E --> F
    F --> G[Final Answer Generation]
    G --> H[Structured Report]

    subgraph "State Management"
        I[Input State]
        J[Chat History]
        K[Intermediate Steps]
    end
```
- Context-Aware Router (Oracle)
  - Uses an LLM to determine the optimal research path (see the routing sketch after this list)
  - Maintains state through intermediate steps
  - Prevents redundant tool usage
  - Tracks chat history for context preservation
- Research Tools

  ```python
  tools = [
      rag_search_filter,  # Document-specific search
      rag_search,         # Knowledge base search
      fetch_arxiv,        # Research paper retrieval
      web_search,         # General information
      final_answer,       # Report generation
  ]
  ```

- Analysis Criteria Framework
  - Summarizes the initial request
  - Deep analysis using the provided content
  - Identifies relevant existing documents
  - Comparative analysis with similar cases
  - Evidence-based recommendations
  - Source attribution
- Report Generation Structure

  ```python
  @tool("final_answer")
  def final_answer(
      introduction: str,    # Question context
      research_steps: str,  # Analysis process
      main_body: str,       # Primary findings
      conclusion: str,      # Key recommendations
      sources: str,         # Reference attribution
  ):
      ...
  ```
```python
from typing_extensions import TypedDict
from langgraph.graph import StateGraph

class AgentState(TypedDict):
    input: str                # User query
    chat_history: list        # Conversation context
    intermediate_steps: list  # Research progress

# Initialize with AgentState
graph = StateGraph(AgentState)

# Core nodes
graph.add_node("oracle", run_oracle)
graph.add_node("rag_search", run_tool)
graph.add_node("web_search", run_tool)
graph.add_node("final_answer", run_tool)

# Dynamic routing
graph.add_conditional_edges(
    source="oracle",
    path=router,
)
```
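Compiling and invoking the graph might then look like this sketch; the entry point and the tool-to-oracle return edges are assumptions consistent with the architecture diagram:

```python
from langgraph.graph import END

# Assumed wiring: research tools report back to the oracle for the next
# decision, and the final answer terminates the run.
graph.set_entry_point("oracle")
for tool_name in ["rag_search", "web_search"]:
    graph.add_edge(tool_name, "oracle")
graph.add_edge("final_answer", END)

runnable = graph.compile()
out = runnable.invoke({
    "input": "Benchmark e-government research adoption",
    "chat_history": [],
    "intermediate_steps": [],
})
```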
- Security Implementation
  - Local vector storage for document analysis
  - API-based web search without data retention
  - Compatible with open-source models for on-premises deployment
- Scalable Architecture
  - Graph-based design allows tool addition
  - State management supports complex workflows
  - Modular tool integration system
- Core Dependencies:

  ```text
  langchain==0.2.5
  langgraph==0.1.1
  langchain-core==0.2.9
  semantic-router==0.0.48
  ```
- Arabic Language Support
  - Integration with Arabic-capable models
  - Bilingual document processing
- Enhanced Context Management
  - Department-specific analysis criteria
  - Extended document history tracking
  - Improved source correlation
- Tool Expansion
  - Additional data source integrations
  - Specialized analysis modules
  - Custom criteria frameworks
The system is built on LangGraph for workflow management and uses a combination of RAG and tool-based approaches for comprehensive research analysis. The architecture emphasizes:
- Context preservation through state management
- Criteria-based analysis workflows
- Secure, scalable information processing
- Source attribution and verification
- Tool extensibility through graph architecture
This project is part of the AI government solution hackathon, focusing on research automation and context-aware analysis.