The Security Findings Recommender System is designed to assist in identifying, managing, and reporting security vulnerabilities. Leveraging AI-powered analysis, the system generates detailed reports that categorize findings and provide actionable recommendations.
- Automate Vulnerability Reporting: Streamline the process of generating vulnerability reports from raw data.
- Enhance Findings with AI: Use AI to categorize findings and suggest solutions.
- Improve Security Management: Help users prioritize and address security vulnerabilities based on severity.
- AI-powered categorization and solution generation
- Customizable categories and solutions
- Detailed single finding solutions and aggregated vulnerability multi-finding reports
- API interface for easy integration
For a detailed architectural overview and component descriptions, refer to the Project Architecture Documentation.
The system consists of three main layers:
- Data Layer: Manages core data structures such as `VulnerabilityReport`, `Finding`, and `Solution`.
- Aggregation Layer: Handles the grouping and processing of findings using classes like `FindingBatcher`, `FindingGrouper`, and `AgglomerativeClusterer`.
- LLM (Language Model) Layer: Provides natural language processing capabilities through services such as `OLLAMAService` and `OpenAIService`.
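As a rough illustration of the Data Layer, the sketch below shows how these three structures might relate to each other. The field names and methods here are illustrative assumptions, not the project's actual API:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Solution:
    """A remediation suggestion attached to a finding (fields assumed)."""
    short_description: str
    long_description: Optional[str] = None

@dataclass
class Finding:
    """A single security finding with severity and an optional fix (fields assumed)."""
    title: str
    severity: int  # e.g. 0 (informational) .. 10 (critical)
    category: Optional[str] = None
    solution: Optional[Solution] = None

@dataclass
class VulnerabilityReport:
    """Collects findings and orders them for reporting."""
    findings: List[Finding] = field(default_factory=list)

    def add_finding(self, finding: Finding) -> None:
        self.findings.append(finding)

    def sorted_by_severity(self) -> List[Finding]:
        # Most severe findings first, so reports lead with what matters.
        return sorted(self.findings, key=lambda f: f.severity, reverse=True)
```

A report built this way can be rendered in severity order, which matches the project's goal of helping users prioritize vulnerabilities.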
For more details, see the System Overview.
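To give a flavor of what the Aggregation Layer does, here is a minimal sketch of the kind of grouping a class like `FindingGrouper` performs before findings are summarized into multi-finding reports. The dict-based finding shape and the function name are assumptions for illustration, not the project's real interface:

```python
from collections import defaultdict
from typing import Dict, List

def group_findings(findings: List[dict]) -> Dict[str, List[dict]]:
    """Group findings by category so each group can be summarized
    in a single multi-finding report section.

    Each finding is a dict with 'title' and 'category' keys
    (a stand-in for the project's Finding objects).
    """
    groups: Dict[str, List[dict]] = defaultdict(list)
    for finding in findings:
        groups[finding.get("category", "uncategorized")].append(finding)
    return dict(groups)
```

The real aggregation layer goes further (e.g. clustering similar findings with `AgglomerativeClusterer`), but the grouping idea is the same.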
- Copy the `.env.docker.example` file to `.env.docker` and fill in the required values.
- For running without Docker, ensure the correct URL for the Ollama API is set in your `.env` file: `OLLAMA_URL=http://localhost:11434`.
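For example, a minimal `.env` file for a local Ollama instance would contain just the variable mentioned above (any further required variables are listed in `.env.docker.example`):

```shell
# .env — URL of the local Ollama API (Ollama's default port)
OLLAMA_URL=http://localhost:11434
```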
For detailed setup instructions, refer to the Prerequisites Documentation.
- Clone the Repository:

  ```shell
  git clone <repository-url>
  cd <repository-directory>
  ```
Docker (details in Docker Installation Guide)
- Build and run the application using Docker:

  ```shell
  docker compose up -d --build
  ```

  Ollama is not built by default. To include Ollama, enable the `ollama` profile:

  ```shell
  docker compose --profile ollama up -d --build
  ```
Local Development (details in Local Development Guide)
- Set Up Environment Variables: Ensure the `.env` file is configured as per the prerequisites.
- Install Dependencies:

  ```shell
  pipenv install
  ```

- Run the Application:

  ```shell
  cd src
  pipenv shell
  python app.py
  ```
- API: Available at `http://localhost:8000`
- Dashboard: Available at `http://localhost:3000`
- GET /api/v1/tasks/: Retrieve all tasks.
- DELETE /api/v1/tasks/{task_id}: Delete a specific task by ID.
- POST /api/v1/recommendations/: Retrieve recommendations.
- POST /api/v1/recommendations/aggregated: Retrieve aggregated recommendations.
- POST /api/v1/upload/: Upload data for processing.
- GET /: Root endpoint.
For a complete list of available routes, refer to the API Routes Documentation.
```shell
curl -X POST http://localhost:8000/api/v1/recommendations/ -d '{}' -H "Content-Type: application/json"
curl http://localhost:8000/api/v1/tasks/1/status
```
For more usage examples, refer to the Usage Documentation.
To run the code without Docker, follow these steps:
- Install Dependencies:

  ```shell
  pipenv install
  ```

- Activate Virtual Environment:

  ```shell
  pipenv shell
  ```

- Run the Application:

  ```shell
  cd src && python app.py
  ```
This repository contains several detailed guides to help you get started and understand the system better. Have a look at the Guides Directory.
For common questions and troubleshooting, refer to the FAQs Documentation.
This project is licensed under the MIT License. See the LICENSE file for details.
This project was created in the context of TUM's practical course Digital Product Innovation and Development, offered by fortiss, in the summer semester of 2024. The task was suggested by Siemens, who also supervised the project.
Please refer to the FAQ.
To contact fortiss or Siemens, please refer to their official websites.