This project builds a complete machine learning pipeline to predict the survival of Titanic passengers.
- Python 3.7+
- Required Libraries: pandas, scikit-learn, joblib, fastapi, uvicorn, gradio, ngrok
- Navigate to the Part1_Data_Cleaning folder.
- Run the script: `python data_cleaning.py`
  This will clean the data and save it as `cleaned_titanic.csv` (a sketch of typical cleaning steps follows this list).
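For reference, the cleaning stage usually amounts to something like the sketch below. The input file name, the dropped columns, and the imputation choices are assumptions based on the standard Kaggle Titanic dataset, not necessarily what `data_cleaning.py` does.

```python
# Hypothetical cleaning sketch; column names assume the standard Kaggle
# Titanic dataset and the raw input file name is an assumption.
import pandas as pd

df = pd.read_csv("titanic.csv")  # assumed raw input file

# Drop identifier-like columns with little predictive value
df = df.drop(columns=["PassengerId", "Name", "Ticket", "Cabin"])

# Impute missing values
df["Age"] = df["Age"].fillna(df["Age"].median())
df["Embarked"] = df["Embarked"].fillna(df["Embarked"].mode()[0])

# Encode categorical features numerically
df["Sex"] = df["Sex"].map({"male": 0, "female": 1})
df = pd.get_dummies(df, columns=["Embarked"], drop_first=True)

df.to_csv("cleaned_titanic.csv", index=False)
```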
- Navigate to the Part2_Model_Building folder.
- Run the script: `python model_building.py`
  This will train the model and display evaluation metrics (a training sketch follows this list).
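The training step looks roughly like the sketch below. The choice of `RandomForestClassifier` and the particular metrics shown are assumptions and may not match what `model_building.py` actually uses.

```python
# Hypothetical training/evaluation sketch; the model choice and metrics are
# assumptions and may differ from model_building.py.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("cleaned_titanic.csv")
X = df.drop(columns=["Survived"])
y = df["Survived"]

# Hold out 20% of the data for evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
```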
- Navigate to the Part3_Model_IO folder.
- Run the script: `python model_io.py`
  This will save and load the model using joblib (see the sketch after this list).
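The joblib round trip itself is short. The sketch below trains a throwaway model so the example is self-contained; the model file name `titanic_model.joblib` is an assumption.

```python
# Hypothetical save/load sketch with joblib; the model file name is an
# assumption, and the quick fit only makes the example self-contained.
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("cleaned_titanic.csv")
X, y = df.drop(columns=["Survived"]), df["Survived"]
model = RandomForestClassifier(random_state=42).fit(X, y)

joblib.dump(model, "titanic_model.joblib")    # persist the fitted model
loaded = joblib.load("titanic_model.joblib")  # restore it from disk
print(loaded.predict(X.head()))               # behaves like the original model
```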
- Navigate to the Part4_API folder.
- Run the FastAPI server: `uvicorn api:app --reload`
  The server will start at http://127.0.0.1:8000 (an endpoint sketch follows this list).
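A serving endpoint along these lines is sketched below. The request fields, their encoding, and the model file name are assumptions; the real `api.py` may expect different inputs.

```python
# Hypothetical FastAPI sketch; field names, encoding, and the model file
# name are assumptions and may not match api.py.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("titanic_model.joblib")  # assumed model file


class Passenger(BaseModel):
    Pclass: int
    Sex: int      # assumed encoding: 0 = male, 1 = female
    Age: float
    SibSp: int
    Parch: int
    Fare: float


@app.post("/predict")
def predict(passenger: Passenger):
    # Feature order must match the columns the model was trained on
    features = [[
        passenger.Pclass, passenger.Sex, passenger.Age,
        passenger.SibSp, passenger.Parch, passenger.Fare,
    ]]
    return {"survived": int(model.predict(features)[0])}
```

Once the server is running, FastAPI's interactive documentation at http://127.0.0.1:8000/docs is a convenient way to try the endpoint.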
- Navigate to the Part5_Deployment folder.
- Run the Gradio app: `python app.py` (a UI sketch follows this list).
- Follow the instructions to deploy the UI on Hugging Face Spaces.
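The Gradio UI can be as simple as the sketch below. The input fields, the encoding, and the model file name are assumptions; the actual `app.py` may instead send requests to the FastAPI backend rather than loading the model directly.

```python
# Hypothetical Gradio UI sketch; inputs, encoding, and the model file name
# are assumptions and may differ from app.py.
import gradio as gr
import joblib

model = joblib.load("titanic_model.joblib")  # assumed model file


def predict_survival(pclass, sex, age, sibsp, parch, fare):
    sex_encoded = 0 if sex == "male" else 1
    # Feature order must match the columns the model was trained on
    features = [[pclass, sex_encoded, age, sibsp, parch, fare]]
    return "Survived" if model.predict(features)[0] == 1 else "Did not survive"


demo = gr.Interface(
    fn=predict_survival,
    inputs=[
        gr.Number(label="Pclass"),
        gr.Radio(["male", "female"], label="Sex"),
        gr.Number(label="Age"),
        gr.Number(label="SibSp"),
        gr.Number(label="Parch"),
        gr.Number(label="Fare"),
    ],
    outputs=gr.Textbox(label="Prediction"),
)

demo.launch()
```

Calling `demo.launch(share=True)` instead would expose a temporary public link for quick testing before deploying to Hugging Face Spaces.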
- Deploy the FastAPI backend on Hugging Face Spaces.
- Deploy the Gradio UI on Hugging Face Spaces.
- data_cleaning.py: Script to clean and preprocess the Titanic dataset.
- model_building.py: Script to build, train, and evaluate the machine learning model.
- model_io.py: Script to save and load the trained model using joblib.
- api.py: FastAPI application for serving the trained model.
- app.py: Gradio-based user interface for interacting with the deployed model.
Author: Bilal Akhtar. Feel free to reach out if you have any questions or feedback!