Individuals on exploration treks can become trapped or lost in unfamiliar locations and require timely, effective Search and Rescue (SAR) operations. Traditional SAR methods face challenges such as rugged terrain, limited accessibility, and resource constraints. This MSc dissertation project proposes a framework that combines drones for real-time surveillance with AI for object detection, scene description, and first-responder support. Drones capture live video feeds, which are processed by AI to identify human subjects and generate detailed scene descriptions, while generative AI creates checklists for first responders. The system also stores and manages case data and provides a user-friendly web interface for accessing essential information.
The methodology depicted in the image outlines a drone-based SAR system integrated with AI and cloud technologies. When a new case is opened, a drone equipped with a camera, thermal sensors, and a microphone streams real-time data over a WebSocket API to an object detection model hosted on an AWS EC2 instance. If a human subject is detected, the data is processed further; otherwise, the stream ends after three minutes. The system then calls a weather API to incorporate current conditions and generates a prompt from the video feed. This information is sent to a text generation model (Llama3) on AWS, which communicates with a backend server built with Node.js. The server manages authentication, geospatial data, and case data in MongoDB, and exposes APIs for interacting with Google Maps and the text generation model. Finally, a frontend application built with React JS, also hosted on AWS, gives first responders a user-friendly interface to view Google Maps, chat with the AI, and access checklists and case summaries.
In-depth documentation coming soon 🚀