Fruititionator is a Python project that leverages Computer Vision to detect fruits in real-time, fetch their nutritional values, and publish this data to an Adafruit IO dashboard using MQTT.
Clone the repo:
$ git clone [email protected]:emberfox205/fruititionator.git
Install the required libraries (with pip):
$ pip install -r requirements.txt
Get your USDA FoodData Central API key here.
Make an Adafruit IO account here or log in here.
Click on the key icon on the top right corner of the IO tab to view your Adafruit IO (AIO) username and key.
In your local repo, create a `.env` file:
$ code .env
Structure it like this:
API_KEY=your_usda_key
AIO_USERNAME=your_aio_username
AIO_KEY=your_aio_key
Replace `your_usda_key`, `your_aio_username`, and `your_aio_key` with your actual keys.
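The scripts are expected to read these values from the environment. Below is a minimal loading sketch, assuming the `python-dotenv` package; the project may load the file differently.

```python
# Hypothetical loading sketch; the actual project code may differ.
import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # read key=value pairs from .env into the environment

API_KEY = os.getenv("API_KEY")            # USDA FoodData Central key
AIO_USERNAME = os.getenv("AIO_USERNAME")  # Adafruit IO username
AIO_KEY = os.getenv("AIO_KEY")            # Adafruit IO key

if not all((API_KEY, AIO_USERNAME, AIO_KEY)):
    raise RuntimeError("One or more entries are missing from .env")
```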
To track published data on your Adafruit account, create four feeds with the following names and specifications:
- Confidence Score: Feed History On
- Detected Object: Feed History On
- Captured Image: Feed History Off
- Nutrition Values: Feed History Off
To view them in a compact GUI, create a dashboard, then add blocks with the same names as the feeds and connect each block to its respective feed.
Note
Some values are best displayed on specific block types:
"Captured Image" -> "Image" block.
"Nutrition Values" -> "Multiline Text" block.
After installation, in your local repo, navigate to `mqtt_client.py` and run the project. Alternatively, run the following in your terminal:
$ python mqtt_client.py
- The client acts as an interface between the code files and the dashboard.
- The file runs as-is and does not require any specific CLI commands.
- It first runs detection with `detect_fruit()`. To conclude detection, press `Esc` while the video feed is in focus (click on the video feed window to give it focus).
- Nutritional values are fetched and published using the `api_call.py` module, while the image corresponding to the chosen detection result is published using the `image_processor.py` module.
- The program then either prints the data or warns the user if nothing is detected. The user is prompted to input `e` to continue scanning or any other key to exit.
- Upon exiting, the program prints the number of successful scans (instances in which a fruit was detected) and a list of detected fruits. A rough sketch of this overall flow follows the list.
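Put together, the control flow of `mqtt_client.py` roughly resembles the sketch below. Treat it as a sketch only: the `Adafruit_IO` client, the feed keys, and the exact signatures of `detect_fruit()` and `get_api()` are assumptions, while `image_publisher()` follows the signature documented further down.

```python
# Rough flow sketch of the client loop; the real mqtt_client.py may differ.
import os

from Adafruit_IO import MQTTClient

from fruit_detector import detect_fruit
from api_call import get_api
from image_processor import image_publisher

client = MQTTClient(os.getenv("AIO_USERNAME"), os.getenv("AIO_KEY"))
client.connect()
client.loop_background()

scans = 0
detected_fruits = []

while True:
    # Detection runs until Esc is pressed in the video window.
    detection = detect_fruit(client, "confidence-score", "detected-object")  # assumed signature
    if detection is None or detection.name == "Nothing":
        print("Nothing detected.")
    else:
        nutrition = get_api(detection.name, client, "nutrition-values")  # assumed signature
        print(nutrition)
        image_publisher(client, "captured-image", detection.image)
        scans += 1
        detected_fruits.append(detection.name)

    if input("Enter 'e' to continue scanning, anything else to exit: ").strip().lower() != "e":
        break

print(f"Successful scans: {scans}")
print(f"Detected fruits: {detected_fruits}")
client.disconnect()
```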
- Relevant functions are in `fruit_detector.py`.
- The `detect_fruit()` function takes 3 arguments that are used to publish the data later. It opens the device's camera (if available and permission is granted) and also opens a window showing the camera feed. The model looks for one of the possible object classes, one of which is `Nothing` and the rest of which are types of fruit (a sketch of this loop follows the list).
- This function is also responsible for publishing to 2 Adafruit feeds: `Confidence Score` and `Detected Object`.
Note
All the fruit classes' names can be found in `labels.txt`.
- The function returns an instance of the `Detected_Object` class, which contains 3 attributes:
  - `name` of type `str`
  - `score` of type `float`
  - `image` of type `ndarray`
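For orientation, here is a stripped-down sketch of the capture loop described above. It assumes OpenCV for the camera and window handling and uses a placeholder where the real model inference in `fruit_detector.py` would run; the `Detected_Object` definition shown is only a plausible shape of the class in `custom_classes.py`.

```python
# Minimal capture-loop sketch; the real detect_fruit() will differ.
from dataclasses import dataclass

import cv2
import numpy as np


@dataclass
class Detected_Object:  # plausible shape; the real class lives in custom_classes.py
    name: str
    score: float
    image: np.ndarray


def detect_fruit(client, score_feed, object_feed):
    cap = cv2.VideoCapture(0)  # open the default camera
    best = Detected_Object("Nothing", 0.0, np.empty(0))
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # The real implementation classifies the frame here (classes from labels.txt)
        # and keeps the highest-confidence detection.
        name, score = "Nothing", 0.0  # placeholder for the model's prediction
        if score > best.score:
            best = Detected_Object(name, score, frame)
        cv2.imshow("Fruititionator", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc ends detection
            break
    cap.release()
    cv2.destroyAllWindows()
    client.publish(score_feed, best.score)  # Confidence Score feed
    client.publish(object_feed, best.name)  # Detected Object feed
    return best
```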
- All necessary functions are in `api_call.py`.
- The script takes 3 arguments, one of which, `keyword`, is used to query the USDA FoodData Central database (specifically the Foundation Foods and SR Legacy datasets) via the `get_api()` function. It returns an instance of the `Fruit_Nutrition` dataclass, which contains 3 attributes:
  - `name` of type `str`
  - `fdcId` of type `int`
  - `nutrition` of type `dict`, where each key-value pair consists of a nutrient name as the key and an f-string `f"{amount} {unit}"` as the value
Note
Info about the `Detected_Object` and `Fruit_Nutrition` classes can be found in `custom_classes.py`.
- In case the keyword is invalid, the function returns `None`. `api_call.py` also handles publishing the nutrition data to the `Nutrition Values` feed on Adafruit IO (a sketch of the query appears after the note below).
- Example of printing `Fruit_Nutrition` / the return value of `get_api()`:
Status code: 200
Fruit Name: apple red delicious
FdcId: 1750339
Nutrition: {
"Magnesium, Mg": "4.7 MG",
"Phosphorus, P": "9.18 MG",
"Potassium, K": "95.3 MG",
"Zinc, Zn": "0.0196 MG",
"Copper, Cu": "0.0243 MG",
}
Note
Remember to keep the `.env` file in the same directory as `api_call.py`, and DO NOT share the API key.
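To make the query above concrete, here is a hedged sketch of how a FoodData Central search can be issued and parsed into a `Fruit_Nutrition`-like result. The endpoint and response fields come from the public FDC search API; the dataclass definition, the result filtering, and the exact arguments of the real `get_api()` are assumptions.

```python
# Illustrative FDC query; the real get_api() in api_call.py may differ.
import os
from dataclasses import dataclass

import requests


@dataclass
class Fruit_Nutrition:  # plausible shape; the real dataclass lives in custom_classes.py
    name: str
    fdcId: int
    nutrition: dict


def get_api(keyword):
    resp = requests.get(
        "https://api.nal.usda.gov/fdc/v1/foods/search",
        params={
            "api_key": os.getenv("API_KEY"),
            "query": keyword,
            "dataType": "Foundation,SR Legacy",
        },
    )
    print(f"Status code: {resp.status_code}")
    foods = resp.json().get("foods", [])
    if not foods:
        return None  # invalid or unknown keyword
    food = foods[0]  # take the first match
    nutrition = {
        n["nutrientName"]: f'{n["value"]} {n["unitName"]}'
        for n in food.get("foodNutrients", [])
        if "value" in n
    }
    return Fruit_Nutrition(food["description"], food["fdcId"], nutrition)
```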
- Relevant functions are in `image_processor.py`.

`resize_image(image, target_size=100)`
- This function takes a numpy array representing an image and a target size in kilobytes, and returns a PIL Image object resized such that the size of the encoded image is less than or equal to the target size.
- The function converts the numpy array to a PIL Image object, then enters a loop in which it encodes the image, checks the size of the encoded result, and, if that size exceeds the target, resizes the image by a factor and repeats. The loop continues until the encoded image fits within the target size.
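A minimal sketch of that loop, assuming Pillow for the image handling; the shrink factor and JPEG settings of the real function are unknown and chosen here only for illustration:

```python
# Sketch of the resize-until-it-fits loop; details in the real module may differ.
from io import BytesIO

import numpy as np
from PIL import Image


def resize_image(image: np.ndarray, target_size: int = 100) -> Image.Image:
    pil_image = Image.fromarray(image)
    while True:
        buffer = BytesIO()
        pil_image.save(buffer, format="JPEG")    # encode to measure the byte size
        if buffer.tell() / 1024 <= target_size:  # target_size is in kilobytes
            return pil_image
        # Shrink by a fixed factor (assumed value) and try again.
        pil_image = pil_image.resize(
            (int(pil_image.width * 0.9), int(pil_image.height * 0.9))
        )
```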
`encode_image(image_pil)`
- This function takes a PIL Image object, encodes it in JPEG format, and then encodes the result in base64. It returns the base64-encoded image as a string.
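A sketch of this step, again assuming Pillow; the real `encode_image()` may differ in minor details:

```python
# Sketch of the JPEG + base64 encoding step.
import base64
from io import BytesIO

from PIL import Image


def encode_image(image_pil: Image.Image) -> str:
    buffer = BytesIO()
    image_pil.save(buffer, format="JPEG")  # JPEG-encode the PIL image in memory
    return base64.b64encode(buffer.getvalue()).decode("utf-8")  # base64 string
```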
`image_publisher(client, IMAGE_FEED_ID, ndarray_image)`
- This function takes an MQTT client, an image feed ID, and a numpy array representing an image. It calls `resize_image` to resize the image, `encode_image` to encode the resized image, and then publishes the encoded image to the MQTT feed specified by `IMAGE_FEED_ID`.
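Combining the two helpers sketched above, the publish step could look like the sketch below; the `publish` call assumes an Adafruit IO MQTT client object, and the internals of the real function may differ.

```python
# Sketch of the publish step, reusing the resize_image/encode_image sketches above.
import numpy as np


def image_publisher(client, IMAGE_FEED_ID, ndarray_image: np.ndarray) -> None:
    resized = resize_image(ndarray_image)   # keep the payload under the size limit
    encoded = encode_image(resized)         # base64 string for the Image block
    client.publish(IMAGE_FEED_ID, encoded)  # send to the Captured Image feed
```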