We will use an Amazon SageMaker JumpStart solution to quickly deploy a model, train it, and create an endpoint.
This solution uses the ML model provided by the SageMaker JumpStart solution at https://github.com/awslabs/aws-fleet-predictive-maintenance/ to do predictive maintenance for the vehicles.
You'll need to access SageMaker Studio to set up the JumpStart solution. To do that, open the AWS Console, go to SageMaker, and click on Create Domain.
Choose Quick setup and click on Set up.
This will take some time.
Once the domain is ready, click on the dropdown and choose Studio as shown below.
Then open up Studio Classic.
Once it opens up, navigate to JumpStart solutions, choose Predictive maintenance for vehicle fleets, and launch it. It will deploy the model and create an endpoint.
Log in to the AWS Console, go to Lambda, and click on Create function.
Name your function AWSIoTVehicleFailurePredict, be sure to choose Python 3.11 as the Runtime, and click on Create function.
Now that our Lambda function is created, we'll deploy our code to it.
The function needs two additional packages, boto3 and pymongo (v4.6.0), to interface with the SageMaker endpoint and the Atlas cluster.
On your local machine, install the packages into the same folder that contains lambda_function.py. You can install packages to a specific folder using the command below:
pip install --target ./ package_name
Zip up the contents of the entire folder (lambda_function.py together with the installed package folders). Make sure the packages and the Lambda function code sit at the top level of the archive, as shown below.
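Putting the two steps above together, the packaging commands look roughly like this (a sketch; run them from the folder that contains your lambda_function.py, and adjust the zip file name to taste):

```shell
# Install the dependencies into the current folder so they end up
# alongside lambda_function.py at the top level of the zip.
pip install --target ./ boto3 pymongo==4.6.0

# Zip the folder CONTENTS (note the trailing "."), not the folder itself,
# so lambda_function.py and the package folders are at the archive root.
zip -r lambda_deployment.zip .
```

Zipping the parent folder instead of its contents is a common mistake: Lambda would then fail to find the handler because everything is nested one level too deep.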
Now upload the zip file to an S3 bucket, as it will be too large to upload directly to your Lambda function.
Open up the Lambda function we created earlier, go to Code, click on Upload from, and choose Amazon S3 location as shown below.
Enter the path to the zip file in your S3 bucket and click on Save.
Open up the Lambda function, go to Configuration, and set the environment variables below:
ATLAS_URI
DB_NAME
FAILURE_THRESHOLD
SAGEMAKER_ENDPOINT
Fill in the values with your Atlas cluster's connection string, the database name, the failure threshold, and your SageMaker endpoint name, then save.
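To make the pieces concrete, here is a minimal sketch of what the Lambda handler could look like. The endpoint's response shape, the content type, and the `jobs` collection name are assumptions; adjust them to match your deployed JumpStart endpoint and Atlas schema:

```python
import json
import os


def parse_prediction(response_body: str) -> float:
    """Extract the failure probability from the endpoint's JSON response.
    (The exact response shape is an assumption -- adjust to your endpoint.)"""
    payload = json.loads(response_body)
    return float(payload["predictions"][0])


def should_create_job(probability: float, threshold: float) -> bool:
    """A maintenance job is created only when the predicted failure
    probability exceeds the configured threshold."""
    return probability > threshold


def lambda_handler(event, context):
    # boto3 ships with the Lambda Python runtime; pymongo comes from the zip.
    import boto3
    from pymongo import MongoClient

    s3 = boto3.client("s3")
    runtime = boto3.client("sagemaker-runtime")

    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    # The file name is the vehicle's MongoDB ObjectId; the file body holds
    # the last 20 sensor readings in the format the endpoint expects.
    vehicle_id = os.path.basename(key)
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    response = runtime.invoke_endpoint(
        EndpointName=os.environ["SAGEMAKER_ENDPOINT"],
        ContentType="text/csv",  # assumption -- match your endpoint's input
        Body=body,
    )
    probability = parse_prediction(response["Body"].read().decode())

    if should_create_job(probability, float(os.environ["FAILURE_THRESHOLD"])):
        client = MongoClient(os.environ["ATLAS_URI"])
        db = client[os.environ["DB_NAME"]]
        # "jobs" is a hypothetical collection name for created work orders.
        db["jobs"].insert_one({"vehicleID": vehicle_id, "probability": probability})

    return {"statusCode": 200}
```

Keeping the parsing and threshold logic in small pure functions makes them easy to unit test without AWS credentials.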
Next, attach the required permissions to the Lambda function. It needs read access to the S3 bucket and permission to invoke the SageMaker endpoint.
The final step is to run this function. To do that, we'll set up a trigger that runs it whenever new sensor data is uploaded to the source S3 bucket.
As shown below, the export trigger in App Services uploads data as simple text files into your configured S3 bucket at the configured interval.
Here, the file names are the vehicle IDs (MongoDB ObjectIds), and each file contains the last 20 voltage and current sensor readings in the format expected by the SageMaker endpoint.
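For illustration, an uploaded object might look like the sketch below. The ObjectId and the CSV layout of voltage,current pairs are placeholders; the actual format is whatever your export function writes and your endpoint expects:

```
s3://<your-bucket>/<your-prefix>/6565c8a2f04a14e8c1e3a2b7   <- file name = vehicle ObjectId
12.6,1.4
12.5,1.5
12.6,1.3
... (20 readings in total)
```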
To set up the trigger, open your Lambda function and click on Add trigger.
In the Select a source dropdown, choose S3, select the bucket, fill in the prefix (the one used in the export function in App Services), check the acknowledgement, and click on Add.
You can also tune the FAILURE_THRESHOLD value in the Lambda's environment variables. A job is created only when the failure probability predicted from the sensors' current and voltage readings exceeds this threshold, so lowering the threshold makes job creation more likely.