The ONVIF, udev, and OPC UA Configurations documentation explains how to deploy Akri and utilize a specific Discovery Handler using Helm (more information about the Akri Helm charts can be found in the user guide). This documentation elaborates upon them, covering the following:
- Starting Akri without any Configurations
- Generating, modifying and applying a Configuration
- Deploying multiple Configurations
- Modifying a deployed Configuration
- Adding another Configuration to a cluster
- Modifying a broker
- Deleting a Configuration from a cluster
- Applying Discovery Handlers
To install Akri without any protocol Configurations, run this:
Note: See the cluster setup steps for information on how to set the crictl configuration variable `AKRI_HELM_CRICTL_CONFIGURATION`.
helm repo add akri-helm-charts https://project-akri.github.io/akri/
helm install akri akri-helm-charts/akri \
$AKRI_HELM_CRICTL_CONFIGURATION
This will deploy the Akri Controller and the Akri Agents (one per node).
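To verify the deployment, you can list the Akri pods and Configurations; with no Configurations applied, you should see only the Controller and one Agent per node (the pod names below are illustrative):

kubectl get pods
# e.g. akri-agent-daemonset-xxxxx and akri-controller-deployment-xxxxx in Running state
kubectl get akric
# should report that no resources are found, since no Configurations have been applied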
Helm allows us to parametrize the commonly modified fields in our Configuration templates, and we have provided many (to see them, run `helm inspect values akri-helm-charts/akri`). For more advanced Configuration changes that are not aided by our Helm chart, we suggest generating a Configuration file using Helm and then manually modifying it.
For example, to create an ONVIF Configuration file, run the following. (To instead create a udev Configuration, substitute `onvif.configuration.enabled` with `udev.configuration.enabled` and add a udev rule. For OPC UA, substitute with `opcua.configuration.enabled`.)
helm template akri akri-helm-charts/akri \
--set onvif.configuration.enabled=true \
--set onvif.configuration.brokerPod.image.repository=nginx \
--set rbac.enabled=false \
--set controller.enabled=false \
--set agent.enabled=false > configuration.yaml
Note that nginx was specified as the broker pod image. Insert your broker image instead, or remove the broker pod image from the installation command to generate a Configuration without a broker PodSpec or ServiceSpecs. Once you have modified the yaml file, you can apply the new Configuration to the cluster with standard kubectl like this:
kubectl apply -f configuration.yaml
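After applying, you can confirm that the Configuration was created (the name assumes the ONVIF example above):

kubectl get akric
# should list akri-onvif along with its capacity and age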
{% hint style="info" %}
When modifying the Configuration, do not remove the resource request and limit {{PLACEHOLDER}}
. The Controller inserts the request for the discovered device/Instance here.
{% endhint %}
The following sections explain some of the ways the configuration.yaml could be modified to customize settings/fields that cannot be set with Akri's Helm Chart.
The `brokerPodSpec` property is a full PodSpec and can be modified as such. For example, to allow the master Node to have a protocol broker Pod scheduled to it, modify the Configuration, ONVIF in this case, like so:
spec:
  brokerPodSpec:
    containers:
    - name: akri-onvif-video-broker
      image: "ghcr.io/project-akri/akri/onvif-video-broker:latest-dev"
      resources:
        limits:
          "{{PLACEHOLDER}}" : "1"
    tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
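Any other PodSpec field can be added in the same way. As a hedged sketch, the example below additionally pins the broker to amd64 nodes with a nodeSelector and passes an environment variable; the label value and variable name are illustrative, not part of Akri's chart:

spec:
  brokerPodSpec:
    nodeSelector:                      # illustrative scheduling constraint
      "kubernetes.io/arch": "amd64"
    containers:
    - name: akri-onvif-video-broker
      image: "ghcr.io/project-akri/akri/onvif-video-broker:latest-dev"
      env:
      - name: LOG_LEVEL                # illustrative variable; use whatever your broker expects
        value: "info"
      resources:
        limits:
          "{{PLACEHOLDER}}" : "1"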
The `brokerJobSpec` property is a full JobSpec and can be modified as such. Akri's Helm chart enables modifying the `capacity`, `parallelism`, and `backoffLimit` fields of the JobSpec. Other fields of the JobSpec and the PodSpec within the JobSpec can be specified in a similar manner as described in the section on modifying the PodSpec.
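As a rough sketch of what an edited Job-based Configuration could look like (the container name, image, and command are illustrative, and the exact generated fields depend on your chart version), keeping the resource limit placeholder intact:

spec:
  brokerJobSpec:
    parallelism: 1
    completions: 1
    backoffLimit: 2
    template:
      spec:
        containers:
        - name: akri-broker-job        # illustrative name
          image: busybox               # illustrative image
          command: ["sh", "-c", "echo 'Hello World'"]
          resources:
            limits:
              "{{PLACEHOLDER}}": "1"   # do not remove; the Controller fills in the device request here
        restartPolicy: Never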
The `instanceServiceSpec` and `configurationServiceSpec` properties are full ServiceSpecs and can be modified as such. The simplest reason to modify either might be to specify different ports (perhaps 8085 and 8086):
spec:
  instanceServiceSpec:
    ports:
    - name: grpc
      port: 8085
      targetPort: 8083
  configurationServiceSpec:
    ports:
    - name: grpc
      port: 8086
      targetPort: 8083
{% hint style="info" %}
The simple properties of `instanceServiceSpec` and `configurationServiceSpec` (like name, port, targetPort, and protocol) can be set using Helm's `--set` flag, e.g. `--set onvif.instanceService.targetPort=90`.
{% endhint %}
If you want your end application to consume frames from both IP cameras and locally attached cameras, Akri can be installed from the start with both the ONVIF and udev Configurations like so:
helm repo add akri-helm-charts https://project-akri.github.io/akri/
helm install akri akri-helm-charts/akri \
$AKRI_HELM_CRICTL_CONFIGURATION \
--set onvif.configuration.enabled=true \
--set udev.configuration.enabled=true \
--set udev.configuration.discoveryDetails.udevRules[0]='KERNEL=="video[0-9]*"\, ENV{ID_V4L_CAPABILITIES}==":capture:"'
{% hint style="info" %} You must specify a udev rule to successfully build the udev Configuration. {% endhint %}
You can confirm that both an `akri-onvif` and an `akri-udev` Configuration have been created by running:
kubectl get akric
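The output should list both Configurations, similar to the following (the capacity and age columns will reflect your installation):

NAME         CAPACITY   AGE
akri-onvif   1          30s
akri-udev    1          30s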
Each Configuration could also have been deployed via separate Helm installations:
helm install udev-config akri-helm-charts/akri \
--set controller.enabled=false \
--set agent.enabled=false \
--set rbac.enabled=false \
--set udev.configuration.enabled=true \
--set udev.configuration.discoveryDetails.udevRules[0]='KERNEL=="video[0-9]*"\, ENV{ID_V4L_CAPABILITIES}==":capture:"'
helm install onvif-config akri-helm-charts/akri \
--set controller.enabled=false \
--set agent.enabled=false \
--set rbac.enabled=false \
--set onvif.configuration.enabled=true
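Since each Configuration now lives in its own Helm release, each can be upgraded or deleted independently. You can list the releases to confirm (release names match those passed to `helm install`):

helm list
# expect your base Akri release (containing the Controller and Agent) plus udev-config and onvif-config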
An already deployed Configuration can be modified in one of two ways:
- Using the `helm upgrade` command
- Generating, modifying, and applying a custom Configuration
A Configuration can be modified using the `helm upgrade` command, which upgrades an existing release according to the values provided, only updating what has changed. Simply modify your `helm install` command to reflect the new desired state of Akri and replace `helm install` with `helm upgrade`. Using the ONVIF protocol implementation as an example, say an IP camera with IP address 10.0.0.1 is malfunctioning and should be filtered out of discovery; the following command could be run:
helm upgrade akri akri-helm-charts/akri \
$AKRI_HELM_CRICTL_CONFIGURATION \
--set onvif.configuration.enabled=true \
--set onvif.configuration.brokerPod.image.repository=<your broker image name> \
--set onvif.configuration.brokerPod.image.tag=<your broker image tag> \
--set onvif.configuration.discoveryDetails.ipAddresses.action=Exclude \
--set onvif.configuration.discoveryDetails.ipAddresses.items[0]=10.0.0.1
Note that the command is not simply `helm upgrade --set onvif.configuration.discoveryDetails.ipAddresses.items[0]=10.0.0.1`; rather, it includes all the old settings along with the new one. Also, note that we assumed you specified a broker pod image in your original installation command, so that brokers were deployed to utilize discovered cameras.
Helm will create a new ONVIF Configuration and apply it to the cluster. When the Agent sees that a Configuration has been updated, it deletes all Instances associated with that Configuration and the Controller brings down all associated broker pods. Then, new Instances and broker pods are created. Therefore, the command above will bring down all ONVIF broker pods and then bring them all back up except for the ones servicing the IP camera at IP address 10.0.0.1.
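You can watch this happen by monitoring Akri's Instances and the broker pods while the upgrade is applied (Instance and pod names are generated and will differ); run each in a separate terminal:

kubectl get akrii --watch
kubectl get pods --watch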
Another Configuration can be added to an existing Akri installation using `helm upgrade` or via a new Helm installation.
Another Configuration can be added to the cluster by using `helm upgrade`. For example, if you originally installed just the ONVIF Configuration and now also want to discover local cameras via udev, simply run the following:
helm upgrade akri akri-helm-charts/akri \
$AKRI_HELM_CRICTL_CONFIGURATION \
--set onvif.configuration.enabled=true \
--set udev.configuration.enabled=true \
--set udev.configuration.discoveryDetails.udevRules[0]='KERNEL=="video[0-9]*"\, ENV{ID_V4L_CAPABILITIES}==":capture:"'
The udev Configuration could also have been applied via a new Helm installation like so:
helm install udev-config akri-helm-charts/akri \
--set controller.enabled=false \
--set agent.enabled=false \
--set rbac.enabled=false \
--set udev.configuration.enabled=true \
--set udev.configuration.discoveryDetails.udevRules[0]='KERNEL=="video[0-9]*"\, ENV{ID_V4L_CAPABILITIES}==":capture:"'
Want to change which broker is deployed to already discovered devices, or deploy a new Job to the devices? Instead of deleting and reapplying the Configuration, you can modify the `brokerSpec` of the Configuration using one of the strategies from the section on modifying a deployed Configuration.
This can be illustrated using Akri's mock debug echo Discovery Handler. The following installation deploys a BusyBox Job to each discovered mock device. That Job simply echoes "Hello World".
helm install akri akri-helm-charts/akri \
--set agent.allowDebugEcho=true \
--set debugEcho.discovery.enabled=true \
--set debugEcho.configuration.brokerJob.image.repository=busybox \
--set debugEcho.configuration.brokerJob.command[0]="sh" \
--set debugEcho.configuration.brokerJob.command[1]="-c" \
--set debugEcho.configuration.brokerJob.command[2]="echo 'Hello World'" \
--set debugEcho.configuration.enabled=true
Say you are feeling more exuberant and want the Job to echo "Hello Amazing World" instead; you can update the `brokerSpec` like so:
helm upgrade akri akri-helm-charts/akri \
--set agent.allowDebugEcho=true \
--set debugEcho.discovery.enabled=true \
--set debugEcho.configuration.brokerJob.image.repository=busybox \
--set debugEcho.configuration.brokerJob.command[0]="sh" \
--set debugEcho.configuration.brokerJob.command[1]="-c" \
--set debugEcho.configuration.brokerJob.command[2]="echo 'Hello Amazing World'" \
--set debugEcho.configuration.enabled=true
New Jobs will be spun up.
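To confirm, list the Jobs created for the Configuration and check the log of one of them (the Job name below is a placeholder; substitute a name from your cluster):

kubectl get jobs
kubectl logs job/<job-name>
# expected output: Hello Amazing World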
Note: The Agent and Controller can only gracefully handle changes to the `brokerSpec`. If any other parts of the Configuration are modified, the Agent will restart discovery, deleting and recreating the Instances.
If an operator no longer wants Akri to discover devices defined by a Configuration, they can delete the Configuration, and all associated broker pods will automatically be brought down. This can be done with `helm upgrade`, `helm delete`, or kubectl.
A Configuration can be deleted from a cluster using `helm upgrade`. For example, if both ONVIF and udev Configurations have been installed in a cluster, the udev Configuration can be deleted by specifying only the ONVIF Configuration in a `helm upgrade` command like the following:
helm upgrade akri akri-helm-charts/akri \
$AKRI_HELM_CRICTL_CONFIGURATION \
--set onvif.configuration.enabled=true
If the Configuration was applied in its own Helm installation (named `udev-config` in this example), the Configuration can be deleted by deleting the installation:
helm delete udev-config
A Configuration can also be deleted using kubectl. To list all applied Configurations, run `kubectl get akric`. If both udev and ONVIF Configurations have been applied with capacities of 5, the output should look like the following:
NAME         CAPACITY   AGE
akri-onvif   5          3s
akri-udev    5          16m
To delete the ONVIF Configuration and bring down all ONVIF broker pods, run:
kubectl delete akric akri-onvif
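You can then verify that the Configuration, its Instances, and the associated broker pods have been removed:

kubectl get akric
kubectl get akrii
kubectl get pods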
The Agent discovers devices via Discovery Handlers. Akri supports an Agent image that includes all supported Discovery Handlers. This Agent will be used if `agent.full=true` is set, like so:
helm install akri akri-helm-charts/akri \
$AKRI_HELM_CRICTL_CONFIGURATION \
--set agent.full=true
By default, a slim Agent without any embedded Discovery Handlers is deployed, and the required Discovery Handlers can be deployed as DaemonSets by specifying `<discovery handler name>.discovery.enabled=true` when installing Akri. For example, Akri can be installed with the OPC UA and ONVIF Discovery Handlers like so:
helm install akri akri-helm-charts/akri \
$AKRI_HELM_CRICTL_CONFIGURATION \
--set opcua.discovery.enabled=true \
--set onvif.discovery.enabled=true
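Each enabled Discovery Handler runs as its own DaemonSet alongside the Agent. You can confirm they are deployed with the following (the exact DaemonSet names depend on the chart version):

kubectl get daemonsets
# expect the Agent DaemonSet plus ONVIF and OPC UA discovery DaemonSets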