Google Coral support #554
Comments
I totally agree with you that support for the Google Coral accelerator would be great.
With the PR #580 for Coral support, is this looking any more possible? I tried looking through the contents of the PR but couldn't actually tell whether it can be used without further code changes. @pospielov or @iamrinni, can you comment?
Irina did a great job and managed to run the compreface-core functionality on Google Coral.
Thanks @pospielov. Can you offer any advice on how to get moving with it? My Python skills are pretty much nonexistent, but if it's just a matter of wrapping everything up in Docker, I might be able to help. The pending PR seems to have Docker images included; did these not work in testing? I haven't properly gotten to grips with the architecture yet to know how everything hangs together, but is the idea that the only thing that needs replacing is the container(s) running TensorFlow, and then it should "just work"? I understand that might be a time-consuming question to answer, so only if you have time; any hints in the right direction would be appreciated.
https://github.com/exadel-inc/CompreFace/blob/EFRS-1114/embedding-calculator/tpu.Dockerfile.full |
Sorry for jumping onboard this issue, but I wanted to share my progress. I tried running this in my current test environment (Unraid OS, a stripped-down Linux distribution) and ran into some issues.
Small update:
I noticed the branch from Irina was behind master, so I merged master into it and now it's building again. Unfortunately the build takes quite long, so I'm waiting on that to see if it fixes the issue.
fe08d90#diff-462581714c2d689beb979af2fa29b0c9122382efafbe9133b4319d79c1c8d6e8
Last update:
I currently don't have time to debug any further; I'll continue a bit later. Maybe someone can continue based on what I've provided.
CUDA is the name of NVIDIA's GPGPU platform, and from the file names it looks like the branch is using TensorFlow rather than TensorFlow Lite (which is required for the Coral). In the PR's main comment block, she referenced a function where you have to pass in a "TPU" parameter, but I didn't understand what that referenced.
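For context, inference on a Coral goes through TensorFlow Lite with the Edge TPU delegate rather than full TensorFlow. A minimal Python sketch of that flow, assuming the tflite_runtime package and the libedgetpu runtime are installed and a model already compiled for the Edge TPU is available (the model_edgetpu.tflite filename here is just a placeholder, not a file from this repo):

import numpy as np
import tflite_runtime.interpreter as tflite

# Load a model compiled for the Edge TPU and attach the Edge TPU delegate.
# If the delegate can't be loaded (e.g. no Coral attached or libedgetpu missing),
# load_delegate fails, and code that swallows that error typically falls back to CPU.
interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",  # placeholder model file
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy input with the expected shape/dtype just to exercise the TPU.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]).shape)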
I had a closer look at the PR, but even after changing the code to:
Basically forcing it to always use the
The error below doesn't seem Google Coral related; it also occurs if I use the CPU.
I've debugged a ton, and currently I'm stuck on this error:
As soon as I try to invoke the
Does anyone have any ideas on how to continue? I'm currently stuck.
Kind of a strange error. Then I changed the file coralmtcnn.py the same way as you did.
Looks like a problem with a library.
I can't say where the problem is; I need to dig deeper to fix it.
So I tested again with the clean branch like you mentioned above.
Your error
Do you have the same error as me if you unplug the Coral?
No, I can't seem to get the code to run on the Coral. It doesn't matter whether I have it plugged in, even with the code adjustment to force it. It always uses the CPU instead of the Coral.
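One hedged way to check whether the Edge TPU runtime can see the device at all, independent of the CompreFace code, is the pycoral helper below (assuming the pycoral package is installed inside the container):

from pycoral.utils.edgetpu import list_edge_tpus

# Returns one entry per Edge TPU that libedgetpu can see (USB or PCIe).
# An empty list usually means the device isn't passed through to the container
# or the runtime can't open it, in which case code silently falls back to CPU.
print(list_edge_tpus())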
I can gladly help with testing on my RPi 4 + Google Coral running Frigate today.
Any updates on support for Google Coral?
Unfortunately no, we don't have contributors with a Google Coral right now.
I know a bit of Python and have a Raspberry Pi 4 (8 GB, 64-bit) with a Coral AI accelerator. What is the latest code with Coral support?
First, this would be great! There are two problems to solve:
What I suggest is to first try to build CompreFace with Google Coral support and then try to build it for ARM devices. We have a branch, https://github.com/exadel-inc/CompreFace/tree/EFRS-1114, which is quite old, but it's a good place to start from. Could you check whether it builds and works with Google Coral? There are two Dockerfiles:
Are these Dockerfiles ARM64 compatible?
No, we don't have ARM-compatible Dockerfiles. This is another challenge.
I have all that, plus a Jetson Nano and a test VMware environment running on a Dell PowerEdge R620. Although I'm no programmer, I do have 26 years of IT background, mainly networking.
#610 (comment)
I would love to.
So basically run through what you did on the Jetson and that should compile an image?
I would like to help with this. Java programming background, but happy with a bit of Python, TypeScript, etc. Only problem is my Coral USB order is saying delivery 05/2023 :/ Hoping they get some earlier stock!
I'd like to see it run off the Nano. I have Frigate using the Coral for object detection; it would be nice to use all those CUDA cores for facial recognition. For now I just have it running with Double Take in a VM on a PowerEdge R620, so it's only using Xeon processor cores, although I'm thinking about finding an old NVIDIA card to throw in there and see if that helps.
I have 2 PCIe M.2 Corals and I'd be happy to help test/debug this weekend. Corals: 1 Dual-Edge M.2 TPU and 1 single M.2 TPU, using 2 M.2-to-PCIe adapter cards. Setup: Frigate (GPU inference on an NVIDIA GTX 1080), Double Take, and CompreFace (CPU inference on an Intel Xeon E3 1280), hosted in Docker via Home Assistant OS add-ons running as a QEMU guest on an Ubuntu 22.04 host. I started working on adding CompreFace GPU builds to the single-container builds for use as a Home Assistant add-on last weekend, but I would much rather do face detection on the Corals. I don't have the USB Corals, but hopefully it's the same Python API with config differences.
Not sure what you mean by "adding Compreface GPU builds to the single container builds".
I have one and would be happy to help, but I only have an RPi.
I was about to make the same offer 😄
Hi, I have two Mini PCIe Coral TPUs currently being used with Frigate only. P.S. I don't know much about coding, but I have built and maintained multi-arch Docker containers.
I do not see any error in
❯ docker buildx build -t justsky/compreface:tpu . -f tpu.Dockerfile
[+] Building 668.6s (16/23)
=> [internal] load .dockerignore 0.1s
=> => transferring context: 48B 0.0s
=> [internal] load build definition from tpu.Dockerfile 0.1s
=> => transferring dockerfile: 2.12kB 0.0s
=> [internal] load metadata for docker.io/library/python:3.7-slim 2.2s
=> [ 1/19] FROM docker.io/library/python:3.7-slim@sha256:85ddc7b500c5e33a6eec44adbde8347a8a07714f03fc638f2cf4b13837bac601 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 9.88kB 0.0s
=> CACHED [ 2/19] RUN apt-get update && apt-get install -y build-essential cmake git wget unzip curl yasm pkg-config libswscale-dev libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libavformat-dev libpq- 0.0s
=> [ 3/19] RUN echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list 0.3s
=> [ 4/19] RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - 7.8s
=> [ 5/19] RUN apt-get update && apt-get install -y libedgetpu1-std 32.0s
=> [ 6/19] WORKDIR /app/ml 0.0s
=> [ 7/19] COPY requirements.txt . 0.1s
=> [ 8/19] RUN pip --no-cache-dir install -r requirements.txt 226.9s
=> [ 9/19] COPY src src 0.1s
=> [10/19] COPY srcext srcext 0.1s
=> [11/19] RUN pip --no-cache-dir install srcext/mtcnn_tflite/ 165.4s
=> ERROR [12/19] RUN python -m src.services.facescan.plugins.setup 233.4s
------
> [12/19] RUN python -m src.services.facescan.plugins.setup:
#0 8.995 Collecting tensorflow~=2.5.0
#0 14.27 Downloading tensorflow-2.5.3-cp37-cp37m-manylinux2010_x86_64.whl (460.3 MB)
#0 63.55 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 460.3/460.3 MB 10.9 MB/s eta 0:00:00
#0 65.49 Downloading tensorflow-2.5.0-cp37-cp37m-manylinux2010_x86_64.whl (454.3 MB)
#0 112.3 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 454.3/454.3 MB 10.4 MB/s eta 0:00:00
#0 114.5 Collecting tf-slim~=1.1.0
#0 114.5 Downloading tf_slim-1.1.0-py2.py3-none-any.whl (352 kB)
#0 114.5 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 352.1/352.1 KB 14.6 MB/s eta 0:00:00
#0 114.7 Collecting typing-extensions~=3.7.4
#0 114.7 Downloading typing_extensions-3.7.4.3-py3-none-any.whl (22 kB)
#0 114.8 Requirement already satisfied: google-pasta~=0.2 in /usr/local/lib/python3.7/site-packages (from tensorflow~=2.5.0) (0.2.0)
#0 116.0 Collecting numpy~=1.19.2
#0 116.0 Downloading numpy-1.19.5-cp37-cp37m-manylinux2010_x86_64.whl (14.8 MB)
#0 117.4 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 14.8/14.8 MB 10.8 MB/s eta 0:00:00
#0 118.4 Collecting keras-nightly~=2.5.0.dev
#0 118.5 Downloading keras_nightly-2.5.0.dev2021032900-py2.py3-none-any.whl (1.2 MB)
#0 118.7 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 9.9 MB/s eta 0:00:00
#0 119.2 Collecting wrapt~=1.12.1
#0 119.3 Downloading wrapt-1.12.1.tar.gz (27 kB)
#0 119.3 Preparing metadata (setup.py): started
#0 120.1 Preparing metadata (setup.py): finished with status 'done'
#0 123.1 Collecting grpcio~=1.34.0
#0 123.3 Downloading grpcio-1.34.1-cp37-cp37m-manylinux2014_x86_64.whl (4.0 MB)
#0 123.7 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.0/4.0 MB 10.2 MB/s eta 0:00:00
#0 123.7 Requirement already satisfied: tensorboard~=2.5 in /usr/local/lib/python3.7/site-packages (from tensorflow~=2.5.0) (2.11.2)
#0 123.8 Collecting six~=1.15.0
#0 123.9 Downloading six-1.15.0-py2.py3-none-any.whl (10 kB)
#0 124.2 Collecting h5py~=3.1.0
#0 124.2 Downloading h5py-3.1.0-cp37-cp37m-manylinux1_x86_64.whl (4.0 MB)
#0 124.6 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.0/4.0 MB 10.9 MB/s eta 0:00:00
#0 124.6 Requirement already satisfied: gast==0.4.0 in /usr/local/lib/python3.7/site-packages (from tensorflow~=2.5.0) (0.4.0)
#0 124.6 Requirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.7/site-packages (from tensorflow~=2.5.0) (3.19.6)
#0 124.6 Requirement already satisfied: opt-einsum~=3.3.0 in /usr/local/lib/python3.7/site-packages (from tensorflow~=2.5.0) (3.3.0)
#0 124.7 Collecting termcolor~=1.1.0
#0 124.7 Downloading termcolor-1.1.0.tar.gz (3.9 kB)
#0 124.8 Preparing metadata (setup.py): started
#0 125.5 Preparing metadata (setup.py): finished with status 'done'
#0 125.6 Collecting flatbuffers~=1.12.0
#0 125.7 Downloading flatbuffers-1.12-py2.py3-none-any.whl (15 kB)
#0 125.7 Requirement already satisfied: astunparse~=1.6.3 in /usr/local/lib/python3.7/site-packages (from tensorflow~=2.5.0) (1.6.3)
#0 125.8 Collecting absl-py~=0.10
#0 125.8 Downloading absl_py-0.15.0-py3-none-any.whl (132 kB)
#0 125.8 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 132.0/132.0 KB 23.6 MB/s eta 0:00:00
#0 125.9 Collecting tensorflow-estimator<2.6.0,>=2.5.0rc0
#0 126.1 Downloading tensorflow_estimator-2.5.0-py2.py3-none-any.whl (462 kB)
#0 126.1 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 462.4/462.4 KB 11.2 MB/s eta 0:00:00
#0 126.4 Collecting keras-preprocessing~=1.1.2
#0 126.5 Downloading Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB)
#0 126.5 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 42.6/42.6 KB 33.7 MB/s eta 0:00:00
#0 126.5 Requirement already satisfied: wheel~=0.35 in /usr/local/lib/python3.7/site-packages (from tensorflow~=2.5.0) (0.40.0)
#0 126.7 Requirement already satisfied: cached-property in /usr/local/lib/python3.7/site-packages (from h5py~=3.1.0->tensorflow~=2.5.0) (1.5.2)
#0 127.0 Requirement already satisfied: werkzeug>=1.0.1 in /usr/local/lib/python3.7/site-packages (from tensorboard~=2.5->tensorflow~=2.5.0) (1.0.1)
#0 127.1 Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/site-packages (from tensorboard~=2.5->tensorflow~=2.5.0) (1.8.1)
#0 127.1 Requirement already satisfied: setuptools>=41.0.0 in /usr/local/lib/python3.7/site-packages (from tensorboard~=2.5->tensorflow~=2.5.0) (57.5.0)
#0 127.1 Requirement already satisfied: google-auth<3,>=1.6.3 in /usr/local/lib/python3.7/site-packages (from tensorboard~=2.5->tensorflow~=2.5.0) (2.19.0)
#0 127.1 Requirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/site-packages (from tensorboard~=2.5->tensorflow~=2.5.0) (2.24.0)
#0 127.1 Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/site-packages (from tensorboard~=2.5->tensorflow~=2.5.0) (0.4.6)
#0 127.2 Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/site-packages (from tensorboard~=2.5->tensorflow~=2.5.0) (3.4.3)
#0 127.2 Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/site-packages (from tensorboard~=2.5->tensorflow~=2.5.0) (0.6.1)
#0 127.3 Requirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/site-packages (from google-auth<3,>=1.6.3->tensorboard~=2.5->tensorflow~=2.5.0) (0.3.0)
#0 127.3 Requirement already satisfied: cachetools<6.0,>=2.0.0 in /usr/local/lib/python3.7/site-packages (from google-auth<3,>=1.6.3->tensorboard~=2.5->tensorflow~=2.5.0) (5.3.0)
#0 127.3 Requirement already satisfied: rsa<5,>=3.1.4 in /usr/local/lib/python3.7/site-packages (from google-auth<3,>=1.6.3->tensorboard~=2.5->tensorflow~=2.5.0) (4.9)
#0 127.3 Requirement already satisfied: urllib3<2.0 in /usr/local/lib/python3.7/site-packages (from google-auth<3,>=1.6.3->tensorboard~=2.5->tensorflow~=2.5.0) (1.25.11)
#0 127.4 Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.5->tensorflow~=2.5.0) (1.3.1)
#0 127.4 Requirement already satisfied: importlib-metadata>=4.4 in /usr/local/lib/python3.7/site-packages (from markdown>=2.6.8->tensorboard~=2.5->tensorflow~=2.5.0) (4.13.0)
#0 127.5 Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/site-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow~=2.5.0) (3.0.4)
#0 127.5 Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/site-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow~=2.5.0) (2.10)
#0 127.5 Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/site-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow~=2.5.0) (2023.5.7)
#0 127.7 Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/site-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard~=2.5->tensorflow~=2.5.0) (3.15.0)
#0 127.8 Requirement already satisfied: pyasn1<0.6.0,>=0.4.6 in /usr/local/lib/python3.7/site-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard~=2.5->tensorflow~=2.5.0) (0.5.0)
#0 127.8 Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.5->tensorflow~=2.5.0) (3.2.2)
#0 128.3 Building wheels for collected packages: termcolor, wrapt
#0 128.3 Building wheel for termcolor (setup.py): started
#0 129.3 Building wheel for termcolor (setup.py): finished with status 'done'
#0 129.3 Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4848 sha256=b90b359c0c8bf7061e2b0fbe2e533803da6eca3a8889741f0fcfabf841586335
#0 129.3 Stored in directory: /tmp/pip-ephem-wheel-cache-k5vfzimf/wheels/3f/e3/ec/8a8336ff196023622fbcb36de0c5a5c218cbb24111d1d4c7f2
#0 129.3 Building wheel for wrapt (setup.py): started
#0 131.9 Building wheel for wrapt (setup.py): finished with status 'done'
#0 131.9 Created wheel for wrapt: filename=wrapt-1.12.1-cp37-cp37m-linux_x86_64.whl size=70783 sha256=d4694c2c27357bde86f0622e1a5f2386225f0f568058633179659b30acd5102c
#0 131.9 Stored in directory: /tmp/pip-ephem-wheel-cache-k5vfzimf/wheels/62/76/4c/aa25851149f3f6d9785f6c869387ad82b3fd37582fa8147ac6
#0 131.9 Successfully built termcolor wrapt
#0 133.7 Installing collected packages: wrapt, typing-extensions, termcolor, tensorflow-estimator, keras-nightly, flatbuffers, six, numpy, keras-preprocessing, h5py, grpcio, absl-py, tf-slim, tensorflow
#0 133.7 Attempting uninstall: wrapt
#0 133.7 Found existing installation: wrapt 1.15.0
#0 133.8 Uninstalling wrapt-1.15.0:
#0 133.8 Successfully uninstalled wrapt-1.15.0
#0 133.9 Attempting uninstall: typing-extensions
#0 133.9 Found existing installation: typing_extensions 4.6.2
#0 133.9 Uninstalling typing_extensions-4.6.2:
#0 134.6 Successfully uninstalled typing_extensions-4.6.2
#0 134.7 Attempting uninstall: termcolor
#0 134.7 Found existing installation: termcolor 2.3.0
#0 134.7 Uninstalling termcolor-2.3.0:
#0 134.7 Successfully uninstalled termcolor-2.3.0
#0 134.8 Attempting uninstall: tensorflow-estimator
#0 134.8 Found existing installation: tensorflow-estimator 2.11.0
#0 134.9 Uninstalling tensorflow-estimator-2.11.0:
#0 135.1 Successfully uninstalled tensorflow-estimator-2.11.0
#0 138.2 Attempting uninstall: flatbuffers
#0 138.2 Found existing installation: flatbuffers 23.5.26
#0 138.3 Uninstalling flatbuffers-23.5.26:
#0 138.3 Successfully uninstalled flatbuffers-23.5.26
#0 138.4 Attempting uninstall: six
#0 138.4 Found existing installation: six 1.16.0
#0 138.4 Uninstalling six-1.16.0:
#0 139.1 Successfully uninstalled six-1.16.0
#0 139.2 Attempting uninstall: numpy
#0 139.2 Found existing installation: numpy 1.21.6
#0 139.8 Uninstalling numpy-1.21.6:
#0 140.8 Successfully uninstalled numpy-1.21.6
#0 145.4 Attempting uninstall: h5py
#0 145.4 Found existing installation: h5py 3.8.0
#0 145.5 Uninstalling h5py-3.8.0:
#0 145.7 Successfully uninstalled h5py-3.8.0
#0 146.2 Attempting uninstall: grpcio
#0 146.3 Found existing installation: grpcio 1.54.2
#0 146.3 Uninstalling grpcio-1.54.2:
#0 146.5 Successfully uninstalled grpcio-1.54.2
#0 146.9 Attempting uninstall: absl-py
#0 146.9 Found existing installation: absl-py 1.4.0
#0 147.0 Uninstalling absl-py-1.4.0:
#0 147.0 Successfully uninstalled absl-py-1.4.0
#0 148.2 Attempting uninstall: tensorflow
#0 148.2 Found existing installation: tensorflow 2.11.0
#0 153.2 Uninstalling tensorflow-2.11.0:
#0 172.9 Successfully uninstalled tensorflow-2.11.0
#0 219.5 Successfully installed absl-py-0.15.0 flatbuffers-1.12 grpcio-1.34.1 h5py-3.1.0 keras-nightly-2.5.0.dev2021032900 keras-preprocessing-1.1.2 numpy-1.19.5 six-1.15.0 tensorflow-2.5.0 tensorflow-estimator-2.5.0 termcolor-1.1.0 tf-slim-1.1.0 typing-extensions-3.7.4.3 wrapt-1.12.1
#0 219.5 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
#0 224.9 WARNING: You are using pip version 22.0.4; however, version 23.1.2 is available.
#0 224.9 You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
------
tpu.Dockerfile:42
--------------------
40 | COPY srcext srcext
41 | RUN pip --no-cache-dir install srcext/mtcnn_tflite/
42 | >>> RUN python -m src.services.facescan.plugins.setup
43 |
44 | # copy rest of the code
--------------------
ERROR: failed to solve: process "/bin/bash -c python -m src.services.facescan.plugins.setup" did not complete successfully: exit code: 132

❯ docker buildx build -t justsky/compreface:tpu . -f tpu.Dockerfile.full
[+] Building 87.1s (9/28)
=> [internal] load .dockerignore 0.1s
=> => transferring context: 48B 0.0s
=> [internal] load build definition from tpu.Dockerfile.full 0.1s
=> => transferring dockerfile: 2.63kB 0.0s
=> [internal] load metadata for docker.io/library/debian:buster-slim 3.9s
=> [ 1/24] FROM docker.io/library/debian:buster-slim@sha256:845d301da51ad74998165a70de4196fb4a66e08316c59a4b8237e81a99ad22a2 8.7s
=> => resolve docker.io/library/debian:buster-slim@sha256:845d301da51ad74998165a70de4196fb4a66e08316c59a4b8237e81a99ad22a2 0.0s
=> => sha256:845d301da51ad74998165a70de4196fb4a66e08316c59a4b8237e81a99ad22a2 984B / 984B 0.0s
=> => sha256:9d0fb5b9d5318bf507d4507fc846e36a55de7a1198bfc63cf12a2f7c99011efa 529B / 529B 0.0s
=> => sha256:4b589adf4404fb807a7225748174f3655b8e1b516e5ff5552f2d13083bafb4c1 1.46kB / 1.46kB 0.0s
=> => sha256:99bf4787315b60d97d860ac6d006b7835b2241a601e93c2da4af6ca554be8704 27.14MB / 27.14MB 3.6s
=> => extracting sha256:99bf4787315b60d97d860ac6d006b7835b2241a601e93c2da4af6ca554be8704 4.6s
=> [internal] load build context 0.0s
=> => transferring context: 9.88kB 0.0s
=> [ 2/24] RUN apt-get update 12.3s
=> [ 3/24] RUN apt-get install --no-install-recommends -y gnupg ca-certificates curl apt-utils apt-transport-https 22.7s
=> [ 4/24] RUN apt-get install -y curl python3.7 python3.7-dev python3.7-distutils 38.8s
=> ERROR [ 5/24] RUN update-alternatives --set python /usr/bin/python3.7 0.6s
------
> [ 5/24] RUN update-alternatives --set python /usr/bin/python3.7:
#0 0.475 update-alternatives: error: no alternatives for python
------
tpu.Dockerfile.full:16
--------------------
14 |
15 | # Set python 3 as the default python
16 | >>> RUN update-alternatives --set python /usr/bin/python3.7
17 |
18 | # Upgrade pip to latest version
--------------------
ERROR: failed to solve: process "/bin/sh -c update-alternatives --set python /usr/bin/python3.7" did not complete successfully: exit code: 2
Hi @pospielov, I have made progress with EFRS-1114 and managed to get detection working using my Coral USB Accelerator. There are a couple of things:
I've managed to get it working that way. However, I only started using CompreFace today and don't fully understand the architecture behind it yet. I'm using the single-container approach, and I think I can't just replace the calculator?
The single-container build uses the compreface-core image as a base and then adds everything else inside.
Incorporating Google Coral TPU support in CompreFace would be a game-changer. The Coral has already proven effective in Frigate and is known for its performance; integrating it with CompreFace would speed up face recognition processing and broaden CompreFace's appeal to a wider user base. Enthusiastically anticipating this feature! +1
Right, that much I understand. How do I make a "proper compreface-core" image? I've built and run the embedding-calculator with Coral support and I have a working container for that, but I'm afraid that without further guidance I'm unable to go any further.
compreface-core is the name for the embedding-calculator image.
Not yet, since I haven't gotten a reply. I'll try with your instructions.
To upload the image, you can use
Just wanted to confirm: is the Google Coral TPU M.2 working? Also, to clarify, since I'm using the M.2, it would expose /dev/apex_0 or make it available to the compreface-core image, right?
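As a rough sanity check from inside the compreface-core container (assuming the M.2/PCIe Coral is exposed by the host's gasket/apex driver as /dev/apex_* nodes), something like this can confirm the device node is actually visible:

import glob
import os

# PCIe/M.2 Corals normally show up as /dev/apex_0, /dev/apex_1, ... when the
# host driver is loaded and the node is mapped into the container; USB Corals
# appear on the USB bus instead.
apex_nodes = glob.glob("/dev/apex_*")
print("Edge TPU device nodes:", apex_nodes or "none")
print("accessible:", [p for p in apex_nodes if os.access(p, os.R_OK | os.W_OK)])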
Very nice, a killer feature two years in the making. Do you have a working TPU Docker image for beta testing?
Check out here.
Did this work for you with CompreFace? Currently I'm using a Coral TPU passed through via LXC > Docker > Frigate. I also have Intel hardware acceleration working in Frigate, but my Coral is connected via USB. If the Coral works seamlessly with Frigate, is there any reason to believe, given what you've demonstrated here, that I couldn't use the same device callouts in my docker-compose for CompreFace, Deepstack, and CodeProject so they can also access the Coral?
Offload the ML part to the Google Coral accelerator
If the Google Coral USB accelerator (Google Edge TPU) could be used for CompreFace, it would offload the ML part.
It could possibly also run on ARM, as there are Docker images (for ARM too) that support the Google Edge TPU.
I got it running for another project (https://github.com/matiasdelellis/facerecognition) as a simple solution to provide a version with modularization (matiasdelellis/facerecognition#210) in mind, to make it run on e.g. a Raspberry Pi 4.
I think this project could be used the way I did with my simple Docker container. The Nextcloud facerecognition app could use CompreFace as an engine for the ML part, and with ML accelerators (like Google Coral or others) it would offer enough ML power to be usable on "home Nextcloud clouds" in your closet 😉.