
[WIP] Add initial GPU support #4

Open · wants to merge 20 commits into master
Conversation

edurenye

This is a work in progress.
I think it is working for whisper, but I'm not sure how to check that.
For piper I am getting the error unrecognized arguments: --cuda, even though I followed the instructions from https://github.com/rhasspy/piper, which say at the end that it should work after simply installing onnxruntime-gpu and running piper with the --cuda argument.

What am I missing?

I guess this will conflict with those who just want to use the CPU. How can we handle that? By making different images, e.g. piper and piper-gpu? (See the sketch below.)
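For illustration only, a minimal sketch of how the two variants could sit side by side in a compose file; the -gpu image tag is hypothetical here:

```yaml
services:
  piper:
    image: rhasspy/wyoming-piper:latest       # existing CPU-only image
  piper-gpu:
    image: rhasspy/wyoming-piper:latest-gpu   # hypothetical GPU-enabled tag
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia                  # requires the NVIDIA Container Toolkit on the host
              count: 1
              capabilities: [gpu]
```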

@edurenye (Author) commented Aug 24, 2023

Closes #3

@DBaker85 commented Sep 12, 2023

Just wanted to leave my 2 cents here:
I tried your whisper changes locally and they are working perfectly on my 1080 Ti with Docker.
VRAM is assigned and the container works as well. Home Assistant also recognised and used it perfectly.
Nice one!

(Did not try Piper)

@edurenye (Author)

Piper does not work because of this: rhasspy/rhasspy3#49

@wdunn001 commented Oct 5, 2023

Whisper is still targeting Ubuntu 20.04; is there a reason for that?

@wdunn001 commented Oct 5, 2023

This may need to be its own image, since the majority of users would not want the CUDA version.

@wdunn001 commented Oct 5, 2023

Could this be split into two tickets, one for whisper and one for piper, if piper is experiencing issues? The whisper portion is in reality the more useful of the two and benefits more from this feature.

@edurenye (Author) commented Oct 6, 2023

@wdunn001 The documentation at https://github.com/guillaumekln/faster-whisper/ says it requires cuDNN 8 for CUDA 11, and for those versions of CUDA and cuDNN the highest available Ubuntu version is 20.04. I had to look into it because, sadly, it was not working with the image I set for the other containers.
And updating to CUDA 12 is not planned in the very short term. See an explanation here: SYSTRAN/faster-whisper#47 (comment).

@edurenye (Author) commented Oct 6, 2023

Sorry, editing because I misunderstood your comment.
Yes, it makes sense to make two different images; I can add that.

But I guess that for better maintainability the solution we add for one should be the same as for the others, which is why I think it is better to keep the conversation in a single issue and PR.
If you need to use it right now, you can just add the changes to your local Dockerfile and build it.
Or if you need to use CUDA 12, you could try the workarounds discussed here: SYSTRAN/faster-whisper#153 (comment)

@edurenye (Author) commented Oct 6, 2023

And I'll try to add porcupine1 too.

@wdunn001 commented Oct 6, 2023

Awesome! I am happy to help if you need anything. Would we want to add the docker arguments for the CUDA image to the documentation here?
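For instance, a documented invocation might look something like this (a sketch only: the -gpu image tag is hypothetical, and --gpus all assumes the NVIDIA Container Toolkit is installed on the host):

```sh
docker run -it --gpus all -p 10300:10300 \
  -v /path/to/local/data:/data \
  rhasspy/wyoming-whisper:latest-gpu \
  --model tiny-int8 --language en --device cuda
```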

@edurenye (Author) commented Oct 6, 2023

I added the changes.
I have not tested the new porcupine1 container, since that software does not support my language yet.

And yes, of course we should document this. I was also thinking: should we add a docker-compose.yml file?
It made sense to me since I use Home Assistant and need the three services. But now that porcupine1 has been added I am not sure anymore, since as far as I know porcupine1 and openwakeword do the same thing, which is quite confusing to me.

@edurenye (Author) commented Oct 6, 2023

But right now the README.md only documents pulling the images, not building them, so that will depend on the tags the maintainer might want to use. Should we add build instructions to the README.md file?

@wdunn001 commented Oct 6, 2023

I think so, for sure; we can create a contributors section. I'll work on it. I will be building it for the first time this weekend, so I'll try to document the process.

@edurenye (Author) commented Oct 6, 2023

I will give you the docker-compose files and a starting point.

@edurenye (Author) commented Oct 6, 2023

I just added it; tell me how it works for you. You can create your own docker-compose.x.yml file for your use case.

I have not added porcupine1 to the docker compose because it uses the same port as openwakeword, so for that particular case it could be added in the custom extend file (see the sketch below).
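A minimal sketch of such an extend file, under the assumption that both services default to port 10400 (file name, image tag, and port mapping are illustrative):

```yaml
# docker-compose.porcupine1.yml -- hypothetical custom extend file
services:
  porcupine1:
    image: rhasspy/wyoming-porcupine1
    ports:
      - "10401:10400"   # remap the host port to avoid clashing with openwakeword
```

It would then be layered on top of the base file, e.g. docker compose -f docker-compose.yml -f docker-compose.porcupine1.yml up -d.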

@wdunn001 commented Oct 8, 2023

OK, so I am getting an error deploying this via compose or run:

```
usage: main.py [-h] --model {tiny,tiny-int8,base,base-int8,small,small-int8,medium,medium-int8} --uri URI --data-dir DATA_DIR [--download-dir DOWNLOAD_DIR] [--device DEVICE] [--language LANGUAGE] [--compute-type COMPUTE_TYPE] [--beam-size BEAM_SIZE] [--debug]
main.py: error: the following arguments are required: --model, --uri, --data-dir
/run.sh: line 3: --uri: command not found
/run.sh: line 4: --data-dir: command not found
/run.sh: line 5: --download-dir: command not found
```

It needs additional params, in contrast with the other build.

These appear to be supplied by the run.sh file, and I see it is called in the Dockerfile.

I added commands to the GPU compose file identical to those in the non-GPU version, they work fine, and I made a PR. It is only the ones in run.sh that seem to not work.

I am on Ubuntu 22.04 with the latest Docker, if that matters.

@edurenye (Author) commented Oct 9, 2023

This is weird; according to the documentation, the only things that should not be extended are volumes_from and depends_on. We can follow this discussion in the PR that you created: edurenye#1
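For reference, a minimal sketch of the extends mechanism under discussion (file and service names are illustrative):

```yaml
# docker-compose.custom.yml -- hypothetical file extending a base service
services:
  whisper:
    extends:
      file: docker-compose.base.yml   # hypothetical base file
      service: whisper
    command: --model tiny-int8 --language en   # single-value keys like command override the base
```

Per the Compose documentation, everything is shared except volumes_from and depends_on; single-value options replace the base value, while multi-value options such as ports are merged.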

@AnkushMalaker

I needed to add --device cuda to actually load the whisper model onto my GPU. I second that we could split this into different branches to handle GPU for whisper, piper, and wakeword. I made a branch for that; not sure if I should raise it as a PR.

  • Removed --cuda for piper, as that isn't working upstream yet.
  • Changed the default data directories to /var/data to be consistent with some other docker compose files I saw.

New to contributing, happy to hear thoughts.

https://github.com/AnkushMalaker/wyoming-addons/tree/gpu

@edurenye (Author)

I rebased with the latest changes from master and fixed the typos in the README file.

I don't think we need to create another branch; in the meantime you can just have an extend file where you use the GPU options for whisper and openwakeword and the non-GPU one for piper.

And regarding /var/data, I am generally against storing user data in a system folder. Also, passing the whole folder to the Docker container might pull in a lot of unneeded data from other applications.

@wdunn001

@edurenye agreed, using the CPU for piper seems to be more than sufficient. I am still experiencing issues with openwakeword, but it may just be my environment. I'll pull down the changes here and try again, and I'll push any fixes I find to the PR on your branch.

@@ -0,0 +1,35 @@
FROM nvidia/cuda:12.1.1-cudnn8-runtime-ubuntu22.04


Perhaps we remove this file in the interim to get rid of dead code?

@edurenye (Author)

I do not see it as dead code; when this issue gets fixed, it should just work right away.


ok sounds good

@@ -0,0 +1,32 @@
FROM nvidia/cuda:12.1.1-cudnn8-runtime-ubuntu22.04
@wdunn001 commented Oct 18, 2023

Remove it to get rid of dead code?

@edurenye (Author)

I do not see it as dead code either; people who want to use it can do so by extending the docker compose, or use it directly with docker run as documented here: https://github.com/rhasspy/wyoming-porcupine1/blob/master/README.md, just adding the CUDA options.


sounds good

.gitignore Outdated
@@ -0,0 +1,12 @@
# OpenWakeWord


Perhaps we could reference managed volumes instead, to prevent this? I.e.:

```yaml
volumes:
  openwakeword-data:
  whisper-data:
  piper-data:
```

This is what I did in my version.
We could also add a -gpu suffix for volumes connected to GPU-enabled instances in the GPU compose file, so that we can keep data separate between instance types.

@edurenye (Author)

Do you mean non-bind mounts? But then adding custom models (thinking mainly about OpenWakeWord here) is hard; with bind mounts you can just move the model into that directory. Also, I don't think there will be a case where you want to move from GPU to non-GPU while keeping the models, but I may well be wrong there.

@edurenye (Author)

I think I agree with you here; probably the best approach is to not bind them by default, and then you can bind them by extending the docker compose and pointing it wherever you have the custom model (see the sketch below).

Or maybe we could look at passing it as a parameter; I haven't looked into it. I'm actually still fighting to generate the custom model.
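A minimal sketch of what that extend could look like (paths and file name are illustrative; the --custom-model-dir flag is documented in the wyoming-openwakeword README, but verify it for your version):

```yaml
# docker-compose.custom-model.yml -- hypothetical override file
services:
  openwakeword:
    volumes:
      - /path/to/my/models:/custom     # bind mount holding the custom model
    command: --custom-model-dir /custom
```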

```
docker compose down
```

### Run with GPU


Should we reference documentation on how to set up Docker for GPU? (I can of course add it in a separate PR.)

@edurenye (Author)

Yes, good idea!
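For reference, the usual host-side setup on Debian/Ubuntu is the standard NVIDIA Container Toolkit procedure (assuming NVIDIA's apt repository is already configured):

```sh
# Install the toolkit, register the nvidia runtime with Docker, and restart the daemon
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```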

@Maxcodesthings commented Oct 25, 2023

I have tried applying the contents of this PR to my local instance, but I do not see the faster-whisper implementation use the GPU over the CPU.

I have combined the dockerfiles as follows, focusing on using the GPU only for the whisper container:

```yaml
  whisper:
    container_name: whisper
    build:
      context: /opt/wyoming-addons/whisper/
      dockerfile: GPU.Dockerfile
    # image: rhasspy/wyoming-whisper:latest
    restart: unless-stopped
    ports:
      - 10300:10300
    volumes:
      - /opt/homeassistant/whisper:/data
    command: 
      - --model
      - medium-int8
      - --language
      - en
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

I can tell my GPU is passed through because it appears in nvidia-smi on the container

However, when I watch the GPU while it processes my speech, the usage does not increase, whereas the CPU usage clearly spikes, so it is the CPU processing my speech.

How have you all tested that this implementation of faster-whisper is working? I would like to do the same on my machine.

Edit:

Found the issue!

You are missing --device in your compose

```yaml
command:
  - --model
  - small
  - --language
  - en
  - --device
  - cuda
```

@edurenye (Author)

Good finding! It was not documented, but that parameter exists in https://github.com/rhasspy/wyoming-faster-whisper/blob/master/wyoming_faster_whisper/__main__.py

@mreilaender

Can you resolve the conflicts? I would love to see the improvements from using the GPU directly :)

@mreilaender

It doesn't work with piper, since wyoming-piper doesn't declare the --cuda argument. I created a PR.

@spitfire commented Jan 8, 2025

@edurenye when I try to do docker compose -f docker-compose.gpu.yml up, it fails for me with:

```
[+] Running 0/0
 ⠋ Container spitfire-wyoming-addons-gpu-wyoming-whisper-1       Creating                           0.0s
 ⠋ Container spitfire-wyoming-addons-gpu-wyoming-piper-1         Creating                           0.0s
 ⠋ Container spitfire-wyoming-addons-gpu-wyoming-openwakeword-1  Creating                           0.0s
Error response from daemon: unknown or invalid runtime name: nvidia
```

When I comment out runtime: nvidia it works fine, and I can see a process for whisper showing up in nvidia-smi.

> I've created a whisper-gpu image with WYOMING_WHISPER_VERSION='2.4.0' and it works as expected

Where did you add that parameter?

> When I find time, I'll try to update this PR with the latest code, use that new base image and use the --use-cuda flag for piper mentioned here: rhasspy/wyoming-piper#5

Did you add it in your new commits? I can't see it.
Piper does not create any processes visible through nvidia-smi. It also errors out when I try to invoke it from HA:

```
wyoming-piper-1         | INFO:wyoming_piper.download:Downloaded /data/pl_PL-darkman-medium.onnx (https://huggingface.co/rhasspy/piper-voices/resolve/v1.0.0/pl/pl_PL/darkman/medium/pl_PL-darkman-medium.onnx)
wyoming-piper-1         | INFO:wyoming_piper.download:Downloaded /data/pl_PL-darkman-medium.onnx.json (https://huggingface.co/rhasspy/piper-voices/resolve/v1.0.0/pl/pl_PL/darkman/medium/pl_PL-darkman-medium.onnx.json)
wyoming-piper-1         | INFO:__main__:Ready


wyoming-piper-1         | ERROR:asyncio:Task exception was never retrieved
wyoming-piper-1         | future: <Task finished name='wyoming event handler' coro=<AsyncEventHandler.run() done, defined at /usr/local/lib/python3.8/dist-packages/wyoming/server.py:31> exception=FileNotFoundError(2, 'No such file or directory')>
wyoming-piper-1         | Traceback (most recent call last):
wyoming-piper-1         |   File "/usr/local/lib/python3.8/dist-packages/wyoming/server.py", line 41, in run
wyoming-piper-1         |     if not (await self.handle_event(event)):
wyoming-piper-1         |   File "/usr/local/lib/python3.8/dist-packages/wyoming_piper/handler.py", line 53, in handle_event
wyoming-piper-1         |     raise err
wyoming-piper-1         |   File "/usr/local/lib/python3.8/dist-packages/wyoming_piper/handler.py", line 48, in handle_event
wyoming-piper-1         |     return await self._handle_event(event)
wyoming-piper-1         |   File "/usr/local/lib/python3.8/dist-packages/wyoming_piper/handler.py", line 108, in _handle_event
wyoming-piper-1         |     wav_file: wave.Wave_read = wave.open(output_path, "rb")
wyoming-piper-1         |   File "/usr/lib/python3.8/wave.py", line 510, in open
wyoming-piper-1         |     return Wave_read(f)
wyoming-piper-1         |   File "/usr/lib/python3.8/wave.py", line 160, in __init__
wyoming-piper-1         |     f = builtins.open(f, 'rb')
wyoming-piper-1         | FileNotFoundError: [Errno 2] No such file or directory: ''
```

@edurenye (Author) commented Jan 8, 2025

@spitfire There are problems with piper that I'm trying to sort out right now, but Error response from daemon: unknown or invalid runtime name: nvidia is a problem with your environment; it means nvidia-container-toolkit is not installed or not configured properly.

@spitfire commented Jan 8, 2025

> @spitfire There are problems with piper that I'm trying to sort out right now, but Error response from daemon: unknown or invalid runtime name: nvidia is a problem with your environment; it means nvidia-container-toolkit is not installed or not configured properly.

A quick nvidia-ctk runtime configure --runtime=docker fixed that, but somehow whisper had been working without it all this time ;)

@spitfire commented Jan 8, 2025

Whisper fails for me with the image you've specified:

```
wyoming-whisper-1       | INFO:faster_whisper:Processing audio with duration 00:05.440
wyoming-whisper-1       | Unable to load any of {libcudnn_ops.so.9.1.0, libcudnn_ops.so.9.1, libcudnn_ops.so.9, libcudnn_ops.so}
wyoming-whisper-1       | Invalid handle. Cannot load symbol cudnnCreateTensorDescriptor
wyoming-whisper-1       | /run.sh: line 5:    13 Aborted                 (core dumped) python3 -m wyoming_faster_whisper --uri 'tcp://0.0.0.0:10300' --data-dir /data --download-dir /data "$@"
wyoming-whisper-1 exited with code 0



wyoming-whisper-1       | INFO:__main__:Ready
wyoming-whisper-1       | INFO:faster_whisper:Processing audio with duration 00:04.750
wyoming-whisper-1       | Unable to load any of {libcudnn_ops.so.9.1.0, libcudnn_ops.so.9.1, libcudnn_ops.so.9, libcudnn_ops.so}
wyoming-whisper-1       | Invalid handle. Cannot load symbol cudnnCreateTensorDescriptor
wyoming-whisper-1       | /run.sh: line 5:    13 Aborted                 (core dumped) python3 -m wyoming_faster_whisper --uri 'tcp://0.0.0.0:10300' --data-dir /data --download-dir /data "$@"
wyoming-whisper-1 exited with code 0
```

It works with:

```yaml
services:
  wyoming-whisper:
    build:
      args:
        - BASE=nvidia/cuda:12.3.2-cudnn9-runtime-ubuntu22.04
```

@edurenye (Author) commented Jan 8, 2025

With the updates I made, Whisper should work now, but piper is still getting this error: rhasspy/wyoming#9

@edurenye (Author) commented Jan 8, 2025

At least piper is really trying to use CUDA now.

@edurenye (Author) commented Jan 8, 2025

Also, I tested OpenWakeWord and I'm getting the following error:

```
wyoming-openwakeword-1  | /usr/src/.venv/lib/python3.10/site-packages/tflite_runtime/interpreter.py:452: UserWarning:     Warning: tf.lite.Interpreter is deprecated and is scheduled for deletion in
wyoming-openwakeword-1  |     TF 2.20. Please use the LiteRT interpreter from the ai_edge_litert package.
wyoming-openwakeword-1  |     See the [migration guide](https://ai.google.dev/edge/litert/migration)
wyoming-openwakeword-1  |     for details.
wyoming-openwakeword-1  |     
wyoming-openwakeword-1  |   warnings.warn(_INTERPRETER_DELETION_WARNING)
wyoming-openwakeword-1  | INFO:root:Ready
wyoming-openwakeword-1  | 
wyoming-openwakeword-1  | A module that was compiled using NumPy 1.x cannot be run in
wyoming-openwakeword-1  | NumPy 2.2.1 as it may crash. To support both 1.x and 2.x
wyoming-openwakeword-1  | versions of NumPy, modules must be compiled with NumPy 2.0.
wyoming-openwakeword-1  | Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
wyoming-openwakeword-1  | 
wyoming-openwakeword-1  | If you are a user of the module, the easiest solution will be to
wyoming-openwakeword-1  | downgrade to 'numpy<2' or try to upgrade the affected module.
wyoming-openwakeword-1  | We expect that some modules will need time to support NumPy 2.
wyoming-openwakeword-1  | 
wyoming-openwakeword-1  | Traceback (most recent call last):  File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
wyoming-openwakeword-1  |     self._bootstrap_inner()
wyoming-openwakeword-1  |   File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
wyoming-openwakeword-1  |     self.run()
wyoming-openwakeword-1  |   File "/usr/lib/python3.10/threading.py", line 953, in run
wyoming-openwakeword-1  |     self._target(*self._args, **self._kwargs)
wyoming-openwakeword-1  |   File "/usr/src/.venv/lib/python3.10/site-packages/wyoming_openwakeword/openwakeword.py", line 248, in ww_proc
wyoming-openwakeword-1  |     ww_model = tflite.Interpreter(model_path=str(ww_model_path), num_threads=1)
wyoming-openwakeword-1  |   File "/usr/src/.venv/lib/python3.10/site-packages/tflite_runtime/interpreter.py", line 485, in __init__
wyoming-openwakeword-1  |     self._interpreter = _interpreter_wrapper.CreateWrapperFromFile(
wyoming-openwakeword-1  | AttributeError: _ARRAY_API not found
wyoming-openwakeword-1  | 
wyoming-openwakeword-1  | A module that was compiled using NumPy 1.x cannot be run in
wyoming-openwakeword-1  | NumPy 2.2.1 as it may crash. To support both 1.x and 2.x
wyoming-openwakeword-1  | versions of NumPy, modules must be compiled with NumPy 2.0.
wyoming-openwakeword-1  | Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
wyoming-openwakeword-1  | 
wyoming-openwakeword-1  | If you are a user of the module, the easiest solution will be to
wyoming-openwakeword-1  | downgrade to 'numpy<2' or try to upgrade the affected module.
wyoming-openwakeword-1  | We expect that some modules will need time to support NumPy 2.
wyoming-openwakeword-1  | 
wyoming-openwakeword-1  | Traceback (most recent call last):  File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
wyoming-openwakeword-1  |     self._bootstrap_inner()
wyoming-openwakeword-1  |   File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
wyoming-openwakeword-1  |     self.run()
wyoming-openwakeword-1  |   File "/usr/lib/python3.10/threading.py", line 953, in run
wyoming-openwakeword-1  |     self._target(*self._args, **self._kwargs)
wyoming-openwakeword-1  |   File "/usr/src/.venv/lib/python3.10/site-packages/wyoming_openwakeword/openwakeword.py", line 35, in mels_proc
wyoming-openwakeword-1  |     melspec_model = tflite.Interpreter(
wyoming-openwakeword-1  |   File "/usr/src/.venv/lib/python3.10/site-packages/tflite_runtime/interpreter.py", line 485, in __init__
wyoming-openwakeword-1  |     self._interpreter = _interpreter_wrapper.CreateWrapperFromFile(
wyoming-openwakeword-1  | AttributeError: _ARRAY_API not found
wyoming-openwakeword-1  | ERROR:root:Unexpected error in wake word thread (hey_jarvis_v0.1)
wyoming-openwakeword-1  | ImportError: numpy.core.multiarray failed to import
wyoming-openwakeword-1  | 
wyoming-openwakeword-1  | The above exception was the direct cause of the following exception:
wyoming-openwakeword-1  | 
wyoming-openwakeword-1  | Traceback (most recent call last):
wyoming-openwakeword-1  |   File "/usr/src/.venv/lib/python3.10/site-packages/wyoming_openwakeword/openwakeword.py", line 248, in ww_proc
wyoming-openwakeword-1  |     ww_model = tflite.Interpreter(model_path=str(ww_model_path), num_threads=1)
wyoming-openwakeword-1  |   File "/usr/src/.venv/lib/python3.10/site-packages/tflite_runtime/interpreter.py", line 485, in __init__
wyoming-openwakeword-1  |     self._interpreter = _interpreter_wrapper.CreateWrapperFromFile(
wyoming-openwakeword-1  | SystemError: <built-in method CreateWrapperFromFile of PyCapsule object at 0x723410a19680> returned a result with an exception set
wyoming-openwakeword-1  | 
wyoming-openwakeword-1  | A module that was compiled using NumPy 1.x cannot be run in
wyoming-openwakeword-1  | NumPy 2.2.1 as it may crash. To support both 1.x and 2.x
wyoming-openwakeword-1  | versions of NumPy, modules must be compiled with NumPy 2.0.
wyoming-openwakeword-1  | Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
wyoming-openwakeword-1  | 
wyoming-openwakeword-1  | If you are a user of the module, the easiest solution will be to
wyoming-openwakeword-1  | downgrade to 'numpy<2' or try to upgrade the affected module.
wyoming-openwakeword-1  | We expect that some modules will need time to support NumPy 2.
wyoming-openwakeword-1  | 
wyoming-openwakeword-1  | Traceback (most recent call last):  File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
wyoming-openwakeword-1  |     self._bootstrap_inner()
wyoming-openwakeword-1  |   File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
wyoming-openwakeword-1  |     self.run()
wyoming-openwakeword-1  |   File "/usr/lib/python3.10/threading.py", line 953, in run
wyoming-openwakeword-1  |     self._target(*self._args, **self._kwargs)
wyoming-openwakeword-1  |   File "/usr/src/.venv/lib/python3.10/site-packages/wyoming_openwakeword/openwakeword.py", line 135, in embeddings_proc
wyoming-openwakeword-1  |     embedding_model = tflite.Interpreter(
wyoming-openwakeword-1  |   File "/usr/src/.venv/lib/python3.10/site-packages/tflite_runtime/interpreter.py", line 485, in __init__
wyoming-openwakeword-1  |     self._interpreter = _interpreter_wrapper.CreateWrapperFromFile(
wyoming-openwakeword-1  | AttributeError: _ARRAY_API not found
wyoming-openwakeword-1  | ERROR:root:Unexpected error in mels thread
wyoming-openwakeword-1  | ImportError: numpy.core.multiarray failed to import
wyoming-openwakeword-1  | 
wyoming-openwakeword-1  | The above exception was the direct cause of the following exception:
wyoming-openwakeword-1  | 
wyoming-openwakeword-1  | Traceback (most recent call last):
wyoming-openwakeword-1  |   File "/usr/src/.venv/lib/python3.10/site-packages/wyoming_openwakeword/openwakeword.py", line 35, in mels_proc
wyoming-openwakeword-1  |     melspec_model = tflite.Interpreter(
wyoming-openwakeword-1  |   File "/usr/src/.venv/lib/python3.10/site-packages/tflite_runtime/interpreter.py", line 485, in __init__
wyoming-openwakeword-1  |     self._interpreter = _interpreter_wrapper.CreateWrapperFromFile(
wyoming-openwakeword-1  | SystemError: <built-in method CreateWrapperFromFile of PyCapsule object at 0x723410a19680> returned a result with an exception set
wyoming-openwakeword-1  | ERROR:root:Unexpected error in embeddings thread
wyoming-openwakeword-1  | ImportError: numpy.core.multiarray failed to import
wyoming-openwakeword-1  | 
wyoming-openwakeword-1  | The above exception was the direct cause of the following exception:
wyoming-openwakeword-1  | 
wyoming-openwakeword-1  | Traceback (most recent call last):
wyoming-openwakeword-1  |   File "/usr/src/.venv/lib/python3.10/site-packages/wyoming_openwakeword/openwakeword.py", line 135, in embeddings_proc
wyoming-openwakeword-1  |     embedding_model = tflite.Interpreter(
wyoming-openwakeword-1  |   File "/usr/src/.venv/lib/python3.10/site-packages/tflite_runtime/interpreter.py", line 485, in __init__
wyoming-openwakeword-1  |     self._interpreter = _interpreter_wrapper.CreateWrapperFromFile(
wyoming-openwakeword-1  | SystemError: <built-in method CreateWrapperFromFile of PyCapsule object at 0x723410a19680> returned a result with an exception set
```

Judging by the number of issues I see in the repository, https://github.com/rhasspy/wyoming-openwakeword seems quite broken. So I'll just go back to using snowboy, since that works fine for me.

@spitfire commented Jan 8, 2025

> Also, I tested OpenWakeWord and I'm getting the following error:

Same here. I didn't mention it since it was quite obviously broken, and I use wake word locally on my satellites.

> Judging by the number of issues I see in the repository, https://github.com/rhasspy/wyoming-openwakeword seems quite broken. So I'll just go back to using snowboy, since that works fine for me.

I don't even know what half of these tools do, TBH.

@AnkushMalaker

Can you guys try out https://github.com/AnkushMalaker/wyoming-openwakeword/tree/master? It's PR #39 on https://github.com/rhasspy/wyoming-openwakeword

@Rudd-O commented Jan 10, 2025

Can this be merged and a container release published? I want to use my GPU with the Whisper container because it's otherwise unbearably slow.

@spitfire commented Jan 10, 2025

> Can this be merged and a container release published? I want to use my GPU with the Whisper container because it's otherwise unbearably slow.

You can already run it from the repo https://github.com/edurenye/wyoming-addons-gpu/tree/gpu using docker-compose, just like I am doing.

@Rudd-O commented Jan 10, 2025

Thanks for the prompt response.

I'm not using Docker. I'm using Podman.

How do I run this directly as a podman run thing, without building anything? I don't want to build this on my production machine. Is there a container image published somewhere?

@spitfire commented Jan 10, 2025

> Thanks for the prompt response.
>
> I'm not using Docker. I'm using Podman.
>
> How do I run this directly as a podman run thing, without building anything? I don't want to build this on my production machine. Is there a container image published somewhere?

You can run it using docker compose -f docker-compose.gpu.yml up -d from inside a directory with the cloned git repo (it has to be on the gpu branch).

@Rudd-O commented Jan 10, 2025

Correct, I can do that. That is exactly what I don't want to do: clone some repo on my production computer and run a docker build or equivalent.

I was hoping there would be a container image I could test, but if there isn't, then I guess I'll have to wait until all of this is merged to master and released as a container image.

@spitfire

> Correct, I can do that. That is exactly what I don't want to do: clone some repo on my production computer and run a docker build or equivalent.
>
> I was hoping there would be a container image I could test, but if there isn't, then I guess I'll have to wait until all of this is merged to master and released as a container image.

It's exactly as safe as running a pre-built container. You can review everything before you run it, and nothing is installed on the host system. Just like with pre-built images, Docker downloads a base image and overlays or adds things inside it.

@Rudd-O commented Jan 10, 2025

I found a way to run the build process on my build server using Podman. Thanks for your kind help.

@Rudd-O commented Jan 10, 2025

Didn't work.

```
Jan 10 20:56:32 roxanne.dragonfear wyoming-whisper[1709404]: RuntimeError: CUDA failed with error CUDA driver version is insufficient for CUDA runtime version
```

I'm using the latest NVIDIA driver DKMS on Fedora. Not sure what could be going on here.

```
[root@roxanne ~]# nvidia-smi --version
NVIDIA-SMI version  : 565.77
NVML version        : 565.77
DRIVER version      : 565.77
CUDA Version        : 12.7
```

In container:

```
ollama@roxanne:/$ ls -la /usr/local/cuda-12/targets/x86_64-linux/lib
total 1409466
drwxr-xr-x. 1 root root        62 Jan 10 20:54 .
drwxr-xr-x. 1 root root         4 Feb 27  2024 ..
lrwxrwxrwx. 1 root root        16 Oct 31  2023 libOpenCL.so.1 -> libOpenCL.so.1.0
lrwxrwxrwx. 1 root root        18 Oct 31  2023 libOpenCL.so.1.0 -> libOpenCL.so.1.0.0
-rw-r--r--. 1 root root     30856 Oct 31  2023 libOpenCL.so.1.0.0
lrwxrwxrwx. 1 root root        21 Oct 31  2023 libcublas.so.12 -> libcublas.so.12.3.4.1
-rw-r--r--. 1 root root 106679344 Oct 31  2023 libcublas.so.12.3.4.1
lrwxrwxrwx. 1 root root        23 Oct 31  2023 libcublasLt.so.12 -> libcublasLt.so.12.3.4.1
-rw-r--r--. 1 root root 518358624 Oct 31  2023 libcublasLt.so.12.3.4.1
lrwxrwxrwx. 1 root root        21 Oct 31  2023 libcudart.so.12 -> libcudart.so.12.3.101
-rw-r--r--. 1 root root    703808 Oct 31  2023 libcudart.so.12.3.101
lrwxrwxrwx. 1 root root        21 Oct 31  2023 libcufft.so.11 -> libcufft.so.11.0.12.1
-rw-r--r--. 1 root root 177827520 Oct 31  2023 libcufft.so.11.0.12.1
lrwxrwxrwx. 1 root root        22 Oct 31  2023 libcufftw.so.11 -> libcufftw.so.11.0.12.1
-rw-r--r--. 1 root root    966600 Oct 31  2023 libcufftw.so.11.0.12.1
lrwxrwxrwx. 1 root root        18 Oct 25  2023 libcufile.so.0 -> libcufile.so.1.8.1
-rw-r--r--. 1 root root   2993680 Oct 25  2023 libcufile.so.1.8.1
lrwxrwxrwx. 1 root root        23 Oct 25  2023 libcufile_rdma.so.1 -> libcufile_rdma.so.1.8.1
-rw-r--r--. 1 root root     43320 Oct 25  2023 libcufile_rdma.so.1.8.1
lrwxrwxrwx. 1 root root        23 Nov 22  2023 libcurand.so.10 -> libcurand.so.10.3.4.107
-rw-r--r--. 1 root root  96259504 Nov 22  2023 libcurand.so.10.3.4.107
lrwxrwxrwx. 1 root root        25 Oct 31  2023 libcusolver.so.11 -> libcusolver.so.11.5.4.101
-rw-r--r--. 1 root root 115640600 Oct 31  2023 libcusolver.so.11.5.4.101
lrwxrwxrwx. 1 root root        27 Oct 31  2023 libcusolverMg.so.11 -> libcusolverMg.so.11.5.4.101
-rw-r--r--. 1 root root  83040368 Oct 31  2023 libcusolverMg.so.11.5.4.101
lrwxrwxrwx. 1 root root        25 Oct 31  2023 libcusparse.so.12 -> libcusparse.so.12.2.0.103
-rw-r--r--. 1 root root 267184960 Oct 31  2023 libcusparse.so.12.2.0.103
lrwxrwxrwx. 1 root root        19 Oct 31  2023 libnppc.so.12 -> libnppc.so.12.2.3.2
-rw-r--r--. 1 root root   1642992 Oct 31  2023 libnppc.so.12.2.3.2
lrwxrwxrwx. 1 root root        21 Oct 31  2023 libnppial.so.12 -> libnppial.so.12.2.3.2
-rw-r--r--. 1 root root  17568560 Oct 31  2023 libnppial.so.12.2.3.2
lrwxrwxrwx. 1 root root        21 Oct 31  2023 libnppicc.so.12 -> libnppicc.so.12.2.3.2
-rw-r--r--. 1 root root   7500616 Oct 31  2023 libnppicc.so.12.2.3.2
lrwxrwxrwx. 1 root root        22 Oct 31  2023 libnppidei.so.12 -> libnppidei.so.12.2.3.2
-rw-r--r--. 1 root root  11134104 Oct 31  2023 libnppidei.so.12.2.3.2
lrwxrwxrwx. 1 root root        20 Oct 31  2023 libnppif.so.12 -> libnppif.so.12.2.3.2
-rw-r--r--. 1 root root 101066824 Oct 31  2023 libnppif.so.12.2.3.2
lrwxrwxrwx. 1 root root        20 Oct 31  2023 libnppig.so.12 -> libnppig.so.12.2.3.2
-rw-r--r--. 1 root root  41137040 Oct 31  2023 libnppig.so.12.2.3.2
lrwxrwxrwx. 1 root root        20 Oct 31  2023 libnppim.so.12 -> libnppim.so.12.2.3.2
-rw-r--r--. 1 root root  10322760 Oct 31  2023 libnppim.so.12.2.3.2
lrwxrwxrwx. 1 root root        21 Oct 31  2023 libnppist.so.12 -> libnppist.so.12.2.3.2
-rw-r--r--. 1 root root  38171728 Oct 31  2023 libnppist.so.12.2.3.2
lrwxrwxrwx. 1 root root        21 Oct 31  2023 libnppisu.so.12 -> libnppisu.so.12.2.3.2
-rw-r--r--. 1 root root    716168 Oct 31  2023 libnppisu.so.12.2.3.2
lrwxrwxrwx. 1 root root        21 Oct 31  2023 libnppitc.so.12 -> libnppitc.so.12.2.3.2
-rw-r--r--. 1 root root   5530224 Oct 31  2023 libnppitc.so.12.2.3.2
lrwxrwxrwx. 1 root root        19 Oct 31  2023 libnpps.so.12 -> libnpps.so.12.2.3.2
-rw-r--r--. 1 root root  18105592 Oct 31  2023 libnpps.so.12.2.3.2
lrwxrwxrwx. 1 root root        24 Oct 31  2023 libnvJitLink.so.12 -> libnvJitLink.so.12.3.101
-rw-r--r--. 1 root root  52190720 Oct 31  2023 libnvJitLink.so.12.3.101
lrwxrwxrwx. 1 root root        18 Oct 31  2023 libnvToolsExt.so -> libnvToolsExt.so.1
lrwxrwxrwx. 1 root root        22 Oct 31  2023 libnvToolsExt.so.1 -> libnvToolsExt.so.1.0.0
-rw-r--r--. 1 root root     40136 Oct 31  2023 libnvToolsExt.so.1.0.0
lrwxrwxrwx. 1 root root        21 Oct 31  2023 libnvblas.so.12 -> libnvblas.so.12.3.4.1
-rw-r--r--. 1 root root    728856 Oct 31  2023 libnvblas.so.12.3.4.1
lrwxrwxrwx. 1 root root        22 Oct 31  2023 libnvjpeg.so.12 -> libnvjpeg.so.12.3.0.81
-rw-r--r--. 1 root root   6705968 Oct 31  2023 libnvjpeg.so.12.3.0.81
lrwxrwxrwx. 1 root root        29 Nov 22  2023 libnvrtc-builtins.so.12.3 -> libnvrtc-builtins.so.12.3.107
-rw-r--r--. 1 root root   6662024 Nov 22  2023 libnvrtc-builtins.so.12.3.107
lrwxrwxrwx. 1 root root        20 Nov 22  2023 libnvrtc.so.12 -> libnvrtc.so.12.3.107
-rw-r--r--. 1 root root  60792048 Nov 22  2023 libnvrtc.so.12.3.107
```

@Rudd-O commented Jan 10, 2025

```
[ollama@roxanne ~]$ podman run -it --net=host --userns=keep-id -v /var/lib/wyoming/whisper:/data -v /var/lib/wyoming/whisper/.cache:/.cache --device=/dev/nvidia0 --device=/dev/nvidiactl --device=/dev/nvidia-uvm --device=/dev/nvidia-modeset --device=/dev/nvidia-uvm-tools docker.dragonfear:80/wyoming/whisper:gpu --model medium-int8 --language en --device cuda --debug
WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available 
WARN[0000] For using systemd, you may need to log in using a user session 
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 955` (possibly as root) 
WARN[0000] Falling back to --cgroup-manager=cgroupfs    
WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available 
WARN[0000] For using systemd, you may need to log in using a user session 
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 955` (possibly as root) 
WARN[0000] Falling back to --cgroup-manager=cgroupfs    
DEBUG:__main__:Namespace(model='medium-int8', uri='tcp://0.0.0.0:10300', data_dir=['/data'], download_dir='/data', device='cuda', language='en', compute_type='default', beam_size=5, initial_prompt=None, debug=True, log_format='%(levelname)s:%(name)s:%(message)s')
DEBUG:__main__:Loading rhasspy/faster-whisper-medium-int8
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "GET /api/models/rhasspy/faster-whisper-medium-int8/revision/main HTTP/1.1" 200 722
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.10/dist-packages/wyoming_faster_whisper/__main__.py", line 169, in <module>
    run()
  File "/usr/local/lib/python3.10/dist-packages/wyoming_faster_whisper/__main__.py", line 164, in run
    asyncio.run(main())
  File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.10/dist-packages/wyoming_faster_whisper/__main__.py", line 138, in main
    whisper_model = faster_whisper.WhisperModel(
  File "/usr/local/lib/python3.10/dist-packages/faster_whisper/transcribe.py", line 634, in __init__
    self.model = ctranslate2.models.Whisper(
RuntimeError: CUDA failed with error CUDA driver version is insufficient for CUDA runtime version
```

@spitfire

> Didn't work. […]

Did you install the Docker NVIDIA Container Toolkit?

@Rudd-O commented Jan 10, 2025

Nope; I have installed it now, and it works.

Thanks for the tip.

@Rudd-O commented Jan 10, 2025

For the curious, how I solved my problem:

```sh
dnf install -y golang-github-nvidia-container-toolkit
mkdir /etc/cdi
nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml
```

Then, to run the container:

```sh
podman run --pull=newer -it --net=host --userns=keep-id -v /var/lib/wyoming/whisper:/data -v /var/lib/wyoming/whisper/.cache:/.cache --gpus=all <THE CONTAINER IMAGE AND TAG> --model medium-int8 --language en --device cuda --debug
```

@Rudd-O commented Jan 10, 2025

WOW.

The difference is overwhelming and totally worth it: from 7 seconds for a 15-word sentence to essentially no wait at all. It makes the Voice Assistant Preview Edition faster than Google Home or Amazon Alexa.

My self-built Whisper container uses the Ubuntu 24.04 base. I recommend basing the container on that version instead of the ancient 22.04 it currently uses (see the sketch below).
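For example, reusing the BASE build argument shown earlier in this thread; the exact 24.04 CUDA tag below is an assumption and should be checked against Docker Hub:

```yaml
services:
  wyoming-whisper:
    build:
      args:
        - BASE=nvidia/cuda:12.6.3-cudnn-runtime-ubuntu24.04   # assumed tag; verify it exists
```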
