-- [x] Linux / [FreeBSD](https://github.com/ggerganov/whisper.cpp/issues/56#issuecomment-1350920264)
+- [x] Linux / [FreeBSD](https://github.com/ggml-org/whisper.cpp/issues/56#issuecomment-1350920264)
- [x] [WebAssembly](examples/whisper.wasm)
-- [x] Windows ([MSVC](https://github.com/ggerganov/whisper.cpp/blob/master/.github/workflows/build.yml#L117-L144) and [MinGW](https://github.com/ggerganov/whisper.cpp/issues/168))
+- [x] Windows ([MSVC](https://github.com/ggml-org/whisper.cpp/blob/master/.github/workflows/build.yml#L117-L144) and [MinGW](https://github.com/ggml-org/whisper.cpp/issues/168))
@@ -222,7 +222,7 @@ speed-up - more than x3 faster compared with CPU-only execution. Here are the in
The first run on a device is slow, since the ANE service compiles the Core ML model to some device-specific format.
Next runs are faster.

-For more information about the Core ML implementation please refer to PR [#566](https://github.com/ggerganov/whisper.cpp/pull/566).
+For more information about the Core ML implementation please refer to PR [#566](https://github.com/ggml-org/whisper.cpp/pull/566).
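For orientation, the Core ML flow looks roughly like the sketch below. The script name, CMake flag, and `base.en` model choice follow the upstream docs, but exact names and flags may differ between versions:

```bash
# generate a Core ML encoder for a given model (macOS only);
# "base.en" is just an example
./models/generate-coreml-model.sh base.en

# rebuild with Core ML support
cmake -B build -DWHISPER_COREML=1
cmake --build build -j --config Release

# run as usual; the Core ML encoder (ggml-base.en-encoder.mlmodelc)
# is picked up automatically when it sits next to the ggml model
./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/jfk.wav
```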

## OpenVINO support

@@ -307,7 +307,7 @@ This can result in significant speedup in encoder performance. Here are the ins
The first run on an OpenVINO device is slow, since the OpenVINO framework compiles the IR (Intermediate Representation) model into a device-specific 'blob'. This blob is cached for subsequent runs.

-For more information about the OpenVINO implementation please refer to PR [#1037](https://github.com/ggerganov/whisper.cpp/pull/1037).
+For more information about the OpenVINO implementation please refer to PR [#1037](https://github.com/ggml-org/whisper.cpp/pull/1037).
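As a rough sketch of the OpenVINO flow (conversion script and CMake flag per the upstream docs; details may vary by version):

```bash
# convert the Whisper encoder to OpenVINO IR format (requires the
# OpenVINO pip packages); "base.en" is illustrative
python models/convert-whisper-to-openvino.py --model base.en

# rebuild with OpenVINO enabled and run; the IR encoder is loaded
# automatically when found next to the ggml model
cmake -B build -DWHISPER_OPENVINO=1
cmake --build build -j --config Release
./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/jfk.wav
```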

## NVIDIA GPU support

@@ -385,8 +385,8 @@ Run the inference examples as usual, for example:

We have two Docker images available for this project:

-1. `ghcr.io/ggerganov/whisper.cpp:main`: This image includes the main executable file as well as `curl` and `ffmpeg`. (platforms: `linux/amd64`, `linux/arm64`)
-2. `ghcr.io/ggerganov/whisper.cpp:main-cuda`: Same as `main` but compiled with CUDA support. (platforms: `linux/amd64`)
+1. `ghcr.io/ggml-org/whisper.cpp:main`: This image includes the main executable file as well as `curl` and `ffmpeg`. (platforms: `linux/amd64`, `linux/arm64`)
+2. `ghcr.io/ggml-org/whisper.cpp:main-cuda`: Same as `main` but compiled with CUDA support. (platforms: `linux/amd64`)
### Usage
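As a hypothetical invocation sketch: the mount points, model path, and the assumption that the image accepts a command string are all illustrative; check the image documentation for the exact entrypoint:

```bash
# transcribe a local sample with the CPU image from the list above;
# /models and /samples are arbitrary mount points chosen for this example
docker run -it --rm \
  -v "$PWD/models:/models" \
  -v "$PWD/samples:/samples" \
  ghcr.io/ggml-org/whisper.cpp:main \
  "whisper-cli -m /models/ggml-base.en.bin -f /samples/jfk.wav"
```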
@@ -424,8 +424,8 @@ For detailed instructions on how to use Conan, please refer to the [Conan docume

This is a naive example of performing real-time inference on audio from your microphone.
The [stream](examples/stream) tool samples the audio every half a second and runs the transcription continuously.
-More info is available in [issue #10](https://github.com/ggerganov/whisper.cpp/issues/10).
+More info is available in [issue #10](https://github.com/ggml-org/whisper.cpp/issues/10).
You will need to have [sdl2](https://wiki.libsdl.org/SDL2/Installation) installed for it to work properly.
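A typical build-and-run sequence for the stream tool, mirroring the example's own docs (binary name and flags may differ across versions):

```bash
# build with SDL2 enabled so microphone capture works
cmake -B build -DWHISPER_SDL2=ON
cmake --build build -j --config Release

# transcribe the microphone in 500 ms steps over a 5 s context window
./build/bin/whisper-stream -m ./models/ggml-base.en.bin -t 8 --step 500 --length 5000
```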
-Use the [scripts/bench-wts.sh](https://github.com/ggerganov/whisper.cpp/blob/master/scripts/bench-wts.sh) script to generate a video in the following format:
+Use the [scripts/bench-wts.sh](https://github.com/ggml-org/whisper.cpp/blob/master/scripts/bench-wts.sh) script to generate a video in the following format:

```bash
./scripts/bench-wts.sh samples/jfk.wav
```
@@ -594,7 +594,7 @@ In order to have an objective comparison of the performance of the inference acr
use the [whisper-bench](examples/bench) tool. The tool simply runs the Encoder part of the model and prints how much time it
took to execute it. The results are summarized in the following Github issue:
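A minimal invocation sketch, assuming a downloaded `base.en` model (model path and thread count are illustrative):

```bash
# benchmark the encoder only and print the timing
./build/bin/whisper-bench -m ./models/ggml-base.en.bin -t 4
```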
@@ -692,13 +691,13 @@ Some of the examples are even ported to run in the browser using WebAssembly. Ch
| [whisper.android](examples/whisper.android) | | Android mobile application using whisper.cpp |
| [whisper.nvim](examples/whisper.nvim) | | Speech-to-text plugin for Neovim |
| [generate-karaoke.sh](examples/generate-karaoke.sh) | | Helper script to easily [generate a karaoke video](https://youtu.be/uj7hVta4blM) of raw audio capture |