Alternate OpenCL support via the CLBlast Netlib BLAS API #891

Closed
wants to merge 13 commits

Conversation

trholding
Contributor

Experimental alternate OpenCL support via the CLBlast Netlib BLAS API. Performance is quite similar to the optimized CLBlast implementation when tested on the same low-end, older AMD A9 APU.

CLBlast needs to be compiled with the -DNETLIB=ON flag.
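
For reference, building CLBlast with the Netlib bindings follows its standard CMake flow; a sketch (the clone location and the install step are just one possible setup):

```
git clone https://github.com/CNugteren/CLBlast
cd CLBlast ; mkdir build ; cd build
cmake -DNETLIB=ON ..
make
sudo make install
```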

Rationale: support more hardware. This is meant as a last resort for GPU acceleration when other methods don't work or are not compatible. Since the OpenCL 1.x EMBEDDED PROFILE is supported, I anticipate that this could enable acceleration on single-board computers and smartphones.

It also serves as a template for pre-emptive OpenCL support in projects that use ggml. This could provide baseline GPU acceleration without custom OpenCL code or added effort, since CLBlast becomes a drop-in BLAS when the Netlib API is enabled.
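
To illustrate the drop-in aspect, here is a minimal sketch of the standard CBLAS call that BLAS-enabled ggml builds issue for large matrix multiplications. The cblas.h header and the 2x2 values are illustrative; CLBlast built with -DNETLIB=ON exports the same cblas_* symbols (see the bindings doc linked below), so linking against it instead of a CPU BLAS routes this call to the GPU:

```c
#include <stdio.h>
#include <cblas.h>  /* generic CBLAS header; CLBlast's Netlib build provides the same API */

int main(void) {
    /* Row-major 2x2 matrices: C = 1.0 * A * B + 0.0 * C */
    const int M = 2, N = 2, K = 2;
    float A[4] = {1, 2,
                  3, 4};
    float B[4] = {5, 6,
                  7, 8};
    float C[4] = {0};

    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                M, N, K,
                1.0f, A, K,   /* lda = K for row-major A (M x K) */
                      B, N,   /* ldb = N for row-major B (K x N) */
                0.0f, C, N);  /* ldc = N for row-major C (M x N) */

    printf("%.0f %.0f\n%.0f %.0f\n", C[0], C[1], C[2], C[3]);  /* 19 22 / 43 50 */
    return 0;
}
```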

More Info:
https://github.com/CNugteren/CLBlast/blob/master/doc/bindings.md
CNugteren/CLBlast#227

Usage:

```
Makefile:
cd whisper.cpp
WHISPER_CLBLAST_NETLIB=1 make

CMake:
cd whisper.cpp ; mkdir build ; cd build
cmake -DWHISPER_CLBLAST_NETLIB=ON ..
make
```

Benchmarks:

Standard:

```
time ./bench
whisper_init_from_file_no_state: loading model from 'models/ggml-base.en.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab       = 51864
whisper_model_load: n_audio_ctx   = 1500
whisper_model_load: n_audio_state = 512
whisper_model_load: n_audio_head  = 8
whisper_model_load: n_audio_layer = 6
whisper_model_load: n_text_ctx    = 448
whisper_model_load: n_text_state  = 512
whisper_model_load: n_text_head   = 8
whisper_model_load: n_text_layer  = 6
whisper_model_load: n_mels        = 80
whisper_model_load: ftype         = 1
whisper_model_load: type          = 2
whisper_model_load: mem required  =  310.00 MB (+    6.00 MB per decoder)
whisper_model_load: adding 1607 extra tokens
whisper_model_load: model ctx     =  140.60 MB
whisper_model_load: model size    =  140.54 MB
whisper_init_state: kv self size  =    5.25 MB
whisper_init_state: kv cross size =   17.58 MB

system_info: n_threads = 2 / 2 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | COREML = 0 | 

whisper_print_timings:     load time =   210.24 ms
whisper_print_timings:     fallbacks =   0 p /   0 h
whisper_print_timings:      mel time =     0.00 ms
whisper_print_timings:   sample time =     0.00 ms /     1 runs (    0.00 ms per run)
whisper_print_timings:   encode time = 16635.92 ms /     1 runs (16635.92 ms per run)
whisper_print_timings:   decode time =     0.00 ms /     1 runs (    0.00 ms per run)
whisper_print_timings:    total time = 16949.15 ms

If you wish, you can submit these results here:

  https://github.com/ggerganov/whisper.cpp/issues/89

Please include the following information:

  - CPU model
  - Operating system
  - Compiler


real    0m16.972s
user    0m32.215s
sys     0m0.282s
```

With CLBlast:

```
time ./bench
whisper_init_from_file_no_state: loading model from 'models/ggml-base.en.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab       = 51864
whisper_model_load: n_audio_ctx   = 1500
whisper_model_load: n_audio_state = 512
whisper_model_load: n_audio_head  = 8
whisper_model_load: n_audio_layer = 6
whisper_model_load: n_text_ctx    = 448
whisper_model_load: n_text_state  = 512
whisper_model_load: n_text_head   = 8
whisper_model_load: n_text_layer  = 6
whisper_model_load: n_mels        = 80
whisper_model_load: ftype         = 1
whisper_model_load: type          = 2
whisper_model_load: mem required  =  310.00 MB (+    6.00 MB per decoder)
whisper_model_load: adding 1607 extra tokens
whisper_model_load: model ctx     =  140.60 MB

Initializing CLBlast (First Run)...
Attempting to use: Platform=0, Device=0 (If invalid, program will crash)
Using Platform: Clover Device: AMD Radeon R5 Graphics (stoney, LLVM 15.0.7, DRM 3.49, 6.2.10-arch1-1)
whisper_model_load: model size    =  140.54 MB
whisper_init_state: kv self size  =    5.25 MB
whisper_init_state: kv cross size =   17.58 MB

system_info: n_threads = 2 / 2 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 | COREML = 0 | 

whisper_print_timings:     load time =   713.91 ms
whisper_print_timings:     fallbacks =   0 p /   0 h
whisper_print_timings:      mel time =     0.00 ms
whisper_print_timings:   sample time =     0.00 ms /     1 runs (    0.00 ms per run)
whisper_print_timings:   encode time =  7015.58 ms /     1 runs ( 7015.58 ms per run)
whisper_print_timings:   decode time =     0.00 ms /     1 runs (    0.00 ms per run)
whisper_print_timings:    total time =  7833.02 ms

If you wish, you can submit these results here:

  https://github.com/ggerganov/whisper.cpp/issues/89

Please include the following information:

  - CPU model
  - Operating system
  - Compiler


real    0m7.906s
user    0m12.932s
sys     0m0.441s
```

With CLBlast Netlib API:

```
time ./bench
whisper_init_from_file_no_state: loading model from 'models/ggml-base.en.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab       = 51864
whisper_model_load: n_audio_ctx   = 1500
whisper_model_load: n_audio_state = 512
whisper_model_load: n_audio_head  = 8
whisper_model_load: n_audio_layer = 6
whisper_model_load: n_text_ctx    = 448
whisper_model_load: n_text_state  = 512
whisper_model_load: n_text_head   = 8
whisper_model_load: n_text_layer  = 6
whisper_model_load: n_mels        = 80
whisper_model_load: ftype         = 1
whisper_model_load: type          = 2
whisper_model_load: mem required  =  310.00 MB (+    6.00 MB per decoder)
whisper_model_load: adding 1607 extra tokens
whisper_model_load: model ctx     =  140.60 MB
whisper_model_load: model size    =  140.54 MB
whisper_init_state: kv self size  =    5.25 MB
whisper_init_state: kv cross size =   17.58 MB

system_info: n_threads = 2 / 2 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 | COREML = 0 | 

whisper_print_timings:     load time =   207.38 ms
whisper_print_timings:     fallbacks =   0 p /   0 h
whisper_print_timings:      mel time =     0.00 ms
whisper_print_timings:   sample time =     0.00 ms /     1 runs (    0.00 ms per run)
whisper_print_timings:   encode time =  7361.65 ms /     1 runs ( 7361.65 ms per run)
whisper_print_timings:   decode time =     0.00 ms /     1 runs (    0.00 ms per run)
whisper_print_timings:    total time =  7672.91 ms

If you wish, you can submit these results here:

  https://github.com/ggerganov/whisper.cpp/issues/89

Please include the following information:

  - CPU model
  - Operating system
  - Compiler


real    0m7.759s
user    0m12.958s
sys     0m0.522s
```

Currently, CLBlast with the Netlib API seems to have an advantage over CLBlast with custom OpenCL code:

```
CLBlast Netlib : whisper_print_timings:     load time =   207.38 ms
CLBlast        : whisper_print_timings:     load time =   713.91 ms
```

Load time is higher with CLBlast and custom OpenCL code, and this is more pronounced with the tiny model, so the net performance of the Netlib build ends up equal or better:

Command: ./bench -m models/ggml-tiny.en.bin

CLBlast Netlib:

```
whisper_print_timings:     load time =   144.74 ms
whisper_print_timings:     fallbacks =   0 p /   0 h
whisper_print_timings:      mel time =     0.00 ms
whisper_print_timings:   sample time =     0.00 ms /     1 runs (    0.00 ms per run)
whisper_print_timings:   encode time =  4780.53 ms /     1 runs ( 4780.53 ms per run)
whisper_print_timings:   decode time =     0.00 ms /     1 runs (    0.00 ms per run)
whisper_print_timings:    total time =  5006.46 ms
```

CLBlast:

```
whisper_print_timings:     load time =   664.63 ms
whisper_print_timings:     fallbacks =   0 p /   0 h
whisper_print_timings:      mel time =     0.00 ms
whisper_print_timings:   sample time =     0.00 ms /     1 runs (    0.00 ms per run)
whisper_print_timings:   encode time =  4533.00 ms /     1 runs ( 4533.00 ms per run)
whisper_print_timings:   decode time =     0.00 ms /     1 runs (    0.00 ms per run)
whisper_print_timings:    total time =  5277.10 ms
```

trholding added 10 commits May 3, 2023 00:05
Building with CLBlast speeds up whisper.cpp ~2x on low end / older AMD APUs (CPU with integrated GPU) such as the A9.

Usage:
WHISPER_CLBLAST=1 make
Building with CLBlast speeds up whisper.cpp ~2x on low end / older AMD APUs (CPU with integrated GPU) such as the A9.

Usage:
```
Makefile:
cd whisper.cpp
WHISPER_CLBLAST=1 make

CMake:
cd whisper.cpp ; mkdir build ; cd build
cmake -DWHISPER_CLBLAST=ON ..
make
```
Added OpenCL Build Instructions
Added build instructions and examples for Make and CMake to support OpenCL enabled GPUs.
Experimental alternate OpenCL support via the CLBlast Netlib BLAS API. Performance is quite similar to the optimized CLBlast implementation when tested on the same low-end, older AMD A9 APU.

CLBlast needs to be compiled with the ```-DNETLIB=ON``` flag.

Rationale: support more hardware. This is meant as a last resort for GPU acceleration when other methods don't work or are not compatible. Since the OpenCL 1.x EMBEDDED PROFILE is supported, I anticipate that this could enable acceleration on single-board computers and smartphones.

It also serves as a template for pre-emptive OpenCL support in projects that use ggml. This could provide baseline GPU acceleration without custom OpenCL code or added effort, since CLBlast becomes a drop-in BLAS when the Netlib API is enabled.

More Info:
https://github.com/CNugteren/CLBlast/blob/master/doc/bindings.md
CNugteren/CLBlast#227

Usage:
```
Makefile:
cd whisper.cpp
WHISPER_CLBLAST_NETLIB=1 make

CMake:
cd whisper.cpp ; mkdir build ; cd build
cmake -DWHISPER_CLBLAST_NETLIB=ON ..
make
```
…tlib BLAS API

Added build instructions and examples for the experimental alternate OpenCL support via the CLBlast Netlib BLAS API.
ggml.c (Outdated)

```diff
@@ -8187,7 +8189,7 @@ static void ggml_compute_forward_rms_norm(

 // ggml_compute_forward_mul_mat

-#if defined(GGML_USE_ACCELERATE) || defined(GGML_USE_OPENBLAS) || defined(GGML_USE_CLBLAST)
+#if defined(GGML_USE_ACCELERATE) || defined(GGML_USE_OPENBLAS) || defined(GGML_USE_CLBLAST) || defined(GGML_USE_CLBLASTNETLIB)
```
Member

Let's have this build define both GGML_USE_CLBLAST and GGML_USE_CLBLASTNETLIB and avoid these changes

Contributor Author

You are correct, I get the logic.
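
To make the suggestion concrete, a minimal sketch (the build-system line and the second guard are hypothetical, not code from this PR): the Netlib build defines both macros, so the existing guards in ggml.c stay untouched and any Netlib-specific code can still be gated separately:

```c
/* Hypothetical Makefile addition for the Netlib build:
 *   CFLAGS += -DGGML_USE_CLBLAST -DGGML_USE_CLBLASTNETLIB
 * The existing guard then needs no edit: */
#if defined(GGML_USE_ACCELERATE) || defined(GGML_USE_OPENBLAS) || defined(GGML_USE_CLBLAST)
/* shared BLAS-accelerated mul_mat path, unchanged */
#endif

#if defined(GGML_USE_CLBLASTNETLIB)
/* Netlib-only differences, if any, gated separately */
#endif
```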

@trholding trholding marked this pull request as draft May 9, 2023 13:26
@trholding trholding closed this Feb 14, 2025