You have two options to use Visual Studio to build llama.cpp:
- As a CMake project using CMake presets.
- Creating a Visual Studio solution to handle the project.
1. Open as a CMake Project
You can use Visual Studio to open the `llama.cpp` folder directly as a CMake project. Before compiling, select one of the SYCL CMake presets:
- `x64-windows-sycl-release`
- `x64-windows-sycl-debug`
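As a sketch of the equivalent command-line flow (the build directory name below assumes the preset's default binary directory is `build-<preset-name>`; check the project's `CMakePresets.json` for the actual path):

```Powershell
# Configure using the SYCL release preset
cmake --preset x64-windows-sycl-release

# Build with the binary directory defined by the preset
# (assumed here to be build-x64-windows-sycl-release)
cmake --build build-x64-windows-sycl-release -j
```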
*Notes:*
- For a minimal experimental setup, you can build only the inference executable using:
```Powershell
cmake --build build --config Release -j --target llama-cli
```
2. Create a Visual Studio Solution
You can use a Visual Studio solution to build and work on llama.cpp on Windows. You need to convert the CMake project into a `.sln` file.
- Using Intel C++ Compiler for the Entire Project
568
+
569
+
If you want to use the Intel C++ Compiler for the entire `llama.cpp` project, run the following command:
```Powershell
cmake -B build -G "Visual Studio 17 2022" -T "Intel C++ Compiler 2025" -A x64 -DGGML_SYCL=ON -DCMAKE_BUILD_TYPE=Release
```
- Using Intel C++ Compiler Only for ggml-sycl
If you prefer to use the Intel C++ Compiler only for `ggml-sycl`, ensure that `ggml` and its backend libraries are built as shared libraries (i.e. `-DBUILD_SHARED_LIBS=ON`, which is the default behaviour):
```Powershell
cmake -B build -G "Visual Studio 17 2022" -A x64 -DGGML_SYCL=ON -DCMAKE_BUILD_TYPE=Release \
      -DSYCL_INCLUDE_DIR="<path-to-SYCL-include-dir>" -DSYCL_LIBRARY_DIR="<path-to-SYCL-library-dir>"
```
In both cases, after the Visual Studio solution is created, open it, right-click on `ggml-sycl`, and open its properties. In the left column, open the `C/C++` submenu and select `DPC++`. In the options window on the right, set `Enable SYCL offload` to `yes` and apply the changes.
Now, you can build `llama.cpp` with the SYCL backend as a Visual Studio project.
To do this from the menu: `Build -> Build Solution`.
Once the build completes, the final binaries will be in **build/Release/bin**.
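Alternatively, the generated solution can be built without opening the IDE. This is a sketch assuming the `build` directory created by the CMake commands above:

```Powershell
# Drive MSBuild through CMake to build the Release configuration
cmake --build build --config Release -j
```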
*Additional Notes:*
- You can avoid specifying `SYCL_INCLUDE_DIR` and `SYCL_LIBRARY_DIR` in the CMake command by setting the environment variables:
  - `SYCL_INCLUDE_DIR_HINT`
  - `SYCL_LIBRARY_DIR_HINT`
- The above instructions have been tested with Visual Studio 17 (2022) Community edition and oneAPI 2025.0. We expect them to also work with future versions if the instructions are adapted accordingly.