
Commit bae66c3

Rewrote documentation about Visual Studio usage

Merged the 3rd and 4th points into a single point about Visual Studio. Specified more clearly how to build and use the Visual Studio solution. Added a note about the tested configuration.

1 parent fa197b1 commit bae66c3

1 file changed: +65 −13 lines

docs/backend/SYCL.md

3. Visual Studio

You have two options to use Visual Studio to build llama.cpp:

- As a CMake project, using CMake presets.
- By generating a Visual Studio solution.

1. Open as a CMake Project

You can use Visual Studio to open the `llama.cpp` folder directly as a CMake project. Before compiling, select one of the SYCL CMake presets:

- `x64-windows-sycl-release`
- `x64-windows-sycl-debug`
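
The same preset-driven flow can also be checked from a plain terminal before involving the IDE. A minimal sketch, assuming the preset names above and the `build-x64-windows-sycl-release` binary directory convention used earlier in this guide:

```Powershell
# Configure using the release preset from llama.cpp's CMakePresets.json
cmake --preset x64-windows-sycl-release
# Build the preset's binary directory
cmake --build build-x64-windows-sycl-release -j
```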

*Notes:*

- For a minimal experimental setup, you can build only the inference executable using:

```Powershell
cmake --build build --config Release -j --target llama-cli
```

2. Generating a Visual Studio Solution

You can use a Visual Studio solution to build and work on llama.cpp on Windows. To do so, you need to convert the CMake project into a `.sln` file.

- Using Intel C++ Compiler for the Entire Project

If you want to use the Intel C++ Compiler for the entire `llama.cpp` project, run the following command:

```Powershell
cmake -B build -G "Visual Studio 17 2022" -T "Intel C++ Compiler 2025" -A x64 -DGGML_SYCL=ON -DCMAKE_BUILD_TYPE=Release
```
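
If you would rather not open the IDE for the first build, the generated solution can also be built from the same terminal. This is standard CMake usage rather than anything project-specific:

```Powershell
# Drive MSBuild through CMake against the generated solution
cmake --build build --config Release -j
```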

- Using Intel C++ Compiler Only for ggml-sycl

If you prefer to use the Intel C++ Compiler only for `ggml-sycl`, ensure that `ggml` and its backend libraries are built as shared libraries (i.e. `-DBUILD_SHARED_LIBS=ON`, which is the default behaviour):

```Powershell
cmake -B build -G "Visual Studio 17 2022" -A x64 -DGGML_SYCL=ON -DCMAKE_BUILD_TYPE=Release `
  -DSYCL_INCLUDE_DIR="C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include" `
  -DSYCL_LIBRARY_DIR="C:\Program Files (x86)\Intel\oneAPI\compiler\latest\lib"
```

If configuration succeeds, the build files are written to *path/to/llama.cpp/build*.
Open the project file **build/llama.cpp.sln** with Visual Studio.
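
If you prefer staying in the shell, the solution can be opened from there as well (assuming `.sln` files are associated with Visual Studio):

```Powershell
# Open the generated solution in the associated IDE
start build\llama.cpp.sln
```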

- Configuring SYCL Offload in Visual Studio

Once the Visual Studio solution is created, follow these steps:

1. Open the solution in Visual Studio.
2. Right-click on `ggml-sycl` and select **Properties**.
3. In the left column, expand **C/C++** and select **DPC++**.
4. In the right panel, find **Enable SYCL Offload** and set it to `Yes`.
5. Apply the changes and save.

### Navigation Path:

```
Properties -> C/C++ -> DPC++ -> Enable SYCL Offload (Yes)
```

- Build

Now you can build `llama.cpp` with the SYCL backend as a Visual Studio project.
To build from the menu, select `Build -> Build Solution`.
Once the build completes, the final results will be in **build/Release/bin**.
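
As a quick smoke test of the produced binaries (a hypothetical check, not part of the official instructions; `--help` does not require a model):

```Powershell
# List the built executables and confirm llama-cli starts
dir build\Release\bin
.\build\Release\bin\llama-cli.exe --help
```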

*Additional Notes*

- You can avoid specifying `SYCL_INCLUDE_DIR` and `SYCL_LIBRARY_DIR` in the CMake command by setting the environment variables (see the sketch after these notes):
  - `SYCL_INCLUDE_DIR_HINT`
  - `SYCL_LIBRARY_DIR_HINT`
- The instructions above have been tested with Visual Studio 17 Community edition and oneAPI 2025.0. We expect them to also work with future versions if the instructions are adapted accordingly.
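
A sketch of the hint-variable approach in a PowerShell session, assuming the default oneAPI install location shown earlier:

```Powershell
# Set the hint variables instead of passing -DSYCL_INCLUDE_DIR / -DSYCL_LIBRARY_DIR
$env:SYCL_INCLUDE_DIR_HINT = "C:\Program Files (x86)\Intel\oneAPI\compiler\latest\include"
$env:SYCL_LIBRARY_DIR_HINT = "C:\Program Files (x86)\Intel\oneAPI\compiler\latest\lib"
cmake -B build -G "Visual Studio 17 2022" -A x64 -DGGML_SYCL=ON -DCMAKE_BUILD_TYPE=Release
```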

### III. Run the inference
