cmake : enable warnings in llama #10474


Merged (7 commits) on Nov 26, 2024

Conversation

ggerganov
Member

Enable more compile warnings in CMake builds.

Collaborator

@danbev left a comment


Could these blocks be moved to the parent CMakeLists.txt to avoid the duplication?

@ggerganov
Member Author

Probably we can refactor this in a function call and put it in the cmake folder to avoid duplication. But I think it's better to not call it in the root CMakeLists.txt since it will also apply to 3rd party sources (if any) and it's better to have the 3rd party sources decide their compile flags.
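A refactor along those lines might look like the following sketch (the function name, file location, and flag list are illustrative assumptions, not the PR's actual code):

```cmake
# Hypothetical cmake/warnings.cmake helper (name and contents are assumptions)
function(llama_add_compile_warnings target)
    if (MSVC)
        target_compile_options(${target} PRIVATE /W4)
    elseif (CMAKE_CXX_COMPILER_ID MATCHES "GNU|Clang")
        target_compile_options(${target} PRIVATE
            -Wall -Wextra -Wpedantic -Wcast-qual)
    endif()
endfunction()
```

Each target that wants the warnings would call `llama_add_compile_warnings(<target>)` explicitly, so third-party subdirectories keep their own compile flags.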

@slaren
Member

slaren commented Nov 24, 2024

Agree, let's not duplicate this code. Either a function or storing the C/C++ flags in a common variable should do it.
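The variable-based alternative mentioned here could be sketched like this (the variable and target names are illustrative assumptions):

```cmake
# Set once, near the top of the build (illustrative variable name):
set(LLAMA_WARNING_FLAGS -Wall -Wextra -Wpedantic)

# Then reused in each subdirectory's CMakeLists.txt:
target_compile_options(llama PRIVATE ${LLAMA_WARNING_FLAGS})
```

The trade-off versus a function is that a plain variable cannot adapt the flags per compiler without extra branching at each use site.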

@ggerganov marked this pull request as draft November 24, 2024 20:35
@ggerganov marked this pull request as ready for review November 25, 2024 20:11
@ggerganov requested a review from slaren November 25, 2024 20:12
@github-actions bot added the build (Compilation issues) label Nov 25, 2024
Member

@slaren left a comment


It might also make sense to rename get_flags to something like ggml_get_flags to make it clear where it comes from.

@ggerganov
Member Author

I didn't notice get_flags comes all the way from ggml. I added llama_get_flags to keep things separated. PTAL

@slaren
Member

slaren commented Nov 25, 2024

I think it would be ok to use ggml_get_flags in llama.cpp to avoid duplicating the code, keeping functions like this in sync is always a source of errors.
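Assuming `ggml_get_flags` takes the compiler id and version and exports the computed flag lists to the caller's scope (the exact signature and output variable names below are assumptions based on this thread, not verified against ggml's cmake helpers), reuse in llama's CMakeLists.txt might look like:

```cmake
# Reuse ggml's helper instead of maintaining a parallel llama_get_flags.
# Signature and output variable names here are assumptions.
ggml_get_flags(${CMAKE_CXX_COMPILER_ID} ${CMAKE_CXX_COMPILER_VERSION})
target_compile_options(llama PRIVATE ${CXX_FLAGS})
```

As noted above, sharing one helper avoids the usual failure mode of two copies drifting out of sync.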

@github-actions bot added the Nvidia GPU (Issues specific to Nvidia GPUs) label Nov 25, 2024
@ggerganov merged commit ab96610 into master Nov 26, 2024
52 of 53 checks passed
@ggerganov deleted the gg/cmake-warnings branch November 26, 2024 12:18
@github-actions bot added the ggml (changes relating to the ggml tensor library for machine learning) label Nov 26, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Dec 20, 2024
* cmake : enable warnings in llama

ggml-ci

* cmake : add llama_get_flags and respect LLAMA_FATAL_WARNINGS

* cmake : get_flags -> ggml_get_flags

* speculative-simple : fix warnings

* cmake : reuse ggml_get_flags

ggml-ci

* speculative-simple : fix compile warning

ggml-ci
Labels: build (Compilation issues), examples, ggml (changes relating to the ggml tensor library for machine learning), Nvidia GPU (Issues specific to Nvidia GPUs)
3 participants