
Conversation

@loci-dev

Mirrored from ggml-org/llama.cpp#18073

Detected an error in the README quoting n-gpu-layer, which is incorrect; the argument is n-gpu-layers, with the 's'.
Verified against the documentation on L83:
https://github.com/ggml-org/llama.cpp/blob/d6a1e18c651a46109cbf2ad3b299581f0651128f/tools/server/README.md?plain=1#L83

Make sure to read the contributing guidelines before submitting a PR

I have read the contributing guidelines.
This is a documentation typo fix only; no code is impacted.

Typo in commit f32ca51

n-gpu-layer is incorrect; the argument is n-gpu-layers, with the 's'.
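
For reference, a minimal sketch of how the corrected flag is typically used when launching llama-server; the model path, layer count, and port below are illustrative placeholders, not values taken from this PR:

```sh
# Offload 35 model layers to the GPU using the corrected --n-gpu-layers flag
# (short form: -ngl 35); model path and port are placeholder assumptions
llama-server -m ./models/model.gguf --n-gpu-layers 35 --port 8080
```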
@loci-agentic-ai

Explore the complete analysis inside the Version Insights

Performance Analysis Summary - PR #584

Analysis Type: Documentation-only change
Files Modified: 1 (tools/server/README.md)
Code Changes: None

This PR corrects a documentation typo in the llama-server README, changing --n-gpu-layer to --n-gpu-layers. No source code, headers, or binaries were modified. Power consumption analysis confirms 0% change across all binaries. No functions in performance-critical areas were affected.

@loci-dev force-pushed the main branch 15 times, most recently from 1fc5e38 to 193b250 on December 17, 2025 at 12:15
