Pull requests: abetlen/llama-cpp-python (951 forks)
Support LoRA hotswapping and multiple LoRAs at a time
#1817 · Draft · opened Oct 30, 2024 by richdougherty · 5 of 12 tasks
fix: make content not required in ChatCompletionRequestAssistantMessage
#1807 · opened Oct 21, 2024 by feloy
fix: Avoid thread starvation on many concurrent requests by making use of asyncio to lock llama_proxy context
#1798 · opened Oct 15, 2024 by gjpower
fix: added missing exit_stack.close() to /v1/chat/completions
#1796 · opened Oct 14, 2024 by Ian321
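The bug class behind #1796: a `contextlib.ExitStack` only runs its registered cleanups when `close()` (or `__exit__`) is called, so a missing `close()` silently leaks whatever the handler acquired. A generic illustration (not the endpoint's actual code):

```python
from contextlib import ExitStack

cleaned_up = []

class Resource:
    """Toy context manager that records when it is released."""
    def __enter__(self):
        return self
    def __exit__(self, *exc_info):
        cleaned_up.append("closed")

stack = ExitStack()
stack.enter_context(Resource())

print(cleaned_up)  # [] — the resource is still held
stack.close()      # the missing call: triggers every registered __exit__
print(cleaned_up)  # ['closed']
```

In a request handler, the same omission means per-request resources accumulate until the process is restarted.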
Fix: Refactor Batching notebook to use new sampler chain API
#1793 · opened Oct 13, 2024 by lukestanley
chore(deps): bump pypa/cibuildwheel from 2.21.1 to 2.21.3
Labels: dependencies (pull requests that update a dependency file), github_actions (pull requests that update GitHub Actions code)
#1790 · opened Oct 9, 2024 by dependabot[bot]
server types: Move 'model' parameter to clarify it is used
#1786 · opened Oct 5, 2024 by domdomegg
Resync llama_grammar with llama.cpp implementation and use curly braces quantities instead of repetitions
#1721 · opened Aug 31, 2024 by gbloisi-openaire
feat: adding support for external chat format contribution
#1716 · opened Aug 29, 2024 by axel7083
Allow server to accept OpenAI's new structured output "json_schema" format.
#1677 · opened Aug 13, 2024 by cerealbox
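For context on #1677: OpenAI's structured-output requests set `response_format` to `{"type": "json_schema", ...}` with a named JSON Schema, and a compatible server must accept that shape. A sketch of such a request body (the model name and schema here are placeholders, not anything from the PR):

```python
import json

# Example chat-completion request payload using the "json_schema"
# response_format that the server would need to accept.
request_body = {
    "model": "example-model",  # placeholder
    "messages": [
        {"role": "user", "content": "Give me a city and its country."}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "city_info",
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "country": {"type": "string"},
                },
                "required": ["city", "country"],
            },
        },
    },
}

print(json.dumps(request_body["response_format"], indent=2))
```

A server that only understands the older `{"type": "json_object"}` value would reject this payload, which is the gap the PR title describes closing.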
Updated README.md, llama_cpp/llama.py and pyproject.toml to add support for cross-encoders
#1605 · opened Jul 17, 2024 by perpendicularai
Support images from local storage for Llava models
#1583 · opened Jul 9, 2024 by GokulMuraliRajasekar