server : host-memory prompt caching #16391
Merged
Fantastic work indeed! Now local orchestrators can actually be usable!
What argument can I use to make the "max number of cached tokens" larger than its default (by default, equal to …)? Just to make sure I understand everything correctly: … Is this all correct?
Anico2 added a commit to Anico2/llama.cpp that referenced this pull request on Jan 15, 2026:
* minor : code style
* server : fix prompt similarity calculation
* server : initial host-memory prompt caching
* cont
* server : refactor
* cont
* cont : make the server task of the slot const
* cont : minor [no ci]
* server : cache prompts and checkpoints only for completion tasks
* server : improve prompt caching logic
* cont : fix check for number of cached prompts [no ci]
* server : improve caching logic, add -cram CLI arg
* server : print prompt mismatch info
* cont : better naming [no ci]
* server : improve prompt cache loading logic
* server : add option to debug the slot contents (ggml-org#16482)
  * server : add option to debug the slot contents
  * Update tools/server/server.cpp
* server : add option to disable prompt cache

Co-authored-by: Xuan-Son Nguyen <son@huggingface.co>

target #16440
rel #16117
Initial version of automatic memory offloading to host memory, using extended logic to minimize prompt reprocessing. The host-memory prompt cache acts as a set of "extra slots" against which we can calculate prefix similarity, and we decide to hot-swap an entry into the `llama_context` if doing so would reduce the processing. The cache is stored in regular RAM.

The amount of RAM used for caching prompts has 2 limits:

* a user-configurable maximum (`--cache-ram, -cram` CLI arg)
* the context size (`--context-size`)

The server logs provide detailed prompt cache information each time the cache is updated.
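The selection logic described above can be sketched roughly as follows. This is a hypothetical, simplified illustration (the struct, names, and eviction policy here are assumptions, not the actual server code): a token-budget-limited cache from which the entry sharing the longest common prefix with the new prompt would be hot-swapped in.

```cpp
// Hypothetical sketch, NOT the actual llama.cpp implementation:
// a host-memory prompt cache with a token budget (stand-in for the
// --cache-ram limit) and longest-common-prefix selection.
#include <cassert>
#include <cstddef>
#include <deque>
#include <vector>

using llama_tokens = std::vector<int>;

struct prompt_cache {
    size_t n_tokens_max = 0;          // budget (assumed FIFO eviction)
    size_t n_tokens     = 0;          // tokens currently cached
    std::deque<llama_tokens> entries;

    // evict the oldest entries until the new prompt fits, then store a copy
    void insert(const llama_tokens & prompt) {
        while (!entries.empty() && n_tokens + prompt.size() > n_tokens_max) {
            n_tokens -= entries.front().size();
            entries.pop_front();
        }
        if (prompt.size() <= n_tokens_max) {
            entries.push_back(prompt);
            n_tokens += prompt.size();
        }
    }

    // length of the longest common token prefix of two sequences
    static size_t common_prefix(const llama_tokens & a, const llama_tokens & b) {
        size_t i = 0;
        while (i < a.size() && i < b.size() && a[i] == b[i]) {
            i++;
        }
        return i;
    }

    // pick the cached prompt with the longest common prefix to `prompt`;
    // the server would hot-swap this entry into the llama_context if it
    // beats the prefix already resident in the slot
    const llama_tokens * best_match(const llama_tokens & prompt, size_t & n_best) const {
        const llama_tokens * res = nullptr;
        n_best = 0;
        for (const auto & e : entries) {
            const size_t n = common_prefix(e, prompt);
            if (n > n_best) {
                n_best = n;
                res    = &e;
            }
        }
        return res;
    }
};
```

The real cache additionally has to account for checkpoints and the context-size limit, but the core trade-off is the same: reuse the longest matching prefix, evict when over budget.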
A small QoL improvement is that `update_slots()` now also logs the old and the new prompt for each task around `n_past` (up to 10 tokens), so we can better understand what caused the particular choice of the `n_past` value for the new task.

Setting the `LLAMA_SERVER_SLOTS_DEBUG=1` env var will make the `/slots` endpoint output more detailed information, containing the prompt and the generated text of the current or last task. This is useful for debugging purposes.

Note: the mtmd workarounds are starting to cause some headaches. For example, `server_tokens` is not copyable, which complicates the cache logic and makes the prompt caching feature incompatible with mtmd.

Usage
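A possible invocation combining the flag and the env var described above (the model path, cache size, and port are placeholders; the unit of `--cache-ram` is assumed to be MiB):

```shell
# launch the server with the slot-debug env var set and a host-memory
# prompt cache budget (value/unit assumed; model path is a placeholder)
LLAMA_SERVER_SLOTS_DEBUG=1 llama-server -m model.gguf --cache-ram 2048

# in another terminal: /slots now includes the prompt and generated text
# of the current or last task for each slot
curl http://localhost:8080/slots
```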
Server refactor

* consolidate multiple `server_slot` members into a single `server_task`
* remove `server_slot.n_predict`
* `slot.task` is now a `const` ptr, to reflect that the task parameters should not change when it is passed to the slot

TODOs