backend: fix extra spaces in tokenization and a CUDA crash #2778
Merged
Conversation
Since upstream commit 1b67731e1 ("BERT tokenizer fixes (#6498)"),
llama_tokenize will not add BOS for tokenizers that should not have it.
Since upstream commit 37bef8943 ("tokenizer : BPE fixes (#7530)"),
llama_add_bos_token can be used to confidently determine whether BOS
will be added by llama_tokenize.
The upstream logic to determine whether to add BOS has grown as
tokenizers have been added and improved, so this could fix problems with
a missing BOS, or context recalculation preserving the first token when
it shouldn't.
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
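For illustration, a minimal sketch of how a caller can now ask the model whether BOS will be prepended instead of guessing per architecture. The helper name `tokenizePrompt` is hypothetical, and exact llama.cpp signatures (e.g. whether `llama_add_bos_token` returns `bool` or `int32_t`) vary between revisions:

```cpp
#include <string>
#include <vector>
#include "llama.h"

// Sketch: tokenize a prompt and report whether llama_tokenize prepended BOS.
// Since upstream 37bef8943, llama_add_bos_token reflects the tokenizer's own
// preference, so the caller no longer has to guess per model architecture.
static std::vector<llama_token> tokenizePrompt(const llama_model *model, const std::string &text, bool &hasBOS)
{
    hasBOS = llama_add_bos_token(model); // true iff llama_tokenize will add BOS

    std::vector<llama_token> tokens(text.size() + 8); // generous upper bound
    int32_t n = llama_tokenize(model, text.c_str(), text.size(),
                               tokens.data(), tokens.size(),
                               /*add_special*/ true, /*parse_special*/ false);
    if (n < 0) { // buffer was too small; -n is the required size
        tokens.resize(-n);
        n = llama_tokenize(model, text.c_str(), text.size(),
                           tokens.data(), tokens.size(), true, false);
    }
    tokens.resize(n);
    return tokens;
}
```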
The size of the token cache is expected to match n_past during the decode phase of llmodel_prompt. We should make sure they match at entry, and never do anything that could cause them to desync.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
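A hypothetical sketch of that invariant, using illustrative names rather than the actual llmodel types:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative types only (not the real gpt4all structures): during decode,
// the token cache must stay in lockstep with n_past.
struct PromptContext {
    int32_t n_past = 0;
    std::vector<int32_t> tokens; // the token cache
};

static void decodeTokens(PromptContext &ctx, const std::vector<int32_t> &newTokens)
{
    // Invariant at entry: everything already evaluated is cached, nothing more.
    assert(ctx.tokens.size() == static_cast<size_t>(ctx.n_past));

    for (int32_t tok : newTokens) {
        // ... evaluate `tok` with the backend here ...
        ctx.tokens.push_back(tok); // cache and count advance together,
        ctx.n_past += 1;           // so they can never desync
    }
}
```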
`logits` does nothing now that GPT-J is removed, so remove the unused fields.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
When llama.cpp was updated, I removed the space removal logic, but it turns out it's still actually needed. This is now a proper parameter, as we specifically only want to disable the *leading* space when we are tokenizing input that comes after a normal token. This fixes a regression in commit 290c629 ("backend: rebase llama.cpp submodule on latest upstream (#2694)").

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
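One way to implement such a parameter (a hedged sketch, not this PR's exact code) is the common dummy-prefix heuristic: tokenize a short prefix plus the text, then drop the prefix tokens, so SentencePiece-style tokenizers never get a chance to attach a leading space to the first real token:

```cpp
#include <algorithm>
#include <string>
#include <vector>
#include "llama.h"

// Hypothetical sketch: tokenize `text`, optionally suppressing the leading
// space that SentencePiece-style tokenizers insert. Suppression is only
// wanted mid-prompt, i.e. when `text` continues directly after a normal
// token, so BOS is never added in that case.
static std::vector<llama_token> tokenize(const llama_model *model, const std::string &text,
                                         bool addBOS, bool suppressLeadingSpace)
{
    const std::string prefix = suppressLeadingSpace ? "\n" : "";
    const std::string input  = prefix + text;

    std::vector<llama_token> tokens(input.size() + 8); // bytes are an upper bound on tokens
    int32_t n = llama_tokenize(model, input.c_str(), input.size(),
                               tokens.data(), tokens.size(),
                               /*add_special*/ addBOS && !suppressLeadingSpace,
                               /*parse_special*/ false);
    tokens.resize(std::max(n, 0));

    if (suppressLeadingSpace) {
        // Drop the tokens contributed by the dummy prefix.
        std::vector<llama_token> prefixTokens(prefix.size() + 8);
        int32_t np = llama_tokenize(model, prefix.c_str(), prefix.size(),
                                    prefixTokens.data(), prefixTokens.size(), false, false);
        prefixTokens.resize(std::max(np, 0));
        if (tokens.size() >= prefixTokens.size() &&
            std::equal(prefixTokens.begin(), prefixTokens.end(), tokens.begin()))
            tokens.erase(tokens.begin(), tokens.begin() + prefixTokens.size());
    }
    return tokens;
}
```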
llama.cpp commit e3c337d87 ("llama : support negative ith in llama_get_
API (#6519)") added a simpler way to get the logits for the last token
in the batch, so use that instead. This also fixes potential issues with
not serializing this value with the rest of the prompt context, although
in practice we should always call evalTokens before
llama_sample_top_p_top_k.
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
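A minimal sketch of the simpler call, assuming a context that has just decoded a batch whose final token had its logits flag set:

```cpp
#include "llama.h"

// With upstream e3c337d87, a negative index counts from the end of the batch,
// so -1 always addresses the logits of the last token that produced output.
static const float *lastTokenLogits(llama_context *ctx)
{
    return llama_get_logits_ith(ctx, -1);
}
```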
manyoso approved these changes on Aug 1, 2024
The main issue being fixed here is a CUDA crash with certain sizes of long inputs (ggml-org/llama.cpp#8798). For now there is just a workaround, but upstream is working on a proper fix. I cherry-picked the change from ggml-org/llama.cpp#8800 since it seems to work. Other things I fixed while working on improving context "recalculation" are described in the commit messages above.