It seems that the string replacements in the post-processing of the tokenizer are not included in the GGUF model. Hence, some LLMs with fancy tokenizers can produce slightly weird output text with tools like Ollama that use GGUF models.

I noticed it with Lucie Instruct: https://huggingface.co/OpenLLM-France/Lucie-7B-Instruct#test-with-ollama

The tokenizer includes several post-processing steps that are discarded:
https://huggingface.co/OpenLLM-France/Lucie-7B/raw/main/tokenizer.json

These steps are supposed to remove the extra space introduced in pre-processing to get "uniform" subword tokens, i.e. the same token represents a word whether it comes after a space or after something that starts a new sentence (start of string, apostrophe, quotation mark, ...).
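To make the effect concrete, here is a minimal sketch (assuming the `tokenizers` package and a local copy of the tokenizer.json linked above; the exact strings depend on the tokenizer):

```python
# Compare the full decode, which applies the decoder steps from
# tokenizer.json, with a naive reconstruction that skips them;
# roughly what happens once those steps are lost in GGUF conversion.
from tokenizers import Tokenizer

tok = Tokenizer.from_file("tokenizer.json")
ids = tok.encode("'Hello' world").ids

# Decoder pipeline applied: metaspace markers replaced, extra space handled.
print(repr(tok.decode(ids)))

# Naive reconstruction: the space prepended during pre-processing can leak
# into the text, e.g. "' Hello' world" instead of "'Hello' world".
print(repr("".join(tok.id_to_token(i) for i in ids).replace("\u2581", " ")))
```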
@ggerganov I would be happy to contribute to this repo to solve this bug :)
Thank you for your answer @ggerganov.
Do you have any hints about where I should look?
Is there already a class for text post-processing building blocks, or would this be a new thing?
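For reference, here is a rough, hypothetical sketch (Python, just to illustrate the idea; the function name and structure are mine, not existing llama.cpp code) of the kind of post-processing building block I mean: reading the string `Replace` entries from the `decoder` section of tokenizer.json and applying them to already-detokenized text:

```python
# Hypothetical sketch: apply the string Replace entries from the
# tokenizer.json "decoder" section to already-detokenized text.
# Only {"String": ...} patterns are handled here; Regex patterns
# and Strip/Fuse steps are left out for brevity.
import json

def apply_decoder_replacements(text: str, tokenizer_json_path: str) -> str:
    with open(tokenizer_json_path, encoding="utf-8") as f:
        decoder = json.load(f).get("decoder") or {}
    # A "Sequence" decoder nests its steps; a single decoder is one step.
    steps = decoder.get("decoders", [decoder])
    for step in steps:
        if step.get("type") == "Replace" and "String" in step.get("pattern", {}):
            text = text.replace(step["pattern"]["String"], step["content"])
    return text
```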