I have the same card on my machine at home; what is your config for running the model?
Downloaded the GGUF file quantized by Unsloth, then ran llama-cli from llama.cpp with that file as the model argument.
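For reference, a minimal invocation might look like the following. The filename and layer count are hypothetical; substitute your actual download and pick an offload value that fits your VRAM:

```shell
# -m:    path to the downloaded GGUF file (hypothetical name here)
# -ngl:  number of layers to offload to the GPU (99 = offload everything)
./llama-cli -m ./model-Q4_K_M.gguf -ngl 99
```

llama-cli then drops you into an interactive chat session in the terminal.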
IIUC, nowadays the GGUF file itself carries the chat template as a Jinja string in its metadata (under the `tokenizer.chat_template` key), along with other config, so llama.cpp can pick it up without a separate template file.