
Use the code/model included here: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1

Change the initial device line from "cuda" to "cpu" and it'll run.

(Edit: just a note, use the main/head version of transformers, which has merged Mistral support. Also saw TheBloke uploaded a GGUF, and I've confirmed the latest llama.cpp works with it.)
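For reference, a minimal sketch of the idea, assuming the standard transformers `AutoModelForCausalLM`/`AutoTokenizer` loading pattern from the model card (the `pick_device` helper and the prompt text are my own additions, not part of the model card — they just make the "cuda" vs "cpu" switch automatic instead of hardcoded):

```python
MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.1"

def pick_device() -> str:
    # The model card hardcodes "cuda"; fall back to "cpu" when no GPU
    # (or no torch CUDA build) is available, so the example still runs.
    try:
        import torch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        return "cpu"

def main():
    # Heavy imports and the ~14 GB weight download happen only when run directly.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    device = pick_device()
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID).to(device)

    messages = [{"role": "user", "content": "What is your favourite condiment?"}]
    inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(device)
    out = model.generate(inputs, max_new_tokens=100, do_sample=True)
    print(tokenizer.batch_decode(out)[0])

if __name__ == "__main__":
    main()
```

Expect CPU generation to be slow for a 7B model in fp32; the GGUF quantizations with llama.cpp are the more practical CPU route.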
