# Delete models
igardev edited this page Aug 16, 2025
If a local model (or env) is selected, llama-vscode automatically downloads the required models (LLMs) from Hugging Face, unless they have already been downloaded. The downloaded models are GGUF files and are reused on later runs. These files can take up a lot of space on your hard disk; for example, gpt-oss-20b-GGUF is about 12 GB.
All downloaded models are stored in one standard folder:
- Windows: C:\Users\<user_name>\AppData\Local\llama.cpp
- macOS: /Users/<user_name>/Library/Caches/llama.cpp
- Linux: ~/.cache/llama.cpp
You can delete the GGUF files from this folder to free disk space. If a deleted model is needed again by llama-vscode, it will be downloaded automatically.
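As a quick way to see what is cached and reclaim space, you can inspect the folder from the command line. A minimal sketch, assuming the macOS path listed above (substitute the folder for your OS); the exact model filenames are hypothetical:

```shell
# Cache folder for downloaded models (macOS path shown;
# use the Windows or Linux folder from the list above instead if needed).
CACHE_DIR="$HOME/Library/Caches/llama.cpp"

# List cached GGUF files with their sizes, if the folder exists.
if [ -d "$CACHE_DIR" ]; then
  du -h "$CACHE_DIR"/*.gguf 2>/dev/null || echo "No GGUF files cached."
else
  echo "Cache folder not present yet: $CACHE_DIR"
fi

# To free space, delete a file you no longer need; llama-vscode
# will re-download it automatically if it is selected again, e.g.:
# rm "$CACHE_DIR/<model_name>.gguf"
```

Deleting files here only clears the cache; it does not change which model is selected in llama-vscode.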