Describe the feature you'd like to request
Instead of using the whl file from https://github.com/abetlen/llama-cpp-python/releases, build the package inside the Docker container. Recent versions of llama-cpp-python no longer ship pre-built releases, so building from source is the only way to get them officially.
This will also solve #126 and #178 (comment) (using app_api's env declaration for CMAKE_ARGS).
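
A minimal sketch of what that could look like, assuming a generic `python:3.11-slim` base and plain pip; the actual base image, pinned package version, and default flags would come from this repo's existing Dockerfile:

```dockerfile
# Sketch only: base image, Python version, and default CMAKE_ARGS are
# assumptions to be adapted to this app's image.
FROM python:3.11-slim

# Toolchain needed to compile llama.cpp from source
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential cmake git ninja-build \
    && rm -rf /var/lib/apt/lists/*

# Let app_api's env declaration (or `docker build --build-arg`) control the
# CMake flags, e.g. "-DGGML_CUDA=on" for a CUDA image.
ARG CMAKE_ARGS=""
ENV CMAKE_ARGS=${CMAKE_ARGS}

# pip compiles llama.cpp here with CMAKE_ARGS applied; FORCE_CMAKE=1 makes
# sure the source build path is taken instead of any pre-built wheel.
ENV FORCE_CMAKE=1
RUN pip install --no-cache-dir llama-cpp-python
```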
Some related links:
- llama-cpp-python 0.3.8 with CUDA: abetlen/llama-cpp-python#2010
- How to install the latest version with GPU support: abetlen/llama-cpp-python#2012
- https://github.com/abetlen/llama-cpp-python/blob/main/docker/cuda_simple/Dockerfile
- `-DGGML_CPU_ALL_VARIANTS=ON` in "ggml : add predefined list of CPU backend variants to build" (ggml-org/llama.cpp#10626)
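
For the CPU image, the last link suggests the portable route could even be the default; as far as I can tell the predefined variants also require dynamic backend loading (`-DGGML_BACKEND_DL=ON`), so the sketch above would be tweaked roughly like this:

```dockerfile
# Hypothetical default for a single portable CPU image: build all predefined
# CPU backend variants and let ggml pick the best one at runtime. Still
# overridable via --build-arg / app_api env, as in the sketch above.
ARG CMAKE_ARGS="-DGGML_BACKEND_DL=ON -DGGML_CPU_ALL_VARIANTS=ON"
```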