You can run `local-ai` directly with a model name, and it will download the model.
{{< tabs >}}
{{% tab name="CPU-only" %}}

| Model | Category | Docker command |
| --- | --- | --- |
| [phi-2](https://huggingface.co/microsoft/phi-2) | LLM | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core phi-2``` |
| [llava](https://github.com/SkunkworksAI/BakLLaVA) | Multimodal LLM | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core llava``` |
| [mistral-openorca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) | LLM | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core mistral-openorca``` |
| [bert-cpp](https://github.com/skeskinen/bert.cpp) | Embeddings | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core bert-cpp``` |
| [all-minilm-l6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | Embeddings | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg all-minilm-l6-v2``` |
| whisper-base | Audio to Text | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core whisper-base``` |
| rhasspy-voice-en-us-amy | Text to Audio | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core rhasspy-voice-en-us-amy``` |
| coqui | Text to Audio | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg coqui``` |
| bark | Text to Audio | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg bark``` |
| vall-e-x | Text to Audio | ```docker run -ti -p 8080:8080 localai/localai:{{< version >}}-ffmpeg vall-e-x``` |

{{% /tab %}}
{{% tab name="GPU (CUDA 11)" %}}

> To check which CUDA version is available on your system, run `nvidia-smi` or `nvcc --version`.

| Model | Category | Docker command |
| --- | --- | --- |
| [phi-2](https://huggingface.co/microsoft/phi-2) | LLM | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core phi-2``` |
| [llava](https://github.com/SkunkworksAI/BakLLaVA) | Multimodal LLM | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core llava``` |
| [mistral-openorca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) | LLM | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core mistral-openorca``` |
| [bert-cpp](https://github.com/skeskinen/bert.cpp) | Embeddings | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core bert-cpp``` |
| [all-minilm-l6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | Embeddings | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11 all-minilm-l6-v2``` |
| whisper-base | Audio to Text | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core whisper-base``` |
| rhasspy-voice-en-us-amy | Text to Audio | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11-core rhasspy-voice-en-us-amy``` |
| coqui | Text to Audio | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11 coqui``` |
| bark | Text to Audio | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11 bark``` |
| vall-e-x | Text to Audio | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda11 vall-e-x``` |

{{% /tab %}}
{{% tab name="GPU (CUDA 12)" %}}

> To check which CUDA version is available on your system, run `nvidia-smi` or `nvcc --version`.

| Model | Category | Docker command |
| --- | --- | --- |
| [phi-2](https://huggingface.co/microsoft/phi-2) | LLM | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12-core phi-2``` |
| [llava](https://github.com/SkunkworksAI/BakLLaVA) | Multimodal LLM | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12-core llava``` |
| [mistral-openorca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca) | LLM | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12-core mistral-openorca``` |
| [bert-cpp](https://github.com/skeskinen/bert.cpp) | Embeddings | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12-core bert-cpp``` |
| [all-minilm-l6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | Embeddings | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12 all-minilm-l6-v2``` |
| whisper-base | Audio to Text | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12-core whisper-base``` |
| rhasspy-voice-en-us-amy | Text to Audio | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12-core rhasspy-voice-en-us-amy``` |
| coqui | Text to Audio | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12 coqui``` |
| bark | Text to Audio | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12 bark``` |
| vall-e-x | Text to Audio | ```docker run -ti -p 8080:8080 --gpus all localai/localai:{{< version >}}-cublas-cuda12 vall-e-x``` |

{{% /tab %}}
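Whichever container you start, LocalAI serves an OpenAI-compatible REST API on the published port. As a quick smoke test, assuming the phi-2 container from one of the tables above is already running on `localhost:8080` (the request body follows the standard OpenAI chat-completions format):

```
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "phi-2",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'
```

If the model is loaded, the response is a JSON chat-completion object with the generated text under `choices[0].message.content`.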
For example, to start LocalAI with phi-2, you can also pass the URL of a configuration file:

```
docker run -p 8080:8080 localai/localai:{{< version >}}-ffmpeg-core https://gist.githubusercontent.com/mudler/ad601a0488b497b69ec549150d9edd18/raw/a8a8869ef1bb7e3830bf5c0bae29a0cce991ff8d/phi-2.yaml
```

The file must be a valid LocalAI YAML configuration file; for the full syntax, see [advanced]({{%relref "advanced" %}}).
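As a rough illustration of the shape of such a file (field names follow LocalAI's model-config schema; the values and model URI below are examples, not the contents of the gist above):

```yaml
# Illustrative LocalAI model definition -- values are examples only.
name: phi-2               # model name exposed by the API
context_size: 2048        # prompt context window
f16: true                 # load weights as 16-bit floats
parameters:
  model: huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf  # example model URI
  temperature: 0.2
```

Starting the container with such a file (by URL, or by placing it in the image's models directory) makes the model available to the API under the configured `name`.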