Commit 4912b90

Update openai.md (huggingface#1730)
I was able to set `multimodal: true` when using an OpenAI-compatible backend running Qwen2-VL on a vLLM OpenAI-like server.
1 parent: 5b82962

File tree

1 file changed (+1, −1):
  • docs/source/configuration/models/providers/openai.md


docs/source/configuration/models/providers/openai.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 | Feature                     | Available |
 | --------------------------- | --------- |
 | [Tools](../tools)           | No        |
-| [Multimodal](../multimodal) | No        |
+| [Multimodal](../multimodal) | Yes       |

 Chat UI can be used with any API server that supports OpenAI API compatibility, for example [text-generation-webui](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/openai), [LocalAI](https://github.com/go-skynet/LocalAI), [FastChat](https://github.com/lm-sys/FastChat/blob/main/docs/openai_api.md), [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), and [ialacol](https://github.com/chenhunghan/ialacol) and [vllm](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html).
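
To illustrate what the commit describes, here is a minimal sketch of a chat-ui `MODELS` entry pointing at a vLLM OpenAI-compatible server running Qwen2-VL with `multimodal` enabled. The model name, host, and port are assumptions for illustration, not part of this commit:

```ini
# .env.local — hypothetical example; adjust name/baseURL to your deployment
MODELS=`[
  {
    "name": "Qwen/Qwen2-VL-7B-Instruct",
    "multimodal": true,
    "endpoints": [
      {
        "type": "openai",
        "baseURL": "http://localhost:8000/v1"
      }
    ]
  }
]`
```

The backend itself would be started separately with vLLM's OpenAI-compatible server (e.g. `python -m vllm.entrypoints.openai.api_server --model Qwen/Qwen2-VL-7B-Instruct`), per the vLLM docs linked above.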
