# Run the full version of OmAgent
OmAgent now supports switching freely between the Full and Lite versions. The differences between the two versions are as follows:
- The Full version has better concurrency performance, lets you view workflows and run logs in the orchestration system's GUI, and supports more device types (e.g. smartphone apps). Note that running the Full version requires deploying middleware dependencies with Docker.
- The Lite version is suitable for developers who want to get started faster. It skips installing and deploying Docker and is well suited to rapid prototyping and debugging.

## Instructions for using the Full version
### 🛠️ How To Install
- python >= 3.10
- Install omagent_core
  Use pip to install the latest release of omagent_core:
  ```bash
  pip install omagent-core
  ```
  Or install the latest version from the source code:
  ```bash
  pip install -e omagent-core
  ```
- Set up the Conductor server with Docker Compose. The compose stack includes conductor-server, Elasticsearch, and Redis:
  ```bash
  cd docker
  docker-compose up -d
  ```
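Once the containers are up, you can optionally check that the middleware is reachable before moving on. The snippet below is a minimal sketch, assuming the compose file exposes Redis on `localhost:6379` and the Conductor server on `localhost:8080` with a `/health` endpoint (adjust if your ports or paths differ), and that the `redis` and `requests` Python packages are installed.
```python
# check_middleware.py - minimal sanity-check sketch; the ports and the /health path
# are assumptions based on common defaults, not guaranteed by this repository.
import redis
import requests

# Ping Redis; raises an exception if the server is unreachable.
r = redis.Redis(host="localhost", port=6379)
print("Redis ping:", r.ping())

# Query the Conductor server's health endpoint.
resp = requests.get("http://localhost:8080/health", timeout=5)
print("Conductor health:", resp.status_code, resp.text[:200])
```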
### 🚀 Quick Start
#### Configuration

The container.yaml file manages dependencies and settings for the different components of the system. To set up your configuration:

1. Generate the container.yaml file:
   ```bash
   cd examples/step1_simpleVQA
   python compile_container.py
   ```
   This will create a container.yaml file with default settings under `examples/step1_simpleVQA`.

2. Configure your LLM settings in `configs/llms/gpt.yml`:

   - Set your OpenAI API key, or a compatible endpoint, through environment variables or by directly modifying the yml file:
   ```bash
   export custom_openai_key="your_openai_api_key"
   export custom_openai_endpoint="your_openai_endpoint"
   ```
   You can also use a locally deployed Ollama to call your own language model; the tutorial is [here](docs/concepts/models/Ollama.md).

3. Update settings in the generated `container.yaml` (see the sanity-check sketch below):
   - Configure the Redis connection settings, including host, port, and credentials, in both the `redis_stream_client` and `redis_stm_client` sections.
   - Update the Conductor server URL under the `conductor_config` section.
   - Adjust any other component settings as needed.

For more information about the container.yaml configuration, please refer to the [container module](./docs/concepts/container.md).
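If you want a quick check that the sections mentioned in step 3 are present in your generated file before editing them, the sketch below can help. It is illustrative only: it assumes you run it from the repository root and that `container.yaml` sits under `examples/step1_simpleVQA` as created in step 1.
```python
# check_container_yaml.py - quick sanity-check sketch; run from the repository root.
from pathlib import Path

text = Path("examples/step1_simpleVQA/container.yaml").read_text()

# These are the sections this README asks you to review; their exact nesting
# depends on how compile_container.py lays out the file, so we only check
# that they appear somewhere in the document.
for section in ("conductor_config", "redis_stream_client", "redis_stm_client"):
    status = "found" if section in text else "NOT FOUND - regenerate or check the file"
    print(f"{section}: {status}")
```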
#### Run the demo

1. Set OmAgent to the Full version by setting the environment variable `OMAGENT_MODE`:
   ```bash
   export OMAGENT_MODE=full
   ```
   or
   ```python
   import os
   os.environ["OMAGENT_MODE"] = "full"
   ```
2. Run the simple VQA demo with the webpage GUI:

   For WebpageClient usage, input and output are handled in the webpage:
   ```bash
   cd examples/step1_simpleVQA
   python run_webpage.py
   ```
   Open the webpage at `http://127.0.0.1:7860`; you will see the following interface:

   <img src="docs/images/simpleVQA_webpage.png" width="400"/>
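If you prefer to set the mode from Python rather than in your shell, a small launcher along the lines of the sketch below also works. Note that `run_full_demo.py` is hypothetical and not part of the repository; it simply exports `OMAGENT_MODE` before starting the demo so the child process inherits it.
```python
# run_full_demo.py - hypothetical convenience launcher, not part of the repository.
import os
import subprocess
import sys
from pathlib import Path

# Export the mode first; the spawned demo process inherits this environment variable.
os.environ["OMAGENT_MODE"] = "full"

demo_dir = Path("examples/step1_simpleVQA")  # run this script from the repository root
subprocess.run([sys.executable, "run_webpage.py"], cwd=demo_dir, check=True)
```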