ORCH — CLI runtime I built to orchestrate multiple coding agents (pairs with Tabby-backed inference) #4464
Replies: 1 comment
This is super timely. With Qwen3.6-27B dropping today (a 27B dense model hitting flagship-level coding benchmarks), the economics of multi-agent coding orchestration just got a lot more interesting. The cost model shift:

For your ORCH architecture specifically:

Curious: have you experimented with mixed model sizes across your agent team? What's the trade-off between consistency (all agents running the same model) and cost/latency (different models per role)?
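One way to express such a mixed-size team is a per-role model map. Everything below (the role names, model IDs, and the AgentConfig shape) is a hypothetical sketch for discussion, not ORCH's actual API:

```typescript
// Hypothetical per-role model assignment for a mixed-size agent team.
// Role names and model IDs are illustrative assumptions.
type Role = "planner" | "coder" | "reviewer";

interface AgentConfig {
  role: Role;
  model: string;     // model served by the inference backend (e.g. Tabby)
  maxRetries: number;
}

// One possible trade-off: a larger model where quality matters most
// (review), smaller/faster models for high-volume coding and planning.
const team: AgentConfig[] = [
  { role: "planner",  model: "large-model", maxRetries: 2 },
  { role: "coder",    model: "small-model", maxRetries: 3 },
  { role: "reviewer", model: "large-model", maxRetries: 2 },
];

function modelFor(role: Role): string {
  const agent = team.find((a) => a.role === role);
  if (!agent) throw new Error(`no agent configured for role: ${role}`);
  return agent.model;
}

console.log(modelFor("coder"));
```

The appeal of a map like this is that consistency vs. cost becomes a one-line config change per role rather than a structural decision.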
What I built
ORCH — a TypeScript CLI runtime for orchestrating teams of AI coding agents from a single terminal.
How it relates to Tabby
Many of us use Tabby as the inference backend while coordinating multiple coding agents on top. The problem I kept running into: agents would "complete" a task but the output was never reviewed — it just moved to done silently.
ORCH adds a formal state machine on top:
Tasks can't skip the review step. If an agent crashes, it auto-retries. Agents can send messages to each other via orch msg send.

Stack

@oxgeneral/orch npm package

Quick start
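A minimal sketch of the review-gated state machine described above. The state names and transition table here are assumptions for illustration; ORCH's real implementation may differ:

```typescript
// Review-gated task states with auto-retry on failure.
// State names are assumptions, not ORCH's actual identifiers.
type TaskState = "todo" | "in_progress" | "review" | "done" | "failed";

const transitions: Record<TaskState, TaskState[]> = {
  todo:        ["in_progress"],
  in_progress: ["review", "failed"],    // an agent cannot jump straight to done
  review:      ["done", "in_progress"], // reviewer approves or sends back
  failed:      ["in_progress"],         // crash -> auto-retry
  done:        [],
};

class Task {
  state: TaskState = "todo";
  retries = 0;

  transition(next: TaskState): void {
    if (!transitions[this.state].includes(next)) {
      throw new Error(`illegal transition: ${this.state} -> ${next}`);
    }
    if (this.state === "failed" && next === "in_progress") this.retries++;
    this.state = next;
  }
}

const t = new Task();
t.transition("in_progress");
t.transition("review"); // review cannot be skipped
t.transition("done");
```

Encoding the transitions as data rather than scattered if-statements is what makes "tasks can't skip review" an invariant instead of a convention.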
Repo: https://github.com/oxgeneral/ORCH
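A plausible install step, assuming the npm package above exposes an orch binary via a standard global install (the exact commands aren't in the post; check the repo README):

```shell
# Install the CLI globally (package name from the Stack section above).
npm install -g @oxgeneral/orch

# Agents can then message each other via the subcommand mentioned above,
# e.g. `orch msg send` (arguments omitted here; see the repo README).
```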