A powerful command-line tool for running multiple AI prompts in parallel using the GitHub Copilot SDK. Execute multiple prompts concurrently, stream responses in real time, and save consolidated results to markdown.
- Parallel Execution: Run multiple prompts concurrently with configurable parallelism
- Real-time Streaming: View responses as they are generated
- Flexible Input: Support for inline prompts or prompts from a file
- Multi-line Prompts: Create complex prompts spanning multiple lines
- Model Selection: Choose your preferred AI model (default: Claude Opus 4.5)
- Timeout Control: Configure timeout per prompt to handle long-running tasks
- Skill Integration: Load custom skills from specified directories
- Markdown Output: Automatically save all results to a timestamped markdown file
- Error Handling: Graceful error handling with detailed error reporting
- .NET 10.0 or later
- GitHub Copilot SDK access
- Clone this repository:
  ```bash
  git clone https://github.com/ManishJayaswal/multi-cli-launcher.git
  cd multi-cli-launcher
  ```
- Build the project:
  ```bash
  dotnet build
  ```
- Run the tool:
  ```bash
  dotnet run -- [options] [prompts...]
  ```

Or build and run as a standalone executable:

```bash
dotnet publish -c Release
./bin/Release/net10.0/multi-cli-launcher [options] [prompts...]
```

Usage:

```bash
multi-cli-launcher [options] [prompts...]
```

| Option | Short | Description | Default |
|---|---|---|---|
| `--model <name>` | `-m` | AI model to use | Claude Opus 4.5 |
| `--file <path>` | `-f` | Path to prompts file | - |
| `--max-parallel <n>` | `-p` | Maximum concurrent sessions | 4 |
| `--output-dir <path>` | `-o` | Output directory for results | `./output` |
| `--timeout <minutes>` | `-t` | Timeout per prompt in minutes | 10 |
| `--skills <dir>` | `-s` | Directory containing skill files (can be repeated) | - |
| `--help` | `-h` | Show help message | - |
Run inline prompts:

```bash
multi-cli-launcher "What is 2+2?" "Explain gravity in simple terms"
```

Run prompts from a file:

```bash
multi-cli-launcher -f prompts.txt
```

Specify a model:

```bash
multi-cli-launcher -m "Claude Opus 4.5" -f prompts.txt
```

Limit parallelism and set the output directory:

```bash
multi-cli-launcher -p 2 -o ./results "Hello world" "Explain AI"
```

Increase the timeout for long-running prompts:

```bash
multi-cli-launcher -t 30 "Analyze this large dataset..."
```

Load custom skills:

```bash
multi-cli-launcher -s ~/.copilot/skills "Use my custom skill"
```

Combine options:

```bash
multi-cli-launcher -f prompts.txt -m "Claude Opus 4.5" -p 3 -o ./output -t 15
```

Create a text file with your prompts. The format supports:
- Comments: Lines starting with `#` (except directives)
- Model Directive: Use `#model: <model-name>` at the top to override the default model
- Multi-line Prompts: Prompts can span multiple lines
- Prompt Delimiter: Use `---` to separate prompts
```text
#model: Claude Opus 4.5
# This is a comment
What is the capital of France?
---
# Multi-line prompt example
Explain the following concepts in simple terms:
- Recursion
- Big O notation
- Stack vs Heap
---
Write a short haiku about programming.
```
- Model Directive: Must appear before any prompts, prefixed with `#model:`
- Comments: Lines starting with `#` (except `#model:`) are ignored
- Delimiters: Use `---` on its own line to separate prompts
- Whitespace: Leading and trailing whitespace is trimmed from each prompt
- Empty Prompts: Empty sections (containing only comments/whitespace) are skipped
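The parsing rules above can be sketched as follows. This is an illustrative example in Python, not the tool's actual implementation (which is written in C#); the function and variable names are hypothetical.

```python
def parse_prompt_file(text, default_model="Claude Opus 4.5"):
    """Apply the prompt-file rules: #model: directive before any prompt,
    # comments ignored, --- as delimiter, whitespace trimmed, empties skipped."""
    model = default_model
    prompts, current = [], []
    seen_prompt = False
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("#model:") and not seen_prompt:
            # Directive only counts before the first prompt line
            model = stripped[len("#model:"):].strip()
        elif stripped.startswith("#"):
            continue  # comment lines are ignored
        elif stripped == "---":
            if "".join(current).strip():
                prompts.append("\n".join(current).strip())
            current = []
        else:
            if stripped:
                seen_prompt = True
            current.append(line)
    if "".join(current).strip():  # flush the final section
        prompts.append("\n".join(current).strip())
    return model, prompts
```

Sections that contain only comments or whitespace produce no prompt, matching the "Empty Prompts" rule.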
The tool displays:
- Configuration Summary: Shows model, parallelism, timeout, and prompt count
- Real-time Streaming: Each prompt's response streams to the console as it's generated
- Completion Status: Shows success/error count when all prompts complete
Example console output:
```text
Model: Claude Opus 4.5
Max Parallel: 4
Timeout: 10 minutes
Output Directory: ./output
Total Prompts: 3

[1] What is the capital of France?
[2] Explain the following concepts in simple ter...
[3] Write a short haiku about programming.

============================================================
## Prompt 1 (streaming...)
------------------------------------------------------------
What is the capital of France?
------------------------------------------------------------
### Response

The capital of France is Paris...
============================================================
[Prompt 1 completed]

Completed: 3 successful, 0 failed
Output saved to: ./output/Multi-CLI-Launcher-Output-2026-01-26-120530.md
```
Results are saved to a timestamped markdown file in the output directory:
- Filename format: `Multi-CLI-Launcher-Output-YYYY-MM-DD-HHmmss.md`
- Contains all prompts and their responses
- Errors are included in code blocks
- Easy to share and review
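The timestamped filename pattern above could be produced like this (a Python sketch for illustration only; the `output_filename` helper is hypothetical, not part of the project):

```python
from datetime import datetime

def output_filename(now=None):
    """Build a name like Multi-CLI-Launcher-Output-2026-01-26-120530.md."""
    now = now or datetime.now()
    return now.strftime("Multi-CLI-Launcher-Output-%Y-%m-%d-%H%M%S.md")
```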
The tool is built with several key components:
- ArgumentParser: Parses command-line arguments into configuration
- PromptFileReader: Reads and parses prompts from files with directive support
- PromptExecutor: Executes prompts using GitHub Copilot SDK with streaming
- OutputCoordinator: Coordinates output from multiple parallel tasks using channels
- OutputConsumer: Consumes and formats output for console and markdown
- Concurrency Control: Uses `SemaphoreSlim` to limit parallel executions
- Streaming Support: Real-time streaming of responses to console
- Channel-based Coordination: Prevents output interleaving using .NET Channels
- First-Complete-First-Print: Displays results as they complete, not in order
- Independent Sessions: Each prompt runs in its own isolated session
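The concurrency pattern described above can be sketched in Python with `asyncio`: a semaphore caps how many prompts run at once, while a queue serializes results so output never interleaves and appears in completion order. The real tool uses `SemaphoreSlim` and .NET Channels; all names below are illustrative, not taken from the project.

```python
import asyncio

async def run_all(prompts, worker, max_parallel=4):
    sem = asyncio.Semaphore(max_parallel)  # limits concurrent sessions
    queue = asyncio.Queue()                # serializes output across tasks

    async def run_one(index, prompt):
        async with sem:                    # each prompt gets its own session
            result = await worker(prompt)
        await queue.put((index, result))

    tasks = [asyncio.create_task(run_one(i, p)) for i, p in enumerate(prompts)]
    results = []
    for _ in prompts:
        results.append(await queue.get())  # first-complete-first-print
    await asyncio.gather(*tasks)
    return results
```

Because results are drained from the queue as they arrive, a fast prompt is reported before a slow one regardless of submission order.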
The project includes comprehensive unit tests:
```bash
# Run all tests
dotnet test

# Run tests with verbose output
dotnet test -v n

# Run a specific test class
dotnet test --filter "FullyQualifiedName~ArgumentParserTests"
```

Test coverage includes:
- Argument parsing
- Prompt file reading
- Output coordination
- Output consumption
Contributions are welcome! Please feel free to submit issues or pull requests.
[Specify your license here]
Manish Jayaswal
Built with the GitHub Copilot SDK