This document contains performance benchmarks and security scan results for SILENTCHAIN AI across different AI models, providers, and target applications.
| Field | Value |
|---|---|
| Product | SILENTCHAIN AI Professional v1.1.0 |
| AI Provider | Ollama |
| Model | deepseek-v3.1:671b-cloud |
| Target(s) | aspnet.testinvicti.com |
| Metric | Value |
|---|---|
| Total Scan Time | 19.0 minutes |
| AI Requests | 137 |
| Avg Time/Request | 8.32s |
| Total Tokens | 138,742 |
| Avg Tokens/Request | 1,012 |
| Severity | Count |
|---|---|
| 🔴 High | 16 |
| 🟠 Medium | 37 |
| 🟡 Low | 63 |
| 🔵 Info | 35 |
| Total | 151 |
| Verified (Phase 2) | 20 |
| Metric | Value |
|---|---|
| URLs Processed | 78 |
| URLs Analyzed | 42 |
| Skipped (Dup) | 50 |
| Errors | 22 |
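The Skipped (Dup) row in the table above implies the scanner tracks previously seen URLs and only analyzes each one once. This is not SILENTCHAIN's actual implementation, but a minimal sketch of how such counting could work, assuming a simple seen-set:

```python
def crawl_stats(urls):
    """Count URLs sent for analysis vs. skipped as duplicates (illustrative only)."""
    seen = set()
    analyzed = skipped = 0
    for url in urls:
        if url in seen:
            skipped += 1      # duplicate: not re-analyzed
        else:
            seen.add(url)
            analyzed += 1     # first sighting: queued for AI analysis
    return analyzed, skipped

print(crawl_stats(["/login", "/search", "/login"]))  # (2, 1)
```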
More benchmarks will be added here as additional models, providers, and target applications are tested.
- OpenAI GPT-4 on aspnet.testinvicti.com
- Claude 3.5 Sonnet on aspnet.testinvicti.com
- Google Gemini 1.5 Pro on aspnet.testinvicti.com
- Ollama Llama 3.1 on various targets
- Ollama Qwen 2.5 Coder on various targets
- Comparative analysis across OWASP Juice Shop
- Performance testing on large-scale applications
- Burp Suite: Professional Edition
- SILENTCHAIN: Professional v1.1.0
- Network: Standard broadband connection
- Hardware: (To be documented per test)
- aspnet.testinvicti.com: ASP.NET vulnerable application for security testing
- More targets will be added in future benchmarks
- Total Scan Time: Wall-clock time from first request to last finding
- AI Requests: Number of API calls made to the AI provider
- Avg Time/Request: Average response time per AI analysis
- Total Tokens: Combined input + output tokens used
- Avg Tokens/Request: Average token consumption per request
- High Severity: Critical vulnerabilities requiring immediate attention
- Medium Severity: Important security issues with moderate risk
- Low Severity: Minor vulnerabilities or security weaknesses
- Info: Informational findings and security notes
- Verified (Phase 2): Findings confirmed through active exploitation (Professional only)
- URLs Processed: Total unique URLs encountered
- URLs Analyzed: URLs that underwent AI analysis
- Skipped (Dup): Duplicate URLs not re-analyzed
- Errors: Requests that failed analysis
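As a quick sanity check, the averaged metrics defined above follow directly from the raw counts in the benchmark tables. A minimal sketch using the values reported for this run:

```python
# Raw values from the benchmark tables above.
total_scan_minutes = 19.0
ai_requests = 137
total_tokens = 138_742

# Derived metrics, per the definitions above.
avg_time_per_request = (total_scan_minutes * 60) / ai_requests  # seconds
avg_tokens_per_request = total_tokens // ai_requests            # integer average

print(f"{avg_time_per_request:.2f}s")  # 8.32s
print(avg_tokens_per_request)          # 1012
```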
If you'd like to contribute benchmark results:
- Run SILENTCHAIN on a public vulnerable application (DVWA, Juice Shop, WebGoat, etc.)
- Document your test environment:
  - SILENTCHAIN version
  - AI provider and model
  - Target application
  - Hardware specs (CPU, RAM)
  - Network conditions
- Export the benchmark report from SILENTCHAIN
- Submit it via a GitHub Issue or Pull Request
- All benchmarks are performed on publicly accessible test applications designed for security testing
- Results may vary based on:
  - AI model version and capabilities
  - Network latency
  - Hardware resources
  - Target application complexity
  - Burp Suite configuration
- For Ollama benchmarks: Local hardware significantly impacts performance
- For Cloud AI benchmarks: Network latency and API rate limits affect results
Generated by SILENTCHAIN AI
Copyright © 2026 SN1PERSECURITY LLC. All rights reserved.