
# SILENTCHAIN AI - Benchmark Report

This document contains performance benchmarks and security scan results for SILENTCHAIN AI across different AI models, providers, and target applications.


## Benchmark Results

### Test 1: Ollama DeepSeek-V3.1 on ASP.NET TestInvicti

| Field | Value |
|---|---|
| Product | SILENTCHAIN AI Professional v1.1.0 |
| AI Provider | Ollama |
| Model | deepseek-v3.1:671b-cloud |
| Target(s) | aspnet.testinvicti.com |

#### Performance

| Metric | Value |
|---|---|
| Total Scan Time | 19.0 minutes |
| AI Requests | 137 |
| Avg Time/Request | 8.32s |
| Total Tokens | 138,742 |
| Avg Tokens/Request | 1,012 |

#### Findings

| Severity | Count |
|---|---|
| 🔴 High | 16 |
| 🟠 Medium | 37 |
| 🟡 Low | 63 |
| 🔵 Info | 35 |
| **Total** | **151** |
| Verified (Phase 2) | 20 |

#### Scan Coverage

| Metric | Value |
|---|---|
| URLs Processed | 78 |
| URLs Analyzed | 42 |
| Skipped (Dup) | 50 |
| Errors | 22 |

## Benchmark History

More benchmarks will be added here as additional models, providers, and target applications are tested.

### Planned Tests

- OpenAI GPT-4 on aspnet.testinvicti.com
- Claude 3.5 Sonnet on aspnet.testinvicti.com
- Google Gemini 1.5 Pro on aspnet.testinvicti.com
- Ollama Llama 3.1 on various targets
- Ollama Qwen 2.5 Coder on various targets
- Comparative analysis across OWASP Juice Shop
- Performance testing on large-scale applications

## Methodology

### Test Environment

- **Burp Suite**: Professional Edition
- **SILENTCHAIN**: Professional v1.1.0
- **Network**: Standard broadband connection
- **Hardware**: (To be documented per test)

### Test Targets

- **aspnet.testinvicti.com**: ASP.NET vulnerable application for security testing
- More targets will be added in future benchmarks

## Metrics Explained

### Performance Metrics

- **Total Scan Time**: Wall-clock time from first request to last finding
- **AI Requests**: Number of API calls made to the AI provider
- **Avg Time/Request**: Average response time per AI analysis
- **Total Tokens**: Combined input + output tokens used
- **Avg Tokens/Request**: Average token consumption per request
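The per-request averages are derived from the raw totals in the Performance table. A quick sanity check against Test 1's numbers (values hard-coded from the table above; this assumes the token average is truncated rather than rounded, which matches the reported 1,012):

```python
# Sanity-check Test 1's derived performance metrics against its raw totals.
total_scan_seconds = 19.0 * 60   # Total Scan Time: 19.0 minutes
ai_requests = 137                # AI Requests
total_tokens = 138_742           # Total Tokens (input + output)

avg_time = total_scan_seconds / ai_requests   # seconds per AI request
avg_tokens = total_tokens // ai_requests      # integer tokens per request

print(f"Avg Time/Request:   {avg_time:.2f}s")  # 8.32s
print(f"Avg Tokens/Request: {avg_tokens:,}")   # 1,012
```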

### Finding Metrics

- **High Severity**: Critical vulnerabilities requiring immediate attention
- **Medium Severity**: Important security issues with moderate risk
- **Low Severity**: Minor vulnerabilities or security weaknesses
- **Info**: Informational findings and security notes
- **Verified (Phase 2)**: Findings confirmed through active exploitation (Professional only)

### Coverage Metrics

- **URLs Processed**: Total unique URLs encountered
- **URLs Analyzed**: URLs that underwent AI analysis
- **Skipped (Dup)**: Duplicate URLs not re-analyzed
- **Errors**: Requests that failed analysis

## Contributing Benchmarks

If you'd like to contribute benchmark results:

1. Run SILENTCHAIN on a public vulnerable application (DVWA, Juice Shop, WebGoat, etc.)
2. Document your test environment:
   - SILENTCHAIN version
   - AI provider and model
   - Target application
   - Hardware specs (CPU, RAM)
   - Network conditions
3. Export the benchmark report from SILENTCHAIN
4. Submit it via a GitHub Issue or Pull Request

## Notes

- All benchmarks are performed on publicly accessible test applications designed for security testing
- Results may vary based on:
  - AI model version and capabilities
  - Network latency
  - Hardware resources
  - Target application complexity
  - Burp Suite configuration
- For Ollama benchmarks: local hardware significantly impacts performance
- For cloud AI benchmarks: network latency and API rate limits affect results

*Generated by SILENTCHAIN AI*

Copyright © 2026 SN1PERSECURITY LLC. All rights reserved.