Liwx1014/VizAudit

Project Overview

VizAudit is a web-based visualization and analysis platform for model predictions. It supports reviewing the results of YOLO-series (and similar) detection models to efficiently identify and correct mislabeled and missed annotations, improving dataset quality.

Core Highlights:

  • Universal Compatibility: Supports multiple model formats (YOLO, DEIM, etc.) and NMS/WBF box fusion
  • Visual Review: Web-based intuitive comparison of model predictions vs. ground truth
  • Interactive Error Correction: Easily delete incorrect labels and supplement missed ones
  • Quality Closed Loop: Establishes an iterative "predict-analyze-correct-retrain" workflow
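
The NMS step mentioned above can be sketched as greedy suppression over [x1, y1, x2, y2, score] boxes. This is a minimal illustration; the function names and the 0.5 threshold are not taken from this project's code:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes that overlap it too much."""
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)  # sort by score, descending
    kept = []
    for b in boxes:
        if all(iou(b, k) < iou_thresh for k in kept):
            kept.append(b)
    return kept
```

WBF, by contrast, averages the coordinates of overlapping boxes weighted by their confidence instead of discarding them.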

Step 1: Generate ground truth and prediction txt files with generate.py

  1. Generate prediction results and save them as txt files (in absolute, not normalized, coordinates). The model name can be specified with -m.
    • python3 generate.py -mo pred -m deim -c projects/x.yaml
  2. For YOLO models, convert the predictions to XML format first, then proceed as in step 1.
  3. Generate ground truth (gt) from the annotated XML files (class name, top-left coordinates, bottom-right coordinates) and save it as txt files in the datasets directory under the current path.
    • python3 generate.py -mo gt -c projects/x.yaml -m yolo
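
The txt files described above can be parsed as sketched below. The exact column order, especially the position of the confidence score in prediction files, is an assumption based on the description, not the project's documented layout:

```python
def parse_gt_line(line):
    """Parse a ground-truth line: 'classname x1 y1 x2 y2' (absolute pixel coords)."""
    name, x1, y1, x2, y2 = line.split()
    return name, [float(x1), float(y1), float(x2), float(y2)]

def parse_pred_line(line):
    """Parse a prediction line: 'classname confidence x1 y1 x2 y2' (assumed layout)."""
    name, conf, x1, y1, x2, y2 = line.split()
    return name, float(conf), [float(x1), float(y1), float(x2), float(y2)]
```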

Step 2: Calculate metrics with pascalvoc.py

  1. python pascalvoc.py -g datasets/yiwu_6cls/gt -p datasets/yiwu_6cls/deim_pred -c projects/x.yaml --deep-analysis tower_crane

    The above command compares the prediction results with the ground truth, generates detailed metric information, and writes structured files describing each FP and FN for the class passed to --deep-analysis (tower_crane here).
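
Conceptually, the FP/FN breakdown comes from greedy IoU matching between predictions and ground truth. The sketch below is an illustration of that idea, not pascalvoc.py's actual implementation; the dict layout and 0.5 threshold are assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter
    return inter / union if union else 0.0

def classify(preds, gts, iou_thresh=0.5):
    """Match predictions (highest confidence first) to unused same-class GT boxes.
    Matched predictions are TPs, unmatched predictions are FPs (mislabels),
    and unmatched GT boxes are FNs (missed labels)."""
    preds = sorted(preds, key=lambda p: p["conf"], reverse=True)
    used, tp, fp = set(), [], []
    for p in preds:
        best_i, best_v = None, iou_thresh
        for i, g in enumerate(gts):
            if i in used or g["cls"] != p["cls"]:
                continue
            v = iou(p["box"], g["box"])
            if v > best_v:
                best_i, best_v = i, v
        if best_i is None:
            fp.append(p)
        else:
            used.add(best_i)
            tp.append(p)
    fn = [g for i, g in enumerate(gts) if i not in used]
    return tp, fp, fn
```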

Step 3: Analyze false positives and false negatives with web_service.py

  1. python3 web_service.py -j datasets/smoke_yolo_v11/analysis/structured_data
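
The web service reads the structured analysis files from the directory passed to -j and renders them in the browser. A minimal sketch of the loading side, assuming the structured_data directory simply holds JSON files (the layout and field names are assumptions, not the project's schema):

```python
import json
from pathlib import Path

def load_structured_data(root):
    """Collect every JSON file under root into {relative_path: parsed_content}."""
    root = Path(root)
    data = {}
    for f in sorted(root.rglob("*.json")):
        data[str(f.relative_to(root))] = json.loads(f.read_text(encoding="utf-8"))
    return data
```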

About

Web-based Model Detection Prediction Quality Control Platform
