StorageLens is a .NET 8 microservices SaaS foundation for storage analytics and duplicate detection across local and NAS paths.
- Product home and routing:
  - `/` and `/Index` now open the marketing-style product home.
  - Global View is the primary operational dashboard entry point from the landing experience.
  - The old sidebar Home link pattern has been removed from the app shell.
- Product-led acquisition UX:
  - The product home now includes demo request capture, pilot/demo CTAs, FAQ/contact sections, and quick links into app modules.
- Operations UX updates:
  - Reports now supports CSV export for oldest files, largest files, common file types, and duplicate-heavy locations.
  - Duplicates now opens in Interactive mode by default, while keeping the Original view available.
- Pricing alignment:
  - Public pricing now reflects the active catalogue: Starter ($99), Professional ($499), and Enterprise (contact sales).
- Frontend module organization:
  - Profile page initialization logic was split from `site.js` into a dedicated `profile.js`.
Phase 2 upgrades the MVP into a real backend-capable workflow:
- real file system scan execution
- batch metadata ingestion
- SHA-256 hashing pipeline
- duplicate recalculation
- orchestration with retry/resume hooks
- `StorageLens.Web` (Razor Pages frontend)
- `StorageLens.Services.Locations`
- `StorageLens.Services.ScanJobs`
- `StorageLens.Services.FileInventory`
- `StorageLens.Services.Duplicates`
- `StorageLens.Services.Analytics`
- `StorageLens.Services.Scanner` (new)
- `StorageLens.Services.Hashing` (new)
- `StorageLens.Services.Orchestrator` (new)
- `StorageLens.Shared.Contracts`
- `StorageLens.Shared.Infrastructure`
Each service owns its own schema and DbContext:
`locations`, `scanjobs`, `fileinventory`, `duplicates`, `analytics`, `scanner`, `hashing`, `orchestration`
Cross-service access is API-driven via HttpClient.
- Web: `http://localhost:5001`
- Locations: `http://localhost:5101`
- ScanJobs: `http://localhost:5102`
- FileInventory: `http://localhost:5103`
- Duplicates: `http://localhost:5104`
- Analytics: `http://localhost:5105`
- Scanner: `http://localhost:5106`
- Hashing: `http://localhost:5107`
- Orchestrator: `http://localhost:5108`
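For local smoke-testing, the ports above can be probed with a short script. This is an illustrative sketch (not part of the repo), assuming each service exposes the `/health` endpoint most services provide:

```python
import urllib.request
import urllib.error

# Local service ports from the README's port map.
SERVICES = {
    "Web": 5001,
    "Locations": 5101,
    "ScanJobs": 5102,
    "FileInventory": 5103,
    "Duplicates": 5104,
    "Analytics": 5105,
    "Scanner": 5106,
    "Hashing": 5107,
    "Orchestrator": 5108,
}

def health_url(port: int) -> str:
    """Build the /health URL for a locally running service."""
    return f"http://localhost:{port}/health"

def probe(port: int, timeout: float = 2.0) -> bool:
    """Return True if the service answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(health_url(port), timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    for name, port in SERVICES.items():
        print(f"{name:14} {'up' if probe(port) else 'down'}")
```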
- User starts a scan from Web (`Scan Jobs` or `Storage Locations`).
- Web calls Orchestrator.
- Orchestrator creates workflow + scan job (correlation ID).
- Scanner recursively enumerates files and posts metadata batches to FileInventory.
- Hashing pulls pending files and computes SHA-256 streaming hashes.
- Duplicates recalculates groups (`hash count > 1`) and marks duplicate flags.
- Analytics endpoints aggregate current data for dashboard/reporting.
- ScanJobs lifecycle is updated across stages.
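The Hashing and Duplicates stages boil down to streaming SHA-256 plus grouping files by content hash. A minimal, self-contained Python sketch of that core logic (illustrative only, not the actual service code):

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def stream_sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large files are never loaded fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def duplicate_groups(root: Path) -> dict[str, list[Path]]:
    """Group files under root by content hash; keep only groups with count > 1."""
    by_hash: defaultdict[str, list[Path]] = defaultdict(list)
    for path in root.rglob("*"):
        if path.is_file():
            by_hash[stream_sha256(path)].append(path)
    return {h: paths for h, paths in by_hash.items() if len(paths) > 1}
```

The real pipeline differs in that hashing and grouping happen in separate services over persisted metadata, but the `hash count > 1` rule is the same.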
- Configure local Azure SQL secrets (stored in user-secrets, not git):

```shell
pwsh ./scripts/set-azure-sql-user-secrets.ps1 -AzureSqlServer <your-azure-sql-server> -SqlUser <your-sql-user>
```

- Build:

```shell
dotnet restore StorageLens.sln
dotnet build StorageLens.sln
```

- Start services (separate terminals):

```shell
dotnet run --project src/StorageLens.Services.Locations
dotnet run --project src/StorageLens.Services.ScanJobs
dotnet run --project src/StorageLens.Services.FileInventory
dotnet run --project src/StorageLens.Services.Duplicates
dotnet run --project src/StorageLens.Services.Analytics
dotnet run --project src/StorageLens.Services.Scanner
dotnet run --project src/StorageLens.Services.Hashing
dotnet run --project src/StorageLens.Services.Orchestrator
dotnet run --project src/StorageLens.Web
```

- Open `http://localhost:5001`.
- Add a storage location in the UI that points to a real local folder or UNC path.
- A sample folder is included at `sample-scan`.
- For local testing, use a location path like: `A:\source\_VSCode\StorageLens\sample-scan`
- Dockerfiles are included for Web and all services.
- A compose stack is provided in `docker-compose.yml`.
- Compose includes:
- SQL Server
- RabbitMQ (for future message-bus integration)
- all StorageLens services + Web
Run compose:

```shell
cp .env.example .env
docker compose up --build
```

Orchestrator:
- `POST /api/orchestrator/scans/start/{locationId}`
- `GET /api/orchestrator/workflows/{id}`
- `POST /api/orchestrator/workflows/{id}/resume`
- `POST /api/orchestrator/workflows/{id}/retry`

Scanner:
- `POST /api/scanner/execute`
- `GET /api/scanner/executions/{id}`

Hashing:
- `POST /api/hashing/run/{scanJobId}?correlationId={id}`
- `GET /api/hashing/executions/{id}`

FileInventory:
- `POST /api/files/batch`
- `GET /api/files/pending-hash`
- `POST /api/files/{id}/hash-result`
- `POST /api/files/mark-duplicates/{scanJobId}`

Duplicates:
- `POST /api/duplicates/recalculate/{scanJobId}?correlationId={id}`
- `GET /api/duplicates/summary`
- `GET /api/duplicates/groups/{id}`

ScanJobs:
- `POST /api/scanjobs`
- `PUT /api/scanjobs/{id}/status`
- `PUT /api/scanjobs/{id}/progress`
- `GET /api/scanjobs/{id}`
- `GET /api/scanjobs/recent`
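As a hypothetical client sketch for the Orchestrator endpoints above, assuming the local port map; the JSON `status` field and its values are assumptions about the payload, not a documented contract:

```python
import json
import time
import urllib.request

ORCHESTRATOR = "http://localhost:5108"  # local Orchestrator port from the README

def scan_start_url(location_id: str) -> str:
    """URL for POST /api/orchestrator/scans/start/{locationId}."""
    return f"{ORCHESTRATOR}/api/orchestrator/scans/start/{location_id}"

def workflow_url(workflow_id: str) -> str:
    """URL for GET /api/orchestrator/workflows/{id}."""
    return f"{ORCHESTRATOR}/api/orchestrator/workflows/{workflow_id}"

def start_scan(location_id: str) -> dict:
    """Kick off a scan workflow and return the JSON response body."""
    req = urllib.request.Request(scan_start_url(location_id), method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def wait_for_workflow(workflow_id: str, poll_seconds: float = 2.0) -> dict:
    """Poll the workflow until a terminal status (assumed names) is reached."""
    while True:
        with urllib.request.urlopen(workflow_url(workflow_id)) as resp:
            workflow = json.loads(resp.read())
        if workflow.get("status") in {"Completed", "Failed"}:
            return workflow
        time.sleep(poll_seconds)
```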
Most services expose a `/health` endpoint.
- Phase 2 is intentionally practical and API-orchestrated.
- RabbitMQ is included in compose for future transition to event-driven flows; current workflow uses HTTP + persisted orchestration state.
- Scanning is read-only (no file mutation/deletion).
This repo now includes Azure deployment scaffolding:
- `azure.yaml` (AZD service map)
- `infra/main.bicep` (Container Apps environment + ACR + identity + app topology)
- `infra/main.parameters.json` (AZD environment variable bindings)
- `.azure/plan.copilotmd` (deployment plan)
- Azure CLI (`az`)
- Azure Developer CLI (`azd`)
- Docker
```shell
azd auth login
azd env new <env-name>
azd env set AZURE_LOCATION <region>
azd env set SHARED_SQL_CONNECTION_STRING "<azure-sql-connection-string>"
azd provision --preview
azd up
```

- `StorageLens.Web` is exposed publicly; backend services are internal-only in Container Apps.
- Service-to-service URLs are wired through internal Container Apps FQDN values.
- Initial infra uses a placeholder base image in Bicep; `azd up` updates services to your built images.
- `useDevelopmentCostProfile=true` (default) enables MVP cost controls: ACR Basic, reduced log retention, and scale-to-zero/minimal replicas.
Comprehensive documentation is available in the docs/ folder:
- Latest Review Refresh — Overall review updates and score deltas (see the Reviews section for review artifacts)
- Developer Rebuild Checklist — Complete 18-phase checklist for rebuilding the application from scratch (estimated 4-8 hours)
- Developer Guide — Codebase overview, workflow, and engineering practices
- Architecture Documentation — Microservices architecture, data model, infrastructure, security, and system overview
- API Specification — Detailed endpoint documentation
- Deployment Guide — Local and Azure deployment procedures
- Contributing Guide — Pull request workflow and requirements
StorageLens includes specialized Copilot agents and skills configured for productive development:
- Copilot Agents Guide — 6 specialized agents for different development tasks
- Copilot Configuration Reference — For maintainers updating Copilot configuration
Ask Copilot for help with your development task—it understands StorageLens architecture and will guide you!