Merged
123 changes: 19 additions & 104 deletions .agents/skills/signoz-docs-pr-review/SKILL.md
@@ -1,6 +1,6 @@
---
name: signoz-docs-pr-review
description: Review SigNoz documentation pull requests using CONTRIBUTING.md and this skill's docs-review rubric. Use when asked to review docs PRs, post inline findings, verify OpenTelemetry technical accuracy with sources, decide add-to-onboarding labeling, and write a concise checklist-based summary.
description: Review SigNoz documentation pull requests — post inline findings, verify OpenTelemetry technical accuracy with sources, decide add-to-onboarding labeling, and write a concise checklist-based summary. Use when asked to review docs PRs, check documentation changes, evaluate MDX content in data/docs, or assess any docs-related PR, even when the user just says "review this PR" and the changed files are documentation.
---

# SigNoz Docs PR Review
@@ -21,108 +21,23 @@ If a PR includes frontend code too, use this skill only for the docs part.

## Source of Truth

- Treat `CONTRIBUTING.md` at repo root as the single source of truth for docs standards.
- Apply rules; do not restate the entire guide in comments.
- Ignore date-related guideline checks during review.
The review standards live in the `contributing/` playbooks, not in this skill file. This skill orchestrates the review process; the playbooks define the policy.

- `contributing/docs-review.md` — review rubric, JTBD checklist, technical accuracy rules, onboarding-label policy, summary format
- `contributing/docs-authoring.md` — authoring standards, frontmatter rules, page structure, doc-type guidance, author checklist

Apply rules from those playbooks directly. Do not restate the entire guide in comments. Ignore date-related guideline checks during review.

## Review Process

1. Identify docs files changed in the PR.
2. Identify related discoverability files that should change with the docs when relevant (`constants/docsSideNav.ts` for sidebar visibility, `constants/componentItems.ts` as the public entrypoint and `constants/componentItems/*.ts` as the source modules for listicle/overview visibility).
3. Read relevant sections in `CONTRIBUTING.md` (JTBD, patterns/components, happy path, hyperlinks, doc-type rules, docs checklist).
3. **Read `contributing/docs-review.md` and `contributing/docs-authoring.md` in full before starting the review.** These playbooks contain the JTBD rubric, authoring standards, onboarding-label policy, and summary format that drive every review decision. Reviewing without reading them first leads to missed checks and inconsistent feedback.
4. Identify intended user personas for each changed doc (for example: OTel beginner, platform engineer, app developer, SRE) from doc context.
5. Run a **JTBD-first pass** (mandatory) before technical verification.
6. Verify technical accuracy when claims involve OpenTelemetry behavior/configuration.
5. Run the JTBD-first pass defined in `contributing/docs-review.md` before technical verification.
6. Verify technical accuracy when claims involve OpenTelemetry behavior or configuration, following the source priority and citation rules in `contributing/docs-review.md`.
7. Post inline findings for concrete issues.
8. Post exactly one concise summary comment referencing Docs PR Checklist coverage.

## JTBD Priority Rubric (Mandatory)

Review each changed doc against these checks in order. If any check fails, raise a finding.

1. Intended personas
- List primary and secondary personas the doc appears to target.
- Confirm scope, assumptions, and language match those personas.
2. Primary job clarity
- The reader can quickly tell what problem this page solves.
- The page does not mix unrelated jobs into one mandatory flow.
3. Happy/common path focus
- Common path is easy to follow end-to-end.
- Tangential information is minimized or moved to optional/collapsible sections.
4. Time-to-first-success
- A clear default path exists and reaches first success without optional detours.
- Mandatory steps are minimal and in the right order.
5. Step clarity and concision
- Steps are concrete, unambiguous, and concise.
- Users can execute each step without guessing missing actions.
6. Minimal required steps
- The doc requires only what is necessary to complete the primary job.
- Non-essential actions are explicitly optional.
7. Recommended defaults vs advanced options
- Best/recommended configuration is presented as default.
- Advanced options are moved to the bottom, troubleshooting, collapsible sections, or next steps.
8. Beginner unblockers
- Any required attribute/config/concept has a direct "how to set this" step or link to a doc that explains how to set it.
- No critical prerequisite is implied without remediation guidance.
9. Symptom-to-action mapping
- Troubleshooting starts from user-visible symptoms and points to exact next actions.
- Failure modes are concrete (not generic "check your setup").
10. Success signal
- Validation tells users exactly where to check in SigNoz and what success looks like.
11. Follow-through
- Next steps help users complete the broader job (for example dashboards, alerts, deeper guides).
12. Helpful links (internal/external)
- Link to internal/external docs wherever they directly help completion of the current step.
- Avoid irrelevant link dumping.
13. Link health
- Added/edited links resolve (no dead links). Prefer canonical production paths.
14. Discoverability surface updates
- If a PR adds a new integration, data source, installation path, dashboard template, or similar doc that users should find from an existing docs listicle/overview card surface, verify the matching entry was added or updated in the relevant `constants/componentItems/*.ts` module and remains exported through `constants/componentItems.ts`.
- When the surfaced component uses icons, verify the relevant component `ICON_MAP` was updated too.
- Do not require this when no existing listicle/overview surface is relevant.

If a JTBD check cannot be validated from the PR context, explicitly call out the assumption and residual risk.
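Check 14's expected layout can be sketched as follows. The module name, item fields, and entrypoint path below are illustrative assumptions for review purposes, not the repo's actual code:

```typescript
// constants/componentItems/logsItems.ts — hypothetical source module
export interface ComponentItem {
  name: string;
  href: string;
  clickName: string;
}

// Each listicle/overview surface keeps its data in a module like this...
export const LOGS_ITEMS: ComponentItem[] = [
  {
    name: "Docker Logs",
    href: "/docs/userguide/collect_docker_logs/",
    clickName: "docker-logs-card",
  },
];

// ...and stays reachable through the public entrypoint, e.g. in
// constants/componentItems.ts:
//   export { LOGS_ITEMS } from "./componentItems/logsItems";
// If the surface renders icons, the matching ICON_MAP entry is updated too.
```

A reviewer only needs to confirm the entry exists in a source module, is exported through the entrypoint, and (when icons are used) has a matching icon key.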

## Technical Accuracy and Sources

When validating technical claims, use this source priority:

1. `https://opentelemetry.io/docs/*`
2. `https://github.com/open-telemetry/*` (official repos, READMEs, examples)
3. Other reputable sources only if official sources do not cover the claim

For each correction that depends on verification:

- Add a short citation in the comment as: `Source: <URL>`
- Keep citations precise and relevant to the claim
- Do not paste long excerpts

Prioritize verification for:

- Config keys and component names
- Receiver/exporter/processor names
- Environment variables and CLI flags
- APIs, semantic conventions, version compatibility, deprecations

If sources conflict, prefer the most recent official OpenTelemetry docs/repo over third-party content.

## Web Lookup Guidance

- Use targeted queries first (for example, `site:opentelemetry.io <topic>`).
- Fetch only the minimal pages needed to verify the exact claim.
- Summarize findings; avoid large copy-paste.

## Labeling Rule

If PR adds a **new** docs file (not only edits existing docs) that explains sending data to SigNoz Cloud (for example new instrumentation, new collector receiver flow, or new log collection method), add label:

- `add-to-onboarding`

Command:

```bash
gh pr edit <PR_NUMBER> --add-label "add-to-onboarding"
```
8. Post exactly one concise summary comment using the format in `contributing/docs-review.md`.

## Commenting Rules

Expand All @@ -149,20 +64,22 @@ Post one summary comment that includes:
1. Key findings grouped by severity (`P1`, `P2`, `P3`)
2. Intended personas and fit summary (who this doc serves and where fit is weak)
3. JTBD coverage summary (which mandatory JTBD checks failed or were at risk)
4. Checklist-oriented coverage summary (what failed/needs work)
4. Checklist coverage summary (what failed or needs work against `contributing/docs-authoring.md`)
5. Any open questions/assumptions
6. Whether onboarding label was applied (when relevant)
6. The onboarding-label result from `contributing/docs-review.md`

## Suggested Commands

```bash
# PR context
gh pr view <PR_NUMBER>
gh pr diff <PR_NUMBER>
gh api repos/<REPO>/pulls/<PR_NUMBER>/files --paginate

# changed docs and policy anchors
# changed docs and guidance
rg --files data/docs
rg -n "JTBD|happy path|Patterns and components|Hyperlinks|Docs PR Checklist" CONTRIBUTING.md
cat contributing/docs-authoring.md
cat contributing/docs-review.md

# scan for likely docs quality issues
rg -n "## Next steps|## Troubleshooting|KeyPointCallout|ToggleHeading|https?://|<[^>]+>" data/docs
@@ -173,8 +90,6 @@ curl -sI <URL>
```

## Guardrails

- Do not require a dedicated "Target Persona" section unless context truly needs it.
- Keep advanced options out of the mandatory path unless essential for first success.
- If a PR adds or changes docs MDX components or component-driven content patterns, verify both agent markdown handling (`utils/docs/agentMarkdownStubs.ts`) and rendered Copy Markdown behavior (`utils/docs/buildCopyMarkdownFromRendered.ts`) where relevant.
- Do not restate large parts of the playbooks in comments.
- Keep review feedback decision-oriented and immediately actionable.
- Do not mark a review complete if mandatory JTBD checks were skipped.
- Follow the JTBD, technical verification, and onboarding-label guidance from `contributing/docs-review.md`.
2 changes: 1 addition & 1 deletion .agents/skills/signoz-docs-pr-review/agents/openai.yaml
@@ -1,4 +1,4 @@
interface:
display_name: "SigNoz Docs PR Review"
short_description: "Review SigNoz docs PRs against CONTRIBUTING.md"
short_description: "Review SigNoz docs PRs against the contributing playbooks"
default_prompt: "Use $signoz-docs-pr-review to review this docs PR and return prioritized inline findings with a concise checklist summary."
34 changes: 16 additions & 18 deletions .agents/skills/signoz-website-frontend-pr-review/SKILL.md
@@ -1,6 +1,6 @@
---
name: signoz-website-frontend-pr-review
description: Review SigNoz frontend pull requests using this skill's rubric and project-specific standards from CONTRIBUTING.md. Use when asked to review JS/TS/React/Next.js changes for duplication, architecture, App Router best practices, performance, maintainability, and accessibility.
description: Review SigNoz frontend pull requests for duplication, architecture, App Router best practices, performance, maintainability, and accessibility. Use when asked to review JS/TS/React/Next.js changes, check components or hooks, evaluate frontend code quality, or review any PR whose changed files are under app/, components/, hooks/, utils/, or similar frontend paths.
---

# SigNoz Frontend PR Review
@@ -19,16 +19,16 @@ If a PR includes docs too, use this skill for code review only.

## Source of Truth

- Use this skill file as the frontend review rubric.
- Apply project-specific constraints from `CONTRIBUTING.md` for code standards.
This skill file defines the review rubric (the 13 categories below). Project-specific code conventions and verification commands live in `contributing/site-code.md` — read it before reviewing so you apply the canonical rules, not stale assumptions.

## Review Process

1. Get PR context and changed files.
2. Scan for high-impact issues first (duplication, architecture, performance).
3. Evaluate against the categories below.
4. Leave inline comments for specific issues only.
5. Post exactly one concise summary grouped by severity.
2. **Read `contributing/site-code.md` in full before starting the review.** It contains the project's icon policy, UI primitive expectations, componentItems data placement rules, async/DOM safety rules, MDX rendering constraints, dependency policy, and required verification commands. Reviewing without reading it first leads to missed project-specific findings.
3. Scan for high-impact issues first (duplication, architecture, performance).
4. Evaluate against the categories below.
5. Leave inline comments for specific issues only.
6. Post exactly one concise summary grouped by severity.

## Review Categories

@@ -116,19 +116,13 @@ If a PR includes docs too, use this skill for code review only.
- Check import order/organization.
- Flag circular dependencies.
- Avoid duplicate functionality from existing deps.
- Ensure new deps are justified (per `CONTRIBUTING.md`).
- Ensure new deps are justified (per `contributing/site-code.md`).

### 11) Project-specific rules (`CONTRIBUTING.md`)
### 11) Project-specific rules (`contributing/site-code.md`)

- Prefer existing icon libs (`lucide-react`, `react-icons`).
- Prefer existing UI primitives in `components/ui`.
- Keep types/constants co-located and exported appropriately.
- Avoid concurrent async invocations in handlers (loading/ref guards).
- Be deliberate with DOM cleanup/transforms.
- If a PR adds or changes MDX components used in `data/docs/**`, verify `utils/docs/agentMarkdownStubs.ts` still handles them, the agent-markdown tests/coverage remain valid, and rendered Copy Markdown behavior in `utils/docs/buildCopyMarkdownFromRendered.ts` still stays clean.
- Justify dependency additions.
- **Listicle / icon-card items must stay exported through `constants/componentItems.ts` and source data should live in `constants/componentItems/*.ts`**, not inside component files. Flag any `{ name, href, clickName }` arrays defined inline in a component.
- **Never use `slice()` with hardcoded indices to split a flat array into sub-sections.** If a constant has logically distinct sub-groups (for example AWS / Azure / GCP), each group must be a named sub-key (`cloud: { aws: [...], azure: [...], gcp: [...] }`). Adding or removing an item in one group silently shifts slice boundaries in others — flag this pattern as a `High` finding.
- Apply the project-specific rules from `contributing/site-code.md`.
- Pay extra attention to icon usage, existing UI primitives, `constants/componentItems*.ts` data placement, async handler safety, MDX rendering compatibility, and dependency justification.
- Treat hardcoded `slice()` boundaries for logical sub-sections as a `High` finding.
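The slice-boundary hazard named above can be shown in a few lines — the item names here are hypothetical, not taken from the repo:

```typescript
// Fragile: hardcoded slice boundaries silently break when any group grows.
const CLOUD_ITEMS = ["EC2", "EKS", "AKS", "GKE"];
const aws = CLOUD_ITEMS.slice(0, 2);   // breaks if an AWS item is added later
const azure = CLOUD_ITEMS.slice(2, 3); // boundary shifts with the AWS group
const gcp = CLOUD_ITEMS.slice(3);

// Robust: each logical group is a named sub-key, so additions stay local.
const cloud = {
  aws: ["EC2", "EKS"],
  azure: ["AKS"],
  gcp: ["GKE"],
};
```

Adding an item to the AWS group in the flat array silently moves "AKS" into the wrong slice; with named sub-keys the change stays local to one group — which is why the flat-slice form is flagged as `High`.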

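The "loading/ref guards" expectation for async handlers can be read as a framework-free sketch — `guardConcurrent` is a hypothetical helper for illustration, not repo code:

```typescript
// Wrap an async handler so re-entrant calls are dropped while one is in flight.
function guardConcurrent<T>(fn: () => Promise<T>): () => Promise<T | undefined> {
  let inFlight = false; // plays the role of a React ref guard
  return async () => {
    if (inFlight) return undefined; // a second click during the await is ignored
    inFlight = true;
    try {
      return await fn();
    } finally {
      inFlight = false; // always release, even if fn throws
    }
  };
}
```

In a component the flag would typically live in a `useRef`, with a parallel loading state driving the UI; the review question is only whether some such guard exists on handlers that fire network calls.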
### 12) Error handling and edge cases

@@ -181,6 +175,9 @@ Post exactly one concise summary that:
gh pr view <PR_NUMBER>
gh pr diff <PR_NUMBER>

# project-specific standards
cat contributing/site-code.md

# find similar code
find components shared -name "*.tsx" -type f
rg -n "<pattern>" utils hooks app/lib components shared
@@ -195,3 +192,4 @@ rg -n "^import " app components hooks utils shared
- For style changes, prefer Tailwind-first implementations and avoid introducing new component-level CSS systems when existing Tailwind patterns already solve the requirement.
- Avoid speculative refactors outside PR scope unless there is clear risk reduction.
- Keep feedback decision-oriented and implementable without ambiguity.
- Read `contributing/site-code.md` during the review instead of relying on memory or partial summaries.
6 changes: 4 additions & 2 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -57,18 +57,20 @@ If breaking change, describe migration steps:
- [ ] Pre-commit hooks passed (or ran `yarn check:doc-redirects` / `yarn check:docs-metadata` if applicable)

### For docs changes (`data/docs/**`)
- [ ] Completed the [Docs PR Checklist](CONTRIBUTING.md#docs-pr-checklist) in CONTRIBUTING.md
- [ ] Followed the docs author checklist in [contributing/docs-authoring.md](https://github.com/SigNoz/signoz.io/blob/main/contributing/docs-authoring.md)
- [ ] Added/updated the page in `constants/docsSideNav.ts` if adding or moving a doc

### For blog changes
- [ ] Followed the blog workflow in [contributing/blog-workflow.md](https://github.com/SigNoz/signoz.io/blob/main/contributing/blog-workflow.md)
- [ ] Frontmatter includes `title`, `date`, `author`, `tags` (and `canonicalUrl` if applicable)
- [ ] Images use WebP format and live under `public/img/blog/<YYYY-MM>/`

### For site code changes
- [ ] Follows [Site Code Guidelines](CONTRIBUTING.md#website-code-guidelines) in CONTRIBUTING.md
- [ ] Followed the site code playbook in [contributing/site-code.md](https://github.com/SigNoz/signoz.io/blob/main/contributing/site-code.md)
- [ ] New dependencies are justified in the PR description (if any)

### For renamed or moved docs
- [ ] Followed the redirects and discovery section in [contributing/docs-authoring.md](https://github.com/SigNoz/signoz.io/blob/main/contributing/docs-authoring.md)
- [ ] Added permanent redirect in `next.config.js` under `async redirects()`
- [ ] Updated internal links and sidebar in `constants/docsSideNav.ts`
- [ ] Ran `yarn check:doc-redirects` to verify
2 changes: 1 addition & 1 deletion .github/workflows/claude-doc-update.yaml
@@ -142,7 +142,7 @@ jobs:

Do NOT modify application code.
Only add under FAQ if it's brief problem/solution. Else create new page under troubleshooting.
- Follow the repo's CONTRIBUTING.md documentation guidelines.
- Use `CONTRIBUTING.md` as the entrypoint and follow `contributing/docs-authoring.md` for documentation guidance (includes redirects and discovery).
- Content placement:
- If it's a brief problem/solution, add it under FAQs.
- Otherwise, add it under troubleshooting