diff --git a/.claude/prompts/nl-unity-claude-tests-mini.md b/.claude/prompts/nl-unity-claude-tests-mini.md new file mode 100644 index 00000000..35900b71 --- /dev/null +++ b/.claude/prompts/nl-unity-claude-tests-mini.md @@ -0,0 +1,45 @@ +# Unity NL Editing Suite — Natural Mode + +You are running inside CI for the **unity-mcp** repository. Your task is to demonstrate end‑to‑end **natural‑language code editing** on a representative Unity C# script using whatever capabilities and servers are already available in this session. Work autonomously. Do not ask the user for input. Do NOT spawn subagents, as they will not have access to the mcp server process on the top-level agent. + +## Mission +1) **Discover capabilities.** Quietly inspect the tools and any connected servers that are available to you at session start. If the server offers a primer or capabilities resource, read it before acting. +2) **Choose a target file.** Prefer `TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs` if it exists; otherwise choose a simple, safe C# script under `TestProjects/UnityMCPTests/Assets/`. +3) **Perform a small set of realistic edits** using minimal, precise changes (not full-file rewrites). Examples of small edits you may choose from (pick 3–6 total): + - Insert a new, small helper method (e.g., a logger or counter) in a sensible location. + - Add a short anchor comment near a key method (e.g., above `Update()`), then add or modify a few lines nearby. + - Append an end‑of‑class utility method (e.g., formatting or clamping helper). + - Make a safe, localized tweak to an existing method body (e.g., add a guard or a simple accumulator). + - Optionally include one idempotency/no‑op check (re‑apply an edit and confirm nothing breaks). +4) **Validate your edits.** Re‑read the modified regions and verify the changes exist, compile‑risk is low, and surrounding structure remains intact. 
+5) **Report results.** Produce both:
+   - A JUnit XML at `reports/junit-nl-suite.xml` containing a single suite named `UnityMCP.NL` with one test case per sub‑test you executed (mark pass/fail and include helpful failure text).
+   - A summary markdown at `reports/junit-nl-suite.md` that explains what you attempted, what succeeded/failed, and any follow‑ups you would try.
+6) **Be gentle and reversible.** Prefer targeted, minimal edits; avoid wide refactors or non‑deterministic changes.
+
+## Assumptions & Hints (non‑prescriptive)
+- A Unity‑oriented MCP server is expected to be connected. If a server‑provided **primer/capabilities** resource exists, read it first. If no primer is available, infer capabilities from your visible tools in the session.
+- In CI/headless mode, when calling `mcp__unity__list_resources` or `mcp__unity__read_resource`, include:
+  - `ctx: {}`
+  - `project_root: "TestProjects/UnityMCPTests"` (the server will also accept the absolute path passed via env)
+  Example: `{ "ctx": {}, "under": "Assets/Scripts", "pattern": "*.cs", "project_root": "TestProjects/UnityMCPTests" }`
+- If the preferred file isn’t present, locate a fallback C# file with simple, local methods you can edit safely.
+- If a compile command is available in this environment, you may optionally trigger it; if not, rely on structural checks and localized validation.
+
+## Output Requirements (match NL suite conventions)
+- JUnit XML at `$JUNIT_OUT` if set, otherwise `reports/junit-nl-suite.xml`.
+  - Single suite named `UnityMCP.NL`, one `<testcase>` per sub‑test; include `<failure>` on errors.
+- Markdown at `$MD_OUT` if set, otherwise `reports/junit-nl-suite.md`.
+
+Constraints (for fast publishing):
+- Log allowed tools once as a single line: `AllowedTools: ...`.
+- For every edit: Read → Write (with precondition hash) → Re‑read; on `{status:"stale_file"}` retry once after re‑read.
+- Keep evidence to ±20–40‑line windows; cap unified diffs to 300 lines and note truncation.
+- End `<system-out>` with `VERDICT: PASS` or `VERDICT: FAIL`.
+
+## Guardrails
+- No destructive operations. Keep changes minimal and well‑scoped.
+- Don’t leak secrets or environment details beyond what’s needed in the reports.
+- Work without user interaction; do not prompt for approval mid‑flow.
+
+> If capabilities discovery fails, still produce the two reports that clearly explain why you could not proceed and what evidence you gathered.
diff --git a/.claude/prompts/nl-unity-suite-full.md b/.claude/prompts/nl-unity-suite-full.md
new file mode 100644
index 00000000..1b46127a
--- /dev/null
+++ b/.claude/prompts/nl-unity-suite-full.md
@@ -0,0 +1,234 @@
+# Unity NL/T Editing Suite — CI Agent Contract
+
+You are running inside CI for the `unity-mcp` repo. Use only the tools allowed by the workflow. Work autonomously; do not prompt the user. Do NOT spawn subagents.
+
+**Print this once, verbatim, early in the run:**
+AllowedTools: Write,Bash(printf:*),Bash(echo:*),Bash(scripts/nlt-revert.sh:*),mcp__unity__manage_editor,mcp__unity__list_resources,mcp__unity__read_resource,mcp__unity__apply_text_edits,mcp__unity__script_apply_edits,mcp__unity__validate_script,mcp__unity__find_in_file,mcp__unity__read_console,mcp__unity__get_sha
+
+---
+
+## Mission
+1) Pick target file (prefer):
+   - `unity://path/Assets/Scripts/LongUnityScriptClaudeTest.cs`
+2) Execute **all** NL/T tests in order using minimal, precise edits.
+3) Validate each edit with `mcp__unity__validate_script(level:"standard")`.
+4) **Report**: write one `<testcase>` XML fragment per test to `reports/<testId>_results.xml`. Do **not** read or edit `$JUNIT_OUT`.
+5) **Restore** the file after each test using the OS‑level helper (fast), not a full‑file text write.
+
+---
+
+## Environment & Paths (CI)
+- Always pass: `project_root: "TestProjects/UnityMCPTests"` and `ctx: {}` on list/read/edit/validate.
+- **Canonical URIs only**: + - Primary: `unity://path/Assets/...` (never embed `project_root` in the URI) + - Relative (when supported): `Assets/...` +- File paths for the helper script are workspace‑relative: + - `TestProjects/UnityMCPTests/Assets/...` + +CI provides: +- `$JUNIT_OUT=reports/junit-nl-suite.xml` (pre‑created; leave alone) +- `$MD_OUT=reports/junit-nl-suite.md` (synthesized from JUnit) +- Helper script: `scripts/nlt-revert.sh` (snapshot/restore) + +--- + +## Tool Mapping +- **Anchors/regex/structured**: `mcp__unity__script_apply_edits` + - Allowed ops: `anchor_insert`, `replace_range`, `regex_replace` (no overlapping ranges within a single call) +- **Precise ranges / atomic batch**: `mcp__unity__apply_text_edits` (non‑overlapping ranges) + - Multi‑span batches are computed from the same fresh read and sent atomically by default. + - Prefer `options.applyMode:"atomic"` when passing options for multiple spans; for single‑span, sequential is fine. +- **Hash-only**: `mcp__unity__get_sha` — returns `{sha256,lengthBytes,lastModifiedUtc}` without file body +- **Validation**: `mcp__unity__validate_script(level:"standard")` + - For edits, you may pass `options.validate`: + - `standard` (default): full‑file delimiter balance checks. + - `relaxed`: scoped checks for interior, non‑structural text edits; do not use for header/signature/brace‑touching changes. +- **Reporting**: `Write` small XML fragments to `reports/*_results.xml` +- **Editor state/flush**: `mcp__unity__manage_editor` (use sparingly; no project mutations) +- **Console readback**: `mcp__unity__read_console` (INFO capture only; do not assert in place of `validate_script`) +- **Snapshot/Restore**: `Bash(scripts/nlt-revert.sh:*)` + - For `script_apply_edits`: use `name` + workspace‑relative `path` only (e.g., `name="LongUnityScriptClaudeTest"`, `path="Assets/Scripts"`). Do not pass `unity://...` URIs as `path`. 
+  - For `apply_text_edits` / `read_resource`: use the URI form only (e.g., `uri="unity://path/Assets/Scripts/LongUnityScriptClaudeTest.cs"`). Do not concatenate `Assets/` with a `unity://...` URI.
+  - Never call generic Bash like `mkdir`; the revert helper creates needed directories. Use only `scripts/nlt-revert.sh` for snapshot/restore.
+  - If you believe a directory is missing, you are mistaken: the workflow pre-creates it and the snapshot helper creates it if needed. Do not attempt any Bash other than `scripts/nlt-revert.sh:*`.
+
+### Structured edit ops (required usage)
+
+# Insert a helper RIGHT BEFORE the final class brace (NL‑3, T‑D)
+1) Prefer `script_apply_edits` with a regex capture on the final closing brace:
+```json
+{"op":"regex_replace",
+ "pattern":"(?s)(\\r?\\n\\s*\\})\\s*$",
+ "replacement":"\\n // Tail test A\\n // Tail test B\\n // Tail test C\\1"}
+```
+
+2) If the server returns `unsupported` (op not available) or `missing_field` (op‑specific), FALL BACK to
+   `apply_text_edits`:
+   - Find the last `}` in the file (the class closing brace) by scanning from the end.
+   - Insert the three comment lines immediately before that index with one non‑overlapping range.
+
+# Insert after GetCurrentTarget (T‑A/T‑E)
+- Use `script_apply_edits` with:
+```json
+{"op":"anchor_insert","afterMethodName":"GetCurrentTarget","text":"private int __TempHelper(int a,int b)=>a+b;\\n"}
+```
+
+# Delete the temporary helper (T‑A/T‑E)
+- Prefer structured delete:
+  - Use `script_apply_edits` with `{ "op":"delete_method", "className":"LongUnityScriptClaudeTest", "methodName":"PrintSeries" }` (or `__TempHelper` for T‑A).
+- If structured delete is unavailable, fall back to `apply_text_edits` with a single `replace_range` spanning the exact method block (bounds computed from a fresh read); avoid whole‑file regex deletes.
+
+# T‑B (replace method body)
+- Use `mcp__unity__apply_text_edits` with a single `replace_range` strictly inside the `HasTarget` braces.
+- Compute start/end from a fresh `read_resource` at test start. Do not edit the signature or header.
+- On `{status:"stale_file"}` retry once with the server-provided hash; if absent, re-read once and retry.
+- On `bad_request`: write the testcase with `<failure>`, restore, and continue to the next test.
+- On `missing_field`: FALL BACK per above; if the fallback also returns `unsupported` or `bad_request`, then fail as above.
+> Don’t use `mcp__unity__create_script`. Avoid the header/`using` region entirely.
+
+Span formats for `apply_text_edits`:
+- Prefer LSP ranges (0‑based): `{ "range": { "start": {"line": L, "character": C}, "end": {…} }, "newText": "…" }`
+- Explicit fields are 1‑based: `{ "startLine": L1, "startCol": C1, "endLine": L2, "endCol": C2, "newText": "…" }`
+- The SDK preflights overlap after normalization; overlapping non‑zero spans → `{status:"overlap"}` with conflicts and no file mutation.
+- Optional debug: pass `strict:true` to reject explicit 0‑based fields (else they are normalized and a warning is emitted).
+- Apply mode guidance: the router defaults to atomic for multi‑span; you can explicitly set `options.applyMode` if needed.
+
+---
+
+## Output Rules (JUnit fragments only)
+- For each test, create **one** file: `reports/<testId>_results.xml` containing exactly a single `<testcase> ... </testcase>`.
+  Put human-readable lines (PLAN/PROGRESS/evidence) **inside** `<system-out>`.
+  - If content contains `]]>`, split the CDATA: replace `]]>` with `]]]]><![CDATA[>`.
+- Evidence windows only (±20–40 lines). If showing a unified diff, cap at 100 lines and note truncation.
+- **Never** open/patch `$JUNIT_OUT` or `$MD_OUT`; CI merges fragments and synthesizes Markdown.
+  - Write destinations must match: `^reports/[A-Za-z0-9._-]+_results\.xml$`
+  - Snapshot files must live under `reports/_snapshots/`
+  - Reject absolute paths and any path containing `..`
+  - Reject control characters and line breaks in filenames; enforce UTF‑8
+  - Cap basename length to ≤64 chars; cap any path segment to ≤100 and total path length to ≤255
+  - Bash(printf|echo) must write to stdout only. Do not use shell redirection, here‑docs, or `tee` to create/modify files. The only allowed FS mutation is via `scripts/nlt-revert.sh`.
+
+**Example fragment**
+```xml
+<testcase name="NL-1 Replace/insert/delete" classname="UnityMCP.NL">
+  <system-out><![CDATA[
+... evidence windows ...
+VERDICT: PASS
+]]>
+  </system-out>
+</testcase>
+```
+
+Note: Emit the PLAN line only in NL‑0 (do not repeat it for later tests).
+
+### Fast Restore Strategy (OS‑level)
+
+- Snapshot once at NL‑0, then restore after each test via the helper.
+- Snapshot (once after confirming the target):
+  ```bash
+  scripts/nlt-revert.sh snapshot "TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs" "reports/_snapshots/LongUnityScriptClaudeTest.cs.baseline"
+  ```
+- Log the `snapshot_sha=...` printed by the script.
+- Restore (after each mutating test):
+  ```bash
+  scripts/nlt-revert.sh restore "TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs" "reports/_snapshots/LongUnityScriptClaudeTest.cs.baseline"
+  ```
+- Then `read_resource` to confirm and (optionally) `validate_script(level:"standard")`.
+- If the helper fails: fall back once to a guarded full‑file restore using the baseline bytes; then continue.
+
+### Guarded Write Pattern (for edits, not restores)
+
+- Before any mutation: `res = mcp__unity__read_resource(uri)`; `pre_sha = sha256(res.bytes)`.
+- Write with `precondition_sha256 = pre_sha` on `apply_text_edits`/`script_apply_edits`.
+- To compute `pre_sha` without reading the file body, you may instead call `mcp__unity__get_sha(uri).sha256`.
+- On `{status:"stale_file"}`:
+  - Retry once using the server-provided hash (e.g., `data.current_sha256` or `data.expected_sha256`, per the API schema).
+  - If absent, one re-read then a final retry. No loops.
+- After success: immediately re-read via `res2 = mcp__unity__read_resource(uri)` and set `pre_sha = sha256(res2.bytes)` before any further edits in the same test.
+- Prefer anchors (`script_apply_edits`) for end-of-class / above-method insertions. Keep edits inside method bodies. Avoid the header/`using` region.
+
+**On non‑JSON/transport errors (timeout, EOF, connection closed):**
+- Write `reports/<testId>_results.xml` with a `<testcase>` that includes a `<failure>` or `<error>` node capturing the error text.
+- Run the OS restore via `scripts/nlt-revert.sh restore …`.
+- Continue to the next test (do not abort).
+
+**If any write returns `bad_request`, or `unsupported` after a fallback attempt:**
+- Write `reports/<testId>_results.xml` with a `<testcase>` that includes a `<failure>` node capturing the server error, include evidence, and end with `VERDICT: FAIL`.
+- Run `scripts/nlt-revert.sh restore ...` and continue to the next test.
+
+### Execution Order (fixed)
+
+- Run exactly: NL-0, NL-1, NL-2, NL-3, NL-4, T-A, T-B, T-C, T-D, T-E, T-F, T-G, T-H, T-I, T-J (15 total).
+- Before NL-1..T-J: Bash(scripts/nlt-revert.sh:restore "TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs" "reports/_snapshots/LongUnityScriptClaudeTest.cs.baseline") IF the baseline exists; skip for NL-0.
+- NL-0 must include the PLAN line (len=15).
+- After each testcase, include `PROGRESS: <n>/15 completed`.
+
+### Test Specs (concise)
+
+- NL‑0. Sanity reads — Tail ~120 lines; ±40 around `Update()`. Then snapshot via the helper.
+- NL‑1. Replace/insert/delete — `HasTarget → return currentTarget != null;`; insert `PrintSeries()` after `GetCurrentTarget` logging "1,2,3"; verify; delete `PrintSeries()`; restore.
+- NL‑2. Anchor comment — Insert `// Build marker OK` above `public void Update(...)`; restore.
+- NL‑3. End‑of‑class — Insert `// Tail test A/B/C` (3 lines) before the final brace; restore.
+- NL‑4.
Compile trigger — Record INFO only.
+
+### T‑A. Anchor insert (text path) — Insert helper after `GetCurrentTarget`; verify; delete via `regex_replace`; restore.
+### T‑B. Replace body — Single `replace_range` inside `HasTarget`; restore.
+- Options: pass {"validate":"relaxed"} for interior one-line edits.
+### T‑C. Header/region preservation — Edit the interior of `ApplyBlend`; preserve signature/docs/regions; restore.
+- Options: pass {"validate":"relaxed"} for interior one-line edits.
+### T‑D. End‑of‑class (anchor) — Insert helper before the final brace; remove; restore.
+### T‑E. Lifecycle — Insert → update → delete via regex; restore.
+### T‑F. Atomic batch — One `mcp__unity__apply_text_edits` call (text ranges only)
+  - Compute all three edits from the **same fresh read**:
+    1) Two small interior `replace_range` tweaks.
+    2) One **end‑of‑class insertion**: find the **index of the final `}`** for the class; create a zero‑width range `[idx, idx)` and set `replacement` to the 3‑line comment block.
+  - Send all three ranges in **one call**, sorted **descending by start index** to avoid offset drift.
+  - Expect all‑or‑nothing semantics; on `{status:"overlap"}` or `{status:"bad_request"}`, write the testcase fragment with `<failure>`, **restore**, and continue.
+  - Options: pass {"applyMode":"atomic"} to enforce all‑or‑nothing.
+### T‑G. Path normalization — Make the same edit with `unity://path/Assets/...` then `Assets/...`. Without refreshing `precondition_sha256`, the second attempt returns `{stale_file}`; retry with the server-provided hash to confirm both forms resolve to the same file.
+
+### T-H. Validation (standard)
+- Restore baseline (helper call above).
+- Perform a harmless interior tweak (or none), then MUST call:
+  mcp__unity__validate_script(level:"standard")
+- Write the validator output to system-out; VERDICT: PASS if standard validation is clean, else include `<failure>` with the validator message and continue.
+
+### T-I. Failure surfaces (expected)
+- Restore baseline.
+- (1) OVERLAP:
+  * Fresh read of the file; compute two interior ranges that overlap inside HasTarget.
+  * Prefer LSP ranges (0‑based) or explicit 1‑based fields; ensure both spans come from the same snapshot.
+  * Single mcp__unity__apply_text_edits call with both ranges.
+  * Expect `{status:"overlap"}` (SDK preflight) → record as PASS; else FAIL. Restore.
+- (2) STALE_FILE:
+  * Fresh read → pre_sha.
+  * Make a tiny legit edit with pre_sha; success.
+  * Attempt another edit reusing the OLD pre_sha.
+  * Expect {status:"stale_file"} → record as PASS; else FAIL. Re-read to refresh, restore.
+- (3) USING_GUARD (optional):
+  * Attempt a 1-line insert above the first 'using'.
+  * Expect {status:"using_guard"} → record as PASS; else note 'not emitted'. Restore.
+
+### Per‑test error handling and recovery
+- For each test (NL‑0..T‑J), use a try/finally pattern:
+  - Always write a testcase fragment and perform the restore in finally, even when tools return error payloads.
+  - try: run the test steps; always write `reports/<testId>_results.xml` with PASS/FAIL/ERROR
+  - finally: run Bash(scripts/nlt-revert.sh:restore …baseline) to restore the target file
+- On any transport/JSON/tool exception:
+  - catch and write a `<testcase>` fragment with an `<error>` node (include the message), then proceed to the next test.
+- After NL‑4 completes, proceed directly to T‑A regardless of any earlier validator warnings (do not abort the run).
+
+### T-J. Idempotency
+- Restore baseline.
+- Repeat a replace_range twice (the second call may be a noop). Validate standard after each.
+- Insert or ensure a tiny comment, then delete it twice (the second delete may be a noop).
+- Restore and PASS unless an error/structural break occurred.
+
+### Status & Reporting
+
+- Safeguard statuses are non‑fatal; record and continue.
+- End each testcase `<system-out>` with `VERDICT: PASS` or `VERDICT: FAIL`.
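CI is responsible for merging the per‑test fragments into `$JUNIT_OUT`. As a rough illustration of what that merge amounts to — a hypothetical `merge_fragments` helper, assuming each `reports/*_results.xml` holds exactly one `<testcase>` as required above; the real workflow step may differ — a minimal sketch:

```python
import glob
import os
import xml.etree.ElementTree as ET


def merge_fragments(fragment_dir="reports",
                    out_path=os.environ.get("JUNIT_OUT", "reports/junit-nl-suite.xml")):
    """Wrap every *_results.xml fragment into one <testsuite name="UnityMCP.NL">."""
    suite = ET.Element("testsuite", name="UnityMCP.NL")
    count = 0
    for frag in sorted(glob.glob(os.path.join(fragment_dir, "*_results.xml"))):
        case = ET.parse(frag).getroot()
        if case.tag != "testcase":
            continue  # each fragment must contain a single <testcase>
        suite.append(case)
        count += 1
    # Tally failures/errors so the suite attributes match the merged cases.
    suite.set("tests", str(count))
    suite.set("failures", str(sum(1 for c in suite if c.find("failure") is not None)))
    suite.set("errors", str(sum(1 for c in suite if c.find("error") is not None)))
    ET.ElementTree(suite).write(out_path, encoding="utf-8", xml_declaration=True)
    return count
```

The key design point the contract relies on: because the agent only ever writes small independent fragments, a crashed or aborted test never corrupts the aggregate report — the merge step simply picks up whatever fragments exist.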
\ No newline at end of file
diff --git a/.github/scripts/mark_skipped.py b/.github/scripts/mark_skipped.py
new file mode 100755
index 00000000..d2e7ca7b
--- /dev/null
+++ b/.github/scripts/mark_skipped.py
@@ -0,0 +1,113 @@
+#!/usr/bin/env python3
+"""
+Post-processes a JUnit XML so that "expected"/environmental failures
+(e.g., permission prompts, empty MCP resources, or schema hiccups)
+are converted to <skipped/>. Leaves real failures intact.
+
+Usage:
+  python .github/scripts/mark_skipped.py reports/claude-nl-tests.xml
+"""
+
+from __future__ import annotations
+import sys
+import os
+import re
+import xml.etree.ElementTree as ET
+
+PATTERNS = [
+    r"\bpermission\b",
+    r"\bpermissions\b",
+    r"\bautoApprove\b",
+    r"\bapproval\b",
+    r"\bdenied\b",
+    r"requested\s+permissions",
+    r"^MCP resources list is empty$",
+    r"No MCP resources detected",
+    r"aggregator.*returned\s*\[\s*\]",
+    r"Unknown resource:\s*unity://",
+    r"Input should be a valid dictionary.*ctx",
+    r"validation error .* ctx",
+]
+
+def should_skip(msg: str) -> bool:
+    if not msg:
+        return False
+    msg_l = msg.strip()
+    for pat in PATTERNS:
+        if re.search(pat, msg_l, flags=re.IGNORECASE | re.MULTILINE):
+            return True
+    return False
+
+def summarize_counts(ts: ET.Element):
+    tests = 0
+    failures = 0
+    errors = 0
+    skipped = 0
+    for case in ts.findall("testcase"):
+        tests += 1
+        if case.find("failure") is not None:
+            failures += 1
+        if case.find("error") is not None:
+            errors += 1
+        if case.find("skipped") is not None:
+            skipped += 1
+    return tests, failures, errors, skipped
+
+def main(path: str) -> int:
+    if not os.path.exists(path):
+        print(f"[mark_skipped] No JUnit at {path}; nothing to do.")
+        return 0
+
+    try:
+        tree = ET.parse(path)
+    except ET.ParseError as e:
+        print(f"[mark_skipped] Could not parse {path}: {e}")
+        return 0
+
+    root = tree.getroot()
+    suites = root.findall("testsuite") if root.tag == "testsuites" else [root]
+
+    changed = False
+    for ts in suites:
+        for case in list(ts.findall("testcase")):
+ nodes = [n for n in list(case) if n.tag in ("failure", "error")] + if not nodes: + continue + # If any node matches skip patterns, convert the whole case to skipped. + first_match_text = None + to_skip = False + for n in nodes: + msg = (n.get("message") or "") + "\n" + (n.text or "") + if should_skip(msg): + first_match_text = (n.text or "").strip() or first_match_text + to_skip = True + if to_skip: + for n in nodes: + case.remove(n) + reason = "Marked skipped: environment/permission precondition not met" + skip = ET.SubElement(case, "skipped") + skip.set("message", reason) + skip.text = first_match_text or reason + changed = True + # Recompute tallies per testsuite + tests, failures, errors, skipped = summarize_counts(ts) + ts.set("tests", str(tests)) + ts.set("failures", str(failures)) + ts.set("errors", str(errors)) + ts.set("skipped", str(skipped)) + + if changed: + tree.write(path, encoding="utf-8", xml_declaration=True) + print(f"[mark_skipped] Updated {path}: converted environmental failures to skipped.") + else: + print(f"[mark_skipped] No environmental failures detected in {path}.") + + return 0 + +if __name__ == "__main__": + target = ( + sys.argv[1] + if len(sys.argv) > 1 + else os.environ.get("JUNIT_OUT", "reports/junit-nl-suite.xml") + ) + raise SystemExit(main(target)) diff --git a/.github/workflows/claude-nl-suite-mini.yml b/.github/workflows/claude-nl-suite-mini.yml new file mode 100644 index 00000000..272e04d6 --- /dev/null +++ b/.github/workflows/claude-nl-suite-mini.yml @@ -0,0 +1,356 @@ +name: Claude Mini NL Test Suite (Unity live) + +on: + workflow_dispatch: {} + +permissions: + contents: read + checks: write + +concurrency: + group: ${{ github.workflow }}-${{ github.ref }} + cancel-in-progress: true + +env: + UNITY_VERSION: 2021.3.45f1 + UNITY_IMAGE: unityci/editor:ubuntu-2021.3.45f1-linux-il2cpp-3 + UNITY_CACHE_ROOT: /home/runner/work/_temp/_github_home + +jobs: + nl-suite: + if: github.event_name == 'workflow_dispatch' + runs-on: 
ubuntu-latest + timeout-minutes: 60 + env: + JUNIT_OUT: reports/junit-nl-suite.xml + MD_OUT: reports/junit-nl-suite.md + + steps: + # ---------- Detect secrets ---------- + - name: Detect secrets (outputs) + id: detect + env: + UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }} + UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }} + UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }} + UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }} + ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }} + run: | + set -e + if [ -n "$ANTHROPIC_API_KEY" ]; then echo "anthropic_ok=true" >> "$GITHUB_OUTPUT"; else echo "anthropic_ok=false" >> "$GITHUB_OUTPUT"; fi + if [ -n "$UNITY_LICENSE" ] || { [ -n "$UNITY_EMAIL" ] && [ -n "$UNITY_PASSWORD" ]; } || [ -n "$UNITY_SERIAL" ]; then + echo "unity_ok=true" >> "$GITHUB_OUTPUT" + else + echo "unity_ok=false" >> "$GITHUB_OUTPUT" + fi + + - uses: actions/checkout@v4 + with: + fetch-depth: 0 + + # ---------- Python env for MCP server (uv) ---------- + - uses: astral-sh/setup-uv@v4 + with: + python-version: '3.11' + + - name: Install MCP server + run: | + set -eux + uv venv + echo "VIRTUAL_ENV=$GITHUB_WORKSPACE/.venv" >> "$GITHUB_ENV" + echo "$GITHUB_WORKSPACE/.venv/bin" >> "$GITHUB_PATH" + if [ -f UnityMcpBridge/UnityMcpServer~/src/pyproject.toml ]; then + uv pip install -e UnityMcpBridge/UnityMcpServer~/src + elif [ -f UnityMcpBridge/UnityMcpServer~/src/requirements.txt ]; then + uv pip install -r UnityMcpBridge/UnityMcpServer~/src/requirements.txt + elif [ -f UnityMcpBridge/UnityMcpServer~/pyproject.toml ]; then + uv pip install -e UnityMcpBridge/UnityMcpServer~/ + elif [ -f UnityMcpBridge/UnityMcpServer~/requirements.txt ]; then + uv pip install -r UnityMcpBridge/UnityMcpServer~/requirements.txt + else + echo "No MCP Python deps found (skipping)" + fi + + # ---------- License prime on host (handles ULF or EBL) ---------- + - name: Prime Unity license on host (GameCI) + if: steps.detect.outputs.unity_ok == 'true' + uses: game-ci/unity-test-runner@v4 + env: + UNITY_LICENSE: 
${{ secrets.UNITY_LICENSE }} + UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }} + UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }} + UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }} + with: + projectPath: TestProjects/UnityMCPTests + testMode: EditMode + customParameters: -runTests -testFilter __NoSuchTest__ -batchmode -nographics + unityVersion: ${{ env.UNITY_VERSION }} + + # (Optional) Show where the license actually got written + - name: Inspect GameCI license caches (host) + if: steps.detect.outputs.unity_ok == 'true' + run: | + set -eux + find "${{ env.UNITY_CACHE_ROOT }}" -maxdepth 4 \( -path "*/.cache" -prune -o -type f \( -name '*.ulf' -o -name 'user.json' \) -print \) 2>/dev/null || true + + # ---------- Clean any stale MCP status from previous runs ---------- + - name: Clean old MCP status + run: | + set -eux + mkdir -p "$HOME/.unity-mcp" + rm -f "$HOME/.unity-mcp"/unity-mcp-status-*.json || true + + # ---------- Start headless Unity that stays up (bridge enabled) ---------- + - name: Start Unity (persistent bridge) + if: steps.detect.outputs.unity_ok == 'true' + env: + UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }} + UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }} + UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }} + run: | + set -eu + if [ ! -d "${{ github.workspace }}/TestProjects/UnityMCPTests/ProjectSettings" ]; then + echo "Unity project not found; failing fast." 
+ exit 1 + fi + mkdir -p "$HOME/.unity-mcp" + MANUAL_ARG=() + if [ -f "${UNITY_CACHE_ROOT}/.local/share/unity3d/Unity_lic.ulf" ]; then + MANUAL_ARG=(-manualLicenseFile /root/.local/share/unity3d/Unity_lic.ulf) + fi + EBL_ARGS=() + [ -n "${UNITY_SERIAL:-}" ] && EBL_ARGS+=(-serial "$UNITY_SERIAL") + [ -n "${UNITY_EMAIL:-}" ] && EBL_ARGS+=(-username "$UNITY_EMAIL") + [ -n "${UNITY_PASSWORD:-}" ] && EBL_ARGS+=(-password "$UNITY_PASSWORD") + docker rm -f unity-mcp >/dev/null 2>&1 || true + docker run -d --name unity-mcp --network host \ + -e HOME=/root \ + -e UNITY_MCP_ALLOW_BATCH=1 -e UNITY_MCP_STATUS_DIR=/root/.unity-mcp \ + -e UNITY_MCP_BIND_HOST=127.0.0.1 \ + -v "${{ github.workspace }}:/workspace" -w /workspace \ + -v "${{ env.UNITY_CACHE_ROOT }}:/root" \ + -v "$HOME/.unity-mcp:/root/.unity-mcp" \ + ${{ env.UNITY_IMAGE }} /opt/unity/Editor/Unity -batchmode -nographics -logFile - \ + -stackTraceLogType Full \ + -projectPath /workspace/TestProjects/UnityMCPTests \ + "${MANUAL_ARG[@]}" \ + "${EBL_ARGS[@]}" \ + -executeMethod MCPForUnity.Editor.MCPForUnityBridge.StartAutoConnect + + # ---------- Wait for Unity bridge (fail fast if not running/ready) ---------- + - name: Wait for Unity bridge (robust) + if: steps.detect.outputs.unity_ok == 'true' + run: | + set -euo pipefail + if ! docker ps --format '{{.Names}}' | grep -qx 'unity-mcp'; then + echo "Unity container failed to start"; docker ps -a || true; exit 1 + fi + docker logs -f unity-mcp 2>&1 | sed -E 's/((serial|license|password|token)[^[:space:]]*)/[REDACTED]/ig' & LOGPID=$! 
+ deadline=$((SECONDS+420)); READY=0 + try_connect_host() { + P="$1" + timeout 1 bash -lc "exec 3<>/dev/tcp/127.0.0.1/$P; head -c 8 <&3 >/dev/null" && return 0 || true + if command -v nc >/dev/null 2>&1; then nc -6 -z ::1 "$P" && return 0 || true; fi + return 1 + } + + # in-container probe will try IPv4 then IPv6 via nc or /dev/tcp + + while [ $SECONDS -lt $deadline ]; do + if docker logs unity-mcp 2>&1 | grep -qE "MCP Bridge listening|Bridge ready|Server started"; then + READY=1; echo "Bridge ready (log markers)"; break + fi + PORT=$(python -c "import os,glob,json,sys,time; b=os.path.expanduser('~/.unity-mcp'); fs=sorted(glob.glob(os.path.join(b,'unity-mcp-status-*.json')), key=os.path.getmtime, reverse=True); print(next((json.load(open(f,'r',encoding='utf-8')).get('unity_port') for f in fs if time.time()-os.path.getmtime(f)<=300 and json.load(open(f,'r',encoding='utf-8')).get('unity_port')), '' ))" 2>/dev/null || true) + if [ -n "${PORT:-}" ] && { try_connect_host "$PORT" || docker exec unity-mcp bash -lc "timeout 1 bash -lc 'exec 3<>/dev/tcp/127.0.0.1/$PORT' || (command -v nc >/dev/null 2>&1 && nc -6 -z ::1 $PORT)"; }; then + READY=1; echo "Bridge ready on port $PORT"; break + fi + if docker logs unity-mcp 2>&1 | grep -qE "No valid Unity Editor license|Token not found in cache|com\.unity\.editor\.headless"; then + echo "Licensing error detected"; break + fi + sleep 2 + done + + kill $LOGPID || true + + if [ "$READY" != "1" ]; then + echo "Bridge not ready; diagnostics:" + echo "== status files =="; ls -la "$HOME/.unity-mcp" || true + echo "== status contents =="; for f in "$HOME"/.unity-mcp/unity-mcp-status-*.json; do [ -f "$f" ] && { echo "--- $f"; sed -n '1,120p' "$f"; }; done + echo "== sockets (inside container) =="; docker exec unity-mcp bash -lc 'ss -lntp || netstat -tulpen || true' + echo "== tail of Unity log ==" + docker logs --tail 200 unity-mcp | sed -E 's/((serial|license|password|token)[^[:space:]]*)/[REDACTED]/ig' || true + exit 1 + fi + + # 
---------- Make MCP config available to the action ---------- + - name: Write MCP config (.claude/mcp.json) + run: | + set -eux + mkdir -p .claude + cat > .claude/mcp.json < str: + return tag.rsplit('}', 1)[-1] if '}' in tag else tag + + src = Path(os.environ.get('JUNIT_OUT', 'reports/junit-nl-suite.xml')) + out = Path('reports/junit-for-actions.xml') + out.parent.mkdir(parents=True, exist_ok=True) + + if not src.exists(): + # Try to use any existing XML as a source (e.g., claude-nl-tests.xml) + candidates = sorted(Path('reports').glob('*.xml')) + if candidates: + src = candidates[0] + else: + print("WARN: no XML source found for normalization") + + if src.exists(): + try: + root = ET.parse(src).getroot() + rtag = localname(root.tag) + if rtag == 'testsuites' and len(root) == 1 and localname(root[0].tag) == 'testsuite': + ET.ElementTree(root[0]).write(out, encoding='utf-8', xml_declaration=True) + else: + out.write_bytes(src.read_bytes()) + except Exception as e: + print("Normalization error:", e) + out.write_bytes(src.read_bytes()) + + # Always create a second copy with a junit-* name so wildcard patterns match too + if out.exists(): + Path('reports/junit-nl-suite-copy.xml').write_bytes(out.read_bytes()) + PY + + - name: "Debug: list report files" + if: always() + shell: bash + run: | + set -eux + ls -la reports || true + shopt -s nullglob + for f in reports/*.xml; do + echo "===== $f =====" + head -n 40 "$f" || true + done + + + # sanitize only the markdown (does not touch JUnit xml) + - name: Sanitize markdown (all shards) + if: always() + run: | + set -eu + python - <<'PY' + from pathlib import Path + rp=Path('reports') + rp.mkdir(parents=True, exist_ok=True) + for p in rp.glob('*.md'): + b=p.read_bytes().replace(b'\x00', b'') + s=b.decode('utf-8','replace').replace('\r\n','\n') + p.write_text(s, encoding='utf-8', newline='\n') + PY + + - name: NL/T details → Job Summary + if: always() + run: | + echo "## Unity NL/T Editing Suite — Full Coverage" >> 
$GITHUB_STEP_SUMMARY + python - <<'PY' >> $GITHUB_STEP_SUMMARY + from pathlib import Path + p = Path('reports/junit-nl-suite.md') if Path('reports/junit-nl-suite.md').exists() else Path('reports/claude-nl-tests.md') + if p.exists(): + text = p.read_bytes().decode('utf-8', 'replace') + MAX = 65000 + print(text[:MAX]) + if len(text) > MAX: + print("\n\n_…truncated in summary; full report is in artifacts._") + else: + print("_No markdown report found._") + PY + + - name: Fallback JUnit if missing + if: always() + run: | + set -eu + mkdir -p reports + if [ ! -f reports/junit-for-actions.xml ]; then + printf '%s\n' \ + '' \ + '' \ + ' ' \ + ' ' \ + ' ' \ + '' \ + > reports/junit-for-actions.xml + fi + + + - name: Publish JUnit reports + if: always() + uses: mikepenz/action-junit-report@v5 + with: + report_paths: 'reports/junit-for-actions.xml' + include_passed: true + detailed_summary: true + annotate_notice: true + require_tests: false + fail_on_parse_error: true + + - name: Upload artifacts + if: always() + uses: actions/upload-artifact@v4 + with: + name: claude-nl-suite-artifacts + path: reports/** + + # ---------- Always stop Unity ---------- + - name: Stop Unity + if: always() + run: | + docker logs --tail 400 unity-mcp | sed -E 's/((serial|license|password|token)[^[:space:]]*)/[REDACTED]/ig' || true + docker rm -f unity-mcp || true diff --git a/.github/workflows/claude-nl-suite.yml b/.github/workflows/claude-nl-suite.yml new file mode 100644 index 00000000..8fc8603e --- /dev/null +++ b/.github/workflows/claude-nl-suite.yml @@ -0,0 +1,543 @@ +name: Claude NL/T Full Suite (Unity live) + +on: + workflow_dispatch: {} + +permissions: + contents: read + checks: write + +concurrency: + group: ${{ github.workflow }}-${{ github.ref }} + cancel-in-progress: true + +env: + UNITY_VERSION: 2021.3.45f1 + UNITY_IMAGE: unityci/editor:ubuntu-2021.3.45f1-linux-il2cpp-3 + UNITY_CACHE_ROOT: /home/runner/work/_temp/_github_home + +jobs: + nl-suite: + if: github.event_name == 
'workflow_dispatch' + runs-on: ubuntu-latest + timeout-minutes: 60 + env: + JUNIT_OUT: reports/junit-nl-suite.xml + MD_OUT: reports/junit-nl-suite.md + + steps: + # ---------- Secrets check ---------- + - name: Detect secrets (outputs) + id: detect + env: + UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }} + UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }} + UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }} + UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }} + ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }} + run: | + set -e + if [ -n "$ANTHROPIC_API_KEY" ]; then echo "anthropic_ok=true" >> "$GITHUB_OUTPUT"; else echo "anthropic_ok=false" >> "$GITHUB_OUTPUT"; fi + if [ -n "$UNITY_LICENSE" ] || { [ -n "$UNITY_EMAIL" ] && [ -n "$UNITY_PASSWORD" ]; } || [ -n "$UNITY_SERIAL" ]; then + echo "unity_ok=true" >> "$GITHUB_OUTPUT" + else + echo "unity_ok=false" >> "$GITHUB_OUTPUT" + fi + + - uses: actions/checkout@v4 + with: + fetch-depth: 0 + + # ---------- Python env for MCP server (uv) ---------- + - uses: astral-sh/setup-uv@v4 + with: + python-version: '3.11' + + - name: Install MCP server + run: | + set -eux + uv venv + echo "VIRTUAL_ENV=$GITHUB_WORKSPACE/.venv" >> "$GITHUB_ENV" + echo "$GITHUB_WORKSPACE/.venv/bin" >> "$GITHUB_PATH" + if [ -f UnityMcpBridge/UnityMcpServer~/src/pyproject.toml ]; then + uv pip install -e UnityMcpBridge/UnityMcpServer~/src + elif [ -f UnityMcpBridge/UnityMcpServer~/src/requirements.txt ]; then + uv pip install -r UnityMcpBridge/UnityMcpServer~/src/requirements.txt + elif [ -f UnityMcpBridge/UnityMcpServer~/pyproject.toml ]; then + uv pip install -e UnityMcpBridge/UnityMcpServer~/ + elif [ -f UnityMcpBridge/UnityMcpServer~/requirements.txt ]; then + uv pip install -r UnityMcpBridge/UnityMcpServer~/requirements.txt + else + echo "No MCP Python deps found (skipping)" + fi + + # ---------- License prime on host (GameCI) ---------- + - name: Prime Unity license on host (GameCI) + if: steps.detect.outputs.unity_ok == 'true' + uses: game-ci/unity-test-runner@v4 + env: 
+ UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }} + UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }} + UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }} + UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }} + with: + projectPath: TestProjects/UnityMCPTests + testMode: EditMode + customParameters: -runTests -testFilter __NoSuchTest__ -batchmode -nographics + unityVersion: ${{ env.UNITY_VERSION }} + + # (Optional) Inspect license caches + - name: Inspect GameCI license caches (host) + if: steps.detect.outputs.unity_ok == 'true' + run: | + set -eux + find "${{ env.UNITY_CACHE_ROOT }}" -maxdepth 4 \( -path "*/.cache" -prune -o -type f \( -name '*.ulf' -o -name 'user.json' \) -print \) 2>/dev/null || true + + # ---------- Clean old MCP status ---------- + - name: Clean old MCP status + run: | + set -eux + mkdir -p "$HOME/.unity-mcp" + rm -f "$HOME/.unity-mcp"/unity-mcp-status-*.json || true + + # ---------- Start headless Unity (persistent bridge) ---------- + - name: Start Unity (persistent bridge) + if: steps.detect.outputs.unity_ok == 'true' + env: + UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }} + UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }} + UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }} + run: | + set -eu + if [ ! -d "${{ github.workspace }}/TestProjects/UnityMCPTests/ProjectSettings" ]; then + echo "Unity project not found; failing fast." 
+ exit 1 + fi + mkdir -p "$HOME/.unity-mcp" + MANUAL_ARG=() + if [ -f "${UNITY_CACHE_ROOT}/.local/share/unity3d/Unity_lic.ulf" ]; then + MANUAL_ARG=(-manualLicenseFile /root/.local/share/unity3d/Unity_lic.ulf) + fi + EBL_ARGS=() + [ -n "${UNITY_SERIAL:-}" ] && EBL_ARGS+=(-serial "$UNITY_SERIAL") + [ -n "${UNITY_EMAIL:-}" ] && EBL_ARGS+=(-username "$UNITY_EMAIL") + [ -n "${UNITY_PASSWORD:-}" ] && EBL_ARGS+=(-password "$UNITY_PASSWORD") + docker rm -f unity-mcp >/dev/null 2>&1 || true + docker run -d --name unity-mcp --network host \ + -e HOME=/root \ + -e UNITY_MCP_ALLOW_BATCH=1 -e UNITY_MCP_STATUS_DIR=/root/.unity-mcp \ + -e UNITY_MCP_BIND_HOST=127.0.0.1 \ + -v "${{ github.workspace }}:/workspace" -w /workspace \ + -v "${{ env.UNITY_CACHE_ROOT }}:/root" \ + -v "$HOME/.unity-mcp:/root/.unity-mcp" \ + ${{ env.UNITY_IMAGE }} /opt/unity/Editor/Unity -batchmode -nographics -logFile - \ + -stackTraceLogType Full \ + -projectPath /workspace/TestProjects/UnityMCPTests \ + "${MANUAL_ARG[@]}" \ + "${EBL_ARGS[@]}" \ + -executeMethod MCPForUnity.Editor.MCPForUnityBridge.StartAutoConnect + + # ---------- Wait for Unity bridge ---------- + - name: Wait for Unity bridge (robust) + if: steps.detect.outputs.unity_ok == 'true' + run: | + set -euo pipefail + if ! docker ps --format '{{.Names}}' | grep -qx 'unity-mcp'; then + echo "Unity container failed to start"; docker ps -a || true; exit 1 + fi + docker logs -f unity-mcp 2>&1 | sed -E 's/((serial|license|password|token)[^[:space:]]*)/[REDACTED]/ig' & LOGPID=$! 
+ deadline=$((SECONDS+420)); READY=0 + try_connect_host() { + P="$1" + timeout 1 bash -lc "exec 3<>/dev/tcp/127.0.0.1/$P; head -c 8 <&3 >/dev/null" && return 0 || true + if command -v nc >/dev/null 2>&1; then nc -6 -z ::1 "$P" && return 0 || true; fi + return 1 + } + while [ $SECONDS -lt $deadline ]; do + if docker logs unity-mcp 2>&1 | grep -qE "MCP Bridge listening|Bridge ready|Server started"; then + READY=1; echo "Bridge ready (log markers)"; break + fi + PORT=$(python3 -c "import os,glob,json,sys,time; b=os.path.expanduser('~/.unity-mcp'); fs=sorted(glob.glob(os.path.join(b,'unity-mcp-status-*.json')), key=os.path.getmtime, reverse=True); print(next((json.load(open(f,'r',encoding='utf-8')).get('unity_port') for f in fs if time.time()-os.path.getmtime(f)<=300 and json.load(open(f,'r',encoding='utf-8')).get('unity_port')), '' ))" 2>/dev/null || true) + if [ -n "${PORT:-}" ] && { try_connect_host "$PORT" || docker exec unity-mcp bash -lc "timeout 1 bash -lc 'exec 3<>/dev/tcp/127.0.0.1/$PORT' || (command -v nc >/dev/null 2>&1 && nc -6 -z ::1 $PORT)"; }; then + READY=1; echo "Bridge ready on port $PORT"; break + fi + if docker logs unity-mcp 2>&1 | grep -qE "No valid Unity Editor license|Token not found in cache|com\.unity\.editor\.headless"; then + echo "Licensing error detected"; break + fi + sleep 2 + done + kill $LOGPID || true + if [ "$READY" != "1" ]; then + echo "Bridge not ready; diagnostics:" + echo "== status files =="; ls -la "$HOME/.unity-mcp" || true + echo "== status contents =="; for f in "$HOME"/.unity-mcp/unity-mcp-status-*.json; do [ -f "$f" ] && { echo "--- $f"; sed -n '1,120p' "$f"; }; done + echo "== sockets (inside container) =="; docker exec unity-mcp bash -lc 'ss -lntp || netstat -tulpen || true' + echo "== tail of Unity log ==" + docker logs --tail 200 unity-mcp | sed -E 's/((serial|license|password|token)[^[:space:]]*)/[REDACTED]/ig' || true + exit 1 + fi + + # ---------- MCP client config ---------- + - name: Write MCP config 
(.claude/mcp.json) + run: | + set -eux + mkdir -p .claude + cat > .claude/mcp.json < "$JUNIT_OUT" <<'XML' + <?xml version="1.0" encoding="UTF-8"?> + <testsuite name="UnityMCP.NL" tests="1" failures="0" errors="0" skipped="0"> +   <testcase name="NL-Suite.Bootstrap" classname="UnityMCP.NL"> +     <system-out>Bootstrap placeholder; suite will append real tests.</system-out> +   </testcase> + </testsuite> + XML + printf '# Unity NL/T Editing Suite Test Results\n\n' > "$MD_OUT" + + - name: Write safe revert helper (scripts/nlt-revert.sh) + shell: bash + run: | + set -eux + cat > scripts/nlt-revert.sh <<'BASH' + #!/usr/bin/env bash + set -euo pipefail + sub="${1:-}"; target_rel="${2:-}"; snap="${3:-}" + WS="${GITHUB_WORKSPACE:-$PWD}" + ROOT="$WS/TestProjects/UnityMCPTests" + t_abs="$(realpath -m "$WS/$target_rel")" + s_abs="$(realpath -m "$WS/$snap")" + if [[ "$t_abs" != "$ROOT/Assets/"* ]]; then + echo "refuse: target outside allowed scope: $t_abs" >&2; exit 2 + fi + mkdir -p "$(dirname "$s_abs")" + case "$sub" in + snapshot) + cp -f "$t_abs" "$s_abs" + sha=$(sha256sum "$s_abs" | awk '{print $1}') + echo "snapshot_sha=$sha" + ;; + restore) + if [[ ! -f "$s_abs" ]]; then echo "snapshot missing: $s_abs" >&2; exit 3; fi + cp -f "$s_abs" "$t_abs" + touch "$t_abs" + sha=$(sha256sum "$t_abs" | awk '{print $1}') + echo "restored_sha=$sha" + ;; + *) + echo "usage: $0 snapshot|restore <target_rel> <snapshot>" >&2; exit 1 + ;; + esac + BASH + chmod +x scripts/nlt-revert.sh + + # ---------- Snapshot baseline (pre-agent) ---------- + - name: Snapshot baseline (pre-agent) + if: steps.detect.outputs.anthropic_ok == 'true' && steps.detect.outputs.unity_ok == 'true' + shell: bash + run: | + set -euo pipefail + TARGET="TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs" + SNAP="reports/_snapshots/LongUnityScriptClaudeTest.cs.baseline" + scripts/nlt-revert.sh snapshot "$TARGET" "$SNAP" + + + # ---------- Run suite ---------- + - name: Run Claude NL suite (single pass) + uses: anthropics/claude-code-base-action@beta + if: steps.detect.outputs.anthropic_ok == 'true' && steps.detect.outputs.unity_ok == 'true' + continue-on-error: true + with: + use_node_cache: false + prompt_file: .claude/prompts/nl-unity-suite-full.md + 
mcp_config: .claude/mcp.json + allowed_tools: >- + Write, + Bash(scripts/nlt-revert.sh:*), + mcp__unity__manage_editor, + mcp__unity__list_resources, + mcp__unity__read_resource, + mcp__unity__apply_text_edits, + mcp__unity__script_apply_edits, + mcp__unity__validate_script, + mcp__unity__find_in_file, + mcp__unity__read_console, + mcp__unity__get_sha + disallowed_tools: TodoWrite,Task + model: claude-3-7-sonnet-latest + timeout_minutes: "30" + anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }} + + # ---------- Merge testcase fragments into JUnit ---------- + - name: Normalize/assemble JUnit in-place (single file) + if: always() + shell: bash + run: | + python3 - <<'PY' + from pathlib import Path + import xml.etree.ElementTree as ET + import re, os + def localname(tag: str) -> str: return tag.rsplit('}', 1)[-1] if '}' in tag else tag + src = Path(os.environ.get('JUNIT_OUT', 'reports/junit-nl-suite.xml')) + if not src.exists(): raise SystemExit(0) + tree = ET.parse(src); root = tree.getroot() + suite = root.find('./*') if localname(root.tag) == 'testsuites' else root + if suite is None: raise SystemExit(0) + fragments = sorted(Path('reports').glob('*_results.xml')) + added = 0 + for frag in fragments: + try: + froot = ET.parse(frag).getroot() + if localname(froot.tag) == 'testcase': + suite.append(froot); added += 1 + else: + for tc in froot.findall('.//testcase'): + suite.append(tc); added += 1 + except Exception: + txt = Path(frag).read_text(encoding='utf-8', errors='replace') + for m in re.findall(r'<testcase\b.*?</testcase>', txt, flags=re.DOTALL): + try: suite.append(ET.fromstring(m)); added += 1 + except Exception: pass + if added: + # Drop bootstrap placeholder and recompute counts + removed_bootstrap = 0 + for tc in list(suite.findall('.//testcase')): + name = (tc.get('name') or '') + if name == 'NL-Suite.Bootstrap': + suite.remove(tc) + removed_bootstrap += 1 + testcases = suite.findall('.//testcase') + tests_cnt = len(testcases) + failures_cnt = sum(1 for tc in testcases if 
(tc.find('failure') is not None or tc.find('error') is not None)) + suite.set('tests', str(tests_cnt)) + suite.set('failures', str(failures_cnt)) + suite.set('errors', str(0)) + suite.set('skipped', str(0)) + tree.write(src, encoding='utf-8', xml_declaration=True) + print(f"Added {added} testcase fragments; removed bootstrap={removed_bootstrap}; tests={tests_cnt}; failures={failures_cnt}") + PY + + # ---------- Markdown summary from JUnit ---------- + - name: Build markdown summary from JUnit + if: always() + shell: bash + run: | + python3 - <<'PY' + import xml.etree.ElementTree as ET + from pathlib import Path + import os, html + + def localname(tag: str) -> str: + return tag.rsplit('}', 1)[-1] if '}' in tag else tag + + src = Path(os.environ.get('JUNIT_OUT', 'reports/junit-nl-suite.xml')) + md_out = Path(os.environ.get('MD_OUT', 'reports/junit-nl-suite.md')) + # Ensure destination directory exists even if earlier prep steps were skipped + md_out.parent.mkdir(parents=True, exist_ok=True) + + if not src.exists(): + md_out.write_text("# Unity NL/T Editing Suite Test Results\n\n(No JUnit found)\n", encoding='utf-8') + raise SystemExit(0) + + tree = ET.parse(src) + root = tree.getroot() + suite = root.find('./*') if localname(root.tag) == 'testsuites' else root + cases = [] if suite is None else list(suite.findall('.//testcase')) + + total = len(cases) + failures = sum(1 for tc in cases if (tc.find('failure') is not None or tc.find('error') is not None)) + passed = total - failures + + desired = ['NL-0','NL-1','NL-2','NL-3','NL-4','T-A','T-B','T-C','T-D','T-E','T-F','T-G','T-H','T-I','T-J'] + name_to_case = {(tc.get('name') or ''): tc for tc in cases} + + def status_for(prefix: str): + for name, tc in name_to_case.items(): + if name.startswith(prefix): + return not ((tc.find('failure') is not None) or (tc.find('error') is not None)) + return None + + lines = [] + lines += [ + '# Unity NL/T Editing Suite Test Results', + '', + f'Totals: {passed} passed, {failures} 
failed, {total} total', + '', + '## Test Checklist' + ] + for p in desired: + st = status_for(p) + lines.append(f"- [x] {p}" if st is True else (f"- [ ] {p} (fail)" if st is False else f"- [ ] {p} (not run)")) + lines.append('') + + # Rich per-test system-out details + lines.append('## Test Details') + + def order_key(n: str): + try: + if n.startswith('NL-') and n[3].isdigit(): + return (0, int(n.split('.')[0].split('-')[1])) + except Exception: + pass + if n.startswith('T-') and len(n) > 2 and n[2].isalpha(): + return (1, ord(n[2])) + return (2, n) + + MAX_CHARS = 2000 + for name in sorted(name_to_case.keys(), key=order_key): + tc = name_to_case[name] + status_badge = "PASS" if (tc.find('failure') is None and tc.find('error') is None) else "FAIL" + lines.append(f"### {name} — {status_badge}") + so = tc.find('system-out') + text = '' if so is None or so.text is None else so.text.replace('\r\n','\n') + # Unescape XML entities so code reads naturally (e.g., => instead of =&gt;) + if text: + text = html.unescape(text) + if text.strip(): + t = text.strip() + if len(t) > MAX_CHARS: + t = t[:MAX_CHARS] + "\n…(truncated)" + # Use a safer fence if content contains triple backticks + fence = '```' + if '```' in t: + fence = '````' + lines.append(fence) + lines.append(t) + lines.append(fence) + else: + lines.append('(no system-out)') + # Note: use explicit None checks; an ElementTree Element with no children is falsy, so `find('failure') or find('error')` would skip real failures + node = tc.find('failure') if tc.find('failure') is not None else tc.find('error') + if node is not None: + msg = (node.get('message') or '').strip() + body = (node.text or '').strip() + if msg: lines.append(f"- Message: {msg}") + if body: lines.append(f"- Detail: {body.splitlines()[0][:500]}") + lines.append('') + + md_out.write_text('\n'.join(lines), encoding='utf-8') + PY + + - name: "Debug: list report files" + if: always() + shell: bash + run: | + set -eux + ls -la reports || true + shopt -s nullglob + for f in reports/*.xml; do + echo "===== $f =====" + head -n 40 "$f" || true + done + + # ---------- Collect execution transcript (if present) ---------- + - name: Collect action 
execution transcript + if: always() + shell: bash + run: | + set -eux + if [ -f "$RUNNER_TEMP/claude-execution-output.json" ]; then + cp "$RUNNER_TEMP/claude-execution-output.json" reports/claude-execution-output.json + elif [ -f "/home/runner/work/_temp/claude-execution-output.json" ]; then + cp "/home/runner/work/_temp/claude-execution-output.json" reports/claude-execution-output.json + fi + + - name: Sanitize markdown (normalize newlines) + if: always() + run: | + set -eu + python3 - <<'PY' + from pathlib import Path + rp=Path('reports'); rp.mkdir(parents=True, exist_ok=True) + for p in rp.glob('*.md'): + b=p.read_bytes().replace(b'\x00', b'') + s=b.decode('utf-8','replace').replace('\r\n','\n') + p.write_text(s, encoding='utf-8', newline='\n') + PY + + - name: NL/T details → Job Summary + if: always() + run: | + echo "## Unity NL/T Editing Suite — Summary" >> $GITHUB_STEP_SUMMARY + python3 - <<'PY' >> $GITHUB_STEP_SUMMARY + from pathlib import Path + p = Path('reports/junit-nl-suite.md') + if p.exists(): + text = p.read_bytes().decode('utf-8', 'replace') + MAX = 65000 + print(text[:MAX]) + if len(text) > MAX: + print("\n\n_…truncated; full report in artifacts._") + else: + print("_No markdown report found._") + PY + + - name: Fallback JUnit if missing + if: always() + run: | + set -eu + mkdir -p reports + if [ ! 
-f "$JUNIT_OUT" ]; then + printf '%s\n' \ + '<?xml version="1.0" encoding="UTF-8"?>' \ + '<testsuite name="UnityMCP.NL" tests="1" failures="1" errors="0" skipped="0">' \ + '  <testcase name="suite-bootstrap" classname="UnityMCP.NL">' \ + '    <failure message="No JUnit report was produced"/>' \ + '  </testcase>' \ + '</testsuite>' \ + > "$JUNIT_OUT" + fi + + - name: Publish JUnit report + if: always() + uses: mikepenz/action-junit-report@v5 + with: + report_paths: '${{ env.JUNIT_OUT }}' + include_passed: true + detailed_summary: true + annotate_notice: true + require_tests: false + fail_on_parse_error: true + + - name: Upload artifacts (reports + fragments + transcript) + if: always() + uses: actions/upload-artifact@v4 + with: + name: claude-nl-suite-artifacts + path: | + ${{ env.JUNIT_OUT }} + ${{ env.MD_OUT }} + reports/*_results.xml + reports/claude-execution-output.json + retention-days: 7 + + # ---------- Always stop Unity ---------- + - name: Stop Unity + if: always() + run: | + docker logs --tail 400 unity-mcp | sed -E 's/((serial|license|password|token)[^[:space:]]*)/[REDACTED]/ig' || true + docker rm -f unity-mcp || true + \ No newline at end of file diff --git a/.gitignore b/.gitignore index 4ede4e8b..0e2cbb03 100644 --- a/.gitignore +++ b/.gitignore @@ -33,4 +33,6 @@ CONTRIBUTING.md.meta .idea/ .vscode/ .aider* -.DS_Store* \ No newline at end of file +.DS_Store* +# Unity test project lock files +TestProjects/UnityMCPTests/Packages/packages-lock.json diff --git a/README-DEV.md b/README-DEV.md index eac08193..debcffc7 100644 --- a/README-DEV.md +++ b/README-DEV.md @@ -66,6 +66,41 @@ To find it reliably: Note: In recent builds, the Python server sources are also bundled inside the package under `UnityMcpServer~/src`. This is handy for local testing or pointing MCP clients directly at the packaged server. +## CI Test Workflow (GitHub Actions) + +We provide a CI job to run a Natural Language Editing mini-suite against the Unity test project. It spins up a headless Unity container and connects via the MCP bridge. + +- Trigger: Workflow dispatch (`Claude NL suite (Unity live)`). +- Image: `UNITY_IMAGE` (UnityCI) pulled by tag; the job resolves a digest at runtime. Logs are sanitized. 
+- Reports: JUnit at `reports/junit-nl-suite.xml`, Markdown at `reports/junit-nl-suite.md`. +- Publishing: JUnit is normalized to `reports/junit-for-actions.xml` and published; artifacts upload all files under `reports/`. + +### Test target script +- The repo includes a long, standalone C# script used to exercise larger edits and windows: + - `TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs` + Use this file locally and in CI to validate multi-edit batches, anchor inserts, and windowed reads on a sizable script. + +### Add a new NL test +- Edit `.claude/prompts/nl-unity-claude-tests-mini.md` (or `nl-unity-suite-full.md` for the larger suite). +- Follow the conventions: a single `<testsuite>` root, one `<testcase>` per sub-test, and end each `system-out` with `VERDICT: PASS|FAIL`. +- Keep edits minimal and reversible; include evidence windows and compact diffs. + +### Run the suite +1) Push your branch, then manually run the workflow from the Actions tab. +2) The job writes reports into `reports/` and uploads artifacts. +3) The “JUnit Test Report” check summarizes results; open the Job Summary for the full markdown. + +### View results +- Job Summary: inline markdown summary of the run, shown on the workflow run page in the Actions tab. +- Check: “JUnit Test Report” on the PR/commit. +- Artifacts: `claude-nl-suite-artifacts` includes the XML and MD reports. + + +### MCP Connection Debugging +- *Enable debug logs* in the Unity MCP window (inside the Editor) to view connection status, auto-setup results, and MCP client paths. It shows: + - bridge startup/port, client connections, strict framing negotiation, and parsed frames + - auto-config path detection (Windows/macOS/Linux), uv/claude resolution, and surfaced errors +- In CI, the job tails Unity logs (redacted for serial/license/password/token) and prints socket/status JSON diagnostics if startup fails. ## Workflow 1. 
**Make changes** to your source code in this directory diff --git a/README.md b/README.md index c3082f74..ae5e02dd 100644 --- a/README.md +++ b/README.md @@ -43,6 +43,9 @@ MCP for Unity acts as a bridge, allowing AI assistants (like Claude, Cursor) to * `manage_shader`: Performs shader CRUD operations (create, read, modify, delete). * `manage_gameobject`: Manages GameObjects: create, modify, delete, find, and component operations. * `execute_menu_item`: Executes a menu item via its path (e.g., "File/Save Project"). + * `apply_text_edits`: Precise text edits with precondition hashes and atomic multi-edit batches. + * `script_apply_edits`: Structured C# method/class edits (insert/replace/delete) with safer boundaries. + * `validate_script`: Fast validation (basic/standard) to catch syntax/structure issues before/after writes. --- diff --git a/TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs b/TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs new file mode 100644 index 00000000..27fb9348 --- /dev/null +++ b/TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs @@ -0,0 +1,2039 @@ +using UnityEngine; +using System.Collections.Generic; + +// Standalone, dependency-free long script for Claude NL/T editing tests. +// Intentionally verbose to simulate a complex gameplay script without external packages. 
+public class LongUnityScriptClaudeTest : MonoBehaviour +{ + [Header("Core References")] + public Transform reachOrigin; + public Animator animator; + + [Header("State")] + private Transform currentTarget; + private Transform previousTarget; + private float lastTargetFoundTime; + + [Header("Held Objects")] + private readonly List<Transform> heldObjects = new List<Transform>(); + + // Accumulators used by padding methods to avoid complete no-ops + private int padAccumulator = 0; + private Vector3 padVector = Vector3.zero; + + + [Header("Tuning")] + public float maxReachDistance = 2f; + public float maxHorizontalDistance = 1.0f; + public float maxVerticalDistance = 1.0f; + + // Public accessors used by NL tests + public bool HasTarget() { return currentTarget != null; } + public Transform GetCurrentTarget() => currentTarget; + + // Simple selection logic (self-contained) + private Transform FindBestTarget() + { + if (reachOrigin == null) return null; + // Dummy: prefer previously seen target within distance + if (currentTarget != null && Vector3.Distance(reachOrigin.position, currentTarget.position) <= maxReachDistance) + return currentTarget; + return null; + } + + private void HandleTargetSwitch(Transform next) + { + if (next == currentTarget) return; + previousTarget = currentTarget; + currentTarget = next; + lastTargetFoundTime = Time.time; + } + + private void LateUpdate() + { + // Keep file long with harmless per-frame work + if (currentTarget == null && previousTarget != null) + { + // decay previous reference over time + if (Time.time - lastTargetFoundTime > 0.5f) previousTarget = null; + } + } + + // NL tests sometimes add comments above Update() as an anchor + private void Update() + { + if (reachOrigin == null) return; + var best = FindBestTarget(); + if (best != null) HandleTargetSwitch(best); + } + + + // Dummy reach/hold API (no external deps) + public void OnObjectHeld(Transform t) + { + if (t == null) return; + if (!heldObjects.Contains(t)) heldObjects.Add(t); + 
animator?.SetInteger("objectsHeld", heldObjects.Count); + } + + public void OnObjectPlaced() + { + if (heldObjects.Count == 0) return; + heldObjects.RemoveAt(heldObjects.Count - 1); + animator?.SetInteger("objectsHeld", heldObjects.Count); + } + + // More padding: repetitive blocks with slight variations + #region Padding Blocks + private Vector3 AccumulateBlend(Transform t) + { + if (t == null || reachOrigin == null) return Vector3.zero; + Vector3 local = reachOrigin.InverseTransformPoint(t.position); + float bx = Mathf.Clamp(local.x / Mathf.Max(0.001f, maxHorizontalDistance), -1f, 1f); + float by = Mathf.Clamp(local.y / Mathf.Max(0.001f, maxVerticalDistance), -1f, 1f); + return new Vector3(bx, by, 0f); + } + + private void ApplyBlend(Vector3 blend) + { + if (animator == null) return; + animator.SetFloat("reachX", blend.x); + animator.SetFloat("reachY", blend.y); + } + + public void TickBlendOnce() + { + var b = AccumulateBlend(currentTarget); + ApplyBlend(b); + } + + // A long series of small no-op methods to bulk up the file without adding deps + private void Step001() { } + private void Step002() { } + private void Step003() { } + private void Step004() { } + private void Step005() { } + private void Step006() { } + private void Step007() { } + private void Step008() { } + private void Step009() { } + private void Step010() { } + private void Step011() { } + private void Step012() { } + private void Step013() { } + private void Step014() { } + private void Step015() { } + private void Step016() { } + private void Step017() { } + private void Step018() { } + private void Step019() { } + private void Step020() { } + private void Step021() { } + private void Step022() { } + private void Step023() { } + private void Step024() { } + private void Step025() { } + private void Step026() { } + private void Step027() { } + private void Step028() { } + private void Step029() { } + private void Step030() { } + private void Step031() { } + private void Step032() { } + 
private void Step033() { } + private void Step034() { } + private void Step035() { } + private void Step036() { } + private void Step037() { } + private void Step038() { } + private void Step039() { } + private void Step040() { } + private void Step041() { } + private void Step042() { } + private void Step043() { } + private void Step044() { } + private void Step045() { } + private void Step046() { } + private void Step047() { } + private void Step048() { } + private void Step049() { } + private void Step050() { } + #endregion + #region MassivePadding + private void Pad0051() + { + } + private void Pad0052() + { + } + private void Pad0053() + { + } + private void Pad0054() + { + } + private void Pad0055() + { + } + private void Pad0056() + { + } + private void Pad0057() + { + } + private void Pad0058() + { + } + private void Pad0059() + { + } + private void Pad0060() + { + } + private void Pad0061() + { + } + private void Pad0062() + { + } + private void Pad0063() + { + } + private void Pad0064() + { + } + private void Pad0065() + { + } + private void Pad0066() + { + } + private void Pad0067() + { + } + private void Pad0068() + { + } + private void Pad0069() + { + } + private void Pad0070() + { + } + private void Pad0071() + { + } + private void Pad0072() + { + } + private void Pad0073() + { + } + private void Pad0074() + { + } + private void Pad0075() + { + } + private void Pad0076() + { + } + private void Pad0077() + { + } + private void Pad0078() + { + } + private void Pad0079() + { + } + private void Pad0080() + { + } + private void Pad0081() + { + } + private void Pad0082() + { + } + private void Pad0083() + { + } + private void Pad0084() + { + } + private void Pad0085() + { + } + private void Pad0086() + { + } + private void Pad0087() + { + } + private void Pad0088() + { + } + private void Pad0089() + { + } + private void Pad0090() + { + } + private void Pad0091() + { + } + private void Pad0092() + { + } + private void Pad0093() + { + } + private void 
Pad0094() + { + } + private void Pad0095() + { + } + private void Pad0096() + { + } + private void Pad0097() + { + } + private void Pad0098() + { + } + private void Pad0099() + { + } + private void Pad0100() + { + // lightweight math to give this padding method some substance + padAccumulator = (padAccumulator * 1664525 + 1013904223 + 100) & 0x7fffffff; + float t = (padAccumulator % 1000) * 0.001f; + padVector.x = Mathf.Lerp(padVector.x, t, 0.1f); + padVector.y = Mathf.Lerp(padVector.y, 1f - t, 0.1f); + padVector.z = 0f; + } + private void Pad0101() + { + } + private void Pad0102() + { + } + private void Pad0103() + { + } + private void Pad0104() + { + } + private void Pad0105() + { + } + private void Pad0106() + { + } + private void Pad0107() + { + } + private void Pad0108() + { + } + private void Pad0109() + { + } + private void Pad0110() + { + } + private void Pad0111() + { + } + private void Pad0112() + { + } + private void Pad0113() + { + } + private void Pad0114() + { + } + private void Pad0115() + { + } + private void Pad0116() + { + } + private void Pad0117() + { + } + private void Pad0118() + { + } + private void Pad0119() + { + } + private void Pad0120() + { + } + private void Pad0121() + { + } + private void Pad0122() + { + } + private void Pad0123() + { + } + private void Pad0124() + { + } + private void Pad0125() + { + } + private void Pad0126() + { + } + private void Pad0127() + { + } + private void Pad0128() + { + } + private void Pad0129() + { + } + private void Pad0130() + { + } + private void Pad0131() + { + } + private void Pad0132() + { + } + private void Pad0133() + { + } + private void Pad0134() + { + } + private void Pad0135() + { + } + private void Pad0136() + { + } + private void Pad0137() + { + } + private void Pad0138() + { + } + private void Pad0139() + { + } + private void Pad0140() + { + } + private void Pad0141() + { + } + private void Pad0142() + { + } + private void Pad0143() + { + } + private void Pad0144() + { + } + private void 
Pad0145() + { + } + private void Pad0146() + { + } + private void Pad0147() + { + } + private void Pad0148() + { + } + private void Pad0149() + { + } + private void Pad0150() + { + // lightweight math to give this padding method some substance + padAccumulator = (padAccumulator * 1664525 + 1013904223 + 150) & 0x7fffffff; + float t = (padAccumulator % 1000) * 0.001f; + padVector.x = Mathf.Lerp(padVector.x, t, 0.1f); + padVector.y = Mathf.Lerp(padVector.y, 1f - t, 0.1f); + padVector.z = 0f; + } + private void Pad0151() + { + } + private void Pad0152() + { + } + private void Pad0153() + { + } + private void Pad0154() + { + } + private void Pad0155() + { + } + private void Pad0156() + { + } + private void Pad0157() + { + } + private void Pad0158() + { + } + private void Pad0159() + { + } + private void Pad0160() + { + } + private void Pad0161() + { + } + private void Pad0162() + { + } + private void Pad0163() + { + } + private void Pad0164() + { + } + private void Pad0165() + { + } + private void Pad0166() + { + } + private void Pad0167() + { + } + private void Pad0168() + { + } + private void Pad0169() + { + } + private void Pad0170() + { + } + private void Pad0171() + { + } + private void Pad0172() + { + } + private void Pad0173() + { + } + private void Pad0174() + { + } + private void Pad0175() + { + } + private void Pad0176() + { + } + private void Pad0177() + { + } + private void Pad0178() + { + } + private void Pad0179() + { + } + private void Pad0180() + { + } + private void Pad0181() + { + } + private void Pad0182() + { + } + private void Pad0183() + { + } + private void Pad0184() + { + } + private void Pad0185() + { + } + private void Pad0186() + { + } + private void Pad0187() + { + } + private void Pad0188() + { + } + private void Pad0189() + { + } + private void Pad0190() + { + } + private void Pad0191() + { + } + private void Pad0192() + { + } + private void Pad0193() + { + } + private void Pad0194() + { + } + private void Pad0195() + { + } + private void 
Pad0196() + { + } + private void Pad0197() + { + } + private void Pad0198() + { + } + private void Pad0199() + { + } + private void Pad0200() + { + // lightweight math to give this padding method some substance + padAccumulator = (padAccumulator * 1664525 + 1013904223 + 200) & 0x7fffffff; + float t = (padAccumulator % 1000) * 0.001f; + padVector.x = Mathf.Lerp(padVector.x, t, 0.1f); + padVector.y = Mathf.Lerp(padVector.y, 1f - t, 0.1f); + padVector.z = 0f; + } + private void Pad0201() + { + } + private void Pad0202() + { + } + private void Pad0203() + { + } + private void Pad0204() + { + } + private void Pad0205() + { + } + private void Pad0206() + { + } + private void Pad0207() + { + } + private void Pad0208() + { + } + private void Pad0209() + { + } + private void Pad0210() + { + } + private void Pad0211() + { + } + private void Pad0212() + { + } + private void Pad0213() + { + } + private void Pad0214() + { + } + private void Pad0215() + { + } + private void Pad0216() + { + } + private void Pad0217() + { + } + private void Pad0218() + { + } + private void Pad0219() + { + } + private void Pad0220() + { + } + private void Pad0221() + { + } + private void Pad0222() + { + } + private void Pad0223() + { + } + private void Pad0224() + { + } + private void Pad0225() + { + } + private void Pad0226() + { + } + private void Pad0227() + { + } + private void Pad0228() + { + } + private void Pad0229() + { + } + private void Pad0230() + { + } + private void Pad0231() + { + } + private void Pad0232() + { + } + private void Pad0233() + { + } + private void Pad0234() + { + } + private void Pad0235() + { + } + private void Pad0236() + { + } + private void Pad0237() + { + } + private void Pad0238() + { + } + private void Pad0239() + { + } + private void Pad0240() + { + } + private void Pad0241() + { + } + private void Pad0242() + { + } + private void Pad0243() + { + } + private void Pad0244() + { + } + private void Pad0245() + { + } + private void Pad0246() + { + } + private void 
Pad0247() + { + } + private void Pad0248() + { + } + private void Pad0249() + { + } + private void Pad0250() + { + // lightweight math to give this padding method some substance + padAccumulator = (padAccumulator * 1664525 + 1013904223 + 250) & 0x7fffffff; + float t = (padAccumulator % 1000) * 0.001f; + padVector.x = Mathf.Lerp(padVector.x, t, 0.1f); + padVector.y = Mathf.Lerp(padVector.y, 1f - t, 0.1f); + padVector.z = 0f; + } + private void Pad0251() + { + } + private void Pad0252() + { + } + private void Pad0253() + { + } + private void Pad0254() + { + } + private void Pad0255() + { + } + private void Pad0256() + { + } + private void Pad0257() + { + } + private void Pad0258() + { + } + private void Pad0259() + { + } + private void Pad0260() + { + } + private void Pad0261() + { + } + private void Pad0262() + { + } + private void Pad0263() + { + } + private void Pad0264() + { + } + private void Pad0265() + { + } + private void Pad0266() + { + } + private void Pad0267() + { + } + private void Pad0268() + { + } + private void Pad0269() + { + } + private void Pad0270() + { + } + private void Pad0271() + { + } + private void Pad0272() + { + } + private void Pad0273() + { + } + private void Pad0274() + { + } + private void Pad0275() + { + } + private void Pad0276() + { + } + private void Pad0277() + { + } + private void Pad0278() + { + } + private void Pad0279() + { + } + private void Pad0280() + { + } + private void Pad0281() + { + } + private void Pad0282() + { + } + private void Pad0283() + { + } + private void Pad0284() + { + } + private void Pad0285() + { + } + private void Pad0286() + { + } + private void Pad0287() + { + } + private void Pad0288() + { + } + private void Pad0289() + { + } + private void Pad0290() + { + } + private void Pad0291() + { + } + private void Pad0292() + { + } + private void Pad0293() + { + } + private void Pad0294() + { + } + private void Pad0295() + { + } + private void Pad0296() + { + } + private void Pad0297() + { + } + private void 
Pad0298() + { + } + private void Pad0299() + { + } + private void Pad0300() + { + // lightweight math to give this padding method some substance + padAccumulator = (padAccumulator * 1664525 + 1013904223 + 300) & 0x7fffffff; + float t = (padAccumulator % 1000) * 0.001f; + padVector.x = Mathf.Lerp(padVector.x, t, 0.1f); + padVector.y = Mathf.Lerp(padVector.y, 1f - t, 0.1f); + padVector.z = 0f; + } + private void Pad0301() + { + } + private void Pad0302() + { + } + private void Pad0303() + { + } + private void Pad0304() + { + } + private void Pad0305() + { + } + private void Pad0306() + { + } + private void Pad0307() + { + } + private void Pad0308() + { + } + private void Pad0309() + { + } + private void Pad0310() + { + } + private void Pad0311() + { + } + private void Pad0312() + { + } + private void Pad0313() + { + } + private void Pad0314() + { + } + private void Pad0315() + { + } + private void Pad0316() + { + } + private void Pad0317() + { + } + private void Pad0318() + { + } + private void Pad0319() + { + } + private void Pad0320() + { + } + private void Pad0321() + { + } + private void Pad0322() + { + } + private void Pad0323() + { + } + private void Pad0324() + { + } + private void Pad0325() + { + } + private void Pad0326() + { + } + private void Pad0327() + { + } + private void Pad0328() + { + } + private void Pad0329() + { + } + private void Pad0330() + { + } + private void Pad0331() + { + } + private void Pad0332() + { + } + private void Pad0333() + { + } + private void Pad0334() + { + } + private void Pad0335() + { + } + private void Pad0336() + { + } + private void Pad0337() + { + } + private void Pad0338() + { + } + private void Pad0339() + { + } + private void Pad0340() + { + } + private void Pad0341() + { + } + private void Pad0342() + { + } + private void Pad0343() + { + } + private void Pad0344() + { + } + private void Pad0345() + { + } + private void Pad0346() + { + } + private void Pad0347() + { + } + private void Pad0348() + { + } + private void 
Pad0349() + { + } + private void Pad0350() + { + // lightweight math to give this padding method some substance + padAccumulator = (padAccumulator * 1664525 + 1013904223 + 350) & 0x7fffffff; + float t = (padAccumulator % 1000) * 0.001f; + padVector.x = Mathf.Lerp(padVector.x, t, 0.1f); + padVector.y = Mathf.Lerp(padVector.y, 1f - t, 0.1f); + padVector.z = 0f; + } + private void Pad0351() + { + } + private void Pad0352() + { + } + private void Pad0353() + { + } + private void Pad0354() + { + } + private void Pad0355() + { + } + private void Pad0356() + { + } + private void Pad0357() + { + } + private void Pad0358() + { + } + private void Pad0359() + { + } + private void Pad0360() + { + } + private void Pad0361() + { + } + private void Pad0362() + { + } + private void Pad0363() + { + } + private void Pad0364() + { + } + private void Pad0365() + { + } + private void Pad0366() + { + } + private void Pad0367() + { + } + private void Pad0368() + { + } + private void Pad0369() + { + } + private void Pad0370() + { + } + private void Pad0371() + { + } + private void Pad0372() + { + } + private void Pad0373() + { + } + private void Pad0374() + { + } + private void Pad0375() + { + } + private void Pad0376() + { + } + private void Pad0377() + { + } + private void Pad0378() + { + } + private void Pad0379() + { + } + private void Pad0380() + { + } + private void Pad0381() + { + } + private void Pad0382() + { + } + private void Pad0383() + { + } + private void Pad0384() + { + } + private void Pad0385() + { + } + private void Pad0386() + { + } + private void Pad0387() + { + } + private void Pad0388() + { + } + private void Pad0389() + { + } + private void Pad0390() + { + } + private void Pad0391() + { + } + private void Pad0392() + { + } + private void Pad0393() + { + } + private void Pad0394() + { + } + private void Pad0395() + { + } + private void Pad0396() + { + } + private void Pad0397() + { + } + private void Pad0398() + { + } + private void Pad0399() + { + } + private void 
Pad0400() + { + // lightweight math to give this padding method some substance + padAccumulator = (padAccumulator * 1664525 + 1013904223 + 400) & 0x7fffffff; + float t = (padAccumulator % 1000) * 0.001f; + padVector.x = Mathf.Lerp(padVector.x, t, 0.1f); + padVector.y = Mathf.Lerp(padVector.y, 1f - t, 0.1f); + padVector.z = 0f; + } + private void Pad0401() + { + } + private void Pad0402() + { + } + private void Pad0403() + { + } + private void Pad0404() + { + } + private void Pad0405() + { + } + private void Pad0406() + { + } + private void Pad0407() + { + } + private void Pad0408() + { + } + private void Pad0409() + { + } + private void Pad0410() + { + } + private void Pad0411() + { + } + private void Pad0412() + { + } + private void Pad0413() + { + } + private void Pad0414() + { + } + private void Pad0415() + { + } + private void Pad0416() + { + } + private void Pad0417() + { + } + private void Pad0418() + { + } + private void Pad0419() + { + } + private void Pad0420() + { + } + private void Pad0421() + { + } + private void Pad0422() + { + } + private void Pad0423() + { + } + private void Pad0424() + { + } + private void Pad0425() + { + } + private void Pad0426() + { + } + private void Pad0427() + { + } + private void Pad0428() + { + } + private void Pad0429() + { + } + private void Pad0430() + { + } + private void Pad0431() + { + } + private void Pad0432() + { + } + private void Pad0433() + { + } + private void Pad0434() + { + } + private void Pad0435() + { + } + private void Pad0436() + { + } + private void Pad0437() + { + } + private void Pad0438() + { + } + private void Pad0439() + { + } + private void Pad0440() + { + } + private void Pad0441() + { + } + private void Pad0442() + { + } + private void Pad0443() + { + } + private void Pad0444() + { + } + private void Pad0445() + { + } + private void Pad0446() + { + } + private void Pad0447() + { + } + private void Pad0448() + { + } + private void Pad0449() + { + } + private void Pad0450() + { + // lightweight 
math to give this padding method some substance + padAccumulator = (padAccumulator * 1664525 + 1013904223 + 450) & 0x7fffffff; + float t = (padAccumulator % 1000) * 0.001f; + padVector.x = Mathf.Lerp(padVector.x, t, 0.1f); + padVector.y = Mathf.Lerp(padVector.y, 1f - t, 0.1f); + padVector.z = 0f; + } + private void Pad0451() + { + } + private void Pad0452() + { + } + private void Pad0453() + { + } + private void Pad0454() + { + } + private void Pad0455() + { + } + private void Pad0456() + { + } + private void Pad0457() + { + } + private void Pad0458() + { + } + private void Pad0459() + { + } + private void Pad0460() + { + } + private void Pad0461() + { + } + private void Pad0462() + { + } + private void Pad0463() + { + } + private void Pad0464() + { + } + private void Pad0465() + { + } + private void Pad0466() + { + } + private void Pad0467() + { + } + private void Pad0468() + { + } + private void Pad0469() + { + } + private void Pad0470() + { + } + private void Pad0471() + { + } + private void Pad0472() + { + } + private void Pad0473() + { + } + private void Pad0474() + { + } + private void Pad0475() + { + } + private void Pad0476() + { + } + private void Pad0477() + { + } + private void Pad0478() + { + } + private void Pad0479() + { + } + private void Pad0480() + { + } + private void Pad0481() + { + } + private void Pad0482() + { + } + private void Pad0483() + { + } + private void Pad0484() + { + } + private void Pad0485() + { + } + private void Pad0486() + { + } + private void Pad0487() + { + } + private void Pad0488() + { + } + private void Pad0489() + { + } + private void Pad0490() + { + } + private void Pad0491() + { + } + private void Pad0492() + { + } + private void Pad0493() + { + } + private void Pad0494() + { + } + private void Pad0495() + { + } + private void Pad0496() + { + } + private void Pad0497() + { + } + private void Pad0498() + { + } + private void Pad0499() + { + } + private void Pad0500() + { + // lightweight math to give this padding method 
some substance + padAccumulator = (padAccumulator * 1664525 + 1013904223 + 500) & 0x7fffffff; + float t = (padAccumulator % 1000) * 0.001f; + padVector.x = Mathf.Lerp(padVector.x, t, 0.1f); + padVector.y = Mathf.Lerp(padVector.y, 1f - t, 0.1f); + padVector.z = 0f; + } + private void Pad0501() + { + } + private void Pad0502() + { + } + private void Pad0503() + { + } + private void Pad0504() + { + } + private void Pad0505() + { + } + private void Pad0506() + { + } + private void Pad0507() + { + } + private void Pad0508() + { + } + private void Pad0509() + { + } + private void Pad0510() + { + } + private void Pad0511() + { + } + private void Pad0512() + { + } + private void Pad0513() + { + } + private void Pad0514() + { + } + private void Pad0515() + { + } + private void Pad0516() + { + } + private void Pad0517() + { + } + private void Pad0518() + { + } + private void Pad0519() + { + } + private void Pad0520() + { + } + private void Pad0521() + { + } + private void Pad0522() + { + } + private void Pad0523() + { + } + private void Pad0524() + { + } + private void Pad0525() + { + } + private void Pad0526() + { + } + private void Pad0527() + { + } + private void Pad0528() + { + } + private void Pad0529() + { + } + private void Pad0530() + { + } + private void Pad0531() + { + } + private void Pad0532() + { + } + private void Pad0533() + { + } + private void Pad0534() + { + } + private void Pad0535() + { + } + private void Pad0536() + { + } + private void Pad0537() + { + } + private void Pad0538() + { + } + private void Pad0539() + { + } + private void Pad0540() + { + } + private void Pad0541() + { + } + private void Pad0542() + { + } + private void Pad0543() + { + } + private void Pad0544() + { + } + private void Pad0545() + { + } + private void Pad0546() + { + } + private void Pad0547() + { + } + private void Pad0548() + { + } + private void Pad0549() + { + } + private void Pad0550() + { + // lightweight math to give this padding method some substance + padAccumulator = 
(padAccumulator * 1664525 + 1013904223 + 550) & 0x7fffffff; + float t = (padAccumulator % 1000) * 0.001f; + padVector.x = Mathf.Lerp(padVector.x, t, 0.1f); + padVector.y = Mathf.Lerp(padVector.y, 1f - t, 0.1f); + padVector.z = 0f; + } + private void Pad0551() + { + } + private void Pad0552() + { + } + private void Pad0553() + { + } + private void Pad0554() + { + } + private void Pad0555() + { + } + private void Pad0556() + { + } + private void Pad0557() + { + } + private void Pad0558() + { + } + private void Pad0559() + { + } + private void Pad0560() + { + } + private void Pad0561() + { + } + private void Pad0562() + { + } + private void Pad0563() + { + } + private void Pad0564() + { + } + private void Pad0565() + { + } + private void Pad0566() + { + } + private void Pad0567() + { + } + private void Pad0568() + { + } + private void Pad0569() + { + } + private void Pad0570() + { + } + private void Pad0571() + { + } + private void Pad0572() + { + } + private void Pad0573() + { + } + private void Pad0574() + { + } + private void Pad0575() + { + } + private void Pad0576() + { + } + private void Pad0577() + { + } + private void Pad0578() + { + } + private void Pad0579() + { + } + private void Pad0580() + { + } + private void Pad0581() + { + } + private void Pad0582() + { + } + private void Pad0583() + { + } + private void Pad0584() + { + } + private void Pad0585() + { + } + private void Pad0586() + { + } + private void Pad0587() + { + } + private void Pad0588() + { + } + private void Pad0589() + { + } + private void Pad0590() + { + } + private void Pad0591() + { + } + private void Pad0592() + { + } + private void Pad0593() + { + } + private void Pad0594() + { + } + private void Pad0595() + { + } + private void Pad0596() + { + } + private void Pad0597() + { + } + private void Pad0598() + { + } + private void Pad0599() + { + } + private void Pad0600() + { + // lightweight math to give this padding method some substance + padAccumulator = (padAccumulator * 1664525 + 
1013904223 + 600) & 0x7fffffff; + float t = (padAccumulator % 1000) * 0.001f; + padVector.x = Mathf.Lerp(padVector.x, t, 0.1f); + padVector.y = Mathf.Lerp(padVector.y, 1f - t, 0.1f); + padVector.z = 0f; + } + private void Pad0601() + { + } + private void Pad0602() + { + } + private void Pad0603() + { + } + private void Pad0604() + { + } + private void Pad0605() + { + } + private void Pad0606() + { + } + private void Pad0607() + { + } + private void Pad0608() + { + } + private void Pad0609() + { + } + private void Pad0610() + { + } + private void Pad0611() + { + } + private void Pad0612() + { + } + private void Pad0613() + { + } + private void Pad0614() + { + } + private void Pad0615() + { + } + private void Pad0616() + { + } + private void Pad0617() + { + } + private void Pad0618() + { + } + private void Pad0619() + { + } + private void Pad0620() + { + } + private void Pad0621() + { + } + private void Pad0622() + { + } + private void Pad0623() + { + } + private void Pad0624() + { + } + private void Pad0625() + { + } + private void Pad0626() + { + } + private void Pad0627() + { + } + private void Pad0628() + { + } + private void Pad0629() + { + } + private void Pad0630() + { + } + private void Pad0631() + { + } + private void Pad0632() + { + } + private void Pad0633() + { + } + private void Pad0634() + { + } + private void Pad0635() + { + } + private void Pad0636() + { + } + private void Pad0637() + { + } + private void Pad0638() + { + } + private void Pad0639() + { + } + private void Pad0640() + { + } + private void Pad0641() + { + } + private void Pad0642() + { + } + private void Pad0643() + { + } + private void Pad0644() + { + } + private void Pad0645() + { + } + private void Pad0646() + { + } + private void Pad0647() + { + } + private void Pad0648() + { + } + private void Pad0649() + { + } + private void Pad0650() + { + // lightweight math to give this padding method some substance + padAccumulator = (padAccumulator * 1664525 + 1013904223 + 650) & 0x7fffffff; + 
+        float t = (padAccumulator % 1000) * 0.001f;
+        padVector.x = Mathf.Lerp(padVector.x, t, 0.1f);
+        padVector.y = Mathf.Lerp(padVector.y, 1f - t, 0.1f);
+        padVector.z = 0f;
+    }
+    #endregion
+
+}
+
+
diff --git a/TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs.meta b/TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs.meta
new file mode 100644
index 00000000..3d95d986
--- /dev/null
+++ b/TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs.meta
@@ -0,0 +1,2 @@
+fileFormatVersion: 2
+guid: dfbabf507ab1245178d1a8e745d8d283
\ No newline at end of file
diff --git a/TestProjects/UnityMCPTests/Packages/packages-lock.json b/TestProjects/UnityMCPTests/Packages/packages-lock.json
deleted file mode 100644
index 51cb01d4..00000000
--- a/TestProjects/UnityMCPTests/Packages/packages-lock.json
+++ /dev/null
@@ -1,417 +0,0 @@
-{
-  "dependencies": {
-    "com.coplaydev.unity-mcp": {
-      "version": "file:../../../UnityMcpBridge",
-      "depth": 0,
-      "source": "local",
-      "dependencies": {
-        "com.unity.nuget.newtonsoft-json": "3.0.2"
-      }
-    },
-    "com.unity.collab-proxy": {
-      "version": "2.5.2",
-      "depth": 0,
-      "source": "registry",
-      "dependencies": {},
-      "url": "https://packages.unity.com"
-    },
-    "com.unity.editorcoroutines": {
-      "version": "1.0.0",
-      "depth": 1,
-      "source": "registry",
-      "dependencies": {},
-      "url": "https://packages.unity.com"
-    },
-    "com.unity.ext.nunit": {
-      "version": "1.0.6",
-      "depth": 1,
-      "source": "registry",
-      "dependencies": {},
-      "url": "https://packages.unity.com"
-    },
-    "com.unity.feature.development": {
-      "version": "1.0.1",
-      "depth": 0,
-      "source": "builtin",
-      "dependencies": {
-        "com.unity.ide.visualstudio": "2.0.22",
-        "com.unity.ide.rider": "3.0.31",
-        "com.unity.ide.vscode": "1.2.5",
-        "com.unity.editorcoroutines": "1.0.0",
-        "com.unity.performance.profile-analyzer": "1.2.2",
-        "com.unity.test-framework": "1.1.33",
-        "com.unity.testtools.codecoverage": "1.2.6"
-      }
-    },
-    "com.unity.ide.rider": {
"version": "3.0.31", - "depth": 0, - "source": "registry", - "dependencies": { - "com.unity.ext.nunit": "1.0.6" - }, - "url": "https://packages.unity.com" - }, - "com.unity.ide.visualstudio": { - "version": "2.0.22", - "depth": 0, - "source": "registry", - "dependencies": { - "com.unity.test-framework": "1.1.9" - }, - "url": "https://packages.unity.com" - }, - "com.unity.ide.vscode": { - "version": "1.2.5", - "depth": 0, - "source": "registry", - "dependencies": {}, - "url": "https://packages.unity.com" - }, - "com.unity.ide.windsurf": { - "version": "https://github.com/Asuta/com.unity.ide.windsurf.git", - "depth": 0, - "source": "git", - "dependencies": { - "com.unity.test-framework": "1.1.9" - }, - "hash": "6161accf3e7beab96341813913e714c7e2fb5c5d" - }, - "com.unity.nuget.newtonsoft-json": { - "version": "3.2.1", - "depth": 1, - "source": "registry", - "dependencies": {}, - "url": "https://packages.unity.com" - }, - "com.unity.performance.profile-analyzer": { - "version": "1.2.2", - "depth": 1, - "source": "registry", - "dependencies": {}, - "url": "https://packages.unity.com" - }, - "com.unity.settings-manager": { - "version": "1.0.3", - "depth": 2, - "source": "registry", - "dependencies": {}, - "url": "https://packages.unity.com" - }, - "com.unity.test-framework": { - "version": "1.1.33", - "depth": 0, - "source": "registry", - "dependencies": { - "com.unity.ext.nunit": "1.0.6", - "com.unity.modules.imgui": "1.0.0", - "com.unity.modules.jsonserialize": "1.0.0" - }, - "url": "https://packages.unity.com" - }, - "com.unity.testtools.codecoverage": { - "version": "1.2.6", - "depth": 1, - "source": "registry", - "dependencies": { - "com.unity.test-framework": "1.0.16", - "com.unity.settings-manager": "1.0.1" - }, - "url": "https://packages.unity.com" - }, - "com.unity.textmeshpro": { - "version": "3.0.6", - "depth": 0, - "source": "registry", - "dependencies": { - "com.unity.ugui": "1.0.0" - }, - "url": "https://packages.unity.com" - }, - 
"com.unity.timeline": { - "version": "1.6.5", - "depth": 0, - "source": "registry", - "dependencies": { - "com.unity.modules.audio": "1.0.0", - "com.unity.modules.director": "1.0.0", - "com.unity.modules.animation": "1.0.0", - "com.unity.modules.particlesystem": "1.0.0" - }, - "url": "https://packages.unity.com" - }, - "com.unity.ugui": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": { - "com.unity.modules.ui": "1.0.0", - "com.unity.modules.imgui": "1.0.0" - } - }, - "com.unity.visualscripting": { - "version": "1.9.4", - "depth": 0, - "source": "registry", - "dependencies": { - "com.unity.ugui": "1.0.0", - "com.unity.modules.jsonserialize": "1.0.0" - }, - "url": "https://packages.unity.com" - }, - "com.unity.modules.ai": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": {} - }, - "com.unity.modules.androidjni": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": {} - }, - "com.unity.modules.animation": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": {} - }, - "com.unity.modules.assetbundle": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": {} - }, - "com.unity.modules.audio": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": {} - }, - "com.unity.modules.cloth": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": { - "com.unity.modules.physics": "1.0.0" - } - }, - "com.unity.modules.director": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": { - "com.unity.modules.audio": "1.0.0", - "com.unity.modules.animation": "1.0.0" - } - }, - "com.unity.modules.imageconversion": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": {} - }, - "com.unity.modules.imgui": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": {} - }, - "com.unity.modules.jsonserialize": { - "version": "1.0.0", - 
"depth": 0, - "source": "builtin", - "dependencies": {} - }, - "com.unity.modules.particlesystem": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": {} - }, - "com.unity.modules.physics": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": {} - }, - "com.unity.modules.physics2d": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": {} - }, - "com.unity.modules.screencapture": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": { - "com.unity.modules.imageconversion": "1.0.0" - } - }, - "com.unity.modules.subsystems": { - "version": "1.0.0", - "depth": 1, - "source": "builtin", - "dependencies": { - "com.unity.modules.jsonserialize": "1.0.0" - } - }, - "com.unity.modules.terrain": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": {} - }, - "com.unity.modules.terrainphysics": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": { - "com.unity.modules.physics": "1.0.0", - "com.unity.modules.terrain": "1.0.0" - } - }, - "com.unity.modules.tilemap": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": { - "com.unity.modules.physics2d": "1.0.0" - } - }, - "com.unity.modules.ui": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": {} - }, - "com.unity.modules.uielements": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": { - "com.unity.modules.ui": "1.0.0", - "com.unity.modules.imgui": "1.0.0", - "com.unity.modules.jsonserialize": "1.0.0", - "com.unity.modules.uielementsnative": "1.0.0" - } - }, - "com.unity.modules.uielementsnative": { - "version": "1.0.0", - "depth": 1, - "source": "builtin", - "dependencies": { - "com.unity.modules.ui": "1.0.0", - "com.unity.modules.imgui": "1.0.0", - "com.unity.modules.jsonserialize": "1.0.0" - } - }, - "com.unity.modules.umbra": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - 
"dependencies": {} - }, - "com.unity.modules.unityanalytics": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": { - "com.unity.modules.unitywebrequest": "1.0.0", - "com.unity.modules.jsonserialize": "1.0.0" - } - }, - "com.unity.modules.unitywebrequest": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": {} - }, - "com.unity.modules.unitywebrequestassetbundle": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": { - "com.unity.modules.assetbundle": "1.0.0", - "com.unity.modules.unitywebrequest": "1.0.0" - } - }, - "com.unity.modules.unitywebrequestaudio": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": { - "com.unity.modules.unitywebrequest": "1.0.0", - "com.unity.modules.audio": "1.0.0" - } - }, - "com.unity.modules.unitywebrequesttexture": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": { - "com.unity.modules.unitywebrequest": "1.0.0", - "com.unity.modules.imageconversion": "1.0.0" - } - }, - "com.unity.modules.unitywebrequestwww": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": { - "com.unity.modules.unitywebrequest": "1.0.0", - "com.unity.modules.unitywebrequestassetbundle": "1.0.0", - "com.unity.modules.unitywebrequestaudio": "1.0.0", - "com.unity.modules.audio": "1.0.0", - "com.unity.modules.assetbundle": "1.0.0", - "com.unity.modules.imageconversion": "1.0.0" - } - }, - "com.unity.modules.vehicles": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": { - "com.unity.modules.physics": "1.0.0" - } - }, - "com.unity.modules.video": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": { - "com.unity.modules.audio": "1.0.0", - "com.unity.modules.ui": "1.0.0", - "com.unity.modules.unitywebrequest": "1.0.0" - } - }, - "com.unity.modules.vr": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": { - 
"com.unity.modules.jsonserialize": "1.0.0", - "com.unity.modules.physics": "1.0.0", - "com.unity.modules.xr": "1.0.0" - } - }, - "com.unity.modules.wind": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": {} - }, - "com.unity.modules.xr": { - "version": "1.0.0", - "depth": 0, - "source": "builtin", - "dependencies": { - "com.unity.modules.physics": "1.0.0", - "com.unity.modules.jsonserialize": "1.0.0", - "com.unity.modules.subsystems": "1.0.0" - } - } - } -} diff --git a/TestProjects/UnityMCPTests/ProjectSettings/Packages/com.unity.testtools.codecoverage/Settings.json b/TestProjects/UnityMCPTests/ProjectSettings/Packages/com.unity.testtools.codecoverage/Settings.json index ad11087f..3c7b4c18 100644 --- a/TestProjects/UnityMCPTests/ProjectSettings/Packages/com.unity.testtools.codecoverage/Settings.json +++ b/TestProjects/UnityMCPTests/ProjectSettings/Packages/com.unity.testtools.codecoverage/Settings.json @@ -1,6 +1,4 @@ { - "m_Name": "Settings", - "m_Path": "ProjectSettings/Packages/com.unity.testtools.codecoverage/Settings.json", "m_Dictionary": { "m_DictionaryValues": [] } diff --git a/TestProjects/UnityMCPTests/ProjectSettings/boot.config b/TestProjects/UnityMCPTests/ProjectSettings/boot.config deleted file mode 100644 index e69de29b..00000000 diff --git a/UnityMcpBridge/Editor/Data/McpClients.cs b/UnityMcpBridge/Editor/Data/McpClients.cs index 7b150b7a..19e41284 100644 --- a/UnityMcpBridge/Editor/Data/McpClients.cs +++ b/UnityMcpBridge/Editor/Data/McpClients.cs @@ -19,6 +19,11 @@ public class McpClients ".cursor", "mcp.json" ), + macConfigPath = Path.Combine( + Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), + ".cursor", + "mcp.json" + ), linuxConfigPath = Path.Combine( Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), ".cursor", @@ -35,6 +40,10 @@ public class McpClients Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), ".claude.json" ), + macConfigPath = Path.Combine( + 
Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), + ".claude.json" + ), linuxConfigPath = Path.Combine( Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), ".claude.json" @@ -52,6 +61,12 @@ public class McpClients "windsurf", "mcp_config.json" ), + macConfigPath = Path.Combine( + Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), + ".codeium", + "windsurf", + "mcp_config.json" + ), linuxConfigPath = Path.Combine( Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), ".codeium", @@ -70,22 +85,21 @@ public class McpClients "Claude", "claude_desktop_config.json" ), - // For macOS, Claude Desktop stores config under ~/Library/Application Support/Claude - // For Linux, it remains under ~/.config/Claude - linuxConfigPath = RuntimeInformation.IsOSPlatform(OSPlatform.OSX) - ? Path.Combine( - Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), - "Library", - "Application Support", - "Claude", - "claude_desktop_config.json" - ) - : Path.Combine( - Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), - ".config", - "Claude", - "claude_desktop_config.json" - ), + + macConfigPath = Path.Combine( + Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), + "Library", + "Application Support", + "Claude", + "claude_desktop_config.json" + ), + linuxConfigPath = Path.Combine( + Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), + ".config", + "Claude", + "claude_desktop_config.json" + ), + mcpType = McpTypes.ClaudeDesktop, configStatus = "Not Configured", }, @@ -100,24 +114,23 @@ public class McpClients "User", "mcp.json" ), - // For macOS, VSCode stores user config under ~/Library/Application Support/Code/User - // For Linux, it remains under ~/.config/Code/User - linuxConfigPath = RuntimeInformation.IsOSPlatform(OSPlatform.OSX) - ? 
Path.Combine( - Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), - "Library", - "Application Support", - "Code", - "User", - "mcp.json" - ) - : Path.Combine( - Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), - ".config", - "Code", - "User", - "mcp.json" - ), + // macOS: ~/Library/Application Support/Code/User/mcp.json + macConfigPath = Path.Combine( + Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), + "Library", + "Application Support", + "Code", + "User", + "mcp.json" + ), + // Linux: ~/.config/Code/User/mcp.json + linuxConfigPath = Path.Combine( + Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), + ".config", + "Code", + "User", + "mcp.json" + ), mcpType = McpTypes.VSCode, configStatus = "Not Configured", }, @@ -131,6 +144,12 @@ public class McpClients "settings", "mcp.json" ), + macConfigPath = Path.Combine( + Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), + ".kiro", + "settings", + "mcp.json" + ), linuxConfigPath = Path.Combine( Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), ".kiro", diff --git a/UnityMcpBridge/Editor/Helpers/ConfigJsonBuilder.cs b/UnityMcpBridge/Editor/Helpers/ConfigJsonBuilder.cs index deb29708..5889e4f6 100644 --- a/UnityMcpBridge/Editor/Helpers/ConfigJsonBuilder.cs +++ b/UnityMcpBridge/Editor/Helpers/ConfigJsonBuilder.cs @@ -54,7 +54,7 @@ private static void PopulateUnityNode(JObject unity, string uvPath, string direc // For Cursor (non-VSCode) on macOS, prefer a no-spaces symlink path to avoid arg parsing issues in some runners string effectiveDir = directory; #if UNITY_EDITOR_OSX || UNITY_STANDALONE_OSX - bool isCursor = !isVSCode && (client == null || client.mcpType != Models.McpTypes.VSCode); + bool isCursor = !isVSCode && (client == null || client.mcpType != McpTypes.VSCode); if (isCursor && !string.IsNullOrEmpty(directory)) { // Replace canonical path segment with the symlink path if present @@ -65,7 +65,11 @@ private static void 
PopulateUnityNode(JObject unity, string uvPath, string direc // Normalize to full path style if (directory.Contains(canonical)) { - effectiveDir = directory.Replace(canonical, symlinkSeg); + var candidate = directory.Replace(canonical, symlinkSeg).Replace('\\', '/'); + if (System.IO.Directory.Exists(candidate)) + { + effectiveDir = candidate; + } } else { @@ -76,7 +80,11 @@ private static void PopulateUnityNode(JObject unity, string uvPath, string direc { string home = System.Environment.GetFolderPath(System.Environment.SpecialFolder.Personal) ?? string.Empty; string suffix = norm.Substring(idx + "/.local/share/".Length); // UnityMCP/... - effectiveDir = System.IO.Path.Combine(home, "Library", "AppSupport", suffix).Replace('\\', '/'); + string candidate = System.IO.Path.Combine(home, "Library", "AppSupport", suffix).Replace('\\', '/'); + if (System.IO.Directory.Exists(candidate)) + { + effectiveDir = candidate; + } } } } diff --git a/UnityMcpBridge/Editor/Helpers/PackageDetector.cs b/UnityMcpBridge/Editor/Helpers/PackageDetector.cs index 0a672003..d39685c2 100644 --- a/UnityMcpBridge/Editor/Helpers/PackageDetector.cs +++ b/UnityMcpBridge/Editor/Helpers/PackageDetector.cs @@ -25,19 +25,32 @@ static PackageDetector() if (!EditorPrefs.GetBool(key, false) || legacyPresent || canonicalMissing) { + // Marshal the entire flow to the main thread. EnsureServerInstalled may touch Unity APIs. 
             EditorApplication.delayCall += () =>
             {
+                string error = null;
+                System.Exception capturedEx = null;
                 try
                 {
+                    // Ensure any UnityEditor API usage inside runs on the main thread
                     ServerInstaller.EnsureServerInstalled();
                 }
                 catch (System.Exception ex)
                 {
-                    Debug.LogWarning("MCP for Unity: Auto-detect on load failed: " + ex.Message);
+                    error = ex.Message;
+                    capturedEx = ex;
                 }
-                finally
+
+                // Unity APIs must stay on main thread
+                try { EditorPrefs.SetBool(key, true); } catch { }
+                // Ensure prefs cleanup happens on main thread
+                try { EditorPrefs.DeleteKey("MCPForUnity.ServerSrc"); } catch { }
+                try { EditorPrefs.DeleteKey("MCPForUnity.PythonDirOverride"); } catch { }
+
+                if (!string.IsNullOrEmpty(error))
                 {
-                    EditorPrefs.SetBool(key, true);
+                    Debug.LogWarning($"MCP for Unity: Auto-detect on load failed: {capturedEx}");
+                    // Alternatively: Debug.LogException(capturedEx);
                 }
             };
         }
diff --git a/UnityMcpBridge/Editor/Helpers/Response.cs b/UnityMcpBridge/Editor/Helpers/Response.cs
index 5d5436d7..1a3bd520 100644
--- a/UnityMcpBridge/Editor/Helpers/Response.cs
+++ b/UnityMcpBridge/Editor/Helpers/Response.cs
@@ -35,10 +35,10 @@ public static object Success(string message, object data = null)
         /// <summary>
         /// Creates a standardized error response object.
         /// </summary>
-        /// <param name="errorMessage">A message describing the error.</param>
+        /// <param name="errorCodeOrMessage">A message describing the error.</param>
         /// <param name="data">Optional additional data (e.g., error details) to include.</param>
         /// <returns>An object representing the error response.</returns>
-        public static object Error(string errorMessage, object data = null)
+        public static object Error(string errorCodeOrMessage, object data = null)
         {
             if (data != null)
             {
@@ -46,13 +46,16 @@ public static object Error(string errorMessage, object data = null)
                 return new
                 {
                     success = false,
-                    error = errorMessage,
+                    // Preserve original behavior while adding a machine-parsable code field.
+                    // If callers pass a code string, it will be echoed in both code and error.
+                    code = errorCodeOrMessage,
+                    error = errorCodeOrMessage,
                     data = data,
                 };
             }
             else
             {
-                return new { success = false, error = errorMessage };
+                return new { success = false, code = errorCodeOrMessage, error = errorCodeOrMessage };
             }
         }
     }
diff --git a/UnityMcpBridge/Editor/MCPForUnityBridge.cs b/UnityMcpBridge/Editor/MCPForUnityBridge.cs
index 7d75908b..f90b2235 100644
--- a/UnityMcpBridge/Editor/MCPForUnityBridge.cs
+++ b/UnityMcpBridge/Editor/MCPForUnityBridge.cs
@@ -35,6 +35,8 @@ private static Dictionary<
         string,
         (string commandJson, TaskCompletionSource<string> tcs)
     > commandQueue = new();
     private static int currentUnityPort = 6400; // Dynamic port, starts with default
     private static bool isAutoConnectMode = false;
+    private const ulong MaxFrameBytes = 64UL * 1024 * 1024; // 64 MiB hard cap for framed payloads
+    private const int FrameIOTimeoutMs = 30000; // Per-read timeout to avoid stalled clients

     // Debug helpers
     private static bool IsDebugEnabled()
@@ -96,8 +98,9 @@ public static bool FolderExists(string path)
     static MCPForUnityBridge()
     {
-        // Skip bridge in headless/batch environments (CI/builds)
-        if (Application.isBatchMode)
+        // Skip bridge in headless/batch environments (CI/builds) unless explicitly allowed via env
+        // CI override: set UNITY_MCP_ALLOW_BATCH=1 to allow the bridge in batch mode
+        if (Application.isBatchMode && string.IsNullOrWhiteSpace(Environment.GetEnvironmentVariable("UNITY_MCP_ALLOW_BATCH")))
         {
             return;
         }
@@ -341,7 +344,7 @@ public static void Stop()
         // Mark as stopping early to avoid accept logging during disposal
         isRunning = false;
         // Mark heartbeat one last time before stopping
-        WriteHeartbeat(false);
+        WriteHeartbeat(false, "stopped");
         listener?.Stop();
         listener = null;
         EditorApplication.update -= ProcessCommands;
@@ -397,22 +400,50 @@ private static async Task HandleClientAsync(TcpClient client)
         using (client)
         using (NetworkStream stream = client.GetStream())
         {
-            byte[] buffer = new byte[8192];
+            // Framed I/O only; legacy mode removed
+            try
+            {
+                var ep =
client.Client?.RemoteEndPoint?.ToString() ?? "unknown"; + Debug.Log($"UNITY-MCP: Client connected {ep}"); + } + catch { } + // Strict framing: always require FRAMING=1 and frame all I/O + try + { + client.NoDelay = true; + } + catch { } + try + { + string handshake = "WELCOME UNITY-MCP 1 FRAMING=1\n"; + byte[] handshakeBytes = System.Text.Encoding.ASCII.GetBytes(handshake); + using var cts = new CancellationTokenSource(FrameIOTimeoutMs); +#if NETSTANDARD2_1 || NET6_0_OR_GREATER + await stream.WriteAsync(handshakeBytes.AsMemory(0, handshakeBytes.Length), cts.Token).ConfigureAwait(false); +#else + await stream.WriteAsync(handshakeBytes, 0, handshakeBytes.Length, cts.Token).ConfigureAwait(false); +#endif + Debug.Log("UNITY-MCP: Sent handshake FRAMING=1 (strict)"); + } + catch (Exception ex) + { + Debug.LogWarning($"UNITY-MCP: Handshake failed: {ex.Message}"); + return; // abort this client + } + while (isRunning) { try { - int bytesRead = await stream.ReadAsync(buffer, 0, buffer.Length); - if (bytesRead == 0) + // Strict framed mode only: enforced framed I/O for this connection + string commandText = await ReadFrameAsUtf8Async(stream, FrameIOTimeoutMs); + + try { - break; // Client disconnected + var preview = commandText.Length > 120 ? 
commandText.Substring(0, 120) + "…" : commandText;
+                        Debug.Log($"UNITY-MCP: recv framed: {preview}");
                     }
-
-                    string commandText = System.Text.Encoding.UTF8.GetString(
-                        buffer,
-                        0,
-                        bytesRead
-                    );
+                    catch { }

                     string commandId = Guid.NewGuid().ToString();
                     TaskCompletionSource<string> tcs = new();
@@ -424,7 +455,7 @@ private static async Task HandleClientAsync(TcpClient client)
                         /*lang=json,strict*/
                         "{\"status\":\"success\",\"result\":{\"message\":\"pong\"}}"
                     );
-                    await stream.WriteAsync(pingResponseBytes, 0, pingResponseBytes.Length);
+                    await WriteFrameAsync(stream, pingResponseBytes);
                     continue;
                 }
@@ -435,7 +466,7 @@ private static async Task HandleClientAsync(TcpClient client)
                 string response = await tcs.Task;
                 byte[] responseBytes = System.Text.Encoding.UTF8.GetBytes(response);
-                await stream.WriteAsync(responseBytes, 0, responseBytes.Length);
+                await WriteFrameAsync(stream, responseBytes);
             }
             catch (Exception ex)
             {
@@ -446,120 +477,240 @@
         }
     }

-        private static void ProcessCommands()
+        // Timeout-aware exact read helper with cancellation; avoids indefinite stalls and background task leaks
+        private static async System.Threading.Tasks.Task<byte[]> ReadExactAsync(NetworkStream stream, int count, int timeoutMs, CancellationToken cancel = default)
         {
-            List<string> processedIds = new();
-            lock (lockObj)
+            byte[] buffer = new byte[count];
+            int offset = 0;
+            var stopwatch = System.Diagnostics.Stopwatch.StartNew();
+
+            while (offset < count)
             {
-                // Periodic heartbeat while editor is idle/processing
-                double now = EditorApplication.timeSinceStartup;
-                if (now >= nextHeartbeatAt)
+                int remaining = count - offset;
+                int remainingTimeout = timeoutMs <= 0
+                    ?
Timeout.Infinite + : timeoutMs - (int)stopwatch.ElapsedMilliseconds; + + // If a finite timeout is configured and already elapsed, fail immediately + if (remainingTimeout != Timeout.Infinite && remainingTimeout <= 0) { - WriteHeartbeat(false); - nextHeartbeatAt = now + 0.5f; + throw new System.IO.IOException("Read timed out"); } - foreach ( - KeyValuePair< - string, - (string commandJson, TaskCompletionSource tcs) - > kvp in commandQueue.ToList() - ) + using var cts = CancellationTokenSource.CreateLinkedTokenSource(cancel); + if (remainingTimeout != Timeout.Infinite) { - string id = kvp.Key; - string commandText = kvp.Value.commandJson; - TaskCompletionSource tcs = kvp.Value.tcs; + cts.CancelAfter(remainingTimeout); + } - try + try + { +#if NETSTANDARD2_1 || NET6_0_OR_GREATER + int read = await stream.ReadAsync(buffer.AsMemory(offset, remaining), cts.Token).ConfigureAwait(false); +#else + int read = await stream.ReadAsync(buffer, offset, remaining, cts.Token).ConfigureAwait(false); +#endif + if (read == 0) { - // Special case handling - if (string.IsNullOrEmpty(commandText)) - { - var emptyResponse = new - { - status = "error", - error = "Empty command received", - }; - tcs.SetResult(JsonConvert.SerializeObject(emptyResponse)); - processedIds.Add(id); - continue; - } + throw new System.IO.IOException("Connection closed before reading expected bytes"); + } + offset += read; + } + catch (OperationCanceledException) when (!cancel.IsCancellationRequested) + { + throw new System.IO.IOException("Read timed out"); + } + } - // Trim the command text to remove any whitespace - commandText = commandText.Trim(); + return buffer; + } - // Non-JSON direct commands handling (like ping) - if (commandText == "ping") - { - var pingResponse = new - { - status = "success", - result = new { message = "pong" }, - }; - tcs.SetResult(JsonConvert.SerializeObject(pingResponse)); - processedIds.Add(id); - continue; - } + private static async System.Threading.Tasks.Task 
WriteFrameAsync(NetworkStream stream, byte[] payload)
+        {
+            using var cts = new CancellationTokenSource(FrameIOTimeoutMs);
+            await WriteFrameAsync(stream, payload, cts.Token);
+        }
-                    // Check if the command is valid JSON before attempting to deserialize
-                    if (!IsValidJson(commandText))
-                    {
-                        var invalidJsonResponse = new
-                        {
-                            status = "error",
-                            error = "Invalid JSON format",
-                            receivedText = commandText.Length > 50
-                                ? commandText[..50] + "..."
-                                : commandText,
-                        };
-                        tcs.SetResult(JsonConvert.SerializeObject(invalidJsonResponse));
-                        processedIds.Add(id);
-                        continue;
-                    }
+        private static async System.Threading.Tasks.Task WriteFrameAsync(NetworkStream stream, byte[] payload, CancellationToken cancel)
+        {
+            if (payload == null)
+            {
+                throw new System.ArgumentNullException(nameof(payload));
+            }
+            if ((ulong)payload.LongLength > MaxFrameBytes)
+            {
+                throw new System.IO.IOException($"Frame too large: {payload.LongLength}");
+            }
+            byte[] header = new byte[8];
+            WriteUInt64BigEndian(header, (ulong)payload.LongLength);
+#if NETSTANDARD2_1 || NET6_0_OR_GREATER
+            await stream.WriteAsync(header.AsMemory(0, header.Length), cancel).ConfigureAwait(false);
+            await stream.WriteAsync(payload.AsMemory(0, payload.Length), cancel).ConfigureAwait(false);
+#else
+            await stream.WriteAsync(header, 0, header.Length, cancel).ConfigureAwait(false);
+            await stream.WriteAsync(payload, 0, payload.Length, cancel).ConfigureAwait(false);
+#endif
+        }
-                    // Normal JSON command processing
-                    Command command = JsonConvert.DeserializeObject<Command>(commandText);
-
-                    if (command == null)
-                    {
-                        var nullCommandResponse = new
-                        {
-                            status = "error",
-                            error = "Command deserialized to null",
-                            details = "The command was valid JSON but could not be deserialized to a Command object",
-                        };
-                        tcs.SetResult(JsonConvert.SerializeObject(nullCommandResponse));
-                    }
-                    else
+        private static async System.Threading.Tasks.Task<string> ReadFrameAsUtf8Async(NetworkStream stream, int timeoutMs)
+        {
+            byte[] header = await
ReadExactAsync(stream, 8, timeoutMs);
+            ulong payloadLen = ReadUInt64BigEndian(header);
+            if (payloadLen > MaxFrameBytes)
+            {
+                throw new System.IO.IOException($"Invalid framed length: {payloadLen}");
+            }
+            if (payloadLen == 0UL)
+                throw new System.IO.IOException("Zero-length frames are not allowed");
+            if (payloadLen > int.MaxValue)
+            {
+                throw new System.IO.IOException("Frame too large for buffer");
+            }
+            int count = (int)payloadLen;
+            byte[] payload = await ReadExactAsync(stream, count, timeoutMs);
+            return System.Text.Encoding.UTF8.GetString(payload);
+        }
+
+        private static ulong ReadUInt64BigEndian(byte[] buffer)
+        {
+            if (buffer == null || buffer.Length < 8) return 0UL;
+            return ((ulong)buffer[0] << 56)
+                | ((ulong)buffer[1] << 48)
+                | ((ulong)buffer[2] << 40)
+                | ((ulong)buffer[3] << 32)
+                | ((ulong)buffer[4] << 24)
+                | ((ulong)buffer[5] << 16)
+                | ((ulong)buffer[6] << 8)
+                | buffer[7];
+        }
+
+        private static void WriteUInt64BigEndian(byte[] dest, ulong value)
+        {
+            if (dest == null || dest.Length < 8)
+            {
+                throw new System.ArgumentException("Destination buffer too small for UInt64");
+            }
+            dest[0] = (byte)(value >> 56);
+            dest[1] = (byte)(value >> 48);
+            dest[2] = (byte)(value >> 40);
+            dest[3] = (byte)(value >> 32);
+            dest[4] = (byte)(value >> 24);
+            dest[5] = (byte)(value >> 16);
+            dest[6] = (byte)(value >> 8);
+            dest[7] = (byte)(value);
+        }
+
+        private static void ProcessCommands()
+        {
+            // Heartbeat without holding the queue lock
+            double now = EditorApplication.timeSinceStartup;
+            if (now >= nextHeartbeatAt)
+            {
+                WriteHeartbeat(false);
+                nextHeartbeatAt = now + 0.5f;
+            }
+
+            // Snapshot under lock, then process outside to reduce contention
+            List<(string id, string text, TaskCompletionSource<string> tcs)> work;
+            lock (lockObj)
+            {
+                work = commandQueue
+                    .Select(kvp => (kvp.Key, kvp.Value.commandJson, kvp.Value.tcs))
+                    .ToList();
+            }
+
+            foreach (var item in work)
+            {
+                string id = item.id;
+                string commandText = item.text;
+                TaskCompletionSource<string> tcs =
item.tcs; + + try + { + // Special case handling + if (string.IsNullOrEmpty(commandText)) + { + var emptyResponse = new { - string responseJson = ExecuteCommand(command); - tcs.SetResult(responseJson); - } + status = "error", + error = "Empty command received", + }; + tcs.SetResult(JsonConvert.SerializeObject(emptyResponse)); + // Remove quickly under lock + lock (lockObj) { commandQueue.Remove(id); } + continue; } - catch (Exception ex) + + // Trim the command text to remove any whitespace + commandText = commandText.Trim(); + + // Non-JSON direct commands handling (like ping) + if (commandText == "ping") { - Debug.LogError($"Error processing command: {ex.Message}\n{ex.StackTrace}"); + var pingResponse = new + { + status = "success", + result = new { message = "pong" }, + }; + tcs.SetResult(JsonConvert.SerializeObject(pingResponse)); + lock (lockObj) { commandQueue.Remove(id); } + continue; + } - var response = new + // Check if the command is valid JSON before attempting to deserialize + if (!IsValidJson(commandText)) + { + var invalidJsonResponse = new { status = "error", - error = ex.Message, - commandType = "Unknown (error during processing)", - receivedText = commandText?.Length > 50 + error = "Invalid JSON format", + receivedText = commandText.Length > 50 ? commandText[..50] + "..." 
: commandText,
+                    };
-                    string responseJson = JsonConvert.SerializeObject(response);
-                    tcs.SetResult(responseJson);
+                    tcs.SetResult(JsonConvert.SerializeObject(invalidJsonResponse));
+                    lock (lockObj) { commandQueue.Remove(id); }
+                    continue;
                 }
-                processedIds.Add(id);
-            }
+                // Normal JSON command processing
+                Command command = JsonConvert.DeserializeObject<Command>(commandText);
-            foreach (string id in processedIds)
+                if (command == null)
+                {
+                    var nullCommandResponse = new
+                    {
+                        status = "error",
+                        error = "Command deserialized to null",
+                        details = "The command was valid JSON but could not be deserialized to a Command object",
+                    };
+                    tcs.SetResult(JsonConvert.SerializeObject(nullCommandResponse));
+                }
+                else
+                {
+                    string responseJson = ExecuteCommand(command);
+                    tcs.SetResult(responseJson);
+                }
+            }
+            catch (Exception ex)
             {
-                commandQueue.Remove(id);
+                Debug.LogError($"Error processing command: {ex.Message}\n{ex.StackTrace}");
+
+                var response = new
+                {
+                    status = "error",
+                    error = ex.Message,
+                    commandType = "Unknown (error during processing)",
+                    receivedText = commandText?.Length > 50
+                        ? commandText[..50] + "..."
+ : commandText, + }; + string responseJson = JsonConvert.SerializeObject(response); + tcs.SetResult(responseJson); } + + // Remove quickly under lock + lock (lockObj) { commandQueue.Remove(id); } } } @@ -709,7 +860,12 @@ private static void WriteHeartbeat(bool reloading, string reason = null) { try { - string dir = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), ".unity-mcp"); + // Allow override of status directory (useful in CI/containers) + string dir = Environment.GetEnvironmentVariable("UNITY_MCP_STATUS_DIR"); + if (string.IsNullOrWhiteSpace(dir)) + { + dir = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.UserProfile), ".unity-mcp"); + } Directory.CreateDirectory(dir); string filePath = Path.Combine(dir, $"unity-mcp-status-{ComputeProjectHash(Application.dataPath)}.json"); var payload = new diff --git a/UnityMcpBridge/Editor/Models/McpClient.cs b/UnityMcpBridge/Editor/Models/McpClient.cs index bbc15da7..a32f7f59 100644 --- a/UnityMcpBridge/Editor/Models/McpClient.cs +++ b/UnityMcpBridge/Editor/Models/McpClient.cs @@ -4,8 +4,8 @@ public class McpClient { public string name; public string windowsConfigPath; + public string macConfigPath; public string linuxConfigPath; - public string macConfigPath; // optional macOS-specific config path public McpTypes mcpType; public string configStatus; public McpStatus status = McpStatus.NotConfigured; diff --git a/UnityMcpBridge/Editor/Tools/ManageEditor.cs b/UnityMcpBridge/Editor/Tools/ManageEditor.cs index e99d1b40..7ed6300b 100644 --- a/UnityMcpBridge/Editor/Tools/ManageEditor.cs +++ b/UnityMcpBridge/Editor/Tools/ManageEditor.cs @@ -1,6 +1,7 @@ using System; using System.Collections.Generic; using System.Linq; +using System.IO; using Newtonsoft.Json.Linq; using UnityEditor; using UnityEditorInternal; // Required for tag management @@ -89,6 +90,8 @@ public static object HandleCommand(JObject @params) // Editor State/Info case "get_state": return GetEditorState(); + case 
"get_project_root": + return GetProjectRoot(); case "get_windows": return GetEditorWindows(); case "get_active_tool": @@ -137,7 +140,7 @@ public static object HandleCommand(JObject @params) default: return Response.Error( - $"Unknown action: '{action}'. Supported actions include play, pause, stop, get_state, get_windows, get_active_tool, get_selection, set_active_tool, add_tag, remove_tag, get_tags, add_layer, remove_layer, get_layers." + $"Unknown action: '{action}'. Supported actions include play, pause, stop, get_state, get_project_root, get_windows, get_active_tool, get_selection, set_active_tool, add_tag, remove_tag, get_tags, add_layer, remove_layer, get_layers." ); } } @@ -165,6 +168,25 @@ private static object GetEditorState() } } + private static object GetProjectRoot() + { + try + { + // Application.dataPath points to /Assets + string assetsPath = Application.dataPath.Replace('\\', '/'); + string projectRoot = Directory.GetParent(assetsPath)?.FullName.Replace('\\', '/'); + if (string.IsNullOrEmpty(projectRoot)) + { + return Response.Error("Could not determine project root from Application.dataPath"); + } + return Response.Success("Project root resolved.", new { projectRoot }); + } + catch (Exception e) + { + return Response.Error($"Error getting project root: {e.Message}"); + } + } + private static object GetEditorWindows() { try diff --git a/UnityMcpBridge/Editor/Tools/ManageScript.cs b/UnityMcpBridge/Editor/Tools/ManageScript.cs index 274f84d1..0337f74f 100644 --- a/UnityMcpBridge/Editor/Tools/ManageScript.cs +++ b/UnityMcpBridge/Editor/Tools/ManageScript.cs @@ -1,15 +1,19 @@ using System; using System.IO; using System.Linq; +using System.Collections.Generic; using System.Text.RegularExpressions; using Newtonsoft.Json.Linq; using UnityEditor; using UnityEngine; using MCPForUnity.Editor.Helpers; +using System.Threading; +using System.Security.Cryptography; #if USE_ROSLYN using Microsoft.CodeAnalysis; using Microsoft.CodeAnalysis.CSharp; +using 
Microsoft.CodeAnalysis.Formatting; #endif #if UNITY_EDITOR @@ -47,6 +51,60 @@ namespace MCPForUnity.Editor.Tools /// public static class ManageScript { + /// + /// Resolves a directory under Assets/, preventing traversal and escaping. + /// Returns fullPathDir on disk and canonical 'Assets/...' relative path. + /// + private static bool TryResolveUnderAssets(string relDir, out string fullPathDir, out string relPathSafe) + { + string assets = Application.dataPath.Replace('\\', '/'); + + // Normalize caller path: allow both "Scripts/..." and "Assets/Scripts/..." + string rel = (relDir ?? "Scripts").Replace('\\', '/').Trim(); + if (string.IsNullOrEmpty(rel)) rel = "Scripts"; + if (rel.StartsWith("Assets/", StringComparison.OrdinalIgnoreCase)) rel = rel.Substring(7); + rel = rel.TrimStart('/'); + + string targetDir = Path.Combine(assets, rel).Replace('\\', '/'); + string full = Path.GetFullPath(targetDir).Replace('\\', '/'); + + bool underAssets = full.StartsWith(assets + "/", StringComparison.OrdinalIgnoreCase) + || string.Equals(full, assets, StringComparison.OrdinalIgnoreCase); + if (!underAssets) + { + fullPathDir = null; + relPathSafe = null; + return false; + } + + // Best-effort symlink guard: if the directory OR ANY ANCESTOR (up to Assets/) is a reparse point/symlink, reject + try + { + var di = new DirectoryInfo(full); + while (di != null) + { + if (di.Exists && (di.Attributes & FileAttributes.ReparsePoint) != 0) + { + fullPathDir = null; + relPathSafe = null; + return false; + } + var atAssets = string.Equals( + di.FullName.Replace('\\','/'), + assets, + StringComparison.OrdinalIgnoreCase + ); + if (atAssets) break; + di = di.Parent; + } + } + catch { /* best effort; proceed */ } + + fullPathDir = full; + string tail = full.Length > assets.Length ? full.Substring(assets.Length).TrimStart('/') : string.Empty; + relPathSafe = ("Assets/" + tail).TrimEnd('/'); + return true; + } /// /// Main handler for script management actions. 
/// @@ -89,36 +147,23 @@ public static object HandleCommand(JObject @params) return Response.Error("Name parameter is required."); } // Basic name validation (alphanumeric, underscores, cannot start with number) - if (!Regex.IsMatch(name, @"^[a-zA-Z_][a-zA-Z0-9_]*$")) + if (!Regex.IsMatch(name, @"^[a-zA-Z_][a-zA-Z0-9_]*$", RegexOptions.CultureInvariant, TimeSpan.FromSeconds(2))) { return Response.Error( $"Invalid script name: '{name}'. Use only letters, numbers, underscores, and don't start with a number." ); } - // Ensure path is relative to Assets/, removing any leading "Assets/" - // Set default directory to "Scripts" if path is not provided - string relativeDir = path ?? "Scripts"; // Default to "Scripts" if path is null - if (!string.IsNullOrEmpty(relativeDir)) - { - relativeDir = relativeDir.Replace('\\', '/').Trim('/'); - if (relativeDir.StartsWith("Assets/", StringComparison.OrdinalIgnoreCase)) - { - relativeDir = relativeDir.Substring("Assets/".Length).TrimStart('/'); - } - } - // Handle empty string case explicitly after processing - if (string.IsNullOrEmpty(relativeDir)) + // Resolve and harden target directory under Assets/ + if (!TryResolveUnderAssets(path, out string fullPathDir, out string relPathSafeDir)) { - relativeDir = "Scripts"; // Ensure default if path was provided as "" or only "/" or "Assets/" + return Response.Error($"Invalid path. Target directory must be within 'Assets/'. Provided: '{(path ?? 
"(null)")}'"); } - // Construct paths + // Construct file paths string scriptFileName = $"{name}.cs"; - string fullPathDir = Path.Combine(Application.dataPath, relativeDir); // Application.dataPath ends in "Assets" string fullPath = Path.Combine(fullPathDir, scriptFileName); - string relativePath = Path.Combine("Assets", relativeDir, scriptFileName) - .Replace('\\', '/'); // Ensure "Assets/" prefix and forward slashes + string relativePath = Path.Combine(relPathSafeDir, scriptFileName).Replace('\\', '/'); // Ensure the target directory exists for create/update if (action == "create" || action == "update") @@ -148,14 +193,92 @@ public static object HandleCommand(JObject @params) namespaceName ); case "read": + Debug.LogWarning("manage_script.read is deprecated; prefer resources/read. Serving read for backward compatibility."); return ReadScript(fullPath, relativePath); case "update": + Debug.LogWarning("manage_script.update is deprecated; prefer apply_text_edits. Serving update for backward compatibility."); return UpdateScript(fullPath, relativePath, name, contents); case "delete": return DeleteScript(fullPath, relativePath); + case "apply_text_edits": + { + var textEdits = @params["edits"] as JArray; + string precondition = @params["precondition_sha256"]?.ToString(); + // Respect optional options + string refreshOpt = @params["options"]?["refresh"]?.ToString()?.ToLowerInvariant(); + string validateOpt = @params["options"]?["validate"]?.ToString()?.ToLowerInvariant(); + return ApplyTextEdits(fullPath, relativePath, name, textEdits, precondition, refreshOpt, validateOpt); + } + case "validate": + { + string level = @params["level"]?.ToString()?.ToLowerInvariant() ?? 
"standard"; + var chosen = level switch + { + "basic" => ValidationLevel.Basic, + "standard" => ValidationLevel.Standard, + "strict" => ValidationLevel.Strict, + "comprehensive" => ValidationLevel.Comprehensive, + _ => ValidationLevel.Standard + }; + string fileText; + try { fileText = File.ReadAllText(fullPath); } + catch (Exception ex) { return Response.Error($"Failed to read script: {ex.Message}"); } + + bool ok = ValidateScriptSyntax(fileText, chosen, out string[] diagsRaw); + var diags = (diagsRaw ?? Array.Empty()).Select(s => + { + var m = Regex.Match( + s, + @"^(ERROR|WARNING|INFO): (.*?)(?: \(Line (\d+)\))?$", + RegexOptions.CultureInvariant | RegexOptions.Multiline, + TimeSpan.FromMilliseconds(250) + ); + string severity = m.Success ? m.Groups[1].Value.ToLowerInvariant() : "info"; + string message = m.Success ? m.Groups[2].Value : s; + int lineNum = m.Success && int.TryParse(m.Groups[3].Value, out var l) ? l : 0; + return new { line = lineNum, col = 0, severity, message }; + }).ToArray(); + + var result = new { diagnostics = diags }; + return ok ? Response.Success("Validation completed.", result) + : Response.Error("Validation failed.", result); + } + case "edit": + Debug.LogWarning("manage_script.edit is deprecated; prefer apply_text_edits. Serving structured edit for backward compatibility."); + var structEdits = @params["edits"] as JArray; + var options = @params["options"] as JObject; + return EditScript(fullPath, relativePath, name, structEdits, options); + case "get_sha": + { + try + { + if (!File.Exists(fullPath)) + return Response.Error($"Script not found at '{relativePath}'."); + + string text = File.ReadAllText(fullPath); + string sha = ComputeSha256(text); + var fi = new FileInfo(fullPath); + long lengthBytes; + try { lengthBytes = new System.Text.UTF8Encoding(encoderShouldEmitUTF8Identifier: false).GetByteCount(text); } + catch { lengthBytes = fi.Exists ? 
fi.Length : 0; } + var data = new + { + uri = $"unity://path/{relativePath}", + path = relativePath, + sha256 = sha, + lengthBytes, + lastModifiedUtc = fi.Exists ? fi.LastWriteTimeUtc.ToString("o") : string.Empty + }; + return Response.Success($"SHA computed for '{relativePath}'.", data); + } + catch (Exception ex) + { + return Response.Error($"Failed to compute SHA: {ex.Message}"); + } + } default: return Response.Error( - $"Unknown action: '{action}'. Valid actions are: create, read, update, delete." + $"Unknown action: '{action}'. Valid actions are: create, delete, apply_text_edits, validate, read (deprecated), update (deprecated), edit (deprecated)." ); } } @@ -206,8 +329,7 @@ string namespaceName bool isValid = ValidateScriptSyntax(contents, validationLevel, out string[] validationErrors); if (!isValid) { - string errorMessage = "Script validation failed:\n" + string.Join("\n", validationErrors); - return Response.Error(errorMessage); + return Response.Error("validation_failed", new { status = "validation_failed", diagnostics = validationErrors ?? 
Array.Empty() }); } else if (validationErrors != null && validationErrors.Length > 0) { @@ -217,13 +339,29 @@ string namespaceName try { - File.WriteAllText(fullPath, contents, new System.Text.UTF8Encoding(false)); - AssetDatabase.ImportAsset(relativePath); - AssetDatabase.Refresh(); // Ensure Unity recognizes the new script - return Response.Success( + // Atomic create without BOM; schedule refresh after reply + var enc = new System.Text.UTF8Encoding(encoderShouldEmitUTF8Identifier: false); + var tmp = fullPath + ".tmp"; + File.WriteAllText(tmp, contents, enc); + try + { + File.Move(tmp, fullPath); + } + catch (IOException) + { + File.Copy(tmp, fullPath, overwrite: true); + try { File.Delete(tmp); } catch { } + } + + var uri = $"unity://path/{relativePath}"; + var ok = Response.Success( $"Script '{name}.cs' created successfully at '{relativePath}'.", - new { path = relativePath } + new { uri, scheduledRefresh = true } ); + + // Schedule heavy work AFTER replying + ManageScriptRefreshHelpers.ScheduleScriptRefresh(relativePath); + return ok; } catch (Exception e) { @@ -244,8 +382,10 @@ private static object ReadScript(string fullPath, string relativePath) // Return both normal and encoded contents for larger files bool isLarge = contents.Length > 10000; // If content is large, include encoded version + var uri = $"unity://path/{relativePath}"; var responseData = new { + uri, path = relativePath, contents = contents, // For large files, also include base64-encoded version @@ -287,8 +427,7 @@ string contents bool isValid = ValidateScriptSyntax(contents, validationLevel, out string[] validationErrors); if (!isValid) { - string errorMessage = "Script validation failed:\n" + string.Join("\n", validationErrors); - return Response.Error(errorMessage); + return Response.Error("validation_failed", new { status = "validation_failed", diagnostics = validationErrors ?? 
Array.Empty() }); } else if (validationErrors != null && validationErrors.Length > 0) { @@ -298,13 +437,41 @@ string contents try { - File.WriteAllText(fullPath, contents, new System.Text.UTF8Encoding(false)); - AssetDatabase.ImportAsset(relativePath); // Re-import to reflect changes - AssetDatabase.Refresh(); - return Response.Success( + // Safe write with atomic replace when available, without BOM + var encoding = new System.Text.UTF8Encoding(encoderShouldEmitUTF8Identifier: false); + string tempPath = fullPath + ".tmp"; + File.WriteAllText(tempPath, contents, encoding); + + string backupPath = fullPath + ".bak"; + try + { + File.Replace(tempPath, fullPath, backupPath); + try { if (File.Exists(backupPath)) File.Delete(backupPath); } catch { } + } + catch (PlatformNotSupportedException) + { + File.Copy(tempPath, fullPath, true); + try { File.Delete(tempPath); } catch { } + try { if (File.Exists(backupPath)) File.Delete(backupPath); } catch { } + } + catch (IOException) + { + File.Copy(tempPath, fullPath, true); + try { File.Delete(tempPath); } catch { } + try { if (File.Exists(backupPath)) File.Delete(backupPath); } catch { } + } + + // Prepare success response BEFORE any operation that can trigger a domain reload + var uri = $"unity://path/{relativePath}"; + var ok = Response.Success( $"Script '{name}.cs' updated successfully at '{relativePath}'.", - new { path = relativePath } + new { uri, path = relativePath, scheduledRefresh = true } ); + + // Schedule a debounced import/compile on next editor tick to avoid stalling the reply + ManageScriptRefreshHelpers.ScheduleScriptRefresh(relativePath); + + return ok; } catch (Exception e) { @@ -312,6 +479,467 @@ string contents } } + /// + /// Apply simple text edits specified by line/column ranges. Applies transactionally and validates result. 
+        ///
+        private const int MaxEditPayloadBytes = 64 * 1024;
+
+        private static object ApplyTextEdits(
+            string fullPath,
+            string relativePath,
+            string name,
+            JArray edits,
+            string preconditionSha256,
+            string refreshModeFromCaller = null,
+            string validateMode = null)
+        {
+            if (!File.Exists(fullPath))
+                return Response.Error($"Script not found at '{relativePath}'.");
+            // Refuse edits if the target or any ancestor is a symlink
+            try
+            {
+                var di = new DirectoryInfo(Path.GetDirectoryName(fullPath) ?? "");
+                while (di != null && !string.Equals(di.FullName.Replace('\\','/'), Application.dataPath.Replace('\\','/'), StringComparison.OrdinalIgnoreCase))
+                {
+                    if (di.Exists && (di.Attributes & FileAttributes.ReparsePoint) != 0)
+                        return Response.Error("Refusing to edit a symlinked script path.");
+                    di = di.Parent;
+                }
+            }
+            catch
+            {
+                // If checking attributes fails, proceed without the symlink guard
+            }
+            if (edits == null || edits.Count == 0)
+                return Response.Error("No edits provided.");
+
+            string original;
+            try { original = File.ReadAllText(fullPath); }
+            catch (Exception ex) { return Response.Error($"Failed to read script: {ex.Message}"); }
+
+            // Require precondition to avoid drift on large files
+            string currentSha = ComputeSha256(original);
+            if (string.IsNullOrEmpty(preconditionSha256))
+                return Response.Error("precondition_required", new { status = "precondition_required", current_sha256 = currentSha });
+            if (!preconditionSha256.Equals(currentSha, StringComparison.OrdinalIgnoreCase))
+                return Response.Error("stale_file", new { status = "stale_file", expected_sha256 = preconditionSha256, current_sha256 = currentSha });
+
+            // Convert edits to absolute index ranges
+            var spans = new List<(int start, int end, string text)>();
+            long totalBytes = 0;
+            foreach (var e in edits)
+            {
+                try
+                {
+                    int sl = Math.Max(1, e.Value<int>("startLine"));
+                    int sc = Math.Max(1, e.Value<int>("startCol"));
+                    int el = Math.Max(1, e.Value<int>("endLine"));
+                    int ec = Math.Max(1,
e.Value<int>("endCol"));
+                    string newText = e.Value<string>("newText") ?? string.Empty;
+
+                    if (!TryIndexFromLineCol(original, sl, sc, out int sidx))
+                        return Response.Error($"apply_text_edits: start out of range (line {sl}, col {sc})");
+                    if (!TryIndexFromLineCol(original, el, ec, out int eidx))
+                        return Response.Error($"apply_text_edits: end out of range (line {el}, col {ec})");
+                    if (eidx < sidx) (sidx, eidx) = (eidx, sidx);
+
+                    spans.Add((sidx, eidx, newText));
+                    checked
+                    {
+                        totalBytes += System.Text.Encoding.UTF8.GetByteCount(newText);
+                    }
+                }
+                catch (Exception ex)
+                {
+                    return Response.Error($"Invalid edit payload: {ex.Message}");
+                }
+            }
+
+            // Header guard: refuse edits that touch before the first 'using ' directive (after optional BOM) to prevent file corruption
+            int headerBoundary = (original.Length > 0 && original[0] == '\uFEFF') ? 1 : 0; // skip BOM once if present
+            // Find first top-level using (supports alias, static, and dotted namespaces)
+            var mUsing = System.Text.RegularExpressions.Regex.Match(
+                original,
+                @"(?m)^\s*using\s+(?:static\s+)?(?:[A-Za-z_]\w*\s*=\s*)?[A-Za-z_]\w*(?:\.[A-Za-z_]\w*)*\s*;",
+                System.Text.RegularExpressions.RegexOptions.CultureInvariant,
+                TimeSpan.FromSeconds(2)
+            );
+            if (mUsing.Success)
+            {
+                headerBoundary = Math.Min(Math.Max(headerBoundary, mUsing.Index), original.Length);
+            }
+            foreach (var sp in spans)
+            {
+                if (sp.start < headerBoundary)
+                {
+                    return Response.Error("using_guard", new { status = "using_guard", hint = "Refusing to edit before the first 'using'. Use anchor_insert near a method or a structured edit."
}); + } + } + + // Attempt auto-upgrade: if a single edit targets a method header/body, re-route as structured replace_method + if (spans.Count == 1) + { + var sp = spans[0]; + // Heuristic: around the start of the edit, try to match a method header in original + int searchStart = Math.Max(0, sp.start - 200); + int searchEnd = Math.Min(original.Length, sp.start + 200); + string slice = original.Substring(searchStart, searchEnd - searchStart); + var rx = new System.Text.RegularExpressions.Regex(@"(?m)^[\t ]*(?:\[[^\]]+\][\t ]*)*[\t ]*(?:public|private|protected|internal|static|virtual|override|sealed|async|extern|unsafe|new|partial)[\s\S]*?\b([A-Za-z_][A-Za-z0-9_]*)\s*\("); + var mh = rx.Match(slice); + if (mh.Success) + { + string methodName = mh.Groups[1].Value; + // Find class span containing the edit + if (TryComputeClassSpan(original, name, null, out var clsStart, out var clsLen, out _)) + { + if (TryComputeMethodSpan(original, clsStart, clsLen, methodName, null, null, null, out var mStart, out var mLen, out _)) + { + // If the edit overlaps the method span significantly, treat as replace_method + if (sp.start <= mStart + 2 && sp.end >= mStart + 1) + { + var structEdits = new JArray(); + + // Apply the edit to get a candidate string, then recompute method span on the edited text + string candidate = original.Remove(sp.start, sp.end - sp.start).Insert(sp.start, sp.text ?? string.Empty); + string replacementText; + if (TryComputeClassSpan(candidate, name, null, out var cls2Start, out var cls2Len, out _) + && TryComputeMethodSpan(candidate, cls2Start, cls2Len, methodName, null, null, null, out var m2Start, out var m2Len, out _)) + { + replacementText = candidate.Substring(m2Start, m2Len); + } + else + { + // Fallback: adjust method start by the net delta if the edit was before the method + int delta = (sp.text?.Length ?? 0) - (sp.end - sp.start); + int adjustedStart = mStart + (sp.start <= mStart ? 
delta : 0); + adjustedStart = Math.Max(0, Math.Min(adjustedStart, candidate.Length)); + + // If the edit was within the original method span, adjust the length by the delta within-method + int withinMethodDelta = 0; + if (sp.start >= mStart && sp.start <= mStart + mLen) + { + withinMethodDelta = delta; + } + int adjustedLen = mLen + withinMethodDelta; + adjustedLen = Math.Max(0, Math.Min(candidate.Length - adjustedStart, adjustedLen)); + replacementText = candidate.Substring(adjustedStart, adjustedLen); + } + + var op = new JObject + { + ["mode"] = "replace_method", + ["className"] = name, + ["methodName"] = methodName, + ["replacement"] = replacementText + }; + structEdits.Add(op); + // Reuse structured path + return EditScript(fullPath, relativePath, name, structEdits, new JObject{ ["refresh"] = "immediate", ["validate"] = "standard" }); + } + } + } + } + } + + if (totalBytes > MaxEditPayloadBytes) + { + return Response.Error("too_large", new { status = "too_large", limitBytes = MaxEditPayloadBytes, hint = "split into smaller edits" }); + } + + // Ensure non-overlap and apply from back to front + spans = spans.OrderByDescending(t => t.start).ToList(); + for (int i = 1; i < spans.Count; i++) + { + if (spans[i].end > spans[i - 1].start) + { + var conflict = new[] { new { startA = spans[i].start, endA = spans[i].end, startB = spans[i - 1].start, endB = spans[i - 1].end } }; + return Response.Error("overlap", new { status = "overlap", conflicts = conflict, hint = "Sort ranges descending by start and compute from the same snapshot." }); + } + } + + string working = original; + bool relaxed = string.Equals(validateMode, "relaxed", StringComparison.OrdinalIgnoreCase); + bool syntaxOnly = string.Equals(validateMode, "syntax", StringComparison.OrdinalIgnoreCase); + foreach (var sp in spans) + { + string next = working.Remove(sp.start, sp.end - sp.start).Insert(sp.start, sp.text ?? 
string.Empty); + if (relaxed) + { + // Scoped balance check: validate just around the changed region to avoid false positives + if (!CheckScopedBalance(next, Math.Max(0, sp.start - 500), Math.Min(next.Length, sp.start + (sp.text?.Length ?? 0) + 500))) + { + return Response.Error("unbalanced_braces", new { status = "unbalanced_braces", line = 0, expected = "{}()[] (scoped)", hint = "Use standard validation or shrink the edit range." }); + } + } + working = next; + } + + // No-op guard: if resulting text is identical, avoid writes and return explicit no-op + if (string.Equals(working, original, StringComparison.Ordinal)) + { + string noChangeSha = ComputeSha256(original); + return Response.Success( + $"No-op: contents unchanged for '{relativePath}'.", + new + { + uri = $"unity://path/{relativePath}", + path = relativePath, + editsApplied = 0, + no_op = true, + sha256 = noChangeSha, + evidence = new { reason = "identical_content" } + } + ); + } + + if (!relaxed && !CheckBalancedDelimiters(working, out int line, out char expected)) + { + int startLine = Math.Max(1, line - 5); + int endLine = line + 5; + string hint = $"unbalanced_braces at line {line}. 
Call resources/read for lines {startLine}-{endLine} and resend a smaller apply_text_edits that restores balance."; + return Response.Error(hint, new { status = "unbalanced_braces", line, expected = expected.ToString(), evidenceWindow = new { startLine, endLine } }); + } + +#if USE_ROSLYN + if (!syntaxOnly) + { + var tree = CSharpSyntaxTree.ParseText(working); + var diagnostics = tree.GetDiagnostics().Where(d => d.Severity == DiagnosticSeverity.Error).Take(3) + .Select(d => new { + line = d.Location.GetLineSpan().StartLinePosition.Line + 1, + col = d.Location.GetLineSpan().StartLinePosition.Character + 1, + code = d.Id, + message = d.GetMessage() + }).ToArray(); + if (diagnostics.Length > 0) + { + int firstLine = diagnostics[0].line; + int startLineRos = Math.Max(1, firstLine - 5); + int endLineRos = firstLine + 5; + return Response.Error("syntax_error", new { status = "syntax_error", diagnostics, evidenceWindow = new { startLine = startLineRos, endLine = endLineRos } }); + } + + // Optional formatting + try + { + var root = tree.GetRoot(); + var workspace = new AdhocWorkspace(); + root = Microsoft.CodeAnalysis.Formatting.Formatter.Format(root, workspace); + working = root.ToFullString(); + } + catch { } + } +#endif + + string newSha = ComputeSha256(working); + + // Atomic write and schedule refresh + try + { + var enc = new System.Text.UTF8Encoding(encoderShouldEmitUTF8Identifier: false); + var tmp = fullPath + ".tmp"; + File.WriteAllText(tmp, working, enc); + string backup = fullPath + ".bak"; + try + { + File.Replace(tmp, fullPath, backup); + try { if (File.Exists(backup)) File.Delete(backup); } catch { /* ignore */ } + } + catch (PlatformNotSupportedException) + { + File.Copy(tmp, fullPath, true); + try { File.Delete(tmp); } catch { } + try { if (File.Exists(backup)) File.Delete(backup); } catch { } + } + catch (IOException) + { + File.Copy(tmp, fullPath, true); + try { File.Delete(tmp); } catch { } + try { if (File.Exists(backup)) File.Delete(backup); } catch { 
} + } + + // Respect refresh mode: immediate vs debounced + bool immediate = string.Equals(refreshModeFromCaller, "immediate", StringComparison.OrdinalIgnoreCase) || + string.Equals(refreshModeFromCaller, "sync", StringComparison.OrdinalIgnoreCase); + if (immediate) + { + EditorApplication.delayCall += () => + { + AssetDatabase.ImportAsset( + relativePath, + ImportAssetOptions.ForceSynchronousImport | ImportAssetOptions.ForceUpdate + ); +#if UNITY_EDITOR + UnityEditor.Compilation.CompilationPipeline.RequestScriptCompilation(); +#endif + }; + } + else + { + ManageScriptRefreshHelpers.ScheduleScriptRefresh(relativePath); + } + + return Response.Success( + $"Applied {spans.Count} text edit(s) to '{relativePath}'.", + new + { + uri = $"unity://path/{relativePath}", + path = relativePath, + editsApplied = spans.Count, + sha256 = newSha + } + ); + } + catch (Exception ex) + { + return Response.Error($"Failed to write edits: {ex.Message}"); + } + } + + private static bool TryIndexFromLineCol(string text, int line1, int col1, out int index) + { + // 1-based line/col to absolute index (0-based), col positions are counted in code points + int line = 1, col = 1; + for (int i = 0; i <= text.Length; i++) + { + if (line == line1 && col == col1) + { + index = i; + return true; + } + if (i == text.Length) break; + char c = text[i]; + if (c == '\r') + { + // Treat CRLF as a single newline; skip the LF if present + if (i + 1 < text.Length && text[i + 1] == '\n') + i++; + line++; + col = 1; + } + else if (c == '\n') + { + line++; + col = 1; + } + else + { + col++; + } + } + index = -1; + return false; + } + + private static string ComputeSha256(string contents) + { + using (var sha = SHA256.Create()) + { + var bytes = System.Text.Encoding.UTF8.GetBytes(contents); + var hash = sha.ComputeHash(bytes); + return BitConverter.ToString(hash).Replace("-", string.Empty).ToLowerInvariant(); + } + } + + private static bool CheckBalancedDelimiters(string text, out int line, out char expected) + 
{
+            var braceStack = new Stack<int>();
+            var parenStack = new Stack<int>();
+            var bracketStack = new Stack<int>();
+            bool inString = false, inChar = false, inSingle = false, inMulti = false, escape = false;
+            line = 1; expected = '\0';
+
+            for (int i = 0; i < text.Length; i++)
+            {
+                char c = text[i];
+                char next = i + 1 < text.Length ? text[i + 1] : '\0';
+
+                if (c == '\n') { line++; if (inSingle) inSingle = false; }
+
+                if (escape) { escape = false; continue; }
+
+                if (inString)
+                {
+                    if (c == '\\') { escape = true; }
+                    else if (c == '"') inString = false;
+                    continue;
+                }
+                if (inChar)
+                {
+                    if (c == '\\') { escape = true; }
+                    else if (c == '\'') inChar = false;
+                    continue;
+                }
+                if (inSingle) continue;
+                if (inMulti)
+                {
+                    if (c == '*' && next == '/') { inMulti = false; i++; }
+                    continue;
+                }
+
+                if (c == '"') { inString = true; continue; }
+                if (c == '\'') { inChar = true; continue; }
+                if (c == '/' && next == '/') { inSingle = true; i++; continue; }
+                if (c == '/' && next == '*') { inMulti = true; i++; continue; }
+
+                switch (c)
+                {
+                    case '{': braceStack.Push(line); break;
+                    case '}':
+                        if (braceStack.Count == 0) { expected = '{'; return false; }
+                        braceStack.Pop();
+                        break;
+                    case '(': parenStack.Push(line); break;
+                    case ')':
+                        if (parenStack.Count == 0) { expected = '('; return false; }
+                        parenStack.Pop();
+                        break;
+                    case '[': bracketStack.Push(line); break;
+                    case ']':
+                        if (bracketStack.Count == 0) { expected = '['; return false; }
+                        bracketStack.Pop();
+                        break;
+                }
+            }
+
+            if (braceStack.Count > 0) { line = braceStack.Peek(); expected = '}'; return false; }
+            if (parenStack.Count > 0) { line = parenStack.Peek(); expected = ')'; return false; }
+            if (bracketStack.Count > 0) { line = bracketStack.Peek(); expected = ']'; return false; }
+
+            return true;
+        }
+
+        // Lightweight scoped balance: checks delimiters within a substring, ignoring outer context
+        private static bool CheckScopedBalance(string text, int start, int end)
+        {
+            start = Math.Max(0, Math.Min(text.Length,
start));
+            end = Math.Max(start, Math.Min(text.Length, end));
+            int brace = 0, paren = 0, bracket = 0;
+            bool inStr = false, inChr = false, esc = false;
+            for (int i = start; i < end; i++)
+            {
+                char c = text[i];
+                char n = (i + 1 < end) ? text[i + 1] : '\0';
+                if (inStr)
+                {
+                    if (!esc && c == '"') inStr = false; esc = (!esc && c == '\\'); continue;
+                }
+                if (inChr)
+                {
+                    if (!esc && c == '\'') inChr = false; esc = (!esc && c == '\\'); continue;
+                }
+                if (c == '"') { inStr = true; esc = false; continue; }
+                if (c == '\'') { inChr = true; esc = false; continue; }
+                if (c == '/' && n == '/') { while (i < end && text[i] != '\n') i++; continue; }
+                if (c == '/' && n == '*') { i += 2; while (i + 1 < end && !(text[i] == '*' && text[i + 1] == '/')) i++; i++; continue; }
+                if (c == '{') brace++; else if (c == '}') brace--;
+                else if (c == '(') paren++; else if (c == ')') paren--;
+                else if (c == '[') bracket++; else if (c == ']') bracket--;
+                if (brace < 0 || paren < 0 || bracket < 0) return false;
+            }
+            return true; // counts cannot be negative here; surplus opens may legitimately close outside the window
+        }
+
     private static object DeleteScript(string fullPath, string relativePath)
     {
         if (!File.Exists(fullPath))
@@ -327,7 +955,8 @@ private static object DeleteScript(string fullPath, string relativePath)
             {
                 AssetDatabase.Refresh();
                 return Response.Success(
-                    $"Script '{Path.GetFileName(relativePath)}' moved to trash successfully."
+                    $"Script '{Path.GetFileName(relativePath)}' moved to trash successfully.",
+                    new { deleted = true }
                 );
             }
             else
@@ -344,6 +973,891 @@ private static object DeleteScript(string fullPath, string relativePath)
         }
     }
 
+        /// <summary>
+        /// Structured edits (AST-backed where available) on existing scripts.
+        /// Supports class-level replace/delete with Roslyn span computation if USE_ROSLYN is defined,
+        /// otherwise falls back to a conservative balanced-brace scan.
+        /// </summary>
+        private static object EditScript(
+            string fullPath,
+            string relativePath,
+            string name,
+            JArray edits,
+            JObject options)
+        {
+            if (!File.Exists(fullPath))
+                return Response.Error($"Script not found at '{relativePath}'.");
+            // Refuse edits if the target is a symlink
+            try
+            {
+                var attrs = File.GetAttributes(fullPath);
+                if ((attrs & FileAttributes.ReparsePoint) != 0)
+                    return Response.Error("Refusing to edit a symlinked script path.");
+            }
+            catch
+            {
+                // ignore failures checking attributes and proceed
+            }
+            if (edits == null || edits.Count == 0)
+                return Response.Error("No edits provided.");
+
+            string original;
+            try { original = File.ReadAllText(fullPath); }
+            catch (Exception ex) { return Response.Error($"Failed to read script: {ex.Message}"); }
+
+            string working = original;
+
+            try
+            {
+                var replacements = new List<(int start, int length, string text)>();
+                int appliedCount = 0;
+
+                // Apply mode: atomic (default) computes all spans against original and applies together.
+                // Sequential applies each edit immediately to the current working text (useful for dependent edits).
+                string applyMode = options?["applyMode"]?.ToString()?.ToLowerInvariant();
+                bool applySequentially = applyMode == "sequential";
+
+                foreach (var e in edits)
+                {
+                    var op = (JObject)e;
+                    var mode = (op.Value<string>("mode") ?? op.Value<string>("op") ??
string.Empty).ToLowerInvariant();
+
+                    switch (mode)
+                    {
+                        case "replace_class":
+                        {
+                            string className = op.Value<string>("className");
+                            string ns = op.Value<string>("namespace");
+                            string replacement = ExtractReplacement(op);
+
+                            if (string.IsNullOrWhiteSpace(className))
+                                return Response.Error("replace_class requires 'className'.");
+                            if (replacement == null)
+                                return Response.Error("replace_class requires 'replacement' (inline or base64).");
+
+                            if (!TryComputeClassSpan(working, className, ns, out var spanStart, out var spanLength, out var why))
+                                return Response.Error($"replace_class failed: {why}");
+
+                            if (!ValidateClassSnippet(replacement, className, out var vErr))
+                                return Response.Error($"Replacement snippet invalid: {vErr}");
+
+                            if (applySequentially)
+                            {
+                                working = working.Remove(spanStart, spanLength).Insert(spanStart, NormalizeNewlines(replacement));
+                                appliedCount++;
+                            }
+                            else
+                            {
+                                replacements.Add((spanStart, spanLength, NormalizeNewlines(replacement)));
+                            }
+                            break;
+                        }
+
+                        case "delete_class":
+                        {
+                            string className = op.Value<string>("className");
+                            string ns = op.Value<string>("namespace");
+                            if (string.IsNullOrWhiteSpace(className))
+                                return Response.Error("delete_class requires 'className'.");
+
+                            if (!TryComputeClassSpan(working, className, ns, out var s, out var l, out var why))
+                                return Response.Error($"delete_class failed: {why}");
+
+                            if (applySequentially)
+                            {
+                                working = working.Remove(s, l);
+                                appliedCount++;
+                            }
+                            else
+                            {
+                                replacements.Add((s, l, string.Empty));
+                            }
+                            break;
+                        }
+
+                        case "replace_method":
+                        {
+                            string className = op.Value<string>("className");
+                            string ns = op.Value<string>("namespace");
+                            string methodName = op.Value<string>("methodName");
+                            string replacement = ExtractReplacement(op);
+                            string returnType = op.Value<string>("returnType");
+                            string parametersSignature = op.Value<string>("parametersSignature");
+                            string attributesContains = op.Value<string>("attributesContains");
+
+                            if (string.IsNullOrWhiteSpace(className)) return Response.Error("replace_method requires 'className'.");
+                            if
(string.IsNullOrWhiteSpace(methodName)) return Response.Error("replace_method requires 'methodName'.");
+                            if (replacement == null) return Response.Error("replace_method requires 'replacement' (inline or base64).");
+
+                            if (!TryComputeClassSpan(working, className, ns, out var clsStart, out var clsLen, out var whyClass))
+                                return Response.Error($"replace_method failed to locate class: {whyClass}");
+
+                            if (!TryComputeMethodSpan(working, clsStart, clsLen, methodName, returnType, parametersSignature, attributesContains, out var mStart, out var mLen, out var whyMethod))
+                            {
+                                bool hasDependentInsert = edits.Any(j => j is JObject jo &&
+                                    string.Equals(jo.Value<string>("className"), className, StringComparison.Ordinal) &&
+                                    string.Equals(jo.Value<string>("methodName"), methodName, StringComparison.Ordinal) &&
+                                    ((jo.Value<string>("mode") ?? jo.Value<string>("op") ?? string.Empty).ToLowerInvariant() == "insert_method"));
+                                string hint = hasDependentInsert && !applySequentially ? " Hint: This batch inserts this method. Use options.applyMode='sequential' or split into separate calls."
: string.Empty;
+                                return Response.Error($"replace_method failed: {whyMethod}.{hint}");
+                            }
+
+                            if (applySequentially)
+                            {
+                                working = working.Remove(mStart, mLen).Insert(mStart, NormalizeNewlines(replacement));
+                                appliedCount++;
+                            }
+                            else
+                            {
+                                replacements.Add((mStart, mLen, NormalizeNewlines(replacement)));
+                            }
+                            break;
+                        }
+
+                        case "delete_method":
+                        {
+                            string className = op.Value<string>("className");
+                            string ns = op.Value<string>("namespace");
+                            string methodName = op.Value<string>("methodName");
+                            string returnType = op.Value<string>("returnType");
+                            string parametersSignature = op.Value<string>("parametersSignature");
+                            string attributesContains = op.Value<string>("attributesContains");
+
+                            if (string.IsNullOrWhiteSpace(className)) return Response.Error("delete_method requires 'className'.");
+                            if (string.IsNullOrWhiteSpace(methodName)) return Response.Error("delete_method requires 'methodName'.");
+
+                            if (!TryComputeClassSpan(working, className, ns, out var clsStart, out var clsLen, out var whyClass))
+                                return Response.Error($"delete_method failed to locate class: {whyClass}");
+
+                            if (!TryComputeMethodSpan(working, clsStart, clsLen, methodName, returnType, parametersSignature, attributesContains, out var mStart, out var mLen, out var whyMethod))
+                            {
+                                bool hasDependentInsert = edits.Any(j => j is JObject jo &&
+                                    string.Equals(jo.Value<string>("className"), className, StringComparison.Ordinal) &&
+                                    string.Equals(jo.Value<string>("methodName"), methodName, StringComparison.Ordinal) &&
+                                    ((jo.Value<string>("mode") ?? jo.Value<string>("op") ?? string.Empty).ToLowerInvariant() == "insert_method"));
+                                string hint = hasDependentInsert && !applySequentially ? " Hint: This batch inserts this method. Use options.applyMode='sequential' or split into separate calls."
: string.Empty;
+                                return Response.Error($"delete_method failed: {whyMethod}.{hint}");
+                            }
+
+                            if (applySequentially)
+                            {
+                                working = working.Remove(mStart, mLen);
+                                appliedCount++;
+                            }
+                            else
+                            {
+                                replacements.Add((mStart, mLen, string.Empty));
+                            }
+                            break;
+                        }
+
+                        case "insert_method":
+                        {
+                            string className = op.Value<string>("className");
+                            string ns = op.Value<string>("namespace");
+                            string position = (op.Value<string>("position") ?? "end").ToLowerInvariant();
+                            string afterMethodName = op.Value<string>("afterMethodName");
+                            string afterReturnType = op.Value<string>("afterReturnType");
+                            string afterParameters = op.Value<string>("afterParametersSignature");
+                            string afterAttributesContains = op.Value<string>("afterAttributesContains");
+                            string snippet = ExtractReplacement(op);
+                            // Harden: refuse a missing or empty replacement for inserts
+                            if (snippet == null || snippet.Trim().Length == 0)
+                                return Response.Error("insert_method requires a non-empty 'replacement' (inline or base64) containing a full method declaration.");
+
+                            if (string.IsNullOrWhiteSpace(className)) return Response.Error("insert_method requires 'className'.");
+
+                            if (!TryComputeClassSpan(working, className, ns, out var clsStart, out var clsLen, out var whyClass))
+                                return Response.Error($"insert_method failed to locate class: {whyClass}");
+
+                            if (position == "after")
+                            {
+                                if (string.IsNullOrEmpty(afterMethodName)) return Response.Error("insert_method with position='after' requires 'afterMethodName'.");
+                                if (!TryComputeMethodSpan(working, clsStart, clsLen, afterMethodName, afterReturnType, afterParameters, afterAttributesContains, out var aStart, out var aLen, out var whyAfter))
+                                    return Response.Error($"insert_method(after) failed to locate anchor method: {whyAfter}");
+                                int insAt = aStart + aLen;
+                                string text = NormalizeNewlines("\n\n" + snippet.TrimEnd() + "\n");
+                                if (applySequentially)
+                                {
+                                    working = working.Insert(insAt, text);
+                                    appliedCount++;
+                                }
+                                else
{
+                                    replacements.Add((insAt, 0, text));
+                                }
+                            }
+                            else if (!TryFindClassInsertionPoint(working, clsStart, clsLen, position, out var insAt, out var whyIns))
+                                return Response.Error($"insert_method failed: {whyIns}");
+                            else
+                            {
+                                string text = NormalizeNewlines("\n\n" + snippet.TrimEnd() + "\n");
+                                if (applySequentially)
+                                {
+                                    working = working.Insert(insAt, text);
+                                    appliedCount++;
+                                }
+                                else
+                                {
+                                    replacements.Add((insAt, 0, text));
+                                }
+                            }
+                            break;
+                        }
+
+                        case "anchor_insert":
+                        {
+                            string anchor = op.Value<string>("anchor");
+                            string position = (op.Value<string>("position") ?? "before").ToLowerInvariant();
+                            string text = op.Value<string>("text") ?? ExtractReplacement(op);
+                            if (string.IsNullOrWhiteSpace(anchor)) return Response.Error("anchor_insert requires 'anchor' (regex).");
+                            if (string.IsNullOrEmpty(text)) return Response.Error("anchor_insert requires non-empty 'text'.");
+
+                            try
+                            {
+                                var rx = new Regex(anchor, RegexOptions.Multiline, TimeSpan.FromSeconds(2));
+                                var m = rx.Match(working);
+                                if (!m.Success) return Response.Error($"anchor_insert: anchor not found: {anchor}");
+                                int insAt = position == "after" ?
m.Index + m.Length : m.Index;
+                                string norm = NormalizeNewlines(text);
+                                if (!norm.EndsWith("\n"))
+                                {
+                                    norm += "\n";
+                                }
+
+                                // Duplicate guard: if identical snippet already exists within this class, skip insert
+                                if (TryComputeClassSpan(working, name, null, out var clsStartDG, out var clsLenDG, out _))
+                                {
+                                    string classSlice = working.Substring(clsStartDG, Math.Min(clsLenDG, working.Length - clsStartDG));
+                                    if (classSlice.IndexOf(norm, StringComparison.Ordinal) >= 0)
+                                    {
+                                        // Do not insert duplicate; treat as no-op
+                                        break;
+                                    }
+                                }
+                                if (applySequentially)
+                                {
+                                    working = working.Insert(insAt, norm);
+                                    appliedCount++;
+                                }
+                                else
+                                {
+                                    replacements.Add((insAt, 0, norm));
+                                }
+                            }
+                            catch (Exception ex)
+                            {
+                                return Response.Error($"anchor_insert failed: {ex.Message}");
+                            }
+                            break;
+                        }
+
+                        case "anchor_delete":
+                        {
+                            string anchor = op.Value<string>("anchor");
+                            if (string.IsNullOrWhiteSpace(anchor)) return Response.Error("anchor_delete requires 'anchor' (regex).");
+                            try
+                            {
+                                var rx = new Regex(anchor, RegexOptions.Multiline, TimeSpan.FromSeconds(2));
+                                var m = rx.Match(working);
+                                if (!m.Success) return Response.Error($"anchor_delete: anchor not found: {anchor}");
+                                int delAt = m.Index;
+                                int delLen = m.Length;
+                                if (applySequentially)
+                                {
+                                    working = working.Remove(delAt, delLen);
+                                    appliedCount++;
+                                }
+                                else
+                                {
+                                    replacements.Add((delAt, delLen, string.Empty));
+                                }
+                            }
+                            catch (Exception ex)
+                            {
+                                return Response.Error($"anchor_delete failed: {ex.Message}");
+                            }
+                            break;
+                        }
+
+                        case "anchor_replace":
+                        {
+                            string anchor = op.Value<string>("anchor");
+                            string replacement = op.Value<string>("text") ?? op.Value<string>("replacement") ?? ExtractReplacement(op) ??
string.Empty; + if (string.IsNullOrWhiteSpace(anchor)) return Response.Error("anchor_replace requires 'anchor' (regex)."); + try + { + var rx = new Regex(anchor, RegexOptions.Multiline, TimeSpan.FromSeconds(2)); + var m = rx.Match(working); + if (!m.Success) return Response.Error($"anchor_replace: anchor not found: {anchor}"); + int at = m.Index; + int len = m.Length; + string norm = NormalizeNewlines(replacement); + if (applySequentially) + { + working = working.Remove(at, len).Insert(at, norm); + appliedCount++; + } + else + { + replacements.Add((at, len, norm)); + } + } + catch (Exception ex) + { + return Response.Error($"anchor_replace failed: {ex.Message}"); + } + break; + } + + default: + return Response.Error($"Unknown edit mode: '{mode}'. Allowed: replace_class, delete_class, replace_method, delete_method, insert_method, anchor_insert, anchor_delete, anchor_replace."); + } + } + + if (!applySequentially) + { + if (HasOverlaps(replacements)) + { + var ordered = replacements.OrderByDescending(r => r.start).ToList(); + for (int i = 1; i < ordered.Count; i++) + { + if (ordered[i].start + ordered[i].length > ordered[i - 1].start) + { + var conflict = new[] { new { startA = ordered[i].start, endA = ordered[i].start + ordered[i].length, startB = ordered[i - 1].start, endB = ordered[i - 1].start + ordered[i - 1].length } }; + return Response.Error("overlap", new { status = "overlap", conflicts = conflict, hint = "Apply in descending order against the same precondition snapshot." 
}); + } + } + return Response.Error("overlap", new { status = "overlap" }); + } + + foreach (var r in replacements.OrderByDescending(r => r.start)) + working = working.Remove(r.start, r.length).Insert(r.start, r.text); + appliedCount = replacements.Count; + } + + // No-op guard for structured edits: if text unchanged, return explicit no-op + if (string.Equals(working, original, StringComparison.Ordinal)) + { + var sameSha = ComputeSha256(original); + return Response.Success( + $"No-op: contents unchanged for '{relativePath}'.", + new + { + path = relativePath, + uri = $"unity://path/{relativePath}", + editsApplied = 0, + no_op = true, + sha256 = sameSha, + evidence = new { reason = "identical_content" } + } + ); + } + + // Validate result using override from options if provided; otherwise GUI strictness + var level = GetValidationLevelFromGUI(); + try + { + var validateOpt = options?["validate"]?.ToString()?.ToLowerInvariant(); + if (!string.IsNullOrEmpty(validateOpt)) + { + level = validateOpt switch + { + "basic" => ValidationLevel.Basic, + "standard" => ValidationLevel.Standard, + "comprehensive" => ValidationLevel.Comprehensive, + "strict" => ValidationLevel.Strict, + _ => level + }; + } + } + catch { /* ignore option parsing issues */ } + if (!ValidateScriptSyntax(working, level, out var errors)) + return Response.Error("validation_failed", new { status = "validation_failed", diagnostics = errors ?? 
Array.Empty<string>() });
+            else if (errors != null && errors.Length > 0)
+                Debug.LogWarning($"Script validation warnings for {name}:\n" + string.Join("\n", errors));
+
+            // Atomic write with backup; schedule refresh
+            // Decide refresh behavior
+            string refreshMode = options?["refresh"]?.ToString()?.ToLowerInvariant();
+            bool immediate = refreshMode == "immediate" || refreshMode == "sync";
+
+            // Persist changes atomically (no BOM), then compute/return new file SHA
+            var enc = new System.Text.UTF8Encoding(encoderShouldEmitUTF8Identifier: false);
+            var tmp = fullPath + ".tmp";
+            File.WriteAllText(tmp, working, enc);
+            var backup = fullPath + ".bak";
+            try
+            {
+                File.Replace(tmp, fullPath, backup);
+                try { if (File.Exists(backup)) File.Delete(backup); } catch { }
+            }
+            catch (PlatformNotSupportedException)
+            {
+                File.Copy(tmp, fullPath, true);
+                try { File.Delete(tmp); } catch { }
+                try { if (File.Exists(backup)) File.Delete(backup); } catch { }
+            }
+            catch (IOException)
+            {
+                File.Copy(tmp, fullPath, true);
+                try { File.Delete(tmp); } catch { }
+                try { if (File.Exists(backup)) File.Delete(backup); } catch { }
+            }
+
+            var newSha = ComputeSha256(working);
+            var ok = Response.Success(
+                $"Applied {appliedCount} structured edit(s) to '{relativePath}'.",
+                new
+                {
+                    path = relativePath,
+                    uri = $"unity://path/{relativePath}",
+                    editsApplied = appliedCount,
+                    scheduledRefresh = !immediate,
+                    sha256 = newSha
+                }
+            );
+
+            if (immediate)
+            {
+                // Force on main thread
+                EditorApplication.delayCall += () =>
+                {
+                    AssetDatabase.ImportAsset(
+                        relativePath,
+                        ImportAssetOptions.ForceSynchronousImport | ImportAssetOptions.ForceUpdate
+                    );
+#if UNITY_EDITOR
+                    UnityEditor.Compilation.CompilationPipeline.RequestScriptCompilation();
+#endif
+                };
+            }
+            else
+            {
+                ManageScriptRefreshHelpers.ScheduleScriptRefresh(relativePath);
+            }
+            return ok;
+        }
+        catch (Exception ex)
+        {
+            return Response.Error($"Edit failed: {ex.Message}");
+        }
+    }
+
+    private static bool HasOverlaps(IEnumerable<(int
start, int length, string text)> list)
+    {
+        var arr = list.OrderBy(x => x.start).ToArray();
+        for (int i = 1; i < arr.Length; i++)
+        {
+            if (arr[i - 1].start + arr[i - 1].length > arr[i].start)
+                return true;
+        }
+        return false;
+    }
+
+    private static string ExtractReplacement(JObject op)
+    {
+        var inline = op.Value<string>("replacement");
+        if (!string.IsNullOrEmpty(inline)) return inline;
+
+        var b64 = op.Value<string>("replacementBase64");
+        if (!string.IsNullOrEmpty(b64))
+        {
+            try { return System.Text.Encoding.UTF8.GetString(Convert.FromBase64String(b64)); }
+            catch { return null; }
+        }
+        return null;
+    }
+
+    private static string NormalizeNewlines(string t)
+    {
+        if (string.IsNullOrEmpty(t)) return t;
+        return t.Replace("\r\n", "\n").Replace("\r", "\n");
+    }
+
+    private static bool ValidateClassSnippet(string snippet, string expectedName, out string err)
+    {
+#if USE_ROSLYN
+        try
+        {
+            var tree = CSharpSyntaxTree.ParseText(snippet);
+            var root = tree.GetRoot();
+            var classes = root.DescendantNodes().OfType<ClassDeclarationSyntax>().ToList();
+            if (classes.Count != 1) { err = "snippet must contain exactly one class declaration"; return false; }
+            // Optional: enforce expected name
+            // if (classes[0].Identifier.ValueText != expectedName) { err = $"snippet declares '{classes[0].Identifier.ValueText}', expected '{expectedName}'"; return false; }
+            err = null; return true;
+        }
+        catch (Exception ex) { err = ex.Message; return false; }
+#else
+        if (string.IsNullOrWhiteSpace(snippet) || !snippet.Contains("class ")) { err = "no 'class' keyword found in snippet"; return false; }
+        err = null; return true;
+#endif
+    }
+
+    private static bool TryComputeClassSpan(string source, string className, string ns, out int start, out int length, out string why)
+    {
+#if USE_ROSLYN
+        try
+        {
+            var tree = CSharpSyntaxTree.ParseText(source);
+            var root = tree.GetRoot();
+            var classes = root.DescendantNodes()
+                .OfType<ClassDeclarationSyntax>()
+                .Where(c => c.Identifier.ValueText == className);
+
+            if (!string.IsNullOrEmpty(ns))
+            {
+                classes
= classes.Where(c => + (c.FirstAncestorOrSelf<NamespaceDeclarationSyntax>()?.Name?.ToString() ?? "") == ns + || (c.FirstAncestorOrSelf<FileScopedNamespaceDeclarationSyntax>()?.Name?.ToString() ?? "") == ns); + } + + var list = classes.ToList(); + if (list.Count == 0) { start = length = 0; why = $"class '{className}' not found" + (ns != null ? $" in namespace '{ns}'" : ""); return false; } + if (list.Count > 1) { start = length = 0; why = $"class '{className}' matched {list.Count} declarations (partial/nested?). Disambiguate."; return false; } + + var cls = list[0]; + var span = cls.FullSpan; // includes attributes & leading trivia + start = span.Start; length = span.Length; why = null; return true; + } + catch + { + // fall back below + } +#endif + return TryComputeClassSpanBalanced(source, className, ns, out start, out length, out why); + } + + private static bool TryComputeClassSpanBalanced(string source, string className, string ns, out int start, out int length, out string why) + { + start = length = 0; why = null; + var idx = IndexOfClassToken(source, className); + if (idx < 0) { why = $"class '{className}' not found (balanced scan)"; return false; } + + if (!string.IsNullOrEmpty(ns) && !AppearsWithinNamespaceHeader(source, idx, ns)) + { why = $"class '{className}' not under namespace '{ns}' (balanced scan)"; return false; } + + // Include modifiers/attributes on the same line: back up to the start of line + int lineStart = idx; + while (lineStart > 0 && source[lineStart - 1] != '\n' && source[lineStart - 1] != '\r') lineStart--; + + int i = idx; + while (i < source.Length && source[i] != '{') i++; + if (i >= source.Length) { why = "no opening brace after class header"; return false; } + + int depth = 0; bool inStr = false, inChar = false, inSL = false, inML = false, esc = false; + int startSpan = lineStart; + for (; i < source.Length; i++) + { + char c = source[i]; + char n = i + 1 < source.Length ? 
source[i + 1] : '\0'; + + if (inSL) { if (c == '\n') inSL = false; continue; } + if (inML) { if (c == '*' && n == '/') { inML = false; i++; } continue; } + if (inStr) { if (!esc && c == '"') inStr = false; esc = (!esc && c == '\\'); continue; } + if (inChar) { if (!esc && c == '\'') inChar = false; esc = (!esc && c == '\\'); continue; } + + if (c == '/' && n == '/') { inSL = true; i++; continue; } + if (c == '/' && n == '*') { inML = true; i++; continue; } + if (c == '"') { inStr = true; continue; } + if (c == '\'') { inChar = true; continue; } + + if (c == '{') { depth++; } + else if (c == '}') + { + depth--; + if (depth == 0) { start = startSpan; length = (i - startSpan) + 1; return true; } + if (depth < 0) { why = "brace underflow"; return false; } + } + } + why = "unterminated class block"; return false; + } + + private static bool TryComputeMethodSpan( + string source, + int classStart, + int classLength, + string methodName, + string returnType, + string parametersSignature, + string attributesContains, + out int start, + out int length, + out string why) + { + start = length = 0; why = null; + int searchStart = classStart; + int searchEnd = Math.Min(source.Length, classStart + classLength); + + // 1) Find the method header using a stricter regex (allows optional attributes above) + string rtPattern = string.IsNullOrEmpty(returnType) ? @"[^\s]+" : Regex.Escape(returnType).Replace("\\ ", "\\s+"); + string namePattern = Regex.Escape(methodName); + // If a parametersSignature is provided, it may include surrounding parentheses. Strip them so + // we can safely embed the signature inside our own parenthesis group without duplicating. 
+ string paramsPattern; + if (string.IsNullOrEmpty(parametersSignature)) + { + paramsPattern = @"[\s\S]*?"; // permissive when not specified + } + else + { + string ps = parametersSignature.Trim(); + if (ps.StartsWith("(") && ps.EndsWith(")") && ps.Length >= 2) + { + ps = ps.Substring(1, ps.Length - 2); + } + // Escape literal text of the signature + paramsPattern = Regex.Escape(ps); + } + string pattern = + @"(?m)^[\t ]*(?:\[[^\]]+\][\t ]*)*[\t ]*" + + @"(?:(?:public|private|protected|internal|static|virtual|override|sealed|async|extern|unsafe|new|partial|readonly|volatile|event|abstract|ref|in|out)\s+)*" + + rtPattern + @"[\t ]+" + namePattern + @"\s*(?:<[^>]+>)?\s*\(" + paramsPattern + @"\)"; + + string slice = source.Substring(searchStart, searchEnd - searchStart); + var headerMatch = Regex.Match(slice, pattern, RegexOptions.Multiline, TimeSpan.FromSeconds(2)); + if (!headerMatch.Success) + { + why = $"method '{methodName}' header not found in class"; return false; + } + int headerIndex = searchStart + headerMatch.Index; + + // Optional attributes filter: look upward from headerIndex for contiguous attribute lines + if (!string.IsNullOrEmpty(attributesContains)) + { + int attrScanStart = headerIndex; + while (attrScanStart > searchStart) + { + int prevNl = source.LastIndexOf('\n', attrScanStart - 1); + if (prevNl < 0 || prevNl < searchStart) break; + string prevLine = source.Substring(prevNl + 1, attrScanStart - (prevNl + 1)); + if (prevLine.TrimStart().StartsWith("[")) { attrScanStart = prevNl; continue; } + break; + } + string attrBlock = source.Substring(attrScanStart, headerIndex - attrScanStart); + if (attrBlock.IndexOf(attributesContains, StringComparison.Ordinal) < 0) + { + why = $"method '{methodName}' found but attributes filter did not match"; return false; + } + } + + // backtrack to the very start of header/attributes to include in span + int lineStart = headerIndex; + while (lineStart > searchStart && source[lineStart - 1] != '\n' && 
source[lineStart - 1] != '\r') lineStart--; + // If previous lines are attributes, include them + int attrStart = lineStart; + int probe = lineStart - 1; + while (probe > searchStart) + { + int prevNl = source.LastIndexOf('\n', probe); + if (prevNl < 0 || prevNl < searchStart) break; + string prev = source.Substring(prevNl + 1, attrStart - (prevNl + 1)); + if (prev.TrimStart().StartsWith("[")) { attrStart = prevNl + 1; probe = prevNl - 1; } + else break; + } + + // 2) Walk from the end of signature to detect body style ('{' or '=> ...;') and compute end + // Find the '(' that belongs to the method signature, not attributes + int nameTokenIdx = IndexOfTokenWithin(source, methodName, headerIndex, searchEnd); + if (nameTokenIdx < 0) { why = $"method '{methodName}' token not found after header"; return false; } + int sigOpenParen = IndexOfTokenWithin(source, "(", nameTokenIdx, searchEnd); + if (sigOpenParen < 0) { why = "method parameter list '(' not found"; return false; } + + int i = sigOpenParen; + int parenDepth = 0; bool inStr = false, inChar = false, inSL = false, inML = false, esc = false; + for (; i < searchEnd; i++) + { + char c = source[i]; + char n = i + 1 < searchEnd ? 
source[i + 1] : '\0'; + if (inSL) { if (c == '\n') inSL = false; continue; } + if (inML) { if (c == '*' && n == '/') { inML = false; i++; } continue; } + if (inStr) { if (!esc && c == '"') inStr = false; esc = (!esc && c == '\\'); continue; } + if (inChar) { if (!esc && c == '\'') inChar = false; esc = (!esc && c == '\\'); continue; } + + if (c == '/' && n == '/') { inSL = true; i++; continue; } + if (c == '/' && n == '*') { inML = true; i++; continue; } + if (c == '"') { inStr = true; continue; } + if (c == '\'') { inChar = true; continue; } + + if (c == '(') parenDepth++; + if (c == ')') { parenDepth--; if (parenDepth == 0) { i++; break; } } + } + + // After params: detect expression-bodied or block-bodied + // Skip whitespace/comments + for (; i < searchEnd; i++) + { + char c = source[i]; + char n = i + 1 < searchEnd ? source[i + 1] : '\0'; + if (char.IsWhiteSpace(c)) continue; + if (c == '/' && n == '/') { while (i < searchEnd && source[i] != '\n') i++; continue; } + if (c == '/' && n == '*') { i += 2; while (i + 1 < searchEnd && !(source[i] == '*' && source[i + 1] == '/')) i++; i++; continue; } + break; + } + + // Tolerate generic constraints between params and body: multiple 'where T : ...' + for (;;) + { + // Skip whitespace/comments before checking for 'where' + for (; i < searchEnd; i++) + { + char c = source[i]; + char n = i + 1 < searchEnd ? 
source[i + 1] : '\0'; + if (char.IsWhiteSpace(c)) continue; + if (c == '/' && n == '/') { while (i < searchEnd && source[i] != '\n') i++; continue; } + if (c == '/' && n == '*') { i += 2; while (i + 1 < searchEnd && !(source[i] == '*' && source[i + 1] == '/')) i++; i++; continue; } + break; + } + + // Check word-boundary 'where' + bool hasWhere = false; + if (i + 5 <= searchEnd) + { + hasWhere = source[i] == 'w' && source[i + 1] == 'h' && source[i + 2] == 'e' && source[i + 3] == 'r' && source[i + 4] == 'e'; + if (hasWhere) + { + // Left boundary + if (i - 1 >= 0) + { + char lb = source[i - 1]; + if (char.IsLetterOrDigit(lb) || lb == '_') hasWhere = false; + } + // Right boundary + if (hasWhere && i + 5 < searchEnd) + { + char rb = source[i + 5]; + if (char.IsLetterOrDigit(rb) || rb == '_') hasWhere = false; + } + } + } + if (!hasWhere) break; + + // Advance past the entire where-constraint clause until we hit '{' or '=>' or ';' + i += 5; // past 'where' + while (i < searchEnd) + { + char c = source[i]; + char n = i + 1 < searchEnd ? 
source[i + 1] : '\0'; + if (c == '{' || c == ';' || (c == '=' && n == '>')) break; + // Skip comments inline + if (c == '/' && n == '/') { while (i < searchEnd && source[i] != '\n') i++; continue; } + if (c == '/' && n == '*') { i += 2; while (i + 1 < searchEnd && !(source[i] == '*' && source[i + 1] == '/')) i++; i++; continue; } + i++; + } + } + + // Re-check for expression-bodied after constraints + if (i < searchEnd - 1 && source[i] == '=' && source[i + 1] == '>') + { + // expression-bodied method: seek to terminating semicolon + int j = i; + bool done = false; + while (j < searchEnd) + { + char c = source[j]; + if (c == ';') { done = true; break; } + j++; + } + if (!done) { why = "unterminated expression-bodied method"; return false; } + start = attrStart; length = (j - attrStart) + 1; return true; + } + + if (i >= searchEnd || source[i] != '{') { why = "no opening brace after method signature"; return false; } + + int depth = 0; inStr = false; inChar = false; inSL = false; inML = false; esc = false; + int startSpan = attrStart; + for (; i < searchEnd; i++) + { + char c = source[i]; + char n = i + 1 < searchEnd ? 
source[i + 1] : '\0'; + if (inSL) { if (c == '\n') inSL = false; continue; } + if (inML) { if (c == '*' && n == '/') { inML = false; i++; } continue; } + if (inStr) { if (!esc && c == '"') inStr = false; esc = (!esc && c == '\\'); continue; } + if (inChar) { if (!esc && c == '\'') inChar = false; esc = (!esc && c == '\\'); continue; } + + if (c == '/' && n == '/') { inSL = true; i++; continue; } + if (c == '/' && n == '*') { inML = true; i++; continue; } + if (c == '"') { inStr = true; continue; } + if (c == '\'') { inChar = true; continue; } + + if (c == '{') depth++; + else if (c == '}') + { + depth--; + if (depth == 0) { start = startSpan; length = (i - startSpan) + 1; return true; } + if (depth < 0) { why = "brace underflow in method"; return false; } + } + } + why = "unterminated method block"; return false; + } + + private static int IndexOfTokenWithin(string s, string token, int start, int end) + { + int idx = s.IndexOf(token, start, StringComparison.Ordinal); + return (idx >= 0 && idx < end) ? 
idx : -1; + } + + private static bool TryFindClassInsertionPoint(string source, int classStart, int classLength, string position, out int insertAt, out string why) + { + insertAt = 0; why = null; + int searchStart = classStart; + int searchEnd = Math.Min(source.Length, classStart + classLength); + + if (position == "start") + { + // find first '{' after class header, insert just after with a newline + int i = IndexOfTokenWithin(source, "{", searchStart, searchEnd); + if (i < 0) { why = "could not find class opening brace"; return false; } + insertAt = i + 1; return true; + } + else // end + { + // walk to matching closing brace of class and insert just before it + int i = IndexOfTokenWithin(source, "{", searchStart, searchEnd); + if (i < 0) { why = "could not find class opening brace"; return false; } + int depth = 0; bool inStr = false, inChar = false, inSL = false, inML = false, esc = false; + for (; i < searchEnd; i++) + { + char c = source[i]; + char n = i + 1 < searchEnd ? source[i + 1] : '\0'; + if (inSL) { if (c == '\n') inSL = false; continue; } + if (inML) { if (c == '*' && n == '/') { inML = false; i++; } continue; } + if (inStr) { if (!esc && c == '"') inStr = false; esc = (!esc && c == '\\'); continue; } + if (inChar) { if (!esc && c == '\'') inChar = false; esc = (!esc && c == '\\'); continue; } + + if (c == '/' && n == '/') { inSL = true; i++; continue; } + if (c == '/' && n == '*') { inML = true; i++; continue; } + if (c == '"') { inStr = true; continue; } + if (c == '\'') { inChar = true; continue; } + + if (c == '{') depth++; + else if (c == '}') + { + depth--; + if (depth == 0) { insertAt = i; return true; } + if (depth < 0) { why = "brace underflow while scanning class"; return false; } + } + } + why = "could not find class closing brace"; return false; + } + } + + private static int IndexOfClassToken(string s, string className) + { + // simple token search; could be tightened with Regex for word boundaries + var pattern = "class " + className; + 
return s.IndexOf(pattern, StringComparison.Ordinal); + } + + private static bool AppearsWithinNamespaceHeader(string s, int pos, string ns) + { + int from = Math.Max(0, pos - 2000); + var slice = s.Substring(from, pos - from); + return slice.Contains("namespace " + ns); + } + /// <summary> /// Generates basic C# script content based on name and type. /// </summary> @@ -451,11 +1965,14 @@ private static bool ValidateScriptSyntax(string contents, ValidationLevel level, } #if USE_ROSLYN - // Advanced Roslyn-based validation - if (!ValidateScriptSyntaxRoslyn(contents, level, errorList)) + // Advanced Roslyn-based validation: only run for Standard+; fail on Roslyn errors + if (level >= ValidationLevel.Standard) { - errors = errorList.ToArray(); - return level != ValidationLevel.Standard; //TODO: Allow standard to run roslyn right now, might formalize it in the future + if (!ValidateScriptSyntaxRoslyn(contents, level, errorList)) + { + errors = errorList.ToArray(); + return false; + } } #endif @@ -908,7 +2425,7 @@ private static void ValidateSemanticRules(string contents, System.Collections.Ge } // Check for magic numbers - var magicNumberPattern = new Regex(@"\b\d+\.?\d*f?\b(?!\s*[;})\]])"); + var magicNumberPattern = new Regex(@"\b\d+\.?\d*f?\b(?!\s*[;})\]])", RegexOptions.CultureInvariant, TimeSpan.FromSeconds(2)); var matches = magicNumberPattern.Matches(contents); if (matches.Count > 5) { @@ -916,7 +2433,7 @@ private static void ValidateSemanticRules(string contents, System.Collections.Ge } // Check for long methods (simple line count check) - var methodPattern = new Regex(@"(public|private|protected|internal)?\s*(static)?\s*\w+\s+\w+\s*\([^)]*\)\s*{"); + var methodPattern = new Regex(@"(public|private|protected|internal)?\s*(static)?\s*\w+\s+\w+\s*\([^)]*\)\s*{", RegexOptions.CultureInvariant, TimeSpan.FromSeconds(2)); var methodMatches = methodPattern.Matches(contents); foreach (Match match in methodMatches) { @@ -1028,3 +2545,80 @@ private static void ValidateSemanticRules(string 
contents, System.Collections.Ge } } +// Debounced refresh/compile scheduler to coalesce bursts of edits +static class RefreshDebounce +{ + private static int _pending; + private static readonly object _lock = new object(); + private static readonly HashSet<string> _paths = new HashSet<string>(StringComparer.OrdinalIgnoreCase); + + // The timestamp of the most recent schedule request. + private static DateTime _lastRequest; + + // Guard to ensure we only have a single ticking callback running. + private static bool _scheduled; + + public static void Schedule(string relPath, TimeSpan window) + { + // Record that work is pending and track the path in a threadsafe way. + Interlocked.Exchange(ref _pending, 1); + lock (_lock) + { + _paths.Add(relPath); + _lastRequest = DateTime.UtcNow; + + // If a debounce timer is already scheduled it will pick up the new request. + if (_scheduled) + return; + + _scheduled = true; + } + + // Kick off a ticking callback that waits until the window has elapsed + // from the last request before performing the refresh. + EditorApplication.delayCall += () => Tick(window); + } + + private static void Tick(TimeSpan window) + { + bool ready; + lock (_lock) + { + // Only proceed once the debounce window has fully elapsed. + ready = (DateTime.UtcNow - _lastRequest) >= window; + if (ready) + { + _scheduled = false; + } + } + + if (!ready) + { + // Window has not yet elapsed; check again on the next editor tick. 
+ EditorApplication.delayCall += () => Tick(window); + return; + } + + if (Interlocked.Exchange(ref _pending, 0) == 1) + { + string[] toImport; + lock (_lock) { toImport = _paths.ToArray(); _paths.Clear(); } + foreach (var p in toImport) + AssetDatabase.ImportAsset(p, ImportAssetOptions.ForceUpdate); +#if UNITY_EDITOR + UnityEditor.Compilation.CompilationPipeline.RequestScriptCompilation(); +#endif + // Fallback if needed: + // AssetDatabase.Refresh(); + } + } +} + +static class ManageScriptRefreshHelpers +{ + public static void ScheduleScriptRefresh(string relPath) + { + RefreshDebounce.Schedule(relPath, TimeSpan.FromMilliseconds(200)); + } +} + diff --git a/UnityMcpBridge/Editor/Windows/MCPForUnityEditorWindow.cs b/UnityMcpBridge/Editor/Windows/MCPForUnityEditorWindow.cs index 96d5038c..f9235fdb 100644 --- a/UnityMcpBridge/Editor/Windows/MCPForUnityEditorWindow.cs +++ b/UnityMcpBridge/Editor/Windows/MCPForUnityEditorWindow.cs @@ -723,9 +723,8 @@ private static bool PathsEqual(string a, string b) string na = System.IO.Path.GetFullPath(a.Trim()); string nb = System.IO.Path.GetFullPath(b.Trim()); if (System.Runtime.InteropServices.RuntimeInformation.IsOSPlatform(System.Runtime.InteropServices.OSPlatform.Windows)) - { return string.Equals(na, nb, StringComparison.OrdinalIgnoreCase); - } + // Default to ordinal on Unix; optionally detect FS case-sensitivity at runtime if needed return string.Equals(na, nb, StringComparison.Ordinal); } catch { return false; } @@ -758,22 +757,112 @@ private static bool IsClaudeConfigured() private static bool VerifyBridgePing(int port) { + // Use strict framed protocol to match bridge (FRAMING=1) + const int ConnectTimeoutMs = 1000; + const int FrameTimeoutMs = 30000; // match bridge frame I/O timeout + try { - using TcpClient c = new TcpClient(); - var task = c.ConnectAsync(IPAddress.Loopback, port); - if (!task.Wait(500)) return false; - using NetworkStream s = c.GetStream(); - byte[] ping = Encoding.UTF8.GetBytes("ping"); - 
s.Write(ping, 0, ping.Length); - s.ReadTimeout = 1000; - byte[] buf = new byte[256]; - int n = s.Read(buf, 0, buf.Length); - if (n <= 0) return false; - string resp = Encoding.UTF8.GetString(buf, 0, n); - return resp.Contains("pong", StringComparison.OrdinalIgnoreCase); + using TcpClient client = new TcpClient(); + var connectTask = client.ConnectAsync(IPAddress.Loopback, port); + if (!connectTask.Wait(ConnectTimeoutMs)) return false; + + using NetworkStream stream = client.GetStream(); + try { client.NoDelay = true; } catch { } + + // 1) Read handshake line (ASCII, newline-terminated) + string handshake = ReadLineAscii(stream, 2000); + if (string.IsNullOrEmpty(handshake) || handshake.IndexOf("FRAMING=1", StringComparison.OrdinalIgnoreCase) < 0) + { + UnityEngine.Debug.LogWarning("MCP for Unity: Bridge handshake missing FRAMING=1"); + return false; + } + + // 2) Send framed "ping" + byte[] payload = Encoding.UTF8.GetBytes("ping"); + WriteFrame(stream, payload, FrameTimeoutMs); + + // 3) Read framed response and check for pong + string response = ReadFrameUtf8(stream, FrameTimeoutMs); + bool ok = !string.IsNullOrEmpty(response) && response.IndexOf("pong", StringComparison.OrdinalIgnoreCase) >= 0; + if (!ok) + { + UnityEngine.Debug.LogWarning($"MCP for Unity: Framed ping failed; response='{response}'"); + } + return ok; + } + catch (Exception ex) + { + UnityEngine.Debug.LogWarning($"MCP for Unity: VerifyBridgePing error: {ex.Message}"); + return false; } - catch { return false; } + } + + // Minimal framing helpers (8-byte big-endian length prefix), blocking with timeouts + private static void WriteFrame(NetworkStream stream, byte[] payload, int timeoutMs) + { + if (payload == null) throw new ArgumentNullException(nameof(payload)); + if (payload.LongLength < 1) throw new IOException("Zero-length frames are not allowed"); + byte[] header = new byte[8]; + ulong len = (ulong)payload.LongLength; + header[0] = (byte)(len >> 56); + header[1] = (byte)(len >> 48); + header[2] 
= (byte)(len >> 40); + header[3] = (byte)(len >> 32); + header[4] = (byte)(len >> 24); + header[5] = (byte)(len >> 16); + header[6] = (byte)(len >> 8); + header[7] = (byte)(len); + + stream.WriteTimeout = timeoutMs; + stream.Write(header, 0, header.Length); + stream.Write(payload, 0, payload.Length); + } + + private static string ReadFrameUtf8(NetworkStream stream, int timeoutMs) + { + byte[] header = ReadExact(stream, 8, timeoutMs); + ulong len = ((ulong)header[0] << 56) + | ((ulong)header[1] << 48) + | ((ulong)header[2] << 40) + | ((ulong)header[3] << 32) + | ((ulong)header[4] << 24) + | ((ulong)header[5] << 16) + | ((ulong)header[6] << 8) + | header[7]; + if (len == 0UL) throw new IOException("Zero-length frames are not allowed"); + if (len > int.MaxValue) throw new IOException("Frame too large"); + byte[] payload = ReadExact(stream, (int)len, timeoutMs); + return Encoding.UTF8.GetString(payload); + } + + private static byte[] ReadExact(NetworkStream stream, int count, int timeoutMs) + { + byte[] buffer = new byte[count]; + int offset = 0; + stream.ReadTimeout = timeoutMs; + while (offset < count) + { + int read = stream.Read(buffer, offset, count - offset); + if (read <= 0) throw new IOException("Connection closed before reading expected bytes"); + offset += read; + } + return buffer; + } + + private static string ReadLineAscii(NetworkStream stream, int timeoutMs, int maxLen = 512) + { + stream.ReadTimeout = timeoutMs; + using var ms = new MemoryStream(); + byte[] one = new byte[1]; + while (ms.Length < maxLen) + { + int n = stream.Read(one, 0, 1); + if (n <= 0) break; + if (one[0] == (byte)'\n') break; + ms.WriteByte(one[0]); + } + return Encoding.ASCII.GetString(ms.ToArray()); } private void DrawClientConfigurationCompact(McpClient mcpClient) @@ -1134,10 +1223,19 @@ private string WriteToConfig(string pythonDir, string configPath, McpClient mcpC } catch { } - // 1) Start from existing, only fill gaps - string uvPath = (ValidateUvBinarySafe(existingCommand) ? 
existingCommand : FindUvPath()); + // 1) Start from existing, only fill gaps (prefer trusted resolver) + string uvPath = ServerInstaller.FindUvPath(); + // Optionally trust existingCommand if it looks like uv/uv.exe + try + { + var name = System.IO.Path.GetFileName((existingCommand ?? string.Empty).Trim()).ToLowerInvariant(); + if ((name == "uv" || name == "uv.exe") && ValidateUvBinarySafe(existingCommand)) + { + uvPath = existingCommand; + } + } + catch { } if (uvPath == null) return "UV package manager not found. Please install UV first."; - string serverSrc = ExtractDirectoryArg(existingArgs); bool serverValid = !string.IsNullOrEmpty(serverSrc) && System.IO.File.Exists(System.IO.Path.Combine(serverSrc, "server.py")); @@ -1203,51 +1301,61 @@ private string WriteToConfig(string pythonDir, string configPath, McpClient mcpC string mergedJson = JsonConvert.SerializeObject(existingRoot, jsonSettings); - // Use a more robust atomic write pattern + // Robust atomic write without redundant backup or race on existence string tmp = configPath + ".tmp"; string backup = configPath + ".backup"; - + bool writeDone = false; try { - // Write to temp file first + // Write to temp file first (in same directory for atomicity) System.IO.File.WriteAllText(tmp, mergedJson, new System.Text.UTF8Encoding(false)); - - // Create backup of existing file if it exists - if (System.IO.File.Exists(configPath)) + + try { - System.IO.File.Copy(configPath, backup, true); + // Try atomic replace; creates 'backup' only on success (platform-dependent) + System.IO.File.Replace(tmp, configPath, backup); + writeDone = true; } - - // Atomic move operation (more reliable than Replace on macOS) - if (System.IO.File.Exists(configPath)) + catch (System.IO.FileNotFoundException) { - System.IO.File.Delete(configPath); + // Destination didn't exist; fall back to move + System.IO.File.Move(tmp, configPath); + writeDone = true; } - System.IO.File.Move(tmp, configPath); - - // Clean up backup - if 
(System.IO.File.Exists(backup)) + catch (System.PlatformNotSupportedException) { - System.IO.File.Delete(backup); + // Fallback: rename existing to backup, then move tmp into place + if (System.IO.File.Exists(configPath)) + { + try { if (System.IO.File.Exists(backup)) System.IO.File.Delete(backup); } catch { } + System.IO.File.Move(configPath, backup); + } + System.IO.File.Move(tmp, configPath); + writeDone = true; } } catch (Exception ex) { - // Clean up temp file - try { if (System.IO.File.Exists(tmp)) System.IO.File.Delete(tmp); } catch { } - // Restore backup if it exists - try { - if (System.IO.File.Exists(backup)) + + // If write did not complete, attempt restore from backup without deleting current file first + try + { + if (!writeDone && System.IO.File.Exists(backup)) { - if (System.IO.File.Exists(configPath)) - { - System.IO.File.Delete(configPath); - } - System.IO.File.Move(backup, configPath); + try { System.IO.File.Copy(backup, configPath, true); } catch { } } - } catch { } + } + catch { } throw new Exception($"Failed to write config file '{configPath}': {ex.Message}", ex); } + finally + { + // Best-effort cleanup of temp + try { if (System.IO.File.Exists(tmp)) System.IO.File.Delete(tmp); } catch { } + // Only remove backup after a confirmed successful write + try { if (writeDone && System.IO.File.Exists(backup)) System.IO.File.Delete(backup); } catch { } + } + try { if (IsValidUv(uvPath)) UnityEditor.EditorPrefs.SetString("MCPForUnity.UvPath", uvPath); @@ -1835,283 +1943,12 @@ private void UnregisterWithClaudeCode() private string FindUvPath() { - string uvPath = null; - - if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows)) - { - uvPath = FindWindowsUvPath(); - } - else - { - // macOS/Linux paths - string[] possiblePaths = { - "/Library/Frameworks/Python.framework/Versions/3.13/bin/uv", - "/usr/local/bin/uv", - "/opt/homebrew/bin/uv", - "/usr/bin/uv" - }; - - foreach (string path in possiblePaths) - { - if (File.Exists(path) && 
IsValidUvInstallation(path)) - { - uvPath = path; - break; - } - } - - // If not found in common locations, try to find via which command - if (uvPath == null) - { - try - { - var psi = new ProcessStartInfo - { - FileName = "which", - Arguments = "uv", - UseShellExecute = false, - RedirectStandardOutput = true, - CreateNoWindow = true - }; - - using var process = Process.Start(psi); - string output = process.StandardOutput.ReadToEnd().Trim(); - process.WaitForExit(); - - if (!string.IsNullOrEmpty(output) && File.Exists(output) && IsValidUvInstallation(output)) - { - uvPath = output; - } - } - catch - { - // Ignore errors - } - } - } - - // If no specific path found, fall back to using 'uv' from PATH - if (uvPath == null) - { - // Test if 'uv' is available in PATH by trying to run it - string uvCommand = RuntimeInformation.IsOSPlatform(OSPlatform.Windows) ? "uv.exe" : "uv"; - if (IsValidUvInstallation(uvCommand)) - { - uvPath = uvCommand; - } - } - - if (uvPath == null) - { - UnityEngine.Debug.LogError("UV package manager not found! 
Please install UV first:\n" + - "• macOS/Linux: curl -LsSf https://astral.sh/uv/install.sh | sh\n" + - "• Windows: pip install uv\n" + - "• Or visit: https://docs.astral.sh/uv/getting-started/installation"); - return null; - } - - return uvPath; + try { return MCPForUnity.Editor.Helpers.ServerInstaller.FindUvPath(); } catch { return null; } } - private bool IsValidUvInstallation(string uvPath) - { - try - { - var psi = new ProcessStartInfo - { - FileName = uvPath, - Arguments = "--version", - UseShellExecute = false, - RedirectStandardOutput = true, - RedirectStandardError = true, - CreateNoWindow = true - }; - - using var process = Process.Start(psi); - process.WaitForExit(5000); // 5 second timeout - - if (process.ExitCode == 0) - { - string output = process.StandardOutput.ReadToEnd().Trim(); - // Basic validation - just check if it responds with version info - // UV typically outputs "uv 0.x.x" format - if (output.StartsWith("uv ") && output.Contains(".")) - { - return true; - } - } - - return false; - } - catch - { - return false; - } - } + // Validation and platform-specific scanning are handled by ServerInstaller.FindUvPath() - private string FindWindowsUvPath() - { - string appData = Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData); - string localAppData = Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData); - string userProfile = Environment.GetFolderPath(Environment.SpecialFolder.UserProfile); - - // Dynamic Python version detection - check what's actually installed - List<string> pythonVersions = new List<string>(); - - // Add common versions but also scan for any Python* directories - string[] commonVersions = { "Python313", "Python312", "Python311", "Python310", "Python39", "Python38", "Python37" }; - pythonVersions.AddRange(commonVersions); - - // Scan for additional Python installations - string[] pythonBasePaths = { - Path.Combine(appData, "Python"), - Path.Combine(localAppData, "Programs", "Python"), - 
Environment.GetFolderPath(Environment.SpecialFolder.ProgramFiles) + "\\Python", - Environment.GetFolderPath(Environment.SpecialFolder.ProgramFilesX86) + "\\Python" - }; - - foreach (string basePath in pythonBasePaths) - { - if (Directory.Exists(basePath)) - { - try - { - foreach (string dir in Directory.GetDirectories(basePath, "Python*")) - { - string versionName = Path.GetFileName(dir); - if (!pythonVersions.Contains(versionName)) - { - pythonVersions.Add(versionName); - } - } - } - catch - { - // Ignore directory access errors - } - } - } - - // Check Python installations for UV - foreach (string version in pythonVersions) - { - string[] pythonPaths = { - Path.Combine(appData, "Python", version, "Scripts", "uv.exe"), - Path.Combine(localAppData, "Programs", "Python", version, "Scripts", "uv.exe"), - Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.ProgramFiles), "Python", version, "Scripts", "uv.exe"), - Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.ProgramFilesX86), "Python", version, "Scripts", "uv.exe") - }; - - foreach (string uvPath in pythonPaths) - { - if (File.Exists(uvPath) && IsValidUvInstallation(uvPath)) - { - return uvPath; - } - } - } - - // Check package manager installations - string[] packageManagerPaths = { - // Chocolatey - Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData), "chocolatey", "lib", "uv", "tools", "uv.exe"), - Path.Combine("C:", "ProgramData", "chocolatey", "lib", "uv", "tools", "uv.exe"), - - // Scoop - Path.Combine(userProfile, "scoop", "apps", "uv", "current", "uv.exe"), - Path.Combine(userProfile, "scoop", "shims", "uv.exe"), - - // Winget/msstore - Path.Combine(localAppData, "Microsoft", "WinGet", "Packages", "astral-sh.uv_Microsoft.Winget.Source_8wekyb3d8bbwe", "uv.exe"), - - // Common standalone installations - Path.Combine(localAppData, "uv", "uv.exe"), - Path.Combine(appData, "uv", "uv.exe"), - Path.Combine(userProfile, ".local", "bin", "uv.exe"), - 
Path.Combine(userProfile, "bin", "uv.exe"),
-
-                // Cargo/Rust installations
-                Path.Combine(userProfile, ".cargo", "bin", "uv.exe"),
-
-                // Manual installations in common locations
-                Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.ProgramFiles), "uv", "uv.exe"),
-                Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.ProgramFilesX86), "uv", "uv.exe")
-            };
-
-            foreach (string uvPath in packageManagerPaths)
-            {
-                if (File.Exists(uvPath) && IsValidUvInstallation(uvPath))
-                {
-                    return uvPath;
-                }
-            }
-
-            // Try to find uv via where command (Windows equivalent of which)
-            // Use where.exe explicitly to avoid PowerShell alias conflicts
-            try
-            {
-                var psi = new ProcessStartInfo
-                {
-                    FileName = "where.exe",
-                    Arguments = "uv",
-                    UseShellExecute = false,
-                    RedirectStandardOutput = true,
-                    RedirectStandardError = true,
-                    CreateNoWindow = true
-                };
-
-                using var process = Process.Start(psi);
-                string output = process.StandardOutput.ReadToEnd().Trim();
-                process.WaitForExit();
-
-                if (process.ExitCode == 0 && !string.IsNullOrEmpty(output))
-                {
-                    string[] lines = output.Split('\n');
-                    foreach (string line in lines)
-                    {
-                        string cleanPath = line.Trim();
-                        if (File.Exists(cleanPath) && IsValidUvInstallation(cleanPath))
-                        {
-                            return cleanPath;
-                        }
-                    }
-                }
-            }
-            catch
-            {
-                // If where.exe fails, try PowerShell's Get-Command as fallback
-                try
-                {
-                    var psi = new ProcessStartInfo
-                    {
-                        FileName = "powershell.exe",
-                        Arguments = "-Command \"(Get-Command uv -ErrorAction SilentlyContinue).Source\"",
-                        UseShellExecute = false,
-                        RedirectStandardOutput = true,
-                        RedirectStandardError = true,
-                        CreateNoWindow = true
-                    };
-
-                    using var process = Process.Start(psi);
-                    string output = process.StandardOutput.ReadToEnd().Trim();
-                    process.WaitForExit();
-
-                    if (process.ExitCode == 0 && !string.IsNullOrEmpty(output) && File.Exists(output))
-                    {
-                        if (IsValidUvInstallation(output))
-                        {
-                            return output;
-                        }
-                    }
-                }
-                catch
-                {
-                    // Ignore PowerShell errors too
-                }
-            }
-
-            return null; // Will fallback to using 'uv' from PATH
-        }
+        // Windows-specific discovery removed; use ServerInstaller.FindUvPath() instead

         // Removed unused FindClaudeCommand

@@ -2123,10 +1960,14 @@ private void CheckClaudeCodeConfiguration(McpClient mcpClient)
             string unityProjectDir = Application.dataPath;
             string projectDir = Path.GetDirectoryName(unityProjectDir);

-            // Read the global Claude config file
-            string configPath = RuntimeInformation.IsOSPlatform(OSPlatform.Windows)
-                ? mcpClient.windowsConfigPath
-                : mcpClient.linuxConfigPath;
+            // Read the global Claude config file (honor macConfigPath on macOS)
+            string configPath;
+            if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
+                configPath = mcpClient.windowsConfigPath;
+            else if (RuntimeInformation.IsOSPlatform(OSPlatform.OSX))
+                configPath = string.IsNullOrEmpty(mcpClient.macConfigPath) ? mcpClient.linuxConfigPath : mcpClient.macConfigPath;
+            else
+                configPath = mcpClient.linuxConfigPath;

             if (debugLogsEnabled)
             {
diff --git a/UnityMcpBridge/Editor/Windows/ManualConfigEditorWindow.cs b/UnityMcpBridge/Editor/Windows/ManualConfigEditorWindow.cs
index 9fe776a9..501e37a4 100644
--- a/UnityMcpBridge/Editor/Windows/ManualConfigEditorWindow.cs
+++ b/UnityMcpBridge/Editor/Windows/ManualConfigEditorWindow.cs
@@ -119,7 +119,9 @@ protected virtual void OnGUI()
             else if (RuntimeInformation.IsOSPlatform(OSPlatform.OSX))
             {
                 displayPath = string.IsNullOrEmpty(mcpClient.macConfigPath)
-                    ? mcpClient.linuxConfigPath
+
+                    ? configPath
+                    : mcpClient.macConfigPath;
             }
             else if (RuntimeInformation.IsOSPlatform(OSPlatform.Linux))
diff --git a/UnityMcpBridge/UnityMcpServer~/src/config.py b/UnityMcpBridge/UnityMcpServer~/src/config.py
index 5df28b8a..3023f119 100644
--- a/UnityMcpBridge/UnityMcpServer~/src/config.py
+++ b/UnityMcpBridge/UnityMcpServer~/src/config.py
@@ -17,6 +17,9 @@ class ServerConfig:
     # Connection settings
     connection_timeout: float = 60.0  # default steady-state timeout; retries use shorter timeouts
     buffer_size: int = 16 * 1024 * 1024  # 16MB buffer
+    # Framed receive behavior
+    framed_receive_timeout: float = 2.0  # max seconds to wait while consuming heartbeats only
+    max_heartbeat_frames: int = 16  # cap heartbeat frames consumed before giving up

     # Logging settings
     log_level: str = "INFO"
diff --git a/UnityMcpBridge/UnityMcpServer~/src/pyrightconfig.json b/UnityMcpBridge/UnityMcpServer~/src/pyrightconfig.json
new file mode 100644
index 00000000..4fdeb465
--- /dev/null
+++ b/UnityMcpBridge/UnityMcpServer~/src/pyrightconfig.json
@@ -0,0 +1,11 @@
+{
+    "typeCheckingMode": "basic",
+    "reportMissingImports": "none",
+    "pythonVersion": "3.11",
+    "executionEnvironments": [
+        {
+            "root": ".",
+            "pythonVersion": "3.11"
+        }
+    ]
+}
diff --git a/UnityMcpBridge/UnityMcpServer~/src/server_version.txt b/UnityMcpBridge/UnityMcpServer~/src/server_version.txt
index cb2b00e4..94ff29cc 100644
--- a/UnityMcpBridge/UnityMcpServer~/src/server_version.txt
+++ b/UnityMcpBridge/UnityMcpServer~/src/server_version.txt
@@ -1 +1 @@
-3.0.1
+3.1.1
diff --git a/UnityMcpBridge/UnityMcpServer~/src/tools/__init__.py b/UnityMcpBridge/UnityMcpServer~/src/tools/__init__.py
index 2bf711df..43b53096 100644
--- a/UnityMcpBridge/UnityMcpServer~/src/tools/__init__.py
+++ b/UnityMcpBridge/UnityMcpServer~/src/tools/__init__.py
@@ -1,3 +1,5 @@
+import logging
+from .manage_script_edits import register_manage_script_edits_tools
 from .manage_script import register_manage_script_tools
 from .manage_scene import register_manage_scene_tools
 from .manage_editor import register_manage_editor_tools
@@ -6,10 +8,15 @@
 from .manage_shader import register_manage_shader_tools
 from .read_console import register_read_console_tools
 from .execute_menu_item import register_execute_menu_item_tools
+from .resource_tools import register_resource_tools
+
+logger = logging.getLogger("mcp-for-unity-server")

 def register_all_tools(mcp):
     """Register all refactored tools with the MCP server."""
-    print("Registering MCP for Unity Server refactored tools...")
+    # Prefer the surgical edits tool so LLMs discover it first
+    logger.info("Registering MCP for Unity Server refactored tools...")
+    register_manage_script_edits_tools(mcp)
     register_manage_script_tools(mcp)
     register_manage_scene_tools(mcp)
     register_manage_editor_tools(mcp)
@@ -18,4 +25,6 @@ def register_all_tools(mcp):
     register_manage_shader_tools(mcp)
     register_read_console_tools(mcp)
     register_execute_menu_item_tools(mcp)
-    print("MCP for Unity Server tool registration complete.")
+    # Expose resource wrappers as normal tools so IDEs without resources primitive can use them
+    register_resource_tools(mcp)
+    logger.info("MCP for Unity Server tool registration complete.")
diff --git a/UnityMcpBridge/UnityMcpServer~/src/tools/manage_asset.py b/UnityMcpBridge/UnityMcpServer~/src/tools/manage_asset.py
index 19ac0c2e..ccafb047 100644
--- a/UnityMcpBridge/UnityMcpServer~/src/tools/manage_asset.py
+++ b/UnityMcpBridge/UnityMcpServer~/src/tools/manage_asset.py
@@ -76,4 +76,4 @@ async def manage_asset(
     # Use centralized async retry helper to avoid blocking the event loop
     result = await async_send_command_with_retry("manage_asset", params_dict, loop=loop)
     # Return the result obtained from Unity
-    return result if isinstance(result, dict) else {"success": False, "message": str(result)}
\ No newline at end of file
+    return result if isinstance(result, dict) else {"success": False, "message": str(result)}
diff --git a/UnityMcpBridge/UnityMcpServer~/src/tools/manage_script.py b/UnityMcpBridge/UnityMcpServer~/src/tools/manage_script.py
index a41fb85c..9aad1249 100644
--- a/UnityMcpBridge/UnityMcpServer~/src/tools/manage_script.py
+++ b/UnityMcpBridge/UnityMcpServer~/src/tools/manage_script.py
@@ -1,29 +1,414 @@
 from mcp.server.fastmcp import FastMCP, Context
-from typing import Dict, Any
-from unity_connection import get_unity_connection, send_command_with_retry
-from config import config
-import time
-import os
+from typing import Dict, Any, List
+from unity_connection import send_command_with_retry
 import base64
+import os
+from urllib.parse import urlparse, unquote
+

 def register_manage_script_tools(mcp: FastMCP):
     """Register all script management tools with the MCP server."""

-    @mcp.tool()
+    def _split_uri(uri: str) -> tuple[str, str]:
+        """Split an incoming URI or path into (name, directory) suitable for Unity.
+
+        Rules:
+        - unity://path/Assets/... → keep as Assets-relative (after decode/normalize)
+        - file://... → percent-decode, normalize, strip host and leading slashes,
+          then, if any 'Assets' segment exists, return path relative to that 'Assets' root.
+          Otherwise, fall back to original name/dir behavior.
+        - plain paths → decode/normalize separators; if they contain an 'Assets' segment,
+          return relative to 'Assets'.
+        """
+        raw_path: str
+        if uri.startswith("unity://path/"):
+            raw_path = uri[len("unity://path/") :]
+        elif uri.startswith("file://"):
+            parsed = urlparse(uri)
+            host = (parsed.netloc or "").strip()
+            p = parsed.path or ""
+            # UNC: file://server/share/... -> //server/share/...
+            if host and host.lower() != "localhost":
+                p = f"//{host}{p}"
+            # Use percent-decoded path, preserving leading slashes
+            raw_path = unquote(p)
+        else:
+            raw_path = uri
+
+        # Percent-decode any residual encodings and normalize separators
+        raw_path = unquote(raw_path).replace("\\", "/")
+        # Strip leading slash only for Windows drive-letter forms like "/C:/..."
+        if os.name == "nt" and len(raw_path) >= 3 and raw_path[0] == "/" and raw_path[2] == ":":
+            raw_path = raw_path[1:]
+
+        # Normalize path (collapse ../, ./)
+        norm = os.path.normpath(raw_path).replace("\\", "/")
+
+        # If an 'Assets' segment exists, compute path relative to it (case-insensitive)
+        parts = [p for p in norm.split("/") if p not in ("", ".")]
+        idx = next((i for i, seg in enumerate(parts) if seg.lower() == "assets"), None)
+        assets_rel = "/".join(parts[idx:]) if idx is not None else None
+
+        effective_path = assets_rel if assets_rel else norm
+        # For POSIX absolute paths outside Assets, drop the leading '/'
+        # to return a clean relative-like directory (e.g., '/tmp' -> 'tmp').
+        if effective_path.startswith("/"):
+            effective_path = effective_path[1:]
+
+        name = os.path.splitext(os.path.basename(effective_path))[0]
+        directory = os.path.dirname(effective_path)
+        return name, directory
+
+    @mcp.tool(description=(
+        "Apply small text edits to a C# script identified by URI.\n\n"
+        "⚠️ IMPORTANT: This tool replaces EXACT character positions. Always verify content at target lines/columns BEFORE editing!\n"
+        "Common mistakes:\n"
+        "- Assuming what's on a line without checking\n"
+        "- Using wrong line numbers (they're 1-indexed)\n"
+        "- Miscounting column positions (also 1-indexed, tabs count as 1)\n\n"
+        "RECOMMENDED WORKFLOW:\n"
+        "1) First call resources/read with start_line/line_count to verify exact content\n"
+        "2) Count columns carefully (or use find_in_file to locate patterns)\n"
+        "3) Apply your edit with precise coordinates\n"
+        "4) Consider script_apply_edits with anchors for safer pattern-based replacements\n\n"
+        "Args:\n"
+        "- uri: unity://path/Assets/... or file://... or Assets/...\n"
+        "- edits: list of {startLine,startCol,endLine,endCol,newText} (1-indexed!)\n"
+        "- precondition_sha256: optional SHA of current file (prevents concurrent edit conflicts)\n\n"
+        "Notes:\n"
+        "- Path must resolve under Assets/\n"
+        "- For method/class operations, use script_apply_edits (safer, structured edits)\n"
+        "- For pattern-based replacements, consider anchor operations in script_apply_edits\n"
+    ))
+    def apply_text_edits(
+        ctx: Context,
+        uri: str,
+        edits: List[Dict[str, Any]],
+        precondition_sha256: str | None = None,
+        strict: bool | None = None,
+        options: Dict[str, Any] | None = None,
+    ) -> Dict[str, Any]:
+        """Apply small text edits to a C# script identified by URI."""
+        name, directory = _split_uri(uri)
+
+        # Normalize common aliases/misuses for resilience:
+        # - Accept LSP-style range objects: {range:{start:{line,character}, end:{...}}, newText|text}
+        # - Accept index ranges as a 2-int array: {range:[startIndex,endIndex], text}
+        # If normalization is required, read current contents to map indices -> 1-based line/col.
+ def _needs_normalization(arr: List[Dict[str, Any]]) -> bool: + for e in arr or []: + if ("startLine" not in e) or ("startCol" not in e) or ("endLine" not in e) or ("endCol" not in e) or ("newText" not in e and "text" in e): + return True + return False + + normalized_edits: List[Dict[str, Any]] = [] + warnings: List[str] = [] + if _needs_normalization(edits): + # Read file to support index->line/col conversion when needed + read_resp = send_command_with_retry("manage_script", { + "action": "read", + "name": name, + "path": directory, + }) + if not (isinstance(read_resp, dict) and read_resp.get("success")): + return read_resp if isinstance(read_resp, dict) else {"success": False, "message": str(read_resp)} + data = read_resp.get("data", {}) + contents = data.get("contents") + if not contents and data.get("contentsEncoded"): + try: + contents = base64.b64decode(data.get("encodedContents", "").encode("utf-8")).decode("utf-8", "replace") + except Exception: + contents = contents or "" + + # Helper to map 0-based character index to 1-based line/col + def line_col_from_index(idx: int) -> tuple[int, int]: + if idx <= 0: + return 1, 1 + # Count lines up to idx and position within line + nl_count = contents.count("\n", 0, idx) + line = nl_count + 1 + last_nl = contents.rfind("\n", 0, idx) + col = (idx - (last_nl + 1)) + 1 if last_nl >= 0 else idx + 1 + return line, col + + for e in edits or []: + e2 = dict(e) + # Map text->newText if needed + if "newText" not in e2 and "text" in e2: + e2["newText"] = e2.pop("text") + + if "startLine" in e2 and "startCol" in e2 and "endLine" in e2 and "endCol" in e2: + # Guard: explicit fields must be 1-based. 
+ zero_based = False + for k in ("startLine","startCol","endLine","endCol"): + try: + if int(e2.get(k, 1)) < 1: + zero_based = True + except Exception: + pass + if zero_based: + if strict: + return {"success": False, "code": "zero_based_explicit_fields", "message": "Explicit line/col fields are 1-based; received zero-based.", "data": {"normalizedEdits": normalized_edits}} + # Normalize by clamping to 1 and warn + for k in ("startLine","startCol","endLine","endCol"): + try: + if int(e2.get(k, 1)) < 1: + e2[k] = 1 + except Exception: + pass + warnings.append("zero_based_explicit_fields_normalized") + normalized_edits.append(e2) + continue + + rng = e2.get("range") + if isinstance(rng, dict): + # LSP style: 0-based + s = rng.get("start", {}) + t = rng.get("end", {}) + e2["startLine"] = int(s.get("line", 0)) + 1 + e2["startCol"] = int(s.get("character", 0)) + 1 + e2["endLine"] = int(t.get("line", 0)) + 1 + e2["endCol"] = int(t.get("character", 0)) + 1 + e2.pop("range", None) + normalized_edits.append(e2) + continue + if isinstance(rng, (list, tuple)) and len(rng) == 2: + try: + a = int(rng[0]) + b = int(rng[1]) + if b < a: + a, b = b, a + sl, sc = line_col_from_index(a) + el, ec = line_col_from_index(b) + e2["startLine"] = sl + e2["startCol"] = sc + e2["endLine"] = el + e2["endCol"] = ec + e2.pop("range", None) + normalized_edits.append(e2) + continue + except Exception: + pass + # Could not normalize this edit + return { + "success": False, + "code": "missing_field", + "message": "apply_text_edits requires startLine/startCol/endLine/endCol/newText or a normalizable 'range'", + "data": {"expected": ["startLine","startCol","endLine","endCol","newText"], "got": e} + } + else: + # Even when edits appear already in explicit form, validate 1-based coordinates. 
+ normalized_edits = [] + for e in edits or []: + e2 = dict(e) + has_all = all(k in e2 for k in ("startLine","startCol","endLine","endCol")) + if has_all: + zero_based = False + for k in ("startLine","startCol","endLine","endCol"): + try: + if int(e2.get(k, 1)) < 1: + zero_based = True + except Exception: + pass + if zero_based: + if strict: + return {"success": False, "code": "zero_based_explicit_fields", "message": "Explicit line/col fields are 1-based; received zero-based.", "data": {"normalizedEdits": [e2]}} + for k in ("startLine","startCol","endLine","endCol"): + try: + if int(e2.get(k, 1)) < 1: + e2[k] = 1 + except Exception: + pass + if "zero_based_explicit_fields_normalized" not in warnings: + warnings.append("zero_based_explicit_fields_normalized") + normalized_edits.append(e2) + + # Preflight: detect overlapping ranges among normalized line/col spans + def _pos_tuple(e: Dict[str, Any], key_start: bool) -> tuple[int, int]: + return ( + int(e.get("startLine", 1)) if key_start else int(e.get("endLine", 1)), + int(e.get("startCol", 1)) if key_start else int(e.get("endCol", 1)), + ) + + def _le(a: tuple[int, int], b: tuple[int, int]) -> bool: + return a[0] < b[0] or (a[0] == b[0] and a[1] <= b[1]) + + # Consider only true replace ranges (non-zero length). Pure insertions (zero-width) don't overlap. 
+ spans = [] + for e in normalized_edits or []: + try: + s = _pos_tuple(e, True) + t = _pos_tuple(e, False) + if s != t: + spans.append((s, t)) + except Exception: + # If coordinates missing or invalid, let the server validate later + pass + + if spans: + spans_sorted = sorted(spans, key=lambda p: (p[0][0], p[0][1])) + for i in range(1, len(spans_sorted)): + prev_end = spans_sorted[i-1][1] + curr_start = spans_sorted[i][0] + # Overlap if prev_end > curr_start (strict), i.e., not prev_end <= curr_start + if not _le(prev_end, curr_start): + conflicts = [{ + "startA": {"line": spans_sorted[i-1][0][0], "col": spans_sorted[i-1][0][1]}, + "endA": {"line": spans_sorted[i-1][1][0], "col": spans_sorted[i-1][1][1]}, + "startB": {"line": spans_sorted[i][0][0], "col": spans_sorted[i][0][1]}, + "endB": {"line": spans_sorted[i][1][0], "col": spans_sorted[i][1][1]}, + }] + return {"success": False, "code": "overlap", "data": {"status": "overlap", "conflicts": conflicts}} + + # Note: Do not auto-compute precondition if missing; callers should supply it + # via mcp__unity__get_sha or a prior read. This avoids hidden extra calls and + # preserves existing call-count expectations in clients/tests. 
+ + # Default options: for multi-span batches, prefer atomic to avoid mid-apply imbalance + opts: Dict[str, Any] = dict(options or {}) + try: + if len(normalized_edits) > 1 and "applyMode" not in opts: + opts["applyMode"] = "atomic" + except Exception: + pass + # Support optional debug preview for span-by-span simulation without write + if opts.get("debug_preview"): + try: + import difflib + # Apply locally to preview final result + lines = [] + # Build an indexable original from a read if we normalized from read; otherwise skip + prev = "" + # We cannot guarantee file contents here without a read; return normalized spans only + return { + "success": True, + "message": "Preview only (no write)", + "data": { + "normalizedEdits": normalized_edits, + "preview": True + } + } + except Exception as e: + return {"success": False, "code": "preview_failed", "message": f"debug_preview failed: {e}", "data": {"normalizedEdits": normalized_edits}} + + params = { + "action": "apply_text_edits", + "name": name, + "path": directory, + "edits": normalized_edits, + "precondition_sha256": precondition_sha256, + "options": opts, + } + params = {k: v for k, v in params.items() if v is not None} + resp = send_command_with_retry("manage_script", params) + if isinstance(resp, dict): + data = resp.setdefault("data", {}) + data.setdefault("normalizedEdits", normalized_edits) + if warnings: + data.setdefault("warnings", warnings) + return resp + return {"success": False, "message": str(resp)} + + @mcp.tool(description=( + "Create a new C# script at the given project path.\n\n" + "Args: path (e.g., 'Assets/Scripts/My.cs'), contents (string), script_type, namespace.\n" + "Rules: path must be under Assets/. 
Contents will be Base64-encoded over transport.\n" + )) + def create_script( + ctx: Context, + path: str, + contents: str = "", + script_type: str | None = None, + namespace: str | None = None, + ) -> Dict[str, Any]: + """Create a new C# script at the given path.""" + name = os.path.splitext(os.path.basename(path))[0] + directory = os.path.dirname(path) + # Local validation to avoid round-trips on obviously bad input + norm_path = os.path.normpath((path or "").replace("\\", "/")).replace("\\", "/") + if not directory or directory.split("/")[0].lower() != "assets": + return {"success": False, "code": "path_outside_assets", "message": f"path must be under 'Assets/'; got '{path}'."} + if ".." in norm_path.split("/") or norm_path.startswith("/"): + return {"success": False, "code": "bad_path", "message": "path must not contain traversal or be absolute."} + if not name: + return {"success": False, "code": "bad_path", "message": "path must include a script file name."} + if not norm_path.lower().endswith(".cs"): + return {"success": False, "code": "bad_extension", "message": "script file must end with .cs."} + params: Dict[str, Any] = { + "action": "create", + "name": name, + "path": directory, + "namespace": namespace, + "scriptType": script_type, + } + if contents: + params["encodedContents"] = base64.b64encode(contents.encode("utf-8")).decode("utf-8") + params["contentsEncoded"] = True + params = {k: v for k, v in params.items() if v is not None} + resp = send_command_with_retry("manage_script", params) + return resp if isinstance(resp, dict) else {"success": False, "message": str(resp)} + + @mcp.tool(description=( + "Delete a C# script by URI or Assets-relative path.\n\n" + "Args: uri (unity://path/... or file://... 
or Assets/...).\n" + "Rules: Target must resolve under Assets/.\n" + )) + def delete_script(ctx: Context, uri: str) -> Dict[str, Any]: + """Delete a C# script by URI.""" + name, directory = _split_uri(uri) + if not directory or directory.split("/")[0].lower() != "assets": + return {"success": False, "code": "path_outside_assets", "message": "URI must resolve under 'Assets/'."} + params = {"action": "delete", "name": name, "path": directory} + resp = send_command_with_retry("manage_script", params) + return resp if isinstance(resp, dict) else {"success": False, "message": str(resp)} + + @mcp.tool(description=( + "Validate a C# script and return diagnostics.\n\n" + "Args: uri, level=('basic'|'standard').\n" + "- basic: quick syntax checks.\n" + "- standard: deeper checks (performance hints, common pitfalls).\n" + )) + def validate_script( + ctx: Context, uri: str, level: str = "basic" + ) -> Dict[str, Any]: + """Validate a C# script and return diagnostics.""" + name, directory = _split_uri(uri) + if not directory or directory.split("/")[0].lower() != "assets": + return {"success": False, "code": "path_outside_assets", "message": "URI must resolve under 'Assets/'."} + if level not in ("basic", "standard"): + return {"success": False, "code": "bad_level", "message": "level must be 'basic' or 'standard'."} + params = { + "action": "validate", + "name": name, + "path": directory, + "level": level, + } + resp = send_command_with_retry("manage_script", params) + return resp if isinstance(resp, dict) else {"success": False, "message": str(resp)} + + @mcp.tool(description=( + "Compatibility router for legacy script operations.\n\n" + "Actions: create|read|delete (update is routed to apply_text_edits with precondition).\n" + "Args: name (no .cs), path (Assets/...), contents (for create), script_type, namespace.\n" + "Notes: prefer apply_text_edits (ranges) or script_apply_edits (structured) for edits.\n" + )) def manage_script( ctx: Context, action: str, name: str, path: str, 
- contents: str, - script_type: str, - namespace: str + contents: str = "", + script_type: str | None = None, + namespace: str | None = None, ) -> Dict[str, Any]: - """Manages C# scripts in Unity (create, read, update, delete). - Make reference variables public for easier access in the Unity Editor. + """Compatibility router for legacy script operations. + + IMPORTANT: + - Direct file reads should use resources/read. + - Edits should use apply_text_edits. Args: - action: Operation ('create', 'read', 'update', 'delete'). + action: Operation ('create', 'read', 'delete'). name: Script name (no .cs extension). path: Asset path (default: "Assets/"). contents: C# code for 'create'/'update'. @@ -34,42 +419,143 @@ def manage_script( Dictionary with results ('success', 'message', 'data'). """ try: + # Graceful migration for legacy 'update': route to apply_text_edits (whole-file replace) + if action == 'update': + try: + # 1) Read current contents to compute end range and precondition + read_resp = send_command_with_retry("manage_script", { + "action": "read", + "name": name, + "path": path, + }) + if not (isinstance(read_resp, dict) and read_resp.get("success")): + return {"success": False, "code": "deprecated_update", "message": "Use apply_text_edits; automatic migration failed to read current file."} + data = read_resp.get("data", {}) + current = data.get("contents") + if not current and data.get("contentsEncoded"): + current = base64.b64decode(data.get("encodedContents", "").encode("utf-8")).decode("utf-8", "replace") + if current is None: + return {"success": False, "code": "deprecated_update", "message": "Use apply_text_edits; current file read returned no contents."} + + # 2) Compute whole-file range (1-based, end exclusive) and SHA + import hashlib as _hashlib + old_lines = current.splitlines(keepends=True) + end_line = len(old_lines) + 1 + sha = _hashlib.sha256(current.encode("utf-8")).hexdigest() + + # 3) Apply single whole-file text edit with provided 'contents' + 
edits = [{ + "startLine": 1, + "startCol": 1, + "endLine": end_line, + "endCol": 1, + "newText": contents or "", + }] + route_params = { + "action": "apply_text_edits", + "name": name, + "path": path, + "edits": edits, + "precondition_sha256": sha, + "options": {"refresh": "immediate", "validate": "standard"}, + } + # Preflight size vs. default cap (256 KiB) to avoid opaque server errors + try: + import json as _json + payload_bytes = len(_json.dumps({"edits": edits}, ensure_ascii=False).encode("utf-8")) + if payload_bytes > 256 * 1024: + return {"success": False, "code": "payload_too_large", "message": f"Edit payload {payload_bytes} bytes exceeds 256 KiB cap; try structured ops or chunking."} + except Exception: + pass + routed = send_command_with_retry("manage_script", route_params) + if isinstance(routed, dict): + routed.setdefault("message", "Routed legacy update to apply_text_edits") + return routed + return {"success": False, "message": str(routed)} + except Exception as e: + return {"success": False, "code": "deprecated_update", "message": f"Use apply_text_edits; migration error: {e}"} + # Prepare parameters for Unity params = { "action": action, "name": name, "path": path, "namespace": namespace, - "scriptType": script_type + "scriptType": script_type, } - + # Base64 encode the contents if they exist to avoid JSON escaping issues - if contents is not None: - if action in ['create', 'update']: - # Encode content for safer transmission + if contents: + if action == 'create': params["encodedContents"] = base64.b64encode(contents.encode('utf-8')).decode('utf-8') params["contentsEncoded"] = True else: params["contents"] = contents - - # Remove None values so they don't get sent as null + params = {k: v for k, v in params.items() if v is not None} - # Send command via centralized retry helper response = send_command_with_retry("manage_script", params) - - # Process response from Unity - if isinstance(response, dict) and response.get("success"): - # If the 
response contains base64 encoded content, decode it - if response.get("data", {}).get("contentsEncoded"): - decoded_contents = base64.b64decode(response["data"]["encodedContents"]).decode('utf-8') - response["data"]["contents"] = decoded_contents - del response["data"]["encodedContents"] - del response["data"]["contentsEncoded"] - - return {"success": True, "message": response.get("message", "Operation successful."), "data": response.get("data")} - return response if isinstance(response, dict) else {"success": False, "message": str(response)} + if isinstance(response, dict): + if response.get("success"): + if response.get("data", {}).get("contentsEncoded"): + decoded_contents = base64.b64decode(response["data"]["encodedContents"]).decode('utf-8') + response["data"]["contents"] = decoded_contents + del response["data"]["encodedContents"] + del response["data"]["contentsEncoded"] + + return { + "success": True, + "message": response.get("message", "Operation successful."), + "data": response.get("data"), + } + return response + + return {"success": False, "message": str(response)} + + except Exception as e: + return { + "success": False, + "message": f"Python error managing script: {str(e)}", + } + + @mcp.tool(description=( + "Get manage_script capabilities (supported ops, limits, and guards).\n\n" + "Returns:\n- ops: list of supported structured ops\n- text_ops: list of supported text ops\n- max_edit_payload_bytes: server edit payload cap\n- guards: header/using guard enabled flag\n" + )) + def manage_script_capabilities(ctx: Context) -> Dict[str, Any]: + try: + # Keep in sync with server/Editor ManageScript implementation + ops = [ + "replace_class","delete_class","replace_method","delete_method", + "insert_method","anchor_insert","anchor_delete","anchor_replace" + ] + text_ops = ["replace_range","regex_replace","prepend","append"] + # Match ManageScript.MaxEditPayloadBytes if exposed; hardcode a sensible default fallback + max_edit_payload_bytes = 256 * 1024 + 
guards = {"using_guard": True} + extras = {"get_sha": True} + return {"success": True, "data": { + "ops": ops, + "text_ops": text_ops, + "max_edit_payload_bytes": max_edit_payload_bytes, + "guards": guards, + "extras": extras, + }} + except Exception as e: + return {"success": False, "error": f"capabilities error: {e}"} + + @mcp.tool(description=( + "Get SHA256 and metadata for a Unity C# script without returning file contents.\n\n" + "Args: uri (unity://path/Assets/... or file://... or Assets/...).\n" + "Returns: {sha256, lengthBytes, lastModifiedUtc, uri, path}." + )) + def get_sha(ctx: Context, uri: str) -> Dict[str, Any]: + """Return SHA256 and basic metadata for a script.""" + try: + name, directory = _split_uri(uri) + params = {"action": "get_sha", "name": name, "path": directory} + resp = send_command_with_retry("manage_script", params) + return resp if isinstance(resp, dict) else {"success": False, "message": str(resp)} except Exception as e: - # Handle Python-side errors (e.g., connection issues) - return {"success": False, "message": f"Python error managing script: {str(e)}"} \ No newline at end of file + return {"success": False, "message": f"get_sha error: {e}"} diff --git a/UnityMcpBridge/UnityMcpServer~/src/tools/manage_script_edits.py b/UnityMcpBridge/UnityMcpServer~/src/tools/manage_script_edits.py new file mode 100644 index 00000000..fc50be33 --- /dev/null +++ b/UnityMcpBridge/UnityMcpServer~/src/tools/manage_script_edits.py @@ -0,0 +1,833 @@ +from mcp.server.fastmcp import FastMCP, Context +from typing import Dict, Any, List, Tuple +import base64 +import re +from unity_connection import send_command_with_retry + + +def _apply_edits_locally(original_text: str, edits: List[Dict[str, Any]]) -> str: + text = original_text + for edit in edits or []: + op = ( + (edit.get("op") + or edit.get("operation") + or edit.get("type") + or edit.get("mode") + or "") + .strip() + .lower() + ) + + if not op: + allowed = "anchor_insert, prepend, append, 
replace_range, regex_replace" + raise RuntimeError( + f"op is required; allowed: {allowed}. Use 'op' (aliases accepted: type/mode/operation)." + ) + + if op == "prepend": + prepend_text = edit.get("text", "") + text = (prepend_text if prepend_text.endswith("\n") else prepend_text + "\n") + text + elif op == "append": + append_text = edit.get("text", "") + if not text.endswith("\n"): + text += "\n" + text += append_text + if not text.endswith("\n"): + text += "\n" + elif op == "anchor_insert": + anchor = edit.get("anchor", "") + position = (edit.get("position") or "before").lower() + insert_text = edit.get("text", "") + flags = re.MULTILINE | (re.IGNORECASE if edit.get("ignore_case") else 0) + m = re.search(anchor, text, flags) + if not m: + if edit.get("allow_noop", True): + continue + raise RuntimeError(f"anchor not found: {anchor}") + idx = m.start() if position == "before" else m.end() + text = text[:idx] + insert_text + text[idx:] + elif op == "replace_range": + start_line = int(edit.get("startLine", 1)) + start_col = int(edit.get("startCol", 1)) + end_line = int(edit.get("endLine", start_line)) + end_col = int(edit.get("endCol", 1)) + replacement = edit.get("text", "") + lines = text.splitlines(keepends=True) + max_line = len(lines) + 1 # 1-based, exclusive end + if (start_line < 1 or end_line < start_line or end_line > max_line + or start_col < 1 or end_col < 1): + raise RuntimeError("replace_range out of bounds") + def index_of(line: int, col: int) -> int: + if line <= len(lines): + return sum(len(l) for l in lines[: line - 1]) + (col - 1) + return sum(len(l) for l in lines) + a = index_of(start_line, start_col) + b = index_of(end_line, end_col) + text = text[:a] + replacement + text[b:] + elif op == "regex_replace": + pattern = edit.get("pattern", "") + repl = edit.get("replacement", "") + # Translate $n backrefs (our input) to Python \g + repl_py = re.sub(r"\$(\d+)", r"\\g<\1>", repl) + count = int(edit.get("count", 0)) # 0 = replace all + flags = 
re.MULTILINE + if edit.get("ignore_case"): + flags |= re.IGNORECASE + text = re.sub(pattern, repl_py, text, count=count, flags=flags) + else: + allowed = "anchor_insert, prepend, append, replace_range, regex_replace" + raise RuntimeError(f"unknown edit op: {op}; allowed: {allowed}. Use 'op' (aliases accepted: type/mode/operation).") + return text + + +def _infer_class_name(script_name: str) -> str: + # Default to script name as class name (common Unity pattern) + return (script_name or "").strip() + + +def _extract_code_after(keyword: str, request: str) -> str: + # Deprecated with NL removal; retained as no-op for compatibility + idx = request.lower().find(keyword) + if idx >= 0: + return request[idx + len(keyword):].strip() + return "" +def _is_structurally_balanced(text: str) -> bool: + """Lightweight delimiter balance check for braces/paren/brackets. + Not a full parser; used to preflight destructive regex deletes. + """ + brace = paren = bracket = 0 + in_str = in_chr = False + esc = False + i = 0 + n = len(text) + while i < n: + c = text[i] + nxt = text[i+1] if i+1 < n else '' + if in_str: + if not esc and c == '"': + in_str = False + esc = (not esc and c == '\\') + i += 1 + continue + if in_chr: + if not esc and c == "'": + in_chr = False + esc = (not esc and c == '\\') + i += 1 + continue + # comments + if c == '/' and nxt == '/': + # skip to EOL + i = text.find('\n', i) + if i == -1: + break + i += 1 + continue + if c == '/' and nxt == '*': + j = text.find('*/', i+2) + i = (j + 2) if j != -1 else n + continue + if c == '"': + in_str = True; esc = False; i += 1; continue + if c == "'": + in_chr = True; esc = False; i += 1; continue + if c == '{': brace += 1 + elif c == '}': brace -= 1 + elif c == '(': paren += 1 + elif c == ')': paren -= 1 + elif c == '[': bracket += 1 + elif c == ']': bracket -= 1 + if brace < 0 or paren < 0 or bracket < 0: + return False + i += 1 + return brace == 0 and paren == 0 and bracket == 0 + + + +def _normalize_script_locator(name: 
str, path: str) -> Tuple[str, str]: + """Best-effort normalization of script "name" and "path". + + Accepts any of: + - name = "SmartReach", path = "Assets/Scripts/Interaction" + - name = "SmartReach.cs", path = "Assets/Scripts/Interaction" + - name = "Assets/Scripts/Interaction/SmartReach.cs", path = "" + - path = "Assets/Scripts/Interaction/SmartReach.cs" (name empty) + - name or path using uri prefixes: unity://path/..., file://... + - accidental duplicates like "Assets/.../SmartReach.cs/SmartReach.cs" + + Returns (name_without_extension, directory_path_under_Assets). + """ + n = (name or "").strip() + p = (path or "").strip() + + def strip_prefix(s: str) -> str: + if s.startswith("unity://path/"): + return s[len("unity://path/"):] + if s.startswith("file://"): + return s[len("file://"):] + return s + + def collapse_duplicate_tail(s: str) -> str: + # Collapse trailing "/X.cs/X.cs" to "/X.cs" + parts = s.split("/") + if len(parts) >= 2 and parts[-1] == parts[-2]: + parts = parts[:-1] + return "/".join(parts) + + # Prefer a full path if provided in either field + candidate = "" + for v in (n, p): + v2 = strip_prefix(v) + if v2.endswith(".cs") or v2.startswith("Assets/"): + candidate = v2 + break + + if candidate: + candidate = collapse_duplicate_tail(candidate) + # If a directory was passed in path and file in name, join them + if not candidate.endswith(".cs") and n.endswith(".cs"): + v2 = strip_prefix(n) + candidate = (candidate.rstrip("/") + "/" + v2.split("/")[-1]) + if candidate.endswith(".cs"): + parts = candidate.split("/") + file_name = parts[-1] + dir_path = "/".join(parts[:-1]) if len(parts) > 1 else "Assets" + base = file_name[:-3] if file_name.lower().endswith(".cs") else file_name + return base, dir_path + + # Fall back: remove extension from name if present and return given path + base_name = n[:-3] if n.lower().endswith(".cs") else n + return base_name, (p or "Assets") + + +def _with_norm(resp: Dict[str, Any] | Any, edits: List[Dict[str, Any]], 
routing: str | None = None) -> Dict[str, Any] | Any: + if not isinstance(resp, dict): + return resp + data = resp.setdefault("data", {}) + data.setdefault("normalizedEdits", edits) + if routing: + data["routing"] = routing + return resp + + +def _err(code: str, message: str, *, expected: Dict[str, Any] | None = None, rewrite: Dict[str, Any] | None = None, + normalized: List[Dict[str, Any]] | None = None, routing: str | None = None, extra: Dict[str, Any] | None = None) -> Dict[str, Any]: + payload: Dict[str, Any] = {"success": False, "code": code, "message": message} + data: Dict[str, Any] = {} + if expected: + data["expected"] = expected + if rewrite: + data["rewrite_suggestion"] = rewrite + if normalized is not None: + data["normalizedEdits"] = normalized + if routing: + data["routing"] = routing + if extra: + data.update(extra) + if data: + payload["data"] = data + return payload + +# Natural-language parsing removed; clients should send structured edits. + + +def register_manage_script_edits_tools(mcp: FastMCP): + @mcp.tool(description=( + "Structured C# edits (methods/classes) with safer boundaries — prefer this over raw text.\n\n" + "Best practices:\n" + "- Prefer anchor_* ops for pattern-based insert/replace near stable markers\n" + "- Use replace_method/delete_method for whole-method changes (keeps signatures balanced)\n" + "- Avoid whole-file regex deletes; validators will guard unbalanced braces\n" + "- For tail insertions, prefer anchor/regex_replace on final brace (class closing)\n" + "- Pass options.validate='standard' for structural checks; 'relaxed' for interior-only edits\n\n" + "Canonical fields (use these exact keys):\n" + "- op: replace_method | insert_method | delete_method | anchor_insert | anchor_delete | anchor_replace\n" + "- className: string (defaults to 'name' if omitted on method/class ops)\n" + "- methodName: string (required for replace_method, delete_method)\n" + "- replacement: string (required for replace_method, insert_method)\n" + 
"- position: start | end | after | before (insert_method only)\n" + "- afterMethodName / beforeMethodName: string (required when position='after'/'before')\n" + "- anchor: regex string (for anchor_* ops)\n" + "- text: string (for anchor_insert/anchor_replace)\n\n" + "Do NOT use: new_method, anchor_method, content, newText (aliases accepted but normalized).\n\n" + "Examples:\n" + "1) Replace a method:\n" + "{ 'name':'SmartReach','path':'Assets/Scripts/Interaction','edits':[\n" + " { 'op':'replace_method','className':'SmartReach','methodName':'HasTarget',\n" + " 'replacement':'public bool HasTarget(){ return currentTarget!=null; }' }\n" + "], 'options':{'validate':'standard','refresh':'immediate'} }\n\n" + "2) Insert a method after another:\n" + "{ 'name':'SmartReach','path':'Assets/Scripts/Interaction','edits':[\n" + " { 'op':'insert_method','className':'SmartReach','replacement':'public void PrintSeries(){ Debug.Log(seriesName); }',\n" + " 'position':'after','afterMethodName':'GetCurrentTarget' }\n" + "] }\n" + )) + def script_apply_edits( + ctx: Context, + name: str, + path: str, + edits: List[Dict[str, Any]], + options: Dict[str, Any] | None = None, + script_type: str = "MonoBehaviour", + namespace: str = "", + ) -> Dict[str, Any]: + # Normalize locator first so downstream calls target the correct script file. + name, path = _normalize_script_locator(name, path) + + # No NL path: clients must provide structured edits in 'edits'. 
+ + # Normalize unsupported or aliased ops to known structured/text paths + def _unwrap_and_alias(edit: Dict[str, Any]) -> Dict[str, Any]: + # Unwrap single-key wrappers like {"replace_method": {...}} + for wrapper_key in ( + "replace_method","insert_method","delete_method", + "replace_class","delete_class", + "anchor_insert","anchor_replace","anchor_delete", + ): + if wrapper_key in edit and isinstance(edit[wrapper_key], dict): + inner = dict(edit[wrapper_key]) + inner["op"] = wrapper_key + edit = inner + break + + e = dict(edit) + op = (e.get("op") or e.get("operation") or e.get("type") or e.get("mode") or "").strip().lower() + if op: + e["op"] = op + + # Common field aliases + if "class_name" in e and "className" not in e: + e["className"] = e.pop("class_name") + if "class" in e and "className" not in e: + e["className"] = e.pop("class") + if "method_name" in e and "methodName" not in e: + e["methodName"] = e.pop("method_name") + # Some clients use a generic 'target' for method name + if "target" in e and "methodName" not in e: + e["methodName"] = e.pop("target") + if "method" in e and "methodName" not in e: + e["methodName"] = e.pop("method") + if "new_content" in e and "replacement" not in e: + e["replacement"] = e.pop("new_content") + if "newMethod" in e and "replacement" not in e: + e["replacement"] = e.pop("newMethod") + if "new_method" in e and "replacement" not in e: + e["replacement"] = e.pop("new_method") + if "content" in e and "replacement" not in e: + e["replacement"] = e.pop("content") + if "after" in e and "afterMethodName" not in e: + e["afterMethodName"] = e.pop("after") + if "after_method" in e and "afterMethodName" not in e: + e["afterMethodName"] = e.pop("after_method") + if "before" in e and "beforeMethodName" not in e: + e["beforeMethodName"] = e.pop("before") + if "before_method" in e and "beforeMethodName" not in e: + e["beforeMethodName"] = e.pop("before_method") + # anchor_method → before/after based on position (default after) + if 
"anchor_method" in e: + anchor = e.pop("anchor_method") + pos = (e.get("position") or "after").strip().lower() + if pos == "before" and "beforeMethodName" not in e: + e["beforeMethodName"] = anchor + elif "afterMethodName" not in e: + e["afterMethodName"] = anchor + if "anchorText" in e and "anchor" not in e: + e["anchor"] = e.pop("anchorText") + if "pattern" in e and "anchor" not in e and e.get("op") and e["op"].startswith("anchor_"): + e["anchor"] = e.pop("pattern") + if "newText" in e and "text" not in e: + e["text"] = e.pop("newText") + + # CI compatibility (T‑A/T‑E): + # Accept method-anchored anchor_insert and upgrade to insert_method + # Example incoming shape: + # {"op":"anchor_insert","afterMethodName":"GetCurrentTarget","text":"..."} + if ( + e.get("op") == "anchor_insert" + and not e.get("anchor") + and (e.get("afterMethodName") or e.get("beforeMethodName")) + ): + e["op"] = "insert_method" + if "replacement" not in e: + e["replacement"] = e.get("text", "") + + # LSP-like range edit -> replace_range + if "range" in e and isinstance(e["range"], dict): + rng = e.pop("range") + start = rng.get("start", {}) + end = rng.get("end", {}) + # Convert 0-based to 1-based line/col + e["op"] = "replace_range" + e["startLine"] = int(start.get("line", 0)) + 1 + e["startCol"] = int(start.get("character", 0)) + 1 + e["endLine"] = int(end.get("line", 0)) + 1 + e["endCol"] = int(end.get("character", 0)) + 1 + if "newText" in edit and "text" not in e: + e["text"] = edit.get("newText", "") + return e + + normalized_edits: List[Dict[str, Any]] = [] + for raw in edits or []: + e = _unwrap_and_alias(raw) + op = (e.get("op") or e.get("operation") or e.get("type") or e.get("mode") or "").strip().lower() + + # Default className to script name if missing on structured method/class ops + if op in ("replace_class","delete_class","replace_method","delete_method","insert_method") and not e.get("className"): + e["className"] = name + + # Map common aliases for text ops + if op in 
("text_replace",): + e["op"] = "replace_range" + normalized_edits.append(e) + continue + if op in ("regex_delete",): + e["op"] = "regex_replace" + e.setdefault("text", "") + normalized_edits.append(e) + continue + if op == "regex_replace" and ("replacement" not in e): + if "text" in e: + e["replacement"] = e.get("text", "") + elif "insert" in e or "content" in e: + e["replacement"] = e.get("insert") or e.get("content") or "" + if op == "anchor_insert" and not (e.get("text") or e.get("insert") or e.get("content") or e.get("replacement")): + e["op"] = "anchor_delete" + normalized_edits.append(e) + continue + normalized_edits.append(e) + + edits = normalized_edits + normalized_for_echo = edits + + # Validate required fields and produce machine-parsable hints + def error_with_hint(message: str, expected: Dict[str, Any], suggestion: Dict[str, Any]) -> Dict[str, Any]: + return _err("missing_field", message, expected=expected, rewrite=suggestion, normalized=normalized_for_echo) + + for e in edits or []: + op = e.get("op", "") + if op == "replace_method": + if not e.get("methodName"): + return error_with_hint( + "replace_method requires 'methodName'.", + {"op": "replace_method", "required": ["className", "methodName", "replacement"]}, + {"edits[0].methodName": "HasTarget"} + ) + if not (e.get("replacement") or e.get("text")): + return error_with_hint( + "replace_method requires 'replacement' (inline or base64).", + {"op": "replace_method", "required": ["className", "methodName", "replacement"]}, + {"edits[0].replacement": "public bool X(){ return true; }"} + ) + elif op == "insert_method": + if not (e.get("replacement") or e.get("text")): + return error_with_hint( + "insert_method requires a non-empty 'replacement'.", + {"op": "insert_method", "required": ["className", "replacement"], "position": {"after_requires": "afterMethodName", "before_requires": "beforeMethodName"}}, + {"edits[0].replacement": "public void PrintSeries(){ Debug.Log(\"1,2,3\"); }"} + ) + pos = 
(e.get("position") or "").lower() + if pos == "after" and not e.get("afterMethodName"): + return error_with_hint( + "insert_method with position='after' requires 'afterMethodName'.", + {"op": "insert_method", "position": {"after_requires": "afterMethodName"}}, + {"edits[0].afterMethodName": "GetCurrentTarget"} + ) + if pos == "before" and not e.get("beforeMethodName"): + return error_with_hint( + "insert_method with position='before' requires 'beforeMethodName'.", + {"op": "insert_method", "position": {"before_requires": "beforeMethodName"}}, + {"edits[0].beforeMethodName": "GetCurrentTarget"} + ) + elif op == "delete_method": + if not e.get("methodName"): + return error_with_hint( + "delete_method requires 'methodName'.", + {"op": "delete_method", "required": ["className", "methodName"]}, + {"edits[0].methodName": "PrintSeries"} + ) + elif op in ("anchor_insert", "anchor_replace", "anchor_delete"): + if not e.get("anchor"): + return error_with_hint( + f"{op} requires 'anchor' (regex).", + {"op": op, "required": ["anchor"]}, + {"edits[0].anchor": "(?m)^\\s*public\\s+bool\\s+HasTarget\\s*\\("} + ) + if op in ("anchor_insert", "anchor_replace") and not (e.get("text") or e.get("replacement")): + return error_with_hint( + f"{op} requires 'text'.", + {"op": op, "required": ["anchor", "text"]}, + {"edits[0].text": "/* comment */\n"} + ) + + # Decide routing: structured vs text vs mixed + STRUCT = {"replace_class","delete_class","replace_method","delete_method","insert_method","anchor_delete","anchor_replace","anchor_insert"} + TEXT = {"prepend","append","replace_range","regex_replace"} + ops_set = { (e.get("op") or "").lower() for e in edits or [] } + all_struct = ops_set.issubset(STRUCT) + all_text = ops_set.issubset(TEXT) + mixed = not (all_struct or all_text) + + # If everything is structured (method/class/anchor ops), forward directly to Unity's structured editor. 
+ if all_struct: + opts2 = dict(options or {}) + # Do not force sequential; allow server default (atomic) unless caller requests otherwise + opts2.setdefault("refresh", "immediate") + params_struct: Dict[str, Any] = { + "action": "edit", + "name": name, + "path": path, + "namespace": namespace, + "scriptType": script_type, + "edits": edits, + "options": opts2, + } + resp_struct = send_command_with_retry("manage_script", params_struct) + return _with_norm(resp_struct if isinstance(resp_struct, dict) else {"success": False, "message": str(resp_struct)}, normalized_for_echo, routing="structured") + + # 1) read from Unity + read_resp = send_command_with_retry("manage_script", { + "action": "read", + "name": name, + "path": path, + "namespace": namespace, + "scriptType": script_type, + }) + if not isinstance(read_resp, dict) or not read_resp.get("success"): + return read_resp if isinstance(read_resp, dict) else {"success": False, "message": str(read_resp)} + + data = read_resp.get("data") or read_resp.get("result", {}).get("data") or {} + contents = data.get("contents") + if contents is None and data.get("contentsEncoded") and data.get("encodedContents"): + contents = base64.b64decode(data["encodedContents"]).decode("utf-8") + if contents is None: + return {"success": False, "message": "No contents returned from Unity read."} + + # Optional preview/dry-run: apply locally and return diff without writing + preview = bool((options or {}).get("preview")) + + # If we have a mixed batch (TEXT + STRUCT), apply text first with precondition, then structured + if mixed: + text_edits = [e for e in edits or [] if (e.get("op") or "").lower() in TEXT] + struct_edits = [e for e in edits or [] if (e.get("op") or "").lower() in STRUCT] + try: + base_text = contents + def line_col_from_index(idx: int) -> Tuple[int, int]: + line = base_text.count("\n", 0, idx) + 1 + last_nl = base_text.rfind("\n", 0, idx) + col = (idx - (last_nl + 1)) + 1 if last_nl >= 0 else idx + 1 + return line, col + 
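`line_col_from_index` above converts a 0-based character offset in the buffer into the 1-based line/col coordinates that `apply_text_edits` expects. Extracted as a self-contained function (taking the text explicitly instead of closing over `base_text`), it behaves like this:

```python
def line_col_from_index(text: str, idx: int) -> tuple[int, int]:
    """1-based (line, col) for a 0-based character offset, as in the closure above."""
    line = text.count("\n", 0, idx) + 1          # lines before idx, plus one
    last_nl = text.rfind("\n", 0, idx)           # newline preceding idx, or -1
    col = (idx - (last_nl + 1)) + 1 if last_nl >= 0 else idx + 1
    return line, col
```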
+ at_edits: List[Dict[str, Any]] = [] + import re as _re + for e in text_edits: + opx = (e.get("op") or e.get("operation") or e.get("type") or e.get("mode") or "").strip().lower() + text_field = e.get("text") or e.get("insert") or e.get("content") or e.get("replacement") or "" + if opx == "anchor_insert": + anchor = e.get("anchor") or "" + position = (e.get("position") or "after").lower() + flags = _re.MULTILINE | (_re.IGNORECASE if e.get("ignore_case") else 0) + try: + regex_obj = _re.compile(anchor, flags) + except Exception as ex: + return _with_norm(_err("bad_regex", f"Invalid anchor regex: {ex}", normalized=normalized_for_echo, routing="mixed/text-first", extra={"hint": "Escape parentheses/braces or use a simpler anchor."}), normalized_for_echo, routing="mixed/text-first") + m = regex_obj.search(base_text) + if not m: + return _with_norm({"success": False, "code": "anchor_not_found", "message": f"anchor not found: {anchor}"}, normalized_for_echo, routing="mixed/text-first") + idx = m.start() if position == "before" else m.end() + # Normalize insertion to avoid jammed methods + text_field_norm = text_field + if not text_field_norm.startswith("\n"): + text_field_norm = "\n" + text_field_norm + if not text_field_norm.endswith("\n"): + text_field_norm = text_field_norm + "\n" + sl, sc = line_col_from_index(idx) + at_edits.append({"startLine": sl, "startCol": sc, "endLine": sl, "endCol": sc, "newText": text_field_norm}) + # do not mutate base_text when building atomic spans + elif opx == "replace_range": + if all(k in e for k in ("startLine","startCol","endLine","endCol")): + at_edits.append({ + "startLine": int(e.get("startLine", 1)), + "startCol": int(e.get("startCol", 1)), + "endLine": int(e.get("endLine", 1)), + "endCol": int(e.get("endCol", 1)), + "newText": text_field + }) + else: + return _with_norm(_err("missing_field", "replace_range requires startLine/startCol/endLine/endCol", normalized=normalized_for_echo, routing="mixed/text-first"), 
normalized_for_echo, routing="mixed/text-first") + elif opx == "regex_replace": + pattern = e.get("pattern") or "" + try: + regex_obj = _re.compile(pattern, _re.MULTILINE | (_re.IGNORECASE if e.get("ignore_case") else 0)) + except Exception as ex: + return _with_norm(_err("bad_regex", f"Invalid regex pattern: {ex}", normalized=normalized_for_echo, routing="mixed/text-first", extra={"hint": "Escape special chars or prefer structured delete for methods."}), normalized_for_echo, routing="mixed/text-first") + m = regex_obj.search(base_text) + if not m: + continue + # Expand $1, $2... in replacement using this match + def _expand_dollars(rep: str) -> str: + return _re.sub(r"\$(\d+)", lambda g: m.group(int(g.group(1))) or "", rep) + repl = _expand_dollars(text_field) + sl, sc = line_col_from_index(m.start()) + el, ec = line_col_from_index(m.end()) + at_edits.append({"startLine": sl, "startCol": sc, "endLine": el, "endCol": ec, "newText": repl}) + # do not mutate base_text when building atomic spans + elif opx in ("prepend","append"): + if opx == "prepend": + sl, sc = 1, 1 + at_edits.append({"startLine": sl, "startCol": sc, "endLine": sl, "endCol": sc, "newText": text_field}) + # prepend can be applied atomically without local mutation + else: + # Insert at true EOF position (handles both \n and \r\n correctly) + eof_idx = len(base_text) + sl, sc = line_col_from_index(eof_idx) + new_text = ("\n" if not base_text.endswith("\n") else "") + text_field + at_edits.append({"startLine": sl, "startCol": sc, "endLine": sl, "endCol": sc, "newText": new_text}) + # do not mutate base_text when building atomic spans + else: + return _with_norm(_err("unknown_op", f"Unsupported text edit op: {opx}", normalized=normalized_for_echo, routing="mixed/text-first"), normalized_for_echo, routing="mixed/text-first") + + import hashlib + sha = hashlib.sha256(base_text.encode("utf-8")).hexdigest() + if at_edits: + params_text: Dict[str, Any] = { + "action": "apply_text_edits", + "name": name, + 
"path": path, + "namespace": namespace, + "scriptType": script_type, + "edits": at_edits, + "precondition_sha256": sha, + "options": {"refresh": "immediate", "validate": (options or {}).get("validate", "standard"), "applyMode": ("atomic" if len(at_edits) > 1 else (options or {}).get("applyMode", "sequential"))} + } + resp_text = send_command_with_retry("manage_script", params_text) + if not (isinstance(resp_text, dict) and resp_text.get("success")): + return _with_norm(resp_text if isinstance(resp_text, dict) else {"success": False, "message": str(resp_text)}, normalized_for_echo, routing="mixed/text-first") + except Exception as e: + return _with_norm({"success": False, "message": f"Text edit conversion failed: {e}"}, normalized_for_echo, routing="mixed/text-first") + + if struct_edits: + opts2 = dict(options or {}) + # Let server decide; do not force sequential + opts2.setdefault("refresh", "immediate") + params_struct: Dict[str, Any] = { + "action": "edit", + "name": name, + "path": path, + "namespace": namespace, + "scriptType": script_type, + "edits": struct_edits, + "options": opts2 + } + resp_struct = send_command_with_retry("manage_script", params_struct) + return _with_norm(resp_struct if isinstance(resp_struct, dict) else {"success": False, "message": str(resp_struct)}, normalized_for_echo, routing="mixed/text-first") + + return _with_norm({"success": True, "message": "Applied text edits (no structured ops)"}, normalized_for_echo, routing="mixed/text-first") + + # If the edits are text-ops, prefer sending them to Unity's apply_text_edits with precondition + # so header guards and validation run on the C# side. + # Supported conversions: anchor_insert, replace_range, regex_replace (first match only). 
+ text_ops = { (e.get("op") or e.get("operation") or e.get("type") or e.get("mode") or "").strip().lower() for e in (edits or []) } + structured_kinds = {"replace_class","delete_class","replace_method","delete_method","insert_method","anchor_insert"} + if not text_ops.issubset(structured_kinds): + # Convert to apply_text_edits payload + try: + base_text = contents + def line_col_from_index(idx: int) -> Tuple[int, int]: + # 1-based line/col against base buffer + line = base_text.count("\n", 0, idx) + 1 + last_nl = base_text.rfind("\n", 0, idx) + col = (idx - (last_nl + 1)) + 1 if last_nl >= 0 else idx + 1 + return line, col + + at_edits: List[Dict[str, Any]] = [] + import re as _re + for e in edits or []: + op = (e.get("op") or e.get("operation") or e.get("type") or e.get("mode") or "").strip().lower() + # aliasing for text field + text_field = e.get("text") or e.get("insert") or e.get("content") or "" + if op == "anchor_insert": + anchor = e.get("anchor") or "" + position = (e.get("position") or "after").lower() + # Early regex compile with helpful errors + try: + regex_obj = _re.compile(anchor, _re.MULTILINE) + except Exception as ex: + return _with_norm(_err("bad_regex", f"Invalid anchor regex: {ex}", normalized=normalized_for_echo, routing="text", extra={"hint": "Escape parentheses/braces or use a simpler anchor."}), normalized_for_echo, routing="text") + m = regex_obj.search(base_text) + if not m: + return _with_norm({"success": False, "code": "anchor_not_found", "message": f"anchor not found: {anchor}"}, normalized_for_echo, routing="text") + idx = m.start() if position == "before" else m.end() + # Normalize insertion newlines + if text_field and not text_field.startswith("\n"): + text_field = "\n" + text_field + if text_field and not text_field.endswith("\n"): + text_field = text_field + "\n" + sl, sc = line_col_from_index(idx) + at_edits.append({ + "startLine": sl, + "startCol": sc, + "endLine": sl, + "endCol": sc, + "newText": text_field or "" + }) + # Do 
not mutate base buffer when building an atomic batch + elif op == "replace_range": + # Directly forward if already in line/col form + if "startLine" in e: + at_edits.append({ + "startLine": int(e.get("startLine", 1)), + "startCol": int(e.get("startCol", 1)), + "endLine": int(e.get("endLine", 1)), + "endCol": int(e.get("endCol", 1)), + "newText": text_field + }) + else: + # If only indices provided, skip (we don't support index-based here) + return _with_norm({"success": False, "code": "missing_field", "message": "replace_range requires startLine/startCol/endLine/endCol"}, normalized_for_echo, routing="text") + elif op == "regex_replace": + pattern = e.get("pattern") or "" + repl = text_field + flags = _re.MULTILINE | (_re.IGNORECASE if e.get("ignore_case") else 0) + # Early compile for clearer error messages + try: + regex_obj = _re.compile(pattern, flags) + except Exception as ex: + return _with_norm(_err("bad_regex", f"Invalid regex pattern: {ex}", normalized=normalized_for_echo, routing="text", extra={"hint": "Escape special chars or prefer structured delete for methods."}), normalized_for_echo, routing="text") + m = regex_obj.search(base_text) + if not m: + continue + # Expand $1, $2... 
backrefs in replacement using the first match (consistent with mixed-path behavior) + def _expand_dollars(rep: str) -> str: + return _re.sub(r"\$(\d+)", lambda g: m.group(int(g.group(1))) or "", rep) + repl_expanded = _expand_dollars(repl) + # Check structural balance after replacement; refuse destructive deletes + # ('preview_after' avoids shadowing the earlier 'preview' options flag) + preview_after = base_text[:m.start()] + repl_expanded + base_text[m.end():] + if not _is_structurally_balanced(preview_after): + return _with_norm(_err("validation_failed", "regex_replace would unbalance braces/parentheses; prefer delete_method", + normalized=normalized_for_echo, routing="text", + extra={"status": "validation_failed", "hint": "Use script_apply_edits delete_method for method removal"}), normalized_for_echo, routing="text") + sl, sc = line_col_from_index(m.start()) + el, ec = line_col_from_index(m.end()) + at_edits.append({ + "startLine": sl, + "startCol": sc, + "endLine": el, + "endCol": ec, + "newText": repl_expanded + }) + # Do not mutate base buffer when building an atomic batch + else: + return _with_norm({"success": False, "code": "unsupported_op", "message": f"Unsupported text edit op for server-side apply_text_edits: {op}"}, normalized_for_echo, routing="text") + + if not at_edits: + return _with_norm({"success": False, "code": "no_spans", "message": "No applicable text edit spans computed (anchor not found or zero-length)."}, normalized_for_echo, routing="text") + + # Send to Unity with precondition SHA to enforce guards and immediate refresh + import hashlib + sha = hashlib.sha256(base_text.encode("utf-8")).hexdigest() + params: Dict[str, Any] = { + "action": "apply_text_edits", + "name": name, + "path": path, + "namespace": namespace, + "scriptType": script_type, + "edits": at_edits, + "precondition_sha256": sha, + "options": { + "refresh": "immediate", + "validate": (options or {}).get("validate", "standard"), + "applyMode": ("atomic" if len(at_edits) > 1 else (options or {}).get("applyMode", "sequential")) + } + } + resp = 
send_command_with_retry("manage_script", params) + return _with_norm( + resp if isinstance(resp, dict) else {"success": False, "message": str(resp)}, + normalized_for_echo, + routing="text" + ) + except Exception as e: + return _with_norm({"success": False, "code": "conversion_failed", "message": f"Edit conversion failed: {e}"}, normalized_for_echo, routing="text") + + # For regex_replace, honor preview consistently: if preview=true, always return diff without writing. + # If confirm=false (default) and preview not requested, return diff and instruct confirm=true to apply. + if "regex_replace" in text_ops and (preview or not (options or {}).get("confirm")): + try: + preview_text = _apply_edits_locally(contents, edits) + import difflib + diff = list(difflib.unified_diff(contents.splitlines(), preview_text.splitlines(), fromfile="before", tofile="after", n=2)) + if len(diff) > 800: + diff = diff[:800] + ["... (diff truncated) ..."] + if preview: + return {"success": True, "message": "Preview only (no write)", "data": {"diff": "\n".join(diff), "normalizedEdits": normalized_for_echo}} + return _with_norm({"success": False, "message": "Preview diff; set options.confirm=true to apply.", "data": {"diff": "\n".join(diff)}}, normalized_for_echo, routing="text") + except Exception as e: + return _with_norm({"success": False, "code": "preview_failed", "message": f"Preview failed: {e}"}, normalized_for_echo, routing="text") + # 2) apply edits locally (only if not text-ops) + try: + new_contents = _apply_edits_locally(contents, edits) + except Exception as e: + return {"success": False, "message": f"Edit application failed: {e}"} + + # Short-circuit no-op edits to avoid false "applied" reports downstream + if new_contents == contents: + return _with_norm({ + "success": True, + "message": "No-op: contents unchanged", + "data": {"no_op": True, "evidence": {"reason": "identical_content"}} + }, normalized_for_echo, routing="text") + + if preview: + # Produce a compact unified diff 
limited to small context + import difflib + a = contents.splitlines() + b = new_contents.splitlines() + diff = list(difflib.unified_diff(a, b, fromfile="before", tofile="after", n=3)) + # Limit diff size to keep responses small + if len(diff) > 2000: + diff = diff[:2000] + ["... (diff truncated) ..."] + return {"success": True, "message": "Preview only (no write)", "data": {"diff": "\n".join(diff), "normalizedEdits": normalized_for_echo}} + + # 3) update to Unity + # Default refresh/validate for natural usage on text path as well + options = dict(options or {}) + options.setdefault("validate", "standard") + options.setdefault("refresh", "immediate") + + import hashlib + # Compute the SHA of the current file contents for the precondition + old_lines = contents.splitlines(keepends=True) + end_line = len(old_lines) + 1 # 1-based exclusive end + sha = hashlib.sha256(contents.encode("utf-8")).hexdigest() + + # Apply a whole-file text edit rather than the deprecated 'update' action + params = { + "action": "apply_text_edits", + "name": name, + "path": path, + "namespace": namespace, + "scriptType": script_type, + "edits": [ + { + "startLine": 1, + "startCol": 1, + "endLine": end_line, + "endCol": 1, + "newText": new_contents, + } + ], + "precondition_sha256": sha, + "options": options or {"validate": "standard", "refresh": "immediate"}, + } + + write_resp = send_command_with_retry("manage_script", params) + return _with_norm( + write_resp if isinstance(write_resp, dict) + else {"success": False, "message": str(write_resp)}, + normalized_for_echo, + routing="text", + ) + + + + + # safe_script_edit removed to simplify API; clients should call script_apply_edits directly diff --git a/UnityMcpBridge/UnityMcpServer~/src/tools/resource_tools.py b/UnityMcpBridge/UnityMcpServer~/src/tools/resource_tools.py new file mode 100644 index 00000000..23f72ac3 --- /dev/null +++ b/UnityMcpBridge/UnityMcpServer~/src/tools/resource_tools.py @@ -0,0 +1,357 @@ +""" +Resource wrapper tools so 
clients that do not expose MCP resources primitives +can still list and read files via normal tools. These call into the same +safe path logic (re-implemented here to avoid importing server.py). +""" +from __future__ import annotations + +from typing import Dict, Any, List +import re +from pathlib import Path +from urllib.parse import urlparse, unquote +import fnmatch +import hashlib +import os + +from mcp.server.fastmcp import FastMCP, Context +from unity_connection import send_command_with_retry + + +def _resolve_project_root(override: str | None) -> Path: + # 1) Explicit override + if override: + pr = Path(override).expanduser().resolve() + if (pr / "Assets").exists(): + return pr + # 2) Environment + env = os.environ.get("UNITY_PROJECT_ROOT") + if env: + env_path = Path(env).expanduser() + # If UNITY_PROJECT_ROOT is relative, resolve against repo root (cwd's repo) instead of src dir + pr = (Path.cwd() / env_path).resolve() if not env_path.is_absolute() else env_path.resolve() + if (pr / "Assets").exists(): + return pr + # 3) Ask Unity via manage_editor.get_project_root + try: + resp = send_command_with_retry("manage_editor", {"action": "get_project_root"}) + if isinstance(resp, dict) and resp.get("success"): + pr = Path(resp.get("data", {}).get("projectRoot", "")).expanduser().resolve() + if pr and (pr / "Assets").exists(): + return pr + except Exception: + pass + + # 4) Walk up from CWD to find a Unity project (Assets + ProjectSettings) + cur = Path.cwd().resolve() + for _ in range(6): + if (cur / "Assets").exists() and (cur / "ProjectSettings").exists(): + return cur + if cur.parent == cur: + break + cur = cur.parent + # 5) Search downwards (shallow) from repo root for first folder with Assets + ProjectSettings + try: + import os as _os + root = Path.cwd().resolve() + max_depth = 3 + for dirpath, dirnames, _ in _os.walk(root): + rel = Path(dirpath).resolve() + try: + depth = len(rel.relative_to(root).parts) + except Exception: + # Unrelated mount/permission 
edge; skip deeper traversal + dirnames[:] = [] + continue + if depth > max_depth: + # Prune deeper traversal + dirnames[:] = [] + continue + if (rel / "Assets").exists() and (rel / "ProjectSettings").exists(): + return rel + except Exception: + pass + # 6) Fallback: CWD + return Path.cwd().resolve() + + +def _resolve_safe_path_from_uri(uri: str, project: Path) -> Path | None: + raw: str | None = None + if uri.startswith("unity://path/"): + raw = uri[len("unity://path/"):] + elif uri.startswith("file://"): + parsed = urlparse(uri) + raw = unquote(parsed.path or "") + # On Windows, urlparse('file:///C:/x') -> path='/C:/x'. Strip the leading slash for drive letters. + try: + import os as _os + if _os.name == "nt" and raw.startswith("/") and re.match(r"^/[A-Za-z]:/", raw): + raw = raw[1:] + # UNC paths: file://server/share -> netloc='server', path='/share'. Treat as \\\\server/share + if _os.name == "nt" and parsed.netloc: + raw = f"//{parsed.netloc}{raw}" + except Exception: + pass + elif uri.startswith("Assets/"): + raw = uri + if raw is None: + return None + # Normalize separators early + raw = raw.replace("\\", "/") + p = (project / raw).resolve() + try: + p.relative_to(project) + except ValueError: + return None + return p + + +def register_resource_tools(mcp: FastMCP) -> None: + """Registers list_resources and read_resource wrapper tools.""" + + @mcp.tool(description=( + "List project URIs (unity://path/...) 
under a folder (default: Assets).\n\n"
+        "Args: pattern (glob, default *.cs), under (folder under project root), limit, project_root.\n"
+        "Security: restricted to Assets/ subtree; symlinks are resolved and must remain under Assets/.\n"
+        "Notes: only .cs files are ever returned, regardless of pattern; always appends unity://spec/script-edits.\n"
+    ))
+    async def list_resources(
+        ctx: Context | None = None,
+        pattern: str | None = "*.cs",
+        under: str = "Assets",
+        limit: int = 200,
+        project_root: str | None = None,
+    ) -> Dict[str, Any]:
+        """
+        Lists project URIs (unity://path/...) under a folder (default: Assets).
+        - pattern: filename glob such as *.cs (only .cs files are returned; None matches every .cs file)
+        - under: relative folder under project root
+        - limit: max results
+        """
+        try:
+            project = _resolve_project_root(project_root)
+            base = (project / under).resolve()
+            try:
+                base.relative_to(project)
+            except ValueError:
+                return {"success": False, "error": "Base path must be under project root"}
+            # Enforce listing only under Assets
+            try:
+                base.relative_to(project / "Assets")
+            except ValueError:
+                return {"success": False, "error": "Listing is restricted to Assets/"}
+
+            matches: List[str] = []
+            for p in base.rglob("*"):
+                if not p.is_file():
+                    continue
+                # Resolve symlinks and ensure the real path stays under project/Assets
+                try:
+                    rp = p.resolve()
+                    rp.relative_to(project / "Assets")
+                except Exception:
+                    continue
+                # Enforce .cs extension regardless of provided pattern
+                if p.suffix.lower() != ".cs":
+                    continue
+                if pattern and not fnmatch.fnmatch(p.name, pattern):
+                    continue
+                rel = p.relative_to(project).as_posix()
+                matches.append(f"unity://path/{rel}")
+                if len(matches) >= max(1, limit):
+                    break
+
+            # Always include the canonical spec resource so NL clients can discover it
+            if "unity://spec/script-edits" not in matches:
+                matches.append("unity://spec/script-edits")
+
+            return {"success": True, "data": {"uris": matches, "count": len(matches)}}
+        except Exception as e:
+            return {"success":
False, "error": str(e)} + + @mcp.tool(description=( + "Read a resource by unity://path/... URI with optional slicing.\n\n" + "Args: uri, start_line/line_count or head_bytes, tail_lines (optional), project_root, request (NL hints).\n" + "Security: uri must resolve under Assets/.\n" + "Examples: head_bytes=1024; start_line=100,line_count=40; tail_lines=120.\n" + )) + async def read_resource( + uri: str, + ctx: Context | None = None, + start_line: int | None = None, + line_count: int | None = None, + head_bytes: int | None = None, + tail_lines: int | None = None, + project_root: str | None = None, + request: str | None = None, + ) -> Dict[str, Any]: + """ + Reads a resource by unity://path/... URI with optional slicing. + One of line window (start_line/line_count) or head_bytes can be used to limit size. + """ + try: + # Serve the canonical spec directly when requested (allow bare or with scheme) + if uri in ("unity://spec/script-edits", "spec/script-edits", "script-edits"): + spec_json = ( + '{\n' + ' "name": "Unity MCP — Script Edits v1",\n' + ' "target_tool": "script_apply_edits",\n' + ' "canonical_rules": {\n' + ' "always_use": ["op","className","methodName","replacement","afterMethodName","beforeMethodName"],\n' + ' "never_use": ["new_method","anchor_method","content","newText"],\n' + ' "defaults": {\n' + ' "className": "\u2190 server will default to \'name\' when omitted",\n' + ' "position": "end"\n' + ' }\n' + ' },\n' + ' "ops": [\n' + ' {"op":"replace_method","required":["className","methodName","replacement"],"optional":["returnType","parametersSignature","attributesContains"],"examples":[{"note":"match overload by signature","parametersSignature":"(int a, string b)"},{"note":"ensure attributes retained","attributesContains":"ContextMenu"}]},\n' + ' {"op":"insert_method","required":["className","replacement"],"position":{"enum":["start","end","after","before"],"after_requires":"afterMethodName","before_requires":"beforeMethodName"}},\n' + ' 
{"op":"delete_method","required":["className","methodName"]},\n' + ' {"op":"anchor_insert","required":["anchor","text"],"notes":"regex; position=before|after"}\n' + ' ],\n' + ' "apply_text_edits_recipe": {\n' + ' "step1_read": { "tool": "resources/read", "args": {"uri": "unity://path/Assets/Scripts/Interaction/SmartReach.cs"} },\n' + ' "step2_apply": {\n' + ' "tool": "manage_script",\n' + ' "args": {\n' + ' "action": "apply_text_edits",\n' + ' "name": "SmartReach", "path": "Assets/Scripts/Interaction",\n' + ' "edits": [{"startLine": 42, "startCol": 1, "endLine": 42, "endCol": 1, "newText": "[MyAttr]\\n"}],\n' + ' "precondition_sha256": "",\n' + ' "options": {"refresh": "immediate", "validate": "standard"}\n' + ' }\n' + ' },\n' + ' "note": "newText is for apply_text_edits ranges only; use replacement in script_apply_edits ops."\n' + ' },\n' + ' "examples": [\n' + ' {\n' + ' "title": "Replace a method",\n' + ' "args": {\n' + ' "name": "SmartReach",\n' + ' "path": "Assets/Scripts/Interaction",\n' + ' "edits": [\n' + ' {"op":"replace_method","className":"SmartReach","methodName":"HasTarget","replacement":"public bool HasTarget() { return currentTarget != null; }"}\n' + ' ],\n' + ' "options": { "validate": "standard", "refresh": "immediate" }\n' + ' }\n' + ' },\n' + ' {\n' + ' "title": "Insert a method after another",\n' + ' "args": {\n' + ' "name": "SmartReach",\n' + ' "path": "Assets/Scripts/Interaction",\n' + ' "edits": [\n' + ' {"op":"insert_method","className":"SmartReach","replacement":"public void PrintSeries() { Debug.Log(seriesName); }","position":"after","afterMethodName":"GetCurrentTarget"}\n' + ' ]\n' + ' }\n' + ' }\n' + ' ]\n' + '}\n' + ) + sha = hashlib.sha256(spec_json.encode("utf-8")).hexdigest() + return {"success": True, "data": {"text": spec_json, "metadata": {"sha256": sha}}} + + project = _resolve_project_root(project_root) + p = _resolve_safe_path_from_uri(uri, project) + if not p or not p.exists() or not p.is_file(): + return {"success": False, 
"error": f"Resource not found: {uri}"} + try: + p.relative_to(project / "Assets") + except ValueError: + return {"success": False, "error": "Read restricted to Assets/"} + # Natural-language convenience: request like "last 120 lines", "first 200 lines", + # "show 40 lines around MethodName", etc. + if request: + req = request.strip().lower() + m = re.search(r"last\s+(\d+)\s+lines", req) + if m: + tail_lines = int(m.group(1)) + m = re.search(r"first\s+(\d+)\s+lines", req) + if m: + start_line = 1 + line_count = int(m.group(1)) + m = re.search(r"first\s+(\d+)\s*bytes", req) + if m: + head_bytes = int(m.group(1)) + m = re.search(r"show\s+(\d+)\s+lines\s+around\s+([A-Za-z_][A-Za-z0-9_]*)", req) + if m: + window = int(m.group(1)) + method = m.group(2) + # naive search for method header to get a line number + text_all = p.read_text(encoding="utf-8") + lines_all = text_all.splitlines() + pat = re.compile(rf"^\s*(?:\[[^\]]+\]\s*)*(?:public|private|protected|internal|static|virtual|override|sealed|async|extern|unsafe|new|partial).*?\b{re.escape(method)}\s*\(", re.MULTILINE) + hit_line = None + for i, line in enumerate(lines_all, start=1): + if pat.search(line): + hit_line = i + break + if hit_line: + half = max(1, window // 2) + start_line = max(1, hit_line - half) + line_count = window + + # Mutually exclusive windowing options precedence: + # 1) head_bytes, 2) tail_lines, 3) start_line+line_count, else full text + if head_bytes and head_bytes > 0: + raw = p.read_bytes()[: head_bytes] + text = raw.decode("utf-8", errors="replace") + else: + text = p.read_text(encoding="utf-8") + if tail_lines is not None and tail_lines > 0: + lines = text.splitlines() + n = max(0, tail_lines) + text = "\n".join(lines[-n:]) + elif start_line is not None and line_count is not None and line_count >= 0: + lines = text.splitlines() + s = max(0, start_line - 1) + e = min(len(lines), s + line_count) + text = "\n".join(lines[s:e]) + + sha = hashlib.sha256(text.encode("utf-8")).hexdigest() + return 
{"success": True, "data": {"text": text, "metadata": {"sha256": sha}}} + except Exception as e: + return {"success": False, "error": str(e)} + + @mcp.tool() + async def find_in_file( + uri: str, + pattern: str, + ctx: Context | None = None, + ignore_case: bool | None = True, + project_root: str | None = None, + max_results: int | None = 200, + ) -> Dict[str, Any]: + """ + Searches a file with a regex pattern and returns line numbers and excerpts. + - uri: unity://path/Assets/... or file path form supported by read_resource + - pattern: regular expression (Python re) + - ignore_case: case-insensitive by default + - max_results: cap results to avoid huge payloads + """ + # re is already imported at module level + try: + project = _resolve_project_root(project_root) + p = _resolve_safe_path_from_uri(uri, project) + if not p or not p.exists() or not p.is_file(): + return {"success": False, "error": f"Resource not found: {uri}"} + + text = p.read_text(encoding="utf-8") + flags = re.MULTILINE + if ignore_case: + flags |= re.IGNORECASE + rx = re.compile(pattern, flags) + + results = [] + lines = text.splitlines() + for i, line in enumerate(lines, start=1): + if rx.search(line): + results.append({"line": i, "text": line}) + if max_results and len(results) >= max_results: + break + + return {"success": True, "data": {"matches": results, "count": len(results)}} + except Exception as e: + return {"success": False, "error": str(e)} + + diff --git a/UnityMcpBridge/UnityMcpServer~/src/unity_connection.py b/UnityMcpBridge/UnityMcpServer~/src/unity_connection.py index a284f539..f41b7a25 100644 --- a/UnityMcpBridge/UnityMcpServer~/src/unity_connection.py +++ b/UnityMcpBridge/UnityMcpServer~/src/unity_connection.py @@ -1,12 +1,15 @@ -import socket +import contextlib +import errno import json import logging +import random +import socket +import struct +import threading +import time from dataclasses import dataclass from pathlib import Path -import time -import random -import errno 
-from typing import Dict, Any +from typing import Any, Dict from config import config from port_discovery import PortDiscovery @@ -17,31 +20,86 @@ ) logger = logging.getLogger("mcp-for-unity-server") +# Module-level lock to guard global connection initialization +_connection_lock = threading.Lock() + +# Maximum allowed framed payload size (64 MiB) +FRAMED_MAX = 64 * 1024 * 1024 + @dataclass class UnityConnection: """Manages the socket connection to the Unity Editor.""" host: str = config.unity_host port: int = None # Will be set dynamically sock: socket.socket = None # Socket for Unity communication + use_framing: bool = False # Negotiated per-connection def __post_init__(self): """Set port from discovery if not explicitly provided""" if self.port is None: self.port = PortDiscovery.discover_unity_port() + self._io_lock = threading.Lock() + self._conn_lock = threading.Lock() def connect(self) -> bool: """Establish a connection to the Unity Editor.""" if self.sock: return True - try: - self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) - self.sock.connect((self.host, self.port)) - logger.info(f"Connected to Unity at {self.host}:{self.port}") - return True - except Exception as e: - logger.error(f"Failed to connect to Unity: {str(e)}") - self.sock = None - return False + with self._conn_lock: + if self.sock: + return True + try: + # Bounded connect to avoid indefinite blocking + connect_timeout = float(getattr(config, "connect_timeout", getattr(config, "connection_timeout", 1.0))) + self.sock = socket.create_connection((self.host, self.port), connect_timeout) + # Disable Nagle's algorithm to reduce small RPC latency + with contextlib.suppress(Exception): + self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) + logger.debug(f"Connected to Unity at {self.host}:{self.port}") + + # Strict handshake: require FRAMING=1 + try: + require_framing = getattr(config, "require_framing", True) + timeout = float(getattr(config, "handshake_timeout", 1.0)) + 
self.sock.settimeout(timeout) + buf = bytearray() + deadline = time.monotonic() + timeout + while time.monotonic() < deadline and len(buf) < 512: + try: + chunk = self.sock.recv(256) + if not chunk: + break + buf.extend(chunk) + if b"\n" in buf: + break + except socket.timeout: + break + text = bytes(buf).decode('ascii', errors='ignore').strip() + + if 'FRAMING=1' in text: + self.use_framing = True + logger.debug('Unity MCP handshake received: FRAMING=1 (strict)') + else: + if require_framing: + # Best-effort plain-text advisory for legacy peers + with contextlib.suppress(Exception): + self.sock.sendall(b'Unity MCP requires FRAMING=1\n') + raise ConnectionError(f'Unity MCP requires FRAMING=1, got: {text!r}') + else: + self.use_framing = False + logger.warning('Unity MCP handshake missing FRAMING=1; proceeding in legacy mode by configuration') + finally: + self.sock.settimeout(config.connection_timeout) + return True + except Exception as e: + logger.error(f"Failed to connect to Unity: {str(e)}") + try: + if self.sock: + self.sock.close() + except Exception: + pass + self.sock = None + return False def disconnect(self): """Close the connection to the Unity Editor.""" @@ -53,10 +111,48 @@ def disconnect(self): finally: self.sock = None + def _read_exact(self, sock: socket.socket, count: int) -> bytes: + data = bytearray() + while len(data) < count: + chunk = sock.recv(count - len(data)) + if not chunk: + raise ConnectionError("Connection closed before reading expected bytes") + data.extend(chunk) + return bytes(data) + def receive_full_response(self, sock, buffer_size=config.buffer_size) -> bytes: """Receive a complete response from Unity, handling chunked data.""" + if self.use_framing: + try: + # Consume heartbeats, but do not hang indefinitely if only zero-length frames arrive + heartbeat_count = 0 + deadline = time.monotonic() + getattr(config, 'framed_receive_timeout', 2.0) + while True: + header = self._read_exact(sock, 8) + payload_len = struct.unpack('>Q', 
header)[0] + if payload_len == 0: + # Heartbeat/no-op frame: consume and continue waiting for a data frame + logger.debug("Received heartbeat frame (length=0)") + heartbeat_count += 1 + if heartbeat_count >= getattr(config, 'max_heartbeat_frames', 16) or time.monotonic() > deadline: + # Treat as empty successful response to match C# server behavior + logger.debug("Heartbeat threshold reached; returning empty response") + return b"" + continue + if payload_len > FRAMED_MAX: + raise ValueError(f"Invalid framed length: {payload_len}") + payload = self._read_exact(sock, payload_len) + logger.debug(f"Received framed response ({len(payload)} bytes)") + return payload + except socket.timeout as e: + logger.warning("Socket timeout during framed receive") + raise TimeoutError("Timeout receiving Unity response") from e + except Exception as e: + logger.error(f"Error during framed receive: {str(e)}") + raise + chunks = [] - sock.settimeout(config.connection_timeout) # Use timeout from config + # Respect the socket's currently configured timeout try: while True: chunk = sock.recv(buffer_size) @@ -148,15 +244,9 @@ def read_status_file() -> dict | None: for attempt in range(attempts + 1): try: - # Ensure connected - if not self.sock: - # During retries use short connect timeout - self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) - self.sock.settimeout(1.0) - self.sock.connect((self.host, self.port)) - # restore steady-state timeout for receive - self.sock.settimeout(config.connection_timeout) - logger.info(f"Connected to Unity at {self.host}:{self.port}") + # Ensure connected (handshake occurs within connect()) + if not self.sock and not self.connect(): + raise Exception("Could not connect to Unity") # Build payload if command_type == 'ping': @@ -165,18 +255,36 @@ def read_status_file() -> dict | None: command = {"type": command_type, "params": params or {}} payload = json.dumps(command, ensure_ascii=False).encode('utf-8') - # Send - self.sock.sendall(payload) + # 
Send/receive are serialized to protect the shared socket + with self._io_lock: + mode = 'framed' if self.use_framing else 'legacy' + with contextlib.suppress(Exception): + logger.debug( + "send %d bytes; mode=%s; head=%s", + len(payload), + mode, + (payload[:32]).decode('utf-8', 'ignore'), + ) + if self.use_framing: + header = struct.pack('>Q', len(payload)) + self.sock.sendall(header) + self.sock.sendall(payload) + else: + self.sock.sendall(payload) - # During retry bursts use a short receive timeout - if attempt > 0 and last_short_timeout is None: - last_short_timeout = self.sock.gettimeout() - self.sock.settimeout(1.0) - response_data = self.receive_full_response(self.sock) - # restore steady-state timeout if changed - if last_short_timeout is not None: - self.sock.settimeout(config.connection_timeout) - last_short_timeout = None + # During retry bursts use a short receive timeout and ensure restoration + restore_timeout = None + if attempt > 0 and last_short_timeout is None: + restore_timeout = self.sock.gettimeout() + self.sock.settimeout(1.0) + try: + response_data = self.receive_full_response(self.sock) + with contextlib.suppress(Exception): + logger.debug("recv %d bytes; mode=%s", len(response_data), mode) + finally: + if restore_timeout is not None: + self.sock.settimeout(restore_timeout) + last_short_timeout = None # Parse if command_type == 'ping': @@ -241,43 +349,26 @@ def read_status_file() -> dict | None: _unity_connection = None def get_unity_connection() -> UnityConnection: - """Retrieve or establish a persistent Unity connection.""" + """Retrieve or establish a persistent Unity connection. + + Note: Do NOT ping on every retrieval to avoid connection storms. Rely on + send_command() exceptions to detect broken sockets and reconnect there. 
+ """ global _unity_connection if _unity_connection is not None: - try: - # Try to ping with a short timeout to verify connection - result = _unity_connection.send_command("ping") - # If we get here, the connection is still valid - logger.debug("Reusing existing Unity connection") + return _unity_connection + + # Double-checked locking to avoid concurrent socket creation + with _connection_lock: + if _unity_connection is not None: return _unity_connection - except Exception as e: - logger.warning(f"Existing connection failed: {str(e)}") - try: - _unity_connection.disconnect() - except: - pass + logger.info("Creating new Unity connection") + _unity_connection = UnityConnection() + if not _unity_connection.connect(): _unity_connection = None - - # Create a new connection - logger.info("Creating new Unity connection") - _unity_connection = UnityConnection() - if not _unity_connection.connect(): - _unity_connection = None - raise ConnectionError("Could not connect to Unity. Ensure the Unity Editor and MCP Bridge are running.") - - try: - # Verify the new connection works - _unity_connection.send_command("ping") - logger.info("Successfully established new Unity connection") + raise ConnectionError("Could not connect to Unity. 
Ensure the Unity Editor and MCP Bridge are running.") + logger.info("Connected to Unity on startup") return _unity_connection - except Exception as e: - logger.error(f"Could not verify new connection: {str(e)}") - try: - _unity_connection.disconnect() - except: - pass - _unity_connection = None - raise ConnectionError(f"Could not establish valid Unity connection: {str(e)}") # ----------------------------- diff --git a/UnityMcpBridge/UnityMcpServer~/src/uv.lock b/UnityMcpBridge/UnityMcpServer~/src/uv.lock index 4f43d249..87a4deb9 100644 --- a/UnityMcpBridge/UnityMcpServer~/src/uv.lock +++ b/UnityMcpBridge/UnityMcpServer~/src/uv.lock @@ -160,6 +160,21 @@ cli = [ { name = "typer" }, ] +[[package]] +name = "mcpforunityserver" +version = "3.0.2" +source = { editable = "." } +dependencies = [ + { name = "httpx" }, + { name = "mcp", extra = ["cli"] }, +] + +[package.metadata] +requires-dist = [ + { name = "httpx", specifier = ">=0.27.2" }, + { name = "mcp", extras = ["cli"], specifier = ">=1.4.1" }, +] + [[package]] name = "mdurl" version = "0.1.2" @@ -370,21 +385,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/26/9f/ad63fc0248c5379346306f8668cda6e2e2e9c95e01216d2b8ffd9ff037d0/typing_extensions-4.12.2-py3-none-any.whl", hash = "sha256:04e5ca0351e0f3f85c6853954072df659d0d13fac324d0072316b67d7794700d", size = 37438 }, ] -[[package]] -name = "mcpforunityserver" -version = "2.1.2" -source = { editable = "." 
} -dependencies = [ - { name = "httpx" }, - { name = "mcp", extra = ["cli"] }, -] - -[package.metadata] -requires-dist = [ - { name = "httpx", specifier = ">=0.27.2" }, - { name = "mcp", extras = ["cli"], specifier = ">=1.4.1" }, -] - [[package]] name = "uvicorn" version = "0.34.0" diff --git a/claude-chunk.md b/claude-chunk.md deleted file mode 100644 index 964038c6..00000000 --- a/claude-chunk.md +++ /dev/null @@ -1,51 +0,0 @@ -### macOS: Claude CLI fails to start (dyld ICU library not loaded) - -- Symptoms - - MCP for Unity error: “Failed to start Claude CLI. dyld: Library not loaded: /usr/local/opt/icu4c/lib/libicui18n.71.dylib …” - - Running `claude` in Terminal fails with missing `libicui18n.xx.dylib`. - -- Cause - - Homebrew Node (or the `claude` binary) was linked against an ICU version that’s no longer installed; dyld can’t find that dylib. - -- Fix options (pick one) - - Reinstall Homebrew Node (relinks to current ICU), then reinstall CLI: - ```bash - brew update - brew reinstall node - npm uninstall -g @anthropic-ai/claude-code - npm install -g @anthropic-ai/claude-code - ``` - - Use NVM Node (avoids Homebrew ICU churn): - ```bash - nvm install --lts - nvm use --lts - npm install -g @anthropic-ai/claude-code - # MCP for Unity → Claude Code → Choose Claude Location → ~/.nvm/versions/node//bin/claude - ``` - - Use the native installer (puts claude in a stable path): - ```bash - # macOS/Linux - curl -fsSL https://claude.ai/install.sh | bash - # MCP for Unity → Claude Code → Choose Claude Location → /opt/homebrew/bin/claude or ~/.local/bin/claude - ``` - -- After fixing - - In MCP for Unity (Claude Code), click “Choose Claude Location” and select the working `claude` binary, then Register again. - -- More details - - See: Troubleshooting MCP for Unity and Claude Code - ---- - -### FAQ (Claude Code) - -- Q: Unity can’t find `claude` even though Terminal can. - - A: macOS apps launched from Finder/Hub don’t inherit your shell PATH. 
In the MCP for Unity window, click “Choose Claude Location” and select the absolute path (e.g., `/opt/homebrew/bin/claude` or `~/.nvm/versions/node//bin/claude`). - -- Q: I installed via NVM; where is `claude`? - - A: Typically `~/.nvm/versions/node//bin/claude`. Our UI also scans NVM versions and you can browse to it via “Choose Claude Location”. - -- Q: The Register button says “Claude Not Found”. - - A: Install the CLI or set the path. Click the orange “[HELP]” link in the MCP for Unity window for step‑by‑step install instructions, then choose the binary location. - - diff --git a/mcp_source.py b/mcp_source.py index bb8f16cb..7d5a48a3 100755 --- a/mcp_source.py +++ b/mcp_source.py @@ -165,4 +165,4 @@ def main() -> None: if __name__ == "__main__": - main() \ No newline at end of file + main() diff --git a/test_unity_socket_framing.py b/test_unity_socket_framing.py new file mode 100644 index 00000000..7c0cb93f --- /dev/null +++ b/test_unity_socket_framing.py @@ -0,0 +1,98 @@ +#!/usr/bin/env python3 +import socket, struct, json, sys + +HOST = "127.0.0.1" +PORT = 6400 +try: + SIZE_MB = int(sys.argv[1]) +except (IndexError, ValueError): + SIZE_MB = 5 # e.g., 5 or 10 +FILL = "R" +MAX_FRAME = 64 * 1024 * 1024 + +def recv_exact(sock, n): + buf = bytearray(n) + view = memoryview(buf) + off = 0 + while off < n: + r = sock.recv_into(view[off:]) + if r == 0: + raise RuntimeError("socket closed") + off += r + return bytes(buf) + +def is_valid_json(b): + try: + json.loads(b.decode("utf-8")) + return True + except Exception: + return False + +def recv_legacy_json(sock, timeout=60): + sock.settimeout(timeout) + chunks = [] + while True: + chunk = sock.recv(65536) + if not chunk: + data = b"".join(chunks) + if not data: + raise RuntimeError("no data, socket closed") + return data + chunks.append(chunk) + data = b"".join(chunks) + if data.strip() == b"ping": + return data + if is_valid_json(data): + return data + +def main(): + # Cap filler to stay within framing limit (reserve 
small overhead for JSON) + safe_max = max(1, MAX_FRAME - 4096) + filler_len = min(SIZE_MB * 1024 * 1024, safe_max) + body = { + "type": "read_console", + "params": { + "action": "get", + "types": ["all"], + "count": 1000, + "format": "detailed", + "includeStacktrace": True, + "filterText": FILL * filler_len + } + } + body_bytes = json.dumps(body, ensure_ascii=False).encode("utf-8") + + with socket.create_connection((HOST, PORT), timeout=5) as s: + s.settimeout(2) + # Read optional greeting + try: + greeting = s.recv(256) + except Exception: + greeting = b"" + greeting_text = greeting.decode("ascii", errors="ignore").strip() + print(f"Greeting: {greeting_text or '(none)'}") + + framing = "FRAMING=1" in greeting_text + print(f"Using framing? {framing}") + + s.settimeout(120) + if framing: + header = struct.pack(">Q", len(body_bytes)) + s.sendall(header + body_bytes) + resp_len = struct.unpack(">Q", recv_exact(s, 8))[0] + print(f"Response framed length: {resp_len}") + MAX_RESP = MAX_FRAME + if resp_len <= 0 or resp_len > MAX_RESP: + raise RuntimeError(f"invalid framed length: {resp_len} (max {MAX_RESP})") + resp = recv_exact(s, resp_len) + else: + s.sendall(body_bytes) + resp = recv_legacy_json(s) + + print(f"Response bytes: {len(resp)}") + print(f"Response head: {resp[:120].decode('utf-8','ignore')}") + +if __name__ == "__main__": + main() + + diff --git a/tests/test_edit_normalization_and_noop.py b/tests/test_edit_normalization_and_noop.py new file mode 100644 index 00000000..ab97e5e2 --- /dev/null +++ b/tests/test_edit_normalization_and_noop.py @@ -0,0 +1,116 @@ +import sys +import pathlib +import importlib.util +import types + + +ROOT = pathlib.Path(__file__).resolve().parents[1] +SRC = ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src" +sys.path.insert(0, str(SRC)) + +# stub mcp.server.fastmcp +mcp_pkg = types.ModuleType("mcp") +server_pkg = types.ModuleType("mcp.server") +fastmcp_pkg = types.ModuleType("mcp.server.fastmcp") +class _Dummy: pass 
+fastmcp_pkg.FastMCP = _Dummy +fastmcp_pkg.Context = _Dummy +server_pkg.fastmcp = fastmcp_pkg +mcp_pkg.server = server_pkg +sys.modules.setdefault("mcp", mcp_pkg) +sys.modules.setdefault("mcp.server", server_pkg) +sys.modules.setdefault("mcp.server.fastmcp", fastmcp_pkg) + +def _load(path: pathlib.Path, name: str): + spec = importlib.util.spec_from_file_location(name, path) + mod = importlib.util.module_from_spec(spec) + spec.loader.exec_module(mod) + return mod + +manage_script = _load(SRC / "tools" / "manage_script.py", "manage_script_mod2") +manage_script_edits = _load(SRC / "tools" / "manage_script_edits.py", "manage_script_edits_mod2") + + +class DummyMCP: + def __init__(self): self.tools = {} + def tool(self, *args, **kwargs): + def deco(fn): self.tools[fn.__name__] = fn; return fn + return deco + +def setup_tools(): + mcp = DummyMCP() + manage_script.register_manage_script_tools(mcp) + return mcp.tools + + +def test_normalizes_lsp_and_index_ranges(monkeypatch): + tools = setup_tools() + apply = tools["apply_text_edits"] + calls = [] + + def fake_send(cmd, params): + calls.append(params) + return {"success": True} + + monkeypatch.setattr(manage_script, "send_command_with_retry", fake_send) + + # LSP-style + edits = [{ + "range": {"start": {"line": 10, "character": 2}, "end": {"line": 10, "character": 2}}, + "newText": "// lsp\n" + }] + apply(None, uri="unity://path/Assets/Scripts/F.cs", edits=edits, precondition_sha256="x") + p = calls[-1] + e = p["edits"][0] + assert e["startLine"] == 11 and e["startCol"] == 3 + + # Index pair + calls.clear() + edits = [{"range": [0, 0], "text": "// idx\n"}] + # fake read to provide contents length + def fake_read(cmd, params): + if params.get("action") == "read": + return {"success": True, "data": {"contents": "hello\n"}} + return {"success": True} + monkeypatch.setattr(manage_script, "send_command_with_retry", fake_read) + apply(None, uri="unity://path/Assets/Scripts/F.cs", edits=edits, precondition_sha256="x") + # last 
call is apply_text_edits + + +def test_noop_evidence_shape(monkeypatch): + tools = setup_tools() + apply = tools["apply_text_edits"] + # Route response from Unity indicating no-op + def fake_send(cmd, params): + return {"success": True, "data": {"no_op": True, "evidence": {"reason": "identical_content"}}} + monkeypatch.setattr(manage_script, "send_command_with_retry", fake_send) + + resp = apply(None, uri="unity://path/Assets/Scripts/F.cs", edits=[{"startLine":1,"startCol":1,"endLine":1,"endCol":1,"newText":""}], precondition_sha256="x") + assert resp["success"] is True + assert resp.get("data", {}).get("no_op") is True + + +def test_atomic_multi_span_and_relaxed(monkeypatch): + tools_text = setup_tools() + apply_text = tools_text["apply_text_edits"] + tools_struct = DummyMCP(); manage_script_edits.register_manage_script_edits_tools(tools_struct) + # Fake send for read and write; verify atomic applyMode and validate=relaxed passes through + sent = {} + def fake_send(cmd, params): + if params.get("action") == "read": + return {"success": True, "data": {"contents": "public class C{\nvoid M(){ int x=2; }\n}\n"}} + sent.setdefault("calls", []).append(params) + return {"success": True} + monkeypatch.setattr(manage_script, "send_command_with_retry", fake_send) + + edits = [ + {"startLine": 2, "startCol": 14, "endLine": 2, "endCol": 15, "newText": "3"}, + {"startLine": 3, "startCol": 2, "endLine": 3, "endCol": 2, "newText": "// tail\n"} + ] + resp = apply_text(None, uri="unity://path/Assets/Scripts/C.cs", edits=edits, precondition_sha256="sha", options={"validate": "relaxed", "applyMode": "atomic"}) + assert resp["success"] is True + # Last manage_script call should include options with applyMode atomic and validate relaxed + last = sent["calls"][-1] + assert last.get("options", {}).get("applyMode") == "atomic" + assert last.get("options", {}).get("validate") == "relaxed" + diff --git a/tests/test_edit_strict_and_warnings.py b/tests/test_edit_strict_and_warnings.py new 
file mode 100644 index 00000000..1d35323f --- /dev/null +++ b/tests/test_edit_strict_and_warnings.py @@ -0,0 +1,84 @@ +import sys +import pathlib +import importlib.util +import types + + +ROOT = pathlib.Path(__file__).resolve().parents[1] +SRC = ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src" +sys.path.insert(0, str(SRC)) + +# stub mcp.server.fastmcp +mcp_pkg = types.ModuleType("mcp") +server_pkg = types.ModuleType("mcp.server") +fastmcp_pkg = types.ModuleType("mcp.server.fastmcp") +class _Dummy: pass +fastmcp_pkg.FastMCP = _Dummy +fastmcp_pkg.Context = _Dummy +server_pkg.fastmcp = fastmcp_pkg +mcp_pkg.server = server_pkg +sys.modules.setdefault("mcp", mcp_pkg) +sys.modules.setdefault("mcp.server", server_pkg) +sys.modules.setdefault("mcp.server.fastmcp", fastmcp_pkg) + + +def _load(path: pathlib.Path, name: str): + spec = importlib.util.spec_from_file_location(name, path) + mod = importlib.util.module_from_spec(spec) + spec.loader.exec_module(mod) + return mod + + +manage_script = _load(SRC / "tools" / "manage_script.py", "manage_script_mod3") + + +class DummyMCP: + def __init__(self): self.tools = {} + def tool(self, *args, **kwargs): + def deco(fn): self.tools[fn.__name__] = fn; return fn + return deco + + +def setup_tools(): + mcp = DummyMCP() + manage_script.register_manage_script_tools(mcp) + return mcp.tools + + +def test_explicit_zero_based_normalized_warning(monkeypatch): + tools = setup_tools() + apply_edits = tools["apply_text_edits"] + + def fake_send(cmd, params): + # Simulate Unity path returning minimal success + return {"success": True} + + monkeypatch.setattr(manage_script, "send_command_with_retry", fake_send) + + # Explicit fields given as 0-based (invalid); SDK should normalize and warn + edits = [{"startLine": 0, "startCol": 0, "endLine": 0, "endCol": 0, "newText": "//x"}] + resp = apply_edits(None, uri="unity://path/Assets/Scripts/F.cs", edits=edits, precondition_sha256="sha") + + assert resp["success"] is True + data = resp.get("data", 
{}) + assert "normalizedEdits" in data + assert any(w == "zero_based_explicit_fields_normalized" for w in data.get("warnings", [])) + ne = data["normalizedEdits"][0] + assert ne["startLine"] == 1 and ne["startCol"] == 1 and ne["endLine"] == 1 and ne["endCol"] == 1 + + +def test_strict_zero_based_error(monkeypatch): + tools = setup_tools() + apply_edits = tools["apply_text_edits"] + + def fake_send(cmd, params): + return {"success": True} + + monkeypatch.setattr(manage_script, "send_command_with_retry", fake_send) + + edits = [{"startLine": 0, "startCol": 0, "endLine": 0, "endCol": 0, "newText": "//x"}] + resp = apply_edits(None, uri="unity://path/Assets/Scripts/F.cs", edits=edits, precondition_sha256="sha", strict=True) + assert resp["success"] is False + assert resp.get("code") == "zero_based_explicit_fields" + + diff --git a/tests/test_get_sha.py b/tests/test_get_sha.py new file mode 100644 index 00000000..cb58ce29 --- /dev/null +++ b/tests/test_get_sha.py @@ -0,0 +1,74 @@ +import sys +import pathlib +import importlib.util +import types + + +ROOT = pathlib.Path(__file__).resolve().parents[1] +SRC = ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src" +sys.path.insert(0, str(SRC)) + +# stub mcp.server.fastmcp to satisfy imports without full dependency +mcp_pkg = types.ModuleType("mcp") +server_pkg = types.ModuleType("mcp.server") +fastmcp_pkg = types.ModuleType("mcp.server.fastmcp") + +class _Dummy: + pass + +fastmcp_pkg.FastMCP = _Dummy +fastmcp_pkg.Context = _Dummy +server_pkg.fastmcp = fastmcp_pkg +mcp_pkg.server = server_pkg +sys.modules.setdefault("mcp", mcp_pkg) +sys.modules.setdefault("mcp.server", server_pkg) +sys.modules.setdefault("mcp.server.fastmcp", fastmcp_pkg) + + +def _load_module(path: pathlib.Path, name: str): + spec = importlib.util.spec_from_file_location(name, path) + mod = importlib.util.module_from_spec(spec) + spec.loader.exec_module(mod) + return mod + + +manage_script = _load_module(SRC / "tools" / "manage_script.py", "manage_script_mod") 
+ + +class DummyMCP: + def __init__(self): + self.tools = {} + + def tool(self, *args, **kwargs): + def deco(fn): + self.tools[fn.__name__] = fn + return fn + return deco + + +def setup_tools(): + mcp = DummyMCP() + manage_script.register_manage_script_tools(mcp) + return mcp.tools + + +def test_get_sha_param_shape_and_routing(monkeypatch): + tools = setup_tools() + get_sha = tools["get_sha"] + + captured = {} + + def fake_send(cmd, params): + captured["cmd"] = cmd + captured["params"] = params + return {"success": True, "data": {"sha256": "abc", "lengthBytes": 1, "lastModifiedUtc": "2020-01-01T00:00:00Z", "uri": "unity://path/Assets/Scripts/A.cs", "path": "Assets/Scripts/A.cs"}} + + monkeypatch.setattr(manage_script, "send_command_with_retry", fake_send) + + resp = get_sha(None, uri="unity://path/Assets/Scripts/A.cs") + assert captured["cmd"] == "manage_script" + assert captured["params"]["action"] == "get_sha" + assert captured["params"]["name"] == "A" + assert captured["params"]["path"].endswith("Assets/Scripts") + assert resp["success"] is True + diff --git a/tests/test_logging_stdout.py b/tests/test_logging_stdout.py new file mode 100644 index 00000000..5b40fba3 --- /dev/null +++ b/tests/test_logging_stdout.py @@ -0,0 +1,68 @@ +import ast +from pathlib import Path + +import pytest + + +# locate server src dynamically to avoid hardcoded layout assumptions +ROOT = Path(__file__).resolve().parents[1] +candidates = [ + ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src", + ROOT / "UnityMcpServer~" / "src", +] +SRC = next((p for p in candidates if p.exists()), None) +if SRC is None: + searched = "\n".join(str(p) for p in candidates) + pytest.skip( + "Unity MCP server source not found. 
Tried:\n" + searched, + allow_module_level=True, + ) + + +@pytest.mark.skip(reason="TODO: ensure server logs only to stderr and rotating file") +def test_no_stdout_output_from_tools(): + pass + + +def test_no_print_statements_in_codebase(): + """Ensure no stray print/sys.stdout writes remain in server source.""" + offenders = [] + syntax_errors = [] + for py_file in SRC.rglob("*.py"): + # Skip virtual envs and third-party packages if they exist under SRC + parts = set(py_file.parts) + if ".venv" in parts or "site-packages" in parts: + continue + try: + text = py_file.read_text(encoding="utf-8", errors="strict") + except UnicodeDecodeError: + # Be tolerant of encoding edge cases in source tree without silently dropping bytes + text = py_file.read_text(encoding="utf-8", errors="replace") + try: + tree = ast.parse(text, filename=str(py_file)) + except SyntaxError: + syntax_errors.append(py_file.relative_to(SRC)) + continue + + class StdoutVisitor(ast.NodeVisitor): + def __init__(self): + self.hit = False + + def visit_Call(self, node: ast.Call): + # print(...) + if isinstance(node.func, ast.Name) and node.func.id == "print": + self.hit = True + # sys.stdout.write(...) 
+ if isinstance(node.func, ast.Attribute) and node.func.attr == "write": + val = node.func.value + if isinstance(val, ast.Attribute) and val.attr == "stdout": + if isinstance(val.value, ast.Name) and val.value.id == "sys": + self.hit = True + self.generic_visit(node) + + v = StdoutVisitor() + v.visit(tree) + if v.hit: + offenders.append(py_file.relative_to(SRC)) + assert not syntax_errors, "syntax errors in: " + ", ".join(str(e) for e in syntax_errors) + assert not offenders, "stdout writes found in: " + ", ".join(str(o) for o in offenders) diff --git a/tests/test_manage_script_uri.py b/tests/test_manage_script_uri.py new file mode 100644 index 00000000..40b64584 --- /dev/null +++ b/tests/test_manage_script_uri.py @@ -0,0 +1,126 @@ +import sys +import types +from pathlib import Path + +import pytest + + + +# Locate server src dynamically to avoid hardcoded layout assumptions (same as other tests) +ROOT = Path(__file__).resolve().parents[1] +candidates = [ + ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src", + ROOT / "UnityMcpServer~" / "src", +] +SRC = next((p for p in candidates if p.exists()), None) +if SRC is None: + searched = "\n".join(str(p) for p in candidates) + pytest.skip( + "Unity MCP server source not found. 
Tried:\n" + searched, + allow_module_level=True, + ) +sys.path.insert(0, str(SRC)) + +# Stub mcp.server.fastmcp to satisfy imports without full package +mcp_pkg = types.ModuleType("mcp") +server_pkg = types.ModuleType("mcp.server") +fastmcp_pkg = types.ModuleType("mcp.server.fastmcp") +class _Dummy: pass +fastmcp_pkg.FastMCP = _Dummy +fastmcp_pkg.Context = _Dummy +server_pkg.fastmcp = fastmcp_pkg +mcp_pkg.server = server_pkg +sys.modules.setdefault("mcp", mcp_pkg) +sys.modules.setdefault("mcp.server", server_pkg) +sys.modules.setdefault("mcp.server.fastmcp", fastmcp_pkg) + + +# Import target module after path injection +import tools.manage_script as manage_script # type: ignore + + +class DummyMCP: + def __init__(self): + self.tools = {} + + def tool(self, *args, **kwargs): # ignore decorator kwargs like description + def _decorator(fn): + self.tools[fn.__name__] = fn + return fn + return _decorator + + +class DummyCtx: # FastMCP Context placeholder + pass + + +def _register_tools(): + mcp = DummyMCP() + manage_script.register_manage_script_tools(mcp) # populates mcp.tools + return mcp.tools + + +def test_split_uri_unity_path(monkeypatch): + tools = _register_tools() + captured = {} + + def fake_send(cmd, params): # capture params and return success + captured['cmd'] = cmd + captured['params'] = params + return {"success": True, "message": "ok"} + + monkeypatch.setattr(manage_script, "send_command_with_retry", fake_send) + + fn = tools['apply_text_edits'] + uri = "unity://path/Assets/Scripts/MyScript.cs" + fn(DummyCtx(), uri=uri, edits=[], precondition_sha256=None) + + assert captured['cmd'] == 'manage_script' + assert captured['params']['name'] == 'MyScript' + assert captured['params']['path'] == 'Assets/Scripts' + + +@pytest.mark.parametrize( + "uri, expected_name, expected_path", + [ + ("file:///Users/alex/Project/Assets/Scripts/Foo%20Bar.cs", "Foo Bar", "Assets/Scripts"), + ("file://localhost/Users/alex/Project/Assets/Hello.cs", "Hello", "Assets"), + 
("file:///C:/Users/Alex/Proj/Assets/Scripts/Hello.cs", "Hello", "Assets/Scripts"), + ("file:///tmp/Other.cs", "Other", "tmp"), # outside Assets → fall back to normalized dir + ], +) +def test_split_uri_file_urls(monkeypatch, uri, expected_name, expected_path): + tools = _register_tools() + captured = {} + + def fake_send(cmd, params): + captured['cmd'] = cmd + captured['params'] = params + return {"success": True, "message": "ok"} + + monkeypatch.setattr(manage_script, "send_command_with_retry", fake_send) + + fn = tools['apply_text_edits'] + fn(DummyCtx(), uri=uri, edits=[], precondition_sha256=None) + + assert captured['params']['name'] == expected_name + assert captured['params']['path'] == expected_path + + +def test_split_uri_plain_path(monkeypatch): + tools = _register_tools() + captured = {} + + def fake_send(cmd, params): + captured['params'] = params + return {"success": True, "message": "ok"} + + monkeypatch.setattr(manage_script, "send_command_with_retry", fake_send) + + fn = tools['apply_text_edits'] + fn(DummyCtx(), uri="Assets/Scripts/Thing.cs", edits=[], precondition_sha256=None) + + assert captured['params']['name'] == 'Thing' + assert captured['params']['path'] == 'Assets/Scripts' + + diff --git a/tests/test_regex_delete_guard.py b/tests/test_regex_delete_guard.py new file mode 100644 index 00000000..c5bba26a --- /dev/null +++ b/tests/test_regex_delete_guard.py @@ -0,0 +1,151 @@ +import sys +import pytest +import pathlib +import importlib.util +import types + + +ROOT = pathlib.Path(__file__).resolve().parents[1] +SRC = ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src" +sys.path.insert(0, str(SRC)) + +# stub mcp.server.fastmcp +mcp_pkg = types.ModuleType("mcp") +server_pkg = types.ModuleType("mcp.server") +fastmcp_pkg = types.ModuleType("mcp.server.fastmcp") +class _D: pass +fastmcp_pkg.FastMCP = _D +fastmcp_pkg.Context = _D +server_pkg.fastmcp = fastmcp_pkg +mcp_pkg.server = server_pkg +sys.modules.setdefault("mcp", mcp_pkg) 
+sys.modules.setdefault("mcp.server", server_pkg) +sys.modules.setdefault("mcp.server.fastmcp", fastmcp_pkg) + + +def _load(path: pathlib.Path, name: str): + spec = importlib.util.spec_from_file_location(name, path) + mod = importlib.util.module_from_spec(spec) + spec.loader.exec_module(mod) + return mod + + +manage_script_edits = _load(SRC / "tools" / "manage_script_edits.py", "manage_script_edits_mod_guard") + + +class DummyMCP: + def __init__(self): self.tools = {} + def tool(self, *args, **kwargs): + def deco(fn): self.tools[fn.__name__] = fn; return fn + return deco + + +def setup_tools(): + mcp = DummyMCP() + manage_script_edits.register_manage_script_edits_tools(mcp) + return mcp.tools + + +def test_regex_delete_structural_guard(monkeypatch): + tools = setup_tools() + apply = tools["script_apply_edits"] + + # Craft a minimal C# snippet with a method; a bad regex that deletes only the header and '{' + # will unbalance braces and should be rejected by preflight. + bad_pattern = r"(?m)^\s*private\s+void\s+PrintSeries\s*\(\s*\)\s*\{" + contents = ( + "using UnityEngine;\n\n" + "public class LongUnityScriptClaudeTest : MonoBehaviour\n{\n" + "private void PrintSeries()\n{\n Debug.Log(\"1,2,3\");\n}\n" + "}\n" + ) + + def fake_send(cmd, params): + # Only the initial read should be invoked; provide contents + if cmd == "manage_script" and params.get("action") == "read": + return {"success": True, "data": {"contents": contents}} + # If preflight failed as intended, no write should be attempted; return a marker if called + return {"success": True, "message": "SHOULD_NOT_WRITE"} + + monkeypatch.setattr(manage_script_edits, "send_command_with_retry", fake_send) + + resp = apply( + ctx=None, + name="LongUnityScriptClaudeTest", + path="Assets/Scripts", + edits=[{"op": "regex_replace", "pattern": bad_pattern, "replacement": ""}], + options={"validate": "standard"}, + ) + + assert isinstance(resp, dict) + assert resp.get("success") is False + assert resp.get("code") == 
"validation_failed" + data = resp.get("data", {}) + assert data.get("status") == "validation_failed" + # Helpful hint to prefer structured delete + assert "delete_method" in (data.get("hint") or "") + + +# Parameterized robustness cases +BRACE_CONTENT = ( + "using UnityEngine;\n\n" + "public class LongUnityScriptClaudeTest : MonoBehaviour\n{\n" + "private void PrintSeries()\n{\n Debug.Log(\"1,2,3\");\n}\n" + "}\n" +) + +ATTR_CONTENT = ( + "using UnityEngine;\n\n" + "public class LongUnityScriptClaudeTest : MonoBehaviour\n{\n" + "[ContextMenu(\"PS\")]\nprivate void PrintSeries()\n{\n Debug.Log(\"1,2,3\");\n}\n" + "}\n" +) + +EXPR_CONTENT = ( + "using UnityEngine;\n\n" + "public class LongUnityScriptClaudeTest : MonoBehaviour\n{\n" + "private void PrintSeries() => Debug.Log(\"1\");\n" + "}\n" +) + + +@pytest.mark.parametrize( + "contents,pattern,repl,expect_success", + [ + # Unbalanced deletes (should fail with validation_failed) + (BRACE_CONTENT, r"(?m)^\s*private\s+void\s+PrintSeries\s*\(\s*\)\s*\{", "", False), + # Remove method closing brace only (leaves class closing brace) -> unbalanced + (BRACE_CONTENT, r"\n\}\n(?=\s*\})", "\n", False), + (ATTR_CONTENT, r"(?m)^\s*private\s+void\s+PrintSeries\s*\(\s*\)\s*\{", "", False), + # Expression-bodied: remove only '(' in header -> paren mismatch + (EXPR_CONTENT, r"(?m)private\s+void\s+PrintSeries\s*\(", "", False), + # Safe changes (should succeed) + (BRACE_CONTENT, r"(?m)^\s*Debug\.Log\(.*?\);\s*$", "", True), + (EXPR_CONTENT, r"Debug\.Log\(\"1\"\)", "Debug.Log(\"2\")", True), + ], +) +def test_regex_delete_variants(monkeypatch, contents, pattern, repl, expect_success): + tools = setup_tools() + apply = tools["script_apply_edits"] + + def fake_send(cmd, params): + if cmd == "manage_script" and params.get("action") == "read": + return {"success": True, "data": {"contents": contents}} + return {"success": True, "message": "WRITE"} + + monkeypatch.setattr(manage_script_edits, "send_command_with_retry", fake_send) + + resp 
= apply( + ctx=None, + name="LongUnityScriptClaudeTest", + path="Assets/Scripts", + edits=[{"op": "regex_replace", "pattern": pattern, "replacement": repl}], + options={"validate": "standard"}, + ) + + if expect_success: + assert isinstance(resp, dict) and resp.get("success") is True + else: + assert isinstance(resp, dict) and resp.get("success") is False and resp.get("code") == "validation_failed" + + diff --git a/tests/test_resources_api.py b/tests/test_resources_api.py new file mode 100644 index 00000000..29082160 --- /dev/null +++ b/tests/test_resources_api.py @@ -0,0 +1,81 @@ +import sys +import types +from pathlib import Path + +import pytest + +# locate server src dynamically to avoid hardcoded layout assumptions +ROOT = Path(__file__).resolve().parents[1] +candidates = [ + ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src", + ROOT / "UnityMcpServer~" / "src", +] +SRC = next((p for p in candidates if p.exists()), None) +if SRC is None: + searched = "\n".join(str(p) for p in candidates) + pytest.skip( + "Unity MCP server source not found. 
Tried:\n" + searched, + allow_module_level=True, + ) +sys.path.insert(0, str(SRC)) + +from tools.resource_tools import register_resource_tools # type: ignore + +class DummyMCP: + def __init__(self): + self._tools = {} + def tool(self, *args, **kwargs): # accept kwargs like description + def deco(fn): + self._tools[fn.__name__] = fn + return fn + return deco + +@pytest.fixture() +def resource_tools(): + mcp = DummyMCP() + register_resource_tools(mcp) + return mcp._tools + + +def test_resource_list_filters_and_rejects_traversal(resource_tools, tmp_path, monkeypatch): + # Create fake project structure + proj = tmp_path + assets = proj / "Assets" / "Scripts" + assets.mkdir(parents=True) + (assets / "A.cs").write_text("// a", encoding="utf-8") + (assets / "B.txt").write_text("b", encoding="utf-8") + outside = tmp_path / "Outside.cs" + outside.write_text("// outside", encoding="utf-8") + # Symlink attempting to escape + sneaky_link = assets / "link_out" + try: + sneaky_link.symlink_to(outside) + except Exception: + # Some platforms may not allow symlinks in tests; ignore + pass + + list_resources = resource_tools["list_resources"] + # Only .cs under Assets should be listed + import asyncio + # asyncio.run avoids the get_event_loop() pattern deprecated since Python 3.10 + resp = asyncio.run( + list_resources(ctx=None, pattern="*.cs", under="Assets", limit=50, project_root=str(proj)) + ) + assert resp["success"] is True + uris = resp["data"]["uris"] + assert any(u.endswith("Assets/Scripts/A.cs") for u in uris) + assert not any(u.endswith("B.txt") for u in uris) + assert not any(u.endswith("Outside.cs") for u in uris) + + +def test_resource_list_rejects_outside_paths(resource_tools, tmp_path): + proj = tmp_path + # under points outside Assets + list_resources = resource_tools["list_resources"] + import asyncio + resp = asyncio.run( + list_resources(ctx=None, pattern="*.cs", under="..", limit=10, project_root=str(proj)) + ) + assert resp["success"] is False + assert "Assets" in 
resp.get("error", "") or "under project root" in resp.get("error", "") diff --git a/tests/test_script_editing.py b/tests/test_script_editing.py new file mode 100644 index 00000000..88046d00 --- /dev/null +++ b/tests/test_script_editing.py @@ -0,0 +1,36 @@ +import pytest + + +@pytest.mark.xfail(strict=False, reason="pending: create new script, validate, apply edits, build and compile scene") +def test_script_edit_happy_path(): + pass + + +@pytest.mark.xfail(strict=False, reason="pending: multiple micro-edits debounce to single compilation") +def test_micro_edits_debounce(): + pass + + +@pytest.mark.xfail(strict=False, reason="pending: line ending variations handled correctly") +def test_line_endings_and_columns(): + pass + + +@pytest.mark.xfail(strict=False, reason="pending: regex_replace no-op with allow_noop honored") +def test_regex_replace_noop_allowed(): + pass + + +@pytest.mark.xfail(strict=False, reason="pending: large edit size boundaries and overflow protection") +def test_large_edit_size_and_overflow(): + pass + + +@pytest.mark.xfail(strict=False, reason="pending: symlink and junction protections on edits") +def test_symlink_and_junction_protection(): + pass + + +@pytest.mark.xfail(strict=False, reason="pending: atomic write guarantees") +def test_atomic_write_guarantees(): + pass diff --git a/tests/test_script_tools.py b/tests/test_script_tools.py new file mode 100644 index 00000000..c7cadd35 --- /dev/null +++ b/tests/test_script_tools.py @@ -0,0 +1,159 @@ +import sys +import pathlib +import importlib.util +import types +import pytest +import asyncio + +# add server src to path and load modules without triggering package imports +ROOT = pathlib.Path(__file__).resolve().parents[1] +SRC = ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src" +sys.path.insert(0, str(SRC)) + +# stub mcp.server.fastmcp to satisfy imports without full dependency +mcp_pkg = types.ModuleType("mcp") +server_pkg = types.ModuleType("mcp.server") +fastmcp_pkg = 
types.ModuleType("mcp.server.fastmcp") + +class _Dummy: + pass + +fastmcp_pkg.FastMCP = _Dummy +fastmcp_pkg.Context = _Dummy +server_pkg.fastmcp = fastmcp_pkg +mcp_pkg.server = server_pkg +sys.modules.setdefault("mcp", mcp_pkg) +sys.modules.setdefault("mcp.server", server_pkg) +sys.modules.setdefault("mcp.server.fastmcp", fastmcp_pkg) + +def load_module(path, name): + spec = importlib.util.spec_from_file_location(name, path) + module = importlib.util.module_from_spec(spec) + spec.loader.exec_module(module) + return module + +manage_script_module = load_module(SRC / "tools" / "manage_script.py", "manage_script_module") +manage_asset_module = load_module(SRC / "tools" / "manage_asset.py", "manage_asset_module") + + +class DummyMCP: + def __init__(self): + self.tools = {} + + def tool(self, *args, **kwargs): # accept decorator kwargs like description + def decorator(func): + self.tools[func.__name__] = func + return func + return decorator + +def setup_manage_script(): + mcp = DummyMCP() + manage_script_module.register_manage_script_tools(mcp) + return mcp.tools + +def setup_manage_asset(): + mcp = DummyMCP() + manage_asset_module.register_manage_asset_tools(mcp) + return mcp.tools + +def test_apply_text_edits_long_file(monkeypatch): + tools = setup_manage_script() + apply_edits = tools["apply_text_edits"] + captured = {} + + def fake_send(cmd, params): + captured["cmd"] = cmd + captured["params"] = params + return {"success": True} + + monkeypatch.setattr(manage_script_module, "send_command_with_retry", fake_send) + + edit = {"startLine": 1005, "startCol": 0, "endLine": 1005, "endCol": 5, "newText": "Hello"} + resp = apply_edits(None, "unity://path/Assets/Scripts/LongFile.cs", [edit]) + assert captured["cmd"] == "manage_script" + assert captured["params"]["action"] == "apply_text_edits" + assert captured["params"]["edits"][0]["startLine"] == 1005 + assert resp["success"] is True + +def test_sequential_edits_use_precondition(monkeypatch): + tools = 
setup_manage_script() + apply_edits = tools["apply_text_edits"] + calls = [] + + def fake_send(cmd, params): + calls.append(params) + return {"success": True, "sha256": f"hash{len(calls)}"} + + monkeypatch.setattr(manage_script_module, "send_command_with_retry", fake_send) + + edit1 = {"startLine": 1, "startCol": 0, "endLine": 1, "endCol": 0, "newText": "//header\n"} + resp1 = apply_edits(None, "unity://path/Assets/Scripts/File.cs", [edit1]) + edit2 = {"startLine": 2, "startCol": 0, "endLine": 2, "endCol": 0, "newText": "//second\n"} + resp2 = apply_edits(None, "unity://path/Assets/Scripts/File.cs", [edit2], precondition_sha256=resp1["sha256"]) + + assert calls[1]["precondition_sha256"] == resp1["sha256"] + assert resp2["sha256"] == "hash2" + + +def test_apply_text_edits_forwards_options(monkeypatch): + tools = setup_manage_script() + apply_edits = tools["apply_text_edits"] + captured = {} + + def fake_send(cmd, params): + captured["params"] = params + return {"success": True} + + monkeypatch.setattr(manage_script_module, "send_command_with_retry", fake_send) + + opts = {"validate": "relaxed", "applyMode": "atomic", "refresh": "immediate"} + apply_edits(None, "unity://path/Assets/Scripts/File.cs", [{"startLine":1,"startCol":1,"endLine":1,"endCol":1,"newText":"x"}], options=opts) + assert captured["params"].get("options") == opts + + +def test_apply_text_edits_defaults_atomic_for_multi_span(monkeypatch): + tools = setup_manage_script() + apply_edits = tools["apply_text_edits"] + captured = {} + + def fake_send(cmd, params): + captured["params"] = params + return {"success": True} + + monkeypatch.setattr(manage_script_module, "send_command_with_retry", fake_send) + + edits = [ + {"startLine": 2, "startCol": 2, "endLine": 2, "endCol": 3, "newText": "A"}, + {"startLine": 3, "startCol": 2, "endLine": 3, "endCol": 2, "newText": "// tail\n"}, + ] + apply_edits(None, "unity://path/Assets/Scripts/File.cs", edits, precondition_sha256="x") + opts = 
captured["params"].get("options", {}) + assert opts.get("applyMode") == "atomic" + +def test_manage_asset_prefab_modify_request(monkeypatch): + tools = setup_manage_asset() + manage_asset = tools["manage_asset"] + captured = {} + + async def fake_async(cmd, params, loop=None): + captured["cmd"] = cmd + captured["params"] = params + return {"success": True} + + monkeypatch.setattr(manage_asset_module, "async_send_command_with_retry", fake_async) + monkeypatch.setattr(manage_asset_module, "get_unity_connection", lambda: object()) + + async def run(): + resp = await manage_asset( + None, + action="modify", + path="Assets/Prefabs/Player.prefab", + properties={"hp": 100}, + ) + assert captured["cmd"] == "manage_asset" + assert captured["params"]["action"] == "modify" + assert captured["params"]["path"] == "Assets/Prefabs/Player.prefab" + assert captured["params"]["properties"] == {"hp": 100} + assert resp["success"] is True + + asyncio.run(run()) diff --git a/tests/test_transport_framing.py b/tests/test_transport_framing.py new file mode 100644 index 00000000..42f93701 --- /dev/null +++ b/tests/test_transport_framing.py @@ -0,0 +1,218 @@ +import sys +import json +import struct +import socket +import threading +import time +import select +from pathlib import Path + +import pytest + +# locate server src dynamically to avoid hardcoded layout assumptions +ROOT = Path(__file__).resolve().parents[1] +candidates = [ + ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src", + ROOT / "UnityMcpServer~" / "src", +] +SRC = next((p for p in candidates if p.exists()), None) +if SRC is None: + searched = "\n".join(str(p) for p in candidates) + pytest.skip( + "Unity MCP server source not found. 
Tried:\n" + searched, + allow_module_level=True, + ) +sys.path.insert(0, str(SRC)) + +from unity_connection import UnityConnection + + +def start_dummy_server(greeting: bytes, respond_ping: bool = False): + """Start a minimal TCP server for handshake tests.""" + sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + sock.bind(("127.0.0.1", 0)) + sock.listen(1) + port = sock.getsockname()[1] + ready = threading.Event() + + def _run(): + ready.set() + conn, _ = sock.accept() + conn.settimeout(1.0) + if greeting: + conn.sendall(greeting) + if respond_ping: + try: + # Read exactly n bytes helper + def _read_exact(n: int) -> bytes: + buf = b"" + while len(buf) < n: + chunk = conn.recv(n - len(buf)) + if not chunk: + break + buf += chunk + return buf + + header = _read_exact(8) + if len(header) == 8: + length = struct.unpack(">Q", header)[0] + payload = _read_exact(length) + if payload == b'{"type":"ping"}': + resp = b'{"type":"pong"}' + conn.sendall(struct.pack(">Q", len(resp)) + resp) + except Exception: + pass + time.sleep(0.1) + try: + conn.close() + except Exception: + pass + finally: + sock.close() + + threading.Thread(target=_run, daemon=True).start() + ready.wait() + return port + + +def start_handshake_enforcing_server(): + """Server that drops connection if client sends data before handshake.""" + sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + sock.bind(("127.0.0.1", 0)) + sock.listen(1) + port = sock.getsockname()[1] + ready = threading.Event() + + def _run(): + ready.set() + conn, _ = sock.accept() + # If client sends any data before greeting, disconnect (poll briefly) + try: + conn.setblocking(False) + deadline = time.time() + 0.15 # short, reduces race with legitimate clients + while time.time() < deadline: + r, _, _ = select.select([conn], [], [], 0.01) + if r: + try: + peek = conn.recv(1, socket.MSG_PEEK) + except BlockingIOError: + peek = b"" + except Exception: + peek = b"\x00" + if peek: + conn.close() + sock.close() + return + # No 
pre-handshake data observed; send greeting + conn.setblocking(True) + conn.sendall(b"MCP/0.1 FRAMING=1\n") + time.sleep(0.1) + finally: + try: + conn.close() + finally: + sock.close() + + threading.Thread(target=_run, daemon=True).start() + ready.wait() + return port + + +def test_handshake_requires_framing(): + port = start_dummy_server(b"MCP/0.1\n") + conn = UnityConnection(host="127.0.0.1", port=port) + assert conn.connect() is False + assert conn.sock is None + + +def test_small_frame_ping_pong(): + port = start_dummy_server(b"MCP/0.1 FRAMING=1\n", respond_ping=True) + conn = UnityConnection(host="127.0.0.1", port=port) + try: + assert conn.connect() is True + assert conn.use_framing is True + payload = b'{"type":"ping"}' + conn.sock.sendall(struct.pack(">Q", len(payload)) + payload) + resp = conn.receive_full_response(conn.sock) + assert json.loads(resp.decode("utf-8"))["type"] == "pong" + finally: + conn.disconnect() + + +def test_unframed_data_disconnect(): + port = start_handshake_enforcing_server() + sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + sock.connect(("127.0.0.1", port)) + sock.settimeout(1.0) + sock.sendall(b"BAD") + time.sleep(0.4) + try: + data = sock.recv(1024) + assert data == b"" + except (ConnectionResetError, ConnectionAbortedError): + # Some platforms raise instead of returning empty bytes when the + # server closes the connection after detecting pre-handshake data. 
+ pass + finally: + sock.close() + + +def test_zero_length_payload_heartbeat(): + # Server that sends handshake and a zero-length heartbeat frame followed by a pong payload + import socket, struct, threading, time + + sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) + sock.bind(("127.0.0.1", 0)) + sock.listen(1) + port = sock.getsockname()[1] + ready = threading.Event() + + def _run(): + ready.set() + conn, _ = sock.accept() + try: + conn.sendall(b"MCP/0.1 FRAMING=1\n") + time.sleep(0.02) + # Heartbeat frame (length=0) + conn.sendall(struct.pack(">Q", 0)) + time.sleep(0.02) + # Real payload frame + payload = b'{"type":"pong"}' + conn.sendall(struct.pack(">Q", len(payload)) + payload) + time.sleep(0.02) + finally: + try: conn.close() + except Exception: pass + sock.close() + + threading.Thread(target=_run, daemon=True).start() + ready.wait() + + conn = UnityConnection(host="127.0.0.1", port=port) + try: + assert conn.connect() is True + # Receive should skip heartbeat and return the pong payload (or empty if only heartbeats seen) + resp = conn.receive_full_response(conn.sock) + assert resp in (b'{"type":"pong"}', b"") + finally: + conn.disconnect() + + +@pytest.mark.skip(reason="TODO: oversized payload should disconnect") +def test_oversized_payload_rejected(): + pass + + +@pytest.mark.skip(reason="TODO: partial header/payload triggers timeout and disconnect") +def test_partial_frame_timeout(): + pass + + +@pytest.mark.skip(reason="TODO: concurrency test with parallel tool invocations") +def test_parallel_invocations_no_interleaving(): + pass + + +@pytest.mark.skip(reason="TODO: reconnection after drop mid-command") +def test_reconnect_mid_command(): + pass
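
For orientation, the wire format these framing tests exercise, negotiated by the `MCP/0.1 FRAMING=1` greeting, is an 8-byte big-endian unsigned length prefix ahead of each JSON payload, with a zero-length frame acting as a heartbeat. A minimal sketch of that convention (helper names here are illustrative, not part of `unity_connection`):

```python
import struct

HEADER = struct.Struct(">Q")  # 8-byte big-endian unsigned length prefix, as the tests pack by hand

def pack_frame(payload: bytes) -> bytes:
    """Prefix a payload with its 8-byte big-endian length header."""
    return HEADER.pack(len(payload)) + payload

def unpack_frames(buf: bytes):
    """Yield complete payloads from a buffer, skipping zero-length heartbeat frames."""
    offset = 0
    while offset + HEADER.size <= len(buf):
        (length,) = HEADER.unpack_from(buf, offset)
        offset += HEADER.size
        payload = buf[offset:offset + length]
        if len(payload) < length:  # partial frame: stop and wait for more bytes
            break
        offset += length
        if length:  # a zero-length frame is a heartbeat; skip it
            yield payload

# A heartbeat frame followed by a framed pong payload, mirroring
# test_zero_length_payload_heartbeat above:
stream = HEADER.pack(0) + pack_frame(b'{"type":"pong"}')
print(list(unpack_frames(stream)))  # [b'{"type":"pong"}']
```

This mirrors what the tests do inline with `struct.pack(">Q", len(payload)) + payload`, and what `receive_full_response` is expected to undo on the client side.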