Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Adds LSP diagnostics caching/awaiting, implementation-request support, diagnostics-aware edit tooling (pre/post snapshots and reporting), JetBrains refactor/inspection endpoints, verified download/extract with SHA256/allowlists and an npm helper, a Ty language server and Python_TY enum, editor edit-path tracking, and many tests and demo scripts.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Tool as Tool (caller)
    participant Editor as Component/CodeEditor
    participant LS as Language Server
    participant Agent as SerenaAgent
    Note over Tool,Editor: Diagnostics-aware edit flow (pre/post snapshot)
    Tool->>Editor: capture_pre_edit_diagnostics(relative_path, min_severity)
    Editor->>LS: request_published_text_document_diagnostics(relative_path)
    LS-->>Editor: diagnostics (generation N)
    Editor-->>Tool: PublishedDiagnosticsSnapshot(generation N, identities)
    Tool->>Editor: apply_workspace_edit(edit)
    Editor->>LS: apply_workspace_edit(...)
    LS-->>Editor: edit_applied
    Editor->>LS: request_published_text_document_diagnostics(relative_path, after_generation=N)
    LS-->>Editor: diagnostics (generation M)
    Editor->>Tool: newly_introduced = compare_identities(before, after)
    Tool-->>Agent: return_formatted_edit_result(with new diagnostics payload)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
```python
find_file_tool = agent.get_tool(FindFileTool)
search_pattern_tool = agent.get_tool(SearchForPatternTool)
overview_tool = agent.get_tool(JetBrainsGetSymbolsOverviewTool)
safe_delete_tool = agent.get_tool(JetBrainsSafeDeleteTool)
inline_symbol = agent.get_tool(JetBrainsInlineSymbol)
inspections_tool = agent.get_tool(JetBrainsRunInspectionsTool)
list_inspections_tool = agent.get_tool(JetBrainsListInspectionsTool)
```
```python
log.error("Aborting installation due to checksum mismatch - possible security issue!")
try:
    os.remove(archive_path)
except OSError:
```
Force-pushed from 9d8ec03 to 3c41536.
Actionable comments posted: 8
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
src/solidlsp/language_servers/systemverilog_server.py (1)
64-77: ⚠️ Potential issue | 🟠 Major

SHA256 verification will fail for custom `verible_version` overrides.

Unlike the HLSL server, which conditionally sets `sha256=None` for non-default versions, this implementation always provides hardcoded SHA256 values. If a user sets a custom `verible_version`, the download will fail integrity checks because the hash won't match the new version's archive. Consider making SHA256 conditional on the version matching the default, similar to the HLSL pattern:
Proposed fix

```diff
+_DEFAULT_VERIBLE_VERSION = "v0.0-4051-g9fdb4057"
+_VERIBLE_SHA256_BY_PLATFORM = {
+    "linux-x64": "f52e5920ef63f70620a6086e09dea8bd778147cd7a9ff827bb7de5d6316b1754",
+    "linux-arm64": "30dd9c6f6e0f4840d6ba0c9e81ea2774a50b5a1a523a855245f9a9b4beb6b58b",
+    "osx-x64": "9ef92e9ad345285dd593763e10ca61c8532fcf47bbb6cf4448f9a9423882d662",
+    "osx-arm64": "9ef92e9ad345285dd593763e10ca61c8532fcf47bbb6cf4448f9a9423882d662",
+    "win-x64": "729aa244036da4a4f87bc026d33555456fc7f7be79778d983ebe9c893f4a0ca3",
+}
+
 class DependencyProvider(LanguageServerDependencyProviderSinglePath):
     def _get_or_install_core_dependency(self) -> str:
         # ...
-        verible_version = self._custom_settings.get("verible_version", "v0.0-4051-g9fdb4057")
+        verible_version = self._custom_settings.get("verible_version", _DEFAULT_VERIBLE_VERSION)
         # ...
         RuntimeDependency(
             # ...
-            sha256="f52e5920ef63f70620a6086e09dea8bd778147cd7a9ff827bb7de5d6316b1754",
+            sha256=_VERIBLE_SHA256_BY_PLATFORM["linux-x64"] if verible_version == _DEFAULT_VERIBLE_VERSION else None,
             allowed_hosts=VERIBLE_ALLOWED_HOSTS,
         ),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/systemverilog_server.py` around lines 64 - 77, The hardcoded SHA256 for the Verible runtime dependency will break when users override verible_version; update the RuntimeDependency instantiation in systemverilog_server.py (the verible_version variable and the RuntimeDependency with id "verible-ls"/binary_name "verible-{verible_version}/bin/verible-verilog-ls") to set sha256 to the known hash only when verible_version equals the default constant (e.g., "v0.0-4051-g9fdb4057"); otherwise set sha256=None (same conditional pattern used in the HLSL server) so custom versions skip integrity verification.

test/solidlsp/csharp/test_csharp_nuget_download.py (1)
102-130: ⚠️ Potential issue | 🟡 Minor

Test `test_download_method_does_not_call_azure_feed` appears incomplete.

The test mocks `FileUtils.download_and_extract_archive_verified` but doesn't assert anything after calling `_download_nuget_package`. The test name suggests it should verify the Azure feed is not accessed, but there's no verification logic.

🔧 Suggested fix to add meaningful assertions
```diff
         # Mock urllib.request.urlopen to track if Azure feed is accessed
         with patch(
             "solidlsp.language_servers.csharp_language_server.FileUtils.download_and_extract_archive_verified",
-        ):
-            dependency_provider._download_nuget_package(test_dependency)
+        ) as mock_download:
+            dependency_provider._download_nuget_package(test_dependency)
+
+            # Verify the download was called with the NuGet.org URL, not Azure
+            mock_download.assert_called_once()
+            called_url = mock_download.call_args[0][0]
+            assert "nuget.org" in called_url, "Should use NuGet.org URL"
+            assert "azure" not in called_url.lower(), "Should not use Azure feed"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/solidlsp/csharp/test_csharp_nuget_download.py` around lines 102 - 130, The test currently patches FileUtils.download_and_extract_archive_verified but makes no assertions; update test_download_method_does_not_call_azure_feed to assert behavior: patch both solidlsp.language_servers.csharp_language_server.FileUtils.download_and_extract_archive_verified (as mock_download) and urllib.request.urlopen (as mock_urlopen) before calling CSharpLanguageServer.DependencyProvider._download_nuget_package(test_dependency), then assert mock_download.assert_called_once() (or called_with the expected nuget.org URL) and assert mock_urlopen.assert_not_called() to verify the Azure feed (urllib.request.urlopen) was not accessed.

test/solidlsp/python/test_symbol_retrieval.py (1)
270-318: ⚠️ Potential issue | 🟠 Major

Remove the duplicated strict nested-function assertions.

The first `func_within_func` check already treats method-local resolution as backend-dependent. Lines 304-318 repeat the same lookup with hard asserts, so this test can still fail on one of the newly added `PYTHON_LANGUAGE_BACKENDS`.

Suggested fix
```diff
-        # Test 2: Find definition of the nested class
-        defining_symbol = language_server.request_defining_symbol(file_path, 15, 18)  # Position of NestedClass
-
-        # This should resolve to the NestedClass
-        assert defining_symbol is not None
-        assert defining_symbol.get("name") == "NestedClass"
-        assert defining_symbol.get("kind") == SymbolKind.Class.value
-
-        # Test 3: Find definition of a method-local function
-        defining_symbol = language_server.request_defining_symbol(file_path, 9, 15)  # Position inside func_within_func
-
-        # This is challenging for many language servers and may fail
-        assert defining_symbol is not None
-        assert defining_symbol.get("name") == "func_within_func"
-        assert defining_symbol.get("kind") in {SymbolKind.Function.value, SymbolKind.Method.value}
```

The earlier tolerant block already covers this case.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/solidlsp/python/test_symbol_retrieval.py` around lines 270 - 318, The test function test_request_defining_symbol_nested_function duplicates the strict assertions for the method-local function lookup (calls to language_server.request_defining_symbol for the func_within_func position); remove the second, stricter block (the repeated requests/asserts that check defining_symbol is not None and kind in {SymbolKind.Function.value, SymbolKind.Method.value}) and keep the earlier tolerant try/warn check that already handles backend-dependent resolution of func_within_func; update only the code inside test_request_defining_symbol_nested_function around the duplicated request_defining_symbol(file_path, 9, 15) checks so the nested-function case is tested once using the tolerant pattern.

src/solidlsp/language_servers/common.py (1)
102-125: ⚠️ Potential issue | 🟠 Major

Don't send configurable npm arguments through the shell.

`build_npm_install_command()` packs `version` and an optional `registry` into a list that is passed to `_run_command()`, which joins list commands with spaces and executes them with `shell=True` on POSIX systems. When version and registry values come from configuration, shell metacharacters become command-injection or command-breakage risks.

Suggested fix:
Modify _run_command to avoid shell interpretation of list arguments
```diff
 @staticmethod
 def _run_command(command: str | list[str], cwd: str) -> None:
     kwargs = subprocess_kwargs()
     if not PlatformUtils.get_platform_id().is_windows():
         import pwd

         kwargs["user"] = pwd.getpwuid(os.getuid()).pw_name  # type: ignore
-    is_windows = platform.system() == "Windows"
-    if not isinstance(command, str) and not is_windows:
-        # Since we are using the shell, we need to convert the command list to a single string
-        # on Linux/macOS
-        command = " ".join(command)
+    use_shell = isinstance(command, str)
     log.info("Running command %s in '%s'", f"'{command}'" if isinstance(command, str) else command, cwd)
     completed_process = subprocess.run(
         command,
-        shell=True,
+        shell=use_shell,
         check=True,
         cwd=cwd,
         stdout=subprocess.PIPE,
         stderr=subprocess.STDOUT,
         **kwargs,
     )  # type: ignore
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/common.py` around lines 102 - 125, The _run_command function currently joins list commands into a single shell string and runs subprocess.run with shell=True on POSIX, which allows injection via configurable npm args; change it so that when command is a list (e.g., built by build_npm_install_command) you pass it directly to subprocess.run with shell=False (do not join into a string), only use shell=True for string commands if necessary, and ensure on Windows you keep the correct behavior (handle is_windows via platform.system()). Also adjust the log call to represent the list safely (without joining) and keep existing kwargs (including the pwd-based "user") unchanged.

src/solidlsp/language_servers/lua_ls.py (1)
52-74: ⚠️ Potential issue | 🟠 Major

Use version-aware paths and require checksums for all downloaded versions.

The version override mechanism is ineffective because `_get_lua_ls_path()` returns any existing binary without version verification, and all versions install to the same directory. Changing `lua_language_server_version` only triggers a download if no binary exists; otherwise, the cached binary is reused. Additionally, custom versions (non-3.15.0) skip SHA-256 verification by passing `expected_sha256=None`, allowing unverified binaries to execute.

Suggested fixes:
- Include the version in the installation directory path (e.g., `<resources>/lua/{version}/bin/lua-language-server`) to distinguish between versions
- Add SHA-256 hashes for supported custom versions or reject unsigned downloads
- Update `_get_lua_ls_path()` to be version-aware when checking cached binaries

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/lua_ls.py` around lines 52 - 74, The current _get_lua_ls_path() returns any existing lua-language-server binary without regard to configured version and downloads may skip checksum; update it to be version-aware by incorporating solidlsp_settings.lua_language_server_version into the resource path (e.g., use LuaLanguageServer.ls_resources_dir(solidlsp_settings)/"lua"/{version}/"bin"/"lua-language-server") and when checking cached binaries prefer the path for the requested version before falling back to generic locations; ensure the installer always requires a SHA-256 for any non-default version (add known hashes for supported custom versions or refuse downloads when expected_sha256 is missing) and adjust any code that calls LuaLanguageServer.ls_resources_dir or the downloader to use the versioned directory so different versions do not collide.
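A minimal sketch of the suggested fix, assuming a pinned-hash table and a version-scoped install layout (the constant names and the placeholder hash are illustrative, not taken from `lua_ls.py`):

```python
from pathlib import Path

# Hypothetical known-good hashes; only the default version has a pinned hash
# in the original code, so any additional entries would need to be sourced.
KNOWN_SHA256 = {
    "3.15.0": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder
}


def resolve_lua_ls_path(resources_dir: str, version: str) -> tuple[Path, str]:
    """Return the version-scoped binary path and the checksum required for it.

    Raises if no SHA-256 is pinned for the requested version, so an
    unverified archive is never downloaded or executed.
    """
    sha256 = KNOWN_SHA256.get(version)
    if sha256 is None:
        raise ValueError(f"No SHA-256 pinned for lua-language-server {version}; refusing to download")
    # Each version installs into its own directory, so switching versions can
    # never silently reuse a previously cached binary of a different version.
    return Path(resources_dir) / "lua" / version / "bin" / "lua-language-server", sha256
```

With this layout, a cached-binary check against the versioned path is inherently version-aware.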
🟠 Major comments (15)
test/resources/repos/kotlin/test_repo/gradlew (1)
117-117: ⚠️ Potential issue | 🟠 Major

Fix over-escaped `CLASSPATH` initialization.

Line 117 sets `CLASSPATH` to a literal `\"\"` string instead of an empty value. That can leak malformed path data into line 175 (cygpath) on Cygwin/MSYS.

🔧 Proposed fix
```diff
-CLASSPATH="\\\"\\\""
+CLASSPATH=""
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/resources/repos/kotlin/test_repo/gradlew` at line 117, The CLASSPATH variable is over-escaped and set to the literal string \"\"; change its initialization so CLASSPATH is truly empty (e.g., CLASSPATH="" or CLASSPATH=) instead of CLASSPATH="\\\"\\\"" to avoid leaking malformed path data into the later cygpath handling (see use of cygpath around line where CLASSPATH is converted). Update the assignment of CLASSPATH in the gradlew script accordingly so downstream path logic receives an actual empty value.

test/resources/repos/kotlin/test_repo/gradlew.bat (1)
1-94: ⚠️ Potential issue | 🟠 Major

Use CRLF line endings for this `.bat` file.

Lines 1 through 94 appear to be LF-only. On Windows `cmd.exe`, that can break batch parsing in edge cases and make wrapper execution flaky.

💡 Proposed fix
```diff
+# .gitattributes (repo root)
+test/resources/repos/kotlin/test_repo/gradlew.bat text eol=crlf
```

Then re-save `test/resources/repos/kotlin/test_repo/gradlew.bat` with CRLF and recommit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/resources/repos/kotlin/test_repo/gradlew.bat` around lines 1 - 94, The gradlew.bat file uses LF-only endings which can break Windows cmd parsing; open test/resources/repos/kotlin/test_repo/gradlew.bat (contains labels like :execute and :fail and calls to "%JAVA_EXE%") and convert/save the file with CRLF line endings, ensure the repository stores the file with CRLF (or update .gitattributes / set core.autocrlf appropriately), then recommit the updated gradlew.bat so Windows environments run the :execute / :fail flow reliably.

src/solidlsp/language_servers/dart_language_server.py (1)
39-51: ⚠️ Potential issue | 🟠 Major

Require checksums for custom Dart SDK versions.

When `dart_sdk_version != "3.7.1"`, every dependency falls back to `sha256=None`, so the override path downloads and executes an unverified SDK. That undermines the new secure-download behavior. Please require a checksum alongside the override (for each platform artifact), or resolve trusted hashes before install.

Also applies to: 56-61, 66-71, 76-81, 86-91
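A sketch of the "no checksum, no download" rule for version overrides; the settings key `dart_sdk_sha256`, the `DEFAULT_*` constants, and the placeholder hash are assumptions for illustration:

```python
DEFAULT_VERSION = "3.7.1"
DEFAULT_SHA256 = {"linux-x64": "a" * 64}  # placeholder pinned hashes, per platform


def checksum_for(version: str, platform_id: str, settings: dict) -> str:
    """Resolve the SHA-256 that a Dart SDK download must be verified against."""
    if version == DEFAULT_VERSION:
        # Default version ships with hashes pinned in the source
        return DEFAULT_SHA256[platform_id]
    # A custom version must bring its own trusted hash (e.g. via the
    # language-server-specific settings); otherwise installation is refused
    # rather than silently skipping verification.
    override = settings.get("dart_sdk_sha256")
    if not override:
        raise ValueError(f"dart_sdk_sha256 must be set when overriding dart_sdk_version (got {version})")
    return override
```

Failing fast here keeps every `RuntimeDependency` verified, regardless of which version the user configured.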
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/dart_language_server.py` around lines 39 - 51, The current logic in dart_language_server.py sets sha256=None for any dart_sdk_version != "3.7.1", leaving RuntimeDependency entries (created in RuntimeDependencyCollection / RuntimeDependency for DartLanguageServer with fields like url, platform_id, archive_type, binary_name, allowed_hosts/DART_ALLOWED_HOSTS) unverified; update the code to require a checksum for custom SDK overrides by validating dart_sdk_version overrides: if a user supplies a non-default version, require/accept a corresponding sha256 (from solidlsp_settings or environment) and populate RuntimeDependency.sha256 accordingly, or else raise/return an error to stop installation; apply this same requirement for all platform artifacts created in the blocks that produce RuntimeDependency entries (the repeated sections noted around lines 56-61, 66-71, 76-81, 86-91) so no RuntimeDependency is created with sha256=None.

src/solidlsp/language_servers/terraform_ls.py (1)
101-154: ⚠️ Potential issue | 🟠 Major

`terraform_ls_version` overrides disable checksum verification and may silently reuse old binaries.

When the version differs from `0.36.5`, every dependency entry sets `sha256=None`, so the download loses integrity verification. Additionally, lines 160–163 always use a versionless install path and skip reinstallation if the binary already exists, which means changing the configured version can silently keep launching the old binary.

Suggested fix
```diff
     cls._ensure_tf_command_available()
     terraform_settings = solidlsp_settings.get_ls_specific_settings(Language.TERRAFORM)
     terraform_ls_version = terraform_settings.get("terraform_ls_version", "0.36.5")
+    terraform_ls_sha256 = terraform_settings.get("terraform_ls_sha256")
+    if terraform_ls_version != "0.36.5" and not terraform_ls_sha256:
+        raise ValueError("terraform_ls_sha256 is required when overriding terraform_ls_version")
     platform_id = PlatformUtils.get_platform_id()
     deps = RuntimeDependencyCollection(
         [
             RuntimeDependency(
                 id="TerraformLS",
@@
-                sha256="fee8743aa71fe2d8b0b9b91283b844cfa57d58457306a62e53a8f38d143cec8c" if terraform_ls_version == "0.36.5" else None,
+                sha256=terraform_ls_sha256 or "fee8743aa71fe2d8b0b9b91283b844cfa57d58457306a62e53a8f38d143cec8c",
                 allowed_hosts=TERRAFORM_LS_ALLOWED_HOSTS,
             ),
+            # Apply the same `terraform_ls_sha256 or <default hash>` pattern to the other platform entries.
         ]
     )
     dependency = deps.get_single_dep_for_current_platform()
-    terraform_ls_executable_path = deps.binary_path(cls.ls_resources_dir(solidlsp_settings))
+    install_dir = os.path.join(cls.ls_resources_dir(solidlsp_settings), terraform_ls_version)
+    terraform_ls_executable_path = deps.binary_path(install_dir)
     if not os.path.exists(terraform_ls_executable_path):
         log.info(f"Downloading terraform-ls from {dependency.url}")
-        deps.install(cls.ls_resources_dir(solidlsp_settings))
+        deps.install(install_dir)
```

Also applies to: 160–163
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/terraform_ls.py` around lines 101 - 154, The RuntimeDependency entries set sha256=None whenever terraform_ls_version != "0.36.5" which disables integrity checks, and the installer uses a versionless install path so an existing binary can be reused when the configured version changes; update the logic around terraform_ls_version and installation to (1) maintain per-version checksums (or require explicit opt-in to disable checks) instead of conditionally assigning None in the RuntimeDependency constructors (reference the terraform_ls_version variable and the RuntimeDependency(...) sha256 arguments), and (2) include the terraform_ls_version in the install path / artifact location and in the existence check so installs are versioned (reference the install code that currently checks for an existing terraform-ls/terraform-ls.exe and the RuntimeDependencyCollection handling) — this ensures downloads keep integrity verification per-version and switching terraform_ls_version forces a new install rather than silently reusing an old binary.

src/solidlsp/language_servers/pascal_server.py (1)
620-625: ⚠️ Potential issue | 🟠 Major

Mutating class attributes creates race conditions with multiple instances.

`_setup_runtime_dependencies` modifies `cls.PASLS_VERSION`, `cls.PASLS_RELEASES_URL`, and `cls.PASLS_API_URL`. These are class-level attributes shared across all instances. If multiple `PascalLanguageServer` instances are created concurrently with different `pasls_version` settings, they will overwrite each other's URLs. Consider using instance attributes or passing the version/URLs through the call chain instead of mutating class state.
Proposed fix using instance-scoped values
```diff
 @classmethod
 def _setup_runtime_dependencies(cls, solidlsp_settings: SolidLSPSettings) -> str:
     """
     Setup runtime dependencies for Pascal Language Server (pasls).
     Downloads the pinned Serena-supported pasls release with checksum verification.

     Returns:
         str: The command to start the pasls server
     """
     pascal_settings = solidlsp_settings.get_ls_specific_settings(Language.PASCAL)
     pasls_version = pascal_settings.get("pasls_version", PASLS_VERSION)
-    cls.PASLS_VERSION = pasls_version
-    cls.PASLS_RELEASES_URL = f"https://github.com/zen010101/pascal-language-server/releases/download/{pasls_version}"
-    cls.PASLS_API_URL = f"https://api.github.com/repos/zen010101/pascal-language-server/releases/tags/{pasls_version}"
+    pasls_releases_url = f"https://github.com/zen010101/pascal-language-server/releases/download/{pasls_version}"
+    pasls_api_url = f"https://api.github.com/repos/zen010101/pascal-language-server/releases/tags/{pasls_version}"
```

Then pass these URLs to the methods that need them rather than relying on class attributes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/pascal_server.py` around lines 620 - 625, The _setup_runtime_dependencies method currently mutates class attributes (cls.PASLS_VERSION, cls.PASLS_RELEASES_URL, cls.PASLS_API_URL) which can cause race conditions across concurrent PascalLanguageServer instances; change the implementation to use instance-scoped values (e.g., self.pasls_version, self.pasls_releases_url, self.pasls_api_url) or local variables and pass those values down through the call chain to any methods that need them instead of writing to class attributes, and update callers of _setup_runtime_dependencies and any methods referencing PASLS_* to accept and use the instance/local values.

src/serena/tools/tools_base.py (1)
34-43: ⚠️ Potential issue | 🟠 Major

Don't include absolute ranges in the pre/post diagnostic identity.

Using start/end positions as part of `DiagnosticIdentity` means any insert/delete above an existing warning turns the same warning into a "new" one. That will create false positives on edits that only shift lines. Compare diagnostics on a stable key first (for example message/code/source/severity, optionally owner symbol) and use the range only as a secondary tiebreaker.

Also applies to: 282-295, 418-422
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/serena/tools/tools_base.py` around lines 34 - 43, The DiagnosticIdentity dataclass currently encodes absolute start/end positions which makes identities unstable across edits; remove the start_line/start_character/end_line/end_character fields from DiagnosticIdentity and keep only stable keys (message, severity, code_repr, source, and optionally an owner symbol), then update all places that construct or compare DiagnosticIdentity instances to first compare on those stable fields and only use the diagnostic range (if needed) as a secondary tiebreaker; update constructors/factories and equality/hash/compare logic accordingly so range information is no longer part of the primary identity.

test/serena/test_serena_agent.py (1)
1272-1296: 🛠️ Refactor suggestion | 🟠 Major

Add snapshot coverage for this symbolic edit flow.

These `ReplaceSymbolBodyTool` assertions only check a couple of substrings, so response-shape, severity-grouping, and path-format regressions can slip through unnoticed. The repo rule for symbolic editing operations expects snapshot tests here.

Based on learnings: "Applies to test/**/*.py : Symbolic editing operations must have snapshot tests".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/serena/test_serena_agent.py` around lines 1272 - 1296, Update the test_replace_symbol_body_reports_new_diagnostics flow to include a snapshot assertion that captures the full diagnostics response shape instead of only checking substrings: after calling ReplaceSymbolBodyTool.apply and parsing results with parse_edit_diagnostics_result, serialize the diagnostics (use the same normalized_relative_path lookup with project_file_modification_context) and assert it against a stored snapshot (use your test suite's snapshot fixture/mechanism) to cover response shape, severity grouping, and path formatting; keep existing substring checks if desired but make the snapshot the primary guard against regressions.

test/conftest.py (1)
255-276: ⚠️ Potential issue | 🟠 Major

Add availability gating for `PYTHON_TY` when `uvx` or `uv` is unavailable.

The `TyLanguageServer.create_launch_command()` method requires either `uvx` or `uv` to launch the server, raising a `RuntimeError` if neither is found. However, `Language.PYTHON_TY` currently has no availability check in its pytest marker definition or in `_determine_disabled_languages()`. On machines without `uv`/`uvx` installed, these parametrized test cases will fail during fixture setup instead of skipping cleanly like other language-specific test suites (e.g., `CLOJURE`, `LEAN4`).

Add a `pytest.mark.skipif` condition to the `PYTHON_TY` marker definition to check for `uvx` or `uv` availability, consistent with how other backends are gated.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/conftest.py` around lines 255 - 276, The PYTHON_TY entry in the _LANGUAGE_PYTEST_MARKERS dict lacks an availability guard and should skip tests when neither uvx nor uv is present; update the Language.PYTHON_TY value in _LANGUAGE_PYTEST_MARKERS to include a pytest.mark.skipif that checks (_sh.which("uvx") or _sh.which("uv")) and supplies a reason like "uvx or uv is not installed" so TyLanguageServer.create_launch_command won't raise during fixture setup.

src/solidlsp/ls_utils.py (1)
169-179: ⚠️ Potential issue | 🟠 Major

Don't change `PathUtils.get_relative_path` semantics without updating callers.

This now returns `None` on incompatible Windows roots, but `SolidLanguageServer.request_dir_overview` still relies on the old `ValueError` path to trigger its fallback logic. That branch no longer runs and now falls through to the later `assert path_str is not None` instead. Please keep one contract consistently: either still raise here, or update the remaining callers to branch on `None`.
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/ls_utils.py` around lines 169 - 179, The new PathUtils.get_relative_path changes behavior to return None for incompatible Windows drive roots, which breaks callers expecting a ValueError (notably SolidLanguageServer.request_dir_overview); restore the original contract by raising a ValueError when PureWindowsPath(path).drive.lower() != PureWindowsPath(base_path).drive.lower() (in PathUtils.get_relative_path) or, alternatively, update callers such as SolidLanguageServer.request_dir_overview to check for None instead of relying on a ValueError; pick one consistent approach across the codebase and implement it (reference symbols: PathUtils.get_relative_path and SolidLanguageServer.request_dir_overview).

src/serena/jetbrains/jetbrains_plugin_client.py (1)
428-441: ⚠️ Potential issue | 🟠 Major

Fail fast on unsupported JetBrains plugin versions.

`get_supertypes`/`get_subtypes` already guard newer plugin routes, but these new endpoints don't. If Serena is updated before the IDE plugin, callers will get opaque HTTP/API failures here instead of the existing "update the plugin" guidance. Please add the same `_require_version_at_least(...)` preflight once the minimum compatible plugin version is defined.

Also applies to: 522-557, 579-647
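A minimal sketch of such a preflight, assuming a client that knows its plugin version as a `(major, minor, patch)` tuple; the class, method names, and minimum version here are illustrative, not the actual `jetbrains_plugin_client.py` API:

```python
class PluginVersionError(RuntimeError):
    pass


class PluginClient:
    def __init__(self, plugin_version: tuple[int, int, int]):
        self.plugin_version = plugin_version

    def _require_version_at_least(self, minimum: tuple[int, int, int], feature: str) -> None:
        """Fail fast with actionable guidance instead of an opaque HTTP error."""
        if self.plugin_version < minimum:
            raise PluginVersionError(
                f"{feature} requires plugin >= {'.'.join(map(str, minimum))}; "
                f"installed {'.'.join(map(str, self.plugin_version))}. Please update the plugin."
            )

    def move_symbol(self, name_path: str, target: str) -> None:
        # Preflight runs before any request is built or sent
        self._require_version_at_least((0, 2, 0), "move_symbol")
        ...  # build request_data and call the backend
```

The check costs nothing on the happy path and turns a confusing 404 into a clear "update the plugin" message.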
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/serena/jetbrains/jetbrains_plugin_client.py` around lines 428 - 441, The new endpoint methods (e.g., move_symbol) lack the preflight version check used by get_supertypes/get_subtypes, so callers get opaque HTTP errors when the IDE plugin is older; add a call to self._require_version_at_least("<min_version>") at the top of each new endpoint method (for move_symbol and the other endpoint methods added in the same file) before building request_data or calling self._make_request; use the same minimum version string used by get_supertypes/get_subtypes (or a newly defined constant) and preserve the existing error/message semantics from _require_version_at_least.

src/serena/tools/jetbrains_tools.py (1)
92-118: ⚠️ Potential issue | 🟠 Major

Reject empty move requests before opening the JetBrains client.

This currently accepts an all-`None` call and forwards it to the backend, so callers get a plugin error instead of deterministic tool validation. Please validate that the request identifies something to move and somewhere to move it.

💡 Suggested fix
```diff
     def apply(
         self,
         name_path: str | None = None,
         relative_path: str | None = None,
         target_parent_name_path: str | None = None,
         target_relative_path: str | None = None,
     ) -> str:
+        if name_path is None and relative_path is None:
+            raise ValueError("At least one of `name_path` or `relative_path` must be provided")
+        if target_parent_name_path is None and target_relative_path is None:
+            raise ValueError("At least one of `target_parent_name_path` or `target_relative_path` must be provided")
+
         """
         Moves a symbol to a different location (file or parent symbol) using the JetBrains refactoring engine.
```

Based on learnings: "New tools must inherit from the `Tool` base class in `src/serena/tools/tools_base.py`, implement required methods and parameter validation, be registered in appropriate tool registry, and added to context/mode configurations"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/serena/tools/jetbrains_tools.py` around lines 92 - 118, The apply method currently forwards all-None parameters to the backend; before opening JetBrainsPluginClient.from_project validate the request: ensure at least one source identifier is provided (name_path or relative_path) and at least one target identifier is provided (target_parent_name_path or target_relative_path); if the check fails, raise a clear ValueError (or ToolValidationError) with a descriptive message so callers get deterministic validation instead of a plugin error. Reference: the apply(...) method and the JetBrainsPluginClient.from_project call so the guard runs before the client is created.

src/solidlsp/ls_utils.py (1)
232-247: ⚠️ Potential issue | 🟠 Major

Redirect bypass in host validation exposes an SSRF vulnerability.

`requests.get()` follows redirects automatically by default, and `response.url` contains the final redirected URL. A whitelisted URL that redirects to an unapproved host will cause the client to fetch from that host before the validation on line 246 rejects `response.url`. The actual content transfer occurs during the `requests.get()` call on line 241, circumventing the egress/SSRF control.

Fix by either disabling redirects with `allow_redirects=False` or validating each redirect hop before following it.
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/ls_utils.py` around lines 232 - 247, requests.get is following redirects before FileUtils._validate_download_host can check the final URL, allowing SSRF; modify the download flow in the function that creates temp_file_path so you disable automatic redirects (use requests.get(..., allow_redirects=False)) and validate every redirect hop with FileUtils._validate_download_host against allowed_hosts before issuing a new request, or implement manual redirect-following: on 3xx responses read the Location header, resolve it to an absolute URL, call FileUtils._validate_download_host(location, allowed_hosts) and only then call requests.get for the next hop, repeating until a non-redirect (200) response is reached; ensure you still validate the final response.url and proceed to stream to temp_file_path.src/serena/symbol.py-1051-1068 (1)
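For illustration, the hop-by-hop validation described in this finding can be sketched as follows. This is a minimal sketch, not Serena's `FileUtils` API: `fetch_with_validated_redirects`, `validate_host`, and the allowlist contents are hypothetical names chosen for the example.

```python
from urllib.parse import urljoin, urlparse

ALLOWED_HOSTS = {"github.com", "objects.githubusercontent.com"}  # illustrative allowlist


def validate_host(url: str, allowed_hosts: set[str]) -> None:
    # Reject any URL whose host is not explicitly allowlisted.
    host = urlparse(url).hostname
    if host not in allowed_hosts:
        raise ValueError(f"Download host not allowed: {host!r}")


def fetch_with_validated_redirects(url: str, fetch, max_hops: int = 5):
    """Follow redirects manually, validating every hop *before* requesting it.

    `fetch` is any callable returning an object with `.status_code`, `.headers`,
    and `.content`, e.g. `lambda u: requests.get(u, allow_redirects=False)`.
    """
    for _ in range(max_hops):
        validate_host(url, ALLOWED_HOSTS)
        response = fetch(url)
        if response.status_code in (301, 302, 303, 307, 308):
            # Resolve relative Location headers against the current URL.
            url = urljoin(url, response.headers["Location"])
            continue
        return response
    raise RuntimeError("Too many redirects")
```

With this shape, a whitelisted URL that redirects to an unapproved host fails at `validate_host` before any request is ever sent to that host, instead of after the body has already been transferred.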
1051-1068: ⚠️ Potential issue | 🟠 Major
Normalize the primary symbol before grouping diagnostics.
The reference path lifts low-level symbols to their structural owner, but the initially resolved `symbol` is inserted as-is. For locals/parameters that often narrows the scan to a tiny range and produces empty or inconsistent results compared with the reference symbols.

💡 Suggested fix

```diff
-        symbols_to_check: "OrderedDict[tuple[str | None, int | None, int | None, str], LanguageServerSymbol]" = OrderedDict()
-        symbols_to_check[self._symbol_identity(symbol)] = symbol
+        symbol = self._normalize_symbol_for_diagnostics(symbol)
+        symbols_to_check: "OrderedDict[tuple[str | None, int | None, int | None, str], LanguageServerSymbol]" = OrderedDict()
+        symbols_to_check[self._symbol_identity(symbol)] = symbol
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/serena/symbol.py` around lines 1051 - 1068, When inserting the initially resolved symbol into symbols_to_check, normalize it to its structural owner using the symbol's reference path/location before grouping diagnostics; specifically, after resolving or creating `symbol` (from `find_by_location` or `LanguageServerSymbol`), if `symbol` exposes a reference (e.g., `symbol.reference_path` or `symbol.reference_location`), call `self.find_by_location` with that reference and replace `symbol` with the returned symbol (fall back to the original if None) before doing `symbols_to_check[self._symbol_identity(symbol)] = symbol` so the primary entry uses the lifted/normalized symbol.src/solidlsp/ls.py-1325-1335 (1)
1325-1335: ⚠️ Potential issue | 🟠 Major
Don't index optional diagnostic fields unconditionally.
The publish-diagnostics path correctly treats `severity` and `code` as optional using `.get()` with type checks. The pull-diagnostics code unconditionally indexes both keys, which will raise `KeyError` and abort diagnostics retrieval for any server omitting either field.

💡 Suggested fix

```diff
             for item in response["items"]:  # type: ignore
                 new_item: ls_types.Diagnostic = {
                     "uri": uri,
-                    "severity": item["severity"],
                     "message": item["message"],
                     "range": item["range"],
-                    "code": item["code"],  # type: ignore
                 }
+                if "severity" in item:
+                    new_item["severity"] = item["severity"]
+                if "code" in item:
+                    new_item["code"] = item["code"]  # type: ignore
                 if "source" in item:
                     new_item["source"] = item["source"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/ls.py` around lines 1325 - 1335, The pull-diagnostics loop unconditionally indexes optional fields "severity" and "code" which can raise KeyError; update the loop over response["items"] to access item.get("severity") and item.get("code") and only add those keys into new_item when present and of the expected type (mirror the publish-diagnostics checks), i.e., validate severity is an int/expected enum and code is the expected type before setting new_item["severity"] or new_item["code"], then construct ls_types.Diagnostic(**new_item) as before using uri, message, range and optional fields when valid.src/serena/tools/symbol_tools.py-848-850 (1)
848-850: ⚠️ Potential issue | 🟠 Major
Return post-delete diagnostics the same way the other edit tools do.
Line 850 returns bare `SUCCESS_RESULT`, so deletions are the one symbolic edit in this module that do not surface newly introduced diagnostics. That makes `SafeDeleteSymbol` report success even when the delete leaves the file or project invalid.

🩺 Proposed fix

```diff
+        edited_file_paths = [EditedFilePath(symbol_rel_path, symbol_rel_path)]
+        diagnostics_snapshot = self._capture_published_lsp_diagnostics_snapshot(edited_file_paths)
         code_editor = self.create_ls_code_editor()
         code_editor.delete_symbol(symbol_name_path, relative_file_path=symbol_rel_path)
-        return SUCCESS_RESULT
+        return self._format_lsp_edit_result_with_new_diagnostics(
+            SUCCESS_RESULT, edited_file_paths, diagnostics_snapshot
+        )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/serena/tools/symbol_tools.py` around lines 848 - 850, The deletion flow currently returns SUCCESS_RESULT directly after calling create_ls_code_editor().delete_symbol(...), which skips returning post-edit diagnostics; update SafeDeleteSymbol to mirror other edit tools by invoking the same post-edit diagnostics helper used elsewhere in this module (i.e., after calling create_ls_code_editor() and delete_symbol) and return that result instead of SUCCESS_RESULT so any diagnostics introduced by the delete are surfaced; keep references to create_ls_code_editor, delete_symbol, and SUCCESS_RESULT when making the change.
🟡 Minor comments (11)
src/solidlsp/ls_process.py-153-153 (1)
153-153: ⚠️ Potential issue | 🟡 Minor
Potential race condition on `_notification_observers` list.
The `_notification_observers` list is modified by `on_any_notification` (append) and iterated in `_notification_handler` without synchronization. If an observer is registered while notifications are being processed (from different threads), this could lead to undefined behavior.
Consider adding a lock similar to other shared resources in this class:

🔒 Proposed fix to add thread-safe access

```diff
 self._notification_observers: list[Callable[[str, Any], None]] = []
+self._notification_observers_lock = threading.Lock()
```

```diff
 def on_any_notification(self, cb: Callable[[str, Any], None]) -> None:
     """
     Register an observer that is invoked for every notification received from the server.
     """
-    self._notification_observers.append(cb)
+    with self._notification_observers_lock:
+        self._notification_observers.append(cb)
```

```diff
 def _notification_handler(self, response: StringDict) -> None:
     ...
-    for observer in self._notification_observers:
+    with self._notification_observers_lock:
+        observers = list(self._notification_observers)
+    for observer in observers:
         try:
             observer(method, params)
```

Also applies to: 514-518, 571-579
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/ls_process.py` at line 153, The _notification_observers list is not thread-safe: update it in on_any_notification and iterate it in _notification_handler without synchronization, which can race; add a dedicated threading.Lock (e.g., self._notification_lock) initialized alongside other locks and use it when mutating or reading the list — acquire the lock to append/remove observers in on_any_notification and when preparing to notify in _notification_handler (take a shallow copy of the list while holding the lock, then release the lock and iterate the copy to invoke callbacks) to avoid holding the lock during callbacks; apply the same locking pattern wherever _notification_observers is accessed.test/solidlsp/python/test_python_basic.py-103-105 (1)
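Since the proposed fix above is fragmentary, here is a self-contained sketch of the same copy-on-notify pattern. `NotificationHub` is a hypothetical stand-in for the `ls_process` class, not its actual API:

```python
import threading
from typing import Any, Callable


class NotificationHub:
    def __init__(self) -> None:
        self._observers: list[Callable[[str, Any], None]] = []
        self._lock = threading.Lock()

    def on_any_notification(self, cb: Callable[[str, Any], None]) -> None:
        # Registration and iteration share one lock.
        with self._lock:
            self._observers.append(cb)

    def notify(self, method: str, params: Any) -> None:
        with self._lock:
            observers = list(self._observers)  # snapshot taken under the lock
        for cb in observers:  # callbacks run without holding the lock
            cb(method, params)
```

Iterating a snapshot makes concurrent registrations safe while avoiding user callbacks being invoked with the lock held, which would risk deadlocks if a callback re-registers.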
103-105: ⚠️ Potential issue | 🟡 Minor
Add a Python pytest marker to `TestProjectBasics` for selective runs.
`TestProjectBasics` is Python-specific but currently unmarked, so marker-based selection can miss or misclassify it (Line [103] and Line [211]).

Suggested patch

```diff
+@pytest.mark.python
 class TestProjectBasics:
```

As per coding guidelines, "Language-specific tests use pytest markers for selective testing".
Also applies to: 211-212
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/solidlsp/python/test_python_basic.py` around lines 103 - 105, The test class TestProjectBasics is Python-specific and needs a pytest marker for selective runs; add the pytest.mark.<marker_name> decorator to the class definition (above class TestProjectBasics) so the whole class is marked, and do the same for the second unmarked TestProjectBasics occurrence referenced around the later block (ensure both class declarations use the same marker and keep the existing parametrized test method test_retrieve_content_around_line and signatures unchanged).test/solidlsp/test_defining_symbol_matrix.py-137-139 (1)
137-139: ⚠️ Potential issue | 🟡 Minor
Match the resolved filename exactly.
Substring containment can turn this into a false positive if the server returns something like `SomeModel.java` or `helper.php.bak`. Comparing the basename is stricter and still cross-platform.

♻️ Proposed fix

```diff
-    assert expected_definition_file in defining_symbol["location"].get("relativePath", ""), (
+    actual_definition_path = Path(defining_symbol["location"].get("relativePath", ""))
+    assert actual_definition_path.name == expected_definition_file, (
         f"Expected defining symbol in {expected_definition_file!r}, got: {defining_symbol}"
     )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/solidlsp/test_defining_symbol_matrix.py` around lines 137 - 139, The test currently checks substring containment of expected_definition_file in defining_symbol["location"].get("relativePath", "") which can yield false positives; change it to compare the resolved filename exactly by extracting the basename of the returned path (e.g., via os.path.basename or pathlib.Path(...).name) and assert that that basename equals expected_definition_file, referencing the defining_symbol variable and its "location" -> "relativePath" field.test/solidlsp/util/test_ls_utils.py-3-5 (1)
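The substring false positive from this finding is easy to demonstrate; `filename_matches` is an illustrative helper, not part of the test suite:

```python
from pathlib import Path


def filename_matches(relative_path: str, expected_name: str) -> bool:
    # Compare basenames exactly instead of using substring containment.
    return Path(relative_path).name == expected_name
```

`"Model.java" in "src/models/SomeModel.java"` is true, so a containment check would accept the wrong file, while the basename comparison rejects it.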
3-5: ⚠️ Potential issue | 🟡 Minor
Add a return type annotation to `_FakeResponse.iter_content`.
The method lacks an explicit return type, violating strict typing requirements. It should be annotated as `-> Iterator[bytes]` since it yields bytes objects.

Proposed fix

```diff
+from collections.abc import Iterator
 from pathlib import Path
 from unittest.mock import patch
@@
-    def iter_content(self, chunk_size: int = 1):
+    def iter_content(self, chunk_size: int = 1) -> Iterator[bytes]:
         for offset in range(0, len(self._payload), chunk_size):
             yield self._payload[offset : offset + chunk_size]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/solidlsp/util/test_ls_utils.py` around lines 3 - 5, Add an explicit return type annotation to the test helper method _FakeResponse.iter_content by changing its signature to return Iterator[bytes] (i.e. def iter_content(self, chunk_size: int = 1024) -> Iterator[bytes]:). Also import Iterator from typing at the top of the file (add from typing import Iterator) so the annotation is valid; update the _FakeResponse class method signature accordingly.test/solidlsp/python/test_symbol_retrieval.py-391-396 (1)
391-396: ⚠️ Potential issue | 🟡 Minor
Normalize the returned `relativePath` before comparing it.
Only the expected string is normalized here. If a backend returns `examples\user_management.py`, these assertions still fail even though the paths are equivalent.

Suggested fix

```diff
-    user_management_rel_path = user_management_node["location"]["relativePath"]
+    user_management_rel_path = normalize_relative_path(user_management_node["location"]["relativePath"])
     assert user_management_rel_path == normalize_relative_path(os.path.join("examples", "user_management.py"))
```

Apply the same change in both blocks.
Also applies to: 414-419
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/solidlsp/python/test_symbol_retrieval.py` around lines 391 - 396, The test compares user_management_node["location"]["relativePath"] to a normalized expected path but fails to normalize the returned value; update the assertions to normalize the returned relative path before comparing by passing user_management_node["location"]["relativePath"] through normalize_relative_path (i.e., set user_management_rel_path = normalize_relative_path(user_management_node["location"]["relativePath"])) and use that normalized value in the assert and in the request_document_symbols call, and apply the same change in the second block that also handles this node (the other occurrence around the request_document_symbols/get_all_symbols_and_roots usage).docs/02-usage/001_features.md-17-17 (1)
17-17: ⚠️ Potential issue | 🟡 Minor
Fix the typo in Line 17.
`qujuality` should be `quality`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/02-usage/001_features.md` at line 17, Fix the typo in the documentation sentence "Tool results are compact JSON, keeping token usage low and output qujuality high." by replacing the misspelled word "qujuality" with "quality" so the line reads "Tool results are compact JSON, keeping token usage low and output quality high."

src/solidlsp/language_servers/groovy_language_server.py-127-161 (1)
127-161: ⚠️ Potential issue | 🟡 Minor
Hardcoded JRE paths may break with version overrides.
The `java_home_path` and `java_path` values (e.g., `extension/jre/21.0.7-linux-x86_64`) are hardcoded and won't change when `vscode_java_version` is overridden. If a different vscode-java version bundles a different JRE version with a different directory name, these paths will be incorrect.
Consider documenting this limitation or making the JRE version configurable alongside the vscode-java version.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/groovy_language_server.py` around lines 127 - 161, The hardcoded JRE directory names in the platform mapping (the "java_home_path" and "java_path" values) will break if vscode_java_version is overridden; update the code to derive the JRE folder name from a configurable jre_version (or parse it from vscode_java_version) and use f-strings like f"extension/jre/{jre_version}-linux-x86_64" in place of the fixed "21.0.7-..." strings for all platforms, or add a documented configuration option (e.g., jre_version) alongside vscode_java_version so the mapping uses that value; touch the variables vscode_java_version, VSCODE_JAVA_SHA256_BY_PLATFORM, and the platform dict entries ("java_home_path" / "java_path") when making the change.test/diagnostics_cases.py-24-29 (1)
24-29: ⚠️ Potential issue | 🟡 Minor
Rename the `id` parameter to avoid shadowing Python's built-in.
The parameter `id` violates Ruff rule A002 (builtin shadowing), which will fail the linting gate. Rename it to something like `case_id` or `test_id` to comply with ruff formatting requirements as specified in the coding guidelines.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/diagnostics_cases.py` around lines 24 - 29, The parameter name `id` in function diagnostic_case_param shadows the Python builtin and violates Ruff A002; rename the parameter to `case_id` (or `test_id`) in the diagnostic_case_param signature and update its usage inside the function (change id to case_id in the return id=... slot) and any call sites that use the keyword argument `id` to use `case_id` instead so linting no longer fails.test/serena/test_serena_agent.py-48-49 (1)
48-49: ⚠️ Potential issue | 🟡 Minor
Rename the keyword-only `id` argument to avoid shadowing Python's built-in.
This parameter violates Ruff rule A002 (builtin-argument-shadowing). The same issue exists in `diagnostic_case_param` in `test/diagnostics_cases.py`. Both must be renamed to satisfy the lint rules.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/serena/test_serena_agent.py` around lines 48 - 49, The parameter name id shadows Python's built-in; rename the keyword-only parameter in to_pytest_param to a non-shadowing name (e.g., id_ or param_id) and update the call-site inside the function (change id=id to id_=id or param_id=param_id) so pytest.param receives the renamed keyword, and apply the same rename in diagnostic_case_param; search for usages of to_pytest_param and diagnostic_case_param and update those call sites accordingly to use the new parameter name so lint rule A002 is satisfied.src/serena/tools/symbol_tools.py-490-494 (1)
490-494: ⚠️ Potential issue | 🟡 Minor
Reject regexes with any group count other than 1.
Line 490 only rejects the zero-group case. A pattern like `(foo)(bar)?` still passes even though the tool documents "exactly one capturing group", so invalid regexes can slip through validation and resolve the wrong capture.

🔧 Proposed fix

```diff
-        if compiled_regex.groups == 0:
+        if compiled_regex.groups != 1:
             return (
                 f"Error: Regex '{regex}' must contain exactly one capturing group that identifies the symbol usage in "
                 f"{search_scope_description}."
             )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/serena/tools/symbol_tools.py` around lines 490 - 494, The current validation only rejects zero-group regexes; update the check to reject any regex whose compiled_regex.groups is not equal to 1 so patterns like "(foo)(bar)?" are disallowed. Change the condition from "compiled_regex.groups == 0" to "compiled_regex.groups != 1" and update the error message (which references regex and search_scope_description) so it clearly states the regex must contain exactly one capturing group. Ensure you modify the same validation block that inspects compiled_regex.groups and returns the error string.src/serena/tools/symbol_tools.py-473-476 (1)
473-476: ⚠️ Potential issue | 🟡 Minor
Catch `ValueError` instead of `Exception` to avoid hiding real failures.
Line 475 catches `Exception` but `find_unique` only raises `ValueError` for the expected cases of missing or ambiguous symbols. This broad exception handling masks unexpected failures from the symbol resolver, which can hide regressions. Change `except Exception as e:` to `except ValueError as e:`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/serena/tools/symbol_tools.py` around lines 473 - 476, The current broad except block around symbol_retriever.find_unique(containing_symbol_name_path, within_relative_path=relative_path) catches all Exceptions and can hide real failures; change the handler to catch only ValueError (i.e., use except ValueError as e) so only the expected missing/ambiguous-symbol cases produce the user-facing error string for containing_symbol, and allow other unexpected exceptions to propagate.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1d4a8a04-fcde-4bd7-9d0a-e4b0636ea717
⛔ Files ignored due to path filters (1)
`test/resources/repos/kotlin/test_repo/gradle/wrapper/gradle-wrapper.jar` is excluded by `!**/*.jar`
📒 Files selected for processing (125)
- `.gitignore`
- `.serena/project.yml`
- `docs/02-usage/001_features.md`
- `docs/02-usage/050_configuration.md`
- `docs/02-usage/070_security.md`
- `pyproject.toml`
- `scripts/demo_diagnostics.py`
- `scripts/demo_find_defining_symbol.py`
- `scripts/demo_find_implementing_symbol.py`
- `scripts/demo_run_tools.py`
- `src/serena/code_editor.py`
- `src/serena/jetbrains/jetbrains_plugin_client.py`
- `src/serena/jetbrains/jetbrains_types.py`
- `src/serena/resources/config/internal_modes/jetbrains.yml`
- `src/serena/resources/project.template.yml`
- `src/serena/symbol.py`
- `src/serena/tools/file_tools.py`
- `src/serena/tools/jetbrains_tools.py`
- `src/serena/tools/symbol_tools.py`
- `src/serena/tools/tools_base.py`
- `src/solidlsp/language_servers/al_language_server.py`
- `src/solidlsp/language_servers/ansible_language_server.py`
- `src/solidlsp/language_servers/bash_language_server.py`
- `src/solidlsp/language_servers/clangd_language_server.py`
- `src/solidlsp/language_servers/clojure_lsp.py`
- `src/solidlsp/language_servers/common.py`
- `src/solidlsp/language_servers/csharp_language_server.py`
- `src/solidlsp/language_servers/dart_language_server.py`
- `src/solidlsp/language_servers/eclipse_jdtls.py`
- `src/solidlsp/language_servers/elixir_tools/elixir_tools.py`
- `src/solidlsp/language_servers/elm_language_server.py`
- `src/solidlsp/language_servers/fsharp_language_server.py`
- `src/solidlsp/language_servers/gopls.py`
- `src/solidlsp/language_servers/groovy_language_server.py`
- `src/solidlsp/language_servers/hlsl_language_server.py`
- `src/solidlsp/language_servers/intelephense.py`
- `src/solidlsp/language_servers/kotlin_language_server.py`
- `src/solidlsp/language_servers/lua_ls.py`
- `src/solidlsp/language_servers/luau_lsp.py`
- `src/solidlsp/language_servers/marksman.py`
- `src/solidlsp/language_servers/matlab_language_server.py`
- `src/solidlsp/language_servers/omnisharp.py`
- `src/solidlsp/language_servers/omnisharp/runtime_dependencies.json`
- `src/solidlsp/language_servers/pascal_server.py`
- `src/solidlsp/language_servers/phpactor.py`
- `src/solidlsp/language_servers/powershell_language_server.py`
- `src/solidlsp/language_servers/ruby_lsp.py`
- `src/solidlsp/language_servers/rust_analyzer.py`
- `src/solidlsp/language_servers/solidity_language_server.py`
- `src/solidlsp/language_servers/systemverilog_server.py`
- `src/solidlsp/language_servers/taplo_server.py`
- `src/solidlsp/language_servers/terraform_ls.py`
- `src/solidlsp/language_servers/ty_server.py`
- `src/solidlsp/language_servers/typescript_language_server.py`
- `src/solidlsp/language_servers/vts_language_server.py`
- `src/solidlsp/language_servers/vue_language_server.py`
- `src/solidlsp/language_servers/yaml_language_server.py`
- `src/solidlsp/ls.py`
- `src/solidlsp/ls_config.py`
- `src/solidlsp/ls_process.py`
- `src/solidlsp/ls_types.py`
- `src/solidlsp/ls_utils.py`
- `test/conftest.py`
- `test/diagnostics_cases.py`
- `test/resources/repos/clojure/test_repo/src/test_app/diagnostics_sample.clj`
- `test/resources/repos/cpp/test_repo/compile_commands.json`
- `test/resources/repos/cpp/test_repo/diagnostics_sample.cpp`
- `test/resources/repos/csharp/test_repo/DiagnosticsSample.cs`
- `test/resources/repos/csharp/test_repo/Program.cs`
- `test/resources/repos/csharp/test_repo/Services/ConsoleGreeter.cs`
- `test/resources/repos/csharp/test_repo/Services/IGreeter.cs`
- `test/resources/repos/fsharp/test_repo/DiagnosticsSample.fs`
- `test/resources/repos/fsharp/test_repo/Formatter.fs`
- `test/resources/repos/fsharp/test_repo/Program.fs`
- `test/resources/repos/fsharp/test_repo/Shapes.fs`
- `test/resources/repos/fsharp/test_repo/TestProject.fsproj`
- `test/resources/repos/go/test_repo/diagnostics_sample.go`
- `test/resources/repos/go/test_repo/main.go`
- `test/resources/repos/java/test_repo/src/main/java/test_repo/ConsoleGreeter.java`
- `test/resources/repos/java/test_repo/src/main/java/test_repo/DiagnosticsSample.java`
- `test/resources/repos/java/test_repo/src/main/java/test_repo/Greeter.java`
- `test/resources/repos/java/test_repo/src/main/java/test_repo/Main.java`
- `test/resources/repos/kotlin/test_repo/gradle/wrapper/gradle-wrapper.properties`
- `test/resources/repos/kotlin/test_repo/gradlew`
- `test/resources/repos/kotlin/test_repo/gradlew.bat`
- `test/resources/repos/kotlin/test_repo/src/main/kotlin/test_repo/DiagnosticsSample.kt`
- `test/resources/repos/lean4/test_repo/DiagnosticsSample.lean`
- `test/resources/repos/lua/test_repo/main.lua`
- `test/resources/repos/lua/test_repo/src/animals.lua`
- `test/resources/repos/php/test_repo/diagnostics_sample.php`
- `test/resources/repos/powershell/test_repo/diagnostics_sample.ps1`
- `test/resources/repos/python/test_repo/test_repo/diagnostics_sample.py`
- `test/resources/repos/ruby/test_repo/lib.rb`
- `test/resources/repos/ruby/test_repo/main.rb`
- `test/resources/repos/rust/test_repo/src/diagnostics_sample.rs`
- `test/resources/repos/rust/test_repo/src/lib.rs`
- `test/resources/repos/rust/test_repo/src/main.rs`
- `test/resources/repos/typescript/test_repo/.serena/project.yml`
- `test/resources/repos/typescript/test_repo/diagnostics_sample.ts`
- `test/resources/repos/typescript/test_repo/formatters.ts`
- `test/resources/repos/typescript/test_repo/index.ts`
- `test/serena/__snapshots__/test_symbol_editing.ambr`
- `test/serena/test_serena_agent.py`
- `test/serena/test_symbol.py`
- `test/solidlsp/clojure/__init__.py`
- `test/solidlsp/csharp/test_csharp_basic.py`
- `test/solidlsp/csharp/test_csharp_nuget_download.py`
- `test/solidlsp/dart/test_dart_basic.py`
- `test/solidlsp/fortran/test_fortran_basic.py`
- `test/solidlsp/fsharp/test_fsharp_basic.py`
- `test/solidlsp/go/test_go_basic.py`
- `test/solidlsp/java/test_java_basic.py`
- `test/solidlsp/lua/test_lua_basic.py`
- `test/solidlsp/luau/test_luau_dependency_provider.py`
- `test/solidlsp/python/test_python_basic.py`
- `test/solidlsp/python/test_retrieval_with_ignored_dirs.py`
- `test/solidlsp/python/test_symbol_retrieval.py`
- `test/solidlsp/ruby/test_ruby_symbol_retrieval.py`
- `test/solidlsp/rust/test_rust_basic.py`
- `test/solidlsp/solidity/test_solidity_basic.py`
- `test/solidlsp/test_defining_symbol_matrix.py`
- `test/solidlsp/test_diagnostics_matrix.py`
- `test/solidlsp/test_ls_common.py`
- `test/solidlsp/typescript/test_typescript_basic.py`
- `test/solidlsp/util/test_ls_utils.py`
```python
# preferring uvx for on-demand execution
uvx_path = os.environ.get("UVX") or shutil.which("uvx")
if uvx_path is not None:
    return [uvx_path, "--from", f"ty=={ty_version}", "ty", "server"]

# falling back to uv's uvx-compatible subcommand when only `uv` is available
uv_path = shutil.which("uv")
if uv_path is not None:
    return [uv_path, "x", "--from", f"ty=={ty_version}", "ty", "server"]
```
🧩 Analysis chain
🏁 Script executed:
cd src/solidlsp/language_servers && wc -l ty_server.py && sed -n '50,70p' ty_server.py

Repository: oraios/serena
Length of output: 1032
🌐 Web query:
uv tool run vs uv x uvx documentation astral.sh
💡 Result:
uvx is a convenience alias for uv tool run. Both run Python tools (e.g., ruff, black) in isolated, temporary virtual environments, independent of any project context. They automatically install the tool if needed, cache environments for reuse, and support options like --from for specific packages or @version for pinned versions. Key points from official docs:

- Equivalent: `uvx ruff` == `uv tool run ruff` (docs.astral.sh/uv/guides/tools/)
- Use for standalone tools: isolated from project deps (docs.astral.sh/uv/concepts/tools/)
- If a tool needs the project installed (e.g., pytest, mypy in non-src layout), prefer `uv run` instead (docs.astral.sh/uv/guides/tools/)
- Install persistently: `uv tool install ruff` to add to PATH without uvx/uv tool run (docs.astral.sh/uv/guides/tools/)

`uv run` differs: it runs in the project environment (syncs .venv, uses project deps), for project scripts/tests (docs.astral.sh/uv/concepts/projects/run/). Examples:

```shell
uvx ruff check .                  # Isolated linter
uv tool run ruff@0.6.0 check .
uv run pytest                     # Project-aware tests
```

A GitHub issue notes minor historical path differences post-install, but docs confirm equivalence in behavior (github.com/astral-sh/uv/issues/15333).
Citations:
- 1: https://docs.astral.sh/uv/guides/tools/
- 2: https://docs.astral.sh/uv/concepts/tools/
- 3: https://docs.astral.sh/uv/reference/cli/
- 4: https://docs.astral.sh/uv/
Use `uv tool run`, not `uv x`, for the uv fallback.
The official uv documentation confirms `uvx` as an alias for `uv tool run`. The fallback command at line 62 uses `uv x`, which is not a documented command and will fail on machines that have uv installed but no separate uvx shim.

Suggested fix

```diff
-        return [uv_path, "x", "--from", f"ty=={ty_version}", "ty", "server"]
+        return [uv_path, "tool", "run", "--from", f"ty=={ty_version}", "ty", "server"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/solidlsp/language_servers/ty_server.py` around lines 58 - 66, The
fallback for running the Ty language server uses an incorrect subcommand ("x")
on the uv binary; update the branch that checks uv_path to use the documented
"tool run" invocation instead. Specifically, where the code returns [uv_path,
"x", "--from", f"ty=={ty_version}", "ty", "server"], change it to call uv_path
with the subcommands "tool", "run" (i.e., [uv_path, "tool", "run", "--from",
f"ty=={ty_version}", "ty", "server"]) so the uv fallback matches the uvx
behavior; this touches the uv_path check in the module that builds the command
(referenced symbols: uvx_path, uv_path, ty_version).
```python
with tarfile.open(archive_path, tar_mode) as tar_ref:
    for tar_member in tar_ref.getmembers():
        FileUtils._validate_extraction_path(tar_member.name, target_path)

    tar_ref.extractall(target_path)
```
🧩 Analysis chain
🌐 Web query:
In Python tarfile, can `tarfile.extractall()` create symlinks or hardlinks from archive members, making validation of `TarInfo.name` alone insufficient to contain extraction to a target directory?
💡 Result:
Yes, tarfile.extractall can create symlinks and hardlinks from archive members. Validation of TarInfo.name alone is insufficient to contain extraction to a target directory because symlink and hardlink targets (TarInfo.linkname) are not restricted by the TarInfo.name path and can point outside the target directory, potentially allowing path traversal attacks when subsequent files are extracted through those links.
Citations:
- 1: https://docs.python.org/library/tarfile.html
- 2: https://docs.python.org/3/library/tarfile.html
- 3: https://github.com/python/cpython/blob/main/Lib/tarfile.py
- 4: https://peps.python.org/pep-0706/
- 5: GHSA-m6w7-qv66-g3mf
- 6: https://www.trellix.com/blogs/research/tarfile-exploiting-the-world/
- 7: https://stackoverflow.com/questions/10060069/safely-extract-zip-or-tar-using-python
🏁 Script executed:
cat -n src/solidlsp/ls_utils.py | sed -n '415,430p'
Repository: oraios/serena
Length of output: 687
🏁 Script executed:
# Let's also find the _validate_extraction_path method to understand what it does
rg -A 10 "_validate_extraction_path" src/solidlsp/ls_utils.py
Repository: oraios/serena
Length of output: 1561
🏁 Script executed:
rg -B 2 "_validate_extraction_path" src/solidlsp/ls_utils.py | head -20
Repository: oraios/serena
Length of output: 564
🏁 Script executed:
# Check if tarinfo types are handled anywhere
rg -i "isdir|issym|isreg|islnk|isdev" src/solidlsp/ls_utils.py
Repository: oraios/serena
Length of output: 39
`tarfile.extractall()` enables tar traversal via symlinks and hardlinks.
The validation of `tar_member.name` is insufficient. A tar archive can contain symlink, hardlink, and device entries whose link targets (`tar_member.linkname`) are not validated and can point outside `target_path`. The `extractall()` method will materialize these links, enabling path traversal. Reject non-regular tar members (symlinks, hardlinks, devices, fifos) and extract only regular files and directories individually, as done in the ZIP extraction code above.
🧰 Tools
🪛 Ruff (0.15.7)
[error] 425-425: Uses of tarfile.extractall()
(S202)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/solidlsp/ls_utils.py` around lines 421 - 425, The tar extraction
currently uses tar_ref.extractall which allows symlink/hardlink/device
traversal; replace this by iterating tar_ref.getmembers() (as in the diff) and
for each tar_member call FileUtils._validate_extraction_path(tar_member.name,
target_path) and additionally reject any non-regular-file and non-directory
members (check tar_member.type / tarfile constants and tar_member.linkname) so
symlinks, hardlinks, device nodes, fifos are not extracted; for regular files
create directories as needed and extract each file member individually into
target_path (do not call tar_ref.extractall), preserving permissions only for
safe regular entries.
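A member-by-member allowlist extraction along the lines described above can be sketched like this. `safe_extract` is a hypothetical helper, not solidlsp's `FileUtils` API; it extracts only regular files and directories and rejects everything else:

```python
import tarfile
from pathlib import Path


def safe_extract(archive_path: str, target_dir: str) -> None:
    target = Path(target_dir).resolve()
    with tarfile.open(archive_path) as tar:
        for member in tar.getmembers():
            dest = (target / member.name).resolve()
            # Block absolute paths and "../" traversal in member names.
            if not dest.is_relative_to(target):
                raise ValueError(f"Path traversal blocked: {member.name}")
            if member.isdir():
                dest.mkdir(parents=True, exist_ok=True)
            elif member.isreg():
                dest.parent.mkdir(parents=True, exist_ok=True)
                extracted = tar.extractfile(member)
                assert extracted is not None
                dest.write_bytes(extracted.read())
            else:
                # Symlinks, hardlinks, devices, and FIFOs are never materialized.
                raise ValueError(f"Refusing non-regular tar member: {member.name}")
```

On Python 3.12+, the stdlib offers similar protection via `tarfile.extractall(path, filter="data")` (PEP 706), which rejects absolute paths, traversal, and links escaping the destination.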
```python
from serena.tools import (
    SUCCESS_RESULT,
    FindDefiningSymbolAtLocationTool,
    FindDefiningSymbolTool,
    FindImplementationsTool,
    FindReferencingSymbolsTool,
    FindSymbolTool,
    GetDiagnosticsForFileTool,
    GetDiagnosticsForSymbolTool,
    ReplaceContentTool,
    ReplaceSymbolBodyTool,
)
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -e
rg -n "class .*SafeDelete|SafeDeleteSymbol" src/serena/tools test/serena -g '*.py'

Repository: oraios/serena
Length of output: 812
🏁 Script executed:
# Check if SafeDeleteSymbol is exported from serena.tools
cat -n src/serena/tools/__init__.py | head -100

Repository: oraios/serena
Length of output: 396
🏁 Script executed:
# View the code at lines 1319-1341 to see actual usage
sed -n '1315,1345p' test/serena/test_serena_agent.py

Repository: oraios/serena
Length of output: 2142
🏁 Script executed:
# Verify the complete import block at lines 17-28
sed -n '17,28p' test/serena/test_serena_agent.py

Repository: oraios/serena
Length of output: 367
🏁 Script executed:
# Check if there are any snapshot test patterns in the test file
rg -n "snapshot|assert_snapshot|snap" test/serena/test_serena_agent.py | head -20

Repository: oraios/serena
Length of output: 39
🏁 Script executed:
# Search for snapshot test usage patterns in other tests
fd "test_.*\.py" test/serena --exec rg -l "snapshot|assert_snapshot" {} \;

Repository: oraios/serena
Length of output: 91
🏁 Script executed:
# Check snapshot testing patterns in the symbol editing test file
head -50 test/serena/test_symbol_editing.py

Repository: oraios/serena
Length of output: 1427
🏁 Script executed:
# Search for how snapshots are implemented - look for imports
rg -n "import.*snap|from.*snap" test/serena/test_symbol_editing.py

Repository: oraios/serena
Length of output: 39
Import SafeDeleteSymbol from serena.tools and add snapshot tests for symbolic editing operations.
SafeDeleteSymbol is used at lines 1328 and 1340 but is not imported in the import block (lines 17-28). Add it to the imports from serena.tools.
Additionally, the tests at lines 1319-1341 are symbolic editing operations (SafeDeleteSymbol inherits ToolMarkerSymbolicEdit) and must use snapshot testing instead of manual assertions, per the coding guidelines. Use the syrupy library pattern shown in test/serena/test_symbol_editing.py with SnapshotAssertion and pytest.mark.snapshot.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/serena/test_serena_agent.py` around lines 17 - 28, Add SafeDeleteSymbol
to the import list from serena.tools (alongside FindSymbolTool,
ReplaceSymbolBodyTool, etc.) so references to SafeDeleteSymbol (the symbolic
edit tool that inherits ToolMarkerSymbolicEdit) compile, and convert the tests
that exercise symbolic editing (the cases around the existing SafeDeleteSymbol
usage) to use syrupy snapshot testing: import or use SnapshotAssertion and mark
the test with pytest.mark.snapshot, then assert via SnapshotAssertion(snapshot,
result) instead of manual assertions—follow the same pattern as
test_symbol_editing.py for snapshot structure and naming.
if language_has_verified_implementation_support(Language.FSHARP):

    @pytest.mark.parametrize("language_server", [Language.FSHARP], indirect=True)
    def test_find_implementations(self, language_server: SolidLanguageServer) -> None:
        repo_path = get_repo_path(Language.FSHARP)
        pos = find_identifier_position(repo_path / "Formatter.fs", "FormatGreeting")
        assert pos is not None, "Could not find IGreeter.FormatGreeting in fixture"

        implementations = language_server.request_implementation("Formatter.fs", *pos)
        assert implementations, "Expected at least one implementation of IGreeter.FormatGreeting"
        assert any("Formatter.fs" in implementation.get("relativePath", "") for implementation in implementations), (
            f"Expected ConsoleGreeter.FormatGreeting in implementations, got: {implementations}"
        )

    @pytest.mark.parametrize("language_server", [Language.FSHARP], indirect=True)
    def test_request_implementing_symbols(self, language_server: SolidLanguageServer) -> None:
        repo_path = get_repo_path(Language.FSHARP)
        pos = find_identifier_position(repo_path / "Formatter.fs", "FormatGreeting")
        assert pos is not None, "Could not find IGreeter.FormatGreeting in fixture"

        implementing_symbols = language_server.request_implementing_symbols("Formatter.fs", *pos)
        assert implementing_symbols, "Expected implementing symbols for IGreeter.FormatGreeting"
        assert any(
            symbol.get("name") == "FormatGreeting" and "Formatter.fs" in symbol["location"].get("relativePath", "")
            for symbol in implementing_symbols
        ), f"Expected ConsoleGreeter.FormatGreeting symbol, got: {implementing_symbols}"
Same class-level if statement issue as in Rust tests.
This has the same problem as test_rust_basic.py: when the guard is false, the methods defined inside the if block simply never exist, so pytest drops them silently instead of reporting a skip. Use @pytest.mark.skipif instead.
Proposed fix using pytest.mark.skipif
- if language_has_verified_implementation_support(Language.FSHARP):
-
-     @pytest.mark.parametrize("language_server", [Language.FSHARP], indirect=True)
-     def test_find_implementations(self, language_server: SolidLanguageServer) -> None:
+ @pytest.mark.skipif(
+     not language_has_verified_implementation_support(Language.FSHARP),
+     reason="F# implementation support not verified"
+ )
+ @pytest.mark.parametrize("language_server", [Language.FSHARP], indirect=True)
+ def test_find_implementations(self, language_server: SolidLanguageServer) -> None:
      repo_path = get_repo_path(Language.FSHARP)
      pos = find_identifier_position(repo_path / "Formatter.fs", "FormatGreeting")
      assert pos is not None, "Could not find IGreeter.FormatGreeting in fixture"
      implementations = language_server.request_implementation("Formatter.fs", *pos)
      assert implementations, "Expected at least one implementation of IGreeter.FormatGreeting"
      assert any("Formatter.fs" in implementation.get("relativePath", "") for implementation in implementations), (
          f"Expected ConsoleGreeter.FormatGreeting in implementations, got: {implementations}"
      )
- @pytest.mark.parametrize("language_server", [Language.FSHARP], indirect=True)
- def test_request_implementing_symbols(self, language_server: SolidLanguageServer) -> None:
+ @pytest.mark.skipif(
+     not language_has_verified_implementation_support(Language.FSHARP),
+     reason="F# implementation support not verified"
+ )
+ @pytest.mark.parametrize("language_server", [Language.FSHARP], indirect=True)
+ def test_request_implementing_symbols(self, language_server: SolidLanguageServer) -> None:
      repo_path = get_repo_path(Language.FSHARP)
      pos = find_identifier_position(repo_path / "Formatter.fs", "FormatGreeting")
      assert pos is not None, "Could not find IGreeter.FormatGreeting in fixture"
      implementing_symbols = language_server.request_implementing_symbols("Formatter.fs", *pos)
      assert implementing_symbols, "Expected implementing symbols for IGreeter.FormatGreeting"
      assert any(
          symbol.get("name") == "FormatGreeting" and "Formatter.fs" in symbol["location"].get("relativePath", "")
          for symbol in implementing_symbols
      ), f"Expected ConsoleGreeter.FormatGreeting symbol, got: {implementing_symbols}"

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Current code:

if language_has_verified_implementation_support(Language.FSHARP):

    @pytest.mark.parametrize("language_server", [Language.FSHARP], indirect=True)
    def test_find_implementations(self, language_server: SolidLanguageServer) -> None:
        repo_path = get_repo_path(Language.FSHARP)
        pos = find_identifier_position(repo_path / "Formatter.fs", "FormatGreeting")
        assert pos is not None, "Could not find IGreeter.FormatGreeting in fixture"
        implementations = language_server.request_implementation("Formatter.fs", *pos)
        assert implementations, "Expected at least one implementation of IGreeter.FormatGreeting"
        assert any("Formatter.fs" in implementation.get("relativePath", "") for implementation in implementations), (
            f"Expected ConsoleGreeter.FormatGreeting in implementations, got: {implementations}"
        )

    @pytest.mark.parametrize("language_server", [Language.FSHARP], indirect=True)
    def test_request_implementing_symbols(self, language_server: SolidLanguageServer) -> None:
        repo_path = get_repo_path(Language.FSHARP)
        pos = find_identifier_position(repo_path / "Formatter.fs", "FormatGreeting")
        assert pos is not None, "Could not find IGreeter.FormatGreeting in fixture"
        implementing_symbols = language_server.request_implementing_symbols("Formatter.fs", *pos)
        assert implementing_symbols, "Expected implementing symbols for IGreeter.FormatGreeting"
        assert any(
            symbol.get("name") == "FormatGreeting" and "Formatter.fs" in symbol["location"].get("relativePath", "")
            for symbol in implementing_symbols
        ), f"Expected ConsoleGreeter.FormatGreeting symbol, got: {implementing_symbols}"

Suggested replacement:

@pytest.mark.skipif(
    not language_has_verified_implementation_support(Language.FSHARP),
    reason="F# implementation support not verified"
)
@pytest.mark.parametrize("language_server", [Language.FSHARP], indirect=True)
def test_find_implementations(self, language_server: SolidLanguageServer) -> None:
    repo_path = get_repo_path(Language.FSHARP)
    pos = find_identifier_position(repo_path / "Formatter.fs", "FormatGreeting")
    assert pos is not None, "Could not find IGreeter.FormatGreeting in fixture"
    implementations = language_server.request_implementation("Formatter.fs", *pos)
    assert implementations, "Expected at least one implementation of IGreeter.FormatGreeting"
    assert any("Formatter.fs" in implementation.get("relativePath", "") for implementation in implementations), (
        f"Expected ConsoleGreeter.FormatGreeting in implementations, got: {implementations}"
    )

@pytest.mark.skipif(
    not language_has_verified_implementation_support(Language.FSHARP),
    reason="F# implementation support not verified"
)
@pytest.mark.parametrize("language_server", [Language.FSHARP], indirect=True)
def test_request_implementing_symbols(self, language_server: SolidLanguageServer) -> None:
    repo_path = get_repo_path(Language.FSHARP)
    pos = find_identifier_position(repo_path / "Formatter.fs", "FormatGreeting")
    assert pos is not None, "Could not find IGreeter.FormatGreeting in fixture"
    implementing_symbols = language_server.request_implementing_symbols("Formatter.fs", *pos)
    assert implementing_symbols, "Expected implementing symbols for IGreeter.FormatGreeting"
    assert any(
        symbol.get("name") == "FormatGreeting" and "Formatter.fs" in symbol["location"].get("relativePath", "")
        for symbol in implementing_symbols
    ), f"Expected ConsoleGreeter.FormatGreeting symbol, got: {implementing_symbols}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/solidlsp/fsharp/test_fsharp_basic.py` around lines 41 - 66, The two
tests test_find_implementations and test_request_implementing_symbols are
defined inside a class-level if guarded by
language_has_verified_implementation_support(Language.FSHARP), which prevents
proper pytest method binding; remove the class-level if and instead apply
pytest.mark.skipif to each test function using condition=not
language_has_verified_implementation_support(Language.FSHARP) (or the inverse
logic you prefer) so pytest will skip them when F# support isn't available;
ensure pytest is imported and keep the same function names
(test_find_implementations, test_request_implementing_symbols) and references to
Language.FSHARP so the tests and helpers
(language_has_verified_implementation_support, SolidLanguageServer,
request_implementation, request_implementing_symbols) are located correctly.
if language_has_verified_implementation_support(Language.RUST):

    @pytest.mark.parametrize("language_server", [Language.RUST], indirect=True)
    def test_find_implementations(self, language_server: SolidLanguageServer) -> None:
        repo_path = get_repo_path(Language.RUST)
        pos = find_identifier_position(repo_path / os.path.join("src", "lib.rs"), "format_greeting")
        assert pos is not None, "Could not find Greeter.format_greeting in fixture"

        implementations = language_server.request_implementation(os.path.join("src", "lib.rs"), *pos)
        assert implementations, "Expected at least one implementation of Greeter.format_greeting"
        assert any("src/lib.rs" in implementation.get("relativePath", "").replace("\\", "/") for implementation in implementations), (
            f"Expected ConsoleGreeter.format_greeting in implementations, got: {implementations}"
        )

    @pytest.mark.parametrize("language_server", [Language.RUST], indirect=True)
    def test_request_implementing_symbols(self, language_server: SolidLanguageServer) -> None:
        repo_path = get_repo_path(Language.RUST)
        pos = find_identifier_position(repo_path / os.path.join("src", "lib.rs"), "format_greeting")
        assert pos is not None, "Could not find Greeter.format_greeting in fixture"

        implementing_symbols = language_server.request_implementing_symbols(os.path.join("src", "lib.rs"), *pos)
        assert implementing_symbols, "Expected implementing symbols for Greeter.format_greeting"
        assert any(
            symbol.get("name") == "format_greeting" and "src/lib.rs" in symbol["location"].get("relativePath", "").replace("\\", "/")
            for symbol in implementing_symbols
        ), f"Expected ConsoleGreeter.format_greeting symbol, got: {implementing_symbols}"
Class-level if statement silently hides these tests.
Defining test methods inside an if block at class body level is a Python anti-pattern for test code. The condition is evaluated once at class definition time; when it is false, the methods are simply never created, so pytest neither collects nor reports them, and the coverage disappears without any visible skip in the test output.
Use @pytest.mark.skipif so the tests stay defined and are reported as skipped when support is unavailable.
Proposed fix using pytest.mark.skipif
- if language_has_verified_implementation_support(Language.RUST):
-
-     @pytest.mark.parametrize("language_server", [Language.RUST], indirect=True)
-     def test_find_implementations(self, language_server: SolidLanguageServer) -> None:
+ @pytest.mark.skipif(
+     not language_has_verified_implementation_support(Language.RUST),
+     reason="Rust implementation support not verified"
+ )
+ @pytest.mark.parametrize("language_server", [Language.RUST], indirect=True)
+ def test_find_implementations(self, language_server: SolidLanguageServer) -> None:
      repo_path = get_repo_path(Language.RUST)
      pos = find_identifier_position(repo_path / os.path.join("src", "lib.rs"), "format_greeting")
      assert pos is not None, "Could not find Greeter.format_greeting in fixture"
      implementations = language_server.request_implementation(os.path.join("src", "lib.rs"), *pos)
      assert implementations, "Expected at least one implementation of Greeter.format_greeting"
      assert any("src/lib.rs" in implementation.get("relativePath", "").replace("\\", "/") for implementation in implementations), (
          f"Expected ConsoleGreeter.format_greeting in implementations, got: {implementations}"
      )
- @pytest.mark.parametrize("language_server", [Language.RUST], indirect=True)
- def test_request_implementing_symbols(self, language_server: SolidLanguageServer) -> None:
+ @pytest.mark.skipif(
+     not language_has_verified_implementation_support(Language.RUST),
+     reason="Rust implementation support not verified"
+ )
+ @pytest.mark.parametrize("language_server", [Language.RUST], indirect=True)
+ def test_request_implementing_symbols(self, language_server: SolidLanguageServer) -> None:
      repo_path = get_repo_path(Language.RUST)
      pos = find_identifier_position(repo_path / os.path.join("src", "lib.rs"), "format_greeting")
      assert pos is not None, "Could not find Greeter.format_greeting in fixture"
      implementing_symbols = language_server.request_implementing_symbols(os.path.join("src", "lib.rs"), *pos)
      assert implementing_symbols, "Expected implementing symbols for Greeter.format_greeting"
      assert any(
          symbol.get("name") == "format_greeting" and "src/lib.rs" in symbol["location"].get("relativePath", "").replace("\\", "/")
          for symbol in implementing_symbols
      ), f"Expected ConsoleGreeter.format_greeting symbol, got: {implementing_symbols}"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/solidlsp/rust/test_rust_basic.py` around lines 60 - 85, The tests
test_find_implementations and test_request_implementing_symbols are defined
inside a class-level if guarded by
language_has_verified_implementation_support(Language.RUST), which breaks method
binding; move the test function definitions out of that if and instead apply
pytest.mark.skipif to each test (or combine skipif with the existing
pytest.mark.parametrize decorator) using the same predicate
language_has_verified_implementation_support(Language.RUST) so the functions are
defined as normal top-level/ class methods (bound correctly) but skipped when
the predicate is false; update the decorators on test_find_implementations and
test_request_implementing_symbols accordingly.
Actionable comments posted: 18
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/solidlsp/language_servers/fsharp_language_server.py (1)
75-84: ⚠️ Potential issue | 🟠 Major

Assign the RuntimeDependencyCollection to a variable and use its install() method instead of manual subprocess logic.

The collection is created (lines 75–84) but the result is discarded. Every other language server assigns it to deps and calls deps.install(target_dir). Replace the manual subprocess.run call (lines 104–110) with the standard pattern: assign the collection to a variable and invoke its install() method to maintain consistency with the codebase.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/fsharp_language_server.py` around lines 75 - 84, The RuntimeDependencyCollection instance for fsautocomplete is created but never used; replace the discarded construction with an assignment (e.g., deps = RuntimeDependencyCollection([...])) and remove the manual subprocess.run install logic, then call deps.install(target_dir) to perform the installation; update references to use the same deps variable and keep the RuntimeDependency(...) entry for fsautocomplete intact so the standard install() flow is used.

src/serena/code_editor.py (1)
343-346: ⚠️ Potential issue | 🟠 Major

Handle rename edits that move files into a new directory.

os.rename(old_abs_path, new_abs_path) will raise ENOENT when the target parent directory does not exist. Workspace rename-file edits can move files as well as rename them, so valid edits like foo.py -> pkg/foo.py will fail here unless the destination directory is created first.

Suggested adjustment
os.rename(old_abs_path, new_abs_path)will raiseENOENTwhen the target parent directory does not exist. Workspace rename-file edits can move files as well as rename them, so valid edits likefoo.py -> pkg/foo.pywill fail here unless the destination directory is created first.Suggested adjustment
def apply(self) -> None: old_abs_path = os.path.join(self._code_editor.project_root, self._old_relative_path) new_abs_path = os.path.join(self._code_editor.project_root, self._new_relative_path) + new_parent = os.path.dirname(new_abs_path) + if new_parent: + os.makedirs(new_parent, exist_ok=True) os.rename(old_abs_path, new_abs_path)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/serena/code_editor.py` around lines 343 - 346, The rename logic in the apply method currently calls os.rename(old_abs_path, new_abs_path) which fails with ENOENT if the destination parent directory doesn't exist; before renaming in apply (inside the same scope as old_abs_path/new_abs_path), ensure the destination directory exists by computing dest_dir = os.path.dirname(new_abs_path) and creating it with os.makedirs(dest_dir, exist_ok=True) when dest_dir is non-empty, then perform os.rename; this fixes moves like "foo.py -> pkg/foo.py".
♻️ Duplicate comments (4)
test/solidlsp/fsharp/test_fsharp_basic.py (1)
41-66: ⚠️ Potential issue | 🟡 Minor

Use skipif instead of hiding these tests at import time.

When language_has_verified_implementation_support(Language.FSHARP) is false, pytest never collects these methods, so the suite silently loses coverage instead of reporting a skip. Keep the test methods defined and gate each with @pytest.mark.skipif(...).

As per coding guidelines, "Language-specific tests use pytest markers for selective testing".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/solidlsp/fsharp/test_fsharp_basic.py` around lines 41 - 66, The tests are being conditionally defined based on language_has_verified_implementation_support(Language.FSHARP), which prevents pytest from collecting and reporting them; instead, keep the test functions test_find_implementations and test_request_implementing_symbols defined unconditionally and decorate each with @pytest.mark.skipif(not language_has_verified_implementation_support(Language.FSHARP), reason="F# implementation support not verified") so pytest will collect and report skips; update imports if needed to reference pytest and the language_has_verified_implementation_support/Language.FSHARP symbols.

test/serena/test_serena_agent.py (1)
1273-1297: ⚠️ Potential issue | 🟠 Major

Add snapshot assertions for these symbolic edit cases.

ReplaceSymbolBodyTool and SafeDeleteSymbol are symbolic edits, but these tests only check a few substrings. That misses response-shape regressions and edit-output changes that the snapshot suite is supposed to catch. Please switch them to the existing snapshot pattern used for symbolic editing tests. Based on learnings: "Applies to test/**/*.py : Symbolic editing operations must have snapshot tests."

Also applies to: 1320-1349
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/serena/test_serena_agent.py` around lines 1273 - 1297, The test test_replace_symbol_body_reports_new_diagnostics currently only asserts substrings; change it (and the similar test covering SafeDeleteSymbol) to use the project's snapshot pattern for symbolic editing tests: capture the full edit result (the variable result or diagnostics from parse_edit_diagnostics_result) and assert against the stored snapshot (use the same snapshot helper/fixture used by other symbolic edit tests), replacing the two assert "in" checks with a single snapshot assertion so response shape and output regressions are caught; keep references to ReplaceSymbolBodyTool, SafeDeleteSymbol, parse_edit_diagnostics_result and project_file_modification_context to locate and update the tests.

src/solidlsp/ls_utils.py (1)
421-425: ⚠️ Potential issue | 🔴 Critical

extractall() still bypasses your tar safety checks.

The tarfile docs explicitly recommend extraction filters for safe extraction and note that the data filter rejects links outside the destination and special files. Here only tar_member.name is validated before a plain extractall(), so symlink/hardlink/device members can still violate the intended extraction policy. At minimum use filter="data", or extract only approved members individually. (docs.python.org)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/ls_utils.py` around lines 421 - 425, The current extraction uses tar_ref.extractall(target_path) after only validating tar_member.name via FileUtils._validate_extraction_path, which still allows symlinks/hardlinks/devices to escape or create unsafe entries; modify the extraction in the tarfile.open(archive_path, tar_mode) block to either call tar_ref.extractall(target_path, filter="data") to enforce the tarfile data filter or iterate over tar_ref.getmembers(), run FileUtils._validate_extraction_path(member.name, target_path) and additionally reject members with unsafe types (e.g., symlink, hardlink, dev/char/block devices) before calling tar_ref.extract(member, target_path) for each approved member; ensure the changes reference the existing FileUtils._validate_extraction_path, tar_ref, archive_path, tar_mode, and target_path symbols.

src/solidlsp/language_servers/ty_server.py (1)
4-7: ⚠️ Potential issue | 🟠 Major

Use uv tool run for the uv fallback.

The module docs and fallback branch both still reference uv x, but the official uv docs define uvx as the shorthand for uv tool run. This fallback should mirror that documented form instead of invoking the current x variant. (docs.astral.sh)

Suggested fix

- # falling back to uv's uvx-compatible subcommand when only `uv` is available
+ # falling back to uv's documented equivalent when only `uv` is available
  uv_path = shutil.which("uv")
  if uv_path is not None:
-     return [uv_path, "x", "--from", f"ty=={ty_version}", "ty", "server"]
+     return [uv_path, "tool", "run", "--from", f"ty=={ty_version}", "ty", "server"]

Also applies to: 63-66
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/ty_server.py` around lines 4 - 7, The docs and fallback in ty_server.py that describe the uv fallback (referenced from ls_specific_settings["python_ty"]) currently use the outdated "uv x" invocation; update both the documentation strings and the fallback command construction to use the official form "uv tool run" (and you may note "uvx" as the documented shorthand) everywhere you build or describe the fallback command so the code path that launches the tool uses "uv tool run" instead of "uv x".
🧹 Nitpick comments (8)
test/resources/repos/kotlin/test_repo/gradlew.bat (1)
58-58: Inconsistent path separator usage.

Line 58 uses forward slash while line 77 uses backslash for paths. While Windows CMD generally accepts both, using backslash consistently is more idiomatic for batch scripts.

- set JAVA_EXE=%JAVA_HOME%/bin/java.exe
+ set JAVA_EXE=%JAVA_HOME%\bin\java.exe

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/resources/repos/kotlin/test_repo/gradlew.bat` at line 58, The assignment to JAVA_EXE uses a forward slash in the path (set JAVA_EXE=%JAVA_HOME%/bin/java.exe) which is inconsistent with other backslash paths; update the assignment so the path uses backslashes (e.g., set JAVA_EXE=%JAVA_HOME%\bin\java.exe) to match the idiomatic Windows batch style and other occurrences of JAVA_EXE in the script.

src/solidlsp/language_servers/fsharp_language_server.py (2)
25-26: Consider adding an explicit type annotation for strict typing.

Per coding guidelines requiring strict typing with mypy, adding an explicit type annotation improves clarity and tooling support.

✏️ Suggested change

- FSAUTOCOMPLETE_VERSION = "0.83.0"
+ FSAUTOCOMPLETE_VERSION: str = "0.83.0"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/fsharp_language_server.py` around lines 25 - 26, Add an explicit type annotation to the FSAUTOCOMPLETE_VERSION constant to satisfy strict typing (mypy); update the declaration of FSAUTOCOMPLETE_VERSION in fsharp_language_server.py to include its type (str) so the constant is declared as FSAUTOCOMPLETE_VERSION: str = "0.83.0".
71-72: Consider validating the version string from user settings.

The fsautocomplete_version value is used directly without validation. A malformed version string (e.g., containing spaces or special characters) could cause cryptic dotnet tool install failures. Basic validation would improve error messages for misconfigured settings.

✏️ Suggested validation

  fsharp_settings = solidlsp_settings.get_ls_specific_settings(Language.FSHARP)
  fsautocomplete_version = fsharp_settings.get("fsautocomplete_version", FSAUTOCOMPLETE_VERSION)
+ if not isinstance(fsautocomplete_version, str) or not fsautocomplete_version.strip():
+     log.warning(f"Invalid fsautocomplete_version '{fsautocomplete_version}', using default {FSAUTOCOMPLETE_VERSION}")
+     fsautocomplete_version = FSAUTOCOMPLETE_VERSION

🤖 Prompt for AI Agents
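If stricter validation than the isinstance check above is wanted, a small regex gate works; the helper name and pattern here are illustrative, not part of the repository:

```python
import re

# Accept plain semantic versions such as "0.83.0", optionally with a
# prerelease/build suffix; anything else falls back to the pinned default.
_VERSION_RE = re.compile(r"^\d+\.\d+\.\d+(?:[-+][0-9A-Za-z.]+)?$")

def resolve_version(configured: object, default: str) -> str:
    """Return the configured version if it looks like a version, else the default."""
    if isinstance(configured, str) and _VERSION_RE.match(configured.strip()):
        return configured.strip()
    return default
```

Rejecting malformed input up front turns a cryptic `dotnet tool install` failure into an immediate, explainable fallback.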
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/fsharp_language_server.py` around lines 71 - 72, The fsautocomplete_version from solidlsp_settings (retrieved via fsharp_settings = solidlsp_settings.get_ls_specific_settings(Language.FSHARP) and stored in fsautocomplete_version with fallback FSAUTOCOMPLETE_VERSION) must be validated before use; add a simple check that the string matches an allowed pattern (e.g., semantic version like 1.2.3 with optional prerelease/build identifiers or an allowed keyword such as "latest") and if it fails, log or raise a clear configuration error explaining the invalid value and expected format, so callers that run dotnet tool install receive a helpful message instead of cryptic failures.

scripts/demo_find_implementing_symbol.py (1)
54-56: Consider adding a comment explaining the startup wait.

The execute_task(lambda: None) call is a non-obvious idiom for waiting for language server initialization. A brief inline comment would improve readability.

Suggested documentation improvement

  try:
-     # letting the language server finish startup
-     agent.execute_task(lambda: None)
+     # Block until the language server completes startup by executing a no-op task.
+     # This ensures the LS is ready before running the actual implementation lookup.
+     agent.execute_task(lambda: None)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/demo_find_implementing_symbol.py` around lines 54 - 56, The call execute_task(lambda: None) is a non-obvious idiom used to wait for the language server to finish startup; add a concise inline comment directly above that line explaining that execute_task is invoked with a no-op lambda to yield control until the language server has initialized (e.g., "wait for language server startup by scheduling a no-op task via execute_task(lambda: None)"), referencing the execute_task(lambda: None) usage so future readers understand the intent.

test/solidlsp/typescript/test_typescript_basic.py (1)
36-61: Consider using pytest.mark.skipif instead of a class-level conditional.

The class-level if statement to conditionally define test methods works but is unconventional. If language_has_verified_implementation_support(Language.TYPESCRIPT) returns False, these tests won't exist at all rather than being skipped.

This can make test discovery confusing - the tests won't appear in test counts or be reported as skipped. Consider:

♻️ Alternative using pytest skipif decorator

- if language_has_verified_implementation_support(Language.TYPESCRIPT):
-
-     @pytest.mark.parametrize("language_server", [Language.TYPESCRIPT], indirect=True)
-     def test_find_implementations(self, language_server: SolidLanguageServer) -> None:
+ @pytest.mark.skipif(
+     not language_has_verified_implementation_support(Language.TYPESCRIPT),
+     reason="TypeScript implementation support not verified"
+ )
+ @pytest.mark.parametrize("language_server", [Language.TYPESCRIPT], indirect=True)
+ def test_find_implementations(self, language_server: SolidLanguageServer) -> None:

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/solidlsp/typescript/test_typescript_basic.py` around lines 36 - 61, The tests are conditionally defined with an outer if using language_has_verified_implementation_support(Language.TYPESCRIPT) which hides them from test discovery; replace that pattern by applying pytest.mark.skipif to the test functions (or the containing test class) so tests are discovered and reported as skipped when support is absent. Concretely, remove the outer if and add @pytest.mark.skipif(not language_has_verified_implementation_support(Language.TYPESCRIPT), reason="TypeScript implementation support not verified") above the test_find_implementations and test_request_implementing_symbols (or above their test class), keeping the existing @pytest.mark.parametrize decorators and references to Language.TYPESCRIPT unchanged.

src/solidlsp/ls.py (2)
1337-1346: Empty pull result triggers unnecessary fallback wait.

When `response["items"]` is an empty list (valid response indicating no diagnostics), the code still waits for published diagnostics due to `if not ret:`. This may cause unnecessary delays for files that genuinely have no issues.

♻️ Consider checking for None explicitly

-        if not ret:
+        if ret is None:
             published_diagnostics = self._wait_for_published_diagnostics(

This way, an empty list from a successful pull is accepted as the final result, while only a failed pull (where `ret` stays `None`) triggers the fallback.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/ls.py` around lines 1337 - 1346, The current check uses a falsy test ("if not ret:") so an empty list returned by the pull is treated as failure and triggers the fallback wait; change the conditional to explicitly check None ("if ret is None:") so that an empty list from the pull is accepted as a valid result and only a true failure (ret is None) calls _wait_for_published_diagnostics(uri, after_generation=diagnostics_before_request, timeout=2.5 if pull_diagnostics_failed else 0.5) and then falls back to _get_cached_published_diagnostics(uri) if necessary.
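The pitfall is that `[]` and `None` are both falsy in Python. A minimal sketch of the suggested distinction, using a hypothetical `resolve_diagnostics` helper rather than Serena's actual code:

```python
from typing import Optional


def resolve_diagnostics(pull_result: Optional[list], published_fallback: list) -> list:
    """Return the pull result unless the pull failed entirely.

    An empty list is a definitive 'no diagnostics' answer and must not
    trigger the slower published-diagnostics fallback wait.
    """
    if pull_result is None:  # pull failed or unsupported -> fall back
        return published_fallback
    return pull_result  # may be [], which is accepted as the final result


# Empty pull result is accepted as-is; only a true failure triggers the fallback.
assert resolve_diagnostics([], ["stale diagnostic"]) == []
assert resolve_diagnostics(None, ["stale diagnostic"]) == ["stale diagnostic"]
```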
576-591: Redundant dict construction pattern.

Line 591 constructs `normalized_diagnostic` as a dict, then unpacks it into `ls_types.Diagnostic(**normalized_diagnostic)`. Since `ls_types.Diagnostic` is likely a TypedDict, the intermediate dict construction is valid but the `**` unpacking is unnecessary if the dict already matches the TypedDict shape.

♻️ Consider simplifying the construction

-        normalized_diagnostics.append(ls_types.Diagnostic(**normalized_diagnostic))
+        normalized_diagnostics.append(normalized_diagnostic)

This works if `normalized_diagnostic` already satisfies `ls_types.Diagnostic`. If the explicit constructor call is intentional for type validation, the current form is fine.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/ls.py` around lines 576 - 591, The code builds a plain dict named normalized_diagnostic then unpacks it into ls_types.Diagnostic(**normalized_diagnostic) which is redundant; instead construct a value typed as ls_types.Diagnostic directly (e.g., declare normalized_diagnostic: ls_types.Diagnostic = {...}) and append that object to normalized_diagnostics without using the ** unpacking, preserving the existing conditional handling for severity (use ls_types.DiagnosticSeverity when severity is int) and code (accept int or str) and including source when present; update usages of normalized_diagnostic, DiagnosticSeverity, and normalized_diagnostics accordingly.

src/serena/tools/symbol_tools.py (1)
813-851: Consider adding diagnostics capture for consistency with other edit tools.

The `SafeDeleteSymbol` tool doesn't capture diagnostics before/after the edit, unlike `ReplaceSymbolBodyTool`, `InsertAfterSymbolTool`, etc. While deletion typically removes rather than introduces issues, the consistent pattern would help the agent understand the post-edit state.

♻️ Suggested enhancement for consistency

         code_editor = self.create_ls_code_editor()
+        edited_file_paths = [EditedFilePath(symbol_rel_path, symbol_rel_path)]
+        diagnostics_snapshot = self._capture_published_lsp_diagnostics_snapshot(edited_file_paths)
         code_editor.delete_symbol(symbol_name_path, relative_file_path=symbol_rel_path)
-        return SUCCESS_RESULT
+        return self._format_lsp_edit_result_with_new_diagnostics(SUCCESS_RESULT, edited_file_paths, diagnostics_snapshot)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/serena/tools/symbol_tools.py` around lines 813 - 851, SafeDeleteSymbol currently skips capturing diagnostics before/after performing the delete; update SafeDeleteSymbol.apply to mirror other edit tools by querying the language server for diagnostics around the file (use the existing lang_server = ls_symbol_retriever.get_language_server(symbol_rel_path)) before making edits (e.g., call lang_server.request_diagnostics(symbol_rel_path) or the project's equivalent) and again after code_editor.delete_symbol(...) and include those diagnostics (or a JSON-serializable summary) in the returned result or in the success/failure message so callers get consistent pre/post-edit diagnostic info.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/02-usage/001_features.md`:
- Line 17: The sentence "Tool results are compact JSON, keeping token usage low
and output qujuality high." contains a typo: replace "qujuality" with "quality"
so it reads "Tool results are compact JSON, keeping token usage low and output
quality high." Edit the line in 001_features.md where that sentence appears
(look for the exact string "output qujuality high") and save the change.
In `@docs/02-usage/070_security.md`:
- Around line 28-41: Update the paragraph describing Serena's four-way
verification to state it applies only to Serena's bundled/pinned versions:
clarify that Serena enforces expected version, host, SHA256, and extraction
layout for bundled artifacts, and that installations abort on failures; also add
a brief note that when users override bundled versions some downloaders
intentionally pass expected_sha256=None (e.g., the downloader code paths that
accept an expected_sha256 parameter), so custom overrides may not receive full
checksum verification and therefore provide a weaker guarantee.
In `@src/serena/code_editor.py`:
- Around line 381-390: Move the call to _set_last_edited_file_paths so it only
records paths from operations that actually applied: iterate operations, call
operation.apply() for each, and after each successful apply collect that
operation.get_edited_file_paths() into edited_file_paths (or collect all
successful ones and call _set_last_edited_file_paths once after the loop); do
not call _set_last_edited_file_paths using the planned operations list before
any apply, and ensure exceptions from operation.apply() prevent adding that
operation's paths.
- Around line 22-25: JetBrainsCodeEditor.rename_symbol currently performs a
JetBrains-backed rename but does not update the editor's
_last_edited_file_paths, causing callers to see stale paths; after the rename
operation completes (and any workspace edits are applied), populate
_last_edited_file_paths with EditedFilePath instances (before_relative_path and
after_relative_path) for each file changed—mirroring what
LanguageServerCodeEditor._apply_workspace_edit and the plain text-edit helpers
do—and ensure the same fix is applied to the other JetBrains-backed methods
mentioned (the block at lines 84-96) so all rename/code-edit paths are updated
consistently.
In `@src/serena/tools/tools_base.py`:
- Around line 34-43: DiagnosticIdentity currently includes exact start/end
positions which causes diagnostics that shift location to be treated as new;
change DiagnosticIdentity (the dataclass) to only include stable identifying
fields (e.g., message, severity, code_repr, source and any filename/rule id if
available) and remove start_line/start_character/end_line/end_character from the
identity, then introduce a separate type (e.g., DiagnosticRange or
DiagnosticOccurrence) that carries the range for display and use that where
rendering occurs; update any comparison or set/key usage that references
DiagnosticIdentity (and the other occurrence around lines 418-423) to use the
new stable identity for before/after comparisons and the separate range type
only for UI/display logic.
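One way to realize the identity/occurrence split described above (field names are illustrative, not Serena's actual schema):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DiagnosticIdentity:
    """Stable fields only - survives line shifts caused by edits elsewhere."""
    message: str
    severity: int
    code_repr: str
    source: str


@dataclass(frozen=True)
class DiagnosticOccurrence:
    """Positional info, kept separate and used only for display."""
    identity: DiagnosticIdentity
    start_line: int
    start_character: int


before = {DiagnosticIdentity("undefined name 'x'", 1, "F821", "ruff")}
# The same diagnostic after an insertion above it: only the line number moved.
after_occ = DiagnosticOccurrence(
    DiagnosticIdentity("undefined name 'x'", 1, "F821", "ruff"), 12, 4
)
newly_introduced = {after_occ.identity} - before
assert newly_introduced == set()  # a merely shifted diagnostic is not reported as new
```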
In `@src/solidlsp/language_servers/clojure_lsp.py`:
- Around line 139-140: The current logic reads clojure_lsp_version and computes
deps via ClojureLSP._runtime_dependencies but does not invalidate or version the
installed binary, causing Serena to reuse an old clojure-lsp executable; update
the installation/cache logic in the ClojureLSP class so installs are stored in a
versioned directory (e.g., include clojure_lsp_version in the install path) or
add a persisted installed-version check before reusing the existing binary;
specifically update where ClojureLSP installs/returns the binary path (the
methods that compute runtime deps and install the binary called from
ClojureLSP._runtime_dependencies and any ClojureLSP.install/get_executable
methods) to incorporate clojure_lsp_version into the path or to read/compare a
stored version and trigger reinstallation when it differs.
In `@src/solidlsp/language_servers/elixir_tools/elixir_tools.py`:
- Around line 80-81: The current installer uses expert_version (from
elixir_settings / EXPERT_VERSION) only for download metadata so once an "expert"
binary is installed it is reused forever; make the cache/versioning explicit by
including the expert_version in the cached binary path or by persisting the
installed version and invalidating/re-downloading when it differs: update the
code that computes the cached binary location and the installer/lookup
(references: elixir_settings, expert_version, EXPERT_VERSION, and the
Serena-managed install/check routine) to either (a) incorporate expert_version
into the filename/path or (b) read/write a small metadata file storing the
installed version and trigger reinstall when it mismatches the current setting;
apply the same change to the other installer logic mentioned (the blocks around
the second occurrence).
In `@src/solidlsp/language_servers/matlab_language_server.py`:
- Around line 186-194: The install currently reuses a fixed matlab_extension_dir
so changing matlab_extension_version still serves the cached extension; update
the install path to include the version marker so cache is versioned—e.g., build
matlab_extension_dir (the directory used by _download_matlab_extension) to
incorporate matlab_extension_version (or MATLAB_EXTENSION_VERSION fallback) when
constructing the path before calling _download_matlab_extension and when
checking existing installs, ensuring matlab_extension_url and the versioned
install path stay in sync.
In `@src/solidlsp/language_servers/pascal_server.py`:
- Around line 533-546: The checksum selection currently prefers the downloaded
checksums dict over the bundled dep.sha256; change the logic in the block that
computes expected_sha256 so dep.sha256 is used if present and non-empty and only
fall back to checksums.get(archive_filename) when dep.sha256 is missing,
ensuring the subsequent call to cls._verify_checksum(archive_path,
expected_sha256) still runs the same verification and failure/removal/logging
path when expected_sha256 is set; reference variables to update: expected_sha256
assignment, checksums, dep.sha256, archive_filename, and the verification call
_verify_checksum.
- Around line 620-624: The code stores resolved pasls_version and derived URLs
on the class (cls.PASLS_VERSION, cls.PASLS_RELEASES_URL, cls.PASLS_API_URL),
making them global; instead keep these values local to the setup flow or attach
them to the instance. Replace assignments to the class with either local
variables (pasls_version, pasls_releases_url, pasls_api_url) used only within
the initialization logic that calls pascal_settings =
solidlsp_settings.get_ls_specific_settings(Language.PASCAL), or set them on the
instance (self.PASLS_VERSION, self.PASLS_RELEASES_URL, self.PASLS_API_URL) so
each server instance uses its own resolved values and no cross-instance leakage
occurs.
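The checksum preference from the first pascal_server item above reduces to a small selection function (sketched here with hypothetical names):

```python
def select_expected_sha256(dep_sha256, downloaded_checksums, archive_filename):
    """Prefer the pinned (bundled) checksum; only fall back to the manifest
    downloaded alongside the release when no pin exists."""
    if dep_sha256:  # present and non-empty
        return dep_sha256
    return downloaded_checksums.get(archive_filename)


checksums = {"pasls-x86_64-linux.zip": "remote-digest"}
assert select_expected_sha256("pinned-digest", checksums, "pasls-x86_64-linux.zip") == "pinned-digest"
assert select_expected_sha256(None, checksums, "pasls-x86_64-linux.zip") == "remote-digest"
assert select_expected_sha256("", checksums, "missing.zip") is None
```

The point of the ordering: a checksum fetched from the same host as the archive only proves the download was consistent, while the bundled pin proves it matches what was vetted at release time.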
In `@src/solidlsp/language_servers/ruby_lsp.py`:
- Around line 230-235: The gem install call currently always runs ["gem",
"install", "ruby-lsp", "-v", ruby_lsp_version], which can install into the
system Ruby; modify the subprocess.run in the installation branch to use the
rbenv toolchain when use_rbenv is True by building the command as ["rbenv",
"exec", "gem", "install", "ruby-lsp", "-v", ruby_lsp_version] if use_rbenv else
the existing ["gem", "install", "ruby-lsp", "-v", ruby_lsp_version], and call
subprocess.run with the same check=True, capture_output=True,
cwd=repository_root_path so the gem is installed into the correct Ruby
environment (referencing variables use_rbenv, ruby_lsp_version,
repository_root_path and the existing subprocess.run invocation).
In `@src/solidlsp/ls_utils.py`:
- Around line 223-247: The download_file_verified function currently calls
requests.get() (in download_file_verified) which allows automatic redirects and
validates response.url only after redirects; change this to disable automatic
redirects (allow_redirects=False) and manually follow each Location hop: on each
redirect response validate the next URL with FileUtils._validate_download_host
before issuing the next request, and switch the entire implementation to an
async HTTP client (e.g., use aiohttp.ClientSession or httpx.AsyncClient) so the
function performs non-blocking I/O; additionally, for the tar extraction code
(the extractall() call) replace the unsafe extractall() with a safe extraction
routine that filters members to prevent path traversal and rejects/blocks
symlinks and hardlinks before writing files.
In `@src/solidlsp/ls.py`:
- Line 1034: The current code uses "assert False, f'Unexpected response from
Language Server: {response}'" which can be stripped out with Python -O; replace
it with an explicit exception raise to ensure the check always runs (e.g., raise
AssertionError(f"Unexpected response from Language Server: {response}")). Update
the failing location in ls.py where that assertion appears (look for the literal
message "Unexpected response from Language Server" and the local variable
response) to raise an AssertionError with the same descriptive message instead
of using assert.
In `@test/diagnostics_cases.py`:
- Around line 24-29: Rename the parameter named id in the helper function
diagnostic_case_param to avoid shadowing the builtin; change the signature to
accept case_id (and update any internal usage of id to case_id) and update all
call sites that pass the id keyword accordingly so behavior remains the same;
this targets the function diagnostic_case_param and its parameter id to resolve
Ruff A002 without altering functionality.
In `@test/resources/repos/kotlin/test_repo/gradlew.bat`:
- Around line 1-94: The gradlew.bat file uses LF-only line endings which can
break Windows batch parsing (affecting labels like :execute, :fail, :mainEnd);
convert gradlew.bat to CRLF line endings, commit the updated file, and add or
update .gitattributes with an entry such as "*.bat text eol=crlf" so future
commits preserve CRLF for .bat files.
In `@test/resources/repos/python/test_repo/test_repo/diagnostics_sample.py`:
- Line 5: The undefined names in this diagnostics fixture (e.g., the identifier
`missing_user` returned on the line containing "return missing_user" and the
other undefined name at line 11) are intentional, so suppress Ruff F821
explicitly by appending a targeted comment `# noqa: F821` to those lines (e.g.,
change `return missing_user` to `return missing_user # noqa: F821` and add the
same `# noqa: F821` comment to the other undefined identifier on the indicated
line); do not change any other code or remove the intentionally undefined names.
In `@test/serena/test_serena_agent.py`:
- Around line 49-50: The parameter name `id` in the to_pytest_param method
shadows the builtin and triggers Ruff A002; rename the parameter to something
like `case_id` in the function signature and update its use in the return
expression (function: to_pytest_param, references: pytest.param(..., id=...)) so
the behavior stays the same but the builtin is not shadowed; run ruff/mypy
formatting to ensure type hints and imports remain correct after renaming.
In `@test/solidlsp/go/test_go_basic.py`:
- Around line 35-60: The tests test_find_implementations and
test_request_implementing_symbols are being defined inside a class-level if
block using language_has_verified_implementation_support(Language.GO); instead,
remove the conditional block and apply pytest.mark.skipif to each test (or the
whole module) so tests are registered but skipped when support is
unavailable—for example decorate the test functions test_find_implementations
and test_request_implementing_symbols with `@pytest.mark.skipif`(not
language_has_verified_implementation_support(Language.GO), reason="GO
implementation not available") while keeping the existing parametrize and
function signatures intact.
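A self-contained sketch of the decorator-based pattern; the support-check helper is stubbed here so the snippet runs standalone:

```python
import pytest


def language_has_verified_implementation_support(language: str) -> bool:
    return False  # stub: pretend GO support is not verified


@pytest.mark.skipif(
    not language_has_verified_implementation_support("GO"),
    reason="GO implementation not available",
)
def test_find_implementations() -> None:
    # The test is always collected; pytest reports it as 'skipped'
    # instead of it silently never existing.
    assert True


# The mark is attached to the (always-defined) function object.
assert any(mark.name == "skipif" for mark in test_find_implementations.pytestmark)
```

Unlike the class-level `if`, this keeps the test visible in collection counts and in `-rs` skip summaries.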
---
Outside diff comments:
In `@src/serena/code_editor.py`:
- Around line 343-346: The rename logic in the apply method currently calls
os.rename(old_abs_path, new_abs_path) which fails with ENOENT if the destination
parent directory doesn't exist; before renaming in apply (inside the same scope
as old_abs_path/new_abs_path), ensure the destination directory exists by
computing dest_dir = os.path.dirname(new_abs_path) and creating it with
os.makedirs(dest_dir, exist_ok=True) when dest_dir is non-empty, then perform
os.rename; this fixes moves like "foo.py -> pkg/foo.py".
In `@src/solidlsp/language_servers/fsharp_language_server.py`:
- Around line 75-84: The RuntimeDependencyCollection instance for fsautocomplete
is created but never used; replace the discarded construction with an assignment
(e.g., deps = RuntimeDependencyCollection([...])) and remove the manual
subprocess.run install logic, then call deps.install(target_dir) to perform the
installation; update references to use the same deps variable and keep the
RuntimeDependency(...) entry for fsautocomplete intact so the standard install()
flow is used.
---
Duplicate comments:
In `@src/solidlsp/language_servers/ty_server.py`:
- Around line 4-7: The docs and fallback in ty_server.py that describe the uv
fallback (referenced from ls_specific_settings["python_ty"]) currently use the
outdated "uv x" invocation; update both the documentation strings and the
fallback command construction to use the official form "uv tool run" (and you
may note "uvx" as the documented shorthand) everywhere you build or describe the
fallback command so the code path that launches the tool uses "uv tool run"
instead of "uv x".
In `@src/solidlsp/ls_utils.py`:
- Around line 421-425: The current extraction uses
tar_ref.extractall(target_path) after only validating tar_member.name via
FileUtils._validate_extraction_path, which still allows
symlinks/hardlinks/devices to escape or create unsafe entries; modify the
extraction in the tarfile.open(archive_path, tar_mode) block to either call
tar_ref.extractall(target_path, filter="data") to enforce the tarfile data
filter or iterate over tar_ref.getmembers(), run
FileUtils._validate_extraction_path(member.name, target_path) and additionally
reject members with unsafe types (e.g., symlink, hardlink, dev/char/block
devices) before calling tar_ref.extract(member, target_path) for each approved
member; ensure the changes reference the existing
FileUtils._validate_extraction_path, tar_ref, archive_path, tar_mode, and
target_path symbols.
In `@test/serena/test_serena_agent.py`:
- Around line 1273-1297: The test
test_replace_symbol_body_reports_new_diagnostics currently only asserts
substrings; change it (and the similar test covering SafeDeleteSymbol) to use
the project's snapshot pattern for symbolic editing tests: capture the full edit
result (the variable result or diagnostics from parse_edit_diagnostics_result)
and assert against the stored snapshot (use the same snapshot helper/fixture
used by other symbolic edit tests), replacing the two assert "in" checks with a
single snapshot assertion so response shape and output regressions are caught;
keep references to ReplaceSymbolBodyTool, SafeDeleteSymbol,
parse_edit_diagnostics_result and project_file_modification_context to locate
and update the tests.
In `@test/solidlsp/fsharp/test_fsharp_basic.py`:
- Around line 41-66: The tests are being conditionally defined based on
language_has_verified_implementation_support(Language.FSHARP), which prevents
pytest from collecting and reporting them; instead, keep the test functions
test_find_implementations and test_request_implementing_symbols defined
unconditionally and decorate each with `@pytest.mark.skipif`(not
language_has_verified_implementation_support(Language.FSHARP), reason="F#
implementation support not verified") so pytest will collect and report skips;
update imports if needed to reference pytest and the
language_has_verified_implementation_support/Language.FSHARP symbols.
---
Nitpick comments:
In `@scripts/demo_find_implementing_symbol.py`:
- Around line 54-56: The call execute_task(lambda: None) is a non-obvious idiom
used to wait for the language server to finish startup; add a concise inline
comment directly above that line explaining that execute_task is invoked with a
no-op lambda to yield control until the language server has initialized (e.g.,
"wait for language server startup by scheduling a no-op task via
execute_task(lambda: None)"), referencing the execute_task(lambda: None) usage
so future readers understand the intent.
In `@src/serena/tools/symbol_tools.py`:
- Around line 813-851: SafeDeleteSymbol currently skips capturing diagnostics
before/after performing the delete; update SafeDeleteSymbol.apply to mirror
other edit tools by querying the language server for diagnostics around the file
(use the existing lang_server =
ls_symbol_retriever.get_language_server(symbol_rel_path)) before making edits
(e.g., call lang_server.request_diagnostics(symbol_rel_path) or the project's
equivalent) and again after code_editor.delete_symbol(...) and include those
diagnostics (or a JSON-serializable summary) in the returned result or in the
success/failure message so callers get consistent pre/post-edit diagnostic info.
In `@src/solidlsp/language_servers/fsharp_language_server.py`:
- Around line 25-26: Add an explicit type annotation to the
FSAUTOCOMPLETE_VERSION constant to satisfy strict typing (mypy); update the
declaration of FSAUTOCOMPLETE_VERSION in fsharp_language_server.py to include
its type (str) so the constant is declared as FSAUTOCOMPLETE_VERSION: str =
"0.83.0".
- Around line 71-72: The fsautocomplete_version from solidlsp_settings
(retrieved via fsharp_settings =
solidlsp_settings.get_ls_specific_settings(Language.FSHARP) and stored in
fsautocomplete_version with fallback FSAUTOCOMPLETE_VERSION) must be validated
before use; add a simple check that the string matches an allowed pattern (e.g.,
semantic version like 1.2.3 with optional prerelease/build identifiers or an
allowed keyword such as "latest") and if it fails, log or raise a clear
configuration error explaining the invalid value and expected format, so callers
that run dotnet tool install receive a helpful message instead of cryptic
failures.
In `@src/solidlsp/ls.py`:
- Around line 1337-1346: The current check uses a falsy test ("if not ret:") so
an empty list returned by the pull is treated as failure and triggers the
fallback wait; change the conditional to explicitly check None ("if ret is
None:") so that an empty list from the pull is accepted as a valid result and
only a true failure (ret is None) calls _wait_for_published_diagnostics(uri,
after_generation=diagnostics_before_request, timeout=2.5 if
pull_diagnostics_failed else 0.5) and then falls back to
_get_cached_published_diagnostics(uri) if necessary.
- Around line 576-591: The code builds a plain dict named normalized_diagnostic
then unpacks it into ls_types.Diagnostic(**normalized_diagnostic) which is
redundant; instead construct a value typed as ls_types.Diagnostic directly
(e.g., declare normalized_diagnostic: ls_types.Diagnostic = {...}) and append
that object to normalized_diagnostics without using the ** unpacking, preserving
the existing conditional handling for severity (use ls_types.DiagnosticSeverity
when severity is int) and code (accept int or str) and including source when
present; update usages of normalized_diagnostic, DiagnosticSeverity, and
normalized_diagnostics accordingly.
In `@test/resources/repos/kotlin/test_repo/gradlew.bat`:
- Line 58: The assignment to JAVA_EXE uses a forward slash in the path (set
JAVA_EXE=%JAVA_HOME%/bin/java.exe) which is inconsistent with other backslash
paths; update the assignment so the path uses backslashes (e.g., set
JAVA_EXE=%JAVA_HOME%\bin\java.exe) to match the idiomatic Windows batch style
and other occurrences of JAVA_EXE in the script.
In `@test/solidlsp/typescript/test_typescript_basic.py`:
- Around line 36-61: The tests are conditionally defined with an outer if using
language_has_verified_implementation_support(Language.TYPESCRIPT) which hides
them from test discovery; replace that pattern by applying pytest.mark.skipif to
the test functions (or the containing test class) so tests are discovered and
reported as skipped when support is absent. Concretely, remove the outer if and
add `@pytest.mark.skipif`(not
language_has_verified_implementation_support(Language.TYPESCRIPT),
reason="TypeScript implementation support not verified") above the
test_find_implementations and test_request_implementing_symbols (or above their
test class), keeping the existing `@pytest.mark.parametrize` decorators and
references to Language.TYPESCRIPT unchanged.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e71f930f-8a3d-4078-bcb0-e39f1b6bd1dd
⛔ Files ignored due to path filters (1)
test/resources/repos/kotlin/test_repo/gradle/wrapper/gradle-wrapper.jar is excluded by !**/*.jar
📒 Files selected for processing (124)
- .serena/project.yml
- docs/02-usage/001_features.md
- docs/02-usage/050_configuration.md
- docs/02-usage/070_security.md
- pyproject.toml
- scripts/demo_diagnostics.py
- scripts/demo_find_defining_symbol.py
- scripts/demo_find_implementing_symbol.py
- scripts/demo_run_tools.py
- src/serena/code_editor.py
- src/serena/jetbrains/jetbrains_plugin_client.py
- src/serena/jetbrains/jetbrains_types.py
- src/serena/resources/config/internal_modes/jetbrains.yml
- src/serena/resources/project.template.yml
- src/serena/symbol.py
- src/serena/tools/file_tools.py
- src/serena/tools/jetbrains_tools.py
- src/serena/tools/symbol_tools.py
- src/serena/tools/tools_base.py
- src/solidlsp/language_servers/al_language_server.py
- src/solidlsp/language_servers/ansible_language_server.py
- src/solidlsp/language_servers/bash_language_server.py
- src/solidlsp/language_servers/clangd_language_server.py
- src/solidlsp/language_servers/clojure_lsp.py
- src/solidlsp/language_servers/common.py
- src/solidlsp/language_servers/csharp_language_server.py
- src/solidlsp/language_servers/dart_language_server.py
- src/solidlsp/language_servers/eclipse_jdtls.py
- src/solidlsp/language_servers/elixir_tools/elixir_tools.py
- src/solidlsp/language_servers/elm_language_server.py
- src/solidlsp/language_servers/fsharp_language_server.py
- src/solidlsp/language_servers/gopls.py
- src/solidlsp/language_servers/groovy_language_server.py
- src/solidlsp/language_servers/hlsl_language_server.py
- src/solidlsp/language_servers/intelephense.py
- src/solidlsp/language_servers/kotlin_language_server.py
- src/solidlsp/language_servers/lua_ls.py
- src/solidlsp/language_servers/luau_lsp.py
- src/solidlsp/language_servers/marksman.py
- src/solidlsp/language_servers/matlab_language_server.py
- src/solidlsp/language_servers/omnisharp.py
- src/solidlsp/language_servers/omnisharp/runtime_dependencies.json
- src/solidlsp/language_servers/pascal_server.py
- src/solidlsp/language_servers/phpactor.py
- src/solidlsp/language_servers/powershell_language_server.py
- src/solidlsp/language_servers/ruby_lsp.py
- src/solidlsp/language_servers/rust_analyzer.py
- src/solidlsp/language_servers/solidity_language_server.py
- src/solidlsp/language_servers/systemverilog_server.py
- src/solidlsp/language_servers/taplo_server.py
- src/solidlsp/language_servers/terraform_ls.py
- src/solidlsp/language_servers/ty_server.py
- src/solidlsp/language_servers/typescript_language_server.py
- src/solidlsp/language_servers/vts_language_server.py
- src/solidlsp/language_servers/vue_language_server.py
- src/solidlsp/language_servers/yaml_language_server.py
- src/solidlsp/ls.py
- src/solidlsp/ls_config.py
- src/solidlsp/ls_process.py
- src/solidlsp/ls_types.py
- src/solidlsp/ls_utils.py
- test/conftest.py
- test/diagnostics_cases.py
- test/resources/repos/clojure/test_repo/src/test_app/diagnostics_sample.clj
- test/resources/repos/cpp/test_repo/compile_commands.json
- test/resources/repos/cpp/test_repo/diagnostics_sample.cpp
- test/resources/repos/csharp/test_repo/DiagnosticsSample.cs
- test/resources/repos/csharp/test_repo/Program.cs
- test/resources/repos/csharp/test_repo/Services/ConsoleGreeter.cs
- test/resources/repos/csharp/test_repo/Services/IGreeter.cs
- test/resources/repos/fsharp/test_repo/DiagnosticsSample.fs
- test/resources/repos/fsharp/test_repo/Formatter.fs
- test/resources/repos/fsharp/test_repo/Program.fs
- test/resources/repos/fsharp/test_repo/Shapes.fs
- test/resources/repos/fsharp/test_repo/TestProject.fsproj
- test/resources/repos/go/test_repo/diagnostics_sample.go
- test/resources/repos/go/test_repo/main.go
- test/resources/repos/java/test_repo/src/main/java/test_repo/ConsoleGreeter.java
- test/resources/repos/java/test_repo/src/main/java/test_repo/DiagnosticsSample.java
- test/resources/repos/java/test_repo/src/main/java/test_repo/Greeter.java
- test/resources/repos/java/test_repo/src/main/java/test_repo/Main.java
- test/resources/repos/kotlin/test_repo/gradle/wrapper/gradle-wrapper.properties
- test/resources/repos/kotlin/test_repo/gradlew
- test/resources/repos/kotlin/test_repo/gradlew.bat
- test/resources/repos/kotlin/test_repo/src/main/kotlin/test_repo/DiagnosticsSample.kt
- test/resources/repos/lean4/test_repo/DiagnosticsSample.lean
- test/resources/repos/lua/test_repo/main.lua
- test/resources/repos/lua/test_repo/src/animals.lua
- test/resources/repos/php/test_repo/diagnostics_sample.php
- test/resources/repos/powershell/test_repo/diagnostics_sample.ps1
- test/resources/repos/python/test_repo/test_repo/diagnostics_sample.py
- test/resources/repos/ruby/test_repo/lib.rb
- test/resources/repos/ruby/test_repo/main.rb
- test/resources/repos/rust/test_repo/src/diagnostics_sample.rs
- test/resources/repos/rust/test_repo/src/lib.rs
- test/resources/repos/rust/test_repo/src/main.rs
- test/resources/repos/typescript/test_repo/.serena/project.yml
- test/resources/repos/typescript/test_repo/diagnostics_sample.ts
- test/resources/repos/typescript/test_repo/formatters.ts
- test/resources/repos/typescript/test_repo/index.ts
- test/serena/__snapshots__/test_symbol_editing.ambr
- test/serena/test_serena_agent.py
- test/serena/test_symbol.py
- test/solidlsp/clojure/__init__.py
- test/solidlsp/csharp/test_csharp_basic.py
- test/solidlsp/csharp/test_csharp_nuget_download.py
- test/solidlsp/dart/test_dart_basic.py
- test/solidlsp/fortran/test_fortran_basic.py
- test/solidlsp/fsharp/test_fsharp_basic.py
- test/solidlsp/go/test_go_basic.py
- test/solidlsp/java/test_java_basic.py
- test/solidlsp/lua/test_lua_basic.py
- test/solidlsp/luau/test_luau_dependency_provider.py
- test/solidlsp/python/test_python_basic.py
- test/solidlsp/python/test_retrieval_with_ignored_dirs.py
- test/solidlsp/python/test_symbol_retrieval.py
- test/solidlsp/ruby/test_ruby_symbol_retrieval.py
- test/solidlsp/rust/test_rust_basic.py
- test/solidlsp/solidity/test_solidity_basic.py
- test/solidlsp/test_defining_symbol_matrix.py
- test/solidlsp/test_diagnostics_matrix.py
- test/solidlsp/test_ls_common.py
- test/solidlsp/typescript/test_typescript_basic.py
- test/solidlsp/util/test_ls_utils.py
✅ Files skipped from review due to trivial changes (24)
- test/resources/repos/java/test_repo/src/main/java/test_repo/Main.java
- test/resources/repos/java/test_repo/src/main/java/test_repo/Greeter.java
- test/resources/repos/kotlin/test_repo/gradle/wrapper/gradle-wrapper.properties
- test/resources/repos/java/test_repo/src/main/java/test_repo/ConsoleGreeter.java
- test/resources/repos/csharp/test_repo/Services/IGreeter.cs
- src/serena/resources/project.template.yml
- test/resources/repos/fsharp/test_repo/Shapes.fs
- test/resources/repos/fsharp/test_repo/TestProject.fsproj
- test/resources/repos/kotlin/test_repo/src/main/kotlin/test_repo/DiagnosticsSample.kt
- src/solidlsp/language_servers/yaml_language_server.py
- src/solidlsp/language_servers/gopls.py
- test/solidlsp/ruby/test_ruby_symbol_retrieval.py
- src/serena/resources/config/internal_modes/jetbrains.yml
- test/resources/repos/csharp/test_repo/Services/ConsoleGreeter.cs
- test/resources/repos/lua/test_repo/main.lua
- src/solidlsp/language_servers/kotlin_language_server.py
- test/resources/repos/fsharp/test_repo/Formatter.fs
- test/resources/repos/ruby/test_repo/lib.rb
- test/solidlsp/clojure/init.py
- src/serena/jetbrains/jetbrains_types.py
- test/solidlsp/util/test_ls_utils.py
- scripts/demo_diagnostics.py
- pyproject.toml
- test/resources/repos/typescript/test_repo/formatters.ts
🚧 Files skipped from review as they are similar to previous changes (51)
- test/resources/repos/csharp/test_repo/Program.cs
- test/solidlsp/dart/test_dart_basic.py
- test/resources/repos/cpp/test_repo/compile_commands.json
- test/resources/repos/lean4/test_repo/DiagnosticsSample.lean
- test/resources/repos/fsharp/test_repo/DiagnosticsSample.fs
- src/solidlsp/language_servers/rust_analyzer.py
- src/solidlsp/language_servers/solidity_language_server.py
- test/resources/repos/typescript/test_repo/index.ts
- test/resources/repos/go/test_repo/main.go
- test/resources/repos/powershell/test_repo/diagnostics_sample.ps1
- test/serena/test_symbol.py
- test/solidlsp/test_ls_common.py
- src/solidlsp/language_servers/ansible_language_server.py
- test/resources/repos/csharp/test_repo/DiagnosticsSample.cs
- test/resources/repos/rust/test_repo/src/diagnostics_sample.rs
- src/solidlsp/ls_process.py
- test/resources/repos/typescript/test_repo/diagnostics_sample.ts
- .serena/project.yml
- scripts/demo_run_tools.py
- src/solidlsp/language_servers/intelephense.py
- src/solidlsp/language_servers/typescript_language_server.py
- test/solidlsp/solidity/test_solidity_basic.py
- test/solidlsp/fortran/test_fortran_basic.py
- test/resources/repos/clojure/test_repo/src/test_app/diagnostics_sample.clj
- test/solidlsp/test_defining_symbol_matrix.py
- test/resources/repos/rust/test_repo/src/lib.rs
- src/solidlsp/language_servers/vts_language_server.py
- test/solidlsp/luau/test_luau_dependency_provider.py
- src/solidlsp/language_servers/systemverilog_server.py
- src/solidlsp/language_servers/hlsl_language_server.py
- test/serena/snapshots/test_symbol_editing.ambr
- src/solidlsp/language_servers/terraform_ls.py
- test/resources/repos/lua/test_repo/src/animals.lua
- test/resources/repos/ruby/test_repo/main.rb
- test/solidlsp/lua/test_lua_basic.py
- src/solidlsp/language_servers/groovy_language_server.py
- src/solidlsp/language_servers/taplo_server.py
- test/solidlsp/python/test_retrieval_with_ignored_dirs.py
- test/solidlsp/csharp/test_csharp_basic.py
- test/solidlsp/java/test_java_basic.py
- scripts/demo_find_defining_symbol.py
- test/solidlsp/csharp/test_csharp_nuget_download.py
- src/solidlsp/language_servers/eclipse_jdtls.py
- src/solidlsp/language_servers/omnisharp/runtime_dependencies.json
- src/solidlsp/language_servers/dart_language_server.py
- src/solidlsp/language_servers/lua_ls.py
- src/serena/jetbrains/jetbrains_plugin_client.py
- docs/02-usage/050_configuration.md
- src/solidlsp/language_servers/csharp_language_server.py
- src/serena/symbol.py
- src/serena/tools/jetbrains_tools.py
> This is a deliberate design choice: line numbers and column offsets are fragile and shift with every edit,
> making them unreliable anchors for agents working on evolving code.
> Symbols (classes, methods, functions) are stable, meaningful identifiers that agents can reason about naturally.
> Tool results are compact JSON, keeping token usage low and output qujuality high.
Fix typo: "qujuality" should be "quality".
There's a spelling error in the word "quality" that should be corrected for professional documentation.
📝 Proposed fix
- Tool results are compact JSON, keeping token usage low and output qujuality high.
+ Tool results are compact JSON, keeping token usage low and output quality high.

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
- Tool results are compact JSON, keeping token usage low and output qujuality high.
+ Tool results are compact JSON, keeping token usage low and output quality high.
```
🧰 Tools
🪛 LanguageTool
[grammar] ~17-~17: Ensure spelling is correct
Context: ...SON, keeping token usage low and output qujuality high. - **LLM- and framework-independen...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/02-usage/001_features.md` at line 17, The sentence "Tool results are
compact JSON, keeping token usage low and output qujuality high." contains a
typo: replace "qujuality" with "quality" so it reads "Tool results are compact
JSON, keeping token usage low and output quality high." Edit the line in
001_features.md where that sentence appears (look for the exact string "output
qujuality high") and save the change.
> - **Pinned versions by default**: default downloads use exact versions instead of floating `latest` or nightly channels.
> - **Integrity verification**: downloaded artifacts are checked against pinned SHA256 hashes stored in Serena's source code.
> - **Host allowlists**: download URLs are restricted to the expected hosts for a given dependency.
> - **Safe extraction**: archive extraction validates paths to prevent path traversal and zip-slip style attacks.
> - **Managed install locations**: dependencies are installed into Serena-managed directories instead of into the project repository.
>
> In practice, this means that a downloaded artifact must match all of the following:
>
> - the expected version,
> - the expected host,
> - the expected SHA256 checksum,
> - and the expected extraction layout.
>
> If any of these checks fail, Serena aborts the installation instead of continuing.
Narrow the checksum guarantee to Serena's bundled versions.
This section reads as unconditional, but several downloaders in this PR deliberately pass expected_sha256=None when the user overrides the bundled version. As written, readers will assume every supported override is hash-verified, which is not true today. Please scope the four-way verification claim to Serena's pinned defaults and call out the weaker guarantee for custom version overrides.
🧰 Tools
🪛 LanguageTool
[grammar] ~31-~31: Use a hyphen to join words.
Context: ...s to prevent path traversal and zip-slip style attacks. - **Managed install locat...
(QB_NEW_EN_HYPHEN)
[style] ~34-~34: Consider removing “of” to be more concise
Context: ...s that a downloaded artifact must match all of the following: - the expected version, - t...
(ALL_OF_THE)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/02-usage/070_security.md` around lines 28 - 41, Update the paragraph
describing Serena's four-way verification to state it applies only to Serena's
bundled/pinned versions: clarify that Serena enforces expected version, host,
SHA256, and extraction layout for bundled artifacts, and that installations
abort on failures; also add a brief note that when users override bundled
versions some downloaders intentionally pass expected_sha256=None (e.g., the
downloader code paths that accept an expected_sha256 parameter), so custom
overrides may not receive full checksum verification and therefore provide a
weaker guarantee.
```python
# recording the affected files
edited_file_paths: list[EditedFilePath] = []
for operation in operations:
    edited_file_paths.extend(operation.get_edited_file_paths())

self._set_last_edited_file_paths(edited_file_paths)

# applying the edit operations
for operation in operations:
    operation.apply()
```
Only record edited paths after the operations succeed.
_set_last_edited_file_paths() runs before any filesystem/LSP edit is applied. If one of the operations raises, the new accessor will report files that were never changed, which is especially misleading for the post-edit diagnostics this PR adds. Please update the field from the successfully applied operations instead of the planned ones.
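A runnable sketch of the apply-then-record ordering being requested; `FakeOp` and `Editor` are stand-ins for the real operation and editor classes:

```python
class FakeOp:
    """Stand-in for an edit operation with the two methods used above."""

    def __init__(self, path: str, fail: bool = False) -> None:
        self.path = path
        self.fail = fail

    def apply(self) -> None:
        if self.fail:
            raise RuntimeError(f"edit of {self.path} failed")

    def get_edited_file_paths(self) -> list[str]:
        return [self.path]


class Editor:
    """Records a path only after the corresponding apply() succeeded."""

    def __init__(self) -> None:
        self.last_edited_file_paths: list[str] = []

    def apply_operations(self, operations: list[FakeOp]) -> None:
        applied: list[str] = []
        try:
            for op in operations:
                op.apply()  # may raise; nothing is recorded for this op then
                applied.extend(op.get_edited_file_paths())
        finally:
            # Even on failure, the recorded paths reflect real edits only.
            self.last_edited_file_paths = applied


editor = Editor()
try:
    editor.apply_operations([FakeOp("a.py"), FakeOp("b.py", fail=True), FakeOp("c.py")])
except RuntimeError:
    pass
print(editor.last_edited_file_paths)  # ['a.py'] - only the operation that applied
```

With this ordering, the post-edit diagnostics the PR adds can trust the recorded paths even when an operation in the middle of the batch fails.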
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/serena/code_editor.py` around lines 381 - 390, Move the call to
_set_last_edited_file_paths so it only records paths from operations that
actually applied: iterate operations, call operation.apply() for each, and after
each successful apply collect that operation.get_edited_file_paths() into
edited_file_paths (or collect all successful ones and call
_set_last_edited_file_paths once after the loop); do not call
_set_last_edited_file_paths using the planned operations list before any apply,
and ensure exceptions from operation.apply() prevent adding that operation's
paths.
```python
@dataclass(frozen=True)
class DiagnosticIdentity:
    message: str
    start_line: int
    start_character: int
    end_line: int
    end_character: int
    severity: int | None
    code_repr: str | None
    source: str | None
```
Absolute ranges will over-report “new” diagnostics.
The diff here keys on DiagnosticIdentity, and that identity includes exact start/end positions. Any pre-existing warning that merely moves because the edit inserted or removed lines will now be classified as newly introduced. Use a stable identity for the before/after comparison and keep the range only for display.
Also applies to: 418-423
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/serena/tools/tools_base.py` around lines 34 - 43, DiagnosticIdentity
currently includes exact start/end positions which causes diagnostics that shift
location to be treated as new; change DiagnosticIdentity (the dataclass) to only
include stable identifying fields (e.g., message, severity, code_repr, source
and any filename/rule id if available) and remove
start_line/start_character/end_line/end_character from the identity, then
introduce a separate type (e.g., DiagnosticRange or DiagnosticOccurrence) that
carries the range for display and use that where rendering occurs; update any
comparison or set/key usage that references DiagnosticIdentity (and the other
occurrence around lines 418-423) to use the new stable identity for before/after
comparisons and the separate range type only for UI/display logic.
```python
def diagnostic_case_param(
    case: DiagnosticCase,
    *marks: MarkDecorator | Mark,
    id: str,
):
    return pytest.param(case.language, case, marks=[*get_pytest_markers(case.language), *marks], id=id)
```
Rename the id parameter to clear Ruff A002.
id shadows the builtin here, and Ruff already reports that as A002. Renaming it to something like case_id avoids a lint failure without changing the helper's behavior. As per coding guidelines, **/*.py: Use strict typing with mypy and format code with ruff.
🧰 Tools
🪛 Ruff (0.15.7)
[error] 27-27: Function argument id is shadowing a Python builtin
(A002)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/diagnostics_cases.py` around lines 24 - 29, Rename the parameter named
id in the helper function diagnostic_case_param to avoid shadowing the builtin;
change the signature to accept case_id (and update any internal usage of id to
case_id) and update all call sites that pass the id keyword accordingly so
behavior remains the same; this targets the function diagnostic_case_param and
its parameter id to resolve Ruff A002 without altering functionality.
```bat
@rem
@rem Copyright 2015 the original author or authors.
@rem
@rem Licensed under the Apache License, Version 2.0 (the "License");
@rem you may not use this file except in compliance with the License.
@rem You may obtain a copy of the License at
@rem
@rem https://www.apache.org/licenses/LICENSE-2.0
@rem
@rem Unless required by applicable law or agreed to in writing, software
@rem distributed under the License is distributed on an "AS IS" BASIS,
@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
@rem See the License for the specific language governing permissions and
@rem limitations under the License.
@rem
@rem SPDX-License-Identifier: Apache-2.0
@rem

@if "%DEBUG%"=="" @echo off
@rem ##########################################################################
@rem
@rem Gradle startup script for Windows
@rem
@rem ##########################################################################

@rem Set local scope for the variables with windows NT shell
if "%OS%"=="Windows_NT" setlocal

set DIRNAME=%~dp0
if "%DIRNAME%"=="" set DIRNAME=.
@rem This is normally unused
set APP_BASE_NAME=%~n0
set APP_HOME=%DIRNAME%

@rem Resolve any "." and ".." in APP_HOME to make it shorter.
for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi

@rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script.
set DEFAULT_JVM_OPTS="-Xmx64m" "-Xms64m"

@rem Find java.exe
if defined JAVA_HOME goto findJavaFromJavaHome

set JAVA_EXE=java.exe
%JAVA_EXE% -version >NUL 2>&1
if %ERRORLEVEL% equ 0 goto execute

echo. 1>&2
echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. 1>&2
echo. 1>&2
echo Please set the JAVA_HOME variable in your environment to match the 1>&2
echo location of your Java installation. 1>&2

goto fail

:findJavaFromJavaHome
set JAVA_HOME=%JAVA_HOME:"=%
set JAVA_EXE=%JAVA_HOME%/bin/java.exe

if exist "%JAVA_EXE%" goto execute

echo. 1>&2
echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME% 1>&2
echo. 1>&2
echo Please set the JAVA_HOME variable in your environment to match the 1>&2
echo location of your Java installation. 1>&2

goto fail

:execute
@rem Setup the command line

set CLASSPATH=


@rem Execute Gradle
"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" -jar "%APP_HOME%\gradle\wrapper\gradle-wrapper.jar" %*

:end
@rem End local scope for the variables with windows NT shell
if %ERRORLEVEL% equ 0 goto mainEnd

:fail
rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of
rem the _cmd.exe /c_ return code!
set EXIT_CODE=%ERRORLEVEL%
if %EXIT_CODE% equ 0 set EXIT_CODE=1
if not ""=="%GRADLE_EXIT_CONSOLE%" exit %EXIT_CODE%
exit /b %EXIT_CODE%

:mainEnd
if "%OS%"=="Windows_NT" endlocal

:omega
```
Convert to Windows line endings (CRLF) for batch file compatibility.
The static analysis tool correctly identifies that this batch file uses Unix line endings (LF-only). Windows batch parser can exhibit parsing failures at 512-byte boundaries with LF-only line endings, potentially causing GOTO and CALL label resolution issues.
Since this is a test resource for the Kotlin test repository, ensure the file is committed with CRLF line endings. You can configure this via .gitattributes:
*.bat text eol=crlf

🧰 Tools
🪛 Blinter (1.0.112)
[error] 1-1: Unix line endings detected. Explanation: Batch file uses Unix line endings (LF-only) which can cause GOTO/CALL label parsing failures and script malfunction due to Windows batch parser 512-byte boundary bugs. Recommendation: Convert file to Windows line endings (CRLF). Use tools like dos2unix, notepad++, or configure git with 'git config core.autocrlf true'. Context: File uses Unix line endings (LF-only) - 94 LF sequences found
(E018)
[error] 88-88: Complex quote escaping error. Explanation: Complex quote patterns may not be handled correctly by the command interpreter. Recommendation: Use triple-quote pattern """text""" for quotes within quoted strings, or escape properly with variables. Context: Complex quote escaping detected
(E028)
[error] 29-29: Unsafe SET command usage. Explanation: SET commands without proper validation or quoting can cause security issues. Recommendation: Always quote SET values and validate input: SET "var=safe value". Context: SET command value should be quoted for safety
(SEC002)
[error] 45-45: Command injection via variable substitution. Explanation: Variables containing user input used in commands may allow code injection. Recommendation: Validate and sanitize variables before use in command execution. Context: Variable used with shell operators may allow injection
(SEC013)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/resources/repos/kotlin/test_repo/gradlew.bat` around lines 1 - 94, The
gradlew.bat file uses LF-only line endings which can break Windows batch parsing
(affecting labels like :execute, :fail, :mainEnd); convert gradlew.bat to CRLF
line endings, commit the updated file, and add or update .gitattributes with an
entry such as "*.bat text eol=crlf" so future commits preserve CRLF for .bat
files.
```python
def broken_factory() -> User:
    return missing_user
```
Preserve intentional diagnostics, but suppress Ruff F821 explicitly.
These undefined names are intentional for diagnostics fixtures, but they currently violate Ruff and may break lint-gated CI. Add targeted # noqa: F821 comments so intent is explicit and tooling stays green.
Proposed fix
def broken_factory() -> User:
- return missing_user
+ return missing_user # noqa: F821 - intentionally undefined for diagnostics fixture
@@
def broken_consumer() -> None:
created_user = broken_factory()
print(created_user)
- print(undefined_name)
+ print(undefined_name) # noqa: F821 - intentionally undefined for diagnostics fixture

As per coding guidelines: **/*.py: Use strict typing with mypy and format code with ruff.
Also applies to: 11-11
🧰 Tools
🪛 Ruff (0.15.7)
[error] 5-5: Undefined name missing_user
(F821)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/resources/repos/python/test_repo/test_repo/diagnostics_sample.py` at
line 5, The undefined names in this diagnostics fixture (e.g., the identifier
`missing_user` returned on the line containing "return missing_user" and the
other undefined name at line 11) are intentional, so suppress Ruff F821
explicitly by appending a targeted comment `# noqa: F821` to those lines (e.g.,
change `return missing_user` to `return missing_user # noqa: F821` and add the
same `# noqa: F821` comment to the other undefined identifier on the indicated
line); do not change any other code or remove the intentionally undefined names.
```python
def to_pytest_param(self, *marks: MarkDecorator | Mark, id: str) -> object:
    return pytest.param(self.language, self, marks=[*get_pytest_markers(self.language), *marks], id=id)
```
Rename the id parameter to clear Ruff A002.
id shadows the builtin here, and Ruff already reports that as A002. Renaming it to something like case_id avoids a lint failure without changing the helper's behavior. As per coding guidelines, **/*.py: Use strict typing with mypy and format code with ruff.
🧰 Tools
🪛 Ruff (0.15.7)
[error] 49-49: Function argument id is shadowing a Python builtin
(A002)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/serena/test_serena_agent.py` around lines 49 - 50, The parameter name
`id` in the to_pytest_param method shadows the builtin and triggers Ruff A002;
rename the parameter to something like `case_id` in the function signature and
update its use in the return expression (function: to_pytest_param, references:
pytest.param(..., id=...)) so the behavior stays the same but the builtin is not
shadowed; run ruff/mypy formatting to ensure type hints and imports remain
correct after renaming.
```python
if language_has_verified_implementation_support(Language.GO):

    @pytest.mark.parametrize("language_server", [Language.GO], indirect=True)
    def test_find_implementations(self, language_server: SolidLanguageServer) -> None:
        repo_path = get_repo_path(Language.GO)
        pos = find_identifier_position(repo_path / "main.go", "FormatGreeting")
        assert pos is not None, "Could not find Greeter.FormatGreeting in fixture"

        implementations = language_server.request_implementation("main.go", *pos)
        assert implementations, "Expected at least one implementation of Greeter.FormatGreeting"
        assert any("main.go" in implementation.get("relativePath", "") for implementation in implementations), (
            f"Expected ConsoleGreeter.FormatGreeting in implementations, got: {implementations}"
        )

    @pytest.mark.parametrize("language_server", [Language.GO], indirect=True)
    def test_request_implementing_symbols(self, language_server: SolidLanguageServer) -> None:
        repo_path = get_repo_path(Language.GO)
        pos = find_identifier_position(repo_path / "main.go", "FormatGreeting")
        assert pos is not None, "Could not find Greeter.FormatGreeting in fixture"

        implementing_symbols = language_server.request_implementing_symbols("main.go", *pos)
        assert implementing_symbols, "Expected implementing symbols for Greeter.FormatGreeting"
        assert any(
            symbol.get("name") == "(ConsoleGreeter).FormatGreeting" and "main.go" in symbol["location"].get("relativePath", "")
            for symbol in implementing_symbols
        ), f"Expected ConsoleGreeter.FormatGreeting symbol, got: {implementing_symbols}"
```
Class-level if statement for conditional test definition.
Same pattern issue as in the Rust tests. Defining test methods inside a class-level if block works but is an anti-pattern. Consider using pytest.mark.skipif for consistency and clarity:
Suggested refactor using pytest.mark.skipif
```diff
-if language_has_verified_implementation_support(Language.GO):
-
-    @pytest.mark.parametrize("language_server", [Language.GO], indirect=True)
-    def test_find_implementations(self, language_server: SolidLanguageServer) -> None:
+@pytest.mark.skipif(
+    not language_has_verified_implementation_support(Language.GO),
+    reason="Go implementation support not verified"
+)
+@pytest.mark.parametrize("language_server", [Language.GO], indirect=True)
+def test_find_implementations(self, language_server: SolidLanguageServer) -> None:
     repo_path = get_repo_path(Language.GO)
     pos = find_identifier_position(repo_path / "main.go", "FormatGreeting")
     # ... rest of test

-@pytest.mark.parametrize("language_server", [Language.GO], indirect=True)
-def test_request_implementing_symbols(self, language_server: SolidLanguageServer) -> None:
+@pytest.mark.skipif(
+    not language_has_verified_implementation_support(Language.GO),
+    reason="Go implementation support not verified"
+)
+@pytest.mark.parametrize("language_server", [Language.GO], indirect=True)
+def test_request_implementing_symbols(self, language_server: SolidLanguageServer) -> None:
     # ... rest of test
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/solidlsp/go/test_go_basic.py` around lines 35 - 60, The tests
test_find_implementations and test_request_implementing_symbols are being
defined inside a class-level if block using
language_has_verified_implementation_support(Language.GO); instead, remove the
conditional block and apply pytest.mark.skipif to each test (or the whole
module) so tests are registered but skipped when support is unavailable—for
example decorate the test functions test_find_implementations and
test_request_implementing_symbols with `@pytest.mark.skipif`(not
language_has_verified_implementation_support(Language.GO), reason="GO
implementation not available") while keeping the existing parametrize and
function signatures intact.
3c41536 to 4d55c51 (Compare)
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/solidlsp/language_servers/lua_ls.py (1)
52-75: ⚠️ Potential issue | 🟠 Major
The configured Lua LS version can be silently ignored.
`_get_lua_ls_path()` accepts any existing `lua-language-server` before considering `lua_language_server_version`, and the fixed `.../lua` install dir does not record which version is inside it. On machines with a system install or an older cached download, the new per-project override never takes effect.
🛠️ One way to make the override real
```diff
 @staticmethod
 def _get_lua_ls_path(solidlsp_settings: SolidLSPSettings | None = None) -> str | None:
     """Get the path to lua-language-server executable."""
-    # First check if it's in PATH
-    lua_ls = shutil.which("lua-language-server")
-    if lua_ls:
-        return lua_ls
+    requested_version: str | None = None
+    if solidlsp_settings is not None:
+        lua_settings = solidlsp_settings.get_ls_specific_settings(Language.LUA)
+        requested_version = lua_settings.get("lua_language_server_version")
+
+    # Only trust arbitrary system installs when no explicit version was requested.
+    if requested_version is None:
+        lua_ls = shutil.which("lua-language-server")
+        if lua_ls:
+            return lua_ls
@@
-    install_dir = Path(LuaLanguageServer.ls_resources_dir(solidlsp_settings)) / "lua"
+    install_dir = Path(LuaLanguageServer.ls_resources_dir(solidlsp_settings)) / "lua" / lua_ls_version
```

Also applies to: 93-95, 123-124, 150-160
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/lua_ls.py` around lines 52 - 75, The function _get_lua_ls_path currently returns any system-installed lua-language-server (via shutil.which) before honoring a configured lua_language_server_version, so project override can be ignored; update _get_lua_ls_path (and the other similar lookup spots around LuaLanguageServer.ls_resources_dir) to first check if solidlsp_settings and solidlsp_settings.lua_language_server_version are set, then build and prefer version-specific resource paths (e.g., Path(LuaLanguageServer.ls_resources_dir(solidlsp_settings)) / "lua" / <version> / "bin" / "lua-language-server" and the ".exe" variant), returning that path if executable, and only if not found fall back to the existing possible_paths and shutil.which lookup; ensure the same version-first logic is applied to the other lookup sites referenced (the blocks that construct ls_resource_dir and possible_paths).
♻️ Duplicate comments (10)
src/solidlsp/ls_utils.py (2)
239-246: ⚠️ Potential issue | 🟠 Major
Redirect validation occurs after redirects have been followed.
The
`requests.get()` at line 241 uses default `allow_redirects=True`, so the redirect chain completes before the validation at line 246 executes. An attacker could initiate from an approved host and redirect to an unapproved destination; the download will have already occurred. This was flagged in a previous review. Use
`allow_redirects=False` and manually validate each redirect target before following:
🔒 Proposed fix to validate redirects before following
```diff
-        response = requests.get(url, stream=True, timeout=60)
-        if response.status_code != 200:
-            log.error(f"Error downloading file '{url}': {response.status_code} {response.text}")
-            raise SolidLSPException("Error downloading file.")
-
-        FileUtils._validate_download_host(response.url, allowed_hosts)
+        current_url = url
+        max_redirects = 10
+        for _ in range(max_redirects):
+            FileUtils._validate_download_host(current_url, allowed_hosts)
+            response = requests.get(current_url, stream=True, timeout=60, allow_redirects=False)
+            if response.status_code in (301, 302, 303, 307, 308):
+                current_url = response.headers.get("Location", "")
+                response.close()
+                if not current_url:
+                    raise SolidLSPException("Redirect without Location header")
+                continue
+            if response.status_code != 200:
+                log.error(f"Error downloading file '{url}': {response.status_code} {response.text}")
+                raise SolidLSPException("Error downloading file.")
+            break
+        else:
+            raise SolidLSPException("Too many redirects")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/ls_utils.py` around lines 239 - 246, The current requests.get(...) in ls_utils (where response is assigned and FileUtils._validate_download_host(...) is called) follows redirects before host validation; change the download logic to call requests.get(..., allow_redirects=False, timeout=60) then, in a loop up to a safe max_redirects, inspect response.headers.get("location") and validate each redirect target with FileUtils._validate_download_host(...) before issuing the next requests.get(...) for that location (again with allow_redirects=False), finally perform the last GET to retrieve content after all intermediate redirect hosts are validated; ensure error handling for missing Location headers, non-200 terminal responses, and propagation of the same SolidLSPException behavior used currently.
421-425: ⚠️ Potential issue | 🔴 Critical
Tar extraction remains vulnerable to symlink/hardlink traversal.
The validation at line 423 only checks
`tar_member.name` but doesn't reject symlink or hardlink members. When `extractall()` is called, it will materialize these links, potentially allowing path traversal via `tar_member.linkname`. This was flagged in a previous review.
Reject non-regular tar members (symlinks, hardlinks, devices) before extraction:
🔒 Proposed fix to filter dangerous tar members
🔒 Proposed fix to filter dangerous tar members
with tarfile.open(archive_path, tar_mode) as tar_ref: for tar_member in tar_ref.getmembers(): + # Reject symlinks, hardlinks, and device nodes to prevent traversal attacks + if tar_member.issym() or tar_member.islnk(): + raise SolidLSPException(f"Unsafe archive member '{tar_member.name}': symlinks and hardlinks are not allowed") + if tar_member.isdev() or tar_member.isfifo(): + raise SolidLSPException(f"Unsafe archive member '{tar_member.name}': device nodes and FIFOs are not allowed") FileUtils._validate_extraction_path(tar_member.name, target_path) - tar_ref.extractall(target_path) + # Extract only regular files and directories individually + for tar_member in tar_ref.getmembers(): + if tar_member.isfile() or tar_member.isdir(): + tar_ref.extract(tar_member, target_path)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/ls_utils.py` around lines 421 - 425, The tar extraction currently only validates tar_member.name via FileUtils._validate_extraction_path but still passes symlinks/hardlinks/devices into tar_ref.extractall, leaving a traversal risk; update the loop that iterates tar_ref.getmembers() to explicitly reject any non-regular members (use TarInfo.isreg() or check TarInfo.issym()/islnk()/isdev()/ischr()/isblk()/isfifo()) and raise/skip when such members are found (reject tar_member when issym/islnk/etc.), and also ensure any member with a non-empty linkname is considered dangerous; perform this filtration before calling tar_ref.extractall(target_path) so only regular files and directories are extracted (reference: FileUtils._validate_extraction_path, tar_ref.getmembers, tar_ref.extractall).docs/02-usage/001_features.md (1)
17-17: ⚠️ Potential issue | 🟡 Minor

Fix typo: "qujuality" should be "quality".
```diff
- Tool results are compact JSON, keeping token usage low and output qujuality high.
+ Tool results are compact JSON, keeping token usage low and output quality high.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/02-usage/001_features.md` at line 17, Replace the misspelled word "qujuality" in the sentence "Tool results are compact JSON, keeping token usage low and output qujuality high." with the correct word "quality" so the line reads "Tool results are compact JSON, keeping token usage low and output quality high."; update the exact string shown in the doc to fix the typo.

test/diagnostics_cases.py (1)
24-29: ⚠️ Potential issue | 🟡 Minor

Rename the helper's `id` parameter to clear Ruff A002.

Line 27 shadows the builtin `id`, which Ruff flags as A002. Renaming it to something like `case_id` avoids a lint failure without changing the helper's behavior. As per coding guidelines, `**/*.py`: Use strict typing with mypy and format code with ruff.

Suggested fix
```diff
 def diagnostic_case_param(
     case: DiagnosticCase,
     *marks: MarkDecorator | Mark,
-    id: str,
+    case_id: str,
 ):
-    return pytest.param(case.language, case, marks=[*get_pytest_markers(case.language), *marks], id=id)
+    return pytest.param(case.language, case, marks=[*get_pytest_markers(case.language), *marks], id=case_id)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/diagnostics_cases.py` around lines 24 - 29, The parameter name `id` in the helper function `diagnostic_case_param` shadows the built-in and triggers Ruff A002; rename the parameter to `case_id` (or similar) in the `diagnostic_case_param` signature and replace its usage inside the function (the `id` argument passed to `pytest.param`) and update any call sites that pass a named `id=` to use `case_id=` so the behavior remains identical but the lint warning is resolved; ensure imports/annotations remain unchanged.

docs/02-usage/070_security.md (1)
28-41: ⚠️ Potential issue | 🟠 Major

Scope the checksum guarantee to Serena's bundled versions.
This section reads as unconditional, but some override paths in this PR intentionally omit `expected_sha256` for non-default language-server versions. As written, users will assume every configured version is hash-verified. Please narrow this to Serena's bundled/pinned defaults and call out that custom version overrides may have weaker checksum guarantees.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/02-usage/070_security.md` around lines 28 - 41, Update the Security docs to scope the checksum guarantee to Serena's bundled/pinned defaults: modify the "Integrity verification" bullet and the subsequent paragraph so they state that SHA256 checks are enforced for Serena's default/pinned downloads (the bundled versions) and that custom or overridden language-server versions which may omit the expected_sha256 field are not guaranteed to be hash-verified; specifically reference the "Integrity verification" bullet text and the term expected_sha256 so readers know which override paths may have weaker checksum guarantees.

src/serena/tools/tools_base.py (1)
34-43: ⚠️ Potential issue | 🟠 Major

Use a stable identity for before/after diagnostic diffing.
Because Lines 418-423 key the comparison on this dataclass, including exact start/end positions here will classify any pre-existing diagnostic that merely shifts after an insertion/deletion as "new". Keep the identity to stable fields such as message, severity, code/source, etc., and carry the range separately for display.
Also applies to: 418-423
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/serena/tools/tools_base.py` around lines 34 - 43, The DiagnosticIdentity dataclass currently includes exact start/end positions which makes before/after diffing unstable; change DiagnosticIdentity to only contain stable fields (message, severity, code_repr, source) and remove start_line/start_character/end_line/end_character from it, then carry the range separately (e.g., add a DiagnosticRange or keep range on the enclosing Diagnostic object) so the diffing logic that keys comparisons on DiagnosticIdentity uses only stable fields; update any code that constructs DiagnosticIdentity (and the comparison code that previously relied on it) to supply the range separately for display while using the reduced DiagnosticIdentity for equality/hash.

src/solidlsp/language_servers/pascal_server.py (2)
620-624: ⚠️ Potential issue | 🟠 Major

Don't store the selected `pasls_version` on class state.

Lines 622-624 make the effective version and release URLs process-global. If two Pascal workspaces initialize with different `pasls_version` values, one instance can end up querying or downloading against the other instance's URLs. Keep the resolved version/URLs local to this setup flow or move them onto instance state instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/pascal_server.py` around lines 620 - 624, The code assigns the resolved pasls_version and derived URLs onto class state (cls.PASLS_VERSION, cls.PASLS_RELEASES_URL, cls.PASLS_API_URL) which makes them process-global; instead keep the resolved value and URLs local to this setup flow (use the pasls_version local variable and local release/api URL variables) or store them on the server instance (e.g., as instance attributes created during initialization) so different Pascal workspaces using different pasls_version values do not collide; update uses that currently reference the class attributes to read from the local variables or the instance attributes (referencing pascal_settings, pasls_version, PASLS_VERSION, PASLS_RELEASES_URL, PASLS_API_URL, and Language.PASCAL to locate the code).
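The failure mode is easy to reproduce with a toy version of the pattern; the `Server` class below is hypothetical, not the actual pascal_server code:

```python
class Server:
    VERSION = "0.2.0"  # bundled default, shared by every instance

    @classmethod
    def setup_bad(cls, version: str) -> None:
        # Anti-pattern: writing to class state makes the override process-global.
        cls.VERSION = version

    def setup_good(self, version: str) -> None:
        # Instance state keeps each workspace's resolved version isolated.
        self.version = version


a, b = Server(), Server()
Server.setup_bad("0.3.0")
print(a.VERSION, b.VERSION)  # both workspaces now see the override

a.setup_good("0.3.0")
b.setup_good("0.2.0")
print(a.version, b.version)  # each workspace keeps its own version
```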
533-545: ⚠️ Potential issue | 🟠 Major

Keep bundled hashes tied to the bundled release only.

These `RuntimeDependency` entries always carry the bundled v0.2.0 SHA256s, even when `pasls_version` rewrites the download URLs to a different release. Combined with Lines 533-545, a custom version can either be checked against the wrong hash when `_get_checksums()` fails, or skip verification entirely when the checksums file is present but missing that asset. Clear `sha256` for non-bundled versions and only use the release-provided checksums on override installs.

Also applies to: 650-686
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/pascal_server.py` around lines 533 - 545, The code currently uses RuntimeDependency.sha256 (bundled v0.2.0 hashes) even when pasls_version rewrites download URLs, causing wrong or skipped checksum validation; change the logic in the installer flow (where expected_sha256 is computed using checksums.get(archive_filename) if checksums else dep.sha256, and in the similar block around the 650-686 region) to clear or ignore the bundled dep.sha256 whenever pasls_version is overridden (i.e., when pasls_version is set to a non-bundled release), so that expected_sha256 comes only from the release-provided _get_checksums() result (or None if not present); update the code that builds RuntimeDependency (or before verification) to set dep.sha256 = None when pasls_version != bundled_version and ensure cls._verify_checksum() is only called when expected_sha256 is non-None.

src/solidlsp/language_servers/clojure_lsp.py (1)
139-149: ⚠️ Potential issue | 🟠 Major

Make `clojure_lsp_version` invalidate the cached executable.

Line 143 still resolves a version-agnostic binary path under `self._ls_resources_dir`, so once any `clojure-lsp` binary exists this branch skips reinstalling and keeps using it even after `clojure_lsp_version` changes. Use a versioned install directory or persist/check the installed version before reusing the cache.

Suggested fix
```diff
         clojure_lsp_version = self._custom_settings.get("clojure_lsp_version", ClojureLSP.CLOJURE_LSP_VERSION)
         deps = ClojureLSP._runtime_dependencies(clojure_lsp_version)
+        versioned_resources_dir = os.path.join(self._ls_resources_dir, clojure_lsp_version)
         dependency = deps.get_single_dep_for_current_platform()
-        clojurelsp_executable_path = deps.binary_path(self._ls_resources_dir)
+        clojurelsp_executable_path = deps.binary_path(versioned_resources_dir)
         if not os.path.exists(clojurelsp_executable_path):
             log.info(
-                f"Downloading and extracting clojure-lsp from {dependency.url} to {self._ls_resources_dir}",
+                f"Downloading and extracting clojure-lsp from {dependency.url} to {versioned_resources_dir}",
             )
-            deps.install(self._ls_resources_dir)
+            deps.install(versioned_resources_dir)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/clojure_lsp.py` around lines 139 - 149, The code uses deps.binary_path(self._ls_resources_dir) which is version-agnostic so changing clojure_lsp_version doesn't invalidate the cached executable; update the logic in the clojure_lsp startup path (where clojure_lsp_version, deps = ClojureLSP._runtime_dependencies(...), dependency = deps.get_single_dep_for_current_platform(), clojurelsp_executable_path = deps.binary_path(self._ls_resources_dir) and deps.install(self._ls_resources_dir) are used) to either compute a versioned install directory (e.g. join self._ls_resources_dir with clojure_lsp_version) or read/write a small version marker next to the binary and verify it matches clojure_lsp_version before reusing; if the version differs, call deps.install into the versioned path (or overwrite after removing the old) and then recompute clojurelsp_executable_path so the check and subsequent reuse are tied to the specific clojure_lsp_version.

src/solidlsp/language_servers/ty_server.py (1)
4-7: ⚠️ Potential issue | 🔴 Critical

Use `uv tool run`, not `uv x`, for the `uv` fallback.

The official uv CLI reference exposes `uv tool run`, and the tools docs describe `uvx` as the alias for that command; they do not expose an `x` subcommand on `uv`. This branch is therefore likely to fail on machines that have `uv` installed but no separate `uvx` shim. (docs.astral.sh)

Suggested fix
```diff
-    - ty_version: Override the pinned ``ty`` version used with ``uvx`` / ``uv x``
+    - ty_version: Override the pinned ``ty`` version used with ``uvx`` / ``uv tool run``
       (default: the bundled Serena version).
 ...
-        return [uv_path, "x", "--from", f"ty=={ty_version}", "ty", "server"]
+        return [uv_path, "tool", "run", "--from", f"ty=={ty_version}", "ty", "server"]
```

Also applies to: 63-66
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/ty_server.py` around lines 4 - 7, The fallback command for invoking the bundled/pinned ty tool uses the wrong uv subcommand; update any occurrences that build a command using "uv x" or "uvx" to use the documented "uv tool run" form instead (but keep supporting the older "uvx" alias if present). Locate where ls_specific_settings["python_ty"], ls_path or ty_version are used to construct the subprocess invocation (e.g., in the function/method that starts the ty server or builds the command) and replace "uv x" with "uv tool run" in that command string/args so machines with only the canonical uv CLI will succeed. Ensure argument quoting/escaping and any existing fallback logic remain intact.
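As a sketch, the corrected fallback command can be built like this; the helper function is hypothetical and the version pin is illustrative:

```python
def build_ty_server_command(uv_path: str, ty_version: str) -> list[str]:
    # `uv tool run` is the documented long form; `uvx` is merely its alias,
    # so there is no `uv x` subcommand to fall back to.
    return [uv_path, "tool", "run", "--from", f"ty=={ty_version}", "ty", "server"]


print(build_ty_server_command("uv", "0.0.1"))
```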
🧹 Nitpick comments (8)
test/resources/repos/kotlin/test_repo/gradlew.bat (1)
58-58: Use backslash for Windows path separator.

The path uses a forward slash which is inconsistent with Windows conventions and may cause issues in some edge cases.
Suggested fix
```diff
-set JAVA_EXE=%JAVA_HOME%/bin/java.exe
+set JAVA_EXE=%JAVA_HOME%\bin\java.exe
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/resources/repos/kotlin/test_repo/gradlew.bat` at line 58, In gradlew.bat update the JAVA_EXE assignment to use the Windows path separator: change the JAVA_EXE variable setting (the line assigning JAVA_EXE in gradlew.bat) to construct the path with backslashes instead of a forward slash so it follows Windows conventions and avoids potential path issues.

test/resources/repos/typescript/test_repo/index.ts (2)
23-25: Consider documenting intent for `unusedStandaloneFunction` at Line 23.

Since it's intentionally unreferenced, a brief comment can prevent accidental cleanup in future refactors.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/resources/repos/typescript/test_repo/index.ts` around lines 23 - 25, The function unusedStandaloneFunction is intentionally unreferenced; add a clear inline comment or JSDoc above the export explaining its purpose (e.g., preserved for tests, API contract, or future use) so it isn't removed by future cleanup or refactoring. Locate the exported function unusedStandaloneFunction and prepend a short explanatory comment that states why it must remain, and optionally include an `@internal` or `@deprecated` tag if appropriate to clarify intent to maintainers.
1-1: Use a type-only import for `Greeter`.

`Greeter` is only used as a type annotation (Line 17) and not instantiated, so using a type-only import reduces runtime imports and improves clarity.

♻️ Proposed refactor
```diff
-import { ConsoleGreeter, Greeter } from "./formatters";
+import { ConsoleGreeter, type Greeter } from "./formatters";
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/resources/repos/typescript/test_repo/index.ts` at line 1, Replace the value import of Greeter with a type-only import: keep the runtime import for ConsoleGreeter but import Greeter using a type import (e.g., import type { Greeter } from "./formatters") since Greeter is only used as a type annotation; update the import statement(s) referencing ConsoleGreeter and Greeter accordingly so ConsoleGreeter remains a value import and Greeter is imported only for types.

src/serena/tools/jetbrains_tools.py (1)
152-152: Inconsistent naming: class should be `JetBrainsInlineSymbolTool`.

All other tool classes in this file follow the `JetBrains<Action>Tool` naming pattern (e.g., `JetBrainsMoveSymbolTool`, `JetBrainsSafeDeleteTool`). This class is the only one missing the `Tool` suffix.

Suggested fix
```diff
-class JetBrainsInlineSymbol(Tool, ToolMarkerSymbolicEdit, ToolMarkerOptional):
+class JetBrainsInlineSymbolTool(Tool, ToolMarkerSymbolicEdit, ToolMarkerOptional):
```

Note: This will require updating imports and references in other files (e.g., `scripts/demo_run_tools.py`, tool registries, and configuration files).
Verify each finding against the current code and only fix it if needed. In `@src/serena/tools/jetbrains_tools.py` at line 152, Rename the class JetBrainsInlineSymbol to JetBrainsInlineSymbolTool to match the existing JetBrains<Action>Tool naming convention; update the class declaration (keeping its current base classes Tool, ToolMarkerSymbolicEdit, ToolMarkerOptional) and then update all references and imports that use JetBrainsInlineSymbol (for example in scripts/demo_run_tools.py, any tool registries, and configuration files) to the new JetBrainsInlineSymbolTool identifier so imports and registrations remain consistent.

src/solidlsp/language_servers/fsharp_language_server.py (1)
75-84: Unused `RuntimeDependencyCollection` instance.

This `RuntimeDependencyCollection` is instantiated but never used. The actual installation occurs via the `subprocess.run` call at line 104. Either remove this dead code or wire it through the `RuntimeDependencyCollection.install()` pattern used by other language servers to benefit from the verified download infrastructure (SHA256 + allowed_hosts).

♻️ Proposed fix: Remove dead code
```diff
-        RuntimeDependencyCollection(
-            [
-                RuntimeDependency(
-                    id="fsautocomplete",
-                    description="FsAutoComplete (Ionide F# Language Server)",
-                    command=f"dotnet tool install --tool-path ./ fsautocomplete --version {fsautocomplete_version}",
-                    platform_id="any",
-                ),
-            ]
-        )
-
         # Install FsAutoComplete if not already installed
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/fsharp_language_server.py` around lines 75 - 84, The RuntimeDependencyCollection/RuntimeDependency instantiation for fsautocomplete is dead code and should be removed or hooked into the verified-install flow; either delete the RuntimeDependencyCollection(...) block entirely, or create a variable (e.g., deps = RuntimeDependencyCollection([...])) and call deps.install(...) (matching the pattern used by other language servers) so the fsautocomplete_version dependency is installed via the verified SHA256/allowed_hosts mechanism rather than the current subprocess.run call; update or remove any duplicate subprocess.run installation accordingly and keep references to RuntimeDependency, RuntimeDependencyCollection, and fsautocomplete_version to locate the change.

test/solidlsp/java/test_java_basic.py (1)
59-84: Class-level `if` block for conditional test definition.

Same pattern issue as in the Go tests. Defining test methods inside a class-level `if` block works but is an anti-pattern. Consider using `pytest.mark.skipif` for consistency:

```python
@pytest.mark.skipif(
    not language_has_verified_implementation_support(Language.JAVA), reason="Java implementation support not verified"
)
@pytest.mark.parametrize("language_server", [Language.JAVA], indirect=True)
def test_find_implementations(self, language_server: SolidLanguageServer) -> None:
    # ...
```
Verify each finding against the current code and only fix it if needed. In `@test/solidlsp/java/test_java_basic.py` around lines 59 - 84, The tests are conditionally defined inside a class-level if using language_has_verified_implementation_support(Language.JAVA); instead, remove that outer if and mark the tests with pytest.mark.skipif so they are defined but skipped when support is absent—apply `@pytest.mark.skipif`(not language_has_verified_implementation_support(Language.JAVA), reason="Java implementation support not verified") to the test functions (test_find_implementations and test_request_implementing_symbols) (or the containing test class) and keep the existing pytest.mark.parametrize for Language.JAVA.

src/solidlsp/language_servers/luau_lsp.py (1)
163-174: Auxiliary file downloads lack SHA256 verification.

The `_download_auxiliary_file` method passes `allowed_hosts=LUAU_DOCS_ALLOWED_HOSTS` but no `expected_sha256`. This means Luau docs JSON files have host validation but no integrity checking.

This is likely acceptable for frequently-updated documentation files where pinning hashes would be impractical, but worth noting for security-conscious deployments.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/luau_lsp.py` around lines 163 - 174, The download helper _download_auxiliary_file currently calls FileUtils.download_file_verified with allowed_hosts=LUAU_DOCS_ALLOWED_HOSTS but no integrity check; update _download_auxiliary_file to accept or compute an expected SHA256 (e.g., add an optional expected_sha256: str | None parameter) and pass expected_sha256 to FileUtils.download_file_verified so the call becomes FileUtils.download_file_verified(url, str(path), allowed_hosts=LUAU_DOCS_ALLOWED_HOSTS, expected_sha256=expected_sha256); also update any callers of _download_auxiliary_file to supply a hash when available (or explicitly pass None) so integrity checking is enabled when a known SHA256 is provided.

test/solidlsp/csharp/test_csharp_basic.py (1)
178-203: Class-level `if` block for conditional test definition.

Same pattern issue as in Go and Java tests. Consider using `pytest.mark.skipif` for consistency and better test discoverability.
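A runnable sketch of the decorator form; the `Language` class and the support-check helper below are stubs standing in for the project's real ones:

```python
import pytest


# Stubs for the project's helpers so the pattern runs standalone.
class Language:
    CSHARP = "csharp"


def language_has_verified_implementation_support(language: str) -> bool:
    return False  # pretend C# implementation support is not verified


@pytest.mark.skipif(
    not language_has_verified_implementation_support(Language.CSHARP),
    reason="C# implementation support not verified",
)
def test_find_implementations() -> None:
    ...


# The mark is attached to the function itself, so pytest can discover the test
# and skip it at collection time instead of hiding it behind a class-level `if`.
print([mark.name for mark in test_find_implementations.pytestmark])
```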
pytest.mark.skipiffor consistency and better test discoverability.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/solidlsp/csharp/test_csharp_basic.py` around lines 178 - 203, The tests are conditionally defined inside a runtime if (language_has_verified_implementation_support(Language.CSHARP)) which harms pytest discovery; instead annotate the test functions (test_find_implementations and test_request_implementing_symbols) with pytest.mark.skipif using the same predicate (not language_has_verified_implementation_support(Language.CSHARP)) and a brief reason so pytest can always discover the tests but skip them when C# implementation support is unavailable; apply the decorator to both test functions (or use a module-level pytestmark skipif) referencing Language.CSHARP and the language_has_verified_implementation_support helper.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/serena/tools/symbol_tools.py`:
- Around line 438-458: The current validation in _get_unique_captured_span only
rejects regexes with zero groups; change it to require exactly one capturing
group by checking match.re.groups == 1 and returning the existing error message
when it is not equal to 1 (i.e., reject patterns with multiple groups like
"(foo)?(bar)"). Apply the same exact change to the other regex-validation block
later in the file that performs the same group-count check so both locations
enforce an exact single capturing group.
In `@src/solidlsp/language_servers/lua_ls.py`:
- Around line 93-95: The code currently allows a custom lua_ls_version to be
used without an integrity check because expected_sha256 becomes None when
lua_ls_version != LUA_LS_VERSION; update the logic around lua_ls_version,
expected_sha256 and the download/install path in lua_ls.py so that any
non-default lua_ls_version must provide an explicit expected_sha256 (reject or
raise an error if missing), and only allow automatic downloads to proceed when
expected_sha256 is present; touch the retrieval site (lua_settings.get(...),
lua_ls_version) and the later download/verify code paths (where expected_sha256
is read and the archive is executed) to enforce this invariant and fail fast if
a custom version lacks a supplied hash.
In `@src/solidlsp/language_servers/marksman.py`:
- Around line 45-96: The _runtime_dependencies factory currently sets
sha256=None for non-default marksman versions which results in
download_file_verified(...) and _verify_sha256_if_configured() skipping
integrity checks; update _runtime_dependencies (the RuntimeDependency
constructions in marksman.py) to either require/provide platform-specific sha256
values for any non-default version or explicitly reject override versions by
raising an error (e.g., ValueError) when version != DEFAULT_MARKSMAN_VERSION,
ensuring downloads never proceed without an expected_sha256; reference
RuntimeDependency, _runtime_dependencies, download_file_verified, and
_verify_sha256_if_configured when making the change.
- Around line 100-101: The marksman/version check is missing: because
self._ls_resources_dir is version-agnostic, the existing binary at
marksman_executable_path will be reused regardless of marksman_version. Update
the install logic in marksman.py (the code that calls _runtime_dependencies and
computes marksman_executable_path) to record the installed version (e.g., write
a .installed_version file next to the binary) and on startup read and compare
that file against the desired marksman_version; if they differ, remove or
re-download the binary (invoke _runtime_dependencies) so the requested
marksman_version takes effect. Ensure reads/writes reference the same resource
dir (self._ls_resources_dir) and use the existing marksman_executable_path and
marksman_version symbols to locate/version the installation.
In `@src/solidlsp/language_servers/omnisharp.py`:
- Around line 224-233: The RazorOmnisharp cache directory is not version-aware;
modify the creation of razor_omnisharp_ls_dir so it includes the
razor_omnisharp_version (the same way other LS dirs use versioning) by joining
cls.ls_resources_dir(solidlsp_settings) with the RazorOmnisharp folder name plus
the razor_omnisharp_version (or a subfolder named by that version) before
checking os.path.exists and calling
FileUtils.download_and_extract_archive_verified; ensure
runtime_dependencies["RazorOmnisharp"] and
FileUtils.download_and_extract_archive_verified continue to use the same
directory variable so downloads/installations are version-scoped.
- Around line 210-222: The OmniSharp installer currently uses omnisharp_ls_dir =
os.path.join(cls.ls_resources_dir(solidlsp_settings), "OmniSharp") which is not
version-aware, so changing omnisharp_version has no effect; update the directory
key to include the version (e.g. incorporate solidlsp_settings.omnisharp_version
or runtime_dependencies["OmniSharp"]["version"] into the path) when building
omnisharp_ls_dir so each version gets its own folder, then adjust subsequent
references (omnisharp_executable_path, exists check, chmod) to use that
versioned omnisharp_ls_dir.
In `@src/solidlsp/language_servers/taplo_server.py`:
- Around line 132-141: The cache check isn't version-aware: include the chosen
taplo_version when building taplo_dir/taplo_executable so changing the setting
forces a new download. Modify the code that sets taplo_dir and taplo_executable
(currently using taplo_dir, taplo_executable and _get_taplo_download_url) to
incorporate the resolved taplo_version (e.g. join taplo_version into the
directory or filename, ensuring it's sanitized) and then perform the
os.path.exists/os.access check against that versioned path so different
taplo_version values don't reuse an incompatible cached binary.
In `@src/solidlsp/ls.py`:
- Around line 1337-1346: The current logic treats an empty successful pull
result as failure by checking "if not ret:" and so resurrects stale diagnostics;
change that condition to "if response is None:" so the cached fallback (calls to
_wait_for_published_diagnostics and _get_cached_published_published_diagnostics)
only runs when the pull actually failed or returned no response. Locate the
block using variables ret, response, published_diagnostics,
pull_diagnostics_failed and functions _wait_for_published_diagnostics and
_get_cached_published_diagnostics and replace the "if not ret:" guard with "if
response is None:" preserving the subsequent fallback logic.
- Around line 1320-1335: The response handling assumes required fields and has a
wrong assertion message: change the assertion text to say "expected dict"
instead of "expected list", and when building new_item in the loop use
item.get("severity") and item.get("code") (not item["severity"] / item["code"])
so missing optional LSP Diagnostic fields are handled safely; keep the
conditional inclusion of "source" as-is and continue constructing
ls_types.Diagnostic(**new_item) so optional None/missing values are preserved.
In `@test/serena/test_serena_agent.py`:
- Around line 1326-1333: Before calling SafeDeleteSymbol.apply inside the
project_file_modification_context, read and save the original file contents for
case.relative_path; after calling safe_delete_tool.apply (and before the with
block exits), re-read the file and assert its contents equal the saved original
so a blocked delete did not mutate the file. Use the existing symbols
project_file_modification_context, serena_agent.get_tool(SafeDeleteSymbol),
safe_delete_tool.apply and case.relative_path to locate where to add the pre-
and post-apply content checks.
---
Outside diff comments:
In `@src/solidlsp/language_servers/lua_ls.py`:
- Around line 52-75: The function _get_lua_ls_path currently returns any
system-installed lua-language-server (via shutil.which) before honoring a
configured lua_language_server_version, so project override can be ignored;
update _get_lua_ls_path (and the other similar lookup spots around
LuaLanguageServer.ls_resources_dir) to first check if solidlsp_settings and
solidlsp_settings.lua_language_server_version are set, then build and prefer
version-specific resource paths (e.g.,
Path(LuaLanguageServer.ls_resources_dir(solidlsp_settings)) / "lua" / <version>
/ "bin" / "lua-language-server" and the ".exe" variant), returning that path if
executable, and only if not found fall back to the existing possible_paths and
shutil.which lookup; ensure the same version-first logic is applied to the other
lookup sites referenced (the blocks that construct ls_resource_dir and
possible_paths).
---
Duplicate comments:
In `@docs/02-usage/001_features.md`:
- Line 17: Replace the misspelled word "qujuality" in the sentence "Tool results
are compact JSON, keeping token usage low and output qujuality high." with the
correct word "quality" so the line reads "Tool results are compact JSON, keeping
token usage low and output quality high."; update the exact string shown in the
doc to fix the typo.
In `@docs/02-usage/070_security.md`:
- Around line 28-41: Update the Security docs to scope the checksum guarantee to
Serena's bundled/pinned defaults: modify the "Integrity verification" bullet and
the subsequent paragraph so they state that SHA256 checks are enforced for
Serena's default/pinned downloads (the bundled versions) and that custom or
overridden language-server versions which may omit the expected_sha256 field are
not guaranteed to be hash-verified; specifically reference the "Integrity
verification" bullet text and the term expected_sha256 so readers know which
override paths may have weaker checksum guarantees.
In `@src/serena/tools/tools_base.py`:
- Around line 34-43: The DiagnosticIdentity dataclass currently includes exact
start/end positions which makes before/after diffing unstable; change
DiagnosticIdentity to only contain stable fields (message, severity, code_repr,
source) and remove start_line/start_character/end_line/end_character from it,
then carry the range separately (e.g., add a DiagnosticRange or keep range on
the enclosing Diagnostic object) so the diffing logic that keys comparisons on
DiagnosticIdentity uses only stable fields; update any code that constructs
DiagnosticIdentity (and the comparison code that previously relied on it) to
supply the range separately for display while using the reduced
DiagnosticIdentity for equality/hash.
In `@src/solidlsp/language_servers/clojure_lsp.py`:
- Around line 139-149: The code uses deps.binary_path(self._ls_resources_dir)
which is version-agnostic so changing clojure_lsp_version doesn't invalidate the
cached executable; update the logic in the clojure_lsp startup path (where
clojure_lsp_version, deps = ClojureLSP._runtime_dependencies(...), dependency =
deps.get_single_dep_for_current_platform(), clojurelsp_executable_path =
deps.binary_path(self._ls_resources_dir) and
deps.install(self._ls_resources_dir) are used) to either compute a versioned
install directory (e.g. join self._ls_resources_dir with clojure_lsp_version) or
read/write a small version marker next to the binary and verify it matches
clojure_lsp_version before reusing; if the version differs, call deps.install
into the versioned path (or overwrite after removing the old) and then recompute
clojurelsp_executable_path so the check and subsequent reuse are tied to the
specific clojure_lsp_version.
In `@src/solidlsp/language_servers/pascal_server.py`:
- Around line 620-624: The code assigns the resolved pasls_version and derived
URLs onto class state (cls.PASLS_VERSION, cls.PASLS_RELEASES_URL,
cls.PASLS_API_URL) which makes them process-global; instead keep the resolved
value and URLs local to this setup flow (use the pasls_version local variable
and local release/api URL variables) or store them on the server instance (e.g.,
as instance attributes created during initialization) so different Pascal
workspaces using different pasls_version values do not collide; update uses that
currently reference the class attributes to read from the local variables or the
instance attributes (referencing pascal_settings, pasls_version, PASLS_VERSION,
PASLS_RELEASES_URL, PASLS_API_URL, and Language.PASCAL to locate the code).
- Around line 533-545: The code currently uses RuntimeDependency.sha256 (bundled
v0.2.0 hashes) even when pasls_version rewrites download URLs, causing wrong or
skipped checksum validation; change the logic in the installer flow (where
expected_sha256 is computed using checksums.get(archive_filename) if checksums
else dep.sha256, and in the similar block around the 650-686 region) to clear or
ignore the bundled dep.sha256 whenever pasls_version is overridden (i.e., when
pasls_version is set to a non-bundled release), so that expected_sha256 comes
only from the release-provided _get_checksums() result (or None if not present);
update the code that builds RuntimeDependency (or before verification) to set
dep.sha256 = None when pasls_version != bundled_version and ensure
cls._verify_checksum() is only called when expected_sha256 is non-None.
In `@src/solidlsp/language_servers/ty_server.py`:
- Around line 4-7: The fallback command for invoking the bundled/pinned ty tool
uses the wrong uv subcommand; update any occurrences that build a command using
"uv x" or "uvx" to use the documented "uv tool run" form instead (but keep
supporting the older "uvx" alias if present). Locate where
ls_specific_settings["python_ty"], ls_path or ty_version are used to construct
the subprocess invocation (e.g., in the function/method that starts the ty
server or builds the command) and replace "uv x" with "uv tool run" in that
command string/args so machines with only the canonical uv CLI will succeed.
Ensure argument quoting/escaping and any existing fallback logic remain intact.
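A minimal sketch of the fallback described above. The `ty server` subcommand and the `pkg@version` pinning syntax are assumptions about the ty and uv CLIs; `ty_launch_command` is a hypothetical helper, not the real ty_server.py code.

```python
import shutil


def ty_launch_command(ty_version: str) -> list[str]:
    """Prefer a ty already on PATH; otherwise fall back to the canonical
    `uv tool run` form (of which `uvx` is just an alias)."""
    if shutil.which("ty"):
        return ["ty", "server"]
    # assumed: `uv tool run ty@<version> server` launches the pinned LSP
    return ["uv", "tool", "run", f"ty@{ty_version}", "server"]


cmd = ty_launch_command("0.0.1a8")
```

Building the command as a list (not a shell string) sidesteps the quoting/escaping concerns mentioned in the prompt.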
In `@src/solidlsp/ls_utils.py`:
- Around line 239-246: The current requests.get(...) in ls_utils (where response
is assigned and FileUtils._validate_download_host(...) is called) follows
redirects before host validation; change the download logic to call
requests.get(..., allow_redirects=False, timeout=60) then, in a loop up to a
safe max_redirects, inspect response.headers.get("location") and validate each
redirect target with FileUtils._validate_download_host(...) before issuing the
next requests.get(...) for that location (again with allow_redirects=False),
finally perform the last GET to retrieve content after all intermediate redirect
hosts are validated; ensure error handling for missing Location headers, non-200
terminal responses, and propagation of the same SolidLSPException behavior used
currently.
- Around line 421-425: The tar extraction currently only validates
tar_member.name via FileUtils._validate_extraction_path but still passes
symlinks/hardlinks/devices into tar_ref.extractall, leaving a traversal risk;
update the loop that iterates tar_ref.getmembers() to explicitly reject any
non-regular members (use TarInfo.isreg() or check
TarInfo.issym()/islnk()/isdev()/ischr()/isblk()/isfifo()) and raise/skip when
such members are found (reject tar_member when issym/islnk/etc.), and also
ensure any member with a non-empty linkname is considered dangerous; perform
this filtration before calling tar_ref.extractall(target_path) so only regular
files and directories are extracted (reference:
FileUtils._validate_extraction_path, tar_ref.getmembers, tar_ref.extractall).
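The redirect-validation loop described in the first ls_utils item can be sketched like this. The `fetch` callable, `DownloadError`, and the example URLs are stand-ins for illustration — the real code would use `requests.get(..., allow_redirects=False, timeout=60)` and raise `SolidLSPException`.

```python
from urllib.parse import urlparse


class DownloadError(Exception):
    pass


def follow_redirects_validated(url, fetch, allowed_hosts, max_redirects=5):
    """Resolve `url` hop by hop, validating every host against the allowlist
    BEFORE each request, so a redirect cannot escape to an unvetted server.
    `fetch(url)` must return (status_code, location_header_or_None)."""
    for _ in range(max_redirects + 1):
        host = urlparse(url).hostname
        if host not in allowed_hosts:
            raise DownloadError(f"download host not allowed: {host}")
        status, location = fetch(url)
        if status in (301, 302, 303, 307, 308):
            if not location:
                raise DownloadError("redirect response without a Location header")
            url = location
            continue
        if status != 200:
            raise DownloadError(f"unexpected status {status} for {url}")
        return url
    raise DownloadError("too many redirects")


# Simulated hop sequence (illustrative URLs): GitHub redirects to its CDN.
hops = {
    "https://github.com/x/y/releases/a.zip": (302, "https://objects.githubusercontent.com/a.zip"),
    "https://objects.githubusercontent.com/a.zip": (200, None),
}
final_url = follow_redirects_validated(
    "https://github.com/x/y/releases/a.zip",
    lambda u: hops[u],
    allowed_hosts={"github.com", "objects.githubusercontent.com"},
)
try:
    follow_redirects_validated(
        "https://github.com/x/y/releases/a.zip",
        lambda u: hops[u],
        allowed_hosts={"github.com"},  # CDN host missing -> second hop must be rejected
    )
    redirect_blocked = False
except DownloadError:
    redirect_blocked = True
```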
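The tar-member filtration from the second ls_utils item can be sketched with the standard library alone; `safe_members` is a hypothetical helper showing the rejection rules (regular files and directories only, no link targets).

```python
import io
import tarfile


def safe_members(tar: tarfile.TarFile):
    """Yield only plain files and directories; anything with a link target
    (symlink/hardlink) or a special type (device, FIFO) is rejected outright."""
    for member in tar.getmembers():
        if not (member.isreg() or member.isdir()) or member.linkname:
            raise ValueError(f"refusing to extract non-regular member: {member.name}")
        yield member


# Build an in-memory archive with one regular file and one absolute symlink.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as t:
    payload = b"hello"
    regular = tarfile.TarInfo("ok.txt")
    regular.size = len(payload)
    t.addfile(regular, io.BytesIO(payload))
    link = tarfile.TarInfo("evil-link")
    link.type = tarfile.SYMTYPE
    link.linkname = "/etc/passwd"
    t.addfile(link)
buf.seek(0)

with tarfile.open(fileobj=buf) as t:
    try:
        accepted = [m.name for m in safe_members(t)]  # raises on the symlink
        link_rejected = False
    except ValueError:
        link_rejected = True
```

Running the filter over `getmembers()` before (or via `members=` in) `extractall(target_path)` ensures only vetted entries ever reach extraction.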
In `@test/diagnostics_cases.py`:
- Around line 24-29: The parameter name `id` in the helper function
`diagnostic_case_param` shadows the built-in and triggers Ruff A002; rename the
parameter to `case_id` (or similar) in the `diagnostic_case_param` signature and
replace its usage inside the function (the `id` argument passed to
`pytest.param`) and update any call sites that pass a named `id=` to use
`case_id=` so the behavior remains identical but the lint warning is resolved;
ensure imports/annotations remain unchanged.
---
Nitpick comments:
In `@src/serena/tools/jetbrains_tools.py`:
- Line 152: Rename the class JetBrainsInlineSymbol to JetBrainsInlineSymbolTool
to match the existing JetBrains<Action>Tool naming convention; update the class
declaration (keeping its current base classes Tool, ToolMarkerSymbolicEdit,
ToolMarkerOptional) and then update all references and imports that use
JetBrainsInlineSymbol (for example in scripts/demo_run_tools.py, any tool
registries, and configuration files) to the new JetBrainsInlineSymbolTool
identifier so imports and registrations remain consistent.
In `@src/solidlsp/language_servers/fsharp_language_server.py`:
- Around line 75-84: The RuntimeDependencyCollection/RuntimeDependency
instantiation for fsautocomplete is dead code and should be removed or hooked
into the verified-install flow; either delete the
RuntimeDependencyCollection(...) block entirely, or create a variable (e.g.,
deps = RuntimeDependencyCollection([...])) and call deps.install(...) (matching
the pattern used by other language servers) so the fsautocomplete_version
dependency is installed via the verified SHA256/allowed_hosts mechanism rather
than the current subprocess.run call; update or remove any duplicate
subprocess.run installation accordingly and keep references to
RuntimeDependency, RuntimeDependencyCollection, and fsautocomplete_version to
locate the change.
In `@src/solidlsp/language_servers/luau_lsp.py`:
- Around line 163-174: The download helper _download_auxiliary_file currently
calls FileUtils.download_file_verified with
allowed_hosts=LUAU_DOCS_ALLOWED_HOSTS but no integrity check; update
_download_auxiliary_file to accept or compute an expected SHA256 (e.g., add an
optional expected_sha256: str | None parameter) and pass expected_sha256 to
FileUtils.download_file_verified so the call becomes
FileUtils.download_file_verified(url, str(path),
allowed_hosts=LUAU_DOCS_ALLOWED_HOSTS, expected_sha256=expected_sha256); also
update any callers of _download_auxiliary_file to supply a hash when available
(or explicitly pass None) so integrity checking is enabled when a known SHA256
is provided.
In `@test/resources/repos/kotlin/test_repo/gradlew.bat`:
- Line 58: In gradlew.bat update the JAVA_EXE assignment to use the Windows path
separator: change the JAVA_EXE variable setting (the line assigning JAVA_EXE in
gradlew.bat) to construct the path with backslashes instead of a forward slash
so it follows Windows conventions and avoids potential path issues.
In `@test/resources/repos/typescript/test_repo/index.ts`:
- Around line 23-25: The function unusedStandaloneFunction is intentionally
unreferenced; add a clear inline comment or JSDoc above the export explaining
its purpose (e.g., preserved for tests, API contract, or future use) so it isn't
removed by future cleanup or refactoring. Locate the exported function
unusedStandaloneFunction and prepend a short explanatory comment that states why
it must remain, and optionally include an `@internal` or `@deprecated` tag if
appropriate to clarify intent to maintainers.
- Line 1: Replace the value import of Greeter with a type-only import: keep the
runtime import for ConsoleGreeter but import Greeter using a type import (e.g.,
import type { Greeter } from "./formatters") since Greeter is only used as a
type annotation; update the import statement(s) referencing ConsoleGreeter and
Greeter accordingly so ConsoleGreeter remains a value import and Greeter is
imported only for types.
In `@test/solidlsp/csharp/test_csharp_basic.py`:
- Around line 178-203: The tests are conditionally defined inside a runtime if
(language_has_verified_implementation_support(Language.CSHARP)) which harms
pytest discovery; instead annotate the test functions (test_find_implementations
and test_request_implementing_symbols) with pytest.mark.skipif using the same
predicate (not language_has_verified_implementation_support(Language.CSHARP))
and a brief reason so pytest can always discover the tests but skip them when C#
implementation support is unavailable; apply the decorator to both test
functions (or use a module-level pytestmark skipif) referencing Language.CSHARP
and the language_has_verified_implementation_support helper.
In `@test/solidlsp/java/test_java_basic.py`:
- Around line 59-84: The tests are conditionally defined inside a class-level if
using language_has_verified_implementation_support(Language.JAVA); instead,
remove that outer if and mark the tests with pytest.mark.skipif so they are
defined but skipped when support is absent—apply `@pytest.mark.skipif`(not
language_has_verified_implementation_support(Language.JAVA), reason="Java
implementation support not verified") to the test functions
(test_find_implementations and test_request_implementing_symbols) (or the
containing test class) and keep the existing pytest.mark.parametrize for
Language.JAVA.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c8e1110b-e8de-4472-b93e-746abdd540ab
⛔ Files ignored due to path filters (1)
- test/resources/repos/kotlin/test_repo/gradle/wrapper/gradle-wrapper.jar is excluded by !**/*.jar
📒 Files selected for processing (124)
- .serena/project.yml
- docs/02-usage/001_features.md
- docs/02-usage/050_configuration.md
- docs/02-usage/070_security.md
- pyproject.toml
- scripts/demo_diagnostics.py
- scripts/demo_find_defining_symbol.py
- scripts/demo_find_implementing_symbol.py
- scripts/demo_run_tools.py
- src/serena/code_editor.py
- src/serena/jetbrains/jetbrains_plugin_client.py
- src/serena/jetbrains/jetbrains_types.py
- src/serena/resources/config/internal_modes/jetbrains.yml
- src/serena/resources/project.template.yml
- src/serena/symbol.py
- src/serena/tools/file_tools.py
- src/serena/tools/jetbrains_tools.py
- src/serena/tools/symbol_tools.py
- src/serena/tools/tools_base.py
- src/solidlsp/language_servers/al_language_server.py
- src/solidlsp/language_servers/ansible_language_server.py
- src/solidlsp/language_servers/bash_language_server.py
- src/solidlsp/language_servers/clangd_language_server.py
- src/solidlsp/language_servers/clojure_lsp.py
- src/solidlsp/language_servers/common.py
- src/solidlsp/language_servers/csharp_language_server.py
- src/solidlsp/language_servers/dart_language_server.py
- src/solidlsp/language_servers/eclipse_jdtls.py
- src/solidlsp/language_servers/elixir_tools/elixir_tools.py
- src/solidlsp/language_servers/elm_language_server.py
- src/solidlsp/language_servers/fsharp_language_server.py
- src/solidlsp/language_servers/gopls.py
- src/solidlsp/language_servers/groovy_language_server.py
- src/solidlsp/language_servers/hlsl_language_server.py
- src/solidlsp/language_servers/intelephense.py
- src/solidlsp/language_servers/kotlin_language_server.py
- src/solidlsp/language_servers/lua_ls.py
- src/solidlsp/language_servers/luau_lsp.py
- src/solidlsp/language_servers/marksman.py
- src/solidlsp/language_servers/matlab_language_server.py
- src/solidlsp/language_servers/omnisharp.py
- src/solidlsp/language_servers/omnisharp/runtime_dependencies.json
- src/solidlsp/language_servers/pascal_server.py
- src/solidlsp/language_servers/phpactor.py
- src/solidlsp/language_servers/powershell_language_server.py
- src/solidlsp/language_servers/ruby_lsp.py
- src/solidlsp/language_servers/rust_analyzer.py
- src/solidlsp/language_servers/solidity_language_server.py
- src/solidlsp/language_servers/systemverilog_server.py
- src/solidlsp/language_servers/taplo_server.py
- src/solidlsp/language_servers/terraform_ls.py
- src/solidlsp/language_servers/ty_server.py
- src/solidlsp/language_servers/typescript_language_server.py
- src/solidlsp/language_servers/vts_language_server.py
- src/solidlsp/language_servers/vue_language_server.py
- src/solidlsp/language_servers/yaml_language_server.py
- src/solidlsp/ls.py
- src/solidlsp/ls_config.py
- src/solidlsp/ls_process.py
- src/solidlsp/ls_types.py
- src/solidlsp/ls_utils.py
- test/conftest.py
- test/diagnostics_cases.py
- test/resources/repos/clojure/test_repo/src/test_app/diagnostics_sample.clj
- test/resources/repos/cpp/test_repo/compile_commands.json
- test/resources/repos/cpp/test_repo/diagnostics_sample.cpp
- test/resources/repos/csharp/test_repo/DiagnosticsSample.cs
- test/resources/repos/csharp/test_repo/Program.cs
- test/resources/repos/csharp/test_repo/Services/ConsoleGreeter.cs
- test/resources/repos/csharp/test_repo/Services/IGreeter.cs
- test/resources/repos/fsharp/test_repo/DiagnosticsSample.fs
- test/resources/repos/fsharp/test_repo/Formatter.fs
- test/resources/repos/fsharp/test_repo/Program.fs
- test/resources/repos/fsharp/test_repo/Shapes.fs
- test/resources/repos/fsharp/test_repo/TestProject.fsproj
- test/resources/repos/go/test_repo/diagnostics_sample.go
- test/resources/repos/go/test_repo/main.go
- test/resources/repos/java/test_repo/src/main/java/test_repo/ConsoleGreeter.java
- test/resources/repos/java/test_repo/src/main/java/test_repo/DiagnosticsSample.java
- test/resources/repos/java/test_repo/src/main/java/test_repo/Greeter.java
- test/resources/repos/java/test_repo/src/main/java/test_repo/Main.java
- test/resources/repos/kotlin/test_repo/gradle/wrapper/gradle-wrapper.properties
- test/resources/repos/kotlin/test_repo/gradlew
- test/resources/repos/kotlin/test_repo/gradlew.bat
- test/resources/repos/kotlin/test_repo/src/main/kotlin/test_repo/DiagnosticsSample.kt
- test/resources/repos/lean4/test_repo/DiagnosticsSample.lean
- test/resources/repos/lua/test_repo/main.lua
- test/resources/repos/lua/test_repo/src/animals.lua
- test/resources/repos/php/test_repo/diagnostics_sample.php
- test/resources/repos/powershell/test_repo/diagnostics_sample.ps1
- test/resources/repos/python/test_repo/test_repo/diagnostics_sample.py
- test/resources/repos/ruby/test_repo/lib.rb
- test/resources/repos/ruby/test_repo/main.rb
- test/resources/repos/rust/test_repo/src/diagnostics_sample.rs
- test/resources/repos/rust/test_repo/src/lib.rs
- test/resources/repos/rust/test_repo/src/main.rs
- test/resources/repos/typescript/test_repo/.serena/project.yml
- test/resources/repos/typescript/test_repo/diagnostics_sample.ts
- test/resources/repos/typescript/test_repo/formatters.ts
- test/resources/repos/typescript/test_repo/index.ts
- test/serena/__snapshots__/test_symbol_editing.ambr
- test/serena/test_serena_agent.py
- test/serena/test_symbol.py
- test/solidlsp/clojure/__init__.py
- test/solidlsp/csharp/test_csharp_basic.py
- test/solidlsp/csharp/test_csharp_nuget_download.py
- test/solidlsp/dart/test_dart_basic.py
- test/solidlsp/fortran/test_fortran_basic.py
- test/solidlsp/fsharp/test_fsharp_basic.py
- test/solidlsp/go/test_go_basic.py
- test/solidlsp/java/test_java_basic.py
- test/solidlsp/lua/test_lua_basic.py
- test/solidlsp/luau/test_luau_dependency_provider.py
- test/solidlsp/python/test_python_basic.py
- test/solidlsp/python/test_retrieval_with_ignored_dirs.py
- test/solidlsp/python/test_symbol_retrieval.py
- test/solidlsp/ruby/test_ruby_symbol_retrieval.py
- test/solidlsp/rust/test_rust_basic.py
- test/solidlsp/solidity/test_solidity_basic.py
- test/solidlsp/test_defining_symbol_matrix.py
- test/solidlsp/test_diagnostics_matrix.py
- test/solidlsp/test_ls_common.py
- test/solidlsp/typescript/test_typescript_basic.py
- test/solidlsp/util/test_ls_utils.py
✅ Files skipped from review due to trivial changes (30)
- test/resources/repos/java/test_repo/src/main/java/test_repo/Greeter.java
- test/solidlsp/dart/test_dart_basic.py
- test/resources/repos/cpp/test_repo/compile_commands.json
- test/resources/repos/lean4/test_repo/DiagnosticsSample.lean
- test/resources/repos/java/test_repo/src/main/java/test_repo/ConsoleGreeter.java
- test/resources/repos/kotlin/test_repo/gradle/wrapper/gradle-wrapper.properties
- test/resources/repos/typescript/test_repo/diagnostics_sample.ts
- src/serena/resources/project.template.yml
- pyproject.toml
- test/resources/repos/fsharp/test_repo/Shapes.fs
- test/resources/repos/csharp/test_repo/Services/IGreeter.cs
- test/resources/repos/ruby/test_repo/lib.rb
- test/resources/repos/typescript/test_repo/formatters.ts
- test/resources/repos/go/test_repo/diagnostics_sample.go
- test/solidlsp/rust/test_rust_basic.py
- test/solidlsp/fortran/test_fortran_basic.py
- test/resources/repos/csharp/test_repo/Services/ConsoleGreeter.cs
- test/resources/repos/fsharp/test_repo/TestProject.fsproj
- test/resources/repos/fsharp/test_repo/Formatter.fs
- test/resources/repos/fsharp/test_repo/DiagnosticsSample.fs
- test/solidlsp/util/test_ls_utils.py
- test/solidlsp/clojure/__init__.py
- src/serena/jetbrains/jetbrains_types.py
- src/solidlsp/language_servers/omnisharp/runtime_dependencies.json
- test/resources/repos/lua/test_repo/src/animals.lua
- src/solidlsp/language_servers/ansible_language_server.py
- src/serena/code_editor.py
- test/resources/repos/clojure/test_repo/src/test_app/diagnostics_sample.clj
- .serena/project.yml
- scripts/demo_diagnostics.py
🚧 Files skipped from review as they are similar to previous changes (51)
- test/resources/repos/java/test_repo/src/main/java/test_repo/Main.java
- test/resources/repos/rust/test_repo/src/main.rs
- test/resources/repos/csharp/test_repo/Program.cs
- test/resources/repos/ruby/test_repo/main.rb
- src/solidlsp/ls_types.py
- src/solidlsp/language_servers/rust_analyzer.py
- src/solidlsp/language_servers/gopls.py
- test/solidlsp/test_ls_common.py
- test/serena/test_symbol.py
- test/solidlsp/ruby/test_ruby_symbol_retrieval.py
- src/serena/resources/config/internal_modes/jetbrains.yml
- src/solidlsp/language_servers/solidity_language_server.py
- src/solidlsp/language_servers/bash_language_server.py
- test/resources/repos/kotlin/test_repo/src/main/kotlin/test_repo/DiagnosticsSample.kt
- test/solidlsp/luau/test_luau_dependency_provider.py
- test/resources/repos/rust/test_repo/src/diagnostics_sample.rs
- test/resources/repos/csharp/test_repo/DiagnosticsSample.cs
- src/solidlsp/language_servers/vts_language_server.py
- src/solidlsp/language_servers/systemverilog_server.py
- test/resources/repos/powershell/test_repo/diagnostics_sample.ps1
- test/resources/repos/rust/test_repo/src/lib.rs
- test/resources/repos/java/test_repo/src/main/java/test_repo/DiagnosticsSample.java
- test/resources/repos/go/test_repo/main.go
- test/resources/repos/lua/test_repo/main.lua
- src/solidlsp/language_servers/dart_language_server.py
- src/solidlsp/language_servers/typescript_language_server.py
- test/solidlsp/csharp/test_csharp_nuget_download.py
- test/solidlsp/typescript/test_typescript_basic.py
- src/solidlsp/language_servers/hlsl_language_server.py
- test/solidlsp/python/test_symbol_retrieval.py
- src/solidlsp/language_servers/powershell_language_server.py
- src/solidlsp/language_servers/vue_language_server.py
- test/serena/__snapshots__/test_symbol_editing.ambr
- src/solidlsp/language_servers/intelephense.py
- test/solidlsp/python/test_python_basic.py
- test/solidlsp/test_defining_symbol_matrix.py
- src/solidlsp/language_servers/terraform_ls.py
- test/solidlsp/fsharp/test_fsharp_basic.py
- test/solidlsp/solidity/test_solidity_basic.py
- test/solidlsp/python/test_retrieval_with_ignored_dirs.py
- src/solidlsp/language_servers/common.py
- scripts/demo_find_defining_symbol.py
- test/solidlsp/lua/test_lua_basic.py
- src/solidlsp/language_servers/groovy_language_server.py
- src/solidlsp/language_servers/al_language_server.py
- src/solidlsp/language_servers/elm_language_server.py
- src/solidlsp/language_servers/matlab_language_server.py
- test/conftest.py
- src/serena/jetbrains/jetbrains_plugin_client.py
- src/solidlsp/language_servers/eclipse_jdtls.py
- src/serena/symbol.py
| def _get_unique_captured_span(cls, match: re.Match[str], regex: str, search_scope_description: str) -> tuple[int, int] | str: | ||
| if match.re.groups == 0: | ||
| return ( | ||
| f"Error: Regex '{regex}' must contain exactly one capturing group that identifies the symbol usage in " | ||
| f"{search_scope_description}." | ||
| ) | ||
|
|
||
| matched_capture_spans = [span for span in match.regs[1:] if span != (-1, -1)] | ||
| if len(matched_capture_spans) != 1: | ||
| return ( | ||
| f"Error: Regex '{regex}' must produce exactly one matched capture in {search_scope_description}, " | ||
| f"but produced {len(matched_capture_spans)} for match '{cls._format_match_preview(match)}'." | ||
| ) | ||
|
|
||
| capture_start_offset, capture_end_offset = matched_capture_spans[0] | ||
| if capture_start_offset == capture_end_offset: | ||
| return ( | ||
| f"Error: Regex '{regex}' produced an empty capture in {search_scope_description}; " | ||
| "the capture must select the referenced symbol text." | ||
| ) | ||
| return capture_start_offset, capture_end_offset |
Enforce regex.groups == 1, not just > 0.
The tool contract says the regex must contain exactly one capturing group, but the current checks only reject groups == 0. Patterns like (foo)?(bar) currently pass as long as one group happens to match, so invalid inputs can resolve silently.
✅ Tighten the validation
- if match.re.groups == 0:
+ if match.re.groups != 1:
return (
f"Error: Regex '{regex}' must contain exactly one capturing group that identifies the symbol usage in "
f"{search_scope_description}."
)
@@
- if compiled_regex.groups == 0:
+ if compiled_regex.groups != 1:
return (
f"Error: Regex '{regex}' must contain exactly one capturing group that identifies the symbol usage in "
f"{search_scope_description}."
)
Also applies to: 491-495
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/serena/tools/symbol_tools.py` around lines 438 - 458, The current
validation in _get_unique_captured_span only rejects regexes with zero groups;
change it to require exactly one capturing group by checking match.re.groups ==
1 and returning the existing error message when it is not equal to 1 (i.e.,
reject patterns with multiple groups like "(foo)?(bar)"). Apply the same exact
change to the other regex-validation block later in the file that performs the
same group-count check so both locations enforce an exact single capturing
group.
| def _runtime_dependencies(cls, version: str) -> RuntimeDependencyCollection: | ||
| marksman_releases = f"https://github.com/artempyanykh/marksman/releases/download/{version}" | ||
| default_version = version == cls.DEFAULT_MARKSMAN_VERSION | ||
| return RuntimeDependencyCollection( | ||
| [ | ||
| RuntimeDependency( | ||
| id="marksman", | ||
| url=f"{marksman_releases}/marksman-linux-x64", | ||
| platform_id="linux-x64", | ||
| archive_type="binary", | ||
| binary_name="marksman", | ||
| sha256="b9cb666c643dfd9b699811fdfc445ed4c56be65c1d878c21d46847f0d7b0e475" if default_version else None, | ||
| allowed_hosts=MARKSMAN_ALLOWED_HOSTS, | ||
| ), | ||
| RuntimeDependency( | ||
| id="marksman", | ||
| url=f"{marksman_releases}/marksman-linux-arm64", | ||
| platform_id="linux-arm64", | ||
| archive_type="binary", | ||
| binary_name="marksman", | ||
| sha256="b8d6972a56f3f9b7bbbf7c77ef8998e3b66fa82fb03c01398e224144486c9e73" if default_version else None, | ||
| allowed_hosts=MARKSMAN_ALLOWED_HOSTS, | ||
| ), | ||
| RuntimeDependency( | ||
| id="marksman", | ||
| url=f"{marksman_releases}/marksman-macos", | ||
| platform_id="osx-x64", | ||
| archive_type="binary", | ||
| binary_name="marksman", | ||
| sha256="7e18803966231a33ee107d0d26f69b41f2f0dc1332c52dd9729c2e29fb77be83" if default_version else None, | ||
| allowed_hosts=MARKSMAN_ALLOWED_HOSTS, | ||
| ), | ||
| RuntimeDependency( | ||
| id="marksman", | ||
| url=f"{marksman_releases}/marksman-macos", | ||
| platform_id="osx-arm64", | ||
| archive_type="binary", | ||
| binary_name="marksman", | ||
| sha256="7e18803966231a33ee107d0d26f69b41f2f0dc1332c52dd9729c2e29fb77be83" if default_version else None, | ||
| allowed_hosts=MARKSMAN_ALLOWED_HOSTS, | ||
| ), | ||
| RuntimeDependency( | ||
| id="marksman", | ||
| url=f"{marksman_releases}/marksman.exe", | ||
| platform_id="win-x64", | ||
| archive_type="binary", | ||
| binary_name="marksman.exe", | ||
| sha256="39de9df039c8b0d627ac5918a9d8792ad20fc49e2461d1f5c906975c016799ec" if default_version else None, | ||
| allowed_hosts=MARKSMAN_ALLOWED_HOSTS, | ||
| ), | ||
| ] | ||
| ) |
Don't silently bypass checksum verification for override versions.
sha256=None here is not a neutral default: it flows into download_file_verified(..., expected_sha256=None) in src/solidlsp/ls_utils.py:223-253, and _verify_sha256_if_configured() returns immediately in src/solidlsp/ls_utils.py:343-352. That means any non-default marksman_version downloads and executes an unverified binary. If overrides stay supported, they need pinned digests per platform; otherwise this should fail closed instead of dropping integrity checks.
🛡️ Minimal fail-closed guard
def _runtime_dependencies(cls, version: str) -> RuntimeDependencyCollection:
marksman_releases = f"https://github.com/artempyanykh/marksman/releases/download/{version}"
default_version = version == cls.DEFAULT_MARKSMAN_VERSION
+ if not default_version:
+ raise ValueError(
+ "Custom marksman_version overrides need pinned SHA256 values per platform"
+ )
return RuntimeDependencyCollection(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/solidlsp/language_servers/marksman.py` around lines 45 - 96, The
_runtime_dependencies factory currently sets sha256=None for non-default
marksman versions which results in download_file_verified(...) and
_verify_sha256_if_configured() skipping integrity checks; update
_runtime_dependencies (the RuntimeDependency constructions in marksman.py) to
either require/provide platform-specific sha256 values for any non-default
version or explicitly reject override versions by raising an error (e.g.,
ValueError) when version != DEFAULT_MARKSMAN_VERSION, ensuring downloads never
proceed without an expected_sha256; reference RuntimeDependency,
_runtime_dependencies, download_file_verified, and _verify_sha256_if_configured
when making the change.
| marksman_version = self._custom_settings.get("marksman_version", self.DEFAULT_MARKSMAN_VERSION) | ||
| deps = self._runtime_dependencies(marksman_version) |
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
rg -n -C4 'class RuntimeDependencyCollection|def binary_path\(|def install\(|_ls_resources_dir' src/solidlsp
Repository: oraios/serena
Length of output: 47088
🏁 Script executed:
cat -n src/solidlsp/language_servers/marksman.py | head -120
Repository: oraios/serena
Length of output: 6480
🏁 Script executed:
sed -n '77,120p' src/solidlsp/language_servers/common.py
Repository: oraios/serena
Length of output: 1810
🏁 Script executed:
sed -n '120,200p' src/solidlsp/language_servers/common.py
Repository: oraios/serena
Length of output: 2492
🏁 Script executed:
grep -n "\.installed_version\|version_file\|_version" src/solidlsp/language_servers/typescript_language_server.py | head -20
Repository: oraios/serena
Length of output: 1553
🏁 Script executed:
grep -n "taplo_version\|\.installed_version\|version_file" src/solidlsp/language_servers/taplo_server.py
Repository: oraios/serena
Length of output: 466
🏁 Script executed:
sed -n '145,180p' src/solidlsp/language_servers/taplo_server.py
Repository: oraios/serena
Length of output: 1860
Marksman version override will not take effect if binary already exists.
The code retrieves marksman_version from custom settings (line 100) and passes it to _runtime_dependencies() (line 101), but the install directory self._ls_resources_dir (line 104) is version-agnostic. The existence check at line 106 uses only os.path.exists(marksman_executable_path), so any previously downloaded binary—regardless of version—will be reused, preventing version overrides from taking effect.
Similar servers handle this with version metadata: typescript_language_server.py writes a .installed_version file and compares the installed version against the expected version before reusing the binary. Consider adding equivalent version tracking to ensure version overrides work correctly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/solidlsp/language_servers/marksman.py` around lines 100 - 101, The
marksman/version check is missing: because self._ls_resources_dir is
version-agnostic, the existing binary at marksman_executable_path will be reused
regardless of marksman_version. Update the install logic in marksman.py (the
code that calls _runtime_dependencies and computes marksman_executable_path) to
record the installed version (e.g., write a .installed_version file next to the
binary) and on startup read and compare that file against the desired
marksman_version; if they differ, remove or re-download the binary (invoke
_runtime_dependencies) so the requested marksman_version takes effect. Ensure
reads/writes reference the same resource dir (self._ls_resources_dir) and use
the existing marksman_executable_path and marksman_version symbols to
locate/version the installation.
| omnisharp_ls_dir = os.path.join(cls.ls_resources_dir(solidlsp_settings), "OmniSharp") | ||
| if not os.path.exists(omnisharp_ls_dir): | ||
| os.makedirs(omnisharp_ls_dir) | ||
| FileUtils.download_and_extract_archive(runtime_dependencies["OmniSharp"]["url"], omnisharp_ls_dir, "zip") | ||
| FileUtils.download_and_extract_archive_verified( | ||
| runtime_dependencies["OmniSharp"]["url"], | ||
| omnisharp_ls_dir, | ||
| "zip", | ||
| expected_sha256=runtime_dependencies["OmniSharp"].get("integrity"), | ||
| allowed_hosts=OMNISHARP_ALLOWED_HOSTS, | ||
| ) | ||
| omnisharp_executable_path = os.path.join(omnisharp_ls_dir, runtime_dependencies["OmniSharp"]["binaryName"]) | ||
| assert os.path.exists(omnisharp_executable_path) | ||
| os.chmod(omnisharp_executable_path, 0o755) |
OmniSharp cache is not version-aware.
Similar to the Expert installer, omnisharp_ls_dir does not incorporate the version. Once OmniSharp is installed, changing omnisharp_version in settings has no effect. Consider including the version in the directory path:
Proposed fix
- omnisharp_ls_dir = os.path.join(cls.ls_resources_dir(solidlsp_settings), "OmniSharp")
+ omnisharp_ls_dir = os.path.join(cls.ls_resources_dir(solidlsp_settings), "OmniSharp", omnisharp_version)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| omnisharp_ls_dir = os.path.join(cls.ls_resources_dir(solidlsp_settings), "OmniSharp") | |
| if not os.path.exists(omnisharp_ls_dir): | |
| os.makedirs(omnisharp_ls_dir) | |
| FileUtils.download_and_extract_archive(runtime_dependencies["OmniSharp"]["url"], omnisharp_ls_dir, "zip") | |
| FileUtils.download_and_extract_archive_verified( | |
| runtime_dependencies["OmniSharp"]["url"], | |
| omnisharp_ls_dir, | |
| "zip", | |
| expected_sha256=runtime_dependencies["OmniSharp"].get("integrity"), | |
| allowed_hosts=OMNISHARP_ALLOWED_HOSTS, | |
| ) | |
| omnisharp_executable_path = os.path.join(omnisharp_ls_dir, runtime_dependencies["OmniSharp"]["binaryName"]) | |
| assert os.path.exists(omnisharp_executable_path) | |
| os.chmod(omnisharp_executable_path, 0o755) | |
| omnisharp_ls_dir = os.path.join(cls.ls_resources_dir(solidlsp_settings), "OmniSharp", omnisharp_version) | |
| if not os.path.exists(omnisharp_ls_dir): | |
| os.makedirs(omnisharp_ls_dir) | |
| FileUtils.download_and_extract_archive_verified( | |
| runtime_dependencies["OmniSharp"]["url"], | |
| omnisharp_ls_dir, | |
| "zip", | |
| expected_sha256=runtime_dependencies["OmniSharp"].get("integrity"), | |
| allowed_hosts=OMNISHARP_ALLOWED_HOSTS, | |
| ) | |
| omnisharp_executable_path = os.path.join(omnisharp_ls_dir, runtime_dependencies["OmniSharp"]["binaryName"]) | |
| assert os.path.exists(omnisharp_executable_path) | |
| os.chmod(omnisharp_executable_path, 0o755) |
🧰 Tools
🪛 Ruff (0.15.7)
[error] 222-222: os.chmod setting a permissive mask 0o755 on file or directory (S103)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/solidlsp/language_servers/omnisharp.py` around lines 210 - 222, The
OmniSharp installer currently uses omnisharp_ls_dir =
os.path.join(cls.ls_resources_dir(solidlsp_settings), "OmniSharp") which is not
version-aware, so changing omnisharp_version has no effect; update the directory
key to include the version (e.g. incorporate solidlsp_settings.omnisharp_version
or runtime_dependencies["OmniSharp"]["version"] into the path) when building
omnisharp_ls_dir so each version gets its own folder, then adjust subsequent
references (omnisharp_executable_path, exists check, chmod) to use that
versioned omnisharp_ls_dir.
```diff
 razor_omnisharp_ls_dir = os.path.join(cls.ls_resources_dir(solidlsp_settings), "RazorOmnisharp")
 if not os.path.exists(razor_omnisharp_ls_dir):
     os.makedirs(razor_omnisharp_ls_dir)
-FileUtils.download_and_extract_archive(runtime_dependencies["RazorOmnisharp"]["url"], razor_omnisharp_ls_dir, "zip")
+FileUtils.download_and_extract_archive_verified(
+    runtime_dependencies["RazorOmnisharp"]["url"],
+    razor_omnisharp_ls_dir,
+    "zip",
+    expected_sha256=runtime_dependencies["RazorOmnisharp"].get("integrity"),
+    allowed_hosts=OMNISHARP_ALLOWED_HOSTS,
+)
```
RazorOmnisharp cache also not version-aware.
The same issue applies to razor_omnisharp_ls_dir. Include razor_omnisharp_version in the path to enable version switching.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/solidlsp/language_servers/omnisharp.py` around lines 224 - 233, The
RazorOmnisharp cache directory is not version-aware; modify the creation of
razor_omnisharp_ls_dir so it includes the razor_omnisharp_version (the same way
other LS dirs use versioning) by joining cls.ls_resources_dir(solidlsp_settings)
with the RazorOmnisharp folder name plus the razor_omnisharp_version (or a
subfolder named by that version) before checking os.path.exists and calling
FileUtils.download_and_extract_archive_verified; ensure
runtime_dependencies["RazorOmnisharp"] and
FileUtils.download_and_extract_archive_verified continue to use the same
directory variable so downloads/installations are version-scoped.
```diff
 taplo_dir = os.path.join(self._ls_resources_dir, "taplo")
 os.makedirs(taplo_dir, exist_ok=True)

-_, executable_name = _get_taplo_download_url()
+taplo_version = self._custom_settings.get("taplo_version", TAPLO_VERSION)
+_, executable_name = _get_taplo_download_url(taplo_version)
 taplo_executable = os.path.join(taplo_dir, executable_name)

 if os.path.exists(taplo_executable) and os.access(taplo_executable, os.X_OK):
     log.info(f"Using cached Taplo at: {taplo_executable}")
     return taplo_executable
```
Taplo cache is not version-aware.
The taplo_dir and taplo_executable paths don't incorporate the version. Changing taplo_version in settings won't trigger a re-download if a cached binary exists.
Proposed fix
```diff
 taplo_dir = os.path.join(self._ls_resources_dir, "taplo")
+taplo_version = self._custom_settings.get("taplo_version", TAPLO_VERSION)
+taplo_dir = os.path.join(taplo_dir, taplo_version)
 os.makedirs(taplo_dir, exist_ok=True)
-taplo_version = self._custom_settings.get("taplo_version", TAPLO_VERSION)
 _, executable_name = _get_taplo_download_url(taplo_version)
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
taplo_dir = os.path.join(self._ls_resources_dir, "taplo")
taplo_version = self._custom_settings.get("taplo_version", TAPLO_VERSION)
taplo_dir = os.path.join(taplo_dir, taplo_version)
os.makedirs(taplo_dir, exist_ok=True)
_, executable_name = _get_taplo_download_url(taplo_version)
taplo_executable = os.path.join(taplo_dir, executable_name)
if os.path.exists(taplo_executable) and os.access(taplo_executable, os.X_OK):
    log.info(f"Using cached Taplo at: {taplo_executable}")
    return taplo_executable
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/solidlsp/language_servers/taplo_server.py` around lines 132 - 141, The
cache check isn't version-aware: include the chosen taplo_version when building
taplo_dir/taplo_executable so changing the setting forces a new download. Modify
the code that sets taplo_dir and taplo_executable (currently using taplo_dir,
taplo_executable and _get_taplo_download_url) to incorporate the resolved
taplo_version (e.g. join taplo_version into the directory or filename, ensuring
it's sanitized) and then perform the os.path.exists/os.access check against that
versioned path so different taplo_version values don't reuse an incompatible
cached binary.
```python
if response is not None:
    assert isinstance(response, dict), (
        f"Unexpected response from Language Server (expected list, got {type(response)}): {response}"
    )
    ret = []
    for item in response["items"]:  # type: ignore
        new_item: ls_types.Diagnostic = {
            "uri": uri,
            "severity": item["severity"],
            "message": item["message"],
            "range": item["range"],
            "code": item["code"],  # type: ignore
        }
        if "source" in item:
            new_item["source"] = item["source"]
        ret.append(ls_types.Diagnostic(**new_item))
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's examine the specific file and lines mentioned
wc -l src/solidlsp/ls.py
```

Repository: oraios/serena
Length of output: 80

🏁 Script executed:

```shell
# Read the actual code at lines 1320-1335
sed -n '1320,1335p' src/solidlsp/ls.py
```

Repository: oraios/serena
Length of output: 859

🏁 Script executed:

```shell
# Find the ls_types.Diagnostic type definition
fd -e py | xargs grep -l "class Diagnostic\|Diagnostic.*TypedDict" | head -5
```

Repository: oraios/serena
Length of output: 185

🏁 Script executed:

```shell
# Search for _store_published_diagnostics implementation
rg -n "_store_published_diagnostics" src/solidlsp/ls.py
```

Repository: oraios/serena
Length of output: 183

🏁 Script executed:

```shell
# Check the assert message context more widely
sed -n '1315,1340p' src/solidlsp/ls.py
```

Repository: oraios/serena
Length of output: 1312

🏁 Script executed:

```shell
# Look at ls_types.Diagnostic definition
cat -n src/solidlsp/ls_types.py | grep -A 20 "Diagnostic"
```

Repository: oraios/serena
Length of output: 2089

🏁 Script executed:

```shell
# Check _store_published_diagnostics implementation
sed -n '557,620p' src/solidlsp/ls.py
```

Repository: oraios/serena
Length of output: 2745

🏁 Script executed:

```shell
# Search for severity and code handling in the codebase
rg -n "severity|code" src/solidlsp/ls_types.py -B 2 -A 2
```

Repository: oraios/serena
Length of output: 2060

🏁 Script executed:

```shell
# Verify the exact LSP protocol spec reference for diagnostic optionality
web_search "LSP textDocument/diagnostic response Diagnostic severity code optional"
```

Repository: oraios/serena
Length of output: 107

🌐 Web query:

LSP Language Server Protocol Diagnostic severity code optional fields specification

💡 Result:

In the Language Server Protocol (LSP), the `Diagnostic` structure includes an optional `severity` field of type `DiagnosticSeverity`, which is an integer enum (1: Error, 2: Warning, 3: Information, 4: Hint). The `severity` field is optional; if omitted, it is up to the client to interpret the diagnostic as an error, warning, info, or hint. TypeScript interface (from LSP implementations and older specs):

```typescript
interface Diagnostic {
    range: Range;
    severity?: DiagnosticSeverity;  // number (1-4)
    code?: number | string;
    source?: string;
    message: string;
    // additional optional fields like tags, relatedInformation, data (since 3.16), etc.
}

enum DiagnosticSeverity {
    Error = 1,
    Warning = 2,
    Information = 3,
    Hint = 4,
}
```

This has been consistent since early LSP versions (e.g., 2.x) through current (3.17/3.18).
Citations:
- 1: https://docs.rs/lsp-types/latest/lsp_types/struct.DiagnosticSeverity.html
- 2: https://github.com/reznikmm/ada-lsp/blob/master/doc/protocol.md
- 3: https://docs.rs/lsp-types/latest/lsp_types/struct.Diagnostic.html
- 4: https://github.com/Microsoft/language-server-protocol/blob/main/versions/protocol-2-x.md
Handle optional LSP Diagnostic fields defensively.
Per LSP specification, severity and code in Diagnostic responses are optional. Accessing them unconditionally with item["severity"] and item["code"] will raise KeyError on valid responses that omit these fields, even though _store_published_diagnostics() in the same file already handles both as optional using .get().
Also, the assertion message incorrectly states "expected list" when checking isinstance(response, dict).
🩹 Suggested fix
```diff
 if response is not None:
     assert isinstance(response, dict), (
-        f"Unexpected response from Language Server (expected list, got {type(response)}): {response}"
+        f"Unexpected response from Language Server (expected dict, got {type(response)}): {response}"
     )
     ret = []
     for item in response["items"]:  # type: ignore
         new_item: ls_types.Diagnostic = {
             "uri": uri,
-            "severity": item["severity"],
             "message": item["message"],
             "range": item["range"],
-            "code": item["code"],  # type: ignore
         }
+        if "severity" in item:
+            new_item["severity"] = item["severity"]  # type: ignore[assignment]
+        if "code" in item:
+            new_item["code"] = item["code"]  # type: ignore[assignment]
         if "source" in item:
             new_item["source"] = item["source"]
         ret.append(ls_types.Diagnostic(**new_item))
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
if response is not None:
    assert isinstance(response, dict), (
        f"Unexpected response from Language Server (expected dict, got {type(response)}): {response}"
    )
    ret = []
    for item in response["items"]:  # type: ignore
        new_item: ls_types.Diagnostic = {
            "uri": uri,
            "message": item["message"],
            "range": item["range"],
        }
        if "severity" in item:
            new_item["severity"] = item["severity"]  # type: ignore[assignment]
        if "code" in item:
            new_item["code"] = item["code"]  # type: ignore[assignment]
        if "source" in item:
            new_item["source"] = item["source"]
        ret.append(ls_types.Diagnostic(**new_item))
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/solidlsp/ls.py` around lines 1320 - 1335, The response handling assumes
required fields and has a wrong assertion message: change the assertion text to
say "expected dict" instead of "expected list", and when building new_item in
the loop use item.get("severity") and item.get("code") (not item["severity"] /
item["code"]) so missing optional LSP Diagnostic fields are handled safely; keep
the conditional inclusion of "source" as-is and continue constructing
ls_types.Diagnostic(**new_item) so optional None/missing values are preserved.
force-pushed from 4d55c51 to 9f98221 (Compare)
BaseException -> Exception
force-pushed from 9f98221 to a2f51d1 (Compare)
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/solidlsp/language_servers/fsharp_language_server.py (1)
75-84: ⚠️ Potential issue | 🟡 Minor

Remove unused `RuntimeDependencyCollection` instantiation.

The `RuntimeDependencyCollection` object is instantiated on lines 75-84 but never assigned or used. The actual installation is performed directly via `subprocess.run()` on line 101. This is dead code that should be removed.

Remove unused code

```diff
-        RuntimeDependencyCollection(
-            [
-                RuntimeDependency(
-                    id="fsautocomplete",
-                    description="FsAutoComplete (Ionide F# Language Server)",
-                    command=f"dotnet tool install --tool-path ./ fsautocomplete --version {fsautocomplete_version}",
-                    platform_id="any",
-                ),
-            ]
-        )
         # Install FsAutoComplete if not already installed
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/fsharp_language_server.py` around lines 75 - 84, Remove the dead RuntimeDependencyCollection/RuntimeDependency instantiation for "fsautocomplete" (the RuntimeDependencyCollection(...) and nested RuntimeDependency(...)) since it's never assigned or used; delete that block and keep the actual installation path via subprocess.run(...) that performs the dotnet tool install for fsautocomplete, ensuring no other references to RuntimeDependencyCollection or RuntimeDependency remain in fsharp_language_server.py.
♻️ Duplicate comments (12)
test/serena/test_serena_agent.py (3)
1320-1349: 🛠️ Refactor suggestion | 🟠 Major

Add snapshot coverage for these `SafeDeleteSymbol` cases.

`SafeDeleteSymbol` is a symbolic edit tool, but these new cases only assert substrings/manual file checks. Please mirror the snapshot pattern used in `test/serena/test_symbol_editing.py` so the structured tool output is pinned as well.

As per coding guidelines, `test/**/*.py`: Symbolic editing operations must have snapshot tests.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/serena/test_serena_agent.py` around lines 1320 - 1349, Both tests for SafeDeleteSymbol need snapshot coverage: update test_safe_delete_symbol_blocked_by_references and test_safe_delete_symbol_succeeds_when_no_references to assert the structured tool output with the snapshot fixture (pin the full tool result returned by SafeDeleteSymbol.apply) and, for the success case, also snapshot the post-edit file content from read_project_file; mirror the pattern used in test/serena/test_symbol_editing.py (use the snapshot fixture to save result and file_content) while keeping the existing substring/assert checks (referencing SafeDeleteSymbol, SUCCESS_RESULT, read_project_file, and the two test function names to locate the code).
1326-1333: ⚠️ Potential issue | 🟠 Major

Assert the blocked delete is non-mutating before rollback.

`project_file_modification_context(...)` restores the file in `finally`, so this test still passes if `SafeDeleteSymbol` edits the file and then returns "Cannot delete". Capture the original contents inside the `with` block and compare again before it exits.

Minimal fix

```diff
 # wrap in modification context as a safety net: if the tool has a bug and deletes anyway,
 # the file will be restored, preventing corruption of test resources
 with project_file_modification_context(serena_agent, case.relative_path):
+    original_content = read_project_file(serena_agent.get_active_project(), case.relative_path)
     safe_delete_tool = serena_agent.get_tool(SafeDeleteSymbol)
     result = safe_delete_tool.apply(name_path_pattern=case.name_path, relative_path=case.relative_path)
     assert "Cannot delete" in result, f"Expected deletion to be blocked due to existing references, but got: {result}"
     assert "referenced in" in result, f"Expected reference information in result, but got: {result}"
+    assert read_project_file(serena_agent.get_active_project(), case.relative_path) == original_content
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/serena/test_serena_agent.py` around lines 1326 - 1333, The test currently only checks SafeDeleteSymbol.apply's return value but not whether it mutated the file under test; inside the with project_file_modification_context(serena_agent, case.relative_path) block, read and store the original file contents (via serena_agent or its filesystem helper) before calling safe_delete_tool = serena_agent.get_tool(SafeDeleteSymbol) and safe_delete_tool.apply(...), then after calling apply assert the file contents are identical to the stored original (ensuring non-mutating behavior) in addition to the existing assertions about the returned message.
45-50: ⚠️ Potential issue | 🟡 Minor

Rename the keyword-only `id` parameter.

Ruff A002 still fires here because the helper shadows Python's builtin `id`.

As per coding guidelines, `**/*.py`: Use strict typing with mypy and format code with ruff.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@test/serena/test_serena_agent.py` around lines 45 - 50, The parameter name id in BaseCase.to_pytest_param shadows the builtin id and triggers Ruff A002; rename it to a nonbuiltin name (e.g., case_id or param_id) in the method signature of BaseCase.to_pytest_param and update all call sites to pass the new name, keeping the body unchanged (still call pytest.param(self.language, self, marks=[*get_pytest_markers(self.language), *marks], id=case_id/param_id) or pass positionally) and ensure imports/reference to get_pytest_markers and pytest.param remain correct.

test/diagnostics_cases.py (1)
24-29: ⚠️ Potential issue | 🟡 Minor

Rename the keyword-only `id` parameter.

Ruff A002 still fires here because the helper shadows Python's builtin `id`, which can turn this new test module into a lint failure.

As per coding guidelines, `**/*.py`: Use strict typing with mypy and format code with ruff.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@test/diagnostics_cases.py` around lines 24 - 29, The helper function diagnostic_case_param currently defines a keyword-only parameter named id which shadows the built-in id and triggers Ruff A002; rename the parameter to something like case_id (update the signature of diagnostic_case_param to use case_id: str), then update the pytest.param call to pass id=case_id and update all call sites that pass id=... to use case_id; keep the rest of the signature (case, *marks) and typing intact so mypy and ruff remain satisfied.

src/solidlsp/language_servers/ty_server.py (1)
4-7: ⚠️ Potential issue | 🔴 Critical

Use `uv tool run`, not `uv x`.

The official uv docs only describe `uvx` as the alias for `uv tool run`, and the ty docs show the language server being started as `ty server`. The fallback here uses `uv x`, so environments that have `uv` but no separate `uvx` shim will invoke the wrong command. (docs.astral.sh)

Minimal fix

```diff
-    - ty_version: Override the pinned ``ty`` version used with ``uvx`` / ``uv x``
+    - ty_version: Override the pinned ``ty`` version used with ``uvx`` / ``uv tool run``
 ...
-    return [uv_path, "x", "--from", f"ty=={ty_version}", "ty", "server"]
+    return [uv_path, "tool", "run", "--from", f"ty=={ty_version}", "ty", "server"]
```

Also applies to: 63-66
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/ty_server.py` around lines 4 - 7, The docs and fallback command in ty_server.py incorrectly reference the `uv x` invocation; update any usage and comments that mention `uv x` (including the ls_specific_settings["python_ty"] doc string and the server start logic that constructs the command) to use the official `uv tool run` form (or the documented `uvx` alias) so the language server is started as `ty server` via `uv tool run ty server`; ensure both the textual help entries and the code path that falls back to `uv x` are changed to `uv tool run` (or `uvx`) to match the official docs.

docs/02-usage/070_security.md (1)
28-41: ⚠️ Potential issue | 🟠 Major

Narrow the checksum guarantee to Serena's bundled versions.

The code in this PR intentionally passes `expected_sha256=None` when users override bundled dependency versions, so this section still overstates the guarantee. Please scope the "all of the following" list to Serena's pinned defaults and explicitly call out that custom version overrides weaken checksum verification.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/02-usage/070_security.md` around lines 28 - 41, Update the "Pinned versions by default" bullet and the following paragraph (the block starting "In practice, this means that a downloaded artifact must match all of the following:") to clarify that the SHA256 integrity guarantee only applies to Serena's bundled/pinned defaults; explicitly state that when users override bundled dependency versions (where the code passes expected_sha256=None), checksum verification is not enforced and custom overrides weaken integrity guarantees. Adjust the list language to say these checks apply to Serena's pinned defaults, and add one concise sentence calling out that custom version overrides may skip SHA256 checks and thus reduce security.

src/solidlsp/language_servers/omnisharp.py (1)
161-176: ⚠️ Potential issue | 🟠 Major

Scope the OmniSharp and Razor installs by version.

The new override settings only rewrite the archive URLs. Both downloads still extract into fixed `OmniSharp` and `RazorOmnisharp` directories, so once one version exists on disk, changing `omnisharp_version` or `razor_omnisharp_version` no longer has any effect.

Minimal direction

```diff
-omnisharp_ls_dir = os.path.join(cls.ls_resources_dir(solidlsp_settings), "OmniSharp")
+omnisharp_ls_dir = os.path.join(cls.ls_resources_dir(solidlsp_settings), "OmniSharp", omnisharp_version)
 ...
-razor_omnisharp_ls_dir = os.path.join(cls.ls_resources_dir(solidlsp_settings), "RazorOmnisharp")
+razor_omnisharp_ls_dir = os.path.join(cls.ls_resources_dir(solidlsp_settings), "RazorOmnisharp", razor_omnisharp_version)
```

Also applies to: 210-234

🤖 Prompt for AI Agents
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/omnisharp.py` around lines 161 - 176, The current patch only rewrites the dependency URLs so multiple omnisharp versions clash on disk; in the OmniSharp and RazorOmnisharp handling inside omnisharp.py (the loop that checks dependency["id"] == "OmniSharp" and "RazorOmnisharp"), update how install paths are generated: where you currently replace DEFAULT_OMNISHARP_VERSION or DEFAULT_RAZOR_OMNISHARP_VERSION in dependency["installPath"] and dependency["installTestPath"], ensure the directory component that is the plain "OmniSharp" or "RazorOmnisharp" is also made version-scoped (for example include omnisharp_version or razor_omnisharp_version in the path name) and if those keys are missing, set them to a versioned default; apply the same change in the duplicate block later (lines ~210-234) and keep the integrity nulling behavior when a non-default version is used.

src/solidlsp/ls.py (2)
1337-1346: ⚠️ Potential issue | 🟠 Major

Don't revive cached diagnostics after an empty successful pull.

An empty `ret` here means the pull request succeeded and the file currently has no diagnostics. Falling back on `if not ret:` can resurrect stale published diagnostics from an older generation. Gate this branch on `response is None` instead.

Suggested fix

```diff
-        if not ret:
+        if response is None:
             published_diagnostics = self._wait_for_published_diagnostics(
                 uri=uri,
                 after_generation=diagnostics_before_request,
                 timeout=2.5 if pull_diagnostics_failed else 0.5,
             )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/ls.py` around lines 1337 - 1346, The branch that revives cached diagnostics should only run when the server did not respond, not when the server returned an empty list; change the condition from "if not ret:" to check the raw RPC response (e.g. "if response is None:") so you only call _wait_for_published_diagnostics(uri, after_generation=diagnostics_before_request, timeout=... ) and possibly _get_cached_published_diagnostics(uri) when there was no response; reference the local names ret, response, _wait_for_published_diagnostics, _get_cached_published_diagnostics, uri, diagnostics_before_request, and pull_diagnostics_failed and ensure the rest of the logic (setting published_diagnostics and assigning ret) remains the same.
1320-1335: ⚠️ Potential issue | 🟠 Major

Handle optional diagnostic fields defensively.

`severity` and `code` are optional in LSP diagnostics. The direct `item["severity"]` / `item["code"]` access raises `KeyError` on valid responses, which makes `request_text_document_diagnostics()` fail even though `_store_published_diagnostics()` already handles the same fields defensively.

Suggested fix

```diff
 if response is not None:
     assert isinstance(response, dict), (
-        f"Unexpected response from Language Server (expected list, got {type(response)}): {response}"
+        f"Unexpected response from Language Server (expected dict, got {type(response)}): {response}"
     )
     ret = []
     for item in response["items"]:  # type: ignore
         new_item: ls_types.Diagnostic = {
             "uri": uri,
-            "severity": item["severity"],
             "message": item["message"],
             "range": item["range"],
-            "code": item["code"],  # type: ignore
         }
+        if "severity" in item:
+            new_item["severity"] = item["severity"]
+        if "code" in item:
+            new_item["code"] = item["code"]  # type: ignore[assignment]
         if "source" in item:
             new_item["source"] = item["source"]
         ret.append(ls_types.Diagnostic(**new_item))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/ls.py` around lines 1320 - 1335, request_text_document_diagnostics() assumes every diagnostic has "severity" and "code" and directly indexes item["severity"]/item["code"], which can KeyError for valid LSP responses; update the code that builds new_item from response["items"] to mirror _store_published_diagnostics() by reading optional fields defensively (use item.get("severity") and item.get("code") or only set "severity"/"code" keys when present), and only attach "source", "severity", and "code" to new_item if they exist before constructing ls_types.Diagnostic to avoid KeyError on optional fields.

src/solidlsp/ls_utils.py (2)
421-425: ⚠️ Potential issue | 🔴 Critical

`extractall()` still permits tar traversal via links.

Checking only `tar_member.name` is not enough: symlink, hardlink, FIFO, and device entries can still escape `target_path` when `extractall()` materializes them. Extract members one by one and reject anything except regular files/directories.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/ls_utils.py` around lines 421 - 425, The current extraction loop only validates tar_member.name but then calls tar_ref.extractall(), which still allows symlinks/hardlinks/FIFOs/devices to escape target_path; update the tar extraction in the tarfile.open block to iterate members and perform strict per-member checks: use FileUtils._validate_extraction_path(tar_member.name, target_path) for each member, allow only regular files and directories (e.g., check tar_member.isreg() / tar_member.isdir()), explicitly reject symlinks, hardlinks, FIFOs and device entries, and for regular files extract via tar_ref.extractfile() + safe write to disk and recreate directories as needed instead of calling tar_ref.extractall(). Ensure the code references the existing FileUtils._validate_extraction_path and the tarfile.open context where tar_ref is used.
240-247: ⚠️ Potential issue | 🔴 Critical

Validate redirect targets before following them.

`requests.get()` still follows redirects here, so an allowlisted URL can bounce to an unapproved host before Line 246 checks `response.url`. Disable automatic redirects and validate each `Location` hop before issuing the next request.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/ls_utils.py` around lines 240 - 247, requests.get currently follows redirects before you validate the final response URL; change the logic in the download flow around the requests.get call to disable automatic redirects (use allow_redirects=False), then implement a loop that inspects each redirect hop by reading the Location header on 3xx responses, call FileUtils._validate_download_host() on each Location target (and on the initial URL) before issuing the next requests.get, and only follow to the next hop if validation passes; if a hop is disallowed, raise SolidLSPException. Keep using the same timeout/stream parameters for subsequent requests and preserve the existing status_code/response handling once a non-redirect 200 response is obtained.

src/serena/tools/tools_base.py (1)
34-43: ⚠️ Potential issue | 🟠 Major

Use a stable identity when diffing diagnostics.

`_format_lsp_edit_result_with_new_diagnostics()` compares `DiagnosticIdentity` values before and after an edit. Because the identity includes exact start/end positions, any pre-existing warning that just shifts after an insertion/deletion is reported as newly introduced. Compare on stable fields only and keep the range in a separate display object.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/serena/tools/tools_base.py` around lines 34 - 43, DiagnosticIdentity currently embeds exact ranges so _format_lsp_edit_result_with_new_diagnostics() treats shifted diagnostics as new; change the comparison identity to stable fields only (e.g., message, severity, code_repr, source) and remove start_line/start_character/end_line/end_character from DiagnosticIdentity or add a new StableDiagnosticIdentity used for diffing, then keep the original range fields in a separate display object (e.g., DiagnosticDisplay or original DiagnosticIdentityRange) so _format_lsp_edit_result_with_new_diagnostics() uses the stable identity for equality/sets while still rendering ranges from the separate display object.
🧹 Nitpick comments (6)
src/serena/jetbrains/jetbrains_plugin_client.py (1)
428-442: Tighten response typing for new refactor APIs

These methods expose `dict[str, Any]` return types, which weakens mypy guarantees on a public client surface. Please switch to a concrete typed response (e.g., a `TypedDict`/DTO in `jetbrains_types`) instead of `Any`.
**/*.py: Use strict typing with mypy and format code with ruff".Also applies to: 522-557
src/solidlsp/language_servers/kotlin_language_server.py (1)
122-124: Allow checksum verification for custom Kotlin LSP versions.For non-default versions,
expected_sha256becomesNone, so integrity checks are skipped. Consider supporting akotlin_lsp_sha256override to preserve verification when users pin custom versions.💡 Suggested direction
- expected_sha256 = None - if kotlin_lsp_version == DEFAULT_KOTLIN_LSP_VERSION: - expected_sha256 = KOTLIN_LSP_SHA256_BY_SUFFIX[kotlin_suffix] + custom_sha256 = self._custom_settings.get("kotlin_lsp_sha256") + if kotlin_lsp_version == DEFAULT_KOTLIN_LSP_VERSION: + expected_sha256 = KOTLIN_LSP_SHA256_BY_SUFFIX[kotlin_suffix] + else: + expected_sha256 = custom_sha256Also applies to: 126-132
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/kotlin_language_server.py` around lines 122 - 124, The code currently sets expected_sha256 to None for non-default kotlin_lsp_version, skipping checksum checks; update the logic in kotlin_language_server.py so that if a user-provided override (e.g., kotlin_lsp_sha256) is present it is used as expected_sha256, otherwise fall back to KOTLIN_LSP_SHA256_BY_SUFFIX[kotlin_suffix] when kotlin_lsp_version == DEFAULT_KOTLIN_LSP_VERSION; ensure the variable expected_sha256 is passed into the existing integrity verification call (the same code path that uses expected_sha256) so custom pinned versions still get checksum verification while preserving current behavior when no override or mapping exists.test/solidlsp/lua/test_lua_basic.py (1)
85-110: Same class-level `if` pattern for conditional tests.

This follows the same pattern used in Go and Rust tests. Consider using `pytest.mark.skipif` for consistency with pytest conventions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/solidlsp/lua/test_lua_basic.py` around lines 85 - 110, Replace the enclosing if-block check with pytest.skipif decorators so tests skip when Lua implementations are not available; specifically, remove the surrounding if language_has_verified_implementation_support(Language.LUA) and add `@pytest.mark.skipif`(not language_has_verified_implementation_support(Language.LUA), reason="Lua implementation support not available") above the test functions test_find_implementations and test_request_implementing_symbols (they are the functions that call language_server.request_implementation and language_server.request_implementing_symbols) to match the pytest style used in other modules.

src/solidlsp/language_servers/groovy_language_server.py (1)
95-97: Version tag extraction assumes hyphenated format.

The expression `vscode_java_version.rsplit('-', 1)[0]` extracts the tag by removing the build number suffix. This works correctly for the expected format `"1.42.0-561"` → tag `"v1.42.0"`.

If a user provides a version without a hyphen (e.g., `"1.43.0"`), the tag becomes `"v1.43.0"`, which may or may not match actual release tags. Consider documenting the expected version format in the docstring.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/groovy_language_server.py` around lines 95 - 97, Update the docstring for the Groovy language server (around where groovy_settings, vscode_java_version and vscode_java_tag are defined and where solidlsp_settings.get_ls_specific_settings(Language.GROOVY) is used) to state the expected vscode_java_version format (e.g., "MAJOR.MINOR.PATCH-BUILD" like "1.42.0-561") and add a brief note that versions without a hyphen will be interpreted as tag "v{version}"; alternatively, implement validation logic to check for the hyphenated build suffix and raise a clear error or fallback behavior if absent so vscode_java_tag computation remains correct.

src/solidlsp/ls_process.py (1)
571-579: Consider thread-safety for observer iteration.

The `_notification_observers` list is iterated without synchronization while `on_any_notification` can append to it from another thread. If observers are registered after the language server starts (while notifications are being processed), this could cause a `RuntimeError: list changed size during iteration` or miss newly added observers.

If observers are only ever registered during initialization before `start()` is called, this is fine. Otherwise, consider using a lock or copying the list before iteration.

Optional: Copy list before iteration for thread-safety
```diff
     def _notification_handler(self, response: StringDict) -> None:
         """
         Handle the notification received from the server: call the appropriate callback function
         """
         method = response.get("method", "")
         params = response.get("params")
-        for observer in self._notification_observers:
+        for observer in list(self._notification_observers):
             try:
                 observer(method, params)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/ls_process.py` around lines 571 - 579, The iteration over self._notification_observers can race with concurrent appends from on_any_notification; to fix it, make the iteration safe by either protecting the list with a lock (add a threading.Lock or asyncio.Lock used by on_any_notification when appending and by the notification dispatcher when iterating) or simply snapshot the list before iterating (e.g., observers = list(self._notification_observers) and iterate over that). Update the registration path (on_any_notification) to use the same lock if you choose locking, and change the loop that currently iterates self._notification_observers to use the snapshot or the lock so you won't get "list changed size during iteration" or miss newly added observers.

test/solidlsp/solidity/test_solidity_basic.py (1)
168-193: Consider using `pytest.mark.skipif` instead of class-level `if` for conditional tests.

Using an `if` statement at the class body level to conditionally define test methods works but is unconventional. With this pattern, the tests won't appear in pytest collection output when skipped, making it harder to see what's being gated.

A more idiomatic approach would be:
```python
@pytest.mark.skipif(
    not language_has_verified_implementation_support(Language.SOLIDITY),
    reason="Solidity LS does not have verified implementation support"
)
@pytest.mark.parametrize("language_server", [Language.SOLIDITY], indirect=True)
@pytest.mark.parametrize("repo_path", [Language.SOLIDITY], indirect=True)
def test_find_implementations(self, language_server: SolidLanguageServer, repo_path: Path) -> None:
    ...
```

This keeps the tests visible in collection and provides clear skip reasons. However, if this pattern is an established project convention, feel free to keep it for consistency.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/solidlsp/solidity/test_solidity_basic.py` around lines 168 - 193, The conditional at class/body level using if language_has_verified_implementation_support(Language.SOLIDITY) should be replaced with pytest.mark.skipif on the test functions so they are collected and show as skipped with a reason; update the decorators for test_find_implementations and test_request_implementing_symbols to use `@pytest.mark.skipif`(not language_has_verified_implementation_support(Language.SOLIDITY), reason="Solidity LS does not have verified implementation support") while keeping the existing `@pytest.mark.parametrize` decorators and signatures (language_server: SolidLanguageServer, repo_path: Path) so behavior is unchanged but tests are visible in pytest collection.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/serena/jetbrains/jetbrains_plugin_client.py`:
- Around line 631-648: list_inspections can end up passing None to _make_request
when request_data is empty (because _make_request converts falsy payloads to
None), which breaks servers expecting an empty JSON object; update
list_inspections so it always passes an actual dict body (e.g. pass request_data
or {} explicitly) to _make_request instead of allowing an empty dict to be
treated as falsy, referencing the list_inspections method and the _make_request
call to locate where to change the payload before the POST to
"/listInspections".
In `@src/serena/symbol.py`:
- Around line 47-49: LanguageServerSymbolLocation.__post_init__ now normalizes
path separators, but _symbol_identity still uses symbol.relative_path raw so
equivalent paths like "foo\\bar.py" and "foo/bar.py" produce duplicate keys;
update _symbol_identity to canonicalize the relative_path the same way (e.g., if
symbol.relative_path is not None, use symbol.relative_path.replace("\\", "/"))
before building the identity key, and apply the same normalization in any other
identity/key-building helpers used by get_symbol_diagnostics_by_location to
ensure consistent keys across calls.
In `@src/serena/tools/symbol_tools.py`:
- Around line 801-810: The diagnostics snapshot is only taken for relative_path
but rename_symbol may touch many files; first preview the rename to get the full
set of edited paths, then capture diagnostics for all those files before
applying the edit. Concretely: call a preview API on the editor (e.g.,
code_editor.preview_rename or equivalent) to obtain the full edited file list
from the result, pass that list into _capture_published_lsp_diagnostics_snapshot
(replace the single [EditedFilePath(relative_path, ...)]), then call
code_editor.rename_symbol to apply the change and finally call
_format_lsp_edit_result_with_new_diagnostics with the original diagnostics
snapshot and code_editor.get_last_edited_file_paths().
In `@src/solidlsp/language_servers/al_language_server.py`:
- Around line 258-268: The cache path for the AL extension does not include the
chosen version so changing al_extension_version has no effect; update the code
that builds al_extension_dir (and any related lookup logic) to incorporate
al_extension_version (or store/read a metadata file) so different versions map
to different directories and _download_al_extension is only skipped when the
directory/version match the requested al_extension_version; modify references to
al_extension_dir, al_extension_version, cls.ls_resources_dir and
_download_al_extension accordingly to implement versioned caching or metadata
validation.
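The version-scoped caching suggested above can be sketched as follows (the directory naming scheme is illustrative):

```python
from pathlib import Path


def versioned_extension_dir(resources_dir: Path, version: str) -> Path:
    # Embed the requested version in the cache path so that switching
    # al_extension_version can never silently reuse a stale install.
    return resources_dir / f"al-extension-{version}"


base = Path("/tmp/ls_resources")
old = versioned_extension_dir(base, "14.0.0")
new = versioned_extension_dir(base, "15.0.1")
```

With this layout, the "directory exists" check naturally becomes a "this exact version exists" check, so the download step is skipped only when the requested version is already cached.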
- Around line 157-163: The call to
FileUtils.download_and_extract_archive_verified currently skips integrity checks
when url != AL_EXTENSION_URL; change it to always pass a non-None
expected_sha256 by looking up the hash for the requested URL (e.g., replace the
conditional with a lookup in a mapping like AL_EXTENSION_SHA256S[url] or raise
if missing) so FileUtils.download_and_extract_archive_verified is always given
an expected_sha256; update references around AL_EXTENSION_URL,
AL_EXTENSION_SHA256 and AL_EXTENSION_ALLOWED_HOSTS and ensure the new mapping is
used by the call in the function where
FileUtils.download_and_extract_archive_verified is invoked.
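The fail-closed lookup described above can be sketched like this (the mapping contents are placeholders, not real checksums):

```python
# Hypothetical mapping from download URL to its pinned checksum.
EXTENSION_SHA256S = {
    "https://example.com/al-ext-14.0.0.vsix": "a" * 64,
}


def expected_sha256_for(url: str) -> str:
    # Fail closed: refuse to download anything we have no checksum for,
    # instead of silently skipping verification.
    try:
        return EXTENSION_SHA256S[url]
    except KeyError:
        raise ValueError(f"no pinned SHA256 for {url}; refusing unverified download") from None


known = expected_sha256_for("https://example.com/al-ext-14.0.0.vsix")
try:
    expected_sha256_for("https://example.com/al-ext-99.vsix")
    missing_raised = False
except ValueError:
    missing_raised = True
```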
In `@src/solidlsp/language_servers/eclipse_jdtls.py`:
- Around line 155-161: The install checks for vscode-java and IntelliCode use
fixed directory names so changing vscode_java_version or intellicode_version
won't trigger re-download; update how vscode_java_path and
intellicode_directory_path are computed to include the specific version string
(use the normalized version from vscode_java_version and intellicode_version,
similar to how gradle_path uses gradle_version), and adjust the existence checks
and extraction logic in the install functions (the code that currently relies on
default_vscode_java_version/default_intellicode_version and the static names) to
look for the versioned directories/files so stale installs are detected and
replaced when versions change; ensure any helper functions that compute paths or
check installation presence use the version-aware names (vscode_java_path,
intellicode_directory_path, gradle_path, vscode_java_version,
intellicode_version).
- Around line 187-191: The manifest currently hard-codes internal bundle
filenames/paths ("jre_home_path", "jre_path", "lombok_jar_path",
"jdtls_launcher_jar_path", "jdtls_readonly_config_path" etc.), which breaks when
users override versions; change the code that builds these manifest entries to
discover those paths dynamically from the extracted bundle directories instead
of embedding fixed names. After extracting each archive, scan the extracted
folder for expected patterns (e.g., find a bin/java under the JRE extract,
search for lombok-*.jar, org.eclipse.equinox.launcher_*.jar,
com.microsoft.jdtls.intellicode.core-*.jar, and platform-specific config dirs)
and assign the resulting relative paths to the manifest keys; add clear
fallbacks and a descriptive error if multiple matches or none are found. Ensure
this replacement is applied to every place where those keys are set (the blocks
around the shown ranges 187–191, 199–203, 211–215, 223–227, 235–239, 250–251,
and 278–314) so custom version variables actually work.
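Dynamic discovery with explicit errors for zero or multiple matches might look like this sketch (the launcher jar created below is only a demo fixture):

```python
import tempfile
from pathlib import Path


def find_unique(root: Path, pattern: str) -> Path:
    """Locate exactly one file matching a glob pattern under an extracted bundle."""
    matches = sorted(root.rglob(pattern))
    if not matches:
        raise FileNotFoundError(f"no file matching {pattern!r} under {root}")
    if len(matches) > 1:
        raise RuntimeError(f"ambiguous matches for {pattern!r}: {matches}")
    return matches[0]


# Simulate an extracted JDT LS bundle and discover the launcher jar by pattern.
root = Path(tempfile.mkdtemp())
(root / "plugins").mkdir()
(root / "plugins" / "org.eclipse.equinox.launcher_1.6.900.jar").touch()
launcher = find_unique(root, "org.eclipse.equinox.launcher_*.jar")
```

The same helper would serve the other manifest entries (`lombok-*.jar`, `com.microsoft.jdtls.intellicode.core-*.jar`, `bin/java` under the JRE extract), so overriding a version cannot break a hard-coded filename.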
In `@src/solidlsp/language_servers/intelephense.py`:
- Around line 62-64: The code pulls untyped values via self._custom_settings.get
into intelephense_version and npm_registry and passes them to
build_npm_install_command which expects str/Optional[str]; add explicit runtime
type validation and safe coercion before constructing the npm command: read
values with self._custom_settings.get("intelephense_version") and
self._custom_settings.get("npm_registry"), check isinstance(..., str) (or None
for npm_registry), raise or fallback to the default "1.14.4" for
intelephense_version and set npm_registry to None if not a string, then call
build_npm_install_command(intelephense_version, npm_registry). Ensure
validations are applied at both occurrences referenced in this file so mypy-safe
types are passed into build_npm_install_command.
In `@src/solidlsp/language_servers/matlab_language_server.py`:
- Around line 123-129: The code currently passes expected_sha256=None when url
!= MATLAB_EXTENSION_URL which disables integrity checks; change to fail-closed
by validating before calling FileUtils.download_and_extract_archive_verified: if
url != MATLAB_EXTENSION_URL and MATLAB_EXTENSION_SHA256 is not set (or is None),
raise an error (e.g., ValueError/RuntimeError) explaining a missing checksum for
non-default MATLAB extension versions; otherwise pass the provided
MATLAB_EXTENSION_SHA256 into expected_sha256. Reference
FileUtils.download_and_extract_archive_verified, url, MATLAB_EXTENSION_URL, and
MATLAB_EXTENSION_SHA256 to locate and update the logic.
In `@src/solidlsp/language_servers/vts_language_server.py`:
- Around line 75-77: The current install logic only checks for the existence of
the vts language server directory and ignores changes to vtsls_version; update
the install routine so it reads the desired vtsls_version from
vts_config.vtsls_version and compares it against the currently installed version
(e.g., by reading package.json or the installed package metadata in the vts
server directory) and if they differ, remove or replace the existing install and
reinstall the requested version using npm_registry; ensure this check is used
instead of a simple directory.exists check in the code that performs
installation/upgrade so changing vtsls_version triggers reinstall/switch.
In `@test/serena/test_serena_agent.py`:
- Around line 1070-1084: The test currently overwrites the reference
expectations when constructing the DiagnosticCase for
_assert_diagnostics_for_file; restore the reference fields so the assertion
checks both primary and reference diagnostics separately by using
diagnostic_case.reference_symbol_name_path for reference_symbol_name_path,
diagnostic_case.reference_symbol_identifier for reference_symbol_identifier, and
diagnostic_case.reference_message_fragment for reference_message_fragment (keep
the primary_* fields sourced from diagnostic_case as before); update the
DiagnosticCase construction in test_serena_agent.py around the call to
_assert_diagnostics_for_file so it no longer assigns primary values to the
reference_* fields.
---
Outside diff comments:
In `@src/solidlsp/language_servers/fsharp_language_server.py`:
- Around line 75-84: Remove the dead
RuntimeDependencyCollection/RuntimeDependency instantiation for "fsautocomplete"
(the RuntimeDependencyCollection(...) and nested RuntimeDependency(...)) since
it's never assigned or used; delete that block and keep the actual installation
path via subprocess.run(...) that performs the dotnet tool install for
fsautocomplete, ensuring no other references to RuntimeDependencyCollection or
RuntimeDependency remain in fsharp_language_server.py.
---
Duplicate comments:
In `@docs/02-usage/070_security.md`:
- Around line 28-41: Update the "Pinned versions by default" bullet and the
following paragraph (the block starting "In practice, this means that a
downloaded artifact must match all of the following:") to clarify that the
SHA256 integrity guarantee only applies to Serena's bundled/pinned defaults;
explicitly state that when users override bundled dependency versions (where the
code passes expected_sha256=None), checksum verification is not enforced and
custom overrides weaken integrity guarantees. Adjust the list language to say
these checks apply to Serena's pinned defaults, and add one concise sentence
calling out that custom version overrides may skip SHA256 checks and thus reduce
security.
In `@src/serena/tools/tools_base.py`:
- Around line 34-43: DiagnosticIdentity currently embeds exact ranges so
_format_lsp_edit_result_with_new_diagnostics() treats shifted diagnostics as
new; change the comparison identity to stable fields only (e.g., message,
severity, code_repr, source) and remove
start_line/start_character/end_line/end_character from DiagnosticIdentity or add
a new StableDiagnosticIdentity used for diffing, then keep the original range
fields in a separate display object (e.g., DiagnosticDisplay or original
DiagnosticIdentityRange) so _format_lsp_edit_result_with_new_diagnostics() uses
the stable identity for equality/sets while still rendering ranges from the
separate display object.
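The stable-identity diffing described above can be sketched like this (the class and field names are suggestions, not the existing API):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class StableDiagnosticIdentity:
    """Range-free identity so diagnostics merely shifted by an edit don't count as new."""

    message: str
    severity: int
    code: str
    source: str


def newly_introduced(
    before: set[StableDiagnosticIdentity], after: set[StableDiagnosticIdentity]
) -> set[StableDiagnosticIdentity]:
    # Set difference on the stable identity; display ranges live elsewhere.
    return after - before


pre = {StableDiagnosticIdentity("unused import", 2, "F401", "ruff")}
post = {
    StableDiagnosticIdentity("unused import", 2, "F401", "ruff"),  # same finding, possibly shifted
    StableDiagnosticIdentity("undefined name 'x'", 1, "F821", "ruff"),
}
new = newly_introduced(pre, post)
```

Because the identity omits line/character positions, an edit that pushes an existing warning down a few lines produces an empty diff, while a genuinely new finding still surfaces.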
In `@src/solidlsp/language_servers/omnisharp.py`:
- Around line 161-176: The current patch only rewrites the dependency URLs so
multiple omnisharp versions clash on disk; in the OmniSharp and RazorOmnisharp
handling inside omnisharp.py (the loop that checks dependency["id"] ==
"OmniSharp" and "RazorOmnisharp"), update how install paths are generated: where
you currently replace DEFAULT_OMNISHARP_VERSION or
DEFAULT_RAZOR_OMNISHARP_VERSION in dependency["installPath"] and
dependency["installTestPath"], ensure the directory component that is the plain
"OmniSharp" or "RazorOmnisharp" is also made version-scoped (for example include
omnisharp_version or razor_omnisharp_version in the path name) and if those keys
are missing, set them to a versioned default; apply the same change in the
duplicate block later (lines ~210-234) and keep the integrity nulling behavior
when a non-default version is used.
In `@src/solidlsp/language_servers/ty_server.py`:
- Around line 4-7: The docs and fallback command in ty_server.py incorrectly
reference the `uv x` invocation; update any usage and comments that mention `uv
x` (including the ls_specific_settings["python_ty"] doc string and the server
start logic that constructs the command) to use the official `uv tool run` form
(or the documented `uvx` alias) so the language server is started as `ty server`
via `uv tool run ty server`; ensure both the textual help entries and the code
path that falls back to `uv x` are changed to `uv tool run` (or `uvx`) to match
the official docs.
In `@src/solidlsp/ls_utils.py`:
- Around line 421-425: The current extraction loop only validates
tar_member.name but then calls tar_ref.extractall(), which still allows
symlinks/hardlinks/FIFOs/devices to escape target_path; update the tar
extraction in the tarfile.open block to iterate members and perform strict
per-member checks: use FileUtils._validate_extraction_path(tar_member.name,
target_path) for each member, allow only regular files and directories (e.g.,
check tar_member.isreg() / tar_member.isdir()), explicitly reject symlinks,
hardlinks, FIFOs and device entries, and for regular files extract via
tar_ref.extractfile() + safe write to disk and recreate directories as needed
instead of calling tar_ref.extractall(). Ensure the code references the existing
FileUtils._validate_extraction_path and the tarfile.open context where tar_ref
is used.
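A self-contained sketch of such a strict per-member extraction; for brevity the escape check here uses `Path.is_relative_to` in place of the existing `FileUtils._validate_extraction_path`:

```python
import io
import tarfile
import tempfile
from pathlib import Path


def safe_extract_tar(tar_ref: tarfile.TarFile, target: Path) -> list[str]:
    """Extract only regular files and directories, refusing path escapes and special members."""
    extracted = []
    target = target.resolve()
    for member in tar_ref.getmembers():
        dest = (target / member.name).resolve()
        if not dest.is_relative_to(target):
            raise ValueError(f"path escape blocked: {member.name}")
        if member.isdir():
            dest.mkdir(parents=True, exist_ok=True)
        elif member.isreg():
            dest.parent.mkdir(parents=True, exist_ok=True)
            src = tar_ref.extractfile(member)
            assert src is not None
            dest.write_bytes(src.read())
        else:
            # Symlinks, hardlinks, FIFOs and devices are rejected outright.
            raise ValueError(f"refusing special member: {member.name}")
        extracted.append(member.name)
    return extracted


# Build a small in-memory archive and extract it safely.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    data = b"print('hi')\n"
    info = tarfile.TarInfo("pkg/main.py")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

buf.seek(0)
target = Path(tempfile.mkdtemp())
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    names = safe_extract_tar(tar, target)
```

Unlike `extractall()`, nothing touches the filesystem until every member has passed both the path check and the type check for that member.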
- Around line 240-247: requests.get currently follows redirects before you
validate the final response URL; change the logic in the download flow around
the requests.get call to disable automatic redirects (use
allow_redirects=False), then implement a loop that inspects each redirect hop by
reading the Location header on 3xx responses, call
FileUtils._validate_download_host() on each Location target (and on the initial
URL) before issuing the next requests.get, and only follow to the next hop if
validation passes; if a hop is disallowed, raise SolidLSPException. Keep using
the same timeout/stream parameters for subsequent requests and preserve the
existing status_code/response handling once a non-redirect 200 response is
obtained.
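The per-hop host check at the core of that loop can be sketched with the standard library; the allowlist below is illustrative, and in the real flow this check would run on the initial URL and on every redirect `Location` before the next `requests.get`:

```python
from urllib.parse import urlsplit

ALLOWED_HOSTS = {"github.com", "objects.githubusercontent.com"}  # illustrative allowlist


def validate_download_host(url: str) -> None:
    """Reject URLs whose scheme or host falls outside the allowlist."""
    parts = urlsplit(url)
    if parts.scheme != "https":
        raise ValueError(f"insecure scheme in {url!r}")
    if parts.hostname not in ALLOWED_HOSTS:
        raise ValueError(f"host {parts.hostname!r} is not allowlisted")


validate_download_host("https://github.com/org/repo/releases/download/v1/ls.tar.gz")
try:
    validate_download_host("https://evil.example.com/ls.tar.gz")
    rejected = False
except ValueError:
    rejected = True
```

With `allow_redirects=False`, each 3xx response exposes its `Location` header, which gets passed through this validator before the loop issues the next request.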
In `@src/solidlsp/ls.py`:
- Around line 1337-1346: The branch that revives cached diagnostics should only
run when the server did not respond, not when the server returned an empty list;
change the condition from "if not ret:" to check the raw RPC response (e.g. "if
response is None:") so you only call _wait_for_published_diagnostics(uri,
after_generation=diagnostics_before_request, timeout=... ) and possibly
_get_cached_published_diagnostics(uri) when there was no response; reference the
local names ret, response, _wait_for_published_diagnostics,
_get_cached_published_diagnostics, uri, diagnostics_before_request, and
pull_diagnostics_failed and ensure the rest of the logic (setting
published_diagnostics and assigning ret) remains the same.
- Around line 1320-1335: request_text_document_diagnostics() assumes every
diagnostic has "severity" and "code" and directly indexes
item["severity"]/item["code"], which can KeyError for valid LSP responses;
update the code that builds new_item from response["items"] to mirror
_store_published_diagnostics() by reading optional fields defensively (use
item.get("severity") and item.get("code") or only set "severity"/"code" keys
when present), and only attach "source", "severity", and "code" to new_item if
they exist before constructing ls_types.Diagnostic to avoid KeyError on optional
fields.
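A sketch of the defensive construction, simplified to plain dicts rather than `ls_types.Diagnostic`:

```python
def to_diagnostic(item: dict) -> dict:
    """Copy only the optional LSP fields that are actually present."""
    diagnostic = {"range": item["range"], "message": item["message"]}
    for key in ("severity", "code", "source"):
        if key in item:  # .get-style access; never index optional keys directly
            diagnostic[key] = item[key]
    return diagnostic


minimal = to_diagnostic({"range": {}, "message": "boom"})
full = to_diagnostic({"range": {}, "message": "boom", "severity": 1, "code": "E1"})
```

Per the LSP specification, only `range` and `message` are mandatory on a diagnostic, so a server omitting `severity` or `code` is still well-formed and must not crash the conversion.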
In `@test/diagnostics_cases.py`:
- Around line 24-29: The helper function diagnostic_case_param currently defines
a keyword-only parameter named id which shadows the built-in id and triggers
Ruff A002; rename the parameter to something like case_id (update the signature
of diagnostic_case_param to use case_id: str), then update the pytest.param call
to pass id=case_id and update all call sites that pass id=... to use case_id;
keep the rest of the signature (case, *marks) and typing intact so mypy and ruff
remain satisfied.
In `@test/serena/test_serena_agent.py`:
- Around line 1320-1349: Both tests for SafeDeleteSymbol need snapshot coverage:
update test_safe_delete_symbol_blocked_by_references and
test_safe_delete_symbol_succeeds_when_no_references to assert the structured
tool output with the snapshot fixture (pin the full tool result returned by
SafeDeleteSymbol.apply) and, for the success case, also snapshot the post-edit
file content from read_project_file; mirror the pattern used in
test/serena/test_symbol_editing.py (use the snapshot fixture to save result and
file_content) while keeping the existing substring/assert checks (referencing
SafeDeleteSymbol, SUCCESS_RESULT, read_project_file, and the two test function
names to locate the code).
- Around line 1326-1333: The test currently only checks SafeDeleteSymbol.apply's
return value but not whether it mutated the file under test; inside the with
project_file_modification_context(serena_agent, case.relative_path) block, read
and store the original file contents (via serena_agent or its filesystem helper)
before calling safe_delete_tool = serena_agent.get_tool(SafeDeleteSymbol) and
safe_delete_tool.apply(...), then after calling apply assert the file contents
are identical to the stored original (ensuring non-mutating behavior) in
addition to the existing assertions about the returned message.
- Around line 45-50: The parameter name id in BaseCase.to_pytest_param shadows
the builtin id and triggers Ruff A002; rename it to a nonbuiltin name (e.g.,
case_id or param_id) in the method signature of BaseCase.to_pytest_param and
update all call sites to pass the new name, keeping the body unchanged (still
call pytest.param(self.language, self,
marks=[*get_pytest_markers(self.language), *marks], id=case_id/param_id) or pass
positionally) and ensure imports/reference to get_pytest_markers and
pytest.param remain correct.
---
Nitpick comments:
In `@src/solidlsp/language_servers/groovy_language_server.py`:
- Around line 95-97: Update the docstring for the Groovy language server (around
where groovy_settings, vscode_java_version and vscode_java_tag are defined and
where solidlsp_settings.get_ls_specific_settings(Language.GROOVY) is used) to
state the expected vscode_java_version format (e.g., "MAJOR.MINOR.PATCH-BUILD"
like "1.42.0-561") and add a brief note that versions without a hyphen will be
interpreted as tag "v{version}"; alternatively, implement validation logic to
check for the hyphenated build suffix and raise a clear error or fallback
behavior if absent so vscode_java_tag computation remains correct.
In `@src/solidlsp/language_servers/kotlin_language_server.py`:
- Around line 122-124: The code currently sets expected_sha256 to None for
non-default kotlin_lsp_version, skipping checksum checks; update the logic in
kotlin_language_server.py so that if a user-provided override (e.g.,
kotlin_lsp_sha256) is present it is used as expected_sha256, otherwise fall back
to KOTLIN_LSP_SHA256_BY_SUFFIX[kotlin_suffix] when kotlin_lsp_version ==
DEFAULT_KOTLIN_LSP_VERSION; ensure the variable expected_sha256 is passed into
the existing integrity verification call (the same code path that uses
expected_sha256) so custom pinned versions still get checksum verification while
preserving current behavior when no override or mapping exists.
In `@src/solidlsp/ls_process.py`:
- Around line 571-579: The iteration over self._notification_observers can race
with concurrent appends from on_any_notification; to fix it, make the iteration
safe by either protecting the list with a lock (add a threading.Lock or
asyncio.Lock used by on_any_notification when appending and by the notification
dispatcher when iterating) or simply snapshot the list before iterating (e.g.,
observers = list(self._notification_observers) and iterate over that). Update
the registration path (on_any_notification) to use the same lock if you choose
locking, and change the loop that currently iterates
self._notification_observers to use the snapshot or the lock so you won't get
"list changed size during iteration" or miss newly added observers.
In `@test/solidlsp/lua/test_lua_basic.py`:
- Around line 85-110: Replace the enclosing if-block check with pytest.skipif
decorators so tests skip when Lua implementations are not available;
specifically, remove the surrounding if
language_has_verified_implementation_support(Language.LUA) and add
`@pytest.mark.skipif`(not
language_has_verified_implementation_support(Language.LUA), reason="Lua
implementation support not available") above the test functions
test_find_implementations and test_request_implementing_symbols (they are the
functions that call language_server.request_implementation and
language_server.request_implementing_symbols) to match the pytest style used in
other modules.
In `@test/solidlsp/solidity/test_solidity_basic.py`:
- Around line 168-193: The conditional at class/body level using if
language_has_verified_implementation_support(Language.SOLIDITY) should be
replaced with pytest.mark.skipif on the test functions so they are collected and
show as skipped with a reason; update the decorators for
test_find_implementations and test_request_implementing_symbols to use
`@pytest.mark.skipif`(not
language_has_verified_implementation_support(Language.SOLIDITY),
reason="Solidity LS does not have verified implementation support") while
keeping the existing `@pytest.mark.parametrize` decorators and signatures
(language_server: SolidLanguageServer, repo_path: Path) so behavior is unchanged
but tests are visible in pytest collection.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: fe81fff9-3f14-4a70-a0f1-7e1a8357a48c
⛔ Files ignored due to path filters (1)
test/resources/repos/kotlin/test_repo/gradle/wrapper/gradle-wrapper.jar is excluded by !**/*.jar
📒 Files selected for processing (124)
- .serena/project.yml
- docs/02-usage/001_features.md
- docs/02-usage/050_configuration.md
- docs/02-usage/070_security.md
- pyproject.toml
- scripts/demo_diagnostics.py
- scripts/demo_find_defining_symbol.py
- scripts/demo_find_implementing_symbol.py
- scripts/demo_run_tools.py
- src/serena/code_editor.py
- src/serena/jetbrains/jetbrains_plugin_client.py
- src/serena/jetbrains/jetbrains_types.py
- src/serena/resources/config/internal_modes/jetbrains.yml
- src/serena/resources/project.template.yml
- src/serena/symbol.py
- src/serena/tools/file_tools.py
- src/serena/tools/jetbrains_tools.py
- src/serena/tools/symbol_tools.py
- src/serena/tools/tools_base.py
- src/solidlsp/language_servers/al_language_server.py
- src/solidlsp/language_servers/ansible_language_server.py
- src/solidlsp/language_servers/bash_language_server.py
- src/solidlsp/language_servers/clangd_language_server.py
- src/solidlsp/language_servers/clojure_lsp.py
- src/solidlsp/language_servers/common.py
- src/solidlsp/language_servers/csharp_language_server.py
- src/solidlsp/language_servers/dart_language_server.py
- src/solidlsp/language_servers/eclipse_jdtls.py
- src/solidlsp/language_servers/elixir_tools/elixir_tools.py
- src/solidlsp/language_servers/elm_language_server.py
- src/solidlsp/language_servers/fsharp_language_server.py
- src/solidlsp/language_servers/gopls.py
- src/solidlsp/language_servers/groovy_language_server.py
- src/solidlsp/language_servers/hlsl_language_server.py
- src/solidlsp/language_servers/intelephense.py
- src/solidlsp/language_servers/kotlin_language_server.py
- src/solidlsp/language_servers/lua_ls.py
- src/solidlsp/language_servers/luau_lsp.py
- src/solidlsp/language_servers/marksman.py
- src/solidlsp/language_servers/matlab_language_server.py
- src/solidlsp/language_servers/omnisharp.py
- src/solidlsp/language_servers/omnisharp/runtime_dependencies.json
- src/solidlsp/language_servers/pascal_server.py
- src/solidlsp/language_servers/phpactor.py
- src/solidlsp/language_servers/powershell_language_server.py
- src/solidlsp/language_servers/ruby_lsp.py
- src/solidlsp/language_servers/rust_analyzer.py
- src/solidlsp/language_servers/solidity_language_server.py
- src/solidlsp/language_servers/systemverilog_server.py
- src/solidlsp/language_servers/taplo_server.py
- src/solidlsp/language_servers/terraform_ls.py
- src/solidlsp/language_servers/ty_server.py
- src/solidlsp/language_servers/typescript_language_server.py
- src/solidlsp/language_servers/vts_language_server.py
- src/solidlsp/language_servers/vue_language_server.py
- src/solidlsp/language_servers/yaml_language_server.py
- src/solidlsp/ls.py
- src/solidlsp/ls_config.py
- src/solidlsp/ls_process.py
- src/solidlsp/ls_types.py
- src/solidlsp/ls_utils.py
- test/conftest.py
- test/diagnostics_cases.py
- test/resources/repos/clojure/test_repo/src/test_app/diagnostics_sample.clj
- test/resources/repos/cpp/test_repo/compile_commands.json
- test/resources/repos/cpp/test_repo/diagnostics_sample.cpp
- test/resources/repos/csharp/test_repo/DiagnosticsSample.cs
- test/resources/repos/csharp/test_repo/Program.cs
- test/resources/repos/csharp/test_repo/Services/ConsoleGreeter.cs
- test/resources/repos/csharp/test_repo/Services/IGreeter.cs
- test/resources/repos/fsharp/test_repo/DiagnosticsSample.fs
- test/resources/repos/fsharp/test_repo/Formatter.fs
- test/resources/repos/fsharp/test_repo/Program.fs
- test/resources/repos/fsharp/test_repo/Shapes.fs
- test/resources/repos/fsharp/test_repo/TestProject.fsproj
- test/resources/repos/go/test_repo/diagnostics_sample.go
- test/resources/repos/go/test_repo/main.go
- test/resources/repos/java/test_repo/src/main/java/test_repo/ConsoleGreeter.java
- test/resources/repos/java/test_repo/src/main/java/test_repo/DiagnosticsSample.java
- test/resources/repos/java/test_repo/src/main/java/test_repo/Greeter.java
- test/resources/repos/java/test_repo/src/main/java/test_repo/Main.java
- test/resources/repos/kotlin/test_repo/gradle/wrapper/gradle-wrapper.properties
- test/resources/repos/kotlin/test_repo/gradlew
- test/resources/repos/kotlin/test_repo/gradlew.bat
- test/resources/repos/kotlin/test_repo/src/main/kotlin/test_repo/DiagnosticsSample.kt
- test/resources/repos/lean4/test_repo/DiagnosticsSample.lean
- test/resources/repos/lua/test_repo/main.lua
- test/resources/repos/lua/test_repo/src/animals.lua
- test/resources/repos/php/test_repo/diagnostics_sample.php
- test/resources/repos/powershell/test_repo/diagnostics_sample.ps1
- test/resources/repos/python/test_repo/test_repo/diagnostics_sample.py
- test/resources/repos/ruby/test_repo/lib.rb
- test/resources/repos/ruby/test_repo/main.rb
- test/resources/repos/rust/test_repo/src/diagnostics_sample.rs
- test/resources/repos/rust/test_repo/src/lib.rs
- test/resources/repos/rust/test_repo/src/main.rs
- test/resources/repos/typescript/test_repo/.serena/project.yml
- test/resources/repos/typescript/test_repo/diagnostics_sample.ts
- test/resources/repos/typescript/test_repo/formatters.ts
- test/resources/repos/typescript/test_repo/index.ts
- test/serena/__snapshots__/test_symbol_editing.ambr
- test/serena/test_serena_agent.py
- test/serena/test_symbol.py
- test/solidlsp/clojure/__init__.py
- test/solidlsp/csharp/test_csharp_basic.py
- test/solidlsp/csharp/test_csharp_nuget_download.py
- test/solidlsp/dart/test_dart_basic.py
- test/solidlsp/fortran/test_fortran_basic.py
- test/solidlsp/fsharp/test_fsharp_basic.py
- test/solidlsp/go/test_go_basic.py
- test/solidlsp/java/test_java_basic.py
- test/solidlsp/lua/test_lua_basic.py
- test/solidlsp/luau/test_luau_dependency_provider.py
- test/solidlsp/python/test_python_basic.py
- test/solidlsp/python/test_retrieval_with_ignored_dirs.py
- test/solidlsp/python/test_symbol_retrieval.py
- test/solidlsp/ruby/test_ruby_symbol_retrieval.py
- test/solidlsp/rust/test_rust_basic.py
- test/solidlsp/solidity/test_solidity_basic.py
- test/solidlsp/test_defining_symbol_matrix.py
- test/solidlsp/test_diagnostics_matrix.py
- test/solidlsp/test_ls_common.py
- test/solidlsp/typescript/test_typescript_basic.py
- test/solidlsp/util/test_ls_utils.py
✅ Files skipped from review due to trivial changes (31)
- test/resources/repos/java/test_repo/src/main/java/test_repo/Greeter.java
- test/resources/repos/csharp/test_repo/Services/ConsoleGreeter.cs
- src/serena/resources/project.template.yml
- pyproject.toml
- src/solidlsp/language_servers/yaml_language_server.py
- test/solidlsp/dart/test_dart_basic.py
- test/resources/repos/cpp/test_repo/compile_commands.json
- test/resources/repos/kotlin/test_repo/src/main/kotlin/test_repo/DiagnosticsSample.kt
- test/resources/repos/java/test_repo/src/main/java/test_repo/ConsoleGreeter.java
- test/resources/repos/csharp/test_repo/Services/IGreeter.cs
- test/resources/repos/fsharp/test_repo/TestProject.fsproj
- test/resources/repos/fsharp/test_repo/Shapes.fs
- test/resources/repos/kotlin/test_repo/gradle/wrapper/gradle-wrapper.properties
- test/resources/repos/typescript/test_repo/formatters.ts
- test/resources/repos/ruby/test_repo/lib.rb
- test/resources/repos/go/test_repo/diagnostics_sample.go
- test/resources/repos/lua/test_repo/src/animals.lua
- test/solidlsp/ruby/test_ruby_symbol_retrieval.py
- src/serena/jetbrains/jetbrains_types.py
- test/resources/repos/fsharp/test_repo/DiagnosticsSample.fs
- scripts/demo_find_implementing_symbol.py
- test/resources/repos/fsharp/test_repo/Formatter.fs
- test/solidlsp/util/test_ls_utils.py
- test/resources/repos/kotlin/test_repo/gradlew
- test/solidlsp/csharp/test_csharp_basic.py
- test/solidlsp/typescript/test_typescript_basic.py
- src/solidlsp/language_servers/clojure_lsp.py
- src/solidlsp/language_servers/csharp_language_server.py
- src/solidlsp/language_servers/common.py
- src/solidlsp/language_servers/omnisharp/runtime_dependencies.json
- test/solidlsp/python/test_retrieval_with_ignored_dirs.py
🚧 Files skipped from review as they are similar to previous changes (43)
- test/resources/repos/java/test_repo/src/main/java/test_repo/Main.java
- test/resources/repos/rust/test_repo/src/main.rs
- src/solidlsp/language_servers/solidity_language_server.py
- test/resources/repos/typescript/test_repo/index.ts
- .serena/project.yml
- test/resources/repos/ruby/test_repo/main.rb
- test/resources/repos/java/test_repo/src/main/java/test_repo/DiagnosticsSample.java
- test/resources/repos/csharp/test_repo/Program.cs
- test/resources/repos/powershell/test_repo/diagnostics_sample.ps1
- src/solidlsp/language_servers/typescript_language_server.py
- test/solidlsp/test_ls_common.py
- test/solidlsp/luau/test_luau_dependency_provider.py
- test/resources/repos/typescript/test_repo/diagnostics_sample.ts
- test/resources/repos/clojure/test_repo/src/test_app/diagnostics_sample.clj
- src/solidlsp/language_servers/elm_language_server.py
- src/solidlsp/language_servers/bash_language_server.py
- test/resources/repos/csharp/test_repo/DiagnosticsSample.cs
- src/serena/resources/config/internal_modes/jetbrains.yml
- test/resources/repos/rust/test_repo/src/diagnostics_sample.rs
- src/solidlsp/language_servers/vue_language_server.py
- src/solidlsp/language_servers/systemverilog_server.py
- test/resources/repos/go/test_repo/main.go
- test/solidlsp/csharp/test_csharp_nuget_download.py
- src/solidlsp/language_servers/clangd_language_server.py
- src/solidlsp/language_servers/powershell_language_server.py
- test/solidlsp/fsharp/test_fsharp_basic.py
- test/solidlsp/clojure/__init__.py
- test/solidlsp/java/test_java_basic.py
- test/resources/repos/rust/test_repo/src/lib.rs
- src/solidlsp/language_servers/pascal_server.py
- src/solidlsp/language_servers/ansible_language_server.py
- scripts/demo_diagnostics.py
- src/serena/code_editor.py
- test/resources/repos/lean4/test_repo/DiagnosticsSample.lean
- scripts/demo_find_defining_symbol.py
- src/solidlsp/language_servers/elixir_tools/elixir_tools.py
- scripts/demo_run_tools.py
- src/solidlsp/language_servers/lua_ls.py
- docs/02-usage/050_configuration.md
- src/solidlsp/language_servers/dart_language_server.py
- test/serena/__snapshots__/test_symbol_editing.ambr
- test/resources/repos/typescript/test_repo/.serena/project.yml
- src/serena/tools/jetbrains_tools.py
| def __post_init__(self) -> None: | ||
| if self.relative_path is not None: | ||
| self.relative_path = self.relative_path.replace("/", os.path.sep) | ||
| self.relative_path = self.relative_path.replace("\\", "/") |
Normalize the path in _symbol_identity too.
LanguageServerSymbolLocation.__post_init__ now canonicalizes separators, but _symbol_identity still keys on symbol.relative_path directly. If one LSP call returns foo\\bar.py and another returns foo/bar.py, get_symbol_diagnostics_by_location will keep both entries and duplicate diagnostics for the same symbol.
Minimal fix
`@staticmethod`
def _symbol_identity(symbol: LanguageServerSymbol) -> tuple[str | None, int | None, int | None, str]:
- return (symbol.relative_path, symbol.line, symbol.column, symbol.get_name_path())
+ return (symbol.location.relative_path, symbol.line, symbol.column, symbol.get_name_path())
Also applies to: 955-956, 1066-1067
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/serena/symbol.py` around lines 47 - 49,
LanguageServerSymbolLocation.__post_init__ now normalizes path separators, but
_symbol_identity still uses symbol.relative_path raw so equivalent paths like
"foo\\bar.py" and "foo/bar.py" produce duplicate keys; update _symbol_identity
to canonicalize the relative_path the same way (e.g., if symbol.relative_path is
not None, use symbol.relative_path.replace("\\", "/")) before building the
identity key, and apply the same normalization in any other
identity/key-building helpers used by get_symbol_diagnostics_by_location to
ensure consistent keys across calls.
| # capturing diagnostics before the edit | ||
| code_editor = self.create_code_editor() | ||
| diagnostics_snapshot = self._capture_published_lsp_diagnostics_snapshot([EditedFilePath(relative_path, relative_path)]) | ||
|
|
||
| # applying the rename | ||
| status_message = code_editor.rename_symbol(name_path, relative_file_path=relative_path, new_name=new_name) | ||
| return status_message | ||
|
|
||
| return self._format_lsp_edit_result_with_new_diagnostics( | ||
| status_message, code_editor.get_last_edited_file_paths(), diagnostics_snapshot | ||
| ) |
Capture the pre-edit baseline for every file a rename will touch.
The snapshot only covers relative_path, but rename_symbol() can edit many files. Every additional path returned by code_editor.get_last_edited_file_paths() is compared against an empty baseline, so existing warnings in those files are reported as newly introduced. Preview the rename edit first, or otherwise gather the full edited-path set before taking the diagnostics snapshot.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/serena/tools/symbol_tools.py` around lines 801 - 810, The diagnostics
snapshot is only taken for relative_path but rename_symbol may touch many files;
first preview the rename to get the full set of edited paths, then capture
diagnostics for all those files before applying the edit. Concretely: call a
preview API on the editor (e.g., code_editor.preview_rename or equivalent) to
obtain the full edited file list from the result, pass that list into
_capture_published_lsp_diagnostics_snapshot (replace the single
[EditedFilePath(relative_path, ...)]), then call code_editor.rename_symbol to
apply the change and finally call _format_lsp_edit_result_with_new_diagnostics
with the original diagnostics snapshot and
code_editor.get_last_edited_file_paths().
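The failure mode described above is easy to see with a toy baseline/diff, sketched here under assumed data shapes (plain LSP-style diagnostic dicts); none of these helpers are Serena's real API.

```python
# Sketch: capture a per-file set of diagnostic identities before the edit,
# then report only diagnostics absent from that baseline afterwards.

def diagnostic_identity(diag: dict) -> tuple:
    """Stable identity for a diagnostic: code + message + start position."""
    start = diag["range"]["start"]
    return (diag.get("code"), diag["message"], start["line"], start["character"])

def newly_introduced(before: dict[str, list[dict]], after: dict[str, list[dict]]) -> dict[str, list[dict]]:
    """Diagnostics present after the edit but not in the pre-edit baseline.

    Files missing from `before` get an empty baseline — exactly the bug the
    review flags: every pre-existing warning there looks newly introduced.
    """
    result: dict[str, list[dict]] = {}
    for path, diags in after.items():
        baseline = {diagnostic_identity(d) for d in before.get(path, [])}
        fresh = [d for d in diags if diagnostic_identity(d) not in baseline]
        if fresh:
            result[path] = fresh
    return result
```

This is why the baseline must cover every file the rename can touch: a file absent from `before` reports all of its old warnings as new.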
| FileUtils.download_and_extract_archive_verified( | ||
| url, | ||
| target_dir, | ||
| "zip", | ||
| expected_sha256=AL_EXTENSION_SHA256 if url == AL_EXTENSION_URL else None, | ||
| allowed_hosts=AL_EXTENSION_ALLOWED_HOSTS, | ||
| ) |
SHA256 verification bypassed for non-default AL extension versions.
Same pattern as other language servers: when url != AL_EXTENSION_URL, integrity verification is skipped. This allows downloading unverified VS Code extensions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/solidlsp/language_servers/al_language_server.py` around lines 157 - 163,
The call to FileUtils.download_and_extract_archive_verified currently skips
integrity checks when url != AL_EXTENSION_URL; change it to always pass a
non-None expected_sha256 by looking up the hash for the requested URL (e.g.,
replace the conditional with a lookup in a mapping like
AL_EXTENSION_SHA256S[url] or raise if missing) so
FileUtils.download_and_extract_archive_verified is always given an
expected_sha256; update references around AL_EXTENSION_URL, AL_EXTENSION_SHA256
and AL_EXTENSION_ALLOWED_HOSTS and ensure the new mapping is used by the call in
the function where FileUtils.download_and_extract_archive_verified is invoked.
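The fail-closed lookup suggested in the prompt can be sketched generically. The mapping and digests below are placeholders, not real AL extension checksums.

```python
# Sketch: map each supported version to a pinned checksum and refuse to
# download anything that has no pin, instead of silently skipping verification.

KNOWN_SHA256 = {
    "1.0.0": "aaaa0000",  # placeholder digest, not a real checksum
    "1.1.0": "bbbb1111",  # placeholder digest, not a real checksum
}

def required_sha256(version: str) -> str:
    """Return the pinned digest for `version`, or fail closed."""
    try:
        return KNOWN_SHA256[version]
    except KeyError:
        raise ValueError(
            f"No pinned SHA256 for version {version}; refusing unverified download"
        ) from None
```

Passing `expected_sha256=required_sha256(version)` to the verified-download helper guarantees the argument is never `None`.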
| al_extension_dir = os.path.join(cls.ls_resources_dir(solidlsp_settings), "al-extension") | ||
| al_settings = solidlsp_settings.get_ls_specific_settings(Language.AL) | ||
| al_extension_version = al_settings.get("al_extension_version", AL_EXTENSION_VERSION) | ||
| al_extension_url = ( | ||
| "https://marketplace.visualstudio.com/_apis/public/gallery/publishers/ms-dynamics-smb/" | ||
| f"vsextensions/al/{al_extension_version}/vspackage" | ||
| ) | ||
|
|
||
| # AL extension version - using latest stable version | ||
| AL_VERSION = "latest" | ||
| url = f"https://marketplace.visualstudio.com/_apis/public/gallery/publishers/ms-dynamics-smb/vsextensions/al/{AL_VERSION}/vspackage" | ||
|
|
||
| log.info(f"Downloading AL extension from: {url}") | ||
| log.info(f"Downloading AL extension from: {al_extension_url}") | ||
|
|
||
| if cls._download_al_extension(url, al_extension_dir): | ||
| if cls._download_al_extension(al_extension_url, al_extension_dir): |
AL extension version override won't take effect if cached extension exists.
The al_extension_dir path (line 258) doesn't incorporate the version. A previously downloaded extension at ~/.serena/ls_resources/al-extension/extension will be reused regardless of al_extension_version setting changes.
Consider including the version in the cache path or tracking the installed version in a metadata file.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/solidlsp/language_servers/al_language_server.py` around lines 258 - 268,
The cache path for the AL extension does not include the chosen version so
changing al_extension_version has no effect; update the code that builds
al_extension_dir (and any related lookup logic) to incorporate
al_extension_version (or store/read a metadata file) so different versions map
to different directories and _download_al_extension is only skipped when the
directory/version match the requested al_extension_version; modify references to
al_extension_dir, al_extension_version, cls.ls_resources_dir and
_download_al_extension accordingly to implement versioned caching or metadata
validation.
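The marker-file variant mentioned in the prompt can be sketched as two small helpers. Paths and names here are hypothetical, not the AL server's actual layout.

```python
# Sketch: record the installed version next to the cached artifact and
# re-download whenever the requested version differs.
import pathlib
import tempfile

def needs_download(cache_dir: pathlib.Path, requested_version: str) -> bool:
    marker = cache_dir / ".installed_version"
    if not (cache_dir / "extension").exists():
        return True
    if not marker.exists():
        return True  # legacy cache with unknown version: re-download to be safe
    return marker.read_text(encoding="utf-8").strip() != requested_version

def record_version(cache_dir: pathlib.Path, version: str) -> None:
    (cache_dir / ".installed_version").write_text(version, encoding="utf-8")

# quick demo in a throwaway directory
with tempfile.TemporaryDirectory() as tmp:
    cache = pathlib.Path(tmp)
    first = needs_download(cache, "1.0")       # nothing cached yet
    (cache / "extension").mkdir()
    record_version(cache, "1.0")
    cached_ok = needs_download(cache, "1.0")   # same version: reuse cache
    version_changed = needs_download(cache, "2.0")
```

Keying the directory itself by version (e.g. `al-extension/<version>/`) achieves the same effect without a marker file, at the cost of keeping stale versions around.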
| intelephense_version = self._custom_settings.get("intelephense_version", "1.14.4") | ||
| npm_registry = self._custom_settings.get("npm_registry") | ||
|
|
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# 1) Inspect npm command builder + runtime dependency installer internals.
fd 'common.py$' src --exec rg -n -A30 -B10 \
'def build_npm_install_command|class RuntimeDependency|def install\(' {}
# 2) Locate subprocess usage and shell mode in solidlsp.
rg -n -C3 'subprocess\.(run|Popen)|shell\s*=' src/solidlsp -g '*.py'
# 3) Enumerate call sites that pass dynamic values into build_npm_install_command.
rg -n -C2 'build_npm_install_command\(' src/solidlsp -g '*.py'
Repository: oraios/serena
Length of output: 50369
🏁 Script executed:
#!/bin/bash
# Inspect intelephense.py around lines 62-64 and 74
sed -n '55,80p' src/solidlsp/language_servers/intelephense.py
Repository: oraios/serena
Length of output: 1583
🏁 Script executed:
#!/bin/bash
# Verify the _custom_settings type and trace how RuntimeDependency.command is executed
# Check intelephense.py for _custom_settings declaration
sed -n '1,70p' src/solidlsp/language_servers/intelephense.py | head -70
Repository: oraios/serena
Length of output: 3151
🏁 Script executed:
#!/bin/bash
# Check how _run_command handles shell=True and joined command strings
sed -n '100,130p' src/solidlsp/language_servers/common.py
Repository: oraios/serena
Length of output: 1251
🏁 Script executed:
#!/bin/bash
# Check the type of self._custom_settings in the parent class
rg -n "class LanguageServerDependencyProviderSinglePath|_custom_settings.*:" src/solidlsp -A5 | head -50
Repository: oraios/serena
Length of output: 1899
🏁 Script executed:
#!/bin/bash
# Check how _custom_settings is initialized/typed
rg -n "_custom_settings" src/solidlsp/ls.py -B2 -A2
Repository: oraios/serena
Length of output: 950
🏁 Script executed:
#!/bin/bash
# Verify current mypy/ruff configuration in the repo
find . -maxdepth 2 -name "pyproject.toml" -o -name "mypy.ini" -o -name ".ruff.toml" | xargs cat 2>/dev/null | head -100
Repository: oraios/serena
Length of output: 2712
🏁 Script executed:
#!/bin/bash
# Check the CustomLSSettings type definition
rg -n "class CustomLSSettings|CustomLSSettings.*=" src/solidlsp -B2 -A15 | head -80
Repository: oraios/serena
Length of output: 1378
🏁 Script executed:
#!/bin/bash
# Check the actual settings module
find src -name "settings.py" -exec cat {} \; | head -150
Repository: oraios/serena
Length of output: 2779
🏁 Script executed:
#!/bin/bash
# Verify if mypy is configured and what its settings are
find . -name "pyproject.toml" -exec grep -A20 "\[tool.mypy\]" {} \;
Repository: oraios/serena
Length of output: 746
🏁 Script executed:
#!/bin/bash
# Verify how the command is executed - trace from RuntimeDependency.command to subprocess
sed -n '83,99p' src/solidlsp/language_servers/common.py
Repository: oraios/serena
Length of output: 791
🏁 Script executed:
#!/bin/bash
# Check if there are mypy errors in intelephense.py currently
cd src && python -m mypy solidlsp/language_servers/intelephense.py --no-error-summary 2>&1 | grep -A2 "intelephense_version\|npm_registry" || echo "No mypy errors found for these variables"
Repository: oraios/serena
Length of output: 97
🏁 Script executed:
#!/bin/bash
# Check all call sites of build_npm_install_command to see if they have same issue
rg -n "build_npm_install_command\(" src/solidlsp/language_servers -B3 -A1 | grep -E "(\.get\(|build_npm_install_command)" | head -60Repository: oraios/serena
Length of output: 2239
🏁 Script executed:
#!/bin/bash
# Check if there's a type: ignore or Any cast being used
sed -n '55,80p' src/solidlsp/language_servers/intelephense.py | cat -n
Repository: oraios/serena
Length of output: 1765
🏁 Script executed:
#!/bin/bash
# Check the actual CustomLSSettings.get return type annotation
sed -n '46,65p' src/solidlsp/settings.py | cat -n
Repository: oraios/serena
Length of output: 1013
🏁 Script executed:
#!/bin/bash
# Verify what happens when a non-string is passed to build_npm_install_command
python3 << 'EOF'
def build_npm_install_command(package_name: str, version: str, registry: str | None = None) -> list[str]:
    """Build a pinned npm install command for a package in a Serena-managed install directory."""
    command = ["npm", "install", "--prefix", "./"]
    if registry:
        command.extend(["--registry", registry])
    command.append(f"{package_name}@{version}")
    return command

# Simulate the vulnerability - what if version is not a string?
try:
    result = build_npm_install_command("intelephense", 123, None)  # type: ignore
    print("Result with int version:", result)
    # Then on Linux, this gets joined:
    joined = " ".join(result)
    print("After shell join:", joined)
except Exception as e:
    print(f"Error: {e}")

# What about a dict or list?
try:
    result = build_npm_install_command("intelephense", {"version": "1.14.4"}, None)  # type: ignore
    print("Result with dict version:", result)
except Exception as e:
    print(f"Error with dict: {e}")
EOF
Repository: oraios/serena
Length of output: 298
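The shell-join concern probed above can be illustrated without running npm at all. A small sketch of the difference between an argv list and a joined shell string, using only the stdlib (`shlex.quote`):

```python
# Sketch: an argv list passes each argument verbatim to the process, while a
# naively joined string is re-parsed under shell rules, so metacharacters in a
# config-derived value would be interpreted. shlex.quote escapes them.
import shlex

argv = ["npm", "install", "--prefix", "./", "intelephense@1.14.4; rm -rf /"]
unsafe = " ".join(argv)                        # ';' would split commands under a shell
safe = " ".join(shlex.quote(a) for a in argv)  # metacharacters stay inside quotes
print(safe)
```

This is why validating the settings values (and avoiding `shell=True` joins) matters when the version string comes from user configuration.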
Add explicit type validation for config values before npm command construction.
self._custom_settings.get() returns Any (untyped), allowing non-string values to reach build_npm_install_command(). Although the function signature expects str and str | None, the Any return type bypasses mypy's strict type checking. Validate intelephense_version and npm_registry before use to ensure proper command construction and comply with coding guidelines requiring strict typing.
Suggested patch
- intelephense_version = self._custom_settings.get("intelephense_version", "1.14.4")
- npm_registry = self._custom_settings.get("npm_registry")
+ intelephense_version_raw = self._custom_settings.get("intelephense_version", "1.14.4")
+ if not isinstance(intelephense_version_raw, str):
+ raise TypeError("php.intelephense_version must be a string")
+ intelephense_version = intelephense_version_raw
+
+ npm_registry_raw = self._custom_settings.get("npm_registry")
+ if npm_registry_raw is not None and not isinstance(npm_registry_raw, str):
+ raise TypeError("php.npm_registry must be a string when provided")
+ npm_registry = npm_registry_raw
Also applies to: 74-74
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/solidlsp/language_servers/intelephense.py` around lines 62 - 64, The code
pulls untyped values via self._custom_settings.get into intelephense_version and
npm_registry and passes them to build_npm_install_command which expects
str/Optional[str]; add explicit runtime type validation and safe coercion before
constructing the npm command: read values with
self._custom_settings.get("intelephense_version") and
self._custom_settings.get("npm_registry"), check isinstance(..., str) (or None
for npm_registry), raise or fallback to the default "1.14.4" for
intelephense_version and set npm_registry to None if not a string, then call
build_npm_install_command(intelephense_version, npm_registry). Ensure
validations are applied at both occurrences referenced in this file so mypy-safe
types are passed into build_npm_install_command.
| FileUtils.download_and_extract_archive_verified( | ||
| url, | ||
| target_dir, | ||
| "zip", | ||
| expected_sha256=MATLAB_EXTENSION_SHA256 if url == MATLAB_EXTENSION_URL else None, | ||
| allowed_hosts=MATLAB_EXTENSION_ALLOWED_HOSTS, | ||
| ) |
SHA256 verification bypassed for non-default MATLAB extension versions.
When url != MATLAB_EXTENSION_URL, expected_sha256 is None, which skips integrity verification. Consider either:
- Requiring SHA256 for all versions (fail-closed)
- Logging a warning when downloading unverified content
- Documenting this security trade-off explicitly
🛡️ Fail-closed approach
+ if url != MATLAB_EXTENSION_URL:
+     raise ValueError("Custom matlab_extension_version requires a pinned SHA256 checksum")
FileUtils.download_and_extract_archive_verified(
url,
target_dir,
"zip",
- expected_sha256=MATLAB_EXTENSION_SHA256 if url == MATLAB_EXTENSION_URL else None,
+ expected_sha256=MATLAB_EXTENSION_SHA256,
allowed_hosts=MATLAB_EXTENSION_ALLOWED_HOSTS,
)
Either way, fail before the download when a non-default version has no pinned checksum.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/solidlsp/language_servers/matlab_language_server.py` around lines 123 -
129, The code currently passes expected_sha256=None when url !=
MATLAB_EXTENSION_URL which disables integrity checks; change to fail-closed by
validating before calling FileUtils.download_and_extract_archive_verified: if
url != MATLAB_EXTENSION_URL and MATLAB_EXTENSION_SHA256 is not set (or is None),
raise an error (e.g., ValueError/RuntimeError) explaining a missing checksum for
non-default MATLAB extension versions; otherwise pass the provided
MATLAB_EXTENSION_SHA256 into expected_sha256. Reference
FileUtils.download_and_extract_archive_verified, url, MATLAB_EXTENSION_URL, and
MATLAB_EXTENSION_SHA256 to locate and update the logic.
| phpactor_version = self._custom_settings.get("phpactor_version", PHPACTOR_VERSION) | ||
| phpactor_phar_url = f"https://github.com/phpactor/phpactor/releases/download/{phpactor_version}/phpactor.phar" | ||
| # Verify PHP is installed |
phpactor_version override won’t take effect if PHAR already exists.
With the current Line 72 guard, changing phpactor_version later won’t trigger a new download.
💡 Proposed fix
phpactor_phar_path = os.path.join(self._ls_resources_dir, "phpactor.phar")
- if not os.path.exists(phpactor_phar_path):
+ version_file = os.path.join(self._ls_resources_dir, ".phpactor_version")
+ installed_version = pathlib.Path(version_file).read_text(encoding="utf-8").strip() if os.path.exists(version_file) else None
+ needs_download = (not os.path.exists(phpactor_phar_path)) or (installed_version != str(phpactor_version))
+ if needs_download:
os.makedirs(self._ls_resources_dir, exist_ok=True)
log.info(f"Downloading phpactor PHAR from {phpactor_phar_url}")
FileUtils.download_and_extract_archive_verified(
phpactor_phar_url,
phpactor_phar_path,
"binary",
expected_sha256=PHPACTOR_PHAR_SHA256 if phpactor_version == PHPACTOR_VERSION else None,
allowed_hosts=PHPACTOR_ALLOWED_HOSTS,
)
+ pathlib.Path(version_file).write_text(str(phpactor_version), encoding="utf-8")
Also applies to: 72-81
| vts_config = solidlsp_settings.get_ls_specific_settings(Language.TYPESCRIPT_VTS) | ||
| vtsls_version = vts_config.get("vtsls_version", "0.2.9") | ||
| npm_registry = vts_config.get("npm_registry") |
vtsls_version override is ignored after first install.
Because Line 99 only checks directory existence, changing vtsls_version later will not reinstall or switch versions.
💡 Proposed fix
- # Install vtsls if not already installed
- if not os.path.exists(vts_ls_dir):
- os.makedirs(vts_ls_dir, exist_ok=True)
- deps.install(vts_ls_dir)
+ # Install/reinstall vtsls when missing or version changed
+ os.makedirs(vts_ls_dir, exist_ok=True)
+ version_file = os.path.join(vts_ls_dir, ".vtsls_version")
+ vts_executable_path = os.path.join(vts_ls_dir, "node_modules", ".bin", "vtsls")
+ installed_version = pathlib.Path(version_file).read_text(encoding="utf-8").strip() if os.path.exists(version_file) else None
+ needs_install = (not os.path.exists(vts_executable_path)) or (installed_version != str(vtsls_version))
+ if needs_install:
+ shutil.rmtree(os.path.join(vts_ls_dir, "node_modules"), ignore_errors=True)
+ deps.install(vts_ls_dir)
+ pathlib.Path(version_file).write_text(str(vtsls_version), encoding="utf-8")
Also applies to: 99-101
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/solidlsp/language_servers/vts_language_server.py` around lines 75 - 77,
The current install logic only checks for the existence of the vts language
server directory and ignores changes to vtsls_version; update the install
routine so it reads the desired vtsls_version from vts_config.vtsls_version and
compares it against the currently installed version (e.g., by reading
package.json or the installed package metadata in the vts server directory) and
if they differ, remove or replace the existing install and reinstall the
requested version using npm_registry; ensure this check is used instead of a
simple directory.exists check in the code that performs installation/upgrade so
changing vtsls_version triggers reinstall/switch.
Force-pushed from a2f51d1 to cb3b20a
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
test/solidlsp/csharp/test_csharp_nuget_download.py (1)
18-76: ⚠️ Potential issue | 🟡 Minor
These rewritten NuGet tests lost the behavior they were supposed to lock down. `test_download_nuget_package_uses_direct_url` never sets `sha256` or `allowed_hosts`, so it doesn't actually prove those fields are forwarded, and `test_download_method_does_not_call_azure_feed` now passes without asserting anything about the constructed URL at all. Please seed sentinel security values and assert the mocked helper saw the expected NuGet URL plus those metadata fields.
test_download_nuget_package_uses_direct_urlnever setssha256orallowed_hosts, so it doesn't actually prove those fields are forwarded, andtest_download_method_does_not_call_azure_feednow passes without asserting anything about the constructed URL at all. Please seed sentinel security values and assert the mocked helper saw the expected NuGet URL plus those metadata fields.Also applies to: 102-130
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/solidlsp/csharp/test_csharp_nuget_download.py` around lines 18 - 76, The tests lost asserting forwarding of sha256 and allowed_hosts and need sentinel values added: in test_download_nuget_package_uses_direct_url set RuntimeDependency.sha256 and .allowed_hosts to known sentinel strings/lists, update the fake_download_and_extract to capture those, and assert captured_calls contains the sentinel sha256 and allowed_hosts (and that called_url equals test_dependency.url and contains "nuget.org" and not "azure"); likewise update the other test (the one around lines 102-130) to seed sha256/allowed_hosts on its RuntimeDependency and assert the mocked FileUtils.download_and_extract_archive_verified was called with the expected NuGet URL and those metadata fields so the forwarding behavior of CSharpLanguageServer.DependencyProvider._download_nuget_package is locked down.
♻️ Duplicate comments (31)
test/resources/repos/python/test_repo/test_repo/diagnostics_sample.py (1)
5-5: ⚠️ Potential issue | 🟠 Major
Keep intentional undefined names, but suppress Ruff F821 inline.
These diagnostics are intentional, but without targeted suppression they can break lint-gated CI.
Proposed fix
def broken_factory() -> User:
-    return missing_user
+    return missing_user  # noqa: F821 - intentionally undefined for diagnostics fixture
@@
def broken_consumer() -> None:
    created_user = broken_factory()
    print(created_user)
-    print(undefined_name)
+    print(undefined_name)  # noqa: F821 - intentionally undefined for diagnostics fixture
As per coding guidelines:
**/*.py: Use strict typing with mypy and format code with ruff.
Also applies to: 11-11
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/resources/repos/python/test_repo/test_repo/diagnostics_sample.py` at line 5, The undefined name in the line "return missing_user" is intentional; add an inline Ruff suppression to avoid F821 failures by appending a per-line comment (e.g., "# noqa: F821") to that return statement in diagnostics_sample.py so the linter ignores the undefined-name error while leaving the intentional code unchanged.
src/solidlsp/language_servers/phpactor.py (1)
51-81:⚠️ Potential issue | 🟠 Major
phpactor_versionis still ignored once the PHAR is cached.Line 72 only checks whether
phpactor.pharalready exists, so changingls_specific_settings["php_phpactor"]["phpactor_version"]later never refreshes the cached binary even though the new docstring advertises that override.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/phpactor.py` around lines 51 - 81, The code currently ignores changes to phpactor_version because it only checks for a fixed phpactor.phar file; update the logic in the phpactor setup (symbols: phpactor_version, phpactor_phar_url, phpactor_phar_path, self._ls_resources_dir, FileUtils.download_and_extract_archive_verified) to make the cached PHAR version-aware — for example, include the version in the filename (e.g. phpactor-{phpactor_version}.phar) or store/read a small metadata file recording the downloaded version and its SHA; then check that metadata or filename against phpactor_version and re-download using phpactor_phar_url and PHPACTOR_PHAR_SHA256 when they differ (ensuring os.makedirs(..., exist_ok=True) and the existing download path logic remain intact).src/solidlsp/language_servers/matlab_language_server.py (2)
123-129:⚠️ Potential issue | 🟠 MajorFail closed for custom
matlab_extension_versions.When
url != MATLAB_EXTENSION_URL,expected_sha256becomesNone, so the managed download no longer has an expected digest to validate before Serena extracts and runs it. Require a pinned checksum for override versions or reject them.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/matlab_language_server.py` around lines 123 - 129, The download currently calls FileUtils.download_and_extract_archive_verified(url, target_dir, "zip", expected_sha256=MATLAB_EXTENSION_SHA256 if url == MATLAB_EXTENSION_URL else None, ...), which leaves expected_sha256=None for override URLs; change this to fail-closed by requiring a pinned checksum for any non-default matlab extension: if url == MATLAB_EXTENSION_URL use MATLAB_EXTENSION_SHA256, otherwise require a provided checksum (e.g. a new variable/argument like matlab_extension_sha256 or a mapping keyed by matlab_extension_version) and raise/reject when it’s missing; update the call to always pass a non-None expected_sha256 to FileUtils.download_and_extract_archive_verified and validate presence of the checksum before attempting download.
159-163:⚠️ Potential issue | 🟠 MajorThe extension cache is still version-agnostic.
Both lookup and download use the fixed `matlab-extension/extension` path, so once any version is cached there, changing `matlab_extension_version` keeps returning the old install. Key the managed path by version or store/read an installed-version marker.
Also applies to: 185-195
matlab-extension/extensionpath, so once any version is cached there, changingmatlab_extension_versionkeeps returning the old install. Key the managed path by version or store/read an installed-version marker.Also applies to: 185-195
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/matlab_language_server.py` around lines 159 - 163, The current cache path for the MATLAB extension uses a fixed folder ("matlab-extension/extension") under self._ls_resources_dir so different matlab_extension_version values still return the previously cached install; update the lookup and download logic (the code that computes default_path and the corresponding install path used in the download/install functions around the blocks referencing default_path and the matlab_extension_version) to include the version in the path (e.g., "matlab-extension/{matlab_extension_version}/extension") or alternatively write/read a small installed-version marker file next to the cached extension and validate it against self.matlab_extension_version before returning the path; ensure both the existence check (where default_path is constructed) and the install/download code use the same versioned path or marker check so changing matlab_extension_version returns the correct install.src/solidlsp/language_servers/ruby_lsp.py (1)
230-235:⚠️ Potential issue | 🟠 MajorInstall
ruby-lspthrough the same Ruby toolchain you launch with.When
use_rbenvis true this branch still runs plaingem install, which can install into the system Ruby while the server later starts via the rbenv-selected interpreter.🛠️ Suggested fix
- subprocess.run( - ["gem", "install", "ruby-lsp", "-v", ruby_lsp_version], + install_cmd = ["rbenv", "exec", "gem"] if use_rbenv else ["gem"] + install_cmd.extend(["install", "ruby-lsp", "-v", ruby_lsp_version]) + subprocess.run( + install_cmd, check=True, capture_output=True, cwd=repository_root_path, )🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/ruby_lsp.py` around lines 230 - 235, The install call currently runs subprocess.run(["gem", "install", ...]) which can target the system Ruby; change it to invoke gem through the same Ruby interpreter used to launch the server (use the ruby_executable variable) so installs respect rbenv/rvm shims — e.g. call subprocess.run([ruby_executable, "-S", "gem", "install", "ruby-lsp", "-v", ruby_lsp_version], check=True, capture_output=True, cwd=repository_root_path) (or conditionally use this when use_rbenv is true) so the install uses the same toolchain as the server.docs/02-usage/001_features.md (1)
17-17: ⚠️ Potential issue | 🟡 Minor

Fix the typo in the feature overview.

`qujuality` should be `quality`.

📝 Suggested fix
```diff
- Tool results are compact JSON, keeping token usage low and output qujuality high.
+ Tool results are compact JSON, keeping token usage low and output quality high.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/02-usage/001_features.md` at line 17, Fix the typo in the feature overview by replacing the misspelled word "qujuality" with "quality" in the string that currently reads "Tool results are compact JSON, keeping token usage low and output qujuality high." (search for that exact sentence / phrase to locate the text in the document).
src/solidlsp/language_servers/marksman.py (2)
44-96: ⚠️ Potential issue | 🟠 Major

Fail closed for non-default `marksman_version`s.

For override versions each `RuntimeDependency` gets `sha256=None`, so the verified download path no longer has a digest to validate against before executing the binary. Either require per-platform checksums for overrides or reject unverified overrides.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/marksman.py` around lines 44 - 96, The _runtime_dependencies method currently sets sha256=None for non-default versions which allows unverified binaries; change _runtime_dependencies (in the class using DEFAULT_MARKSMAN_VERSION and MARKSMAN_ALLOWED_HOSTS) to fail closed by validating checksums when version != DEFAULT_MARKSMAN_VERSION: if an override version is supplied and you cannot determine per-platform sha256 values, raise an exception (e.g., ValueError) instead of returning RuntimeDependencyCollection; alternatively accept and require a mapping of per-platform checksums and use those to populate RuntimeDependency.sha256 for each RuntimeDependency before returning the collection.
100-113: ⚠️ Potential issue | 🟠 Major

The version override is still cached under a version-agnostic path.

Reuse is gated only by `os.path.exists(marksman_executable_path)` inside `self._ls_resources_dir`, so once any Marksman binary is present, changing `marksman_version` keeps using it. Persist/read an installed-version marker or key the install directory by version before skipping download.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/marksman.py` around lines 100 - 113, The code currently checks os.path.exists(marksman_executable_path) under self._ls_resources_dir so a previously installed binary is reused regardless of marksman_version; change this so installs are version-scoped by including marksman_version in the installation path or by persisting/reading an installed-version marker before skipping download: construct a versioned directory (e.g., combine self._ls_resources_dir with marksman_version) or write/read a marker file next to deps.binary_path that records the installed version, and update the existence check and deps.install call (references: marksman_version, self._ls_resources_dir, deps.binary_path(...), deps.install(...), dependency) to use that version-specific location/marker so updating marksman_version triggers a fresh download/install.
docs/02-usage/070_security.md (1)
28-41: ⚠️ Potential issue | 🟠 Major

Scope the checksum guarantee to Serena's bundled versions.

Several override paths in this PR intentionally call the verified-download helper with `expected_sha256=None` for non-bundled versions. As written, this section promises a stronger guarantee than the implementation provides. Narrow the four-check claim to bundled defaults and note the weaker guarantee for custom overrides via `ls_specific_settings`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/02-usage/070_security.md` around lines 28 - 41, Update the security doc text to scope the "four checks" guarantee to Serena's bundled/defaulted dependencies only, and explicitly state that override paths (which call the verified-download helper with expected_sha256=None) do not provide the SHA256 verification; mention that custom overrides via ls_specific_settings weaken the guarantee to host/version/extraction checks only and advise that users supply expected_sha256 for full verification.
src/solidlsp/language_servers/al_language_server.py (2)
258-268: ⚠️ Potential issue | 🟠 Major

`al_extension_version` is ignored once an AL extension is cached.

The install directory stays `.../al-extension`, and `_find_al_extension()` checks that same location before this method runs. After the first download, changing the configured version just reuses the old extension. Use a versioned cache path or persist/check the installed version before skipping the download.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/al_language_server.py` around lines 258 - 268, The code currently caches the AL extension in a fixed directory (al_extension_dir) so changing al_extension_version is ignored; update the logic in the class method that prepares/installs the extension to either: 1) include al_extension_version in the cache path (e.g., use ls_resources_dir(...)/f"al-extension-{al_extension_version}") so _find_al_extension() and _download_al_extension(al_extension_url, al_extension_dir) operate on a versioned folder, or 2) read and compare an installed version marker (e.g., a VERSION file under al_extension_dir) before skipping download and re-download when it differs; adjust calls to ls_resources_dir, _find_al_extension(), and _download_al_extension() accordingly so the selected al_extension_version is respected.
157-163: ⚠️ Potential issue | 🟠 Major

Always require a hash for AL VSIX downloads.

When `al_extension_version` points at any URL other than `AL_EXTENSION_URL`, this call drops `expected_sha256` to `None`. That makes override installs host-allowlisted only. Please either look up the requested version in a version→SHA map or reject unpinned overrides.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/al_language_server.py` around lines 157 - 163, The call to FileUtils.download_and_extract_archive_verified currently passes expected_sha256=None for non-default URLs, allowing unpinned installs; update the logic in al_language_server.py so every requested AL VSIX URL must have a known SHA: add or use a version/URL→SHA mapping (e.g., AL_EXTENSION_SHA_MAP) and pass expected_sha256=AL_EXTENSION_SHA_MAP.get(url) (or a lookup by al_extension_version), and if the lookup returns None then raise/reject the override with a clear error instead of proceeding; keep the existing use of FileUtils.download_and_extract_archive_verified, AL_EXTENSION_ALLOWED_HOSTS, and AL_EXTENSION_URL/AL_EXTENSION_SHA256 symbols to locate the code to change.
src/solidlsp/language_servers/omnisharp.py (1)
210-233: ⚠️ Potential issue | 🟠 Major

Make the OmniSharp install caches version-aware.

Both cache roots ignore `omnisharp_version`/`razor_omnisharp_version`. Once either directory exists, changing the configured version will keep reusing the old binaries instead of reinstalling the requested release.

Suggested fix
```diff
-        omnisharp_ls_dir = os.path.join(cls.ls_resources_dir(solidlsp_settings), "OmniSharp")
+        omnisharp_ls_dir = os.path.join(cls.ls_resources_dir(solidlsp_settings), "OmniSharp", omnisharp_version)
@@
-        razor_omnisharp_ls_dir = os.path.join(cls.ls_resources_dir(solidlsp_settings), "RazorOmnisharp")
+        razor_omnisharp_ls_dir = os.path.join(
+            cls.ls_resources_dir(solidlsp_settings), "RazorOmnisharp", razor_omnisharp_version
+        )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/omnisharp.py` around lines 210 - 233, The install cache directories omnisharp_ls_dir and razor_omnisharp_ls_dir are not version-aware so existing folders are reused even when omnisharp_version or razor_omnisharp_version changes; make the cache root include the configured version (e.g. append the value from runtime_dependencies["OmniSharp"]["version"] and ["RazorOmnisharp"]["version"] or store a small metadata file to compare versions) when constructing ls_resources_dir so the code in the method that calls ls_resources_dir will create distinct directories per version (or detect mismatch and re-download) before using omnisharp_executable_path and calling os.chmod. Ensure you reference ls_resources_dir, omnisharp_ls_dir, razor_omnisharp_ls_dir, and runtime_dependencies when implementing the change.
src/solidlsp/language_servers/pascal_server.py (1)
533-546: ⚠️ Potential issue | 🟠 Major

Tie `dep.sha256` to the selected `pasls_version`.

Overriding `pasls_version` changes the release URLs, but the hardcoded hashes below still describe the bundled `v0.2.0` assets. `_atomic_install()` then falls back to stale digests when `checksums.sha256` is unavailable, and the current ternary also lets a downloaded checksums file suppress Serena's bundled trust anchor for the default release. Make the per-platform `sha256` conditional on `pasls_version == PASLS_VERSION`, then prefer `dep.sha256` over `checksums.get(...)` when it exists.

Also applies to: 620-624, 641-687
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/pascal_server.py` around lines 533 - 546, The code in _atomic_install() uses checksums.get(archive_filename) falling back to dep.sha256 inconsistently and ignores the fact that dep.sha256 is only valid for the bundled default PASLS_VERSION; update the expected_sha256 resolution so that: if dep.sha256 is present and pasls_version == PASLS_VERSION then use dep.sha256 (prefer dep.sha256 over checksums.get(...)), otherwise use checksums.get(archive_filename) when available; ensure the per-platform hardcoded sha256 values are only trusted when pasls_version equals PASLS_VERSION (referencing dep.sha256, pasls_version, PASLS_VERSION, checksums and _atomic_install), and keep the same behavior of aborting on mismatch.
src/serena/code_editor.py (1)
381-390: ⚠️ Potential issue | 🟠 Major

Publish edited paths only after the edits actually apply.

If a later operation raises, `get_last_edited_file_paths()` will report files that were never changed. Collect paths after each successful apply and publish the final list in a `finally`.

Suggested fix
```diff
-        # recording the affected files
         edited_file_paths: list[EditedFilePath] = []
-        for operation in operations:
-            edited_file_paths.extend(operation.get_edited_file_paths())
-
-        self._set_last_edited_file_paths(edited_file_paths)
-
-        # applying the edit operations
-        for operation in operations:
-            operation.apply()
+        try:
+            for operation in operations:
+                operation.apply()
+                edited_file_paths.extend(operation.get_edited_file_paths())
+        finally:
+            self._set_last_edited_file_paths(edited_file_paths)
         return len(operations)
```
Verify each finding against the current code and only fix it if needed. In `@src/serena/code_editor.py` around lines 381 - 390, The code currently collects edited file paths before applying operations so if a later operation.apply() raises, _set_last_edited_file_paths is called with files that were never changed; change the logic to apply each operation first, then after a successful operation call operation.get_edited_file_paths() and append to a local list (e.g., edited_file_paths), wrap the apply loop in try/finally and call self._set_last_edited_file_paths(edited_file_paths) in the finally block so only actually-applied edits are published; update references to EditedFilePath, operation.apply(), operation.get_edited_file_paths(), and self._set_last_edited_file_paths accordingly.
src/solidlsp/language_servers/clojure_lsp.py (1)
139-148: ⚠️ Potential issue | 🟠 Major

`clojure_lsp_version` won't invalidate an existing install.

The selected version only changes the download URL. All variants here still resolve to the same Serena-managed executable path, and `deps.install()` is skipped whenever that binary already exists. After the first install, later version overrides will keep reusing the old `clojure-lsp` unless the cache path or stored metadata becomes version-aware.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/clojure_lsp.py` around lines 139 - 148, The code uses clojure_lsp_version to build the download URL but still resolves to a single shared binary path via deps.binary_path(self._ls_resources_dir), so upgrades are skipped if that file exists; update the installation logic in the block around clojure_lsp_version/_runtime_dependencies/deps.binary_path to make the install path or cache version-aware (e.g., include clojure_lsp_version in the resource directory or binary filename) or detect the installed binary version and force re-install when it differs, then call deps.install(self._ls_resources_dir) when the version does not match rather than only when the file exists.
test/serena/test_serena_agent.py (4)
45-50: ⚠️ Potential issue | 🟡 Minor

Rename `id` here as well to clear Ruff A002.

`BaseCase.to_pytest_param(...)` still shadows the builtin with `id`, so the same lint error remains in this file. Rename it to `case_id` and pass that through to `pytest.param(..., id=case_id)`.

As per coding guidelines, `**/*.py`: Use strict typing with mypy and format code with ruff.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/serena/test_serena_agent.py` around lines 45 - 50, The parameter name `id` in BaseCase.to_pytest_param shadows the Python builtin; rename the parameter to `case_id` in the BaseCase.to_pytest_param signature and update the call to pytest.param to pass id=case_id (keep other args unchanged and preserve typing of *marks: MarkDecorator | Mark, case_id: str). Update any internal references from `id` to `case_id` to avoid Ruff A002.
1326-1333: ⚠️ Potential issue | 🟠 Major

Assert the file stays untouched before the context manager restores it.

`project_file_modification_context(...)` rewrites the original contents in `finally`, so this still passes if `SafeDeleteSymbol` mutates the file and then returns "Cannot delete". Capture the original contents before the call and compare them again before leaving the `with` block.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/serena/test_serena_agent.py` around lines 1326 - 1333, The test relies on project_file_modification_context restoring the file so it falsely passes even if SafeDeleteSymbol mutated the file; before entering the context (or immediately after entering but before calling safe_delete_tool.apply), read and store the original file contents at case.relative_path (using serena_agent or its filesystem helper), then after calling safe_delete_tool.apply (but still inside the with block) re-read the file and assert the contents equal the saved original to guarantee no in-place mutation occurred; reference project_file_modification_context, serena_agent, SafeDeleteSymbol, safe_delete_tool.apply and case.relative_path when locating where to add the read/save/assert.
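The capture-and-compare pattern described in this finding can be sketched with a stand-in callable; the helper name is illustrative, not one of the suite's real fixtures:

```python
import pathlib

# Read the file before the operation under test, run it, then compare
# again *inside* the protected region — before any context-manager
# cleanup can restore the contents and mask an in-place mutation.
def run_and_assert_untouched(path: pathlib.Path, operation) -> None:
    original = path.read_text()
    operation()
    assert path.read_text() == original, "operation mutated the file in place"
```

The key point is that the comparison happens before leaving the `with` block, so a restore-on-exit context manager cannot hide the mutation.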
1070-1084: ⚠️ Potential issue | 🟠 Major

Keep the reference expectations distinct in the range test.

This reconstruction overwrites `reference_symbol_name_path` and `reference_message_fragment` with the primary values, so the assertion no longer proves the reference diagnostic was excluded by the range filter. A broken range filter can still pass here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/serena/test_serena_agent.py` around lines 1070 - 1084, The test incorrectly sets reference expectations to the primary values, masking range-filter failures; in the call to self._assert_diagnostics_for_file construct the DiagnosticCase so reference_symbol_name_path uses diagnostic_case.reference_symbol_name_path and reference_message_fragment uses diagnostic_case.reference_message_fragment (keep reference_symbol_identifier as diagnostic_case.reference_symbol_identifier), leaving primary_* fields as-is so the test actually verifies the reference diagnostic was excluded by the range filter.
1273-1297: 🛠️ Refactor suggestion | 🟠 Major

Use snapshot assertions for these symbolic edit tests.

`ReplaceSymbolBodyTool` and `SafeDeleteSymbol` are symbolic edit operations, but these cases still spot-check a few substrings instead of snapshotting the full tool response. That leaves the response contract under-tested and misses the repo's required snapshot coverage for symbolic edits.

As per coding guidelines, `test/**/*.py`: Symbolic editing operations must have snapshot tests.

Also applies to: 1320-1349
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/serena/test_serena_agent.py` around lines 1273 - 1297, Replace the current substring assertions in test_replace_symbol_body_reports_new_diagnostics (and the similar test around lines 1320-1349 for SafeDeleteSymbol) with a snapshot assertion that records the full ReplaceSymbolBodyTool/SafeDeleteSymbol tool response; specifically capture the entire `result` (or the parsed `diagnostics`) via the project's snapshot fixture/assertion helper instead of checking for "missing_container" and "create_service_container" substrings so the full symbolic-edit contract is snapshotted and validated.
test/diagnostics_cases.py (1)
24-29: ⚠️ Potential issue | 🟡 Minor

Rename `id` to avoid the Ruff A002 failure.

`id` still shadows the builtin here, so this helper keeps tripping the same lint error. Rename it to something like `case_id` and keep `pytest.param(..., id=case_id)`.

As per coding guidelines, `**/*.py`: Use strict typing with mypy and format code with ruff.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/diagnostics_cases.py` around lines 24 - 29, Rename the shadowing parameter id in diagnostic_case_param to case_id: change the function signature to accept case_id (instead of id) and update the pytest.param call to pass id=case_id; update any call sites that pass the positional/keyword to use case_id accordingly so mypy/ruff A002 is resolved (refer to function diagnostic_case_param and the pytest.param(..., id=...) invocation).
src/solidlsp/language_servers/ty_server.py (1)
58-66: ⚠️ Potential issue | 🔴 Critical

Switch the `uv` fallback to the documented `tool run` form.

The fallback currently builds `uv x --from ...`, which is the wrong shape for the `uv` binary path. On machines that have `uv` but no separate `uvx` shim, this prevents the Ty server from starting.

What is the documented `uv` command for running a tool from a package when `uvx` is unavailable: `uv tool run --from ...` or `uv x --from ...`?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/ty_server.py` around lines 58 - 66, The fallback invocation for when only uv is present uses the wrong subcommand shape; replace the list returned in the uv_path branch so it uses the documented form ["uv", "tool", "run", "--from", f"ty=={ty_version}", "ty", "server"] instead of ["uv", "x", "--from", ...]; update the branch that references uv_path (and mentions uvx_path/ty_version) to return that corrected command list so the Ty server starts when uv is installed without a separate uvx shim.
src/serena/symbol.py (1)
954-956: ⚠️ Potential issue | 🟡 Minor

Normalize the path inside `_symbol_identity`.

`LanguageServerSymbolLocation` now canonicalizes separators, but this key still uses `symbol.relative_path` raw. If one LS response returns `foo\bar.py` and another returns `foo/bar.py`, the same symbol can still be deduplicated incorrectly and diagnostics get duplicated.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/serena/symbol.py` around lines 954 - 956, Update _symbol_identity so it returns a normalized/canonicalized relative path instead of raw symbol.relative_path: create or use LanguageServerSymbolLocation's canonicalization (e.g., build a LanguageServerSymbolLocation from symbol.relative_path or call its canonicalize helper) and return that normalized path as the first element of the tuple in _symbol_identity(symbol: LanguageServerSymbol) so separators are consistent across LS responses.
src/serena/tools/tools_base.py (1)
34-43: ⚠️ Potential issue | 🟠 Major

Don't include source ranges in `DiagnosticIdentity`.

The before/after diff still keys on start/end coordinates, so any pre-existing diagnostic that merely shifts because an edit inserted or removed lines is reclassified as "new". Keep ranges for display, but compare snapshots on stable fields only.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/serena/tools/tools_base.py` around lines 34 - 43, DiagnosticIdentity currently includes source-range fields so diagnostics that only shift due to edits are treated as different; change DiagnosticIdentity to exclude ranges from identity/equality by removing start_line/start_character/end_line/end_character from the fields used for comparison (either remove them from the dataclass entirely and keep them in a separate display-only structure, or mark them with compare=False and hash=False on the dataclass fields) so only stable fields (message, severity, code_repr, source) are used for snapshot/keying; update any constructors or consumers that relied on the old signature (places creating DiagnosticIdentity instances or using them as dict keys) to supply/display ranges separately (e.g., a display_range attribute or helper) rather than as part of the equality/hash key.
src/solidlsp/language_servers/lua_ls.py (1)
93-95: ⚠️ Potential issue | 🟠 Major

Don't allow custom Lua LS versions without an explicit hash.

When `lua_language_server_version` differs from `LUA_LS_VERSION`, `expected_sha256` still falls back to `None`, so a custom version is downloaded and executed without integrity verification. Custom overrides should require an explicit SHA-256 (for example via a sibling setting) and fail fast when it is missing.

Also applies to: 126-133
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/lua_ls.py` around lines 93 - 95, When a custom lua_language_server_version is provided (lua_ls_version != LUA_LS_VERSION), require and read an explicit SHA-256 value from settings (e.g., sibling key like "lua_language_server_sha256") instead of falling back to None; if the SHA is missing or empty, fail fast (raise/abort with a clear error) before attempting to download/run. Update the logic around lua_settings / lua_ls_version / expected_sha256 (and the download/verify code paths referenced later around lines 126-133) to enforce this check and ensure the integrity verification step always has a non-null expected_sha256 for non-default versions.
src/solidlsp/ls_utils.py (2)
223-247: ⚠️ Potential issue | 🔴 Critical

Validate redirect hops before following them.

`requests.get(..., stream=True)` follows redirects by default, so the request can already land on an unapproved host before `_validate_download_host(response.url, ...)` runs. Disable auto-redirects and validate each `Location` hop before issuing the next request.

Does `requests.get()` follow redirects by default, and if so, is validating `response.url` only after the call sufficient to enforce an allowed-host list?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/ls_utils.py` around lines 223 - 247, The download_file_verified function currently validates only response.url after requests.get has already followed redirects; change it to disable auto-redirects (use allow_redirects=False) and implement a manual redirect-following loop: after each response with a redirect status (e.g., 3xx) read the Location header, resolve it to an absolute URL, call FileUtils._validate_download_host on that URL before issuing the next request, and repeat up to a sensible max redirect count (raise SolidLSPException on too many hops or missing Location); once you get a final 200 response, continue the existing streaming/write and final FileUtils._validate_download_host(response.url, ...) check as a last safeguard. Ensure you reference download_file_verified and FileUtils._validate_download_host when making these changes.
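A manual redirect loop of the kind requested above can be sketched like this; `fetch` stands in for `requests.get(url, allow_redirects=False, stream=True)`, and the helper name and error wording are illustrative, not Serena's actual API:

```python
from urllib.parse import urljoin, urlparse

# Every hop — including the initial URL — is checked against the
# allowlist before a request is issued, and a hop cap bounds the chain.
def follow_redirects_validated(url, fetch, allowed_hosts, max_hops=5):
    for _ in range(max_hops + 1):
        host = urlparse(url).hostname
        if host not in allowed_hosts:
            raise ValueError(f"host not in allowlist: {host!r}")
        response = fetch(url)  # must NOT auto-follow redirects
        if response.status_code in (301, 302, 303, 307, 308):
            location = response.headers.get("Location")
            if location is None:
                raise ValueError("redirect without Location header")
            # Resolve relative redirects against the current URL, then
            # loop so the next hop is validated too.
            url = urljoin(url, location)
            continue
        return response
    raise ValueError("too many redirects")
```

Because the allowlist check runs before each `fetch`, a redirect to an unapproved host is rejected before any connection to it is opened.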
409-425: ⚠️ Potential issue | 🔴 Critical

`extractall()` still leaves tar extraction unsafe.

Validating `tar_member.name` alone is not enough here. `tarfile.extractall()` can still materialize symlink, hardlink, and device entries, so a crafted archive can escape the extraction root even though the member name passed validation.

Can Python `tarfile.extractall()` create symlinks or hardlinks from archive members, making validation of `TarInfo.name` alone insufficient to contain extraction to a target directory?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/ls_utils.py` around lines 409 - 425, The tar extraction is unsafe because tarfile.extractall() can create symlinks/hardlinks/devices even if TarInfo.name was validated; change _extract_tar_archive to build a vetted members list and call tar_ref.extractall(members=safe_members) instead of extracting everything, iterating tar_ref.getmembers() and for each tar_member: call FileUtils._validate_extraction_path(tar_member.name, target_path), reject/skip any tar_member where tar_member.issym(), tar_member.islnk(), tar_member.isdev(), tar_member.ischr(), or tar_member.isblk() (or where a hardlink target tar_member.linkname would resolve outside target_path) and only allow regular files and directories (tar_member.isreg() or tar_member.isdir()); ensure you also validate resolved link targets (tar_member.linkname) when considering hardlinks, and only pass the filtered safe_members to tar_ref.extractall(members=safe_members). Reference: function _extract_tar_archive, tar_ref, tar_member, FileUtils._validate_extraction_path, and tar_member.{isreg, isdir, issym, islnk, isdev, linkname}.
src/solidlsp/ls.py (3)
1337-1346: ⚠️ Potential issue | 🟠 Major

Don't revive cached diagnostics after a successful empty pull.

An empty `items` list is an authoritative "no diagnostics" result. `if not ret:` treats that the same as a failed pull and can resurrect stale cached diagnostics.

🩹 Suggested fix
```diff
-        if not ret:
+        if response is None:
             published_diagnostics = self._wait_for_published_diagnostics(
                 uri=uri,
                 after_generation=diagnostics_before_request,
                 timeout=2.5 if pull_diagnostics_failed else 0.5,
             )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/ls.py` around lines 1337 - 1346, The code treats an empty diagnostics list as falsy and resurrects stale cached diagnostics; change the logic to only attempt waiting/falling back when the pull actually failed (i.e., ret is None). Replace the truthiness check "if not ret:" with an explicit "if ret is None:" so that an authoritative empty list returned from the pull is preserved, and continue using _wait_for_published_diagnostics(uri, after_generation=diagnostics_before_request, timeout=...) and _get_cached_published_diagnostics(uri) only when ret is None.
1031-1034: ⚠️ Potential issue | 🟡 Minor

Raise `AssertionError` instead of `assert False`.

`assert False` is stripped under `python -O`, so this guard can disappear in optimized runs.

Does Python remove `assert` statements when running with the `-O` optimization flag?

🩹 Suggested fix
```diff
-            assert False, f"Unexpected response from Language Server: {response}"
+            raise AssertionError(f"Unexpected response from Language Server: {response}")
```

As per coding guidelines, `**/*.py`: Use strict typing with mypy and format code with ruff.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/ls.py` around lines 1031 - 1034, Replace the runtime-optimized-away assert with an explicit exception: instead of using "assert False, f'Unexpected response ... {response}'" in the response handling block (where variables response, request_name, relative_file_path, line, column are available), raise an AssertionError (or a custom exception) with the same formatted message so the guard remains effective under python -O; update the else branch to explicitly raise AssertionError(f"Unexpected response from Language Server: {response}") to preserve behavior in optimized runs.
1320-1335: ⚠️ Potential issue | 🟠 Major

Treat pull-diagnostic fields as optional.

`severity` and `code` are optional in LSP diagnostics. Indexing them directly here raises `KeyError` on valid responses, and the assertion message says "expected list" while the code is checking for a `dict`. `_store_published_diagnostics()` in this same file already handles both fields defensively.

In the Language Server Protocol, are `Diagnostic.severity` and `Diagnostic.code` optional fields on diagnostic responses?

🩹 Suggested fix
```diff
         if response is not None:
             assert isinstance(response, dict), (
-                f"Unexpected response from Language Server (expected list, got {type(response)}): {response}"
+                f"Unexpected response from Language Server (expected dict, got {type(response)}): {response}"
             )
             ret = []
             for item in response["items"]:  # type: ignore
                 new_item: ls_types.Diagnostic = {
                     "uri": uri,
-                    "severity": item["severity"],
                     "message": item["message"],
                     "range": item["range"],
-                    "code": item["code"],  # type: ignore
                 }
+                if "severity" in item:
+                    new_item["severity"] = item["severity"]  # type: ignore[assignment]
+                if "code" in item:
+                    new_item["code"] = item["code"]  # type: ignore[assignment]
                 if "source" in item:
                     new_item["source"] = item["source"]
                 ret.append(ls_types.Diagnostic(**new_item))
```
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/ls.py` around lines 1320 - 1335, The code in _store_published_diagnostics (processing variable response) wrongly asserts "expected list" while checking for a dict and indexes optional Diagnostic fields; update the assertion message to say "expected dict" and make severity and code optional by using safe lookups (e.g., item.get("severity") and item.get("code")) when building new_item before creating ls_types.Diagnostic; mirror the defensive handling used in _store_published_diagnostics (and elsewhere in ls.py) so "source" stays conditional and no KeyError is raised for valid responses missing severity or code.
src/serena/tools/symbol_tools.py (2)
437-457: ⚠️ Potential issue | 🟡 Minor

Require exactly one capturing group in both validators.

Both checks only reject `groups == 0`, so patterns like `(foo)?(bar)` still pass even though this tool's contract says the regex must contain exactly one capturing group.

🩹 Suggested fix
```diff
-        if match.re.groups == 0:
+        if match.re.groups != 1:
             return (
                 f"Error: Regex '{regex}' must contain exactly one capturing group that identifies the symbol usage in "
                 f"{search_scope_description}."
             )
@@
-        if compiled_regex.groups == 0:
+        if compiled_regex.groups != 1:
             return (
                 f"Error: Regex '{regex}' must contain exactly one capturing group that identifies the symbol usage in "
                 f"{search_scope_description}."
             )
```

Also applies to: 490-494
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/serena/tools/symbol_tools.py` around lines 437 - 457, The validator _get_unique_captured_span currently only rejects zero capturing groups; change the checks to require exactly one capturing group by validating match.re.groups == 1 (i.e., return an error when match.re.groups != 1) so patterns with multiple groups fail; apply the same fix to the other validator function handling capture-group validation (the similar logic around lines 490-494) so both enforce exactly one capturing group, and keep the existing error messages but update their conditions to use != 1 instead of checking for 0.
800-809:⚠️ Potential issue | 🟠 MajorSnapshot diagnostics for every file the rename will edit.
The baseline only covers `relative_path`, but `rename_symbol()` can touch many files. Any additional path returned by `get_last_edited_file_paths()` is therefore diffed against an empty baseline, so pre-existing diagnostics get reported as newly introduced.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/serena/tools/symbol_tools.py` around lines 800 - 809, The current baseline snapshot only covers the single EditedFilePath(relative_path), but rename_symbol can touch many files so you must snapshot diagnostics for every file that will be edited before applying the change; to fix, obtain the full set of affected paths from the code editor (e.g. add or use a method on create_code_editor()/code_editor such as get_rename_affected_paths(name_path, relative_file_path) or a dry-run variant of rename_symbol that returns affected file paths), call _capture_published_lsp_diagnostics_snapshot with EditedFilePath for each affected path (including relative_path), then call rename_symbol and finally pass the real edited file paths into _format_lsp_edit_result_with_new_diagnostics (references: create_code_editor, _capture_published_lsp_diagnostics_snapshot, EditedFilePath, rename_symbol, get_last_edited_file_paths, _format_lsp_edit_result_with_new_diagnostics).
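Reduced to its essentials, the failure mode is set arithmetic over per-file diagnostic identities. The sketch below uses hypothetical data shapes (plain dicts and strings, not Serena's actual snapshot types) to show how a file missing from the baseline misreports its pre-existing diagnostics as new:

```python
def newly_introduced(before: dict[str, set[str]], after: dict[str, set[str]]) -> dict[str, set[str]]:
    """Diagnostics present after the edit but absent from the baseline.

    Files missing from `before` are treated as having an empty baseline,
    which is exactly the failure mode described above: every pre-existing
    diagnostic in such a file shows up as "new".
    """
    return {
        path: diags - before.get(path, set())
        for path, diags in after.items()
        if diags - before.get(path, set())
    }

before = {"a.py": {"unused-import:3"}}  # baseline captured only for a.py
after = {"a.py": {"unused-import:3"}, "b.py": {"type-error:10"}}
# b.py was never snapshotted, so its old diagnostic is misreported as new:
assert newly_introduced(before, after) == {"b.py": {"type-error:10"}}
```

Snapshotting every path the rename will touch makes the baseline complete, and the same diff then reports only genuinely new diagnostics.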
🧹 Nitpick comments (3)
test/solidlsp/typescript/test_typescript_basic.py (1)
36-61: Consider using `pytest.mark.skipif` instead of class-level conditional method definition.

The current pattern defines test methods conditionally at class definition time, which works but is unconventional. Using `pytest.mark.skipif` would be more idiomatic and provide clearer test output when the tests are skipped.

♻️ Suggested refactor using pytest.mark.skipif
```diff
-    if language_has_verified_implementation_support(Language.TYPESCRIPT):
-
-        @pytest.mark.parametrize("language_server", [Language.TYPESCRIPT], indirect=True)
-        def test_find_implementations(self, language_server: SolidLanguageServer) -> None:
+    @pytest.mark.skipif(
+        not language_has_verified_implementation_support(Language.TYPESCRIPT),
+        reason="TypeScript implementation support not verified"
+    )
+    @pytest.mark.parametrize("language_server", [Language.TYPESCRIPT], indirect=True)
+    def test_find_implementations(self, language_server: SolidLanguageServer) -> None:
         repo_path = get_repo_path(Language.TYPESCRIPT)
         # ... rest of test ...

-        @pytest.mark.parametrize("language_server", [Language.TYPESCRIPT], indirect=True)
-        def test_request_implementing_symbols(self, language_server: SolidLanguageServer) -> None:
+    @pytest.mark.skipif(
+        not language_has_verified_implementation_support(Language.TYPESCRIPT),
+        reason="TypeScript implementation support not verified"
+    )
+    @pytest.mark.parametrize("language_server", [Language.TYPESCRIPT], indirect=True)
+    def test_request_implementing_symbols(self, language_server: SolidLanguageServer) -> None:
         repo_path = get_repo_path(Language.TYPESCRIPT)
         # ... rest of test ...
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/solidlsp/typescript/test_typescript_basic.py` around lines 36 - 61, replace the conditional-definition pattern around test_find_implementations and test_request_implementing_symbols with pytest.mark.skipif decorators so the tests are always declared but skipped when TypeScript implementation support is absent; specifically, decorate each test (or the test class) with `@pytest.mark.skipif(not language_has_verified_implementation_support(Language.TYPESCRIPT), reason="TypeScript implementation support not available")` while keeping the existing `@pytest.mark.parametrize("language_server", [Language.TYPESCRIPT], indirect=True)` and the test bodies unchanged (references: test_find_implementations, test_request_implementing_symbols, language_has_verified_implementation_support, Language.TYPESCRIPT).

src/solidlsp/language_servers/fsharp_language_server.py (1)
71-84: Unused `RuntimeDependencyCollection` instance.

The `RuntimeDependencyCollection` is instantiated but never used. The actual installation logic at lines 101-115 uses `subprocess.run` directly. This appears to be dead code that should either be removed or integrated.

♻️ Option 1: Remove unused code
```diff
         fsharp_settings = solidlsp_settings.get_ls_specific_settings(Language.FSHARP)
         fsautocomplete_version = fsharp_settings.get("fsautocomplete_version", FSAUTOCOMPLETE_VERSION)
         dotnet_exe = DotNETUtil("8.0", allow_higher_version=True).get_dotnet_path_or_raise()
-        RuntimeDependencyCollection(
-            [
-                RuntimeDependency(
-                    id="fsautocomplete",
-                    description="FsAutoComplete (Ionide F# Language Server)",
-                    command=f"dotnet tool install --tool-path ./ fsautocomplete --version {fsautocomplete_version}",
-                    platform_id="any",
-                ),
-            ]
-        )
-
         # Install FsAutoComplete if not already installed
```
Verify each finding against the current code and only fix it if needed. In `@src/solidlsp/language_servers/fsharp_language_server.py` around lines 71 - 84, The RuntimeDependencyCollection/RuntimeDependency instantiation block (the RuntimeDependencyCollection([...]) call that creates an fsautocomplete dependency) is dead code — remove that block and any now-unused imports (RuntimeDependencyCollection, RuntimeDependency) so the file relies on the existing subprocess.run-based installation logic later in the file; alternatively, if you prefer to use the RuntimeDependency abstraction, replace the subprocess.run install logic with code that uses the created RuntimeDependencyCollection instance and its installer API, but do not keep both. Ensure no other references to RuntimeDependencyCollection/RuntimeDependency remain after removal.src/serena/tools/file_tools.py (1)
65-88: Extract the diagnostics-edit boilerplate into one helper.

The same `EditedFilePath` + snapshot capture + result-format sequence is now copied into every file-edit tool. A small shared helper would keep the edit-path mapping and diagnostics reporting consistent as more edit tools adopt this flow.

Also applies to: 223-235, 258-266, 295-304, 332-340
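One possible shape for such a helper (the name `with_diagnostics_for_edit` and the callable-based signature are illustrative, not the actual Serena API): accept the affected paths plus callables for snapshotting, editing, and formatting, so every edit tool shares a single snapshot-edit-diff-format sequence:

```python
from typing import Callable

def with_diagnostics_for_edit(
    paths: list[str],
    snapshot: Callable[[str], set[str]],
    apply_edit: Callable[[], None],
    fmt: Callable[[dict[str, set[str]]], str],
) -> str:
    """Hypothetical helper: snapshot -> edit -> diff -> format, in one place."""
    before = {p: snapshot(p) for p in paths}
    apply_edit()
    new = {p: snapshot(p) - before[p] for p in paths}
    return fmt({p: d for p, d in new.items() if d})

# Toy usage with an in-memory "diagnostics store" standing in for the LSP:
state = {"a.py": {"warn:1"}}
def fake_snapshot(p: str) -> set[str]:
    return set(state[p])
def fake_edit() -> None:
    state["a.py"].add("error:5")  # the edit introduces one new diagnostic

out = with_diagnostics_for_edit(["a.py"], fake_snapshot, fake_edit, lambda d: str(sorted(d)))
assert out == "['a.py']"  # only a.py gained a diagnostic
```

Each tool would then pass its own edit callable, keeping the diagnostics bookkeeping out of the tool bodies entirely.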
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/serena/tools/file_tools.py` around lines 65 - 88, Duplicate diagnostics-edit boilerplate (creating EditedFilePath list, calling _capture_published_lsp_diagnostics_snapshot before edits, and returning via _format_lsp_edit_result_with_new_diagnostics) appears across multiple methods in file_tools.py; extract this into a single helper (e.g., a private method like _with_diagnostics_for_edit or similar) that accepts the relative_path (or list of paths) and a callable performing the write and returns the formatted result; replace the repeated sequences in methods that currently construct EditedFilePath, call _capture_published_lsp_diagnostics_snapshot, perform the file write, and then call _format_lsp_edit_result_with_new_diagnostics to instead call the new helper so the EditedFilePath creation, snapshot capture and result formatting are centralized and reused.
```python
from serena.code_editor import CodeEditor, LanguageServerCodeEditor
from serena.symbol import LanguageServerSymbolRetriever
```
Import or qualify `EditedFilePath` before using it in the new annotations.

Both helper signatures refer to `EditedFilePath`, but the type never enters the `TYPE_CHECKING` imports. Ruff already flags this as F821, so the file still fails linting until the symbol is imported or fully qualified.
As per coding guidelines, **/*.py: Use strict typing with mypy and format code with ruff.
Also applies to: 331-334, 374-379
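The pattern the prompt describes is the standard one: guard the import under `typing.TYPE_CHECKING` and keep annotations lazy (via `from __future__ import annotations` or string literals) so the name resolves for mypy without any runtime import. A minimal sketch — the `serena.types` module path is the reviewer's guess, not a confirmed location:

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by type checkers; never executed at runtime, so a wrong or
    # cyclic import path cannot break the program (this path is assumed).
    from serena.types import EditedFilePath

def capture_snapshot(paths: list[EditedFilePath]) -> None:
    # With lazy annotations, `EditedFilePath` is never evaluated at runtime.
    ...

# At runtime TYPE_CHECKING is False, so the guarded import never ran:
assert TYPE_CHECKING is False
assert capture_snapshot([]) is None
```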
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/serena/tools/tools_base.py` around lines 26 - 27, The signatures in
tools_base.py refer to EditedFilePath but that symbol isn't imported, causing
F821; add a proper import for EditedFilePath inside the existing TYPE_CHECKING
block (e.g., from serena.types import EditedFilePath) or fully qualify the
annotation (serena.types.EditedFilePath) wherever the helpers reference it (also
update the other occurrences around the mentioned helper signatures). Ensure you
update the annotations in the helper functions that currently reference
EditedFilePath so the name is resolved at type-check time and ruff/mypy stop
flagging the missing symbol.
```diff
         if not self._has_waited_for_cross_file_references:
-            # Some LS require waiting for a while before they can return cross-file definitions.
+            # Some LS require waiting for a while before they can return cross-file definitions / implementations.
             # This is a workaround for such LS that don't have a reliable "finished initializing" signal.
             sleep(self._get_wait_time_for_cross_file_referencing())
             self._has_waited_for_cross_file_references = True

         with self.open_file(relative_file_path):
             # sending request to the language server and waiting for response
-            definition_params = cast(
+            request_params = cast(
```
Wait for cross-file warm-up after didOpen.
This helper sets `_has_waited_for_cross_file_references` before the file is opened in the LS. For servers that only start indexing after the first `didOpen`, the first definition/implementation query can still return incomplete cross-file results. `request_references()` already does this warm-up inside the open-file context.
🩹 Suggested fix
```diff
-        if not self._has_waited_for_cross_file_references:
-            # Some LS require waiting for a while before they can return cross-file definitions / implementations.
-            # This is a workaround for such LS that don't have a reliable "finished initializing" signal.
-            sleep(self._get_wait_time_for_cross_file_referencing())
-            self._has_waited_for_cross_file_references = True
-
         with self.open_file(relative_file_path):
+            if not self._has_waited_for_cross_file_references:
+                # Some LS require waiting for a while before they can return cross-file definitions / implementations.
+                # This needs to happen after at least one file was opened in the LS.
+                sleep(self._get_wait_time_for_cross_file_referencing())
+                self._has_waited_for_cross_file_references = True
             request_params = cast(
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/solidlsp/ls.py` around lines 970 - 977, The wait-for-cross-file warm-up
is being done before the language server receives the didOpen notification; move
the sleep into the open-file context so indexing happens first: open the file
with self.open_file(relative_file_path) before calling
sleep(self._get_wait_time_for_cross_file_referencing()), then set
self._has_waited_for_cross_file_references = True after the sleep; this mirrors
how request_references() performs its warm-up and ensures cross-file indexing
has started before sending definition/implementation requests.
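The ordering concern generalizes: "wait once for warm-up, but only after the first didOpen" is a tiny state machine. A toy model (no real LSP; the wait is recorded as an event rather than an actual sleep) makes the before/after difference of the suggested fix visible:

```python
import contextlib

class Client:
    """Toy stand-in for the language-server client; names are illustrative."""

    def __init__(self) -> None:
        self.warmed_up = False
        self.events: list[str] = []

    @contextlib.contextmanager
    def open_file(self, path: str):
        self.events.append(f"didOpen:{path}")
        try:
            yield
        finally:
            self.events.append(f"didClose:{path}")

    def request_definition(self, path: str) -> None:
        with self.open_file(path):
            if not self.warmed_up:
                # Warm-up happens *after* didOpen, so the server has already
                # been told about at least one file when the wait begins.
                self.events.append("warm-up-wait")
                self.warmed_up = True
            self.events.append(f"definition:{path}")

c = Client()
c.request_definition("a.py")
assert c.events[:3] == ["didOpen:a.py", "warm-up-wait", "definition:a.py"]
c.request_definition("a.py")
assert "warm-up-wait" not in c.events[4:]  # the wait happens only once
```

Placing the flag update inside the open-file context preserves the "wait at most once" behavior while guaranteeing the wait never precedes the first `didOpen`.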
Summary by CodeRabbit
New Features
Improvements
Documentation
Tests
Chores