Conversation
| } | ||
|
|
||
| test "CLI sim command argument validation" { | ||
|
|
| try std.testing.expect(std.mem.eql(u8, startup_steps[3], "create_beam_node")); | ||
| try std.testing.expect(std.mem.eql(u8, startup_steps[4], "run_node")); | ||
| } | ||
|
|
There was a problem hiding this comment.
i am not sure what you are testing in this entire file, you assign something and then you test the assignment
pkgs/cli/src/main.zig
Outdated
| try beam_node_2.run(); | ||
| try clock.run(); | ||
| }, | ||
| .sim => |simcmd| { |
There was a problem hiding this comment.
we don't need this command, current beam command is already a 2 (out of 3 node) sim
point of this entire PR was to actually start the sim, actually test that the node is up, actually make an http request to metrics endpoint and see if we can get the metrics etc
|
Hello @g11tech I have made the changes |
gballet
left a comment
There was a problem hiding this comment.
I am running into an issue where the tests will hang the second time, because the connection is refused. I have been trying to understand what goes wrong, but haven't found out so far. Anyhow, I don't think we can merge this until this is fixed.
build.zig
Outdated
| const install_cli = b.addInstallArtifact(cli_exe, .{}); | ||
|
|
||
| // Create simtest step that runs all tests (unit + integration) | ||
| const simtests = b.step("simtest", "Run all tests including integration tests"); |
There was a problem hiding this comment.
this is not what should happen: there should be simtests, that are the integration tests, and then the tests, which are their separate executable. No need to bundle them together.
| if (retry_count % 20 == 0) { | ||
| std.debug.print("DEBUG: Connection attempt {} failed: {}\n", .{ retry_count, err }); | ||
| } |
There was a problem hiding this comment.
this is a problem: if it can't connect, then it will spin forever (which happens)
There was a problem hiding this comment.
max_wait_time is what will break the loop I think (!20 secs)
I have addressed this not sure if my solution is the best walk around for it but i had to use |
e9b0a9b to
e6251cb
Compare
| runs-on: ${{ matrix.os }} | ||
| strategy: | ||
| matrix: | ||
| os: [ubuntu-latest, macos-latest] |
There was a problem hiding this comment.
| os: [ubuntu-latest, macos-latest] | |
| os: [ubuntu-latest, macos-latest, windows-latest] |
I successfully built the new sim command and added a set of very fast unit tests for it. These tests confirm that the command correctly understands its arguments, like port numbers and network settings, and they run in under a second.
We are missing tests that check if the live web server started by the sim command is actually working. Our current tests don't make real HTTP requests to the /metrics and /health endpoints to verify they are online and responding correctly.
This gap exists because of a trade-off between speed and completeness. To properly test the web server, we would need to start the entire blockchain simulation, which is very slow.
In short, the command itself is fully built, but we don't have automated tests to prove its web server component works because those tests are too slow to run regularly.
closes #154