connor4312 (Contributor) commented Sep 9, 2025

Hacked together a bit to fix #41. The use of a proxy here is fairly
cursed, but I wanted to validate whether this approach would fix it,
which it does -- tests pass, and the 24s case drops to 90ms on my machine.

Feel free to rewrite or suggest a cleaner approach 😛
I wasn't sure whether (given we take this approach) you wanted to duplicate
the relevant functions or introduce a common wrapper for all their usages.

const segs = lines[sourceLine];
if (!segs) continue;

segs.sort((a, b) => a[0] - b[0]);
jridgewell (Owner) commented:
If I'm not mistaken, we can just stop here, and everything after this in the loop can be removed.

We don't need the memos or the binary search (or the Proxy); that was just an optimization I thought would be useful, assuming the map didn't backtrack much. But that's obviously not the case -- I should have considered what a } or ) does on a minified input. These seem to cause O(n²) insert behavior.
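To make the O(n²) point concrete, here is a hypothetical sketch (not the actual trace-mapping source) contrasting the two strategies for building a sorted per-line segment list. Minified output often emits a segment for a trailing } or ) whose column sorts *before* earlier segments, so sorted insertion keeps shifting nearly the whole array, while append-then-sort (the approach in this PR's snippet above) pays a single sort per line:

```javascript
// Strategy A (old idea): keep the array sorted on every insert.
// Backtracking columns force a long walk + splice shift each time -> O(n^2).
function insertSorted(segs, seg) {
  let i = segs.length;
  while (i > 0 && segs[i - 1][0] > seg[0]) i--;
  segs.splice(i, 0, seg); // splice shifts every later element
}

// Strategy B (this PR): append everything, then sort once per line.
function buildThenSort(inputs) {
  const segs = [];
  for (const seg of inputs) segs.push(seg); // O(1) appends
  segs.sort((a, b) => a[0] - b[0]);         // single O(n log n) pass
  return segs;
}
```

Both produce the same sorted result; only the cost profile differs when the input backtracks.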

jridgewell (Owner) commented:
I opened and merged #43 to add benchmarks for generatedPositionFor. Using that, with a post-sort (essentially what you've written here), I get:

Generated Positions init:
trace-mapping:    decoded generatedPositionFor x 52.18 ops/sec ±3.17% (64 runs sampled)
trace-mapping:    encoded generatedPositionFor x 30.68 ops/sec ±4.93% (53 runs sampled)
trace-mapping latest:    decoded generatedPositionFor x 0.03 ops/sec ±26.98% (5 runs sampled)
trace-mapping latest:    encoded generatedPositionFor x 0.03 ops/sec ±1.17% (5 runs sampled)
Fastest is trace-mapping:    decoded generatedPositionFor

Generated Positions speed:
trace-mapping:    decoded generatedPositionFor x 83,037,060 ops/sec ±0.95% (96 runs sampled)
trace-mapping:    encoded generatedPositionFor x 84,660,759 ops/sec ±0.49% (99 runs sampled)
trace-mapping latest:    decoded generatedPositionFor x 85,427,139 ops/sec ±1.29% (97 runs sampled)
trace-mapping latest:    encoded generatedPositionFor x 84,919,638 ops/sec ±0.70% (100 runs sampled)
Fastest is trace-mapping latest:    decoded generatedPositionFor

So something like a 1022x speedup? That's pretty awesome.
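For reference, the quoted figure falls straight out of the "init" benchmark numbers above (encoded case); the constants here are copied from that output:

```javascript
// Speedup = ops/sec with this PR divided by ops/sec on the latest release,
// for the encoded generatedPositionFor init benchmark quoted above.
const newOpsPerSec = 30.68; // trace-mapping: encoded generatedPositionFor
const oldOpsPerSec = 0.03;  // trace-mapping latest: encoded generatedPositionFor
const speedup = newOpsPerSec / oldOpsPerSec;
console.log(Math.floor(speedup)); // 1022
```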

connor4312 (Contributor, author) commented:

Ah, I see! Done, makes it more like a 1400x speedup on my machine 😁

Generated Positions init:
trace-mapping:    decoded generatedPositionFor x 73.96 ops/sec ±2.48% (65 runs sampled)
trace-mapping latest:    decoded generatedPositionFor x 0.05 ops/sec ±3.40% (5 runs sampled)

This is faster for init and access. And it doesn't seem to hurt memory.
jridgewell (Owner) commented:

I made a small modification to use a real array, and it helped considerably:

# old
Memory Usage:
trace-mapping decoded       31660064 bytes
trace-mapping encoded       69702160 bytes

Generated Positions init:
trace-mapping:    decoded generatedPositionFor x 45.59 ops/sec ±4.46% (61 runs sampled)
trace-mapping:    encoded generatedPositionFor x 27.54 ops/sec ±7.87% (50 runs sampled)

Generated Positions speed:
trace-mapping:    decoded generatedPositionFor x 18,489,273 ops/sec ±0.73% (96 runs sampled)
trace-mapping:    encoded generatedPositionFor x 17,975,730 ops/sec ±1.67% (90 runs sampled)
# new
Memory Usage:
trace-mapping decoded       31765552 bytes
trace-mapping encoded       68249808 bytes

Generated Positions init:
trace-mapping:    decoded generatedPositionFor x 51.71 ops/sec ±6.41% (57 runs sampled)
trace-mapping:    encoded generatedPositionFor x 29.62 ops/sec ±6.67% (54 runs sampled)

Generated Positions speed:
trace-mapping:    decoded generatedPositionFor x 99,559,487 ops/sec ±0.43% (95 runs sampled)
trace-mapping:    encoded generatedPositionFor x 100,778,769 ops/sec ±0.32% (95 runs sampled)
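A plausible illustration of why swapping the Proxy for a real array helps the access benchmark (a hypothetical sketch, not the actual diff): a plain dense array read is a cheap indexed load the engine can inline, while a Proxy routes every read through a trap function.

```javascript
// A packed, dense array: indexed reads are direct loads.
const dense = new Array(1000).fill(0).map((_, i) => i);

// A Proxy answering the same queries: every read pays a trap call plus
// property-key coercion (array indices arrive as strings).
const proxied = new Proxy({}, {
  get(_target, prop) {
    return Number(prop);
  },
});

// Both structures answer the same lookup...
console.log(dense[42], proxied[42]); // 42 42
// ...but only the dense array avoids the per-access trap overhead.
```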

@jridgewell jridgewell merged commit a8e7a23 into jridgewell:main Sep 10, 2025
1 check passed
connor4312 (Contributor, author) commented:

Awesome, thanks for the merge!

Successfully merging this pull request may close these issues.

trace-mapping is slow for maps from minified sources