fix: pathological slow case in bySource #42
Conversation
Hacked together a bit to fix jridgewell#41. The use of a Proxy here is fairly cursed, but I wanted to validate whether this approach would fix it, which it does -- tests pass, and the 24s case drops to 90ms on my machine. Feel free to rewrite or suggest a cleaner approach 😛
    const segs = lines[sourceLine];
    if (!segs) continue;

    segs.sort((a, b) => a[0] - b[0]);
If I'm not mistaken, we can just stop here, and everything after this in the loop can be removed.
We don't need the memos, the binary search, or the Proxy. Those were just an optimization I thought would be useful, assuming the map didn't backtrack much. But that's obviously not the case: I should have considered what } or ) does on a minified input. Those seem to cause O(n^2) insert behavior.
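The change being discussed can be sketched roughly as follows. This is an illustrative reconstruction, not the actual trace-mapping code: the segment shape and the buildLines helper are hypothetical, but the two-phase structure (append unsorted, then one sort per line, as in the snippet above) is the approach under review. Keeping each line sorted on every insert degrades to O(n^2) when the map backtracks; a single post-sort per line is O(n log n).

```typescript
// Illustrative segment shape: [sourceColumn, generatedIndex].
type SourceSegment = [number, number];

// Hypothetical helper mirroring the post-sort approach from the review:
// Phase 1 appends segments in whatever order the map provides (no
// per-insert sorting, so backtracking input is harmless), and Phase 2
// sorts each source line once at the end.
function buildLines(
  inserts: Array<{ line: number; seg: SourceSegment }>,
): SourceSegment[][] {
  const lines: SourceSegment[][] = [];
  for (const { line, seg } of inserts) {
    (lines[line] ||= []).push(seg);
  }
  for (const segs of lines) {
    if (!segs) continue; // sparse lines are skipped, as in the snippet
    segs.sort((a, b) => a[0] - b[0]);
  }
  return lines;
}
```

With this shape there is nothing left for the memos, binary-search insert, or Proxy to do: every line ends up sorted exactly once, regardless of how often the input backtracks.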
I opened and merged #43 to add benchmarks for generatedPositionFor. Using that, with a post-sort (essentially what you've written here), I get:
Generated Positions init:
trace-mapping: decoded generatedPositionFor x 52.18 ops/sec ±3.17% (64 runs sampled)
trace-mapping: encoded generatedPositionFor x 30.68 ops/sec ±4.93% (53 runs sampled)
trace-mapping latest: decoded generatedPositionFor x 0.03 ops/sec ±26.98% (5 runs sampled)
trace-mapping latest: encoded generatedPositionFor x 0.03 ops/sec ±1.17% (5 runs sampled)
Fastest is trace-mapping: decoded generatedPositionFor
Generated Positions speed:
trace-mapping: decoded generatedPositionFor x 83,037,060 ops/sec ±0.95% (96 runs sampled)
trace-mapping: encoded generatedPositionFor x 84,660,759 ops/sec ±0.49% (99 runs sampled)
trace-mapping latest: decoded generatedPositionFor x 85,427,139 ops/sec ±1.29% (97 runs sampled)
trace-mapping latest: encoded generatedPositionFor x 84,919,638 ops/sec ±0.70% (100 runs sampled)
Fastest is trace-mapping latest: decoded generatedPositionFor
So something like a 1022x speedup? That's pretty awesome.
Ah, I see! Done. That makes it more like a 1400x speedup on my machine 😁
Generated Positions init:
trace-mapping: decoded generatedPositionFor x 73.96 ops/sec ±2.48% (65 runs sampled)
trace-mapping latest: decoded generatedPositionFor x 0.05 ops/sec ±3.40% (5 runs sampled)
Force-pushed from 41b5691 to e2f5db1
This is faster for init and access. And it doesn't seem to hurt memory.
I made a small modification to use a real array, and it helped considerably.

Awesome, thanks for the merge!
I wasn't sure if (given we do this approach) you wanted to duplicate the relevant functions or introduce a common wrapper for all their usages.