Use tokenizer for extraction; add benchmark #424

Merged
mre merged 77 commits into master from extractor on Dec 16, 2021

Conversation

@mre (Member) commented Dec 15, 2021

This avoids creating a DOM tree for link extraction and instead uses a TokenSink for on-the-fly extraction.
In my hyperfine benchmarks it was about 10-25% faster than master:

Old: 4.557 s ± 0.404 s
New: 3.832 s ± 0.131 s

The performance fluctuates a little less as well.
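The on-the-fly idea can be sketched without html5ever: scan the input once and emit candidate links as they appear, instead of building a tree first. The function below is a simplified, std-only illustration; lychee's actual implementation feeds html5ever tokens into a `TokenSink`.

```rust
/// Extract the values of `href="..."` attributes from an HTML fragment
/// in a single pass, without constructing a DOM tree. This is only a
/// sketch of the tokenizer approach, not lychee's real extractor.
fn extract_hrefs(html: &str) -> Vec<String> {
    let mut links = Vec::new();
    let mut rest = html;
    while let Some(pos) = rest.find("href=\"") {
        let start = pos + "href=\"".len();
        if let Some(end) = rest[start..].find('"') {
            links.push(rest[start..start + end].to_string());
            rest = &rest[start + end..];
        } else {
            break;
        }
    }
    links
}

fn main() {
    let html = r#"<a href="https://example.com">x</a> <a href="/docs">y</a>"#;
    assert_eq!(extract_hrefs(html), vec!["https://example.com", "/docs"]);
}
```

The key property is the same as with a `TokenSink`: links are emitted as the scanner passes over them, so no intermediate tree has to be allocated.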

I also added a few more element/attribute pairs which contain links according to the HTML spec. These occur very rarely, but it's good to parse them for completeness' sake.

Furthermore, I tried to clean up a lot of papercuts around our types. We now differentiate between a RawUri (stringly-typed) and a Uri, which is a properly parsed URI type.
The extractor now only deals with extracting RawUris, while the collector creates the request objects.
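The RawUri / Uri split can be sketched as follows. Field and method names here are assumptions for illustration, not lychee's exact API; the real Uri wraps a properly parsed type rather than the minimal scheme check shown.

```rust
/// A raw, unvalidated link as found in the document (stringly-typed).
/// Field names are hypothetical, for illustration only.
#[derive(Debug, Clone)]
struct RawUri {
    text: String,
    /// Element the link was found in, e.g. `a` (if known).
    element: Option<String>,
    /// Attribute the link was found in, e.g. `href` (if known).
    attribute: Option<String>,
}

/// A validated URI. lychee wraps a properly parsed URI type here;
/// this sketch only does a minimal scheme check.
#[derive(Debug, PartialEq)]
struct Uri {
    url: String,
}

impl TryFrom<RawUri> for Uri {
    type Error = String;

    fn try_from(raw: RawUri) -> Result<Self, Self::Error> {
        // Minimal stand-in for real URI parsing.
        if raw.text.starts_with("http://") || raw.text.starts_with("https://") {
            Ok(Uri { url: raw.text })
        } else {
            Err(format!("not an absolute URI: {}", raw.text))
        }
    }
}

fn main() {
    let raw = RawUri {
        text: "https://example.com".into(),
        element: Some("a".into()),
        attribute: Some("href".into()),
    };
    let uri = Uri::try_from(raw).unwrap();
    assert_eq!(uri, Uri { url: "https://example.com".to_string() });
}
```

The benefit of the split is that the extractor can stay cheap and fallible-free (it just collects strings), while validation and request construction happen once, in the collector.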

If someone wants to look into the code, I'd be happy for a review.
(I'll squash the commits before I merge of course. ^^)

mre and others added 30 commits September 13, 2021 19:57
Previously we collected all inputs in one vector
before checking the links, which is not ideal.
Especially when reading many inputs (e.g. by using a glob pattern),
this could cause issues like running out of file handles.

By moving to streams we avoid that scenario. This is also the first
step towards improving performance for many inputs.
Because we can't know the number of links without blocking.
To stay as close as possible to the pre-stream behaviour, we want to stop processing
as soon as an Err value appears in the stream. This is easiest when the
stream is consumed in the main thread.
Previously, the stream was consumed in a tokio task and the main thread
waited for responses.
Now, a tokio task waits for responses (and displays them/registers
response stats) and the main thread sends links to the ClientPool.
To ensure that the main thread waits for all responses to have arrived
before finishing the ProgressBar and printing the stats, it waits for
the show_results_task to finish.
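The inverted design described above (main thread sends links, a background task collects responses and only then are the stats printed) can be sketched with std threads and channels; lychee uses a tokio task and async channels instead, and `run_pipeline` is a hypothetical name.

```rust
use std::sync::mpsc;
use std::thread;

/// Sketch: the main thread sends links, while a background task receives
/// responses and registers stats. A thread plus an mpsc channel stand in
/// for lychee's tokio task and async channel.
fn run_pipeline(links: &[&str]) -> usize {
    let (tx, rx) = mpsc::channel::<String>();

    // "show_results_task": waits for responses and registers stats
    // (here it just counts them).
    let show_results_task = thread::spawn(move || {
        let mut checked = 0;
        for _response in rx {
            checked += 1;
        }
        checked
    });

    // Main thread: send links to the client pool (simulated here by
    // echoing the link back as a "response").
    for link in links {
        tx.send((*link).to_string()).unwrap();
    }
    drop(tx); // close the channel so the task can finish

    // Wait for all responses to have arrived before printing stats
    // and finishing the progress bar.
    show_results_task.join().unwrap()
}

fn main() {
    assert_eq!(run_pipeline(&["https://example.com", "https://example.org"]), 2);
}
```

Joining on the background task at the end is what guarantees the stats are complete before they are printed.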
Replaced with `futures::StreamExt::for_each_concurrent`.
Tendril is not Send by default
@mre (Member, Author) commented Dec 16, 2021

The flamegraph shows a much flatter call stack. (It's interactive, but you have to download it and open it in a browser.)

@mre mre merged commit 166c86c into master Dec 16, 2021
@mre mre deleted the extractor branch December 16, 2021 17:45