We found that one cause of libtask's performance decline is extra allocation: calling a function as an Instruction allocates more than calling it directly. Here's a simple demonstration:
using BenchmarkTools

function f(a, b, c)
    return a * b + c
end

# call f directly
function f1()
    a, b, c = 1, 2, 3
    f(a, b, c)
end

# emulate calling f in an Instruction:
# the arguments live in a Dict and are collected before the call
const data = Dict(:a => 1, :b => 2, :c => 3)
const args = [:a, :b, :c]

function f2()
    as = map(x -> data[x], args)
    f(as...)
end

@btime f1()
@btime f2()
In an Instruction, the arguments are stored in a dictionary (the bindings), so we have to pull them out and collect them into a vector/tuple before calling the original function; this collection step is where the extra allocation is introduced.
Is there a way to avoid this extra allocation?
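For illustration, here is a minimal sketch of one direction we could explore (the names targs and f3 are made up for this example, and this is not libtask's actual code): if the argument names were held in a Tuple rather than a Vector, map over the tuple would return a tuple, so collecting the arguments should not allocate a heap array. It reuses the f and data definitions from the benchmark above.

# hypothetical variant: argument names stored as a tuple of symbols
const targs = (:a, :b, :c)

function f3()
    as = map(x -> data[x], targs)  # returns an NTuple{3, Int} instead of a Vector
    f(as...)                       # splatting a tuple avoids the array splat path
end

@btime f3()

Whether something like this fits the real Instruction code path (where the number and names of arguments vary per instruction) is exactly the open question.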