Right now, any contract that must lock a list of keys to succeed ends up writing its own version of the following boilerplate (pseudocode, of course):
    lt = READ  # or WRITE
    for each k in keys:
        l = try_lock(k, lt)
        if l is not successful:
            return failure
This isn't particularly hard to write, or to write correctly, but it costs one execution context switch per key (from the runner to the broker(?) and back), and contracts will likely need it frequently.
A hypothetical try_batch_lock(), which takes a vector of keys in place of a single key, would reduce this boilerplate to the much simpler:
    lt = READ  # or WRITE
    if try_batch_lock(keys, lt) is not successful:
        return failure
Moreover, this could dramatically reduce the roundtrip count between the runner and the external execution environment.
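To make the intended semantics concrete, here is a minimal Python sketch of what try_batch_lock() could do on the broker side. The Broker class, its try_lock()/unlock() methods, and the lock-stacking rules are all hypothetical stand-ins, not the real broker API; the point is the all-or-nothing behavior: on the first failed key, every lock acquired so far is released before reporting failure.

```python
from enum import Enum


class LockType(Enum):
    READ = "read"
    WRITE = "write"


class Broker:
    """Toy in-memory lock table standing in for the real broker (hypothetical)."""

    def __init__(self):
        # key -> (LockType held, number of holders)
        self._locks = {}

    def try_lock(self, key, lt):
        held = self._locks.get(key)
        if held is None:
            self._locks[key] = (lt, 1)
            return True
        held_type, count = held
        # Shared READ locks may stack; any combination involving WRITE conflicts.
        if held_type is LockType.READ and lt is LockType.READ:
            self._locks[key] = (LockType.READ, count + 1)
            return True
        return False

    def unlock(self, key):
        lt, count = self._locks[key]
        if count == 1:
            del self._locks[key]
        else:
            self._locks[key] = (lt, count - 1)


def try_batch_lock(broker, keys, lt):
    """All-or-nothing batch lock: acquire every key, or acquire none.

    On the first failure, release the locks acquired so far and
    report failure, so the caller never holds a partial lock set.
    """
    acquired = []
    for k in keys:
        if broker.try_lock(k, lt):
            acquired.append(k)
        else:
            for a in acquired:  # roll back partial acquisition
                broker.unlock(a)
            return False
    return True
```

One design choice worth noting: rolling back on partial failure keeps the call idempotent from the contract's point of view, and because the whole loop runs inside the broker, the runner pays one roundtrip regardless of how many keys are in the batch.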