Lazy evaluation #582

Open

wants to merge 39 commits into master

Conversation

bgrant commented Aug 25, 2014

Based on #580.

For example:

        with context.lazy_eval():
            a = context.zeros((52, 62))
            b = context.ones((52, 62))
            c = context.ones((52, 62)) + 1
            d = (2*a + (3*b + 4*c)) / 2
            e = globalapi.negative(d * d)

Our implementation

On the client, when <context>.lazy == True:

  • Sends are intercepted and queued (in <context>._sendq)
  • Recvs return immediately, queueing a lazy placeholder object inside a returned proxy object
  • When <context>.sync() is called, both queues are sent to the engines (one queue per engine), and the client blocks on a real recv (see the sketch below).
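
In sketch form, the client-side bookkeeping might look something like this (the transport layer and class names here are placeholders, not the actual MPIContext code; only lazy, _sendq, and sync() come from this branch):

    class LazyPlaceholder(object):
        """Reserved slot for a result the engines have not produced yet."""
        def __init__(self):
            self.value = None

    class LazyClient(object):
        def __init__(self, transport, lazy=False):
            self.transport = transport   # stand-in for the eager send/recv layer
            self.lazy = lazy
            self._sendq = []             # deferred outgoing messages
            self._recvq = []             # placeholders waiting to be filled

        def send(self, msg):
            if self.lazy:
                self._sendq.append(msg)          # intercept and queue
            else:
                self.transport.send(msg)         # normal eager path

        def recv(self):
            if self.lazy:
                placeholder = LazyPlaceholder()  # handed back inside a proxy
                self._recvq.append(placeholder)
                return placeholder
            return self.transport.recv()

        def sync(self):
            # Ship both queues to the engines, then block on a real recv.
            self.transport.send(('process_message_queue',
                                 self._sendq, self._recvq))
            return self.transport.recv()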

On the engines, when a 'process_message_queue' message is received (containing the queues):

  • Each message in the sendq is processed, one at a time, and the placeholders from the recvq are used to feed values forward into the engine-side computation
  • Sends back to the client (return values) are queued and sent as a single message once the whole queue has been processed.
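
Roughly, the engine-side handler could be pictured like this (the dispatch callable and message structure are stand-ins; only the in-order replay and the single batched reply are described above):

    def process_message_queue(sendq, recvq, dispatch):
        """Replay a client's deferred messages on one engine."""
        replyq = []
        for msg in sendq:
            # Messages run in order; placeholders from the recvq let results
            # computed earlier in the replay feed into later operations.
            result = dispatch(msg)
            replyq.append(result)
        # Return values go back to the client in a single batched message,
        # not one reply per operation.
        return replyq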

On the client:

  • The client iterates through this return queue, and the lazy placeholders inside the originally reserved proxies are replaced by the real return values.
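
In sketch form, assuming the reply queue comes back in the same order the placeholders were reserved:

    def fill_placeholders(recvq, replyq):
        """Swap the real return values into the reserved placeholders."""
        for placeholder, value in zip(recvq, replyq):
            placeholder.value = value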

A note on my names

There's a Context attribute called lazy, and a Context method called lazy_eval(), a context manager (decorated with contextlib.contextmanager) that sets and unsets lazy under the hood. If you have better ideas for those names, let me know.
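
A minimal sketch of what such a context manager can look like (whether the real lazy_eval() also calls sync() on exit is an assumption here):

    from contextlib import contextmanager

    @contextmanager
    def lazy_eval(context):
        """Toggle context.lazy around a block of deferred operations."""
        previous = context.lazy
        context.lazy = True
        try:
            yield context
        finally:
            # Assumption: queued work is flushed when the block exits.
            context.sync()
            context.lazy = previous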

Benchmark

I also added a simple benchmark in examples/lazy_eval. I time a loop that computes tanh on a DistArray for a settable number of iterations, both in lazy mode and in the default eager mode. Lazy mode seems to beat eager, but not as dramatically as I would have expected. I should probably think through the benchmark more.
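
The benchmark itself lives in examples/lazy_eval; the shape of the timing loop is roughly the following (the globalapi.tanh spelling and the time.time() timing are illustrative, not a copy of the example):

    import time
    from distarray import globalapi

    def time_tanh_loop(context, darr, iterations, lazy=False):
        """Time repeated tanh applications in lazy or eager mode."""
        start = time.time()
        if lazy:
            with context.lazy_eval():
                for _ in range(iterations):
                    darr = globalapi.tanh(darr)
        else:
            for _ in range(iterations):
                darr = globalapi.tanh(darr)
        return time.time() - start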

bgrant added this to the 0.6 milestone Aug 25, 2014

bgrant commented Sep 3, 2014

This is ready for a look @kwmsmith.

Conflicts:
	distarray/globalapi/context.py
	distarray/globalapi/tests/test_context.py

kwmsmith commented Sep 3, 2014

Closes #301

        assert_array_equal(d.toarray(), a.toarray() + b.toarray() + c.toarray())

    def test_temporary_add(self):
        from pprint import pprint
Contributor

Here and below, looks like some pprints made it through.

Contributor Author

Fixed.

kwmsmith commented Sep 4, 2014

There are 21 skips in the IPython test run -- is that normal?

bgrant commented Sep 4, 2014

I did add a bunch of MPI-only tests to test lazy-eval...

@@ -860,7 +865,7 @@ def delete_key(self, key, targets=None):
         if MPIContext.INTERCOMM:
             self._send_msg(msg, targets=targets)

-    def __init__(self, targets=None):
+    def __init__(self, targets=None, lazy=False):
Contributor

So does this allow you to create a context that is always lazy?

Contributor Author

Yeah, or at least lazy from the get-go. It really just sets <context>.lazy = True right off the bat. You can still set it to False later if you want.
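
For illustration, a hypothetical usage sketch (import path inferred from the diff above; not code from this branch):

    from distarray.globalapi.context import MPIContext

    context = MPIContext(lazy=True)    # lazy from construction onward
    a = context.zeros((52, 62))
    b = context.ones((52, 62)) + a     # queued, not evaluated yet
    context.sync()                     # flush the queues
    context.lazy = False               # back to eager mode if desired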

bgrant commented Oct 16, 2015

Postponing to 0.7.

bgrant modified the milestones: 0.7, 0.6 Oct 16, 2015