Conversation

pbelevich
Contributor

@pbelevich pbelevich commented Jan 16, 2020

The purpose of this PR is to enable PyTorch dispatching on at::Generator* parameters and demonstrate how it can be used in cpp extensions to implement custom RNG.

  1. CustomRNGKeyId value added to DispatchKey enum and DispatchKeySet key_set_ added to at::Generator
  2. The overloaded operator()(at::Generator* gen) added to MultiDispatchKeySet.
  3. The existing CPUGenerator and CUDAGenerator classes are supplied with CPUTensorId and CUDATensorId dispatch keys
  4. The implementation of CPU's cauchy_kernel (as an example, because it has already been moved to ATen) was templatized and moved to ATen/native/cpu/DistributionTemplates.h to make it available to cpp extensions
  5. Minor CMake changes to make native/cpu headers available to cpp extensions
  6. A RegisterCustomRNG test that demonstrates how a CustomCPUGenerator class can be implemented and how a custom_rng_cauchy_ native function can be registered to handle Tensor::cauchy_ calls.


Differential Revision: D19604558

pbelevich added a commit that referenced this pull request Jan 16, 2020
ghstack-source-id: 6dda064
Pull Request resolved: #32325
@kostmo
Member

kostmo commented Jan 17, 2020

💊 CircleCI build failures summary and remediations

As of commit 11be126:

None of the build failures appear to be your fault.

  • 1/2 broken upstream at merge base c729614 since Jan 29

    Please rebase on the viable/strict branch.

    If your commit is newer than viable/strict, you can try basing on an older, stable commit:

    git fetch origin viable/strict
    git rebase --onto viable/strict $(git merge-base origin/master HEAD)
    

    If your commit is older than viable/strict:

    git fetch origin viable/strict
    git rebase viable/strict
    

    Check out the recency history of this "viable master" tracking branch.

  • 1/2 recognized as flaky ❄️


Detailed failure analysis

One may explore the probable reasons each build failed interactively on the Dr. CI website.

❄️ 1 failure recognized as flaky

The following build failures have been detected as flaky and may not be your fault:

See CircleCI build caffe2_onnx_py2_gcc5_ubuntu16_04_test (1/1)

Step: "Test" ❄️

Jan 29 05:41:45 E RuntimeError: required keyword attribute 'Y_scale' is undefined
Jan 29 05:41:45 k = 'Y_scale' 
Jan 29 05:41:45  
Jan 29 05:41:45     def _node_getitem(self, k): 
Jan 29 05:41:45         r""" 
Jan 29 05:41:45         Accessor for attributes of a node which is polymorphic over 
Jan 29 05:41:45         return type. 
Jan 29 05:41:45      
Jan 29 05:41:45         NB: This is monkey-patched onto Node. 
Jan 29 05:41:45         """ 
Jan 29 05:41:45 >       sel = self.kindOf(k) 
Jan 29 05:41:45 E       RuntimeError: required keyword attribute 'Y_scale' is undefined 
Jan 29 05:41:45  
Jan 29 05:41:45 ../.local/lib/python2.7/site-packages/torch/onnx/utils.py:843: RuntimeError 
Jan 29 05:41:45 =============================== warnings summary =============================== 
Jan 29 05:41:45 /usr/local/lib/python2.7/dist-packages/scipy/_lib/_numpy_compat.py:10 
Jan 29 05:41:45   /usr/local/lib/python2.7/dist-packages/scipy/_lib/_numpy_compat.py:10: DeprecationWarning: Importing from numpy.testing.nosetester is deprecated since 1.15.0, import from numpy.testing instead. 
Jan 29 05:41:45     from numpy.testing.nosetester import import_nose 
Jan 29 05:41:45  
Jan 29 05:41:45 /usr/local/lib/python2.7/dist-packages/scipy/stats/morestats.py:12 
Jan 29 05:41:45   /usr/local/lib/python2.7/dist-packages/scipy/stats/morestats.py:12: DeprecationWarning: Importing from numpy.testing.decorators is deprecated since numpy 1.15.0, import from numpy.testing instead. 
Jan 29 05:41:45     from numpy.testing.decorators import setastest 

🚧 1 upstream failure recognized by patterns:

These builds matched patterns, but were probably caused by upstream breakages:



pbelevich added a commit that referenced this pull request Jan 17, 2020
ghstack-source-id: 1432b23
Pull Request resolved: #32325
pbelevich added a commit that referenced this pull request Jan 19, 2020
ghstack-source-id: fd6981d
Pull Request resolved: #32325
@pbelevich pbelevich requested review from ezyang and nairbv January 19, 2020 20:28
@ezyang
Contributor

ezyang commented Jan 21, 2020

I know what's going on in this PR, but others won't necessarily. Try to add more of a description to your PR.

@@ -34,7 +34,7 @@ if(USE_ROCM)
endif()

# NB: If you edit these globs, you'll have to update setup.py package_data as well
-FILE(GLOB base_h "*.h" "detail/*.h" "cpu/*.h")
+FILE(GLOB base_h "*.h" "detail/*.h" "cpu/*.h" "cpu/vec256/*.h" "quantized/*.h")
Contributor

Interesting. Does this fix some recompilation problems?

Contributor

It's better to put unrelated changes like this in a separate PR

Contributor Author

We need native/cpu/* and other headers to be available in cpp extensions because we are going to compile templatized distribution kernels

@ezyang
Contributor

ezyang commented Jan 22, 2020

Looks quite reasonable! Needs a test!

pbelevich added a commit that referenced this pull request Jan 24, 2020
ghstack-source-id: 62da0f5
Pull Request resolved: #32325
Contributor

@ezyang ezyang left a comment


The WIP tag was removed but many of the requested changes have not been made.

@pbelevich pbelevich changed the title from Custom RNG DispatchKey to [WIP] Custom RNG DispatchKey on Jan 27, 2020
pbelevich added a commit that referenced this pull request Jan 27, 2020
ghstack-source-id: 716298d
Pull Request resolved: #32325
@pbelevich pbelevich requested a review from ezyang January 27, 2020 20:44
@ezyang
Contributor

ezyang commented Jan 28, 2020

Not WIP now I guess?

~CustomCPUGenerator() = default;
uint32_t random() { return 42; }
uint64_t random64() { return 42; }
void set_current_seed(uint64_t seed) override { throw "not implemented"; }
Contributor

I don't think this is actually valid C++ :>

return self;
}

template <typename T, typename RNG>
Contributor

Err, are these copy pastes from the template::? Why not just call native::templates::cauchy_kernel directly?

Contributor

@ezyang ezyang left a comment


LGTM

@pbelevich pbelevich changed the title from [WIP] Custom RNG DispatchKey to Custom RNG DispatchKey on Jan 28, 2020
pbelevich added a commit that referenced this pull request Jan 28, 2020
ghstack-source-id: cab5acc
Pull Request resolved: #32325
pbelevich added a commit that referenced this pull request Jan 28, 2020
ghstack-source-id: d1a8c15
Pull Request resolved: #32325
pbelevich added a commit that referenced this pull request Jan 29, 2020
ghstack-source-id: 82ac88c
Pull Request resolved: #32325
pbelevich added a commit that referenced this pull request Jan 29, 2020
ghstack-source-id: 99c5c54
Pull Request resolved: #32325
wuhuikx pushed a commit to wuhuikx/pytorch that referenced this pull request Jan 30, 2020
Summary:
Pull Request resolved: pytorch#32325


Test Plan: Imported from OSS

Differential Revision: D19604558

Pulled By: pbelevich

fbshipit-source-id: 2619f14076cee5742094a0be832d8530bba72728
@facebook-github-bot
Contributor

@pbelevich merged this pull request in b1c85dd.

@facebook-github-bot facebook-github-bot deleted the gh/pbelevich/66/head branch February 2, 2020 15:18
ttumiel pushed a commit to ttumiel/pytorch that referenced this pull request Mar 4, 2020