🚀 Feature
Batch-Processing for Noise Tunnel
Motivation
Methods like IntegratedGradients, which perform multiple steps per input, have an internal_batch_size argument. As far as I can see, NoiseTunnel lacks an equivalent. I think a similar argument would be quite useful, because NoiseTunnel expands each input into many noisy copies and underlying methods like Saliency cannot split that expanded batch themselves. As a result, OOM issues arise with NoiseTunnel + Saliency (and other combinations); a rough illustration is below.
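For illustration, here is roughly how NoiseTunnel is used with Saliency today (the model is just a toy stand-in; the Captum calls reflect the API as I understand it). All nt_samples noisy copies of every input go through the model in a single pass, which is where the OOM comes from:

```python
import torch
from captum.attr import NoiseTunnel, Saliency

# Toy model standing in for a real network; with real models and large
# inputs, the expanded batch below is what runs out of memory.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 10),
)

inputs = torch.randn(8, 3, 224, 224)  # batch of 8 images
nt = NoiseTunnel(Saliency(model))
attr = nt.attribute(inputs, nt_type="smoothgrad",
                    nt_samples=50, stdevs=0.1, target=0)
# Effective forward/backward batch: 8 * 50 = 400 samples in one pass,
# since Saliency has no internal_batch_size to split it up.
```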
Pitch
Add a batch_size_nt argument that controls NoiseTunnel's internal batch size, i.e. how many noisy samples are fed to the underlying method per forward/backward pass (see the sketch below).
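A minimal sketch of the behaviour I have in mind, assuming the proposed batch_size_nt argument. The helper below is hypothetical and not part of Captum; it just chunks the SmoothGrad-style averaging so the underlying Saliency call never sees more than batch_size_nt noisy copies at a time:

```python
import torch
from captum.attr import Saliency

def smoothgrad_in_batches(saliency: Saliency, inputs: torch.Tensor,
                          nt_samples: int = 50, batch_size_nt: int = 10,
                          stdev: float = 0.1, target: int = 0) -> torch.Tensor:
    """Hypothetical chunked SmoothGrad: average Saliency attributions over
    nt_samples noisy copies, processing at most batch_size_nt copies per pass."""
    total = torch.zeros_like(inputs)
    done = 0
    while done < nt_samples:
        n = min(batch_size_nt, nt_samples - done)
        # Tile the batch n times and add Gaussian noise, as NoiseTunnel does,
        # but only for this chunk. (A per-example target tensor would need
        # tiling too; an int target applies to all copies.)
        noisy = inputs.repeat(n, *([1] * (inputs.dim() - 1)))
        noisy = noisy + stdev * torch.randn_like(noisy)
        attr = saliency.attribute(noisy, target=target)
        # Fold the per-copy attributions back onto the original batch and sum.
        total = total + attr.view(n, *inputs.shape).sum(dim=0)
        done += n
    return total / nt_samples
```

A built-in batch_size_nt would do essentially this inside NoiseTunnel.attribute, and would also have to handle smoothgrad_sq/vargrad, where the chunk results need to be combined via sums and sums of squares rather than a plain mean.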
Alternatives
Add an internal_batch_size argument to all attribution methods, but this would be a lot of work.
Additional context
None