
Support for PyTorch's optimize_for_inference mode #8499

@ananthsub

🚀 Feature

Leverage PyTorch's optimize_for_inference mode for performance benefits during model evaluation and inference

PyTorch has recently introduced an experimental API, optimize_for_inference.
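
For context, a minimal sketch of how the experimental API is used on its own, assuming the torch.jit form of the call (the tiny model and input here are placeholders, not anything from Lightning):

```python
import torch

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 4)

    def forward(self, x):
        return self.linear(x)

# optimize_for_inference expects a ScriptModule in eval mode; it freezes the
# module and applies inference-specific graph rewrites before execution.
model = TinyModel().eval()
scripted = torch.jit.script(model)
optimized = torch.jit.optimize_for_inference(scripted)

out = optimized(torch.randn(2, 16))
```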

Motivation

Reap performance improvements

Pitch

This could be used during Trainer.predict in place of the existing no_grad guard, when optimize_for_inference is available: https://github.com/PyTorchLightning/pytorch-lightning/blob/4c79b3a5b343866217784c66d122819c59a92c1d/pytorch_lightning/trainer/trainer.py#L1078-L1083
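
A rough sketch of what that availability check could look like in a predict path; the helper name and the surrounding loop are hypothetical illustrations, not the actual Trainer code:

```python
import torch

def _apply_inference_optimizations(module: torch.nn.Module) -> torch.nn.Module:
    # Hypothetical helper (not actual Lightning code): script the module and
    # apply the experimental optimization when the installed torch exposes it;
    # otherwise fall back to the plain eval-mode module.
    if hasattr(torch.jit, "optimize_for_inference"):
        return torch.jit.optimize_for_inference(torch.jit.script(module.eval()))
    return module.eval()

model = _apply_inference_optimizations(torch.nn.Linear(16, 4))
batches = [torch.randn(2, 16) for _ in range(3)]

# Whether the no_grad guard can then be replaced or dropped is exactly the
# question this issue raises; keeping it here is the conservative default.
with torch.no_grad():
    predictions = [model(batch) for batch in batches]
```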

Alternatives

Keep as is

Additional context

If you enjoy PL, check out our other projects:

  • Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
  • Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
  • Bolts: Pretrained SOTA deep learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
  • Lightning Transformers: A flexible interface for high-performance research using SOTA Transformers, leveraging PyTorch Lightning, Transformers, and Hydra.

Labels

feature (Is an improvement or enhancement), help wanted (Open to be worked on)
