
chore(deps): bump torch from 2.3.1+cxx11.abi to 2.7.1 in /backend/python/bark in the pip group #5671


Open · dependabot[bot] wants to merge 1 commit into master
Conversation

dependabot[bot] (Contributor) commented on behalf of GitHub · Jun 17, 2025

Bumps the pip group in /backend/python/bark with 1 update: torch.

Updates torch from 2.3.1+cxx11.abi to 2.7.1
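
In practice the diff itself is just the version pin in the backend's Python requirements; a minimal sketch, assuming the pin lives in /backend/python/bark/requirements.txt (the exact file layout, and any extra index URL the +cxx11.abi build relied on, are assumptions, not taken from this PR):

    # /backend/python/bark/requirements.txt (hypothetical excerpt)
    -torch==2.3.1+cxx11.abi
    +torch==2.7.1

After reinstalling, `python -c "import torch; print(torch.__version__)"` should report 2.7.1. Note that dropping the +cxx11.abi local-version suffix may also change which wheel index the package resolves from.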

Release notes

Sourced from torch's releases.

PyTorch 2.7.1 Release, bug fix release

This release is meant to fix the following issues (regressions / silent correctness):

Torch.compile

  • Fix excessive cudagraph re-recording for HF LLM models (#152287)
  • Fix torch.compile on some HuggingFace models (#151154)
  • Fix crash due to an exception raised inside torch.autocast (#152503)
  • Improve error logging in torch.compile (#149831)
  • Mark mutable custom operators as cacheable in torch.compile (#151194)
  • Implement a workaround for a graph break with older einops versions (#153925)
  • Fix an issue with tensor.view(dtype).copy_(...) (#151598)
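
As a quick illustration of the last item, tensor.view(dtype) reinterprets a tensor's storage under another dtype of the same element size, and copy_ then writes through that view; a minimal sketch (not the reproducer from #151598):

    import torch

    # Reinterpret a float32 buffer as int32 (both 4 bytes per element)
    # and copy integer data into it through the view.
    buf = torch.zeros(4, dtype=torch.float32)
    src = torch.arange(4, dtype=torch.int32)
    buf.view(torch.int32).copy_(src)
    print(buf.view(torch.int32))  # tensor([0, 1, 2, 3], dtype=torch.int32)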

Flex Attention

  • Fix assertion error due to inductor permuting inputs to flex attention (#151959)
  • Fix performance regression on nanogpt speedrun (#152641)
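
For readers unfamiliar with the API, flex attention takes a score_mod callback that rewrites individual attention scores; a minimal causal-masking sketch (arbitrary shapes, unrelated to the specific fixes above, and assuming eager mode accepts the same signature as the compiled path):

    import torch
    from torch.nn.attention.flex_attention import flex_attention

    # (batch, heads, seq_len, head_dim)
    q = k = v = torch.randn(1, 2, 16, 8)

    def causal(score, b, h, q_idx, kv_idx):
        # Send scores for future positions to -inf before softmax.
        return torch.where(q_idx >= kv_idx, score, -float("inf"))

    out = flex_attention(q, k, v, score_mod=causal)
    print(out.shape)  # torch.Size([1, 2, 16, 8])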

Distributed

  • Fix extra CUDA context created by barrier (#149144)
  • Fix an issue related to Distributed Fused Adam in ROCm/APEX when using the nccl_ub feature (#150010)
  • Add a workaround for a random hang in non-blocking API mode in NCCL 2.26 (#154055)
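
The barrier fix is NCCL-specific, but the collective itself is easy to exercise; a minimal single-process sketch using the CPU Gloo backend so it runs self-contained (the port is arbitrary):

    import torch.distributed as dist

    # One-rank process group on the Gloo (CPU) backend; the fix in
    # #149144 concerns which CUDA device an NCCL barrier() binds to.
    dist.init_process_group(
        backend="gloo",
        init_method="tcp://127.0.0.1:29500",
        rank=0,
        world_size=1,
    )
    dist.barrier()  # blocks until every rank (here, just one) arrives
    dist.destroy_process_group()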

MacOS

  • Fix macOS compilation error with Clang 17 (#151316)
  • Fix binary kernels producing incorrect results when one of the tensor arguments is a wrapped scalar on MPS devices (#152997)
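
A "wrapped scalar" is the 0-d tensor PyTorch creates when a Python number meets a tensor in a binary op; a minimal sketch on an MPS device, guarded so it also runs on machines without one:

    import torch

    if torch.backends.mps.is_available():
        x = torch.ones(3, device="mps")
        # 1.5 is wrapped into a 0-d tensor before the binary kernel runs.
        print(x + 1.5)  # expected: tensor([2.5, 2.5, 2.5], device='mps:0')
    else:
        print("MPS not available; skipping")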

Other

  • Improve PyTorch wheel size after the introduction of 128-bit vectorization (#148320) (#152396)
  • Fix fmsub function definition (#152075)
  • Fix floating point exception in torch.mkldnn_max_pool2d (#151848)
  • Fix abnormal inference output with the XPU:1 device (#153067)
  • Fix illegal instruction caused by grid_sample on Windows (#152613)
  • Fix ONNX decomposition not preserving custom CompositeImplicitAutograd ops (#151826)
  • Fix error with dynamic linking of the libgomp library (#150084)
  • Fix segfault in profiler with Python 3.13 (#153848)
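
For reference, grid_sample samples an input tensor at normalized grid coordinates; a minimal CPU sketch of the call shape involved in the Windows fix (arbitrary sizes, not the reproducer from #152613):

    import torch
    import torch.nn.functional as F

    inp = torch.randn(1, 1, 4, 4)   # (N, C, H_in, W_in)
    grid = torch.zeros(1, 2, 2, 2)  # (N, H_out, W_out, 2), coords in [-1, 1]
    out = F.grid_sample(inp, grid, mode="bilinear", align_corners=False)
    print(out.shape)  # torch.Size([1, 1, 2, 2])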

PyTorch 2.7.0 Release Notes

Highlights

... (truncated)

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore <dependency name> major version will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
  • @dependabot ignore <dependency name> minor version will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
  • @dependabot ignore <dependency name> will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
  • @dependabot unignore <dependency name> will remove all of the ignore conditions of the specified dependency
  • @dependabot unignore <dependency name> <ignore condition> will remove the specified ignore condition for that dependency

You can disable automated security fix PRs for this repo from the Security Alerts page.

Bumps the pip group in /backend/python/bark with 1 update: [torch](https://github.com/pytorch/pytorch).


Updates `torch` from 2.3.1+cxx11.abi to 2.7.1
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](https://github.com/pytorch/pytorch/commits/v2.7.1)

---
updated-dependencies:
- dependency-name: torch
  dependency-version: 2.7.1
  dependency-type: direct:production
  dependency-group: pip
...

Signed-off-by: dependabot[bot] <[email protected]>
dependabot[bot] added the dependencies and python labels on Jun 17, 2025

netlify bot commented Jun 17, 2025

Deploy Preview for localai ready!

🔨 Latest commit: fe755ad
🔍 Latest deploy log: https://app.netlify.com/projects/localai/deploys/6850e64fea06870008a87662
😎 Deploy Preview: https://deploy-preview-5671--localai.netlify.app

github-actions[bot] enabled auto-merge (squash) on June 17, 2025 04:13