
Commit 513713f

committed
upgrade readme and version
1 parent ff69c06 commit 513713f

File tree

2 files changed: +23 -2 lines changed


README.md

Lines changed: 22 additions & 1 deletion
@@ -39,7 +39,28 @@
## News 📢

-* 🔥 **Fully compatible with 🤗HuggingFace**, it enables seamless execution of any Transformers/Diffusers models on MindSpore across all hardware platforms (GPU/Ascend/CPU).
+* **MindNLP Core supports PyTorch compatibility:** To meet ecosystem compatibility requirements, we provide the `mindnlp.core` module, which is compatible with PyTorch interfaces. Built on MindSpore's foundational APIs and operators, it lets you develop models with PyTorch-like syntax. It can also take over the torch interfaces through a proxy, so MindSpore acceleration on Ascend hardware can be used without any code modifications. Usage is as follows:
+
+```python
+import mindnlp  # importing mindnlp enables the proxy automatically
+import torch
+from torch import nn
+
+# all torch.xx APIs are mapped to mindnlp.core.xx
+net = nn.Linear(10, 5)
+x = torch.randn(3, 10)
+out = net(x)
+print(out.shape)
+# core.Size([3, 5])
+```
+
+It is particularly noteworthy that MindNLP supports several features not yet available in MindSpore, which enables better support for model serialization, heterogeneous computing, and other scenarios:
+1. Dispatch mechanism support: operators are dispatched to the appropriate backend based on `Tensor.device`.
+2. Meta device support: allows shape inference without performing actual computation.
+3. NumPy as CPU backend: supports using NumPy as a CPU backend for acceleration.
+4. `Tensor.to` for heterogeneous data movement: moves data across different devices with `Tensor.to`.
+
+* 🔥 **Fully compatible with 🤗HuggingFace:** It enables seamless execution of any Transformers/Diffusers models on MindSpore across all hardware platforms (GPU/Ascend/CPU).

You may still invoke models through MindNLP as shown in the example code below:
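
The four capabilities listed in the README addition above (dispatch by `Tensor.device`, meta device, NumPy-backed CPU, and `Tensor.to` movement) are described but not demonstrated in the diff. The sketch below is a hypothetical illustration, not part of this commit: it uses only standard torch calls and assumes the proxy accepts the usual device strings such as `"meta"` and `"cpu"`.

```python
import mindnlp  # importing mindnlp enables the torch -> mindnlp.core proxy
import torch
from torch import nn

# Meta device: the tensor carries only shape/dtype, so shapes can be
# inferred without allocating data or running real computation.
x_meta = torch.empty(3, 10, device="meta")  # device string assumed to be accepted by the proxy
print(x_meta.shape)  # shape is known; no data is materialized

# Dispatch + Tensor.to: operators run on the backend named by Tensor.device;
# "cpu" is assumed here to be the NumPy-backed backend described in the list.
x = torch.randn(3, 10)
x_cpu = x.to("cpu")
net = nn.Linear(10, 5)
print(net(x_cpu).shape)  # core.Size([3, 5])
```

Whether these exact device strings and constructors are accepted depends on the `mindnlp.core` implementation; the README only states that the four features exist.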

setup.py

Lines changed: 1 addition & 1 deletion
@@ -64,7 +64,7 @@ def run(self):
_create_namespace_links()  # create namespace links after installation


-version = '0.5.0'
+version = '0.5.0rc1'
cur_dir = os.path.dirname(os.path.realpath(__file__))
pkg_dir = os.path.join(cur_dir, 'build')
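
As a side note on the version change: `0.5.0rc1` is a PEP 440 pre-release identifier, so it sorts before the final `0.5.0` and is only selected by pip when `--pre` is passed or the version is pinned exactly. A small check with the third-party `packaging` library (not part of this commit) illustrates the ordering:

```python
from packaging.version import Version  # pip install packaging

assert Version("0.5.0rc1") < Version("0.5.0")  # the release candidate precedes the final release
assert Version("0.5.0rc1").is_prerelease       # pip skips it unless --pre or an exact pin is used
```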
