Commit 86570a7

fix tokenizer not support return_tensor='ms' (#2165)
1 parent 5c81102

File tree

3 files changed: +5 −2 lines


README.md

Lines changed: 1 addition & 1 deletion
@@ -70,7 +70,7 @@
 tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
 model = AutoModel.from_pretrained("bert-base-uncased")

-inputs = tokenizer("Hello world!")
+inputs = tokenizer("Hello world!", return_tensors='ms')
 outputs = model(**inputs)
 ```
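The README change only works because mindnlp patches the tokenizer call; stock transformers does not recognize `return_tensors='ms'`. A minimal, self-contained sketch of the idea (all names below are stand-ins, not mindnlp's actual classes or functions): intercept the `'ms'` value, run the underlying tokenization, and wrap the resulting arrays in MindSpore-style tensors.

```python
# Hedged sketch with stand-in types: HF transformers does not know the
# value return_tensors='ms', so a wrapper has to translate it.

class MSTensor:
    """Stand-in for mindspore.Tensor (assumption, not the real class)."""
    def __init__(self, data):
        self.data = data

def base_tokenize(text, return_tensors=None):
    """Stand-in for the unpatched tokenizer __call__ (returns plain lists)."""
    if return_tensors == "ms":
        raise ValueError("Unrecognized return_tensors type: ms")
    return {"input_ids": [101, 7592, 2088, 102], "attention_mask": [1, 1, 1, 1]}

def support_ms(tokenize):
    """Wrap a tokenize function so return_tensors='ms' yields MSTensor values."""
    def wrapped(text, return_tensors=None, **kwargs):
        if return_tensors == "ms":
            out = tokenize(text, return_tensors=None, **kwargs)
            return {k: MSTensor(v) for k, v in out.items()}
        return tokenize(text, return_tensors=return_tensors, **kwargs)
    return wrapped

tokenize = support_ms(base_tokenize)
inputs = tokenize("Hello world!", return_tensors="ms")
print(type(inputs["input_ids"]).__name__)  # MSTensor
```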

mindnlp/core/npu/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -43,7 +43,7 @@ def device_count():
        return 0
    if GlobalComm.INITED:
        return get_group_size()
-   return ms_device_count()
+   return 1

def current_device():
    return core.device('npu', 0)
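The patched fallback replaces a call to MindSpore's device counter with a constant: when distributed communication has not been initialized, assume one local NPU rather than probing the backend. A runnable sketch of that control flow, with `GlobalComm` and `get_group_size` stubbed out (the stubs are assumptions, not mindnlp's real imports):

```python
class GlobalComm:
    """Stub for mindspore.communication.GlobalComm (assumption)."""
    INITED = False

def get_group_size():
    """Stub: the distributed world size once communication is set up."""
    return 8

def device_count():
    # Mirrors the patched logic: group size when initialized, else 1.
    if GlobalComm.INITED:
        return get_group_size()
    return 1

print(device_count())   # 1 — communication not initialized
GlobalComm.INITED = True
print(device_count())   # 8 — group size once initialized
```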

mindnlp/transformers/__init__.py

Lines changed: 3 additions & 0 deletions
@@ -56,6 +56,9 @@ def empty_fn(*args, **kwargs):
 transformers.tokenization_utils_base.PreTrainedTokenizerBase.apply_chat_template = apply_chat_template_wrapper(
     transformers.tokenization_utils_base.PreTrainedTokenizerBase.apply_chat_template
 )
+transformers.tokenization_utils_base.PreTrainedTokenizerBase.__call__ = apply_chat_template_wrapper(
+    transformers.tokenization_utils_base.PreTrainedTokenizerBase.__call__
+)

 transformers.pipelines.pipeline = dtype_wrapper(transformers.pipelines.pipeline)
 transformers.modeling_utils.caching_allocator_warmup = empty_fn
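Assigning to `PreTrainedTokenizerBase.__call__` on the class patches every tokenizer instance at once, because Python looks up `__call__` on the type, not the instance. A small stand-alone sketch of that pattern (`Tok` and `return_tensors_wrapper` are hypothetical stand-ins, not the transformers API):

```python
import functools

class Tok:
    """Stand-in for PreTrainedTokenizerBase (assumption)."""
    def __call__(self, text, **kwargs):
        return {"text": text, **kwargs}

def return_tensors_wrapper(fn):
    """Normalize return_tensors='ms' before delegating to the original method."""
    @functools.wraps(fn)
    def wrapped(self, text, **kwargs):
        if kwargs.get("return_tensors") == "ms":
            kwargs["return_tensors"] = "np"  # hand the base method a value it knows
        return fn(self, text, **kwargs)
    return wrapped

# Class-level patch: instances created before or after both see the new method.
Tok.__call__ = return_tensors_wrapper(Tok.__call__)

t = Tok()
print(t("hi", return_tensors="ms"))  # {'text': 'hi', 'return_tensors': 'np'}
```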
