
bug: thai2rom errors #1088

@jkingd0n

Description


I encountered a couple of errors when attempting to run the thai2rom engine on my GPU.

  File "/home/james/projects/sandbox/.venv/lib/python3.10/site-packages/pythainlp/transliterate/thai2rom.py", line 151, in forward
    sequences_output = sequences_output.index_select(
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
  File "/home/james/projects/sandbox/.venv/lib/python3.10/site-packages/torch/nn/utils/rnn.py", line 341, in pack_padded_sequence
    data, batch_sizes = _VF._pack_padded_sequence(input, lengths, batch_first)
RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor

I've fixed it locally and will be opening a PR.
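
For reference, here is a minimal sketch of the kind of two-line change that resolves both errors. The tensor names below are placeholders and may not match the actual variables in thai2rom.py:

import torch
from torch.nn.utils.rnn import pack_padded_sequence

# Placeholder tensors standing in for the encoder state in thai2rom.py.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
sequences_output = torch.randn(3, 5, 8, device=device)  # (batch, seq, hidden)
sorted_idx = torch.tensor([2, 0, 1])                     # built on the CPU elsewhere
lengths = torch.tensor([5, 4, 3], device=device)         # may end up on the GPU

# Error 1: index_select requires the index tensor to be on the same
# device as the tensor being indexed.
sequences_output = sequences_output.index_select(
    0, sorted_idx.to(sequences_output.device)
)

# Error 2: pack_padded_sequence requires the lengths argument to be a
# 1D CPU int64 tensor, regardless of where the data tensor lives.
packed = pack_padded_sequence(
    sequences_output, lengths.cpu(), batch_first=True
)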

Expected results

Expected thai2rom to run on my GPU without errors.

Current results

Errors out; the GPU cannot be used with thai2rom.

Steps to reproduce

Run the thai2rom engine. Example code:

from pythainlp.transliterate import romanize

thai = "สวัสดีครับ"

engines = ["royin", "thai2rom", "thai2rom_onnx", "tltk", "lookup"]

for engine in engines:
    print("Engine:", engine)
    result_romanized = romanize(thai, engine=engine)
    print(result_romanized + "\n")
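
Note that these errors should only reproduce when PyTorch detects a CUDA device, since thai2rom appears to move its model to the GPU in that case. A quick check (assuming torch is the same install that pythainlp uses):

import torch

# True means the GPU code path (and the bug) will be exercised.
print(torch.cuda.is_available())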

PyThaiNLP version

5.1.0

Python version

3.10.12

Operating system and version

Windows 10

More info

No response

Possible solution

No response

Files

No response
