
GLM-4V model GPU memory usage calculation bug #1968

@Jalen-Zhong

Description


System Info

Ubuntu 18.04
python==3.10

Running Xinference with Docker?

  • docker
  • pip install
  • installation from source

Version info

xinference==0.13.3

The command used to start Xinference

XINFERENCE_MODEL_SRC=modelscope xinference cal-model-mem -s 9 -f pytorch -c 8192 -n glm-4v

Reproduction

  1. Run the command:
     XINFERENCE_MODEL_SRC=modelscope xinference cal-model-mem -s 9 -f pytorch -c 8192 -n glm-4v
  2. Command output:

     Traceback (most recent call last):
       File "/root/anaconda3/envs/glm-4v-x/bin/xinference", line 8, in <module>
         sys.exit(cli())
       File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
         return self.main(*args, **kwargs)
       File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/click/core.py", line 1078, in main
         rv = self.invoke(ctx)
       File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/click/core.py", line 1688, in invoke
         return _process_result(sub_ctx.command.invoke(sub_ctx))
       File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
         return ctx.invoke(self.callback, **ctx.params)
       File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/click/core.py", line 783, in invoke
         return __callback(*args, **kwargs)
       File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/xinference/deploy/cmdline.py", line 1561, in cal_model_mem
         mem_info = estimate_llm_gpu_memory(
       File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/xinference/model/llm/memory.py", line 102, in estimate_llm_gpu_memory
         info = get_model_layers_info(
       File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/xinference/model/llm/memory.py", line 227, in get_model_layers_info
         return load_model_config_json(config_path)
       File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/xinference/model/llm/memory.py", line 186, in load_model_config_json
         vocab_size=int(_load_item_from_json(config_data, "vocab_size")),
       File "/root/anaconda3/envs/glm-4v-x/lib/python3.10/site-packages/xinference/model/llm/memory.py", line 179, in _load_item_from_json
         raise ValueError("load ModelLayersInfo: missing %s" % (keys[0]))
     ValueError: load ModelLayersInfo: missing vocab_size
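The ValueError indicates that glm-4v's config.json has no top-level "vocab_size" key; GLM/ChatGLM configs typically expose "padded_vocab_size" instead. A minimal sketch of a more tolerant lookup, under that assumption (the helper name and candidate-key list here are hypothetical, not xinference's actual code):

```python
def load_item_from_json(config_data, *keys):
    """Return the first key present in config_data, else raise.

    Trying several candidate keys lets GLM-style configs, which use
    "padded_vocab_size" rather than "vocab_size", still be parsed.
    """
    for key in keys:
        if key in config_data:
            return config_data[key]
    raise ValueError("load ModelLayersInfo: missing %s" % keys[0])

# Example: a GLM-style config with only padded_vocab_size present.
config = {"padded_vocab_size": 151552, "hidden_size": 4096}
vocab_size = int(load_item_from_json(config, "vocab_size", "padded_vocab_size"))
```

With a plain {"vocab_size": ...} config the first candidate key matches and behavior is unchanged.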

Expected behavior

Fix the glm-4v GPU memory calculation. In addition, the --quantization {precision} parameter is also broken; it would be good to check and fix it at the same time.
