First of all, thanks for the repo; it looks ideal.
I'm using `gpt-x-alpaca-13b-native-4bit-128g-cuda.pt`, which can be found in the `anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g` repo on HF.
The error I'm receiving is:

`invalid model file (bad magic [got 0x4034b50 want 0x67676a74])`
Is this something that should be compatible?