FP8 is a (non-standard) floating-point number format supported by modern GPUs and CPUs (e.g. the ARMv9 ISA), and it is slowly gaining popularity in certain workloads (especially machine-learning ones, as fp16 did a few years ago), so I think we should consider at least adding codegen support for it in Julia.

One problem, of course, is that, being non-standard, fp8 is underspecified: there are multiple possible implementations depending on how many bits you assign to the exponent and the mantissa (the two variants most commonly found in hardware are E4M3 and E5M2). We should probably aim to support at least the implementations most widely used in real hardware. A package which provides software emulation is https://github.com/chengchingwen/DLFP8Types.jl.

The suggested feature would be useful for https://github.com/JuliaGPU/WeirdFloats.jl, although it is not strictly necessary there: declaring a `primitive type` is sufficient for the time being for an experimental package, but that approach requires reinterpreting all the numbers all the time, which is not good in generic code (see the sketch below).
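To illustrate the workaround mentioned above, here is a minimal sketch of what the `primitive type` + reinterpret approach looks like, assuming the E4M3 layout (1 sign bit, 4 exponent bits, 3 mantissa bits, bias 7); the names `Float8_E4M3` and `to_float32` are hypothetical and not taken from any existing package:

```julia
# Hypothetical 8-bit primitive type with an E4M3 layout.
primitive type Float8_E4M3 8 end

# Construct from a raw byte pattern via reinterpretation.
Float8_E4M3(bits::UInt8) = reinterpret(Float8_E4M3, bits)

# Decode an E4M3 value (1 sign, 4 exponent, 3 mantissa bits, bias 7)
# into a Float32. NaN handling is omitted for brevity.
function to_float32(x::Float8_E4M3)
    b = reinterpret(UInt8, x)
    s = (b >> 7) & 0x01   # sign bit
    e = (b >> 3) & 0x0f   # biased exponent
    m = b & 0x07          # mantissa
    val = e == 0 ?
        ldexp(Float32(m) / 8, -6) :              # subnormal: 2^(1 - bias) * m/8
        ldexp(1f0 + Float32(m) / 8, Int(e) - 7)  # normal: 2^(e - bias) * (1 + m/8)
    return s == 0 ? val : -val
end
```

For example, `to_float32(Float8_E4M3(0x3f))` decodes the bit pattern `0b0_0111_111` to `1.875f0`. The point of the sketch is that every arithmetic operation on such a type needs an explicit decode/encode round-trip through a wider format, which is exactly the "reinterpreting all the numbers all the time" cost described above; codegen support would let these values be lowered directly instead.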