
Add codegen support for FP8 #58827

@giordano

Description

FP8 is a (non-standard) floating-point number format used on modern GPUs and CPUs (e.g. the ARMv9 ISA), and it is becoming increasingly popular in certain workloads, especially machine-learning ones, much as FP16 did a few years ago, so I think we should consider at least adding codegen support for it in Julia. One problem, of course, is that, being non-standard, FP8 is underspecified: there are multiple possible implementations depending on the sizes chosen for the exponent and the mantissa. We should probably try to support at least the implementations most commonly used in real hardware. A package which provides software emulation is https://github.com/chengchingwen/DLFP8Types.jl. The suggested feature would be useful for https://github.com/JuliaGPU/WeirdFloats.jl (although it's not strictly necessary: using a primitive type is sufficient for the time being for an experimental package, but it requires reinterpreting all the numbers all the time, which is not good in generic code).
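For context, the two FP8 layouts most widely used in hardware are E4M3 (4 exponent bits, 3 mantissa bits, bias 7) and E5M2 (5 exponent bits, 2 mantissa bits, bias 15). A minimal Python sketch of a decoder parameterized over the layout, just to illustrate how the variants differ (special values like NaN/Inf are omitted, and the function and parameter names are illustrative, not any package's API):

```python
def decode_fp8(byte, exp_bits, man_bits, bias):
    """Decode an 8-bit value with the given exponent/mantissa split to a float.

    Handles normal and subnormal numbers; NaN/Inf encodings (which also
    differ between E4M3 and E5M2 variants) are deliberately ignored.
    """
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exponent = (byte >> man_bits) & ((1 << exp_bits) - 1)
    mantissa = byte & ((1 << man_bits) - 1)
    if exponent == 0:
        # Subnormal: no implicit leading 1, fixed exponent of 1 - bias.
        return sign * mantissa * 2.0 ** (1 - bias - man_bits)
    # Normal: implicit leading 1 plus fractional mantissa.
    return sign * (1 + mantissa / (1 << man_bits)) * 2.0 ** (exponent - bias)

# The same bit pattern decodes differently under the two layouts:
print(decode_fp8(0x3C, 5, 2, 15))  # E5M2: 1.0
print(decode_fp8(0x3C, 4, 3, 7))   # E4M3: 1.5
```

Note that 0x3C means 1.0 under E5M2 but 1.5 under E4M3, which is why codegen support would have to pick (or parameterize over) a concrete layout rather than treat "FP8" as a single type.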

Labels: compiler:codegen (Generation of LLVM IR and native code), feature (Indicates new feature / enhancement requests)
