Implementations of formatting traits for signed integers can be ambiguous #42860

Closed
@Enet4

Description

I was recently formatting a signed integer primitive to hexadecimal using the standard formatter API, hoping that the result would be aware of the value's sign. It turns out that the implementations of UpperHex (and relatives such as LowerHex and Binary) for signed integers simply treat these numbers as unsigned (or just a sequence of bits).

println!("{:X}", -15i32);   // prints "FFFFFFF1",   expected "-F"

I first posted this concern as a Stack Overflow question. A way around this is to make a newtype with a different formatting implementation.
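The newtype workaround could look something like the sketch below. The wrapper name `SignedHex` and its exact formatting choices are hypothetical, not from the standard library; it implements `UpperHex` so that the sign is printed and the magnitude is formatted in hexadecimal:

```rust
use std::fmt;

// Hypothetical newtype: formats an i32 as sign-aware hexadecimal
// instead of the two's-complement bit pattern.
struct SignedHex(i32);

impl fmt::UpperHex for SignedHex {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let v = self.0;
        if v < 0 {
            // Widen to i64 before negating so that i32::MIN does not overflow.
            write!(f, "-{:X}", -(v as i64))
        } else {
            write!(f, "{:X}", v)
        }
    }
}

fn main() {
    println!("{:X}", SignedHex(-15)); // prints "-F"
}
```

Note that this sketch deliberately ignores formatter flags such as width and `#`; a complete implementation would forward those through the `Formatter`.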

I can make arguments on both sides as to whether it should behave like this, but what actually concerns me most is that there seems to be no mention of this behaviour in the documentation. It appears that formatting trait implementations do not have to abide by a value's sign, but then the fact that a negative integer is treated as an unsigned number for formatting purposes can be unexpected for some people, especially when the docs do not clarify this situation.

To sum up: should we improve the documentation regarding what makes a valid formatting trait implementation? Should we also (or just) further document their implementations for integers in particular? I am willing to contribute the necessary changes once we're clear about what should be improved.

Metadata


Labels

A-docs (Area: Documentation for any part of the project, including the compiler, standard library, and tools)
C-enhancement (Category: An issue proposing an enhancement or a PR with one.)
P-medium (Medium priority)
