Source-Changes-D archive
Re: CVS commit: src/tests/lib/libm
> Date: Thu, 2 May 2024 21:04:38 +0200
> From: Roland Illig <roland.illig%gmx.de@localhost>
>
> Am 02.05.2024 um 05:30 schrieb Robert Elz:
> > Use intmax_t instead of long int when trying to represent very large
> > integers (10^50 or so), so we don't exceed the capacity of systems where
> > long int is only 32 bits.
>
> I particularly avoid the types 'long' and 'long double', as they vary
> between the platforms.
In this case, the whole point of the exercise is to test the long double
function nearbyintl distinctly from the double function nearbyint.
The integer result could have been int64_t instead of intmax_t (and
maybe it should be).
I wrote it as long mainly because I copied the nearbyint tests to make
the nearbyintl tests and simply forgot to change long to a type wide
enough to work on LP32 platforms -- I tested on amd64 before
committing, and I was thinking about how sparc64/aarch64 must have a
broken nearbyintl (which the test has now confirmed).
> Curiously, intmax_t is a 64-bit type even on amd64, where __int128_t is
> also available, but I don't use that because that type is not predefined.
Yes, although intmax_t looks convenient for arithmetic, it was a
mistake to bake it into any ABI, as printf's "%jd" does, because
widening intmax_t from 64 bits to 128 bits would break that ABI.
Had the C standard required PRIdMAX instead of "%jd", and had all
intmax_t-related functions been defined as macros or static inlines,
this problem could have been avoided. But it's too late for that now,
so intmax_t is effectively just a confusingly-named alias for int64_t
in practice.