* Problems (a bug?) with UINT_MAX from kernel.h
@ 2007-06-05 14:42 Richard Purdie
From: Richard Purdie @ 2007-06-05 14:42 UTC (permalink / raw)
To: LKML
The kernel uses UINT_MAX defined from kernel.h in a variety of places.
While looking at the behaviour of the LZO code, I noticed it seemed to
think an int was 8 bytes on my 32-bit i386 machine. It isn't, but
why did it think that?
kernel.h says:
#define INT_MAX ((int)(~0U>>1))
#define INT_MIN (-INT_MAX - 1)
#define UINT_MAX (~0U)
#define LONG_MAX ((long)(~0UL>>1))
#define LONG_MIN (-LONG_MAX - 1)
#define ULONG_MAX (~0UL)
#define LLONG_MAX ((long long)(~0ULL>>1))
#define LLONG_MIN (-LLONG_MAX - 1)
#define ULLONG_MAX (~0ULL)
If I try to compile the code fragment below, I see the error:
#define UINT_MAX (~0U)
#if (0xffffffffffffffff == UINT_MAX)
#error argh
#endif
I've tested this on several systems with a variety of gcc versions, with
the same result. I've tried various other ways of testing this, all with
the same conclusion: UINT_MAX is wrong.
The *LONG* definitions above should work since gcc is forced to a
specific type. Where just 0U is specified, I don't think it works as
intended: gcc seems to automatically widen the type to fit the value and
avoid truncation, ending up with a long long.
If I change the above to:
/* Handle GCC <= 3.2 */
#if !defined(__INT_MAX__)
#define INT_MAX 0x7fffffff
#else
#define INT_MAX (__INT_MAX__)
#endif
#define INT_MIN (-INT_MAX - 1)
#define UINT_MAX ((INT_MAX<<1)+1)
I get the expected result of an int being 4 bytes. Is there a
better solution? It's probably better than what's there now, but it could
break a machine using gcc 3.2 whose int isn't 4 bytes...
(gcc <= 3.2 doesn't define __INT_MAX__)
Richard
* Re: Problems (a bug?) with UINT_MAX from kernel.h
From: John Anthony Kazos Jr. @ 2007-06-05 15:20 UTC (permalink / raw)
To: Richard Purdie; +Cc: LKML
> The kernel uses UINT_MAX defined from kernel.h in a variety of places.
>
> While looking at the behaviour of the LZO code, I noticed it seemed to
> think an int was 8 bytes large on my 32 bit i386 machine. It isn't but
> why did it think that?
>
> kernel.h says:
>
> #define INT_MAX ((int)(~0U>>1))
> #define INT_MIN (-INT_MAX - 1)
> #define UINT_MAX (~0U)
> #define LONG_MAX ((long)(~0UL>>1))
> #define LONG_MIN (-LONG_MAX - 1)
> #define ULONG_MAX (~0UL)
> #define LLONG_MAX ((long long)(~0ULL>>1))
> #define LLONG_MIN (-LLONG_MAX - 1)
> #define ULLONG_MAX (~0ULL)
>
> If I try to compile the code fragment below, I see the error:
>
> #define UINT_MAX (~0U)
> #if (0xffffffffffffffff == UINT_MAX)
> #error argh
> #endif
>
> I've tested this on several systems with a variety of gcc versions with
> the same result. I've tried various other ways of testing this all with
> the same conclusion, UINT_MAX is wrong.
>
> The *LONG* definitions above should work as gcc is forced to a certain
> type. Where just 0U is specified, I don't think it will work as intended
> as gcc seems to automatically increase the type to fit the value and
> avoid truncation ending up with a long long.
>
> If I change the above to:
>
> /* Handle GCC = 3.2 */
> #if !defined(__INT_MAX__)
> #define INT_MAX 0x7fffffff
> #else
> #define INT_MAX (__INT_MAX__)
> #endif
> #define INT_MIN (-INT_MAX - 1)
> #define UINT_MAX ((INT_MAX<<1)+1)
For one thing, the C standard specifies that literals fit into signed
before unsigned if they're not specified as unsigned in the first place.
Shifting signed values (INT_MAX<<1) is wrong on some architectures and
foolish on all. You would have to do "(unsigned int)INT_MAX << 1".
> I get the expected result of an int being 4 bytes. Is there a
> better solution? It's probably better than what's there now, but it could
> break a machine using gcc 3.2 whose int isn't 4 bytes...
>
> (gcc <= 3.2 doesn't define __INT_MAX__)
The C standard specifies that a numeric literal constant ending with "U"
may be interpreted as unsigned int, unsigned long int, or unsigned long
long int, in that order, choosing the first in which it will fit. So "~0U"
is correct. If it's not working, I suggest you file a compiler bug/defect
report.
* Re: Problems (a bug?) with UINT_MAX from kernel.h
From: Andreas Schwab @ 2007-06-05 15:57 UTC (permalink / raw)
To: Richard Purdie; +Cc: LKML
Richard Purdie <richard@openedhand.com> writes:
> If I try to compile the code fragment below, I see the error:
>
> #define UINT_MAX (~0U)
> #if (0xffffffffffffffff == UINT_MAX)
> #error argh
> #endif
The preprocessor computes all expressions with the largest available
range. It does not know anything about types.
Andreas.
--
Andreas Schwab, SuSE Labs, schwab@suse.de
SuSE Linux Products GmbH, Maxfeldstraße 5, 90409 Nürnberg, Germany
PGP key fingerprint = 58CA 54C7 6D53 942B 1756 01D3 44D5 214B 8276 4ED5
"And now for something completely different."
* Re: Problems (a bug?) with UINT_MAX from kernel.h
From: H. Peter Anvin @ 2007-06-05 18:35 UTC (permalink / raw)
To: Richard Purdie; +Cc: LKML
Richard Purdie wrote:
> The kernel uses UINT_MAX defined from kernel.h in a variety of places.
>
> While looking at the behaviour of the LZO code, I noticed it seemed to
> think an int was 8 bytes large on my 32 bit i386 machine. It isn't but
> why did it think that?
>
> kernel.h says:
>
> #define INT_MAX ((int)(~0U>>1))
> #define INT_MIN (-INT_MAX - 1)
> #define UINT_MAX (~0U)
> #define LONG_MAX ((long)(~0UL>>1))
> #define LONG_MIN (-LONG_MAX - 1)
> #define ULONG_MAX (~0UL)
> #define LLONG_MAX ((long long)(~0ULL>>1))
> #define LLONG_MIN (-LLONG_MAX - 1)
> #define ULLONG_MAX (~0ULL)
>
> If I try to compile the code fragment below, I see the error:
>
> #define UINT_MAX (~0U)
> #if (0xffffffffffffffff == UINT_MAX)
> #error argh
> #endif
>
> I've tested this on several systems with a variety of gcc versions with
> the same result. I've tried various other ways of testing this all with
> the same conclusion, UINT_MAX is wrong.
>
C99 states that all arithmetic in the preprocessor is done in
(u)intmax_t, regardless of prefixes or suffixes.
-hpa