linux-arch.vger.kernel.org archive mirror
* Re: [tip:x86/asm] x86, bitops:  Change bitops to be native operand size
       [not found]   ` <5ac67859-a0b2-47f5-bdc2-c2a52b8d6885@email.android.com>
@ 2013-11-10 22:44     ` Joe Perches
  2013-11-10 22:44       ` Joe Perches
  2013-11-11  2:06       ` H. Peter Anvin
  0 siblings, 2 replies; 11+ messages in thread
From: Joe Perches @ 2013-11-10 22:44 UTC (permalink / raw)
  To: H. Peter Anvin, linux-arch
  Cc: mingo, linux-kernel, tglx, james.t.kukunas, hpa, Linus Torvalds,
	David Miller

(adding linux-arch, and possible patch below)

On Sun, 2013-11-10 at 14:10 -0800, H. Peter Anvin wrote:
> Yes, on the generic it is int.
> 
> The problem is in part that some architectures have bitop
> instructions with specific behavior.

I think that all bitop indices should be changed
to unsigned (int or long, probably long) for all
arches.

Is there any impediment to that?

I didn't find a negative index used anywhere
but I didn't do an exhaustive search.

There are many different arch specific declarations
of <foo>_bit functions.

For instance:

$ git grep -w clear_bit arch|grep "bitops\.h.*static"
arch/arc/include/asm/bitops.h:static inline void clear_bit(unsigned long nr, volatile unsigned long *m)
arch/arc/include/asm/bitops.h:static inline void clear_bit(unsigned long nr, volatile unsigned long *m)
arch/avr32/include/asm/bitops.h:static inline void clear_bit(int nr, volatile void * addr)
arch/blackfin/include/asm/bitops.h:static inline void clear_bit(int nr, volatile unsigned long *addr)
arch/frv/include/asm/bitops.h:static inline void clear_bit(unsigned long nr, volatile void *addr)
arch/hexagon/include/asm/bitops.h:static inline void clear_bit(int nr, volatile void *addr)
arch/m32r/include/asm/bitops.h:static __inline__ void clear_bit(int nr, volatile void * addr)
arch/metag/include/asm/bitops.h:static inline void clear_bit(unsigned int bit, volatile unsigned long *p)
arch/mips/include/asm/bitops.h:static inline void clear_bit(unsigned long nr, volatile unsigned long *addr)
arch/parisc/include/asm/bitops.h:static __inline__ void clear_bit(int nr, volatile unsigned long * addr)
arch/powerpc/include/asm/bitops.h:static __inline__ void clear_bit(int nr, volatile unsigned long *addr)
arch/s390/include/asm/bitops.h:static inline void clear_bit(unsigned long nr, volatile unsigned long *ptr)
arch/xtensa/include/asm/bitops.h:static inline void clear_bit(unsigned int bit, volatile unsigned long *p)

> Joe Perches <joe@perches.com> wrote:
> >On Tue, 2013-07-16 at 18:15 -0700, tip-bot for H. Peter Anvin wrote:
> >> Commit-ID:  9b710506a03b01a9fdd83962912bc9d8237b82e8
> >[]
> >> x86, bitops: Change bitops to be native operand size
> >> 
> >> Change the bitops operation to be naturally "long", i.e. 63 bits on
> >> the 64-bit kernel.  Additional bugs are likely to crop up in the
> >> future.
> >
> >> We already have bugs on machines with > 16 TiB of memory in a
> >> single node, as can happen if memory is interleaved.  The x86 bitop
> >> operations take a signed index, so using an unsigned type is not an
> >> option.
> >
> >I think it odd that any bitop index nr should be
> >anything other than unsigned long for any arch.
> >
> >Why should this arch be any different than the
> >defined type in Documentation/atomic_ops.txt?
> >
> >What value is a negative index when the bitmap
> >array address passed is the starting 0th bit?
> >
> >btw: asm-generic/bitops.h doesn't match
> >Documentation/atomic_ops.txt either.

---

 include/asm-generic/bitops/atomic.h | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h
index 9ae6c34..e4feee5 100644
--- a/include/asm-generic/bitops/atomic.h
+++ b/include/asm-generic/bitops/atomic.h
@@ -62,7 +62,7 @@ extern arch_spinlock_t __atomic_hash[ATOMIC_HASH_SIZE] __lock_aligned;
  * Note that @nr may be almost arbitrarily large; this function is not
  * restricted to acting on a single-word quantity.
  */
-static inline void set_bit(int nr, volatile unsigned long *addr)
+static inline void set_bit(unsigned long nr, volatile unsigned long *addr)
 {
 	unsigned long mask = BIT_MASK(nr);
 	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
@@ -83,7 +83,7 @@ static inline void set_bit(int nr, volatile unsigned long *addr)
  * you should call smp_mb__before_clear_bit() and/or smp_mb__after_clear_bit()
  * in order to ensure changes are visible on other processors.
  */
-static inline void clear_bit(int nr, volatile unsigned long *addr)
+static inline void clear_bit(unsigned long nr, volatile unsigned long *addr)
 {
 	unsigned long mask = BIT_MASK(nr);
 	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
@@ -104,7 +104,7 @@ static inline void clear_bit(int nr, volatile unsigned long *addr)
  * Note that @nr may be almost arbitrarily large; this function is not
  * restricted to acting on a single-word quantity.
  */
-static inline void change_bit(int nr, volatile unsigned long *addr)
+static inline void change_bit(unsigned long nr, volatile unsigned long *addr)
 {
 	unsigned long mask = BIT_MASK(nr);
 	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
@@ -124,7 +124,8 @@ static inline void change_bit(int nr, volatile unsigned long *addr)
  * It may be reordered on other architectures than x86.
  * It also implies a memory barrier.
  */
-static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
+static inline int test_and_set_bit(unsigned long nr,
+				   volatile unsigned long *addr)
 {
 	unsigned long mask = BIT_MASK(nr);
 	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
@@ -148,7 +149,8 @@ static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
  * It can be reorderdered on other architectures other than x86.
  * It also implies a memory barrier.
  */
-static inline int test_and_clear_bit(int nr, volatile unsigned long *addr)
+static inline int test_and_clear_bit(unsigned long nr,
+				     volatile unsigned long *addr)
 {
 	unsigned long mask = BIT_MASK(nr);
 	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
@@ -171,7 +173,8 @@ static inline int test_and_clear_bit(int nr, volatile unsigned long *addr)
  * This operation is atomic and cannot be reordered.
  * It also implies a memory barrier.
  */
-static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
+static inline int test_and_change_bit(unsigned long nr,
+				      volatile unsigned long *addr)
 {
 	unsigned long mask = BIT_MASK(nr);
 	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);

^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [tip:x86/asm] x86, bitops:  Change bitops to be native operand size
  2013-11-10 22:44     ` [tip:x86/asm] x86, bitops: Change bitops to be native operand size Joe Perches
  2013-11-10 22:44       ` Joe Perches
@ 2013-11-11  2:06       ` H. Peter Anvin
  2013-11-11  2:22         ` Joe Perches
  1 sibling, 1 reply; 11+ messages in thread
From: H. Peter Anvin @ 2013-11-11  2:06 UTC (permalink / raw)
  To: Joe Perches, linux-arch
  Cc: mingo, linux-kernel, tglx, james.t.kukunas, hpa, Linus Torvalds,
	David Miller

On 11/10/2013 02:44 PM, Joe Perches wrote:
> (adding linux-arch, and possible patch below)
> 
> On Sun, 2013-11-10 at 14:10 -0800, H. Peter Anvin wrote:
>> Yes, on the generic it is int.
>>
>> The problem is in part that some architectures have bitop
>> instructions with specific behavior.
> 
> I think that all bitop indices should be changed
> to unsigned (int or long, probably long) for all
> arches.
> 
> Is there any impediment to that?
> 

It is at the very best misleading.  On x86 bit indicies will be signed
no matter what the data type says, and having an unsigned data type
being interpreted as signed seems like really dangerous.

On the other hand, for the generic implementation unsigned long makes sense.

We might need a bitindex_t or something like that for it to be clean.

	-hpa

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [tip:x86/asm] x86, bitops:  Change bitops to be native operand size
  2013-11-11  2:06       ` H. Peter Anvin
@ 2013-11-11  2:22         ` Joe Perches
  2013-11-11 23:34           ` H. Peter Anvin
  0 siblings, 1 reply; 11+ messages in thread
From: Joe Perches @ 2013-11-11  2:22 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: linux-arch, mingo, linux-kernel, tglx, james.t.kukunas, hpa,
	Linus Torvalds, David Miller

On Sun, 2013-11-10 at 18:06 -0800, H. Peter Anvin wrote:
> On 11/10/2013 02:44 PM, Joe Perches wrote:
> > On Sun, 2013-11-10 at 14:10 -0800, H. Peter Anvin wrote:
> >> Yes, on the generic it is int.
> >> The problem is in part that some architectures have bitop
> >> instructions with specific behavior.
> > I think that all bitop indices should be changed
> > to unsigned (int or long, probably long) for all
> > arches.
> > Is there any impediment to that?
> It is at the very best misleading.  On x86 bit indicies will be signed
> no matter what the data type says,

?

> and having an unsigned data type
> being interpreted as signed seems like really dangerous.
> On the other hand, for the generic implementation unsigned long makes sense.
> We might need a bitindex_t or something like that for it to be clean.

Is there really any reason to introduce bitindex_t?

Perhaps the current x86 bitops asm code is being conflated
with the ideal implementation?

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [tip:x86/asm] x86, bitops:  Change bitops to be native operand size
  2013-11-11  2:22         ` Joe Perches
@ 2013-11-11 23:34           ` H. Peter Anvin
  2013-11-12  2:54             ` Joe Perches
  0 siblings, 1 reply; 11+ messages in thread
From: H. Peter Anvin @ 2013-11-11 23:34 UTC (permalink / raw)
  To: Joe Perches, H. Peter Anvin
  Cc: linux-arch, mingo, linux-kernel, tglx, james.t.kukunas,
	Linus Torvalds, David Miller

On 11/10/2013 06:22 PM, Joe Perches wrote:
> 
> Perhaps the current x86 bitops asm code is being conflated
> with the ideal implementation?
> 

Yes, by you.

x86 has instructions that operate on signed bitindicies.  It doesn't
have instructions that operate on unsigned bitindicies.  Unless someone
is willing to do the work to prove that shift and mask is actually
faster than using the hardware instructions (which I doubt, but it is
always a possibility), that's what we have.

	-hpa

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [tip:x86/asm] x86, bitops:  Change bitops to be native operand size
  2013-11-11 23:34           ` H. Peter Anvin
@ 2013-11-12  2:54             ` Joe Perches
  2013-11-12  3:15               ` Linus Torvalds
  0 siblings, 1 reply; 11+ messages in thread
From: Joe Perches @ 2013-11-12  2:54 UTC (permalink / raw)
  To: H. Peter Anvin
  Cc: H. Peter Anvin, linux-arch, mingo, linux-kernel, tglx,
	james.t.kukunas, Linus Torvalds, David Miller

On Mon, 2013-11-11 at 15:34 -0800, H. Peter Anvin wrote:
> On 11/10/2013 06:22 PM, Joe Perches wrote:
> > 
> > Perhaps the current x86 bitops asm code is being conflated
> > with the ideal implementation?
> > 
> Yes, by you.

Really?  I don't think so.

How does the use of signed long for an index where
no negative values are possible or the use of a
negative int for BIT_MASK make sense?

> x86 has instructions that operate on signed bitindicies.

indices.

> It doesn't
> have instructions that operate on unsigned bitindicies.  Unless someone
> is willing to do the work to prove that shift and mask is actually
> faster than using the hardware instructions (which I doubt, but it is
> always a possibility), that's what we have.

That doesn't mean x86 is the ideal implementation.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [tip:x86/asm] x86, bitops: Change bitops to be native operand size
  2013-11-12  2:54             ` Joe Perches
@ 2013-11-12  3:15               ` Linus Torvalds
  2013-11-12  4:08                 ` Joe Perches
  0 siblings, 1 reply; 11+ messages in thread
From: Linus Torvalds @ 2013-11-12  3:15 UTC (permalink / raw)
  To: Joe Perches
  Cc: H. Peter Anvin, H. Peter Anvin, linux-arch, Ingo Molnar,
	Linux Kernel Mailing List, Thomas Gleixner, james.t.kukunas,
	David Miller

On Tue, Nov 12, 2013 at 11:54 AM, Joe Perches <joe@perches.com> wrote:
> On Mon, 2013-11-11 at 15:34 -0800, H. Peter Anvin wrote:
>> On 11/10/2013 06:22 PM, Joe Perches wrote:
>> >
>> > Perhaps the current x86 bitops asm code is being conflated
>> > with the ideal implementation?
>> >
>> Yes, by you.
>
> Really?  I don't think so.

What you think in this case doesn't really matter, does it? There are
actual facts, and then there is your thinking, and guess which one
matters?

Peter is absolutely correct, and has shown remarkable restraint trying
to explain it to you. The fact is, the x86 bitop instructions act on a
signed index. Making the index be "unsigned long" would violate the
actual *behavior* of the function, so it would be singularly stupid.

Talking about "ideal implementation" is also singularly stupid.
There's this fascinating thing called "reality", and you should try to
re-acquaint yourself with it.

Don't bother replying to this thread.

               Linus

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [tip:x86/asm] x86, bitops: Change bitops to be native operand size
  2013-11-12  3:15               ` Linus Torvalds
@ 2013-11-12  4:08                 ` Joe Perches
  2013-11-12  8:52                   ` Geert Uytterhoeven
  0 siblings, 1 reply; 11+ messages in thread
From: Joe Perches @ 2013-11-12  4:08 UTC (permalink / raw)
  To: Linus Torvalds
  Cc: H. Peter Anvin, H. Peter Anvin, linux-arch, Ingo Molnar,
	Linux Kernel Mailing List, Thomas Gleixner, james.t.kukunas,
	David Miller

On Tue, 2013-11-12 at 12:15 +0900, Linus Torvalds wrote:
> Talking about "ideal implementation" is also singularly stupid.

I just want the various arch implementations to match
the docs.  I know that's stupid.

Maybe if you really don't want to discuss things, you
should fix the documentation.

Documentation/atomic_ops.txt

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [tip:x86/asm] x86, bitops: Change bitops to be native operand size
  2013-11-12  4:08                 ` Joe Perches
@ 2013-11-12  8:52                   ` Geert Uytterhoeven
  2013-11-30 23:16                     ` Rob Landley
  0 siblings, 1 reply; 11+ messages in thread
From: Geert Uytterhoeven @ 2013-11-12  8:52 UTC (permalink / raw)
  To: Joe Perches
  Cc: Linus Torvalds, H. Peter Anvin, H. Peter Anvin, linux-arch,
	Ingo Molnar, Linux Kernel Mailing List, Thomas Gleixner,
	james.t.kukunas, David Miller

On Tue, Nov 12, 2013 at 5:08 AM, Joe Perches <joe@perches.com> wrote:
> On Tue, 2013-11-12 at 12:15 +0900, Linus Torvalds wrote:
>> Talking about "ideal implementation" is also singularly stupid.
>
> I just want the various arch implementations to match
> the docs.  I know that's stupid.
>
> Maybe if you really don't want to discuss things, you
> should fix the documentation.

E.g. by adding a paragraph that the actual allowed range of indices may be
a subset of "unsigned long" on some architectures.
Or if we know that everyone supports at least 31 resp. 63 bits, that it may
be limited to 31 resp. 63 unsigned bits, which is the positive range subset of
"long".

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [tip:x86/asm] x86, bitops: Change bitops to be native operand size
  2013-11-12  8:52                   ` Geert Uytterhoeven
@ 2013-11-30 23:16                     ` Rob Landley
  2013-11-30 23:16                       ` Rob Landley
  0 siblings, 1 reply; 11+ messages in thread
From: Rob Landley @ 2013-11-30 23:16 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Joe Perches, H. Peter Anvin, H. Peter Anvin, linux-arch,
	Ingo Molnar, Linux Kernel Mailing List, Thomas Gleixner,
	james.t.kukunas, David Miller

On 11/12/2013 02:52:57 AM, Geert Uytterhoeven wrote:
> On Tue, Nov 12, 2013 at 5:08 AM, Joe Perches <joe@perches.com> wrote:
> > On Tue, 2013-11-12 at 12:15 +0900, Linus Torvalds wrote:
> >> Talking about "ideal implementation" is also singularly stupid.
> >
> > I just want the various arch implementations to match
> > the docs.  I know that's stupid.
> >
> > Maybe if you really don't want to discuss things, you
> > should fix the documentation.
> 
> E.g. by adding a paragraph that the actual allowed range of indices
> may be a subset of "unsigned long" on some architectures.
> Or if we know that everyone supports at least 31 resp. 63 bits, that
> it may be limited to 31 resp. 63 unsigned bits, which is the positive
> range subset of "long".

If this ever turns into an actual patch to this file, could you cc: me  
on it so I can marshal it upstream? (Not enough domain expertise for me  
to produce it myself...)

Thanks,

Rob

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2013-12-01  2:28 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <tip-z61ofiwe90xeyb461o72h8ya@git.kernel.org>
     [not found] ` <1384117768.3081.10.camel@joe-AO722>
     [not found]   ` <5ac67859-a0b2-47f5-bdc2-c2a52b8d6885@email.android.com>
2013-11-10 22:44     ` [tip:x86/asm] x86, bitops: Change bitops to be native operand size Joe Perches
2013-11-10 22:44       ` Joe Perches
2013-11-11  2:06       ` H. Peter Anvin
2013-11-11  2:22         ` Joe Perches
2013-11-11 23:34           ` H. Peter Anvin
2013-11-12  2:54             ` Joe Perches
2013-11-12  3:15               ` Linus Torvalds
2013-11-12  4:08                 ` Joe Perches
2013-11-12  8:52                   ` Geert Uytterhoeven
2013-11-30 23:16                     ` Rob Landley
2013-11-30 23:16                       ` Rob Landley
