* Re: [PATCH] locking/qspinlock: Ensure node is initialised before updating prev->next
2018-01-31 12:20 [PATCH] locking/qspinlock: Ensure node is initialised before updating prev->next Will Deacon
@ 2018-01-31 12:38 ` Peter Zijlstra
2018-01-31 14:11 ` Andrea Parri
2018-02-03 2:24 ` kbuild test robot
2018-02-03 2:26 ` kbuild test robot
2 siblings, 1 reply; 5+ messages in thread
From: Peter Zijlstra @ 2018-01-31 12:38 UTC (permalink / raw)
To: Will Deacon; +Cc: linux-kernel, Ingo Molnar
On Wed, Jan 31, 2018 at 12:20:46PM +0000, Will Deacon wrote:
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index 294294c71ba4..1ebbc366a31d 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -408,16 +408,15 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> */
> if (old & _Q_TAIL_MASK) {
> prev = decode_tail(old);
> +
> /*
> - * The above xchg_tail() is also a load of @lock which generates,
> - * through decode_tail(), a pointer.
> - *
> - * The address dependency matches the RELEASE of xchg_tail()
> - * such that the access to @prev must happen after.
> + * We must ensure that the stores to @node are observed before
> + * the write to prev->next. The address dependency on xchg_tail
> + * is not sufficient to ensure this because the read component
> + * of xchg_tail is unordered with respect to the initialisation
> + * of node.
> */
> - smp_read_barrier_depends();
Right, except you're patching old code here, please try again on a tree
that includes commit:
548095dea63f ("locking: Remove smp_read_barrier_depends() from queued_spin_lock_slowpath()")
> -
> - WRITE_ONCE(prev->next, node);
> + smp_store_release(prev->next, node);
>
> pv_wait_node(node, prev);
> arch_mcs_spin_lock_contended(&node->locked);
> --
> 2.1.4
>
^ permalink raw reply [flat|nested] 5+ messages in thread

* Re: [PATCH] locking/qspinlock: Ensure node is initialised before updating prev->next
2018-01-31 12:38 ` Peter Zijlstra
@ 2018-01-31 14:11 ` Andrea Parri
0 siblings, 0 replies; 5+ messages in thread
From: Andrea Parri @ 2018-01-31 14:11 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: Will Deacon, linux-kernel, Ingo Molnar
On Wed, Jan 31, 2018 at 01:38:59PM +0100, Peter Zijlstra wrote:
> On Wed, Jan 31, 2018 at 12:20:46PM +0000, Will Deacon wrote:
> > diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> > index 294294c71ba4..1ebbc366a31d 100644
> > --- a/kernel/locking/qspinlock.c
> > +++ b/kernel/locking/qspinlock.c
> > @@ -408,16 +408,15 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> > */
> > if (old & _Q_TAIL_MASK) {
> > prev = decode_tail(old);
> > +
> > /*
> > - * The above xchg_tail() is also a load of @lock which generates,
> > - * through decode_tail(), a pointer.
> > - *
> > - * The address dependency matches the RELEASE of xchg_tail()
> > - * such that the access to @prev must happen after.
> > + * We must ensure that the stores to @node are observed before
> > + * the write to prev->next. The address dependency on xchg_tail
> > + * is not sufficient to ensure this because the read component
> > + * of xchg_tail is unordered with respect to the initialisation
> > + * of node.
> > */
> > - smp_read_barrier_depends();
>
> Right, except you're patching old code here, please try again on a tree
> that includes commit:
>
> 548095dea63f ("locking: Remove smp_read_barrier_depends() from queued_spin_lock_slowpath()")
BTW, which loads was/is the smp_read_barrier_depends() supposed to order? ;)
I was somehow guessing that this barrier was/is there to "order" the load
from xchg_tail() with the address-dependent loads from pv_wait_node(); is
this true? (Does Will's patch really remove the reliance on the barrier?)
Andrea
>
> > -
> > - WRITE_ONCE(prev->next, node);
> > + smp_store_release(prev->next, node);
> >
> > pv_wait_node(node, prev);
> > arch_mcs_spin_lock_contended(&node->locked);
> > --
> > 2.1.4
> >
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: [PATCH] locking/qspinlock: Ensure node is initialised before updating prev->next
2018-01-31 12:20 [PATCH] locking/qspinlock: Ensure node is initialised before updating prev->next Will Deacon
2018-01-31 12:38 ` Peter Zijlstra
@ 2018-02-03 2:24 ` kbuild test robot
2018-02-03 2:26 ` kbuild test robot
2 siblings, 0 replies; 5+ messages in thread
From: kbuild test robot @ 2018-02-03 2:24 UTC (permalink / raw)
To: Will Deacon
Cc: kbuild-all, linux-kernel, Will Deacon, Peter Zijlstra,
Ingo Molnar
[-- Attachment #1: Type: text/plain, Size: 3683 bytes --]
Hi Will,
I love your patch! Perhaps something to improve:
[auto build test WARNING on v4.15]
[cannot apply to tip/locking/core tip/core/locking tip/auto-latest next-20180202]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Will-Deacon/locking-qspinlock-Ensure-node-is-initialised-before-updating-prev-next/20180203-095222
config: sparc64-allyesconfig (attached as .config)
compiler: sparc64-linux-gnu-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# save the attached .config to linux build tree
make.cross ARCH=sparc64
All warnings (new ones prefixed by >>):
In file included from include/linux/kernel.h:10:0,
from include/linux/list.h:9,
from include/linux/smp.h:12,
from kernel/locking/qspinlock.c:25:
kernel/locking/qspinlock.c: In function 'queued_spin_lock_slowpath':
include/linux/compiler.h:264:8: error: conversion to non-scalar type requested
union { typeof(x) __val; char __c[1]; } __u = \
^
>> arch/sparc/include/asm/barrier_64.h:45:2: note: in expansion of macro 'WRITE_ONCE'
WRITE_ONCE(*p, v); \
^~~~~~~~~~
include/asm-generic/barrier.h:157:33: note: in expansion of macro '__smp_store_release'
#define smp_store_release(p, v) __smp_store_release(p, v)
^~~~~~~~~~~~~~~~~~~
kernel/locking/qspinlock.c:419:3: note: in expansion of macro 'smp_store_release'
smp_store_release(prev->next, node);
^~~~~~~~~~~~~~~~~
--
In file included from include/linux/kernel.h:10:0,
from include/linux/list.h:9,
from include/linux/smp.h:12,
from kernel//locking/qspinlock.c:25:
kernel//locking/qspinlock.c: In function 'queued_spin_lock_slowpath':
include/linux/compiler.h:264:8: error: conversion to non-scalar type requested
union { typeof(x) __val; char __c[1]; } __u = \
^
>> arch/sparc/include/asm/barrier_64.h:45:2: note: in expansion of macro 'WRITE_ONCE'
WRITE_ONCE(*p, v); \
^~~~~~~~~~
include/asm-generic/barrier.h:157:33: note: in expansion of macro '__smp_store_release'
#define smp_store_release(p, v) __smp_store_release(p, v)
^~~~~~~~~~~~~~~~~~~
kernel//locking/qspinlock.c:419:3: note: in expansion of macro 'smp_store_release'
smp_store_release(prev->next, node);
^~~~~~~~~~~~~~~~~
vim +/WRITE_ONCE +45 arch/sparc/include/asm/barrier_64.h
d550bbd4 David Howells 2012-03-28 40
45d9b859 Michael S. Tsirkin 2015-12-27 41 #define __smp_store_release(p, v) \
47933ad4 Peter Zijlstra 2013-11-06 42 do { \
47933ad4 Peter Zijlstra 2013-11-06 43 compiletime_assert_atomic_type(*p); \
47933ad4 Peter Zijlstra 2013-11-06 44 barrier(); \
76695af2 Andrey Konovalov 2015-08-02 @45 WRITE_ONCE(*p, v); \
47933ad4 Peter Zijlstra 2013-11-06 46 } while (0)
47933ad4 Peter Zijlstra 2013-11-06 47
:::::: The code at line 45 was first introduced by commit
:::::: 76695af20c015206cffb84b15912be6797d0cca2 locking, arch: use WRITE_ONCE()/READ_ONCE() in smp_store_release()/smp_load_acquire()
:::::: TO: Andrey Konovalov <andreyknvl@google.com>
:::::: CC: Ingo Molnar <mingo@kernel.org>
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 52960 bytes --]
^ permalink raw reply [flat|nested] 5+ messages in thread

* Re: [PATCH] locking/qspinlock: Ensure node is initialised before updating prev->next
2018-01-31 12:20 [PATCH] locking/qspinlock: Ensure node is initialised before updating prev->next Will Deacon
2018-01-31 12:38 ` Peter Zijlstra
2018-02-03 2:24 ` kbuild test robot
@ 2018-02-03 2:26 ` kbuild test robot
2 siblings, 0 replies; 5+ messages in thread
From: kbuild test robot @ 2018-02-03 2:26 UTC (permalink / raw)
To: Will Deacon
Cc: kbuild-all, linux-kernel, Will Deacon, Peter Zijlstra,
Ingo Molnar
[-- Attachment #1: Type: text/plain, Size: 3522 bytes --]
Hi Will,
I love your patch! Yet something to improve:
[auto build test ERROR on v4.15]
[cannot apply to tip/locking/core tip/core/locking tip/auto-latest next-20180202]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
url: https://github.com/0day-ci/linux/commits/Will-Deacon/locking-qspinlock-Ensure-node-is-initialised-before-updating-prev-next/20180203-095222
config: x86_64-randconfig-x017-201804 (attached as .config)
compiler: gcc-7 (Debian 7.2.0-12) 7.2.1 20171025
reproduce:
# save the attached .config to linux build tree
make ARCH=x86_64
All error/warnings (new ones prefixed by >>):
In file included from include/linux/kernel.h:10:0,
from include/linux/list.h:9,
from include/linux/smp.h:12,
from kernel/locking/qspinlock.c:25:
kernel/locking/qspinlock.c: In function 'queued_spin_lock_slowpath':
>> include/linux/compiler.h:264:8: error: conversion to non-scalar type requested
union { typeof(x) __val; char __c[1]; } __u = \
^
>> arch/x86/include/asm/barrier.h:71:2: note: in expansion of macro 'WRITE_ONCE'
WRITE_ONCE(*p, v); \
^~~~~~~~~~
include/asm-generic/barrier.h:157:33: note: in expansion of macro '__smp_store_release'
#define smp_store_release(p, v) __smp_store_release(p, v)
^~~~~~~~~~~~~~~~~~~
>> kernel/locking/qspinlock.c:419:3: note: in expansion of macro 'smp_store_release'
smp_store_release(prev->next, node);
^~~~~~~~~~~~~~~~~
--
In file included from include/linux/kernel.h:10:0,
from include/linux/list.h:9,
from include/linux/smp.h:12,
from kernel//locking/qspinlock.c:25:
kernel//locking/qspinlock.c: In function 'queued_spin_lock_slowpath':
>> include/linux/compiler.h:264:8: error: conversion to non-scalar type requested
union { typeof(x) __val; char __c[1]; } __u = \
^
>> arch/x86/include/asm/barrier.h:71:2: note: in expansion of macro 'WRITE_ONCE'
WRITE_ONCE(*p, v); \
^~~~~~~~~~
include/asm-generic/barrier.h:157:33: note: in expansion of macro '__smp_store_release'
#define smp_store_release(p, v) __smp_store_release(p, v)
^~~~~~~~~~~~~~~~~~~
kernel//locking/qspinlock.c:419:3: note: in expansion of macro 'smp_store_release'
smp_store_release(prev->next, node);
^~~~~~~~~~~~~~~~~
vim +/WRITE_ONCE +71 arch/x86/include/asm/barrier.h
47933ad4 Peter Zijlstra 2013-11-06 66
1638fb72 Michael S. Tsirkin 2015-12-27 67 #define __smp_store_release(p, v) \
47933ad4 Peter Zijlstra 2013-11-06 68 do { \
47933ad4 Peter Zijlstra 2013-11-06 69 compiletime_assert_atomic_type(*p); \
47933ad4 Peter Zijlstra 2013-11-06 70 barrier(); \
76695af2 Andrey Konovalov 2015-08-02 @71 WRITE_ONCE(*p, v); \
47933ad4 Peter Zijlstra 2013-11-06 72 } while (0)
47933ad4 Peter Zijlstra 2013-11-06 73
:::::: The code at line 71 was first introduced by commit
:::::: 76695af20c015206cffb84b15912be6797d0cca2 locking, arch: use WRITE_ONCE()/READ_ONCE() in smp_store_release()/smp_load_acquire()
:::::: TO: Andrey Konovalov <andreyknvl@google.com>
:::::: CC: Ingo Molnar <mingo@kernel.org>
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all Intel Corporation
[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 34751 bytes --]
^ permalink raw reply [flat|nested] 5+ messages in thread