* [RFC PATCH v1 2/5] locking/qspinlock: Refactor xchg_tail to use atomic_fetch_and_or
From: Rui Wang @ 2021-07-28 11:49 UTC
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Arnd Bergmann
Cc: Waiman Long, Boqun Feng, Guo Ren, linux-arch, Rui Wang, hev,
Xuefeng Li, Huacai Chen, Jiaxun Yang, Huacai Chen
From: wangrui <wangrui@loongson.cn>
Signed-off-by: Rui Wang <wangrui@loongson.cn>
Signed-off-by: hev <r@hev.cc>
---
kernel/locking/qspinlock.c | 18 ++----------------
1 file changed, 2 insertions(+), 16 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index cbff6ba53d56..350363e14e38 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -219,22 +219,8 @@ static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
  */
 static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
 {
-	u32 old, new, val = atomic_read(&lock->val);
-
-	for (;;) {
-		new = (val & _Q_LOCKED_PENDING_MASK) | tail;
-		/*
-		 * We can use relaxed semantics since the caller ensures that
-		 * the MCS node is properly initialized before updating the
-		 * tail.
-		 */
-		old = atomic_cmpxchg_relaxed(&lock->val, val, new);
-		if (old == val)
-			break;
-
-		val = old;
-	}
-	return old;
+	const u32 mask = _Q_LOCKED_PENDING_MASK;
+	return atomic_fetch_and_or_relaxed(mask, tail, &lock->val);
 }
 #endif /* _Q_PENDING_BITS == 8 */
--
2.32.0
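
A note for readers without the rest of the series at hand: atomic_fetch_and_or_relaxed() is not an existing mainline atomic operation, so it is presumably introduced by an earlier patch in this series (not shown here). A minimal, hypothetical sketch of what a generic cmpxchg-based fallback with the (mask, or_val, v) signature used above might look like:

static __always_inline int atomic_fetch_and_or_relaxed(int mask, int or_val,
							atomic_t *v)
{
	int old = atomic_read(v);

	/*
	 * Hypothetical generic fallback, not taken from this series: retry
	 * until the combined (and, or) update lands in one atomic step,
	 * i.e. keep only the bits in @mask, set the bits in @or_val, and
	 * return the value observed before the update.
	 */
	do {
	} while (!atomic_try_cmpxchg_relaxed(v, &old, (old & mask) | or_val));

	return old;
}

One plausible motivation, not stated in the patch description, is that a single named primitive lets architectures with a suitable combined atomic instruction avoid the cmpxchg retry loop entirely, while other architectures fall back to something like the loop above.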
* Re: [RFC PATCH v1 2/5] locking/qspinlock: Refactor xchg_tail to use atomic_fetch_and_or
From: Boqun Feng @ 2021-07-28 12:27 UTC
To: Rui Wang
Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Arnd Bergmann,
Waiman Long, Guo Ren, linux-arch, hev, Xuefeng Li, Huacai Chen,
Jiaxun Yang, Huacai Chen
On Wed, Jul 28, 2021 at 07:49:23PM +0800, Rui Wang wrote:
> From: wangrui <wangrui@loongson.cn>
>
Please add an explanation of why this change is being made; ditto for the
rest of the patchset.
> Signed-off-by: Rui Wang <wangrui@loongson.cn>
> Signed-off-by: hev <r@hev.cc>
> ---
> kernel/locking/qspinlock.c | 18 ++----------------
> 1 file changed, 2 insertions(+), 16 deletions(-)
>
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index cbff6ba53d56..350363e14e38 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -219,22 +219,8 @@ static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
>   */
>  static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
>  {
> -	u32 old, new, val = atomic_read(&lock->val);
> -
> -	for (;;) {
> -		new = (val & _Q_LOCKED_PENDING_MASK) | tail;
> -		/*
> -		 * We can use relaxed semantics since the caller ensures that
> -		 * the MCS node is properly initialized before updating the
> -		 * tail.
> -		 */
> -		old = atomic_cmpxchg_relaxed(&lock->val, val, new);
> -		if (old == val)
> -			break;
> -
> -		val = old;
> -	}
> -	return old;
> +	const u32 mask = _Q_LOCKED_PENDING_MASK;
There should be a blank line between the variable definition and the code;
also, please keep the comment explaining why we use _relaxed() here.
Regards,
Boqun
> +	return atomic_fetch_and_or_relaxed(mask, tail, &lock->val);
>  }
>  #endif /* _Q_PENDING_BITS == 8 */
>
> --
> 2.32.0
>
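
Taking both review points together, the replacement code in xchg_tail() might end up looking like the following sketch, which simply re-adds the blank line and the original _relaxed comment around the lines from the patch (still assuming the proposed atomic_fetch_and_or_relaxed() helper):

static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
{
	const u32 mask = _Q_LOCKED_PENDING_MASK;

	/*
	 * We can use relaxed semantics since the caller ensures that
	 * the MCS node is properly initialized before updating the
	 * tail.
	 */
	return atomic_fetch_and_or_relaxed(mask, tail, &lock->val);
}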