* [PATCH] x86/cmpxchg: Remove superfluous definitions
@ 2016-09-26 18:11 Nikolay Borisov
2016-09-27 11:02 ` Peter Zijlstra
2016-09-30 12:00 ` [tip:locking/core] x86/cmpxchg, locking/atomics: " tip-bot for Nikolay Borisov
From: Nikolay Borisov @ 2016-09-26 18:11 UTC (permalink / raw)
To: peterz; +Cc: hpa, x86, mingo, linux-kernel, Nikolay Borisov
cmpxchg.h contained definitions for the (x)add_* operations, dating back
to the original ticket spinlock implementation. Nowadays these are
unused, so remove them.
Signed-off-by: Nikolay Borisov <n.borisov.lkml@gmail.com>
---
arch/x86/include/asm/cmpxchg.h | 44 ------------------------------------------
1 file changed, 44 deletions(-)
diff --git a/arch/x86/include/asm/cmpxchg.h b/arch/x86/include/asm/cmpxchg.h
index 9733361fed6f..97848cdfcb1a 100644
--- a/arch/x86/include/asm/cmpxchg.h
+++ b/arch/x86/include/asm/cmpxchg.h
@@ -158,53 +158,9 @@ extern void __add_wrong_size(void)
* value of "*ptr".
*
* xadd() is locked when multiple CPUs are online
- * xadd_sync() is always locked
- * xadd_local() is never locked
*/
#define __xadd(ptr, inc, lock) __xchg_op((ptr), (inc), xadd, lock)
#define xadd(ptr, inc) __xadd((ptr), (inc), LOCK_PREFIX)
-#define xadd_sync(ptr, inc) __xadd((ptr), (inc), "lock; ")
-#define xadd_local(ptr, inc) __xadd((ptr), (inc), "")
-
-#define __add(ptr, inc, lock) \
- ({ \
- __typeof__ (*(ptr)) __ret = (inc); \
- switch (sizeof(*(ptr))) { \
- case __X86_CASE_B: \
- asm volatile (lock "addb %b1, %0\n" \
- : "+m" (*(ptr)) : "qi" (inc) \
- : "memory", "cc"); \
- break; \
- case __X86_CASE_W: \
- asm volatile (lock "addw %w1, %0\n" \
- : "+m" (*(ptr)) : "ri" (inc) \
- : "memory", "cc"); \
- break; \
- case __X86_CASE_L: \
- asm volatile (lock "addl %1, %0\n" \
- : "+m" (*(ptr)) : "ri" (inc) \
- : "memory", "cc"); \
- break; \
- case __X86_CASE_Q: \
- asm volatile (lock "addq %1, %0\n" \
- : "+m" (*(ptr)) : "ri" (inc) \
- : "memory", "cc"); \
- break; \
- default: \
- __add_wrong_size(); \
- } \
- __ret; \
- })
-
-/*
- * add_*() adds "inc" to "*ptr"
- *
- * __add() takes a lock prefix
- * add_smp() is locked when multiple CPUs are online
- * add_sync() is always locked
- */
-#define add_smp(ptr, inc) __add((ptr), (inc), LOCK_PREFIX)
-#define add_sync(ptr, inc) __add((ptr), (inc), "lock; ")
#define __cmpxchg_double(pfx, p1, p2, o1, o2, n1, n2) \
({ \
--
2.7.4
* Re: [PATCH] x86/cmpxchg: Remove superfluous definitions
From: Peter Zijlstra @ 2016-09-27 11:02 UTC (permalink / raw)
To: Nikolay Borisov; +Cc: hpa, x86, mingo, linux-kernel
On Mon, Sep 26, 2016 at 09:11:18PM +0300, Nikolay Borisov wrote:
> cmpxchg.h contained definitions for the (x)add_* operations, dating back
> to the original ticket spinlock implementation. Nowadays these are
> unused, so remove them.
https://lkml.kernel.org/r/20160518184302.GO3193@twins.programming.kicks-ass.net
should go first though... Ingo?
* Re: [PATCH] x86/cmpxchg: Remove superfluous definitions
From: Ingo Molnar @ 2016-09-27 12:18 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: Nikolay Borisov, hpa, x86, mingo, linux-kernel
* Peter Zijlstra <peterz@infradead.org> wrote:
> On Mon, Sep 26, 2016 at 09:11:18PM +0300, Nikolay Borisov wrote:
> > cmpxchg.h contained definitions for the (x)add_* operations, dating back
> > to the original ticket spinlock implementation. Nowadays these are
> > unused, so remove them.
>
> https://lkml.kernel.org/r/20160518184302.GO3193@twins.programming.kicks-ass.net
>
> should go first though... Ingo?
Sure, no objections from me!
Thanks,
Ingo
* [tip:locking/core] x86/cmpxchg, locking/atomics: Remove superfluous definitions
From: tip-bot for Nikolay Borisov @ 2016-09-30 12:00 UTC (permalink / raw)
To: linux-tip-commits
Cc: akpm, hpa, linux-kernel, paulmck, mingo, peterz, torvalds, tglx,
n.borisov.lkml
Commit-ID: 08645077b7f9f7824dbaf1959b0e014a894c8acc
Gitweb: http://git.kernel.org/tip/08645077b7f9f7824dbaf1959b0e014a894c8acc
Author: Nikolay Borisov <n.borisov.lkml@gmail.com>
AuthorDate: Mon, 26 Sep 2016 21:11:18 +0300
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 30 Sep 2016 10:56:01 +0200
x86/cmpxchg, locking/atomics: Remove superfluous definitions
cmpxchg.h contained definitions for the (x)add_* operations, dating back
to the original ticket spinlock implementation. Nowadays these are
unused, so remove them.
Signed-off-by: Nikolay Borisov <n.borisov.lkml@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: hpa@zytor.com
Link: http://lkml.kernel.org/r/1474913478-17757-1-git-send-email-n.borisov.lkml@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
arch/x86/include/asm/cmpxchg.h | 44 ------------------------------------------
1 file changed, 44 deletions(-)
diff --git a/arch/x86/include/asm/cmpxchg.h b/arch/x86/include/asm/cmpxchg.h
index 9733361..97848cd 100644
--- a/arch/x86/include/asm/cmpxchg.h
+++ b/arch/x86/include/asm/cmpxchg.h
@@ -158,53 +158,9 @@ extern void __add_wrong_size(void)
* value of "*ptr".
*
* xadd() is locked when multiple CPUs are online
- * xadd_sync() is always locked
- * xadd_local() is never locked
*/
#define __xadd(ptr, inc, lock) __xchg_op((ptr), (inc), xadd, lock)
#define xadd(ptr, inc) __xadd((ptr), (inc), LOCK_PREFIX)
-#define xadd_sync(ptr, inc) __xadd((ptr), (inc), "lock; ")
-#define xadd_local(ptr, inc) __xadd((ptr), (inc), "")
-
-#define __add(ptr, inc, lock) \
- ({ \
- __typeof__ (*(ptr)) __ret = (inc); \
- switch (sizeof(*(ptr))) { \
- case __X86_CASE_B: \
- asm volatile (lock "addb %b1, %0\n" \
- : "+m" (*(ptr)) : "qi" (inc) \
- : "memory", "cc"); \
- break; \
- case __X86_CASE_W: \
- asm volatile (lock "addw %w1, %0\n" \
- : "+m" (*(ptr)) : "ri" (inc) \
- : "memory", "cc"); \
- break; \
- case __X86_CASE_L: \
- asm volatile (lock "addl %1, %0\n" \
- : "+m" (*(ptr)) : "ri" (inc) \
- : "memory", "cc"); \
- break; \
- case __X86_CASE_Q: \
- asm volatile (lock "addq %1, %0\n" \
- : "+m" (*(ptr)) : "ri" (inc) \
- : "memory", "cc"); \
- break; \
- default: \
- __add_wrong_size(); \
- } \
- __ret; \
- })
-
-/*
- * add_*() adds "inc" to "*ptr"
- *
- * __add() takes a lock prefix
- * add_smp() is locked when multiple CPUs are online
- * add_sync() is always locked
- */
-#define add_smp(ptr, inc) __add((ptr), (inc), LOCK_PREFIX)
-#define add_sync(ptr, inc) __add((ptr), (inc), "lock; ")
#define __cmpxchg_double(pfx, p1, p2, o1, o2, n1, n2) \
({ \