* [PATCH v2 0/1] local_lock: Move this_cpu_ptr() notation from internal to main header
From: Sebastian Andrzej Siewior @ 2025-06-10 11:02 UTC
To: linux-kernel, linux-rt-devel
Cc: tglx, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long,
    Boqun Feng, Alexei Starovoitov, Sebastian Andrzej Siewior

While looking at what needs extra locks on PREEMPT_RT in order to get
rid of the lock in local_bh_disable(), I stumbled upon two users which
need to lock the structure even though the pointer is no longer a
per-CPU pointer.

This series moves this_cpu_ptr() from the internal header to the main
one in order to free up the namespace, and lets the __-prefixed
functions do the same job but without the this_cpu_ptr(). This gives us

	local_lock_nested_bh()   -> operates on per-CPU memory
	__local_lock_nested_bh() -> operates on local memory

This change has been made to all local_lock*() functions.

I made an example for the crypto user
	https://lore.kernel.org/all/20250514110750.852919-3-bigeasy@linutronix.de/
and would route it via crypto once this is accepted.

v1…v2: https://lore.kernel.org/all/20250514110750.852919-1-bigeasy@linutronix.de/
  - Repost without the crypto user.

Sebastian Andrzej Siewior (1):
  local_lock: Move this_cpu_ptr() notation from internal to main header.

 include/linux/local_lock.h          | 20 +++++++++----------
 include/linux/local_lock_internal.h | 30 ++++++++++++++---------------
 2 files changed, 25 insertions(+), 25 deletions(-)

--
2.49.0
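To make the nested_bh distinction above concrete, here is a minimal
usage sketch (illustration only, not part of the series; struct bh_ctx
and both functions are hypothetical names):

	#include <linux/local_lock.h>
	#include <linux/percpu.h>

	struct bh_ctx {
		local_lock_t	lock;
		int		counter;
	};

	static DEFINE_PER_CPU(struct bh_ctx, bh_ctx) = {
		.lock	= INIT_LOCAL_LOCK(lock),
	};

	/* Softirq context; the lock is named via its per-CPU variable
	 * and the wrapper applies this_cpu_ptr() internally.
	 */
	static void update_this_cpu(void)
	{
		local_lock_nested_bh(&bh_ctx.lock);
		this_cpu_inc(bh_ctx.counter);
		local_unlock_nested_bh(&bh_ctx.lock);
	}

	/* Softirq context; the pointer has already been resolved, so it
	 * is no longer per-CPU notation and the __ variant applies.
	 */
	static void update_ctx(struct bh_ctx *ctx)
	{
		__local_lock_nested_bh(&ctx->lock);
		ctx->counter++;
		__local_unlock_nested_bh(&ctx->lock);
	}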
* [PATCH v2 1/1] local_lock: Move this_cpu_ptr() notation from internal to main header.
From: Sebastian Andrzej Siewior @ 2025-06-10 11:02 UTC
To: linux-kernel, linux-rt-devel
Cc: tglx, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long,
    Boqun Feng, Alexei Starovoitov, Sebastian Andrzej Siewior

local_lock.h is the main entry point for the local_lock_t type and
provides wrappers around the internal functions prefixed with __ in
local_lock_internal.h.

Move the this_cpu_ptr() dereference of the lock variable from the
internal to the main header. Since everything is implemented as macros,
this_cpu_ptr() still happens within the preempt/IRQ disabled section.
This frees the internal implementation (__) to be used on local_lock_t
types which are local variables and must not be accessed via
this_cpu_ptr().

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 include/linux/local_lock.h          | 20 +++++++++----------
 include/linux/local_lock_internal.h | 30 ++++++++++++++---------------
 2 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/include/linux/local_lock.h b/include/linux/local_lock.h
index 16a2ee4f8310b..2ba8464195244 100644
--- a/include/linux/local_lock.h
+++ b/include/linux/local_lock.h
@@ -13,13 +13,13 @@
  * local_lock - Acquire a per CPU local lock
  * @lock:	The lock variable
  */
-#define local_lock(lock)		__local_lock(lock)
+#define local_lock(lock)		__local_lock(this_cpu_ptr(lock))

 /**
  * local_lock_irq - Acquire a per CPU local lock and disable interrupts
  * @lock:	The lock variable
  */
-#define local_lock_irq(lock)		__local_lock_irq(lock)
+#define local_lock_irq(lock)		__local_lock_irq(this_cpu_ptr(lock))

 /**
  * local_lock_irqsave - Acquire a per CPU local lock, save and disable
@@ -28,19 +28,19 @@
  * @flags:	Storage for interrupt flags
  */
 #define local_lock_irqsave(lock, flags)				\
-	__local_lock_irqsave(lock, flags)
+	__local_lock_irqsave(this_cpu_ptr(lock), flags)

 /**
  * local_unlock - Release a per CPU local lock
  * @lock:	The lock variable
  */
-#define local_unlock(lock)		__local_unlock(lock)
+#define local_unlock(lock)		__local_unlock(this_cpu_ptr(lock))

 /**
  * local_unlock_irq - Release a per CPU local lock and enable interrupts
  * @lock:	The lock variable
  */
-#define local_unlock_irq(lock)		__local_unlock_irq(lock)
+#define local_unlock_irq(lock)		__local_unlock_irq(this_cpu_ptr(lock))

 /**
  * local_unlock_irqrestore - Release a per CPU local lock and restore
@@ -49,7 +49,7 @@
  * @flags:	Interrupt flags to restore
  */
 #define local_unlock_irqrestore(lock, flags)			\
-	__local_unlock_irqrestore(lock, flags)
+	__local_unlock_irqrestore(this_cpu_ptr(lock), flags)

 /**
  * local_lock_init - Runtime initialize a lock instance
@@ -64,7 +64,7 @@
  * locking constrains it will _always_ fail to acquire the lock in NMI or
  * HARDIRQ context on PREEMPT_RT.
  */
-#define local_trylock(lock)		__local_trylock(lock)
+#define local_trylock(lock)		__local_trylock(this_cpu_ptr(lock))

 /**
  * local_trylock_irqsave - Try to acquire a per CPU local lock, save and disable
@@ -77,7 +77,7 @@
  * HARDIRQ context on PREEMPT_RT.
  */
 #define local_trylock_irqsave(lock, flags)			\
-	__local_trylock_irqsave(lock, flags)
+	__local_trylock_irqsave(this_cpu_ptr(lock), flags)

 DEFINE_GUARD(local_lock, local_lock_t __percpu*,
	     local_lock(_T),
@@ -91,10 +91,10 @@ DEFINE_LOCK_GUARD_1(local_lock_irqsave, local_lock_t __percpu,
		    unsigned long flags)

 #define local_lock_nested_bh(_lock)				\
-	__local_lock_nested_bh(_lock)
+	__local_lock_nested_bh(this_cpu_ptr(_lock))

 #define local_unlock_nested_bh(_lock)				\
-	__local_unlock_nested_bh(_lock)
+	__local_unlock_nested_bh(this_cpu_ptr(_lock))

 DEFINE_GUARD(local_lock_nested_bh, local_lock_t __percpu*,
	     local_lock_nested_bh(_T),
diff --git a/include/linux/local_lock_internal.h b/include/linux/local_lock_internal.h
index 8d5ac16a9b179..b4d7b24882835 100644
--- a/include/linux/local_lock_internal.h
+++ b/include/linux/local_lock_internal.h
@@ -99,14 +99,14 @@ do {								\
	local_trylock_t	*tl;					\
	local_lock_t	*l;					\
								\
-	l = (local_lock_t *)this_cpu_ptr(lock);			\
+	l = (local_lock_t *)(lock);				\
	tl = (local_trylock_t *)l;				\
	_Generic((lock),					\
-		__percpu local_trylock_t *: ({			\
+		local_trylock_t *: ({				\
			lockdep_assert(tl->acquired == 0);	\
			WRITE_ONCE(tl->acquired, 1);		\
		}),						\
-		__percpu local_lock_t *: (void)0);		\
+		local_lock_t *: (void)0);			\
	local_lock_acquire(l);					\
 } while (0)

@@ -133,7 +133,7 @@ do {								\
	local_trylock_t *tl;					\
								\
	preempt_disable();					\
-	tl = this_cpu_ptr(lock);				\
+	tl = (lock);						\
	if (READ_ONCE(tl->acquired)) {				\
		preempt_enable();				\
		tl = NULL;					\
@@ -150,7 +150,7 @@ do {								\
	local_trylock_t *tl;					\
								\
	local_irq_save(flags);					\
-	tl = this_cpu_ptr(lock);				\
+	tl = (lock);						\
	if (READ_ONCE(tl->acquired)) {				\
		local_irq_restore(flags);			\
		tl = NULL;					\
@@ -167,15 +167,15 @@ do {								\
	local_trylock_t *tl;					\
	local_lock_t *l;					\
								\
-	l = (local_lock_t *)this_cpu_ptr(lock);			\
+	l = (local_lock_t *)(lock);				\
	tl = (local_trylock_t *)l;				\
	local_lock_release(l);					\
	_Generic((lock),					\
-		__percpu local_trylock_t *: ({			\
+		local_trylock_t *: ({				\
			lockdep_assert(tl->acquired == 1);	\
			WRITE_ONCE(tl->acquired, 0);		\
		}),						\
-		__percpu local_lock_t *: (void)0);		\
+		local_lock_t *: (void)0);			\
 } while (0)

 #define __local_unlock(lock)					\
@@ -199,11 +199,11 @@ do {								\
 #define __local_lock_nested_bh(lock)				\
	do {							\
		lockdep_assert_in_softirq();			\
-		local_lock_acquire(this_cpu_ptr(lock));		\
+		local_lock_acquire((lock));			\
	} while (0)

 #define __local_unlock_nested_bh(lock)				\
-	local_lock_release(this_cpu_ptr(lock))
+	local_lock_release((lock))

 #else /* !CONFIG_PREEMPT_RT */

@@ -227,7 +227,7 @@ typedef spinlock_t local_trylock_t;
 #define __local_lock(__lock)					\
	do {							\
		migrate_disable();				\
-		spin_lock(this_cpu_ptr((__lock)));		\
+		spin_lock((__lock));				\
	} while (0)

 #define __local_lock_irq(lock)			__local_lock(lock)
@@ -241,7 +241,7 @@ typedef spinlock_t local_trylock_t;

 #define __local_unlock(__lock)					\
	do {							\
-		spin_unlock(this_cpu_ptr((__lock)));		\
+		spin_unlock((__lock));				\
		migrate_enable();				\
	} while (0)

@@ -252,12 +252,12 @@ typedef spinlock_t local_trylock_t;
 #define __local_lock_nested_bh(lock)				\
	do {							\
		lockdep_assert_in_softirq_func();		\
-		spin_lock(this_cpu_ptr(lock));			\
+		spin_lock((lock));				\
	} while (0)

 #define __local_unlock_nested_bh(lock)				\
	do {							\
-		spin_unlock(this_cpu_ptr((lock)));		\
+		spin_unlock((lock));				\
	} while (0)

 #define __local_trylock(lock)					\
@@ -268,7 +268,7 @@ do {								\
		__locked = 0;					\
	} else {						\
		migrate_disable();				\
-		__locked = spin_trylock(this_cpu_ptr((lock)));	\
+		__locked = spin_trylock((lock));		\
		if (!__locked)					\
			migrate_enable();			\
	}							\
--
2.49.0
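For illustration, a rough hand-expansion of local_lock() on PREEMPT_RT
after this patch (a sketch, not authoritative preprocessor output).
Because macro arguments are substituted textually, the this_cpu_ptr()
supplied by the wrapper is still evaluated after migrate_disable(), so
the task cannot migrate between resolving the per-CPU pointer and
taking the lock:

	local_lock(lock)
	  => __local_lock(this_cpu_ptr(lock))
	  => do {
		migrate_disable();		/* pin the task first ...  */
		spin_lock(this_cpu_ptr(lock));	/* ... then resolve ptr    */
	     } while (0)

The same ordering holds on !PREEMPT_RT, where the pointer is resolved
only after preempt_disable() or local_irq_save().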
* Re: [PATCH v2 1/1] local_lock: Move this_cpu_ptr() notation from internal to main header.
From: Sebastian Andrzej Siewior @ 2025-06-28  8:08 UTC
To: linux-kernel, linux-rt-devel
Cc: tglx, Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long,
    Boqun Feng, Alexei Starovoitov

On 2025-06-10 13:02:04 [+0200], To linux-kernel@vger.kernel.org wrote:
> local_lock.h is the main entry point for the local_lock_t type and
> provides wrappers around the internal functions prefixed with __ in
> local_lock_internal.h.
>
> Move the this_cpu_ptr() dereference of the lock variable from the
> internal to the main header. Since everything is implemented as
> macros, this_cpu_ptr() still happens within the preempt/IRQ disabled
> section. This frees the internal implementation (__) to be used on
> local_lock_t types which are local variables and must not be accessed
> via this_cpu_ptr().
>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

A gentle ping.

Sebastian
* Re: [PATCH v2 1/1] local_lock: Move this_cpu_ptr() notation from internal to main header.
From: Waiman Long @ 2025-06-28 19:06 UTC
To: Sebastian Andrzej Siewior, linux-kernel, linux-rt-devel
Cc: tglx, Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng,
    Alexei Starovoitov

On 6/10/25 7:02 AM, Sebastian Andrzej Siewior wrote:
> local_lock.h is the main entry point for the local_lock_t type and
> provides wrappers around the internal functions prefixed with __ in
> local_lock_internal.h.
>
> Move the this_cpu_ptr() dereference of the lock variable from the
> internal to the main header. Since everything is implemented as
> macros, this_cpu_ptr() still happens within the preempt/IRQ disabled
> section. This frees the internal implementation (__) to be used on
> local_lock_t types which are local variables and must not be accessed
> via this_cpu_ptr().
>
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
> [...]
> @@ -268,7 +268,7 @@ do {								\
> 		__locked = 0;					\
> 	} else {						\
> 		migrate_disable();				\
> -		__locked = spin_trylock(this_cpu_ptr((lock)));	\
> +		__locked = spin_trylock((lock));		\
> 		if (!__locked)					\
> 			migrate_enable();			\
> 	}							\

It looks better if the trailing '\' of multi-line macros can be made
aligned on the right side. Other than that, it looks good to me.

Acked-by: Waiman Long <longman@redhat.com>
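As an illustration of the alignment remark (EXAMPLE() and foo() are
placeholder names, not taken from the patch):

	/* Trailing backslashes left wherever each line happens to end: */
	#define EXAMPLE(x) \
	do { \
		foo(x); \
	} while (0)

	/* Trailing backslashes aligned on a common right-hand column: */
	#define EXAMPLE(x)					\
	do {							\
		foo(x);						\
	} while (0)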