From mboxrd@z Thu Jan 1 00:00:00 1970
From: Boqun Feng
To: Peter Zijlstra
Cc: Catalin Marinas, Will Deacon, Jonas Bonn, Stefan Kristiansson,
 Stafford Horne, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
 Christian Borntraeger, Sven Schnelle, Thomas Gleixner, Ingo Molnar,
 Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin",
 Arnd Bergmann, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
 Steven Rostedt, Ben Segall, Mel Gorman, Valentin Schneider,
 K Prateek Nayak, Boqun Feng, Waiman Long, Andrew Morton,
 Miguel Ojeda, Gary Guo, Björn Roy Baron, Benno Lossin,
 Andreas Hindborg, Alice Ryhl, Trevor Gross, Danilo Krummrich,
 Jinjie Ruan, Ada Couprie Diaz, Lyude Paul, Sohil Mehta, Pawan Gupta,
 "Xin Li (Intel)", Sean Christopherson, Nikunj A Dadhania,
 Joel Fernandes, Andy Shevchenko, Randy Dunlap, Yury Norov,
 Sebastian Andrzej Siewior, linux-arm-kernel@lists.infradead.org,
 linux-kernel@vger.kernel.org, linux-openrisc@vger.kernel.org,
 linux-s390@vger.kernel.org, linux-arch@vger.kernel.org,
 rust-for-linux@vger.kernel.org, Boqun Feng
Subject: [PATCH 05/11] irq & spin_lock: Add counted interrupt disabling/enabling
Date: Thu, 7 May 2026 21:21:05 -0700
Message-ID: <20260508042111.24358-6-boqun@kernel.org>
X-Mailer: git-send-email 2.50.1
In-Reply-To: <20260508042111.24358-1-boqun@kernel.org>
References: <20260508042111.24358-1-boqun@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Boqun Feng

Currently, nested interrupt disabling and enabling is provided by the
_irqsave() and _irqrestore() APIs, which are relatively unsafe, for
example:

	spin_lock_irqsave(l1, flag1);
	spin_lock_irqsave(l2, flag2);
	spin_unlock_irqrestore(l1, flag1);
	// accesses to interrupt-disable protected data will cause races.

This is even easier to trigger with the guard facilities:

	unsigned long flag2;

	scoped_guard(spin_lock_irqsave, l1) {
		spin_lock_irqsave(l2, flag2);
	}
	// l2 locked but interrupts are enabled.
	spin_unlock_irqrestore(l2, flag2);

(Hand-over-hand locking critical sections are not uncommon in a
fine-grained lock design.)

Because of this unsafety, Rust cannot easily wrap the
interrupt-disabling locks in a safe API, which complicates the design.

To resolve this, introduce a new set of interrupt disabling APIs:

*	local_interrupt_disable();
*	local_interrupt_enable();

They work like local_irq_save() and local_irq_restore(), except that:

1)	the outermost local_interrupt_disable() call saves the interrupt
	state into a percpu variable, so that the outermost
	local_interrupt_enable() can restore the state, and

2)	a percpu counter records the nesting level of these calls, so
	that interrupts are not accidentally enabled inside the
	outermost critical section.

Also add the corresponding spin_lock primitives, spin_lock_irq_disable()
and spin_unlock_irq_enable(). As a result, code such as:

	spin_lock_irq_disable(l1);
	spin_lock_irq_disable(l2);
	spin_unlock_irq_enable(l1);
	// Interrupts are still disabled.
	spin_unlock_irq_enable(l2);

doesn't have the issue that interrupts are accidentally enabled. This
also makes wrapping interrupt-disabling locks in Rust easier to design.
Signed-off-by: Boqun Feng
Signed-off-by: Lyude Paul
Signed-off-by: Boqun Feng
Link: https://patch.msgid.link/20260121223933.1568682-6-lyude@redhat.com
---
 include/linux/interrupt_rc.h     | 63 ++++++++++++++++++++++++++++++++
 include/linux/preempt.h          |  4 ++
 include/linux/spinlock.h         | 25 +++++++++++++
 include/linux/spinlock_api_smp.h | 27 ++++++++++++++
 include/linux/spinlock_api_up.h  |  9 +++++
 include/linux/spinlock_rt.h      | 15 ++++++++
 kernel/locking/spinlock.c        | 29 +++++++++++++++
 kernel/softirq.c                 |  3 ++
 8 files changed, 175 insertions(+)
 create mode 100644 include/linux/interrupt_rc.h

diff --git a/include/linux/interrupt_rc.h b/include/linux/interrupt_rc.h
new file mode 100644
index 000000000000..d6d05498731b
--- /dev/null
+++ b/include/linux/interrupt_rc.h
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * include/linux/interrupt_rc.h - refcounted local processor interrupt
+ * management.
+ *
+ * Since the implementation of this API currently depends on
+ * local_irq_save()/local_irq_restore(), we split this into its own header to
+ * make it easier to include without hitting circular header dependencies.
+ */
+
+#ifndef __LINUX_INTERRUPT_RC_H
+#define __LINUX_INTERRUPT_RC_H
+
+#include <linux/preempt.h>
+#include <linux/irqflags.h>
+#ifdef CONFIG_SMP
+#include <linux/percpu.h>
+#endif
+
+/* Per-cpu interrupt disabling state for local_interrupt_{disable,enable}() */
+struct interrupt_disable_state {
+	unsigned long flags;
+};
+
+DECLARE_PER_CPU(struct interrupt_disable_state, local_interrupt_disable_state);
+
+static inline void local_interrupt_disable(void)
+{
+	unsigned long flags;
+	int new_count;
+
+	new_count = hardirq_disable_enter();
+
+	if ((new_count & HARDIRQ_DISABLE_MASK) == HARDIRQ_DISABLE_OFFSET) {
+		local_irq_save(flags);
+		raw_cpu_write(local_interrupt_disable_state.flags, flags);
+	}
+}
+
+static inline void local_interrupt_enable(void)
+{
+	int new_count;
+
+	new_count = hardirq_disable_exit();
+
+	if ((new_count & HARDIRQ_DISABLE_MASK) == 0) {
+		unsigned long flags;
+
+		flags = raw_cpu_read(local_interrupt_disable_state.flags);
+		local_irq_restore(flags);
+		/*
+		 * TODO: re-reading the preempt count could be avoided, but
+		 * that needs should_resched() to take another parameter as
+		 * the current preempt count.
+		 */
+#ifdef CONFIG_PREEMPTION
+		if (should_resched(0))
+			__preempt_schedule();
+#endif
+	}
+}
+
+#endif /* !__LINUX_INTERRUPT_RC_H */
diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index e2d3079d3f5f..33fc4c814a9f 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -151,6 +151,10 @@ static __always_inline unsigned char interrupt_context_level(void)
 #define in_softirq()		(softirq_count())
 #define in_interrupt()		(irq_count())
 
+#define hardirq_disable_count()	((preempt_count() & HARDIRQ_DISABLE_MASK) >> HARDIRQ_DISABLE_SHIFT)
+#define hardirq_disable_enter()	__preempt_count_add_return(HARDIRQ_DISABLE_OFFSET)
+#define hardirq_disable_exit()	__preempt_count_sub_return(HARDIRQ_DISABLE_OFFSET)
+
 /*
  * The preempt_count offset after preempt_disable();
  */
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 241277cd34cf..66fa699fff19 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -57,6 +57,7 @@
 #include
 #include
 #include
+#include <linux/interrupt_rc.h>
 #include
 #include
 #include
@@ -273,9 +274,11 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
 #endif
 
 #define raw_spin_lock_irq(lock)		_raw_spin_lock_irq(lock)
+#define raw_spin_lock_irq_disable(lock)	_raw_spin_lock_irq_disable(lock)
 #define raw_spin_lock_bh(lock)		_raw_spin_lock_bh(lock)
 #define raw_spin_unlock(lock)		_raw_spin_unlock(lock)
 #define raw_spin_unlock_irq(lock)	_raw_spin_unlock_irq(lock)
+#define raw_spin_unlock_irq_enable(lock)	_raw_spin_unlock_irq_enable(lock)
 
 #define raw_spin_unlock_irqrestore(lock, flags)		\
 	do {						\
@@ -290,6 +293,13 @@ static inline void do_raw_spin_unlock(raw_spinlock_t *lock) __releases(lock)
 #define raw_spin_trylock_irqsave(lock, flags) \
 	_raw_spin_trylock_irqsave(lock, &(flags))
 
+#define raw_spin_trylock_irq_disable(lock)			\
+({								\
+	local_interrupt_disable();				\
+	raw_spin_trylock(lock) ?				\
+		1 : ({ local_interrupt_enable(); 0; });		\
+})
+
 #ifndef CONFIG_PREEMPT_RT
 /* Include rwlock functions for !RT */
 #include
@@ -372,6 +382,11 @@ static __always_inline void spin_lock_irq(spinlock_t *lock)
 	raw_spin_lock_irq(&lock->rlock);
 }
 
+static __always_inline void spin_lock_irq_disable(spinlock_t *lock)
+{
+	raw_spin_lock_irq_disable(&lock->rlock);
+}
+
 #define spin_lock_irqsave(lock, flags)					\
 	do {								\
 		raw_spin_lock_irqsave(spinlock_check(lock), flags);	\
@@ -402,6 +417,11 @@ static __always_inline void spin_unlock_irq(spinlock_t *lock)
 	raw_spin_unlock_irq(&lock->rlock);
 }
 
+static __always_inline void spin_unlock_irq_enable(spinlock_t *lock)
+{
+	raw_spin_unlock_irq_enable(&lock->rlock);
+}
+
 static __always_inline void spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
 	__releases(lock) __no_context_analysis
 {
@@ -427,6 +447,11 @@ static __always_inline bool _spin_trylock_irqsave(spinlock_t *lock, unsigned lon
 }
 #define spin_trylock_irqsave(lock, flags) _spin_trylock_irqsave(lock, &(flags))
 
+static __always_inline int spin_trylock_irq_disable(spinlock_t *lock)
+{
+	return raw_spin_trylock_irq_disable(&lock->rlock);
+}
+
 /**
  * spin_is_locked() - Check whether a spinlock is locked.
  * @lock: Pointer to the spinlock.
diff --git a/include/linux/spinlock_api_smp.h b/include/linux/spinlock_api_smp.h
index bda5e7a390cd..a05f507b6979 100644
--- a/include/linux/spinlock_api_smp.h
+++ b/include/linux/spinlock_api_smp.h
@@ -28,6 +28,8 @@ _raw_spin_lock_nest_lock(raw_spinlock_t *lock, struct lockdep_map *map)
 void __lockfunc _raw_spin_lock_bh(raw_spinlock_t *lock)	__acquires(lock);
 void __lockfunc _raw_spin_lock_irq(raw_spinlock_t *lock)	__acquires(lock);
+void __lockfunc _raw_spin_lock_irq_disable(raw_spinlock_t *lock)
+	__acquires(lock);
 unsigned long __lockfunc _raw_spin_lock_irqsave(raw_spinlock_t *lock)
 	__acquires(lock);
@@ -39,6 +41,7 @@ int __lockfunc _raw_spin_trylock_bh(raw_spinlock_t *lock) __cond_acquires(true,
 void __lockfunc _raw_spin_unlock(raw_spinlock_t *lock)	__releases(lock);
 void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock)	__releases(lock);
 void __lockfunc _raw_spin_unlock_irq(raw_spinlock_t *lock)	__releases(lock);
+void __lockfunc _raw_spin_unlock_irq_enable(raw_spinlock_t *lock) __releases(lock);
 void __lockfunc
 _raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags)
 	__releases(lock);
@@ -55,6 +58,11 @@ _raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags)
 #define _raw_spin_lock_irq(lock) __raw_spin_lock_irq(lock)
 #endif
 
+/* Use the same config as spin_lock_irq() temporarily. */
+#ifdef CONFIG_INLINE_SPIN_LOCK_IRQ
+#define _raw_spin_lock_irq_disable(lock) __raw_spin_lock_irq_disable(lock)
+#endif
+
 #ifdef CONFIG_INLINE_SPIN_LOCK_IRQSAVE
 #define _raw_spin_lock_irqsave(lock) __raw_spin_lock_irqsave(lock)
 #endif
@@ -79,6 +87,11 @@ _raw_spin_unlock_irqrestore(raw_spinlock_t *lock, unsigned long flags)
 #define _raw_spin_unlock_irq(lock) __raw_spin_unlock_irq(lock)
 #endif
 
+/* Use the same config as spin_unlock_irq() temporarily. */
+#ifdef CONFIG_INLINE_SPIN_UNLOCK_IRQ
+#define _raw_spin_unlock_irq_enable(lock) __raw_spin_unlock_irq_enable(lock)
+#endif
+
 #ifdef CONFIG_INLINE_SPIN_UNLOCK_IRQRESTORE
 #define _raw_spin_unlock_irqrestore(lock, flags) __raw_spin_unlock_irqrestore(lock, flags)
 #endif
@@ -143,6 +156,13 @@ static inline void __raw_spin_lock_irq(raw_spinlock_t *lock)
 	LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
 }
 
+static inline void __raw_spin_lock_irq_disable(raw_spinlock_t *lock)
+{
+	local_interrupt_disable();
+	spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
+	LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
+}
+
 static inline void __raw_spin_lock_bh(raw_spinlock_t *lock)
 	__acquires(lock) __no_context_analysis
 {
@@ -188,6 +208,13 @@ static inline void __raw_spin_unlock_irq(raw_spinlock_t *lock)
 	preempt_enable();
 }
 
+static inline void __raw_spin_unlock_irq_enable(raw_spinlock_t *lock)
+{
+	spin_release(&lock->dep_map, _RET_IP_);
+	do_raw_spin_unlock(lock);
+	local_interrupt_enable();
+}
+
 static inline void __raw_spin_unlock_bh(raw_spinlock_t *lock)
 	__releases(lock)
 {
diff --git a/include/linux/spinlock_api_up.h b/include/linux/spinlock_api_up.h
index a9d5c7c66e03..e0dea85ac45d 100644
--- a/include/linux/spinlock_api_up.h
+++ b/include/linux/spinlock_api_up.h
@@ -42,6 +42,9 @@
 #define __LOCK_IRQSAVE(lock, flags, ...) \
   do { local_irq_save(flags); __LOCK(lock, ##__VA_ARGS__); } while (0)
 
+#define __LOCK_IRQ_DISABLE(lock, ...) \
+  do { local_interrupt_disable(); __LOCK(lock, ##__VA_ARGS__); } while (0)
+
 #define ___UNLOCK_(lock) \
   do { __release(lock); (void)(lock); } while (0)
@@ -61,6 +64,10 @@
 #define __UNLOCK_IRQRESTORE(lock, flags, ...) \
   do { local_irq_restore(flags); __UNLOCK(lock, ##__VA_ARGS__); } while (0)
 
+#define __UNLOCK_IRQ_ENABLE(lock, ...) \
+  do { __UNLOCK(lock, ##__VA_ARGS__); local_interrupt_enable(); } while (0)
+
 #define _raw_spin_lock(lock)			__LOCK(lock)
 #define _raw_spin_lock_nested(lock, subclass)	__LOCK(lock)
 #define _raw_read_lock(lock)			__LOCK(lock, shared)
@@ -70,6 +77,7 @@
 #define _raw_read_lock_bh(lock)			__LOCK_BH(lock, shared)
 #define _raw_write_lock_bh(lock)		__LOCK_BH(lock)
 #define _raw_spin_lock_irq(lock)		__LOCK_IRQ(lock)
+#define _raw_spin_lock_irq_disable(lock)	__LOCK_IRQ_DISABLE(lock)
 #define _raw_read_lock_irq(lock)		__LOCK_IRQ(lock, shared)
 #define _raw_write_lock_irq(lock)		__LOCK_IRQ(lock)
 #define _raw_spin_lock_irqsave(lock, flags)	__LOCK_IRQSAVE(lock, flags)
@@ -132,6 +140,7 @@ static __always_inline int _raw_write_trylock_irqsave(rwlock_t *lock, unsigned l
 #define _raw_write_unlock_bh(lock)		__UNLOCK_BH(lock)
 #define _raw_read_unlock_bh(lock)		__UNLOCK_BH(lock, shared)
 #define _raw_spin_unlock_irq(lock)		__UNLOCK_IRQ(lock)
+#define _raw_spin_unlock_irq_enable(lock)	__UNLOCK_IRQ_ENABLE(lock)
 #define _raw_read_unlock_irq(lock)		__UNLOCK_IRQ(lock, shared)
 #define _raw_write_unlock_irq(lock)		__UNLOCK_IRQ(lock)
 #define _raw_spin_unlock_irqrestore(lock, flags) \
diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h
index 373618a4243c..c5a8f3f31a2d 100644
--- a/include/linux/spinlock_rt.h
+++ b/include/linux/spinlock_rt.h
@@ -96,6 +96,11 @@ static __always_inline void spin_lock_irq(spinlock_t *lock)
 	rt_spin_lock(lock);
 }
 
+static __always_inline void spin_lock_irq_disable(spinlock_t *lock)
+{
+	rt_spin_lock(lock);
+}
+
 #define spin_lock_irqsave(lock, flags)			\
 	do {						\
 		typecheck(unsigned long, flags);	\
@@ -122,6 +127,11 @@ static __always_inline void spin_unlock_irq(spinlock_t *lock)
 	rt_spin_unlock(lock);
 }
 
+static __always_inline void spin_unlock_irq_enable(spinlock_t *lock)
+{
+	rt_spin_unlock(lock);
+}
+
 static __always_inline void spin_unlock_irqrestore(spinlock_t *lock,
 						   unsigned long flags)
 	__releases(lock)
@@ -131,6 +141,11 @@ static __always_inline void spin_unlock_irqrestore(spinlock_t *lock,
 
 #define spin_trylock(lock)		rt_spin_trylock(lock)
 
+static __always_inline int spin_trylock_irq_disable(spinlock_t *lock)
+{
+	return rt_spin_trylock(lock);
+}
+
 #define spin_trylock_bh(lock)		rt_spin_trylock_bh(lock)
 #define spin_trylock_irq(lock)		rt_spin_trylock(lock)
diff --git a/kernel/locking/spinlock.c b/kernel/locking/spinlock.c
index b42d293da38b..764641f6ec57 100644
--- a/kernel/locking/spinlock.c
+++ b/kernel/locking/spinlock.c
@@ -129,6 +129,19 @@ static void __lockfunc __raw_##op##_lock_bh(locktype##_t *lock)		\
  */
 BUILD_LOCK_OPS(spin, raw_spinlock, __acquires);
 
+/* No rwlock_t variants for now, so just build this function by hand */
+static void __lockfunc __raw_spin_lock_irq_disable(raw_spinlock_t *lock)
+{
+	for (;;) {
+		local_interrupt_disable();
+		if (likely(do_raw_spin_trylock(lock)))
+			break;
+		local_interrupt_enable();
+
+		arch_spin_relax(&lock->raw_lock);
+	}
+}
+
 #ifndef CONFIG_PREEMPT_RT
 BUILD_LOCK_OPS(read, rwlock, __acquires_shared);
 BUILD_LOCK_OPS(write, rwlock, __acquires);
@@ -176,6 +189,14 @@ noinline void __lockfunc _raw_spin_lock_irq(raw_spinlock_t *lock)
 EXPORT_SYMBOL(_raw_spin_lock_irq);
 #endif
 
+#ifndef CONFIG_INLINE_SPIN_LOCK_IRQ
+noinline void __lockfunc _raw_spin_lock_irq_disable(raw_spinlock_t *lock)
+{
+	__raw_spin_lock_irq_disable(lock);
+}
+EXPORT_SYMBOL_GPL(_raw_spin_lock_irq_disable);
+#endif
+
 #ifndef CONFIG_INLINE_SPIN_LOCK_BH
 noinline void __lockfunc _raw_spin_lock_bh(raw_spinlock_t *lock)
 {
@@ -208,6 +229,14 @@ noinline void __lockfunc _raw_spin_unlock_irq(raw_spinlock_t *lock)
 EXPORT_SYMBOL(_raw_spin_unlock_irq);
 #endif
 
+#ifndef CONFIG_INLINE_SPIN_UNLOCK_IRQ
+noinline void __lockfunc _raw_spin_unlock_irq_enable(raw_spinlock_t *lock)
+{
+	__raw_spin_unlock_irq_enable(lock);
+}
+EXPORT_SYMBOL_GPL(_raw_spin_unlock_irq_enable);
+#endif
+
 #ifndef CONFIG_INLINE_SPIN_UNLOCK_BH
 noinline void __lockfunc _raw_spin_unlock_bh(raw_spinlock_t *lock)
 {
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 10af5ed859e7..6fa83aabae47 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -88,6 +88,9 @@ EXPORT_PER_CPU_SYMBOL_GPL(hardirqs_enabled);
 EXPORT_PER_CPU_SYMBOL_GPL(hardirq_context);
 #endif
 
+DEFINE_PER_CPU(struct interrupt_disable_state, local_interrupt_disable_state);
+EXPORT_PER_CPU_SYMBOL_GPL(local_interrupt_disable_state);
+
 DEFINE_PER_CPU(unsigned int, nmi_nesting);
 
 /*
-- 
2.50.1 (Apple Git-155)