From: Mark Rutland <mark.rutland@arm.com>
Subject: [PATCH v3.16.y] percpu: make this_cpu_generic_read() atomic w.r.t. interrupts
Date: Wed, 18 Oct 2017 14:40:22 +0100
Message-ID: <20171018134022.25282-1-mark.rutland@arm.com>
To: stable@vger.kernel.org
Cc: Mark Rutland, Arnd Bergmann, Christoph Lameter, Peter Zijlstra,
    Pranith Kumar, Tejun Heo, Thomas Gleixner, linux-arch@vger.kernel.org

Commit e88d62cd4b2f0b1ae55e9008e79c2794b1fc914d upstream.

As raw_cpu_generic_read() is a plain read from a raw_cpu_ptr() address,
it's possible (albeit unlikely) that the compiler will split the access
across multiple instructions.

In this_cpu_generic_read() we disable preemption but not interrupts
before calling raw_cpu_generic_read(). Thus, an interrupt could be
taken in the middle of the split load instructions. If a
this_cpu_write() or RMW this_cpu_*() op is made to the same variable in
the interrupt handling path, this_cpu_read() will return a torn value.

For native word types, we can avoid tearing using READ_ONCE(), but this
won't work in all cases (e.g. 64-bit types on most 32-bit platforms).

This patch reworks this_cpu_generic_read() to use READ_ONCE() where
possible, otherwise falling back to disabling interrupts.

Signed-off-by: Mark Rutland
Cc: Arnd Bergmann
Cc: Christoph Lameter
Cc: Peter Zijlstra
Cc: Pranith Kumar
Cc: Tejun Heo
Cc: Thomas Gleixner
Cc: linux-arch@vger.kernel.org
Cc: stable@vger.kernel.org
Signed-off-by: Tejun Heo
[Mark: backport to v3.16.y]
Signed-off-by: Mark Rutland
---
 include/linux/percpu.h | 29 +++++++++++++++++++++++++----
 1 file changed, 25 insertions(+), 4 deletions(-)

diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index 8419053d0f2e..37ff713a7c7f 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -276,12 +276,33 @@ do {							\
  * used.
  */
 
-#define _this_cpu_generic_read(pcp)				\
-({	typeof(pcp) ret__;					\
+#define __this_cpu_generic_read_nopreempt(pcp)			\
+({								\
+	typeof(pcp) __ret;					\
 	preempt_disable();					\
-	ret__ = *this_cpu_ptr(&(pcp));				\
+	__ret = ACCESS_ONCE(*raw_cpu_ptr(&(pcp)));		\
 	preempt_enable();					\
-	ret__;							\
+	__ret;							\
+})
+
+#define __this_cpu_generic_read_noirq(pcp)			\
+({								\
+	typeof(pcp) __ret;					\
+	unsigned long __flags;					\
+	raw_local_irq_save(__flags);				\
+	__ret = *raw_cpu_ptr(&(pcp));				\
+	raw_local_irq_restore(__flags);				\
+	__ret;							\
+})
+
+#define _this_cpu_generic_read(pcp)				\
+({								\
+	typeof(pcp) __ret;					\
+	if (__native_word(pcp))					\
+		__ret = __this_cpu_generic_read_nopreempt(pcp);	\
+	else							\
+		__ret = __this_cpu_generic_read_noirq(pcp);	\
+	__ret;							\
 })
 
 #ifndef this_cpu_read
-- 
2.11.0
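
For illustration only (not part of the patch): a minimal sketch of the
kind of caller the interrupt-disabled fallback protects, assuming a
hypothetical 64-bit per-cpu counter on a 32-bit platform. The names
sample_bytes, sample_account() and sample_snapshot() are made up for
this example.

#include <linux/percpu.h>
#include <linux/types.h>

/* Hypothetical 64-bit per-cpu counter (illustration only). */
static DEFINE_PER_CPU(u64, sample_bytes);

/* Runs in interrupt context: RMW on the current CPU's counter. */
static void sample_account(u64 len)
{
	this_cpu_add(sample_bytes, len);
}

/* Runs in process context: reads the current CPU's counter. */
static u64 sample_snapshot(void)
{
	/*
	 * On a 32-bit platform u64 is not a native word, so the generic
	 * this_cpu_read() cannot rely on a single load. With this patch
	 * it takes the raw_local_irq_save() path, so sample_account()
	 * running from an interrupt cannot tear this read.
	 */
	return this_cpu_read(sample_bytes);
}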