From mboxrd@z Thu Jan  1 00:00:00 1970
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Catalin Marinas, Will Deacon, Russell King
Subject: [ 049/100] ARM: 7658/1: mm: fix race updating mm->context.id on ASID rollover
Date: Tue, 12 Mar 2013 15:31:34 -0700
Message-Id: <20130312223128.263018588@linuxfoundation.org>
In-Reply-To: <20130312223122.884099393@linuxfoundation.org>
References: <20130312223122.884099393@linuxfoundation.org>
Sender: linux-kernel-owner@vger.kernel.org

3.8-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Will Deacon

commit 37f47e3d62533c931b04cb409f2eb299e6342331 upstream.

If a thread triggers an ASID rollover, other threads of the same
process must be made to wait until the mm->context.id for the shared
mm_struct has been updated to the new generation and the associated
book-keeping (e.g. TLB invalidation) has been performed.

However, there is a *tiny* window where both mm->context.id and the
relevant active_asids entry are updated to the new generation, but the
TLB flush has not yet been performed, which could allow another thread
to return to userspace with a dirty TLB, potentially leading to data
corruption. In reality this will never occur because one CPU would need
to perform a context-switch in the time it takes another to do a couple
of atomic test/set operations, but we should plug the race anyway.

This patch moves the active_asids update until after the potential TLB
flush on context-switch.

Reviewed-by: Catalin Marinas
Signed-off-by: Will Deacon
Signed-off-by: Russell King
Signed-off-by: Greg Kroah-Hartman

---
 arch/arm/mm/context.c |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
@@ -204,11 +204,11 @@ void check_and_switch_context(struct mm_
 	if ((mm->context.id ^ atomic64_read(&asid_generation)) >> ASID_BITS)
 		new_context(mm, cpu);
 
-	atomic64_set(&per_cpu(active_asids, cpu), mm->context.id);
-	cpumask_set_cpu(cpu, mm_cpumask(mm));
-
 	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
 		local_flush_tlb_all();
+
+	atomic64_set(&per_cpu(active_asids, cpu), mm->context.id);
+	cpumask_set_cpu(cpu, mm_cpumask(mm));
 	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
 
 switch_mm_fastpath:
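
For reviewers following the reasoning: below is the patched region of
check_and_switch_context() as it reads after this change, with annotations
explaining the reordering. The code lines are taken from the hunk above;
the comments are review annotations only and are not part of the patch.

	/* Rollover: move to a new generation if our ASID is stale. */
	if ((mm->context.id ^ atomic64_read(&asid_generation)) >> ASID_BITS)
		new_context(mm, cpu);

	/*
	 * Perform any pending TLB invalidation for this CPU *before*
	 * publishing the new ASID in active_asids.  With the old order,
	 * another thread of the same mm could observe the updated
	 * active_asids entry, conclude the rollover was complete, and
	 * return to userspace while stale TLB entries were still live.
	 */
	if (cpumask_test_and_clear_cpu(cpu, &tlb_flush_pending))
		local_flush_tlb_all();

	/* Only now advertise the new ASID on this CPU. */
	atomic64_set(&per_cpu(active_asids, cpu), mm->context.id);
	cpumask_set_cpu(cpu, mm_cpumask(mm));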