From mboxrd@z Thu Jan 1 00:00:00 1970
From: Peter Zijlstra
Subject: Re: [PATCH 06/17] arm: mmu_gather rework
Date: Mon, 28 Feb 2011 15:18:47 +0100
Message-ID: <1298902727.2428.10867.camel@twins>
References: <20110217162327.434629380@chello.nl>
	 <20110217163235.106239192@chello.nl>
	 <1298565253.2428.288.camel@twins>
	 <1298657083.2428.2483.camel@twins>
	 <20110225215123.GA10026@flint.arm.linux.org.uk>
	 <1298893487.2428.10537.camel@twins>
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
Return-path:
In-Reply-To: <1298893487.2428.10537.camel@twins>
Sender: owner-linux-mm@kvack.org
To: Russell King
Cc: Andrea Arcangeli, Avi Kivity, Thomas Gleixner, Rik van Riel,
	Ingo Molnar, akpm@linux-foundation.org, Linus Torvalds,
	linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-mm@kvack.org, Benjamin Herrenschmidt, David Miller,
	Hugh Dickins, Mel Gorman, Nick Piggin, Paul McKenney,
	Yanmin Zhang, "Luck, Tony", Paul Mundt, Chris Metcalf
List-Id: linux-arch.vger.kernel.org

On Mon, 2011-02-28 at 12:44 +0100, Peter Zijlstra wrote:
>   unmap_region()
>     tlb_gather_mmu()
>     unmap_vmas()
>       for (; vma; vma = vma->vm_next)
>         unmap_page_range()
>           tlb_start_vma() -> flush cache range

So why is this correct? Can't we race with a concurrent access to the
memory region (munmap() vs other-thread access race)? While
unmap_region() callers will have removed the vma from the tree, so that
faults will not be satisfied, TLB entries might still be present and
allow us to access the memory and thereby reload it into the cache.
>           zap_*_range()
>             ptep_get_and_clear_full() -> batch/track external tlbs
>             tlb_remove_tlb_entry() -> batch/track external tlbs
>             tlb_remove_page() -> track range/batch page
>           tlb_end_vma() -> flush tlb range
>
>   [ for architectures that have hardware page table walkers
>     concurrent faults can still load the page tables ]
>
>     free_pgtables()
>       while (vma)
>         unlink_*_vma()
>         free_*_range()
>           *_free_tlb()
>     tlb_finish_mmu()
>
>   free vmas