From mboxrd@z Thu Jan 1 00:00:00 1970
From: Martin Schwidefsky
Subject: Re: [PATCH v2 2/2] mm: speed up mremap by 500x on large regions
Date: Mon, 15 Oct 2018 10:18:14 +0200
Message-ID: <20181015101814.306d257c@mschwideX1>
References: <20181012013756.11285-1-joel@joelfernandes.org>
 <20181012013756.11285-2-joel@joelfernandes.org>
 <6580a62b-69c6-f2e3-767c-bd36b977bea2@de.ibm.com>
In-Reply-To: <6580a62b-69c6-f2e3-767c-bd36b977bea2@de.ibm.com>
To: Christian Borntraeger
Cc: linux-mips@linux-mips.org, Rich Felker, linux-ia64@vger.kernel.org,
 linux-sh@vger.kernel.org, Peter Zijlstra, Catalin Marinas, Dave Hansen,
 Will Deacon, mhocko@kernel.org, linux-mm@kvack.org, lokeshgidra@google.com,
 "Joel Fernandes (Google)", linux-riscv@lists.infradead.org,
 elfring@users.sourceforge.net, Jonas Bonn, linux-s390@vger.kernel.org,
 dancol@google.com, Yoshinori Sato, sparclinux@vger.kernel.org,
 linux-xtensa@linux-xtensa.org, linux-hexagon@vger.kernel.org, Helge Deller,
 "maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)", hughd@google.com,
 "James E.J. Bottomley", kasan-dev@googlegroups.com, kvmarm@lists.cs.colu

On Mon, 15 Oct 2018 09:10:53 +0200
Christian Borntraeger wrote:

> On 10/12/2018 03:37 AM, Joel Fernandes (Google) wrote:
> > Android needs to mremap large regions of memory during memory management
> > related operations. The mremap system call can be really slow if THP is
> > not enabled. The bottleneck is move_page_tables, which is copying one
> > pte at a time, and can be really slow across a large map. Turning on THP
> > may not be a viable option, and is not for us. This patch speeds up the
> > performance for non-THP systems by copying at the PMD level when possible.
> >
> > The speed-up is nearly three orders of magnitude. On a 1GB mremap, the
> > mremap completion time drops from 160-250 milliseconds to 380-400
> > microseconds.
> >
> > Before:
> > Total mremap time for 1GB data: 242321014 nanoseconds.
> > Total mremap time for 1GB data: 196842467 nanoseconds.
> > Total mremap time for 1GB data: 167051162 nanoseconds.
> >
> > After:
> > Total mremap time for 1GB data: 385781 nanoseconds.
> > Total mremap time for 1GB data: 388959 nanoseconds.
> > Total mremap time for 1GB data: 402813 nanoseconds.
> >
> > In case THP is enabled, the optimization is skipped. I also flush the
> > TLB every time we do this optimization since I couldn't find a way to
> > determine if the low-level PTEs are dirty. It is seen that the cost of
> > doing so is not much compared to the improvement, on both x86-64 and
> > arm64.
> >
> > Cc: minchan@kernel.org
> > Cc: pantin@google.com
> > Cc: hughd@google.com
> > Cc: lokeshgidra@google.com
> > Cc: dancol@google.com
> > Cc: mhocko@kernel.org
> > Cc: kirill@shutemov.name
> > Cc: akpm@linux-foundation.org
> > Signed-off-by: Joel Fernandes (Google)
> > ---
> >  mm/mremap.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 62 insertions(+)
> >
> > diff --git a/mm/mremap.c b/mm/mremap.c
> > index 9e68a02a52b1..d82c485822ef 100644
> > --- a/mm/mremap.c
> > +++ b/mm/mremap.c
> > @@ -191,6 +191,54 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
> >  	drop_rmap_locks(vma);
> >  }
> >
> > +static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> > +		  unsigned long new_addr, unsigned long old_end,
> > +		  pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
> > +{
> > +	spinlock_t *old_ptl, *new_ptl;
> > +	struct mm_struct *mm = vma->vm_mm;
> > +
> > +	if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK)
> > +	    || old_end - old_addr < PMD_SIZE)
> > +		return false;
> > +
> > +	/*
> > +	 * The destination pmd shouldn't be established, free_pgtables()
> > +	 * should have released it.
> > +	 */
> > +	if (WARN_ON(!pmd_none(*new_pmd)))
> > +		return false;
> > +
> > +	/*
> > +	 * We don't have to worry about the ordering of src and dst
> > +	 * ptlocks because exclusive mmap_sem prevents deadlock.
> > +	 */
> > +	old_ptl = pmd_lock(vma->vm_mm, old_pmd);
> > +	if (old_ptl) {
> > +		pmd_t pmd;
> > +
> > +		new_ptl = pmd_lockptr(mm, new_pmd);
> > +		if (new_ptl != old_ptl)
> > +			spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
> > +
> > +		/* Clear the pmd */
> > +		pmd = *old_pmd;
> > +		pmd_clear(old_pmd);
>
> Adding Martin Schwidefsky.
> Is this mapping maybe still in use on other CPUs? If yes, I think for
> s390 we need to flush here as well (in other words we might need to
> introduce pmd_clear_flush). On s390 you have to use instructions like
> CRDTE, IPTE or IDTE to modify page table entries that are still in use.
> Otherwise you can get a delayed access exception which is - in contrast
> to page faults - not recoverable.

Just clearing an active pmd would be broken for s390. We need the
equivalent of the ptep_get_and_clear() function for pmds. For s390
this function would look like this:

static inline pmd_t pmdp_get_and_clear(struct mm_struct *mm,
                                       unsigned long addr, pmd_t *pmdp)
{
	return pmdp_xchg_lazy(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_INVALID));
}

Just like pmdp_huge_get_and_clear() in fact.
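To illustrate, the clear/set sequence in move_normal_pmd() would then
read something like the following (an untested sketch; it assumes each
architecture grows such a pmd-level pmdp_get_and_clear(), which does
not exist generically today):

		new_ptl = pmd_lockptr(mm, new_pmd);
		if (new_ptl != old_ptl)
			spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);

		/*
		 * Read and invalidate the old entry in a single step, so
		 * that a concurrent hardware walker never sees a live pmd
		 * that is about to be re-linked at a different address.
		 */
		pmd = pmdp_get_and_clear(mm, old_addr, old_pmd);

		VM_BUG_ON(!pmd_none(*new_pmd));
		set_pmd_at(mm, new_addr, new_pmd, pmd);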
> > +
> > +		VM_BUG_ON(!pmd_none(*new_pmd));
> > +
> > +		/* Set the new pmd */
> > +		set_pmd_at(mm, new_addr, new_pmd, pmd);
> > +		if (new_ptl != old_ptl)
> > +			spin_unlock(new_ptl);
> > +		spin_unlock(old_ptl);
> > +
> > +		*need_flush = true;
> > +		return true;
> > +	}
> > +	return false;
> > +}
> > +

So the idea is to move the pmd entry to the new location, dragging
the whole pte table to a new location with a different address.
I wonder if that is safe in regard to get_user_pages_fast().
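get_user_pages_fast() walks the page tables without taking mmap_sem or
the page table locks, so the worry is a window roughly like this (an
illustrative sketch of the concern, not an analysis from the thread):

	/*
	 *   CPU 0: gup_fast (lockless)     CPU 1: mremap
	 *   --------------------------     ----------------------------
	 *   pmd = READ_ONCE(*old_pmd);
	 *                                  pmd_clear(old_pmd);
	 *                                  set_pmd_at(mm, new_addr, ...);
	 *   walks the pte table reached
	 *   via pmd, although that table
	 *   is now linked under a
	 *   different address
	 */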
> >  unsigned long move_page_tables(struct vm_area_struct *vma,
> >  		unsigned long old_addr, struct vm_area_struct *new_vma,
> >  		unsigned long new_addr, unsigned long len,
> > @@ -239,7 +287,21 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> >  			split_huge_pmd(vma, old_pmd, old_addr);
> >  			if (pmd_trans_unstable(old_pmd))
> >  				continue;
> > +		} else if (extent == PMD_SIZE) {
> > +			bool moved;
> > +
> > +			/* See comment in move_ptes() */
> > +			if (need_rmap_locks)
> > +				take_rmap_locks(vma);
> > +			moved = move_normal_pmd(vma, old_addr, new_addr,
> > +					old_end, old_pmd, new_pmd,
> > +					&need_flush);
> > +			if (need_rmap_locks)
> > +				drop_rmap_locks(vma);
> > +			if (moved)
> > +				continue;
> >  		}
> > +
> >  		if (pte_alloc(new_vma->vm_mm, new_pmd))
> >  			break;
> >  		next = (new_addr + PMD_SIZE) & PMD_MASK;

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.
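For reference, the "Total mremap time for 1GB data" numbers quoted in
the changelog can be reproduced with a userspace harness along these
lines (a sketch, not code from the patch; the mremap flags, destination
handling and clock choice are assumptions):

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define SIZE (1UL << 30)	/* 1GB */

int main(void)
{
	struct timespec start, end;
	long long ns;
	void *src, *dst, *tgt;

	/* Populate 1GB of anonymous memory so every pte exists. */
	src = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (src == MAP_FAILED) {
		perror("mmap src");
		return 1;
	}
	memset(src, 1, SIZE);

	/*
	 * Reserve a destination; MREMAP_FIXED replaces it. Note the
	 * PMD-level fast path also needs both addresses PMD-aligned,
	 * which this simple sketch does not enforce.
	 */
	tgt = mmap(NULL, SIZE, PROT_NONE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (tgt == MAP_FAILED) {
		perror("mmap tgt");
		return 1;
	}

	/* Time only the move itself. */
	clock_gettime(CLOCK_MONOTONIC, &start);
	dst = mremap(src, SIZE, SIZE, MREMAP_MAYMOVE | MREMAP_FIXED, tgt);
	clock_gettime(CLOCK_MONOTONIC, &end);
	if (dst == MAP_FAILED) {
		perror("mremap");
		return 1;
	}

	ns = (end.tv_sec - start.tv_sec) * 1000000000LL +
	     (end.tv_nsec - start.tv_nsec);
	printf("Total mremap time for 1GB data: %lld nanoseconds.\n", ns);
	return 0;
}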