From: Martin Schwidefsky <schwidefsky@de.ibm.com>
To: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: "Joel Fernandes (Google)" <joel@joelfernandes.org>,
linux-kernel@vger.kernel.org, kernel-team@android.com,
minchan@kernel.org, pantin@google.com, hughd@google.com,
lokeshgidra@google.com, dancol@google.com, mhocko@kernel.org,
kirill@shutemov.name, akpm@linux-foundation.org,
Andrey Ryabinin <aryabinin@virtuozzo.com>,
Andy Lutomirski <luto@kernel.org>, Borislav Petkov <bp@alien8.de>,
Catalin Marinas <catalin.marinas@arm.com>,
Chris Zankel <chris@zankel.net>,
Dave Hansen <dave.hansen@linux.intel.com>,
"David S. Miller" <davem@davemloft.net>,
elfring@users.sourceforge.net, Fenghua Yu <fenghua.yu@intel.com>,
Geert Uytterhoeven <geert@linux-m68k.org>,
Guan Xuetao <gxt@pku.edu.cn>, Helge Deller <deller@gmx.de>,
Ingo Molnar <mingo@redhat.com>,
"James E.J. Bottomley" <jejb@parisc-linux.org>,
Jeff Dike <jdike@addtoit.com>, Jonas Bonn <jonas@southpole.se>,
Julia Lawall <Julia.Lawall@lip6.fr>,
kasan-dev@googlegroups.com, kvmarm@lists.cs.columbia.edu,
Ley Foon Tan <lftan@altera.com>,
linux-alpha@vger.kernel.org,
linux-arm-kernel@lists.infradead.org,
linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org,
linux-m68k@lists.linux-m68k.org, linux-mips@linux-mips.org,
linux-mm@kvack.org, linux-parisc@vger.kernel.org,
linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
linux-snps-arc@lists.infradead.org, linux-um@lists.infradead.org,
linux-xtensa@linux-xtensa.org, Max Filippov <jcmvbkbc@gmail.com>,
nios2-dev@lists.rocketboards.org, openrisc@lists.librecores.org,
Peter Zijlstra <peterz@infradead.org>,
Richard Weinberger <richard@nod.at>,
Rich Felker <dalias@libc.org>, Sam Creasey <sammy@sammy.net>,
sparclinux@vger.kernel.org, Stafford Horne <shorne@gmail.com>,
Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>,
Thomas Gleixner <tglx@linutronix.de>,
Tony Luck <tony.luck@intel.com>,
Will Deacon <will.deacon@arm.com>,
"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)"
<x86@kernel.org>, Yoshinori Sato <ysato@users.sourceforge.jp>
Subject: Re: [PATCH v2 2/2] mm: speed up mremap by 500x on large regions
Date: Mon, 15 Oct 2018 10:18:14 +0200
Message-ID: <20181015101814.306d257c@mschwideX1>
In-Reply-To: <6580a62b-69c6-f2e3-767c-bd36b977bea2@de.ibm.com>
On Mon, 15 Oct 2018 09:10:53 +0200
Christian Borntraeger <borntraeger@de.ibm.com> wrote:
> On 10/12/2018 03:37 AM, Joel Fernandes (Google) wrote:
> > Android needs to mremap large regions of memory during memory
> > management related operations. The mremap system call can be really
> > slow if THP is not enabled. The bottleneck is move_page_tables, which
> > copies one pte at a time and can be really slow across a large map.
> > Turning on THP may not be a viable option, and is not for us. This
> > patch speeds up the performance for non-THP systems by copying at the
> > PMD level when possible.
> >
> > The speedup is three orders of magnitude. On a 1GB mremap, the mremap
> > completion time drops from 160-250 milliseconds to 380-400 microseconds.
> >
> > Before:
> > Total mremap time for 1GB data: 242321014 nanoseconds.
> > Total mremap time for 1GB data: 196842467 nanoseconds.
> > Total mremap time for 1GB data: 167051162 nanoseconds.
> >
> > After:
> > Total mremap time for 1GB data: 385781 nanoseconds.
> > Total mremap time for 1GB data: 388959 nanoseconds.
> > Total mremap time for 1GB data: 402813 nanoseconds.
> >
> > In case THP is enabled, the optimization is skipped. I also flush the
> > TLB every time we do this optimization since I couldn't find a way to
> > determine if the low-level PTEs are dirty. The cost of doing so is
> > small compared to the improvement, on both x86-64 and arm64.
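For reference, numbers in this ballpark can be measured with a small
userspace benchmark along the following lines (a sketch, not the
original test program; the PMD alignment handling is an assumption,
added so the pmd-level fast path can actually trigger):

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

#define SIZE	(1UL << 30)	/* 1GB */
#define ALIGN	(1UL << 21)	/* 2MB, so both addresses are PMD aligned */

/* Reserve an address range, rounded up to a PMD boundary. */
static char *reserve_aligned(void)
{
	char *p = mmap(NULL, SIZE + ALIGN, PROT_NONE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED)
		return NULL;
	return (char *)(((uintptr_t)p + ALIGN - 1) & ~(ALIGN - 1));
}

int main(void)
{
	struct timespec t0, t1;
	char *old = reserve_aligned();
	char *dst = reserve_aligned();
	void *newp;

	if (!old || !dst || mprotect(old, SIZE, PROT_READ | PROT_WRITE))
		return 1;
	/* Touch every page so the pte tables are actually populated. */
	for (unsigned long i = 0; i < SIZE; i += 4096)
		old[i] = 1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	newp = mremap(old, SIZE, SIZE, MREMAP_MAYMOVE | MREMAP_FIXED, dst);
	clock_gettime(CLOCK_MONOTONIC, &t1);
	if (newp == MAP_FAILED)
		return 1;

	printf("Total mremap time for 1GB data: %ld nanoseconds.\n",
	       (t1.tv_sec - t0.tv_sec) * 1000000000L +
	       (t1.tv_nsec - t0.tv_nsec));
	return 0;
}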
> >
> > Cc: minchan@kernel.org
> > Cc: pantin@google.com
> > Cc: hughd@google.com
> > Cc: lokeshgidra@google.com
> > Cc: dancol@google.com
> > Cc: mhocko@kernel.org
> > Cc: kirill@shutemov.name
> > Cc: akpm@linux-foundation.org
> > Signed-off-by: Joel Fernandes (Google) <joel@joelfernandes.org>
> > ---
> > mm/mremap.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 62 insertions(+)
> >
> > diff --git a/mm/mremap.c b/mm/mremap.c
> > index 9e68a02a52b1..d82c485822ef 100644
> > --- a/mm/mremap.c
> > +++ b/mm/mremap.c
> > @@ -191,6 +191,54 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
> > drop_rmap_locks(vma);
> > }
> >
> > +static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
> > + unsigned long new_addr, unsigned long old_end,
> > + pmd_t *old_pmd, pmd_t *new_pmd, bool *need_flush)
> > +{
> > + spinlock_t *old_ptl, *new_ptl;
> > + struct mm_struct *mm = vma->vm_mm;
> > +
> > + if ((old_addr & ~PMD_MASK) || (new_addr & ~PMD_MASK)
> > + || old_end - old_addr < PMD_SIZE)
> > + return false;
> > +
> > + /*
> > + * The destination pmd shouldn't be established; free_pgtables()
> > + * should have released it.
> > + */
> > + if (WARN_ON(!pmd_none(*new_pmd)))
> > + return false;
> > +
> > + /*
> > + * We don't have to worry about the ordering of src and dst
> > + * ptlocks because exclusive mmap_sem prevents deadlock.
> > + */
> > + old_ptl = pmd_lock(vma->vm_mm, old_pmd);
> > + if (old_ptl) {
> > + pmd_t pmd;
> > +
> > + new_ptl = pmd_lockptr(mm, new_pmd);
> > + if (new_ptl != old_ptl)
> > + spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
> > +
> > + /* Clear the pmd */
> > + pmd = *old_pmd;
> > + pmd_clear(old_pmd);
>
> Adding Martin Schwidefsky.
> Is this mapping maybe still in use on other CPUs? If yes, I think for
> s390 we need to flush here as well (in other words, we might need to
> introduce pmd_clear_flush). On s390 you have to use instructions like
> CRDTE, IPTE or IDTE to modify page table entries that are still in
> use. Otherwise you can get a delayed access exception which is - in
> contrast to page faults - not recoverable.
Just clearing an active pmd would be broken for s390. We need the equivalent
of the ptep_get_and_clear() function for pmds. For s390 this function would
look like this:
static inline pmd_t pmdp_get_and_clear(struct mm_struct *mm,
				       unsigned long addr, pmd_t *pmdp)
{
	return pmdp_xchg_lazy(mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_INVALID));
}
Just like pmdp_huge_get_and_clear() in fact.
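The generic fallback for pmdp_huge_get_and_clear() is a plain
read-and-clear; a pmd-level helper for this path could follow the same
shape (a sketch modeled on that asm-generic fallback; there is no
generic pmdp_get_and_clear under this name today):

#ifndef __HAVE_ARCH_PMDP_GET_AND_CLEAR
static inline pmd_t pmdp_get_and_clear(struct mm_struct *mm,
				       unsigned long addr, pmd_t *pmdp)
{
	pmd_t pmd = *pmdp;

	/*
	 * Plain read-and-clear only works on architectures that can
	 * clear a live entry without flushing; s390 would override
	 * this with the pmdp_xchg_lazy() variant above.
	 */
	pmd_clear(pmdp);
	return pmd;
}
#endif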
>
>
>
> > +
> > + VM_BUG_ON(!pmd_none(*new_pmd));
> > +
> > + /* Set the new pmd */
> > + set_pmd_at(mm, new_addr, new_pmd, pmd);
> > + if (new_ptl != old_ptl)
> > + spin_unlock(new_ptl);
> > + spin_unlock(old_ptl);
> > +
> > + *need_flush = true;
> > + return true;
> > + }
> > + return false;
> > +}
> > +
So the idea is to move the pmd entry to the new location, dragging
the whole pte table along so that it maps a different virtual address
range. I wonder if that is safe with regard to get_user_pages_fast().
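For context, the fast path walks the page tables without taking any
locks, reading the pmd and then dereferencing the pte table it points
to. A simplified sketch of that pattern (not the actual gup code,
which differs in detail and by kernel version):

static int gup_pte_range_sketch(pmd_t *pmdp, unsigned long addr)
{
	pmd_t pmd = READ_ONCE(*pmdp);	/* no page table lock held */
	pte_t *ptep, pte;

	if (pmd_none(pmd))
		return 0;		/* fall back to the slow path */
	ptep = pte_offset_map(&pmd, addr);
	pte = READ_ONCE(*ptep);
	/*
	 * If mremap() has meanwhile moved this pte table to serve a
	 * different virtual range, the pte read here no longer
	 * corresponds to addr.
	 */
	return pte_present(pte);
}

On x86, for instance, the fast path disables interrupts, which guards
against the pte table being freed out from under it, but not against
the table being reinstalled at a different address by a pmd-level move.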
> > unsigned long move_page_tables(struct vm_area_struct *vma,
> > unsigned long old_addr, struct vm_area_struct *new_vma,
> > unsigned long new_addr, unsigned long len,
> > @@ -239,7 +287,21 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> > split_huge_pmd(vma, old_pmd, old_addr);
> > if (pmd_trans_unstable(old_pmd))
> > continue;
> > + } else if (extent == PMD_SIZE) {
> > + bool moved;
> > +
> > + /* See comment in move_ptes() */
> > + if (need_rmap_locks)
> > + take_rmap_locks(vma);
> > + moved = move_normal_pmd(vma, old_addr, new_addr,
> > + old_end, old_pmd, new_pmd,
> > + &need_flush);
> > + if (need_rmap_locks)
> > + drop_rmap_locks(vma);
> > + if (moved)
> > + continue;
> > }
> > +
> > if (pte_alloc(new_vma->vm_mm, new_pmd))
> > break;
> > next = (new_addr + PMD_SIZE) & PMD_MASK;
> >
--
blue skies,
Martin.
"Reality continues to ruin my life." - Calvin.