From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from out1-smtp.messagingengine.com ([66.111.4.25]:55821 "EHLO
        out1-smtp.messagingengine.com" rhost-flags-OK-OK-OK-OK)
        by vger.kernel.org with ESMTP id S1752124AbdGCIxm (ORCPT );
        Mon, 3 Jul 2017 04:53:42 -0400
Date: Mon, 3 Jul 2017 10:53:42 +0200
From: Greg KH
To: Mark Rutland
Cc: stable@vger.kernel.org
Subject: Re: [PATCH v4.9.y] mm: numa: avoid waiting on freed migrated pages
Message-ID: <20170703085342.GG11757@kroah.com>
References: <1497863849-21277-1-git-send-email-mark.rutland@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1497863849-21277-1-git-send-email-mark.rutland@arm.com>
Sender: stable-owner@vger.kernel.org
List-ID:

On Mon, Jun 19, 2017 at 10:17:29AM +0100, Mark Rutland wrote:
> commit 3c226c637b69104f6b9f1c6ec5b08d7b741b3229 upstream.
>
> In do_huge_pmd_numa_page(), we attempt to handle a migrating thp pmd by
> waiting until the pmd is unlocked before we return and retry. However, we
> can race with migrate_misplaced_transhuge_page():
>
> // do_huge_pmd_numa_page                  // migrate_misplaced_transhuge_page()
> // Holds 0 refs on page                   // Holds 2 refs on page
>
> vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
> /* ... */
> if (pmd_trans_migrating(*vmf->pmd)) {
>         page = pmd_page(*vmf->pmd);
>         spin_unlock(vmf->ptl);
>                                           ptl = pmd_lock(mm, pmd);
>                                           if (page_count(page) != 2) {
>                                                   /* roll back */
>                                           }
>                                           /* ... */
>                                           mlock_migrate_page(new_page, page);
>                                           /* ... */
>                                           spin_unlock(ptl);
>                                           put_page(page);
>                                           put_page(page); // page freed here
>         wait_on_page_locked(page);
>         goto out;
> }
>
> This can result in the freed page having its waiters flag set
> unexpectedly, which trips the PAGE_FLAGS_CHECK_AT_PREP checks in the page
> alloc/free functions. This has been observed on arm64 KVM guests.
>
> We can avoid this by having do_huge_pmd_numa_page() take a reference on
> the page before dropping the pmd lock, mirroring what we do in
> __migration_entry_wait().
>
> When we hit the race, migrate_misplaced_transhuge_page() will see the
> reference and abort the migration, as it may do today in other cases.
>
> Fixes: b8916634b77bffb2 ("mm: Prevent parallel splits during THP migration")
> Link: http://lkml.kernel.org/r/1497349722-6731-2-git-send-email-will.deacon@arm.com
> Signed-off-by: Mark Rutland
> Signed-off-by: Will Deacon
> Acked-by: Steve Capper
> Acked-by: Kirill A. Shutemov
> Acked-by: Vlastimil Babka
> Cc: Mel Gorman
> Cc:
> Signed-off-by: Andrew Morton
> ---
>  mm/huge_memory.c | 6 ++++++
>  1 file changed, 6 insertions(+)

Thanks for this and the other backports, all now applied.

greg k-h
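
[Editor's sketch, since the diff body is not quoted above: the fix described in the commit message is to pin the page before dropping the pmd lock, and drop the reference again after the wait. This follows the mainline vmf naming used in the quoted race diagram and assumes the function's existing out/out_unlock labels; the v4.9 backport spells these slightly differently, and the authoritative change is the 6-line diff in upstream commit 3c226c637b69.]

	/* Inside do_huge_pmd_numa_page(), with vmf->ptl held: */
	if (unlikely(pmd_trans_migrating(*vmf->pmd))) {
		page = pmd_page(*vmf->pmd);
		/*
		 * Pin the page so a concurrent
		 * migrate_misplaced_transhuge_page() sees the extra
		 * reference and aborts rather than freeing it under us.
		 */
		if (!get_page_unless_zero(page))
			goto out_unlock;
		spin_unlock(vmf->ptl);
		wait_on_page_locked(page);
		/* Drop the reference taken above once the wait is over. */
		put_page(page);
		goto out;
	}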