From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: "Aneesh Kumar K.V", Andrew Morton, Dan Williams, Andrea Arcangeli,
    Linus Torvalds, Sasha Levin, linux-fsdevel@vger.kernel.org,
    linux-nvdimm@lists.01.org, linux-mm@kvack.org
Subject: [PATCH AUTOSEL 5.1 020/186] mm: page_mkclean vs MADV_DONTNEED race
Date: Sat, 1 Jun 2019 09:13:56 -0400
Message-Id: <20190601131653.24205-20-sashal@kernel.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20190601131653.24205-1-sashal@kernel.org>
References: <20190601131653.24205-1-sashal@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: "Aneesh Kumar K.V"

[ Upstream commit 024eee0e83f0df52317be607ca521e0fc572aa07 ]

MADV_DONTNEED is handled with mmap_sem taken in read mode. We call
page_mkclean without holding mmap_sem.

MADV_DONTNEED implies that pages in the region are unmapped and
subsequent access to the pages in that range is handled as a new page
fault. This implies that if we don't have parallel access to the region
when MADV_DONTNEED is run, we expect that range to be unallocated.

With respect to page_mkclean(), we need to make sure that we don't
break the MADV_DONTNEED semantics. MADV_DONTNEED checks for pmd_none
without holding pmd_lock, which means it skips the pmd if we
temporarily mark the pmd none. Avoid doing that while marking the page
clean.
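
To make the userspace contract concrete, here is a small illustrative
program (an editorial sketch, not part of the upstream change). It only
demonstrates the MADV_DONTNEED semantics that the fix preserves; it does
not reproduce the race itself, which needs a page_mkclean() call racing
against the madvise() on a file-backed or shared mapping.

/*
 * Illustrative only: after MADV_DONTNEED on a private anonymous
 * mapping, later reads must observe zero-filled pages, never the
 * old contents.  Build with: cc -O2 dontneed.c -o dontneed
 */
#include <assert.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 2UL * 1024 * 1024;		/* large enough to span a THP */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	assert(p != MAP_FAILED);
	memset(p, 0xaa, len);				/* dirty the range   */
	assert(madvise(p, len, MADV_DONTNEED) == 0);	/* drop the mappings */

	/*
	 * The range is now unmapped: every access below takes a fresh
	 * page fault and must see zero, not 0xaa.
	 */
	for (size_t i = 0; i < len; i += 4096)
		assert(p[i] == 0);

	munmap(p, len);
	return 0;
}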
Keep the sequence the same for dax too, even though we don't support
MADV_DONTNEED for dax mappings.

The bug was noticed by code review and I didn't observe any failures in
testing.

This is similar to

    commit 58ceeb6bec86d9140f9d91d71a710e963523d063
    Author: Kirill A. Shutemov
    Date:   Thu Apr 13 14:56:26 2017 -0700

        thp: fix MADV_DONTNEED vs. MADV_FREE race

    commit ced108037c2aa542b3ed8b7afd1576064ad1362a
    Author: Kirill A. Shutemov
    Date:   Thu Apr 13 14:56:20 2017 -0700

        thp: fix MADV_DONTNEED vs. numa balancing race

Link: http://lkml.kernel.org/r/20190321040610.14226-1-aneesh.kumar@linux.ibm.com
Signed-off-by: Aneesh Kumar K.V
Reviewed-by: Andrew Morton
Cc: Dan Williams
Cc: "Kirill A. Shutemov"
Cc: Andrea Arcangeli
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 fs/dax.c  | 2 +-
 mm/rmap.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 83009875308c5..f74386293632d 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -814,7 +814,7 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
 				goto unlock_pmd;
 
 			flush_cache_page(vma, address, pfn);
-			pmd = pmdp_huge_clear_flush(vma, address, pmdp);
+			pmd = pmdp_invalidate(vma, address, pmdp);
 			pmd = pmd_wrprotect(pmd);
 			pmd = pmd_mkclean(pmd);
 			set_pmd_at(vma->vm_mm, address, pmdp, pmd);
diff --git a/mm/rmap.c b/mm/rmap.c
index b30c7c71d1d92..76c8dfd3ae1cd 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -928,7 +928,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 				continue;
 
 			flush_cache_page(vma, address, page_to_pfn(page));
-			entry = pmdp_huge_clear_flush(vma, address, pmd);
+			entry = pmdp_invalidate(vma, address, pmd);
 			entry = pmd_wrprotect(entry);
 			entry = pmd_mkclean(entry);
 			set_pmd_at(vma->vm_mm, address, pmd, entry);
-- 
2.20.1
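
For reference, the effect of the hunks above, restated with editorial
comments (a sketch of the kernel-internal sequence, not standalone
buildable code):

/* Old sequence in page_mkclean_one() / dax_entry_mkclean(): the pmd is
 * transiently cleared, so the unlocked pmd_none() check on the
 * MADV_DONTNEED side can observe "none", skip the range, and then have
 * the pmd re-established behind its back by set_pmd_at(). */
entry = pmdp_huge_clear_flush(vma, address, pmd);	/* *pmd is none here */
entry = pmd_wrprotect(entry);
entry = pmd_mkclean(entry);
set_pmd_at(vma->vm_mm, address, pmd, entry);		/* pmd comes back    */

/* New sequence: pmdp_invalidate() keeps a non-none (temporarily
 * invalid) entry in place, so pmd_none() never observes a cleared pmd
 * and the MADV_DONTNEED semantics are preserved. */
entry = pmdp_invalidate(vma, address, pmd);
entry = pmd_wrprotect(entry);
entry = pmd_mkclean(entry);
set_pmd_at(vma->vm_mm, address, pmd, entry);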