From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: FAILED: patch "[PATCH] mm: thp: fix flags for pmd migration when split" failed to apply to 4.14-stable tree
To: peterx@redhat.com, aarcange@redhat.com, akpm@linux-foundation.org,
    aneesh.kumar@linux.vnet.ibm.com, dave.jiang@intel.com, jrdr.linux@gmail.com,
    khlebnikov@yandex-team.ru, kirill.shutemov@linux.intel.com, mhocko@suse.com,
    stable@vger.kernel.org, torvalds@linux-foundation.org,
    william.kucharski@oracle.com, willy@infradead.org, zi.yan@cs.rutgers.edu
Cc: 
From: 
Date: Fri, 28 Dec 2018 11:45:19 +0100
Message-ID: <154599391933141@kroah.com>
List-ID: 
X-Mailing-List: stable@vger.kernel.org

The patch below does not apply to the 4.14-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id, to .

thanks,

greg k-h

------------------ original commit in Linus's tree ------------------

From 2e83ee1d8694a61d0d95a5b694f2e61e8dde8627 Mon Sep 17 00:00:00 2001
From: Peter Xu 
Date: Fri, 21 Dec 2018 14:30:50 -0800
Subject: [PATCH] mm: thp: fix flags for pmd migration when split

When splitting a huge migrating PMD, we'll transfer all the existing
PMD bits and apply them again onto the small PTEs.  However, we are
fetching the bits unconditionally via pmd_soft_dirty(), pmd_write() or
pmd_young(), while they don't make sense at all when the pmd is a
migration entry.  Fix them up.  While at it, drop the ifdef as it is
no longer needed.

Note that if my understanding of the problem is correct, then without
this patch there is a chance of losing some of the dirty bits in
migrating pmd pages (on x86_64 we're fetching bit 11, which is part of
the swap offset, instead of bit 2) and it could potentially corrupt
the memory of a userspace program which depends on the dirty bit.

Link: http://lkml.kernel.org/r/20181213051510.20306-1-peterx@redhat.com
Signed-off-by: Peter Xu 
Reviewed-by: Konstantin Khlebnikov 
Reviewed-by: William Kucharski 
Acked-by: Kirill A. Shutemov 
Cc: Andrea Arcangeli 
Cc: Matthew Wilcox 
Cc: Michal Hocko 
Cc: Dave Jiang 
Cc: "Aneesh Kumar K.V" 
Cc: Souptick Joarder 
Cc: Konstantin Khlebnikov 
Cc: Zi Yan 
Cc:  [4.14+]
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5da55b38b1b7..e84a10b0d310 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2144,23 +2144,25 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	 */
 	old_pmd = pmdp_invalidate(vma, haddr, pmd);
 
-#ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
 	pmd_migration = is_pmd_migration_entry(old_pmd);
-	if (pmd_migration) {
+	if (unlikely(pmd_migration)) {
 		swp_entry_t entry;
 
 		entry = pmd_to_swp_entry(old_pmd);
 		page = pfn_to_page(swp_offset(entry));
-	} else
-#endif
+		write = is_write_migration_entry(entry);
+		young = false;
+		soft_dirty = pmd_swp_soft_dirty(old_pmd);
+	} else {
 		page = pmd_page(old_pmd);
+		if (pmd_dirty(old_pmd))
+			SetPageDirty(page);
+		write = pmd_write(old_pmd);
+		young = pmd_young(old_pmd);
+		soft_dirty = pmd_soft_dirty(old_pmd);
+	}
 	VM_BUG_ON_PAGE(!page_count(page), page);
 	page_ref_add(page, HPAGE_PMD_NR - 1);
-	if (pmd_dirty(old_pmd))
-		SetPageDirty(page);
-	write = pmd_write(old_pmd);
-	young = pmd_young(old_pmd);
-	soft_dirty = pmd_soft_dirty(old_pmd);
 
 	/*
 	 * Withdraw the table only after we mark the pmd entry invalid.