From: Ralph Campbell
To:
Cc: Ralph Campbell, Jérôme Glisse,
Shutemov" , Mike Kravetz , Christoph Hellwig , Jason Gunthorpe , John Hubbard , , Andrew Morton Subject: [PATCH 3/3] mm/hmm: Fix bad subpage pointer in try_to_unmap_one Date: Fri, 19 Jul 2019 12:06:49 -0700 Message-ID: <20190719190649.30096-4-rcampbell@nvidia.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20190719190649.30096-1-rcampbell@nvidia.com> References: <20190719190649.30096-1-rcampbell@nvidia.com> MIME-Version: 1.0 X-NVConfidentiality: public Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1563563229; bh=U0IXy/zKKcejMP4DQKHZ57GJs0lTlzQ4MYtXzCim3fk=; h=X-PGP-Universal:From:To:CC:Subject:Date:Message-ID:X-Mailer: In-Reply-To:References:MIME-Version:X-NVConfidentiality: Content-Type:Content-Transfer-Encoding; b=o+RBABok9q175TIaACbH78zORn0X6HKDRIaL7qVbx6i1lTfFIvWnzJW+4vNL6sV6b ZKgT8U/FylXyFeAAKUtwBA/FkfEIAamghIZKdkVrDQ5M71n4NK6qlF4EKNxv4IL4eG +muFf+zUoAUwJ+7LK/tHizAsQdE0USzc+yeNNAo+BhVqKHOsE2Pc3X9dhUdgl61mgH F95OPZNtDZ+Hw34YccLNzh4c/8BUxt3ybbT6oi2KcaW8mcQG/dCe872irjZnjQI/eM pN1XUWI2MkJJS0eZpCAGD73t9t7AvrexWZ6dujBLVTspyk/N1JQL1I0qnp79R8szhm 24nbXQtuN0hkw== Sender: linux-kernel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org When migrating an anonymous private page to a ZONE_DEVICE private page, the source page->mapping and page->index fields are copied to the destination ZONE_DEVICE struct page and the page_mapcount() is increased. This is so rmap_walk() can be used to unmap and migrate the page back to system memory. However, try_to_unmap_one() computes the subpage pointer from a swap pte which computes an invalid page pointer and a kernel panic results such as: BUG: unable to handle page fault for address: ffffea1fffffffc8 Currently, only single pages can be migrated to device private memory so no subpage computation is needed and it can be set to "page". Fixes: a5430dda8a3a1c ("mm/migrate: support un-addressable ZONE_DEVICE page= in migration") Signed-off-by: Ralph Campbell Cc: "J=C3=A9r=C3=B4me Glisse" Cc: "Kirill A. Shutemov" Cc: Mike Kravetz Cc: Christoph Hellwig Cc: Jason Gunthorpe Cc: John Hubbard Cc: Signed-off-by: Andrew Morton --- mm/rmap.c | 1 + 1 file changed, 1 insertion(+) diff --git a/mm/rmap.c b/mm/rmap.c index e5dfe2ae6b0d..ec1af8b60423 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1476,6 +1476,7 @@ static bool try_to_unmap_one(struct page *page, struc= t vm_area_struct *vma, * No need to invalidate here it will synchronize on * against the special swap migration pte. */ + subpage =3D page; goto discard; } =20 --=20 2.20.1