Date: Mon, 16 Mar 2020 19:13:24 +0100
From: Christoph Hellwig
To: Jason Gunthorpe
Cc: Christoph Hellwig, Jerome Glisse, Ralph Campbell, Felix.Kuehling@amd.com,
	linux-mm@kvack.org, John Hubbard, dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org, Philip Yang
Subject: Re: [PATCH hmm 2/8] mm/hmm: don't free the cached pgmap while scanning
Message-ID: <20200316181324.GA24533@lst.de>
In-Reply-To: <20200316180713.GI20941@ziepe.ca>

On Mon, Mar 16, 2020 at 03:07:13PM -0300, Jason Gunthorpe wrote:
> I chose this to be simple without having to goto unwind it.
> 
> So, instead like this:

As said, and per the previous discussion: I think just removing the
pgmap lookup is the right thing to do here.
Something like this patch:

diff --git a/mm/hmm.c b/mm/hmm.c
index 3d10485bf323..9f1049815d44 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -28,7 +28,6 @@
 
 struct hmm_vma_walk {
 	struct hmm_range	*range;
-	struct dev_pagemap	*pgmap;
 	unsigned long		last;
 	unsigned int		flags;
 };
@@ -198,15 +197,8 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr,
 		return hmm_vma_fault(addr, end, fault, write_fault, walk);
 
 	pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
-	for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++) {
-		if (pmd_devmap(pmd)) {
-			hmm_vma_walk->pgmap = get_dev_pagemap(pfn,
-					      hmm_vma_walk->pgmap);
-			if (unlikely(!hmm_vma_walk->pgmap))
-				return -EBUSY;
-		}
+	for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++)
 		pfns[i] = hmm_device_entry_from_pfn(range, pfn) | cpu_flags;
-	}
 	hmm_vma_walk->last = end;
 	return 0;
 }
@@ -277,15 +269,6 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 	if (fault || write_fault)
 		goto fault;
 
-	if (pte_devmap(pte)) {
-		hmm_vma_walk->pgmap = get_dev_pagemap(pte_pfn(pte),
-					      hmm_vma_walk->pgmap);
-		if (unlikely(!hmm_vma_walk->pgmap)) {
-			pte_unmap(ptep);
-			return -EBUSY;
-		}
-	}
-
 	/*
	 * Since each architecture defines a struct page for the zero page, just
	 * fall through and treat it like a normal page.
@@ -455,12 +438,6 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 
 	pfn = pud_pfn(pud) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
 	for (i = 0; i < npages; ++i, ++pfn) {
-		hmm_vma_walk->pgmap = get_dev_pagemap(pfn,
-					      hmm_vma_walk->pgmap);
-		if (unlikely(!hmm_vma_walk->pgmap)) {
-			ret = -EBUSY;
-			goto out_unlock;
-		}
 		pfns[i] = hmm_device_entry_from_pfn(range, pfn) |
 			  cpu_flags;
 	}
@@ -614,15 +591,6 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
 			return -EBUSY;
 		ret = walk_page_range(mm, hmm_vma_walk.last, range->end,
 				      &hmm_walk_ops, &hmm_vma_walk);
-		/*
-		 * A pgmap is kept cached in the hmm_vma_walk to avoid expensive
-		 * searching in the probably common case that the pgmap is the
-		 * same for the entire requested range.
-		 */
-		if (hmm_vma_walk.pgmap) {
-			put_dev_pagemap(hmm_vma_walk.pgmap);
-			hmm_vma_walk.pgmap = NULL;
-		}
 	} while (ret == -EBUSY);
 
 	if (ret)
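
For reference, this is what the tail of hmm_vma_handle_pmd() reduces to
once the pmd_devmap() branch is gone - a sketch assembled from the
hunks above, not copied from the tree:

	/* Map the whole PMD: a plain PFN fill loop, no per-pfn pgmap lookup. */
	pfn = pmd_pfn(pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
	for (i = 0; addr < end; addr += PAGE_SIZE, i++, pfn++)
		pfns[i] = hmm_device_entry_from_pfn(range, pfn) | cpu_flags;
	hmm_vma_walk->last = end;
	return 0;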