From: Thomas Hellström (VMware)
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: torvalds@linux-foundation.org, Thomas Hellstrom, Andrew Morton,
	Matthew Wilcox, Will Deacon, Peter Zijlstra, Rik van Riel,
	Minchan Kim, Michal Hocko, Huang Ying, Jérôme Glisse,
	"Kirill A. Shutemov"
Subject: [PATCH v3 2/7] mm: Add a walk_page_mapping() function to the pagewalk code
Date: Wed, 2 Oct 2019 15:47:25 +0200
Message-Id: <20191002134730.40985-3-thomas_os@shipmail.org>
In-Reply-To: <20191002134730.40985-1-thomas_os@shipmail.org>
References: <20191002134730.40985-1-thomas_os@shipmail.org>

From: Thomas Hellstrom

For users that want to traverse all page table entries pointing into a
region of a struct address_space mapping, introduce a walk_page_mapping()
function.

The walk_page_mapping() function will initially be used for
dirty-tracking in virtual graphics drivers.

Cc: Andrew Morton
Cc: Matthew Wilcox
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Rik van Riel
Cc: Minchan Kim
Cc: Michal Hocko
Cc: Huang Ying
Cc: Jérôme Glisse
Cc: Kirill A. Shutemov
Signed-off-by: Thomas Hellstrom
---
 include/linux/pagewalk.h |  9 ++++
 mm/pagewalk.c            | 99 +++++++++++++++++++++++++++++++++++++++-
 2 files changed, 107 insertions(+), 1 deletion(-)

diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
index bddd9759bab9..6ec82e92c87f 100644
--- a/include/linux/pagewalk.h
+++ b/include/linux/pagewalk.h
@@ -24,6 +24,9 @@ struct mm_walk;
  *			"do page table walk over the current vma", returning
  *			a negative value means "abort current page table walk
  *			right now" and returning 1 means "skip the current vma"
+ * @pre_vma:            if set, called before starting walk on a non-null vma.
+ * @post_vma:           if set, called after a walk on a non-null vma, provided
+ *                      that @pre_vma and the vma walk succeeded.
  */
 struct mm_walk_ops {
 	int (*pud_entry)(pud_t *pud, unsigned long addr,
@@ -39,6 +42,9 @@ struct mm_walk_ops {
 			 struct mm_walk *walk);
 	int (*test_walk)(unsigned long addr, unsigned long next,
 			struct mm_walk *walk);
+	int (*pre_vma)(unsigned long start, unsigned long end,
+		       struct mm_walk *walk);
+	void (*post_vma)(struct mm_walk *walk);
 };
 
 /**
@@ -62,5 +68,8 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
 		void *private);
 int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
 		void *private);
+int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
+		      pgoff_t nr, const struct mm_walk_ops *ops,
+		      void *private);
 
 #endif /* _LINUX_PAGEWALK_H */
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index d48c2a986ea3..658d1e5ec428 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -253,13 +253,23 @@ static int __walk_page_range(unsigned long start, unsigned long end,
 {
 	int err = 0;
 	struct vm_area_struct *vma = walk->vma;
+	const struct mm_walk_ops *ops = walk->ops;
+
+	if (vma && ops->pre_vma) {
+		err = ops->pre_vma(start, end, walk);
+		if (err)
+			return err;
+	}
 
 	if (vma && is_vm_hugetlb_page(vma)) {
-		if (walk->ops->hugetlb_entry)
+		if (ops->hugetlb_entry)
 			err = walk_hugetlb_range(start, end, walk);
 	} else
 		err = walk_pgd_range(start, end, walk);
 
+	if (vma && ops->post_vma)
+		ops->post_vma(walk);
+
 	return err;
 }
 
@@ -285,11 +295,17 @@ static int __walk_page_range(unsigned long start, unsigned long end,
  *  - <0 : failed to handle the current entry, and return to the caller
  *         with error code.
  *
+ *
  * Before starting to walk page table, some callers want to check whether
  * they really want to walk over the current vma, typically by checking
  * its vm_flags. walk_page_test() and @ops->test_walk() are used for this
  * purpose.
  *
+ * If operations need to be staged before and committed after a vma is walked,
+ * there are two callbacks, pre_vma() and post_vma(). Note that post_vma(),
+ * since it is intended to handle commit-type operations, can't return any
+ * errors.
+ *
  * struct mm_walk keeps current values of some common data like vma and pmd,
  * which are useful for the access from callbacks. If you want to pass some
  * caller-specific data to callbacks, @private should be helpful.
@@ -376,3 +392,84 @@ int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
 		return err;
 	return __walk_page_range(vma->vm_start, vma->vm_end, &walk);
 }
+
+/**
+ * walk_page_mapping - walk all memory areas mapped into a struct address_space.
+ * @mapping: Pointer to the struct address_space
+ * @first_index: First page offset in the address_space
+ * @nr: Number of incremental page offsets to cover
+ * @ops: operation to call during the walk
+ * @private: private data for callbacks' usage
+ *
+ * This function walks all memory areas mapped into a struct address_space.
+ * The walk is limited to only the given page-size index range, but if
+ * the index boundaries cross a huge page-table entry, that entry will be
+ * included.
+ *
+ * Also see walk_page_range() for additional information.
+ *
+ * Locking:
+ *   This function can't require that the struct mm_struct::mmap_sem is held,
+ *   since @mapping may be mapped by multiple processes. Instead
+ *   @mapping->i_mmap_rwsem must be held. This might have implications in the
+ *   callbacks, and it's up to the caller to ensure that the
+ *   struct mm_struct::mmap_sem is not needed.
+ *
+ *   Also this means that a caller can't rely on the struct
+ *   vm_area_struct::vm_flags to be constant across a call,
+ *   except for immutable flags. Callers requiring this shouldn't use
+ *   this function.
+ *
+ *   If @mapping allows faulting of huge pmds and puds, it is desirable
+ *   that its huge_fault() handler blocks while this function is running on
+ *   @mapping. Otherwise a race may occur where the huge entry is split when
+ *   it was intended to be handled in a huge entry callback. This requires an
+ *   external lock, for example that @mapping->i_mmap_rwsem is held in
+ *   write mode in the huge_fault() handlers.
+ */
+int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
+		      pgoff_t nr, const struct mm_walk_ops *ops,
+		      void *private)
+{
+	struct mm_walk walk = {
+		.ops = ops,
+		.private = private,
+	};
+	struct vm_area_struct *vma;
+	pgoff_t vba, vea, cba, cea;
+	unsigned long start_addr, end_addr;
+	int err = 0;
+
+	lockdep_assert_held(&mapping->i_mmap_rwsem);
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, first_index,
+				  first_index + nr - 1) {
+		/* Clip to the vma */
+		vba = vma->vm_pgoff;
+		vea = vba + vma_pages(vma);
+		cba = first_index;
+		cba = max(cba, vba);
+		cea = first_index + nr;
+		cea = min(cea, vea);
+
+		start_addr = ((cba - vba) << PAGE_SHIFT) + vma->vm_start;
+		end_addr = ((cea - vba) << PAGE_SHIFT) + vma->vm_start;
+		if (start_addr >= end_addr)
+			continue;
+
+		walk.vma = vma;
+		walk.mm = vma->vm_mm;
+
+		err = walk_page_test(vma->vm_start, vma->vm_end, &walk);
+		if (err > 0) {
+			err = 0;
+			break;
+		} else if (err < 0)
+			break;
+
+		err = __walk_page_range(start_addr, end_addr, &walk);
+		if (err)
+			break;
+	}
+
+	return err;
+}
-- 
2.20.1