From: Thomas Hellström (VMware)
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, torvalds@linux-foundation.org, kirill@shutemov.name
Cc: Thomas Hellstrom, Andrew Morton, Matthew Wilcox, Will Deacon, Peter Zijlstra, Rik van Riel, Minchan Kim, Michal Hocko, Huang Ying, Jérôme Glisse
Subject: [PATCH v5 3/8] mm: Add a walk_page_mapping() function to the pagewalk code
Date: Thu, 10 Oct 2019 14:43:09 +0200
Message-Id: <20191010124314.40067-4-thomas_os@shipmail.org>
In-Reply-To: <20191010124314.40067-1-thomas_os@shipmail.org>
References: <20191010124314.40067-1-thomas_os@shipmail.org>

From: Thomas Hellstrom

For users who want to traverse all page table entries pointing into a
region of a struct address_space mapping, introduce a walk_page_mapping()
function.

The walk_page_mapping() function will initially be used for
dirty-tracking in virtual graphics drivers.

Cc: Andrew Morton
Cc: Matthew Wilcox
Cc: Will Deacon
Cc: Peter Zijlstra
Cc: Rik van Riel
Cc: Minchan Kim
Cc: Michal Hocko
Cc: Huang Ying
Cc: Jérôme Glisse
Cc: Kirill A. Shutemov
Signed-off-by: Thomas Hellstrom
---
 include/linux/pagewalk.h |  9 ++++
 mm/pagewalk.c            | 94 +++++++++++++++++++++++++++++++++++++++-
 2 files changed, 102 insertions(+), 1 deletion(-)

diff --git a/include/linux/pagewalk.h b/include/linux/pagewalk.h
index bddd9759bab9..6ec82e92c87f 100644
--- a/include/linux/pagewalk.h
+++ b/include/linux/pagewalk.h
@@ -24,6 +24,9 @@ struct mm_walk;
  *			"do page table walk over the current vma", returning
  *			a negative value means "abort current page table walk
  *			right now" and returning 1 means "skip the current vma"
+ * @pre_vma:		if set, called before starting walk on a non-null vma.
+ * @post_vma:		if set, called after a walk on a non-null vma, provided
+ *			that @pre_vma and the vma walk succeeded.
  */
 struct mm_walk_ops {
 	int (*pud_entry)(pud_t *pud, unsigned long addr,
@@ -39,6 +42,9 @@ struct mm_walk_ops {
 			  struct mm_walk *walk);
 	int (*test_walk)(unsigned long addr, unsigned long next,
 			struct mm_walk *walk);
+	int (*pre_vma)(unsigned long start, unsigned long end,
+		       struct mm_walk *walk);
+	void (*post_vma)(struct mm_walk *walk);
 };
 
 /**
@@ -62,5 +68,8 @@ int walk_page_range(struct mm_struct *mm, unsigned long start,
 		void *private);
 int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
 		void *private);
+int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
+		      pgoff_t nr, const struct mm_walk_ops *ops,
+		      void *private);
 
 #endif /* _LINUX_PAGEWALK_H */
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index c5fa42cab14f..ea0b9e606ad1 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -254,13 +254,23 @@ static int __walk_page_range(unsigned long start, unsigned long end,
 {
 	int err = 0;
 	struct vm_area_struct *vma = walk->vma;
+	const struct mm_walk_ops *ops = walk->ops;
+
+	if (vma && ops->pre_vma) {
+		err = ops->pre_vma(start, end, walk);
+		if (err)
+			return err;
+	}
 
 	if (vma && is_vm_hugetlb_page(vma)) {
-		if (walk->ops->hugetlb_entry)
+		if (ops->hugetlb_entry)
 			err = walk_hugetlb_range(start, end, walk);
 	} else
 		err = walk_pgd_range(start, end, walk);
 
+	if (vma && ops->post_vma)
+		ops->post_vma(walk);
+
 	return err;
 }
 
@@ -291,6 +301,11 @@ static int __walk_page_range(unsigned long start, unsigned long end,
  * its vm_flags. walk_page_test() and @ops->test_walk() are used for this
  * purpose.
  *
+ * If operations need to be staged before and committed after a vma is walked,
+ * there are two callbacks, pre_vma() and post_vma(). Note that post_vma(),
+ * since it is intended to handle commit-type operations, can't return any
+ * errors.
+ *
  * struct mm_walk keeps current values of some common data like vma and pmd,
  * which are useful for the access from callbacks. If you want to pass some
  * caller-specific data to callbacks, @private should be helpful.
@@ -377,3 +392,80 @@ int walk_page_vma(struct vm_area_struct *vma, const struct mm_walk_ops *ops,
 		return err;
 	return __walk_page_range(vma->vm_start, vma->vm_end, &walk);
 }
+
+/**
+ * walk_page_mapping - walk all memory areas mapped into a struct address_space.
+ * @mapping: Pointer to the struct address_space
+ * @first_index: First page offset in the address_space
+ * @nr: Number of incremental page offsets to cover
+ * @ops: operation to call during the walk
+ * @private: private data for callbacks' usage
+ *
+ * This function walks all memory areas mapped into a struct address_space.
+ * The walk is limited to only the given page-size index range, but if
+ * the index boundaries cross a huge page-table entry, that entry will be
+ * included.
+ *
+ * Also see walk_page_range() for additional information.
+ *
+ * Locking:
+ *   This function can't require that the struct mm_struct::mmap_sem is held,
+ *   since @mapping may be mapped by multiple processes. Instead
+ *   @mapping->i_mmap_rwsem must be held. This might have implications in the
+ *   callbacks, and it's up to the caller to ensure that the
+ *   struct mm_struct::mmap_sem is not needed.
+ *
+ *   This also means that a caller can't rely on the struct
+ *   vm_area_struct::vm_flags to be constant across a call,
+ *   except for immutable flags. Callers requiring this shouldn't use
+ *   this function.
+ *
+ * Return: 0 on success, negative error code on failure, positive number on
+ * caller-defined premature termination.
+ */
+int walk_page_mapping(struct address_space *mapping, pgoff_t first_index,
+		      pgoff_t nr, const struct mm_walk_ops *ops,
+		      void *private)
+{
+	struct mm_walk walk = {
+		.ops = ops,
+		.private = private,
+	};
+	struct vm_area_struct *vma;
+	pgoff_t vba, vea, cba, cea;
+	unsigned long start_addr, end_addr;
+	int err = 0;
+
+	lockdep_assert_held(&mapping->i_mmap_rwsem);
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, first_index,
+				  first_index + nr - 1) {
+		/* Clip to the vma */
+		vba = vma->vm_pgoff;
+		vea = vba + vma_pages(vma);
+		cba = first_index;
+		cba = max(cba, vba);
+		cea = first_index + nr;
+		cea = min(cea, vea);
+
+		start_addr = ((cba - vba) << PAGE_SHIFT) + vma->vm_start;
+		end_addr = ((cea - vba) << PAGE_SHIFT) + vma->vm_start;
+		if (start_addr >= end_addr)
+			continue;
+
+		walk.vma = vma;
+		walk.mm = vma->vm_mm;
+
+		err = walk_page_test(vma->vm_start, vma->vm_end, &walk);
+		if (err > 0) {
+			err = 0;
+			break;
+		} else if (err < 0)
+			break;
+
+		err = __walk_page_range(start_addr, end_addr, &walk);
+		if (err)
+			break;
+	}
+
+	return err;
+}
-- 
2.21.0