Date: Wed, 15 Mar 2023 11:55:23 +0200
From: Mike Rapoport <rppt@kernel.org>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-ia64@vger.kernel.org
Subject: Re: [PATCH v4 12/36] ia64: Implement the new page table range API
References:
 <20230315051444.3229621-1-willy@infradead.org> <20230315051444.3229621-13-willy@infradead.org>
In-Reply-To: <20230315051444.3229621-13-willy@infradead.org>

On Wed,
Mar 15, 2023 at 05:14:20AM +0000, Matthew Wilcox (Oracle) wrote:
> Add PFN_PTE_SHIFT, update_mmu_cache_range() and flush_dcache_folio().
> Change the PG_arch_1 (aka PG_dcache_clean) flag from being per-page to
> per-folio, which makes arch_dma_mark_clean() and mark_clean() a little
> more exciting.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: linux-ia64@vger.kernel.org

Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>

> ---
>  arch/ia64/hp/common/sba_iommu.c    | 26 +++++++++++++++-----------
>  arch/ia64/include/asm/cacheflush.h | 14 ++++++++++----
>  arch/ia64/include/asm/pgtable.h    |  4 ++--
>  arch/ia64/mm/init.c                | 28 +++++++++++++++++++---------
>  4 files changed, 46 insertions(+), 26 deletions(-)
>
> diff --git a/arch/ia64/hp/common/sba_iommu.c b/arch/ia64/hp/common/sba_iommu.c
> index 8ad6946521d8..48d475f10003 100644
> --- a/arch/ia64/hp/common/sba_iommu.c
> +++ b/arch/ia64/hp/common/sba_iommu.c
> @@ -798,22 +798,26 @@ sba_io_pdir_entry(u64 *pdir_ptr, unsigned long vba)
>  #endif
>
>  #ifdef ENABLE_MARK_CLEAN
> -/**
> +/*
>   * Since DMA is i-cache coherent, any (complete) pages that were written via
>   * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
>   * flush them when they get mapped into an executable vm-area.
>   */
> -static void
> -mark_clean (void *addr, size_t size)
> +static void mark_clean(void *addr, size_t size)
>  {
> -        unsigned long pg_addr, end;
> -
> -        pg_addr = PAGE_ALIGN((unsigned long) addr);
> -        end = (unsigned long) addr + size;
> -        while (pg_addr + PAGE_SIZE <= end) {
> -                struct page *page = virt_to_page((void *)pg_addr);
> -                set_bit(PG_arch_1, &page->flags);
> -                pg_addr += PAGE_SIZE;
> +        struct folio *folio = virt_to_folio(addr);
> +        ssize_t left = size;
> +        size_t offset = offset_in_folio(folio, addr);
> +
> +        if (offset) {
> +                left -= folio_size(folio) - offset;
> +                folio = folio_next(folio);
> +        }
> +
> +        while (left >= folio_size(folio)) {
> +                set_bit(PG_arch_1, &folio->flags);
> +                left -= folio_size(folio);
> +                folio = folio_next(folio);
>          }
>  }
>  #endif
>
> diff --git a/arch/ia64/include/asm/cacheflush.h b/arch/ia64/include/asm/cacheflush.h
> index 708c0fa5d975..eac493fa9e0d 100644
> --- a/arch/ia64/include/asm/cacheflush.h
> +++ b/arch/ia64/include/asm/cacheflush.h
> @@ -13,10 +13,16 @@
>  #include
>
>  #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
> -#define flush_dcache_page(page)                 \
> -do {                                            \
> -        clear_bit(PG_arch_1, &(page)->flags);   \
> -} while (0)
> +static inline void flush_dcache_folio(struct folio *folio)
> +{
> +        clear_bit(PG_arch_1, &folio->flags);
> +}
> +#define flush_dcache_folio flush_dcache_folio
> +
> +static inline void flush_dcache_page(struct page *page)
> +{
> +        flush_dcache_folio(page_folio(page));
> +}
>
>  extern void flush_icache_range(unsigned long start, unsigned long end);
>  #define flush_icache_range flush_icache_range
>
> diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h
> index 21c97e31a28a..5450d59e4fb9 100644
> --- a/arch/ia64/include/asm/pgtable.h
> +++ b/arch/ia64/include/asm/pgtable.h
> @@ -206,6 +206,7 @@ ia64_phys_addr_valid (unsigned long addr)
>  #define RGN_MAP_SHIFT (PGDIR_SHIFT + PTRS_PER_PGD_SHIFT - 3)
>  #define RGN_MAP_LIMIT ((1UL << RGN_MAP_SHIFT) - PAGE_SIZE)  /* per region addr limit */
>
> +#define PFN_PTE_SHIFT	PAGE_SHIFT
>  /*
>   * Conversion functions: convert page frame number (pfn) and a protection value to a page
>   * table entry (pte).
> @@ -303,8 +304,6 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
>          *ptep = pteval;
>  }
>
> -#define set_pte_at(mm,addr,ptep,pteval) set_pte(ptep,pteval)
> -
>  /*
>   * Make page protection values cacheable, uncacheable, or write-
>   * combining.  Note that "protection" is really a misnomer here as the
> @@ -396,6 +395,7 @@ pte_same (pte_t a, pte_t b)
>          return pte_val(a) == pte_val(b);
>  }
>
> +#define update_mmu_cache_range(vma, address, ptep, nr) do { } while (0)
>  #define update_mmu_cache(vma, address, ptep) do { } while (0)
>
>  extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
>
> diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
> index 7f5353e28516..b95debabdc2a 100644
> --- a/arch/ia64/mm/init.c
> +++ b/arch/ia64/mm/init.c
> @@ -50,30 +50,40 @@ void
>  __ia64_sync_icache_dcache (pte_t pte)
>  {
>          unsigned long addr;
> -        struct page *page;
> +        struct folio *folio;
>
> -        page = pte_page(pte);
> -        addr = (unsigned long) page_address(page);
> +        folio = page_folio(pte_page(pte));
> +        addr = (unsigned long)folio_address(folio);
>
> -        if (test_bit(PG_arch_1, &page->flags))
> +        if (test_bit(PG_arch_1, &folio->flags))
>                  return;         /* i-cache is already coherent with d-cache */
>
> -        flush_icache_range(addr, addr + page_size(page));
> -        set_bit(PG_arch_1, &page->flags);       /* mark page as clean */
> +        flush_icache_range(addr, addr + folio_size(folio));
> +        set_bit(PG_arch_1, &folio->flags);      /* mark page as clean */
>  }
>
>  /*
> - * Since DMA is i-cache coherent, any (complete) pages that were written via
> + * Since DMA is i-cache coherent, any (complete) folios that were written via
>   * DMA can be marked as "clean" so that lazy_mmu_prot_update() doesn't have to
>   * flush them when they get mapped into an executable vm-area.
>   */
>  void arch_dma_mark_clean(phys_addr_t paddr, size_t size)
>  {
>          unsigned long pfn = PHYS_PFN(paddr);
> +        struct folio *folio = page_folio(pfn_to_page(pfn));
> +        ssize_t left = size;
> +        size_t offset = offset_in_folio(folio, paddr);
>
> -        do {
> +        if (offset) {
> +                left -= folio_size(folio) - offset;
> +                folio = folio_next(folio);
> +        }
> +
> +        while (left >= (ssize_t)folio_size(folio)) {
>                  set_bit(PG_arch_1, &pfn_to_page(pfn)->flags);
> -        } while (++pfn <= PHYS_PFN(paddr + size - 1));
> +                left -= folio_size(folio);
> +                folio = folio_next(folio);
> +        }
>  }
>
>  inline void
> --
> 2.39.2
>

--
Sincerely yours,
Mike.
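[Editor's note: the partial-folio arithmetic in the quoted mark_clean()/arch_dma_mark_clean() can be exercised outside the kernel. The sketch below is a userspace model, not kernel code: FOLIO_SIZE is a stand-in constant (real folios vary in size), `clean[]` stands in for the per-folio PG_arch_1 bits, and the modulo arithmetic stands in for offset_in_folio()/folio_next(). It shows the walk the patch introduces: skip a partially-written head folio, mark every folio wholly inside the range, and leave a partial tail unmarked.]

```c
#include <assert.h>
#include <stddef.h>
#include <sys/types.h>

/* Mock: every "folio" is 16 KiB; real folios have variable sizes. */
#define FOLIO_SIZE 16384UL
#define NFOLIOS    8

static unsigned char clean[NFOLIOS];   /* stand-in for PG_arch_1 bits */

/* Same walk as the quoted mark_clean(): skip a partial head folio,
 * then mark every folio that lies entirely inside [addr, addr+size). */
static void mark_clean_model(unsigned long addr, size_t size)
{
        unsigned long idx = addr / FOLIO_SIZE;  /* which folio */
        size_t offset = addr % FOLIO_SIZE;      /* offset_in_folio() */
        ssize_t left = (ssize_t)size;

        if (offset) {                           /* partial head: skip it */
                left -= (ssize_t)(FOLIO_SIZE - offset);
                idx++;
        }
        while (left >= (ssize_t)FOLIO_SIZE) {   /* whole folios only */
                clean[idx] = 1;                 /* set_bit(PG_arch_1, ...) */
                left -= (ssize_t)FOLIO_SIZE;
                idx++;                          /* folio_next() */
        }
}
```

For example, a DMA write starting 1 KiB into folio 0 that covers folio 1 fully and 100 bytes of folio 2 marks only folio 1: head and tail folios were not written in their entirety, so they must still be flushed lazily. Note the (ssize_t) cast on folio size in the loop condition, mirroring arch_dma_mark_clean(): without it, a negative `left` would be promoted to a huge unsigned value and the loop would overrun.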
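[Editor's note: the PG_arch_1 protocol the patch moves from pages to folios can likewise be modeled in a few lines. This is a sketch under assumptions: `struct folio_model` and the `_model` helpers are hypothetical names, and a counter stands in for the actual flush_icache_range() call. It captures the invariant from the quoted flush_dcache_folio() and __ia64_sync_icache_dcache(): clearing the flag when contents change, and flushing the i-cache at most once per dirty period.]

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the PG_arch_1 ("d-cache clean", i-cache coherent) protocol:
 * flush_dcache_folio() clears the flag when a folio's contents change;
 * __ia64_sync_icache_dcache() flushes the i-cache only when the flag is
 * clear, then sets it so later executable mappings skip the flush. */
struct folio_model {
        bool arch_1;            /* PG_arch_1 */
        int  icache_flushes;    /* counts flush_icache_range() calls */
};

static void flush_dcache_folio_model(struct folio_model *f)
{
        f->arch_1 = false;      /* contents changed: caches may disagree */
}

static void sync_icache_dcache_model(struct folio_model *f)
{
        if (f->arch_1)
                return;         /* i-cache already coherent with d-cache */
        f->icache_flushes++;    /* flush_icache_range(addr, addr + size) */
        f->arch_1 = true;       /* mark folio as clean */
}
```

Mapping the same folio executable twice triggers a single flush; only another d-cache write (or a DMA that did not cover the whole folio, per mark_clean() above) makes the next sync flush again. This is also why the per-folio flag is coarser than per-page: one bit now covers the whole folio, so a single flush serves all its pages.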