From mboxrd@z Thu Jan 1 00:00:00 1970
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-s390@vger.kernel.org, linux-mm@kvack.org, David Hildenbrand,
 Heiko Carstens, Vasily Gorbik, Christian Borntraeger, Gerald Schaefer
Subject: [PATCH v1 2/9] s390/vmem: recursive implementation of vmem_remove_range()
Date: Fri, 3 Jul 2020 15:39:10 +0200
Message-Id: <20200703133917.39045-3-david@redhat.com>
In-Reply-To: <20200703133917.39045-1-david@redhat.com>
References: <20200703133917.39045-1-david@redhat.com>
MIME-Version: 1.0

We want to reuse the same functionality in vmemmap_free(). Let's start
by introducing a recursive remove_pagetable(), inspired by x86. We'll
extend it to also cover the vmemmap next.

A recursive implementation makes it easier to expand individual cases
without harming readability. In addition, we avoid traversing the whole
hierarchy over and over again.

One change is that we no longer unmap large PMDs/PUDs that are not
completely covered by the request. That should never happen for direct
mappings, unless memory were removed at a different granularity than it
was added, which would already be broken.

Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Christian Borntraeger
Cc: Gerald Schaefer
Signed-off-by: David Hildenbrand
---
 arch/s390/mm/vmem.c | 153 +++++++++++++++++++++++++++++++-------------
 1 file changed, 107 insertions(+), 46 deletions(-)

diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
index 66c5333020ead..6fe156c3f035c 100644
--- a/arch/s390/mm/vmem.c
+++ b/arch/s390/mm/vmem.c
@@ -138,64 +138,125 @@ static int vmem_add_range(unsigned long start, unsigned long size)
 	return ret;
 }
 
-/*
- * Remove a physical memory range from the 1:1 mapping.
- * Currently only invalidates page table entries.
- */
-static void vmem_remove_range(unsigned long start, unsigned long size)
+static void remove_pte_table(pmd_t *pmd, unsigned long addr,
+			     unsigned long end)
 {
-	unsigned long pages4k, pages1m, pages2g;
-	unsigned long end = start + size;
-	unsigned long address = start;
-	pgd_t *pg_dir;
-	p4d_t *p4_dir;
-	pud_t *pu_dir;
-	pmd_t *pm_dir;
-	pte_t *pt_dir;
+	unsigned long pages = 0;
+	pte_t *pte;
 
-	pages4k = pages1m = pages2g = 0;
-	while (address < end) {
-		pg_dir = pgd_offset_k(address);
-		if (pgd_none(*pg_dir)) {
-			address += PGDIR_SIZE;
+	pte = pte_offset_kernel(pmd, addr);
+	for (; addr < end; addr += PAGE_SIZE, pte++) {
+		if (pte_none(*pte))
 			continue;
-		}
-		p4_dir = p4d_offset(pg_dir, address);
-		if (p4d_none(*p4_dir)) {
-			address += P4D_SIZE;
+
+		pte_clear(&init_mm, addr, pte);
+		pages++;
+	}
+
+	update_page_count(PG_DIRECT_MAP_4K, -pages);
+}
+
+static void remove_pmd_table(pud_t *pud, unsigned long addr,
+			     unsigned long end)
+{
+	unsigned long next, pages = 0;
+	pmd_t *pmd;
+
+	pmd = pmd_offset(pud, addr);
+	for (; addr < end; addr = next, pmd++) {
+		next = pmd_addr_end(addr, end);
+
+		if (pmd_none(*pmd))
 			continue;
-		}
-		pu_dir = pud_offset(p4_dir, address);
-		if (pud_none(*pu_dir)) {
-			address += PUD_SIZE;
+
+		if (pmd_large(*pmd)) {
+			if (IS_ALIGNED(addr, PMD_SIZE) &&
+			    IS_ALIGNED(next, PMD_SIZE)) {
+				pmd_clear(pmd);
+				pages++;
+			}
 			continue;
 		}
-		if (pud_large(*pu_dir)) {
-			pud_clear(pu_dir);
-			address += PUD_SIZE;
-			pages2g++;
+
+		remove_pte_table(pmd, addr, next);
+	}
+
+	update_page_count(PG_DIRECT_MAP_1M, -pages);
+}
+
+static void remove_pud_table(p4d_t *p4d, unsigned long addr,
+			     unsigned long end)
+{
+	unsigned long next, pages = 0;
+	pud_t *pud;
+
+	pud = pud_offset(p4d, addr);
+	for (; addr < end; addr = next, pud++) {
+		next = pud_addr_end(addr, end);
+
+		if (pud_none(*pud))
 			continue;
-		}
-		pm_dir = pmd_offset(pu_dir, address);
-		if (pmd_none(*pm_dir)) {
-			address += PMD_SIZE;
+
+		if (pud_large(*pud)) {
+			if (IS_ALIGNED(addr, PUD_SIZE) &&
+			    IS_ALIGNED(next, PUD_SIZE)) {
+				pud_clear(pud);
+				pages++;
+			}
 			continue;
 		}
-		if (pmd_large(*pm_dir)) {
-			pmd_clear(pm_dir);
-			address += PMD_SIZE;
-			pages1m++;
+
+		remove_pmd_table(pud, addr, next);
+	}
+
+	update_page_count(PG_DIRECT_MAP_2G, -pages);
+}
+
+static void remove_p4d_table(pgd_t *pgd, unsigned long addr,
+			     unsigned long end)
+{
+	unsigned long next;
+	p4d_t *p4d;
+
+	p4d = p4d_offset(pgd, addr);
+	for (; addr < end; addr = next, p4d++) {
+		next = p4d_addr_end(addr, end);
+
+		if (p4d_none(*p4d))
 			continue;
-		}
-		pt_dir = pte_offset_kernel(pm_dir, address);
-		pte_clear(&init_mm, address, pt_dir);
-		address += PAGE_SIZE;
-		pages4k++;
+
+		remove_pud_table(p4d, addr, next);
 	}
+}
+
+static void remove_pagetable(unsigned long start, unsigned long end)
+{
+	unsigned long addr, next;
+	pgd_t *pgd;
+
+	if (WARN_ON_ONCE(!PAGE_ALIGNED(start | end)))
+		return;
+
+	for (addr = start; addr < end; addr = next) {
+		next = pgd_addr_end(addr, end);
+		pgd = pgd_offset_k(addr);
+
+		if (pgd_none(*pgd))
+			continue;
+
+		remove_p4d_table(pgd, addr, next);
+	}
+
 	flush_tlb_kernel_range(start, end);
-	update_page_count(PG_DIRECT_MAP_4K, -pages4k);
-	update_page_count(PG_DIRECT_MAP_1M, -pages1m);
-	update_page_count(PG_DIRECT_MAP_2G, -pages2g);
+}
+
+/*
+ * Remove a physical memory range from the 1:1 mapping.
+ * Currently only invalidates page table entries.
+ */
+static void vmem_remove_range(unsigned long start, unsigned long size)
+{
+	remove_pagetable(start, start + size);
 }
 
 /*
-- 
2.26.2