From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-s390@vger.kernel.org, linux-mm@kvack.org,
	David Hildenbrand, Heiko Carstens, Vasily Gorbik,
	Christian Borntraeger, Gerald Schaefer
Subject: [PATCH v2 9/9] s390/vmemmap: avoid memset(PAGE_UNUSED) when adding consecutive sections
Date: Wed, 22 Jul 2020 11:45:58 +0200
Message-Id: <20200722094558.9828-10-david@redhat.com>
In-Reply-To: <20200722094558.9828-1-david@redhat.com>
References: <20200722094558.9828-1-david@redhat.com>

Let's avoid memset(PAGE_UNUSED) when adding consecutive sections,
whereby the vmemmap of a single section does not span full PMDs.

Cc: Heiko Carstens
Cc: Vasily Gorbik
Cc: Christian Borntraeger
Cc: Gerald Schaefer
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 arch/s390/mm/vmem.c | 45 ++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 42 insertions(+), 3 deletions(-)

diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
index df361bbacda1b..70ebfc7958a68 100644
--- a/arch/s390/mm/vmem.c
+++ b/arch/s390/mm/vmem.c
@@ -74,7 +74,22 @@ static void vmem_pte_free(unsigned long *table)
 
 #define PAGE_UNUSED 0xFD
 
-static void vmemmap_use_sub_pmd(unsigned long start, unsigned long end)
+/*
+ * The unused vmemmap range, which was not yet memset(PAGE_UNUSED) ranges
+ * from unused_pmd_start to next PMD_SIZE boundary.
+ */
+static unsigned long unused_pmd_start;
+
+static void vmemmap_flush_unused_pmd(void)
+{
+	if (!unused_pmd_start)
+		return;
+	memset(__va(unused_pmd_start), PAGE_UNUSED,
+	       ALIGN(unused_pmd_start, PMD_SIZE) - unused_pmd_start);
+	unused_pmd_start = 0;
+}
+
+static void __vmemmap_use_sub_pmd(unsigned long start, unsigned long end)
 {
 	/*
 	 * As we expect to add in the same granularity as we remove, it's
@@ -85,18 +100,41 @@ static void vmemmap_use_sub_pmd(unsigned long start, unsigned long end)
 	memset(__va(start), 0, sizeof(struct page));
 }
 
+static void vmemmap_use_sub_pmd(unsigned long start, unsigned long end)
+{
+	/*
+	 * We only optimize if the new used range directly follows the
+	 * previously unused range (esp., when populating consecutive sections).
+	 */
+	if (unused_pmd_start == start) {
+		unused_pmd_start = end;
+		if (likely(IS_ALIGNED(unused_pmd_start, PMD_SIZE)))
+			unused_pmd_start = 0;
+		return;
+	}
+	vmemmap_flush_unused_pmd();
+	__vmemmap_use_sub_pmd(start, end);
+}
+
 static void vmemmap_use_new_sub_pmd(unsigned long start, unsigned long end)
 {
 	void *page = __va(ALIGN_DOWN(start, PMD_SIZE));
 
+	vmemmap_flush_unused_pmd();
+
 	/* Could be our memmap page is filled with PAGE_UNUSED already ... */
-	vmemmap_use_sub_pmd(start, end);
+	__vmemmap_use_sub_pmd(start, end);
 
 	/* Mark the unused parts of the new memmap page PAGE_UNUSED. */
 	if (!IS_ALIGNED(start, PMD_SIZE))
 		memset(page, PAGE_UNUSED, start - __pa(page));
+	/*
+	 * We want to avoid memset(PAGE_UNUSED) when populating the vmemmap of
+	 * consecutive sections. Remember for the last added PMD the last
+	 * unused range in the populated PMD.
+	 */
 	if (!IS_ALIGNED(end, PMD_SIZE))
-		memset(__va(end), PAGE_UNUSED, __pa(page) + PMD_SIZE - end);
+		unused_pmd_start = end;
 }
 
 /* Returns true if the PMD is completely unused and can be freed. */
@@ -104,6 +142,7 @@ static bool vmemmap_unuse_sub_pmd(unsigned long start, unsigned long end)
 {
 	void *page = __va(ALIGN_DOWN(start, PMD_SIZE));
 
+	vmemmap_flush_unused_pmd();
 	memset(__va(start), PAGE_UNUSED, end - start);
 	return !memchr_inv(page, PAGE_UNUSED, PMD_SIZE);
 }
-- 
2.26.2
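
For reference, the deferral bookkeeping above can be tried out in
isolation. Below is a minimal user-space sketch, not part of the patch:
one PMD worth of vmemmap is modelled as a 64-byte buffer, PMD_SIZE and
PAGE_UNUSED are shrunken stand-ins, and use_new_range(), use_range() and
flush_unused_pmd() are hypothetical names mirroring
vmemmap_use_new_sub_pmd(), vmemmap_use_sub_pmd() and
vmemmap_flush_unused_pmd(). As in the patch, 0 doubles as the "nothing
pending" sentinel, which is safe because a deferred mark is never
PMD-aligned.

#include <stdio.h>
#include <string.h>

#define PMD_SIZE	64	/* toy stand-in for the real segment size */
#define PAGE_UNUSED	0xFD

#define ALIGN(x, a)	(((x) + (a) - 1) & ~((unsigned long)(a) - 1))

static unsigned char pmd[PMD_SIZE];	/* models the memmap page of one PMD */
static unsigned long unused_pmd_start;	/* 0: no deferred memset pending */

static void flush_unused_pmd(void)
{
	if (!unused_pmd_start)
		return;
	memset(pmd + unused_pmd_start, PAGE_UNUSED,
	       ALIGN(unused_pmd_start, PMD_SIZE) - unused_pmd_start);
	unused_pmd_start = 0;
}

/* Mirrors vmemmap_use_new_sub_pmd(): first range placed into a fresh PMD. */
static void use_new_range(unsigned long start, unsigned long end)
{
	flush_unused_pmd();
	if (start % PMD_SIZE)		/* head of the PMD stays unused */
		memset(pmd, PAGE_UNUSED, start);
	if (end % PMD_SIZE)		/* tail: remember it instead of memset */
		unused_pmd_start = end;
}

/* Mirrors vmemmap_use_sub_pmd(): a later range in an already-populated PMD. */
static void use_range(unsigned long start, unsigned long end)
{
	if (unused_pmd_start == start) {
		/*
		 * Consecutive with the deferred range: it was never marked
		 * PAGE_UNUSED (still zeroed), so just advance the mark.
		 */
		unused_pmd_start = (end % PMD_SIZE) ? end : 0;
		return;
	}
	flush_unused_pmd();
	/* Mark a piece used (the kernel only zeroes sizeof(struct page)). */
	memset(pmd + start, 0, end - start);
}

int main(void)
{
	/*
	 * Four consecutive 16-byte "sections" filling one 64-byte "PMD":
	 * no memset(PAGE_UNUSED) happens at all.
	 */
	use_new_range(0, 16);
	use_range(16, 32);
	use_range(32, 48);
	use_range(48, 64);
	printf("deferred mark after full PMD: %lu (0 == nothing pending)\n",
	       unused_pmd_start);

	/* A lone section leaves [16, 64) pending until the next flush. */
	use_new_range(0, 16);
	flush_unused_pmd();
	printf("pmd[15]=0x%02x pmd[16]=0x%02x\n", pmd[15], pmd[16]);
	return 0;
}

Before this change, every section added would memset(PAGE_UNUSED) the
unused tail of its last, partially filled PMD, only for the next
consecutive section to claim exactly that range; with the deferred mark,
fully populated PMDs see no such memset at all.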