From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Vitaly Kuznetsov, "Michael S. Tsirkin", Jason Wang, Marek Kedzierski, Hui Zhu, Pankaj Gupta, Wei Yang, Oscar Salvador, Michal Hocko, Dan Williams, Anshuman Khandual, Dave Hansen, Vlastimil Babka, Mike Rapoport, "Rafael J. Wysocki", Len Brown, Pavel Tatashin, Greg Kroah-Hartman, virtualization@lists.linux-foundation.org, linux-acpi@vger.kernel.org
Subject: [PATCH v2 8/9] mm/memory_hotplug: memory group aware "auto-movable" online policy
Date: Fri, 23 Jul 2021 14:52:09 +0200
Message-Id: <20210723125210.29987-9-david@redhat.com>
In-Reply-To: <20210723125210.29987-1-david@redhat.com>
References: <20210723125210.29987-1-david@redhat.com>

Use memory groups to improve our "auto-movable" onlining policy:

1. For static memory groups (e.g., a DIMM), online a memory block
   MOVABLE only if all other memory blocks in the group are either
   MOVABLE or could be onlined MOVABLE. A DIMM will either be MOVABLE
   or not, not a mixture.

2. For dynamic memory groups (e.g., a virtio-mem device), online a
   memory block MOVABLE only if all other memory blocks inside the
   current unit are either MOVABLE or could be onlined MOVABLE. For a
   virtio-mem device with a device block size of 512 MiB, all 128 MiB
   memory blocks within a 512 MiB unit will either be MOVABLE or not,
   not a mixture.

We have to pass the memory group to zone_for_pfn_range() to take the
memory group into account.

Note: for now, there seems to be no compelling reason to make this
behavior configurable.
Signed-off-by: David Hildenbrand
---
 drivers/base/memory.c          | 18 +++++++------
 include/linux/memory_hotplug.h |  3 ++-
 mm/memory_hotplug.c            | 48 +++++++++++++++++++++++++++++++---
 3 files changed, 57 insertions(+), 12 deletions(-)

diff --git a/drivers/base/memory.c b/drivers/base/memory.c
index e96c4f436ac3..1d34f30a9a80 100644
--- a/drivers/base/memory.c
+++ b/drivers/base/memory.c
@@ -182,7 +182,8 @@ static int memory_block_online(struct memory_block *mem)
 	struct zone *zone;
 	int ret;
 
-	zone = zone_for_pfn_range(mem->online_type, mem->nid, start_pfn, nr_pages);
+	zone = zone_for_pfn_range(mem->online_type, mem->nid, mem->group,
+				  start_pfn, nr_pages);
 
 	/*
 	 * Although vmemmap pages have a different lifecycle than the pages
@@ -379,12 +380,13 @@ static ssize_t phys_device_show(struct device *dev,
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
 static int print_allowed_zone(char *buf, int len, int nid,
+			      struct memory_group *group,
 			      unsigned long start_pfn,
 			      unsigned long nr_pages,
 			      int online_type, struct zone *default_zone)
 {
 	struct zone *zone;
 
-	zone = zone_for_pfn_range(online_type, nid, start_pfn, nr_pages);
+	zone = zone_for_pfn_range(online_type, nid, group, start_pfn, nr_pages);
 	if (zone == default_zone)
 		return 0;
 
@@ -397,9 +399,10 @@ static ssize_t valid_zones_show(struct device *dev,
 	struct memory_block *mem = to_memory_block(dev);
 	unsigned long start_pfn = section_nr_to_pfn(mem->start_section_nr);
 	unsigned long nr_pages = PAGES_PER_SECTION * sections_per_block;
+	struct memory_group *group = mem->group;
 	struct zone *default_zone;
+	int nid = mem->nid;
 	int len = 0;
-	int nid;
 
 	/*
 	 * Check the existing zone. Make sure that we do that only on the
@@ -418,14 +421,13 @@ static ssize_t valid_zones_show(struct device *dev,
 		goto out;
 	}
 
-	nid = mem->nid;
-	default_zone = zone_for_pfn_range(MMOP_ONLINE, nid, start_pfn,
-					  nr_pages);
+	default_zone = zone_for_pfn_range(MMOP_ONLINE, nid, group,
+					  start_pfn, nr_pages);
 
 	len += sysfs_emit_at(buf, len, "%s", default_zone->name);
-	len += print_allowed_zone(buf, len, nid, start_pfn, nr_pages,
+	len += print_allowed_zone(buf, len, nid, group, start_pfn, nr_pages,
 				  MMOP_ONLINE_KERNEL, default_zone);
-	len += print_allowed_zone(buf, len, nid, start_pfn, nr_pages,
+	len += print_allowed_zone(buf, len, nid, group, start_pfn, nr_pages,
 				  MMOP_ONLINE_MOVABLE, default_zone);
 out:
 	len += sysfs_emit_at(buf, len, "\n");
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 23c4d369ad30..97f874a60607 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -348,7 +348,8 @@ extern void sparse_remove_section(struct mem_section *ms,
 extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map,
 					  unsigned long pnum);
 extern struct zone *zone_for_pfn_range(int online_type, int nid,
-		unsigned long start_pfn, unsigned long nr_pages);
+		struct memory_group *group, unsigned long start_pfn,
+		unsigned long nr_pages);
 extern int arch_create_linear_mapping(int nid, u64 start, u64 size,
 				      struct mhp_params *params);
 void arch_remove_linear_mapping(u64 start, u64 size);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 8d556396b5d4..93fb89efdc80 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -850,12 +850,53 @@ static struct zone *default_kernel_zone_for_pfn(int nid, unsigned long start_pfn
  * "present pages" is an upper limit that can get reached at runtime. As
  * we base our calculations on KERNEL_EARLY, this is not an issue.
  */
-static struct zone *auto_movable_zone_for_pfn(int nid, unsigned long pfn,
+static struct zone *auto_movable_zone_for_pfn(int nid,
+					      struct memory_group *group,
+					      unsigned long pfn,
 					      unsigned long nr_pages)
 {
+	unsigned long online_pages = 0, max_pages, end_pfn;
+	struct page *page;
+
 	if (!auto_movable_ratio)
 		goto kernel_zone;
 
+	if (group && !group->is_dynamic) {
+		max_pages = group->s.max_pages;
+		online_pages = group->present_movable_pages;
+
+		/* If anything is !MOVABLE online the rest !MOVABLE. */
+		if (group->present_kernel_pages)
+			goto kernel_zone;
+	} else if (!group || group->d.unit_pages == nr_pages) {
+		max_pages = nr_pages;
+	} else {
+		max_pages = group->d.unit_pages;
+		/*
+		 * Take a look at all online sections in the current unit.
+		 * We can safely assume that all pages within a section belong
+		 * to the same zone, because dynamic memory groups only deal
+		 * with hotplugged memory.
+		 */
+		pfn = ALIGN_DOWN(pfn, group->d.unit_pages);
+		end_pfn = pfn + group->d.unit_pages;
+		for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
+			page = pfn_to_online_page(pfn);
+			if (!page)
+				continue;
+			/* If anything is !MOVABLE online the rest !MOVABLE. */
+			if (page_zonenum(page) != ZONE_MOVABLE)
+				goto kernel_zone;
+			online_pages += PAGES_PER_SECTION;
+		}
+	}
+
+	/*
+	 * Online MOVABLE if we could *currently* online all remaining parts
+	 * MOVABLE. We expect to (add+) online them immediately next, so if
+	 * nobody interferes, all will be MOVABLE if possible.
+	 */
+	nr_pages = max_pages - online_pages;
 	if (!auto_movable_can_online_movable(NUMA_NO_NODE, nr_pages))
 		goto kernel_zone;
 
@@ -895,7 +936,8 @@ static inline struct zone *default_zone_for_pfn(int nid, unsigned long start_pfn
 }
 
 struct zone *zone_for_pfn_range(int online_type, int nid,
-		unsigned long start_pfn, unsigned long nr_pages)
+		struct memory_group *group, unsigned long start_pfn,
+		unsigned long nr_pages)
 {
 	if (online_type == MMOP_ONLINE_KERNEL)
 		return default_kernel_zone_for_pfn(nid, start_pfn, nr_pages);
@@ -904,7 +946,7 @@ struct zone *zone_for_pfn_range(int online_type, int nid,
 		return &NODE_DATA(nid)->node_zones[ZONE_MOVABLE];
 
 	if (online_policy == ONLINE_POLICY_AUTO_MOVABLE)
-		return auto_movable_zone_for_pfn(nid, start_pfn, nr_pages);
+		return auto_movable_zone_for_pfn(nid, group, start_pfn, nr_pages);
 
 	return default_zone_for_pfn(nid, start_pfn, nr_pages);
 }
-- 
2.31.1