Date: Thu, 28 Nov 2019 14:52:36 +0100
From: Oscar Salvador
To: David Hildenbrand
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Andrew Morton, Michal Hocko
Subject: Re: [PATCH v1] mm/memory_hotplug: don't check the nid in find_(smallest|biggest)_section_pfn
Message-ID: <20191128135231.GA10554@linux>
References: <20191127174158.28226-1-david@redhat.com>
In-Reply-To: <20191127174158.28226-1-david@redhat.com>

On Wed, Nov 27, 2019 at 06:41:58PM +0100, David Hildenbrand wrote:
> Now that we always check against a zone, we can stop checking against
> the nid, it is implicitly covered by the zone.
>
> Cc: Andrew Morton
> Cc: Michal Hocko
> Cc: Oscar Salvador
> Signed-off-by: David Hildenbrand

Maybe the check was in place to guard against the "assumption" that a zone
can span multiple nodes. Hotplug code was full of those hardcoded
assumptions (like working with holes and whatnot).

Anyway, this looks like the right thing to do, and thanks for the previous
fixes/cleanups.
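For reference, a minimal sketch (not from the patch; the helper name is made
up) of why the zone comparison already subsumes the nid comparison: a struct
zone is embedded in a single pglist_data (node), so zone_to_nid() is fixed
for a given zone.

/*
 * Sketch only: for an online page, page_to_nid(page) equals
 * zone_to_nid(page_zone(page)), so once page_zone(page) == zone holds,
 * a separate node check adds nothing.
 */
static inline bool pfn_matches_zone(unsigned long pfn, struct zone *zone)
{
	struct page *page = pfn_to_page(pfn);

	/* The zone match implies the node match. */
	return page_zone(page) == zone;
}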
Reviewed-by: Oscar Salvador

> ---
>  mm/memory_hotplug.c | 23 ++++++++---------------
>  1 file changed, 8 insertions(+), 15 deletions(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 46b2e056a43f..602f753c662c 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -344,17 +344,14 @@ int __ref __add_pages(int nid, unsigned long pfn, unsigned long nr_pages,
>  }
>  
>  /* find the smallest valid pfn in the range [start_pfn, end_pfn) */
> -static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
> -					       unsigned long start_pfn,
> -					       unsigned long end_pfn)
> +static unsigned long find_smallest_section_pfn(struct zone *zone,
> +					       unsigned long start_pfn,
> +					       unsigned long end_pfn)
>  {
>  	for (; start_pfn < end_pfn; start_pfn += PAGES_PER_SUBSECTION) {
>  		if (unlikely(!pfn_to_online_page(start_pfn)))
>  			continue;
>  
> -		if (unlikely(pfn_to_nid(start_pfn) != nid))
> -			continue;
> -
>  		if (zone != page_zone(pfn_to_page(start_pfn)))
>  			continue;
>  
> @@ -365,9 +362,9 @@ static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
>  }
>  
>  /* find the biggest valid pfn in the range [start_pfn, end_pfn). */
> -static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
> -					      unsigned long start_pfn,
> -					      unsigned long end_pfn)
> +static unsigned long find_biggest_section_pfn(struct zone *zone,
> +					      unsigned long start_pfn,
> +					      unsigned long end_pfn)
>  {
>  	unsigned long pfn;
>  
> @@ -377,9 +374,6 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
>  		if (unlikely(!pfn_to_online_page(pfn)))
>  			continue;
>  
> -		if (unlikely(pfn_to_nid(pfn) != nid))
> -			continue;
> -
>  		if (zone != page_zone(pfn_to_page(pfn)))
>  			continue;
>  
> @@ -393,7 +387,6 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
>  			     unsigned long end_pfn)
>  {
>  	unsigned long pfn;
> -	int nid = zone_to_nid(zone);
>  
>  	zone_span_writelock(zone);
>  	if (zone->zone_start_pfn == start_pfn) {
> @@ -403,7 +396,7 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
>  		 * In this case, we find second smallest valid mem_section
>  		 * for shrinking zone.
>  		 */
> -		pfn = find_smallest_section_pfn(nid, zone, end_pfn,
> +		pfn = find_smallest_section_pfn(zone, end_pfn,
>  						zone_end_pfn(zone));
>  		if (pfn) {
>  			zone->spanned_pages = zone_end_pfn(zone) - pfn;
> @@ -419,7 +412,7 @@ static void shrink_zone_span(struct zone *zone, unsigned long start_pfn,
>  		 * In this case, we find second biggest valid mem_section for
>  		 * shrinking zone.
>  		 */
> -		pfn = find_biggest_section_pfn(nid, zone, zone->zone_start_pfn,
> +		pfn = find_biggest_section_pfn(zone, zone->zone_start_pfn,
>  					       start_pfn);
>  		if (pfn)
>  			zone->spanned_pages = pfn - zone->zone_start_pfn + 1;
> -- 
> 2.21.0
> 

-- 
Oscar Salvador
SUSE L3