Date: Fri, 26 Apr 2019 14:57:41 +0200
From: Oscar Salvador
To: Dan Williams
Cc: akpm@linux-foundation.org, Michal Hocko, Vlastimil Babka,
	Logan Gunthorpe, linux-mm@kvack.org, linux-nvdimm@lists.01.org,
	linux-kernel@vger.kernel.org, david@redhat.com
Subject: Re: [PATCH v6 03/12] mm/sparsemem: Add helpers track active portions of a section at boot
Message-ID: <20190426125741.GB28583@linux>
References: <155552633539.2015392.2477781120122237934.stgit@dwillia2-desk3.amr.corp.intel.com>
 <155552635098.2015392.5460028594173939000.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <155552635098.2015392.5460028594173939000.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed, Apr 17, 2019 at 11:39:11AM -0700, Dan Williams wrote:
> Prepare for hot{plug,remove} of sub-ranges of a section by tracking a
> section active bitmask, each bit representing 2MB (SECTION_SIZE (128M) /
> map_active bitmask length (64)). If it turns out that 2MB is too large
> of an active tracking granularity it is trivial to increase the size of
> the map_active bitmap.
>
> The implications of a partially populated section is that pfn_valid()
> needs to go beyond a valid_section() check and read the sub-section
> active ranges from the bitmask.
>
> Cc: Michal Hocko
> Cc: Vlastimil Babka
> Cc: Logan Gunthorpe
> Signed-off-by: Dan Williams

[...]

> +static unsigned long section_active_mask(unsigned long pfn,
> +		unsigned long nr_pages)
> +{
> +	int idx_start, idx_size;
> +	phys_addr_t start, size;
> +
> +	if (!nr_pages)
> +		return 0;
> +
> +	start = PFN_PHYS(pfn);
> +	size = PFN_PHYS(min(nr_pages, PAGES_PER_SECTION
> +				- (pfn & ~PAGE_SECTION_MASK)));
> +	size = ALIGN(size, SECTION_ACTIVE_SIZE);

I am probably missing something, and this is more a question than anything
else, but: is there a reason for shifting pfn and nr_pages to get the
address and the size? Could we not operate on pfns/pages directly, so we
do not have to shift every time?
(even for pfn_section_valid() calls)

Something like:

#define SUB_SECTION_ACTIVE_PAGES (SECTION_ACTIVE_SIZE / PAGE_SIZE)

static inline int section_active_index(unsigned long pfn)
{
	return (pfn & ~(PAGE_SECTION_MASK)) / SUB_SECTION_ACTIVE_PAGES;
}

(A rough, untested sketch of the whole mask computation done in pfn units
is appended at the end of this mail.)

> +
> +	idx_start = section_active_index(start);
> +	idx_size = section_active_index(size);
> +
> +	if (idx_size == 0)
> +		return -1;

What about turning that into something more intuitive?
Since -1 here represents a full section, we could define something like:

#define FULL_SECTION (-1UL)

Or a better name; it is just that I find "-1" not really easy to interpret.

> +	return ((1UL << idx_size) - 1) << idx_start;
> +}
> +
> +void section_active_init(unsigned long pfn, unsigned long nr_pages)
> +{
> +	int end_sec = pfn_to_section_nr(pfn + nr_pages - 1);
> +	int i, start_sec = pfn_to_section_nr(pfn);
> +
> +	if (!nr_pages)
> +		return;
> +
> +	for (i = start_sec; i <= end_sec; i++) {
> +		struct mem_section *ms;
> +		unsigned long mask;
> +		unsigned long pfns;
> +
> +		pfns = min(nr_pages, PAGES_PER_SECTION
> +				- (pfn & ~PAGE_SECTION_MASK));
> +		mask = section_active_mask(pfn, pfns);
> +
> +		ms = __nr_to_section(i);
> +		pr_debug("%s: sec: %d mask: %#018lx\n", __func__, i, mask);
> +		ms->usage->map_active = mask;
> +
> +		pfn += pfns;
> +		nr_pages -= pfns;
> +	}
> +}
> +
>  /* Record a memory area against a node. */
>  void __init memory_present(int nid, unsigned long start, unsigned long end)
>  {

-- 
Oscar Salvador
SUSE L3
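
Appended below is the rough, untested sketch I referred to above: the same
section_active_mask() computation done purely in pfn units, reusing the
SUB_SECTION_ACTIVE_PAGES and pfn-based section_active_index() suggested
earlier in this mail. It assumes SECTION_ACTIVE_SIZE is a power-of-two
multiple of PAGE_SIZE, and it is only meant to illustrate the idea, not to
be applied as-is:

static unsigned long section_active_mask(unsigned long pfn,
		unsigned long nr_pages)
{
	int idx_start, idx_size;
	unsigned long pfns;

	if (!nr_pages)
		return 0;

	/* Pages of the range that fall within this section. */
	pfns = min(nr_pages, PAGES_PER_SECTION
			- (pfn & ~PAGE_SECTION_MASK));
	/* Round up to a whole number of sub-sections. */
	pfns = ALIGN(pfns, SUB_SECTION_ACTIVE_PAGES);

	idx_start = section_active_index(pfn);
	idx_size = section_active_index(pfns);

	/* A fully populated section keeps the whole mask set. */
	if (idx_size == 0)
		return -1;
	return ((1UL << idx_size) - 1) << idx_start;
}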