From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton, Michal Hocko, Vlastimil Babka,
 Stephen Rothwell, Pavel Tatashin, Kemi Wang, David Rientjes, Jia He,
 Oscar Salvador, Petr Tesarik, Andrey Ryabinin, Dan Williams,
 David Hildenbrand, Mathieu Malaterre, Baoquan He, Wei Yang,
 Ross Zwisler, "Kirill A . Shutemov"
Subject: [PATCH v1 2/5] mm/memory_hotplug: enforce section alignment when onlining/offlining
Date: Thu, 16 Aug 2018 12:06:25 +0200
Message-Id: <20180816100628.26428-3-david@redhat.com>
In-Reply-To: <20180816100628.26428-1-david@redhat.com>
References: <20180816100628.26428-1-david@redhat.com>

Onlining/offlining code works on whole sections, so let's enforce that.

Existing code only allows adding memory in memory block granularity, and
only whole memory blocks can be onlined/offlined. Memory blocks are
always aligned to sections, so this should not break anything.

online_pages()/offline_pages() will implicitly mark whole sections
online/offline, so the code can really only handle such granularities.
(Especially, the offlining code cannot deal with pageblock_nr_pages
granularity, but theoretically only with MAX_ORDER - 1.)

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/memory_hotplug.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 090cf474de87..30d2fa42b0bb 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -897,6 +897,11 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages, int online_typ
 	struct memory_notify arg;
 	struct memory_block *mem;
 
+	if (!IS_ALIGNED(pfn, PAGES_PER_SECTION))
+		return -EINVAL;
+	if (!IS_ALIGNED(nr_pages, PAGES_PER_SECTION))
+		return -EINVAL;
+
 	/*
 	 * We can't use pfn_to_nid() because nid might be stored in struct page
 	 * which is not yet initialized. Instead, we find nid from memory block.
@@ -1600,10 +1605,9 @@ int offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 	struct zone *zone;
 	struct memory_notify arg;
 
-	/* at least, alignment against pageblock is necessary */
-	if (!IS_ALIGNED(start_pfn, pageblock_nr_pages))
+	if (!IS_ALIGNED(start_pfn, PAGES_PER_SECTION))
 		return -EINVAL;
-	if (!IS_ALIGNED(end_pfn, pageblock_nr_pages))
+	if (!IS_ALIGNED(nr_pages, PAGES_PER_SECTION))
 		return -EINVAL;
 	/* This makes hotplug much easier...and readable.
 	   we assume this for now. .*/
-- 
2.17.1
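
For reference, a minimal user-space sketch (not part of the patch) of the
alignment check the patch enforces. PAGES_PER_SECTION is assumed to be
32768 here (4 KiB pages with 128 MiB sections, as on x86-64), and
IS_ALIGNED is re-implemented with the same power-of-two semantics as the
kernel macro:

#include <stdio.h>
#include <stdbool.h>

/* Assumed values: 4 KiB pages, 128 MiB sections (x86-64 defaults). */
#define PAGES_PER_SECTION	32768UL
/* Same semantics as the kernel's IS_ALIGNED() for power-of-two alignment. */
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

/* Mirrors the checks added to online_pages()/offline_pages(). */
static bool range_is_section_aligned(unsigned long start_pfn,
				     unsigned long nr_pages)
{
	return IS_ALIGNED(start_pfn, PAGES_PER_SECTION) &&
	       IS_ALIGNED(nr_pages, PAGES_PER_SECTION);
}

int main(void)
{
	/* One whole section starting at a section-aligned pfn: accepted (1). */
	printf("%d\n", range_is_section_aligned(0x100000, PAGES_PER_SECTION));
	/* A single pageblock (512 pages): rejected (0), i.e. -EINVAL above. */
	printf("%d\n", range_is_section_aligned(0x100000, 512));
	return 0;
}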