From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754542AbcGHJgd (ORCPT );
	Fri, 8 Jul 2016 05:36:33 -0400
Received: from outbound-smtp02.blacknight.com ([81.17.249.8]:60427 "EHLO
	outbound-smtp02.blacknight.com" rhost-flags-OK-OK-OK-OK)
	by vger.kernel.org with ESMTP id S1754305AbcGHJgF (ORCPT );
	Fri, 8 Jul 2016 05:36:05 -0400
From: Mel Gorman 
To: Andrew Morton , Linux-MM 
Cc: Rik van Riel , Vlastimil Babka , Johannes Weiner ,
	Minchan Kim , Joonsoo Kim , LKML , Mel Gorman 
Subject: [PATCH 04/34] mm, mmzone: clarify the usage of zone padding
Date: Fri, 8 Jul 2016 10:34:40 +0100
Message-Id: <1467970510-21195-5-git-send-email-mgorman@techsingularity.net>
X-Mailer: git-send-email 2.6.4
In-Reply-To: <1467970510-21195-1-git-send-email-mgorman@techsingularity.net>
References: <1467970510-21195-1-git-send-email-mgorman@techsingularity.net>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Zone padding separates write-intensive fields used by page allocation,
compaction and vmstats but the comments are a little misleading and need
clarification.

Signed-off-by: Mel Gorman 
---
 include/linux/mmzone.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index d4f5cac0a8c3..edafdaf62e90 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -477,20 +477,21 @@ struct zone {
 	unsigned long		wait_table_hash_nr_entries;
 	unsigned long		wait_table_bits;
 
+	/* Write-intensive fields used from the page allocator */
 	ZONE_PADDING(_pad1_)
+
 	/* free areas of different sizes */
 	struct free_area	free_area[MAX_ORDER];
 
 	/* zone flags, see below */
 	unsigned long		flags;
 
-	/* Write-intensive fields used from the page allocator */
+	/* Primarily protects free_area */
 	spinlock_t		lock;
 
+	/* Write-intensive fields used by compaction and vmstats. */
 	ZONE_PADDING(_pad2_)
 
-	/* Write-intensive fields used by page reclaim */
-
 	/*
 	 * When free pages are below this point, additional steps are taken
 	 * when reading the number of free pages to avoid per-cpu counter
-- 
2.6.4