Date: Wed, 13 Jul 2016 15:50:39 +1000
From: Balbir Singh <bsingharora@gmail.com>
To: Mel Gorman <mgorman@techsingularity.net>
Cc: Balbir Singh, Andrew Morton, Linux-MM, Rik van Riel, Vlastimil Babka,
	Johannes Weiner, Minchan Kim, Joonsoo Kim, LKML
Subject: Re: [PATCH 02/34] mm, vmscan: move lru_lock to the node
Message-ID: <20160713055039.GA23860@350D>
Reply-To: bsingharora@gmail.com
References: <1467970510-21195-1-git-send-email-mgorman@techsingularity.net>
 <1467970510-21195-3-git-send-email-mgorman@techsingularity.net>
 <20160712110604.GA5981@350D>
 <20160712111805.GD9806@techsingularity.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20160712111805.GD9806@techsingularity.net>
User-Agent: Mutt/1.5.24 (2015-08-30)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Jul 12, 2016 at 12:18:05PM +0100, Mel Gorman wrote:
> On Tue, Jul 12, 2016 at 09:06:04PM +1000, Balbir Singh wrote:
> > > diff --git a/Documentation/cgroup-v1/memory.txt b/Documentation/cgroup-v1/memory.txt
> > > index b14abf217239..946e69103cdd 100644
> > > --- a/Documentation/cgroup-v1/memory.txt
> > > +++ b/Documentation/cgroup-v1/memory.txt
> > > @@ -267,11 +267,11 @@ When oom event notifier is registered, event will be delivered.
> > >  Other lock order is following:
> > >  PG_locked.
> > >  mm->page_table_lock
> > > - zone->lru_lock
> > > + zone_lru_lock
> >
> > zone_lru_lock is a little confusing, can't we just call it
> > node_lru_lock?
> >
>
> It's a matter of perspective. People familiar with the VM already expect
> a zone lock so will be looking for it. I can do a rename if you insist
> but it may not actually help.

I don't want to insist, but zone_ in the name can be confusing, as it
leads us to think that the lru_lock is still in the zone. If the rest
of the reviewers are fine with it, we don't need to rename.

> > > @@ -496,7 +496,6 @@ struct zone {
> > >  	/* Write-intensive fields used by page reclaim */
> > >
> > >  	/* Fields commonly accessed by the page reclaim scanner */
> > > -	spinlock_t lru_lock;
> > >  	struct lruvec lruvec;
> > >
> > >  	/*
> > > @@ -690,6 +689,9 @@ typedef struct pglist_data {
> > >  	/* Number of pages migrated during the rate limiting time interval */
> > >  	unsigned long numabalancing_migrate_nr_pages;
> > >  #endif
> > > +	/* Write-intensive fields used by page reclaim */
> > > +	ZONE_PADDING(_pad1_)
> >
> > I thought this was to have zone->lock and zone->lru_lock in different
> > cachelines, do we still need the padding here?
> >
>
> The zone padding currently keeps the page lock wait tables, page allocator
> lists, compaction and vmstats on separate cache lines. They're still
> fine.
>
> The node padding may not be necessary. It currently ensures that zonelists
> and numa balancing are separate from the LRU lock but there is no guarantee
> the current arrangement is optimal. It would depend on both the kernel
> config and the workload but it may be necessary in the future to split
> node into read-mostly sections and then different write-intensive sections
> similar to what has happened to struct zone in the past.
>

Fair enough

Balbir Singh.
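
A note on the naming question above: the series moves the lock itself into
the per-node pglist_data while keeping a zone_-prefixed accessor, which is
what the rename discussion is about. A minimal sketch of that arrangement
(field placement here is illustrative, not the exact patch layout):

	typedef struct pglist_data {
		/* ... read-mostly fields: node_zones, zonelists, ... */

		ZONE_PADDING(_pad1_)
		/* Write-intensive fields used by page reclaim */
		spinlock_t lru_lock;
		/* ... */
	} pg_data_t;

	/* The zone_-prefixed name survives only as a forwarding helper,
	 * which is why it can read as if the lock were still per-zone. */
	static inline spinlock_t *zone_lru_lock(struct zone *zone)
	{
		return &zone->zone_pgdat->lru_lock;
	}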
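
On the padding question: ZONE_PADDING is, on SMP configs, a zero-sized
cacheline-aligned member, so placing one between two groups of fields forces
them onto separate cache lines; dropping it changes no behaviour, only which
fields contend for a cache line under write-heavy reclaim. A sketch modeled
on the definition in include/linux/mmzone.h:

	#if defined(CONFIG_SMP)
	struct zone_padding {
		char x[0];		/* occupies no space by itself */
	} ____cacheline_internodealigned_in_smp;
	#define ZONE_PADDING(name)	struct zone_padding name;
	#else
	#define ZONE_PADDING(name)	/* no-op on uniprocessor builds */
	#endif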