From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 27 Feb 2018 11:12:44 +0800
From: Aaron Lu
Subject: Re: [PATCH v3 1/3] mm/free_pcppages_bulk: update pcp->count inside
Message-ID: <20180227031244.GA28977@intel.com>
References: <20180226135346.7208-1-aaron.lu@intel.com>
 <20180226135346.7208-2-aaron.lu@intel.com>
 <20180227015613.GA9141@intel.com>
In-Reply-To: <20180227015613.GA9141@intel.com>
To: David Rientjes
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton,
 Huang Ying, Dave Hansen, Kemi Wang, Tim Chen, Andi Kleen, Michal Hocko,
 Vlastimil Babka, Mel Gorman, Matthew Wilcox

On Tue, Feb 27, 2018 at 09:56:13AM +0800, Aaron Lu wrote:
> On Mon, Feb 26, 2018 at 01:48:14PM -0800, David Rientjes wrote:
> > On Mon, 26 Feb 2018, Aaron Lu wrote:
> >
> > > Matthew Wilcox found that all callers of free_pcppages_bulk() currently
> > > update pcp->count immediately after it returns, so it's natural to do it
> > > inside free_pcppages_bulk().
> > >
> > > No functionality or performance change is expected from this patch.
> > >
> > > Suggested-by: Matthew Wilcox
> > > Signed-off-by: Aaron Lu
> > > ---
> > >  mm/page_alloc.c | 10 +++-------
> > >  1 file changed, 3 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index cb416723538f..3154859cccd6 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -1117,6 +1117,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> > >  	int batch_free = 0;
> > >  	bool isolated_pageblocks;
> > >
> > > +	pcp->count -= count;
> > >  	spin_lock(&zone->lock);
> > >  	isolated_pageblocks = has_isolate_pageblock(zone);
> > >
> >
> > Why modify pcp->count before the pages have actually been freed?
>
> Because count still holds its value, rather than zero, after the pages
> have actually been freed :-)
>
> > I doubt that it matters too much, but at least /proc/zoneinfo uses
> > zone->lock.  I think it should be done after the lock is dropped.
>
> Agree that it looks a bit weird to do it beforehand; I just wanted to
> avoid adding one more local variable here.
>
> pcp->count is not protected by zone->lock though, so even if we do it
> after dropping the lock, it could still happen that zoneinfo shows a
> wrong value of pcp->count when it should be zero (this isn't a problem
> since zoneinfo doesn't need to be precise).
>
> Anyway, I'll follow your suggestion here to avoid confusion.

What about this: update pcp->count when a page is dropped off the pcp list.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cb416723538f..faa33eac1635 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1148,6 +1148,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 			page = list_last_entry(list, struct page, lru);
 			/* must delete as __free_one_page list manipulates */
 			list_del(&page->lru);
+			pcp->count--;
 			mt = get_pcppage_migratetype(page);
 			/* MIGRATE_ISOLATE page should not go to pcplists */
@@ -2416,10 +2417,8 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
 	local_irq_save(flags);
 	batch = READ_ONCE(pcp->batch);
 	to_drain = min(pcp->count, batch);
-	if (to_drain > 0) {
+	if (to_drain > 0)
 		free_pcppages_bulk(zone, to_drain, pcp);
-		pcp->count -= to_drain;
-	}
 	local_irq_restore(flags);
 }
 #endif
@@ -2441,10 +2440,8 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone)
 	pset = per_cpu_ptr(zone->pageset, cpu);
 	pcp = &pset->pcp;
-	if (pcp->count) {
+	if (pcp->count)
 		free_pcppages_bulk(zone, pcp->count, pcp);
-		pcp->count = 0;
-	}
 	local_irq_restore(flags);
 }
@@ -2668,7 +2665,6 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
 	if (pcp->count >= pcp->high) {
 		unsigned long batch = READ_ONCE(pcp->batch);
 		free_pcppages_bulk(zone, batch, pcp);
-		pcp->count -= batch;
 	}
 }
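The bookkeeping change above can be sketched in plain userspace C. This is a hypothetical, heavily simplified model (the struct names mirror the kernel's but the types and list handling are invented for illustration, and there is no buddy allocator or locking): the point is only that free_pcppages_bulk() decrements pcp->count itself as each page comes off the list, so callers need no fixup afterwards.

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins for the kernel structures; not kernel code. */
struct page {
	struct page *next;
};

struct per_cpu_pages {
	int count;          /* number of pages on the free list */
	struct page *list;  /* singly linked free list */
};

/*
 * After the patch: the bulk-free path updates pcp->count as each page
 * is dropped off the pcp list, instead of every caller adjusting it.
 */
static void free_pcppages_bulk(struct per_cpu_pages *pcp, int todo)
{
	while (todo-- > 0 && pcp->list) {
		struct page *page = pcp->list;

		pcp->list = page->next;	/* delete from the pcp list */
		pcp->count--;		/* bookkeeping moved inside */
		/* ... hand the page back to the buddy allocator ... */
	}
}
```

With this shape, the drain paths reduce to a bare `free_pcppages_bulk()` call, matching the three hunks in the diff above that delete the `pcp->count -= ...` / `pcp->count = 0` lines from the callers.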