Subject: Re: [PATCH v2 1/9] mm/compaction: skip useless pfn when updating cached pfn
To: Joonsoo Kim, Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Mel Gorman, Rik van Riel, David Rientjes, Minchan Kim, Joonsoo Kim
From: Vlastimil Babka
Message-ID: <55DADEC0.5030800@suse.cz>
Date: Mon, 24 Aug 2015 11:07:12 +0200
In-Reply-To: <1440382773-16070-2-git-send-email-iamjoonsoo.kim@lge.com>
References: <1440382773-16070-1-git-send-email-iamjoonsoo.kim@lge.com> <1440382773-16070-2-git-send-email-iamjoonsoo.kim@lge.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 08/24/2015 04:19 AM, Joonsoo Kim wrote:
> The cached pfn is used to determine the start position of the scanner
> at the next compaction run. The current cached pfn points to the skipped
> pageblock, so we uselessly check whether the pageblock is valid for
> compaction and whether its skip bit is set. If we set the scanner's
> cached pfn to the next pfn after the skipped pageblock, we don't need
> to do this check.
>
> Signed-off-by: Joonsoo Kim
> ---
>  mm/compaction.c | 13 ++++++-------
>  1 file changed, 6 insertions(+), 7 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 6ef2fdf..c2d3d6a 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -261,10 +261,9 @@ void reset_isolation_suitable(pg_data_t *pgdat)
>   */
>  static void update_pageblock_skip(struct compact_control *cc,
>  			struct page *page, unsigned long nr_isolated,
> -			bool migrate_scanner)
> +			unsigned long pfn, bool migrate_scanner)
>  {
>  	struct zone *zone = cc->zone;
> -	unsigned long pfn;
>
>  	if (cc->ignore_skip_hint)
>  		return;
> @@ -277,8 +276,6 @@ static void update_pageblock_skip(struct compact_control *cc,
>
>  	set_pageblock_skip(page);
>
> -	pfn = page_to_pfn(page);
> -
>  	/* Update where async and sync compaction should restart */
>  	if (migrate_scanner) {
>  		if (pfn > zone->compact_cached_migrate_pfn[0])
> @@ -300,7 +297,7 @@ static inline bool isolation_suitable(struct compact_control *cc,
>
>  static void update_pageblock_skip(struct compact_control *cc,
>  			struct page *page, unsigned long nr_isolated,
> -			bool migrate_scanner)
> +			unsigned long pfn, bool migrate_scanner)
>  {
>  }
>  #endif /* CONFIG_COMPACTION */
> @@ -509,7 +506,8 @@ isolate_fail:
>
>  	/* Update the pageblock-skip if the whole pageblock was scanned */
>  	if (blockpfn == end_pfn)
> -		update_pageblock_skip(cc, valid_page, total_isolated, false);
> +		update_pageblock_skip(cc, valid_page, total_isolated,
> +				      end_pfn, false);

In isolate_freepages_block() this means we actually go logically *back* one pageblock, as the direction is opposite? I know it's not an issue after the redesign patch, so you wouldn't notice it when testing the whole series. But there's a non-zero chance that the smaller fixes are merged first and the redesign later...

>
>  	count_compact_events(COMPACTFREE_SCANNED, nr_scanned);
>  	if (total_isolated)
> @@ -811,7 +809,8 @@ isolate_success:
>  	 * if the whole pageblock was scanned without isolating any page.
>  	 */
>  	if (low_pfn == end_pfn)
> -		update_pageblock_skip(cc, valid_page, nr_isolated, true);
> +		update_pageblock_skip(cc, valid_page, nr_isolated,
> +				      end_pfn, true);
>
>  	trace_mm_compaction_isolate_migratepages(start_pfn, low_pfn,
>  						nr_scanned, nr_isolated);
>