From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 19 Mar 2020 16:20:25 +0000
From: Chris Down
To: qiwuchen55@gmail.com
Cc: akpm@linux-foundation.org, linux-mm@kvack.org, chenqiwu
Subject: Re: [PATCH] mm/vmscan: fix incorrect return type for cgroup_reclaim()
Message-ID: <20200319162025.GA237751@chrisdown.name>
References: <1584633026-26288-1-git-send-email-qiwuchen55@gmail.com>
In-Reply-To: <1584633026-26288-1-git-send-email-qiwuchen55@gmail.com>

qiwuchen55@gmail.com writes:
>From: chenqiwu
>
>The return type of cgroup_reclaim() is bool, but the correct type
>should be struct mem_cgroup pointer. As a result, cgroup_reclaim()
>can be used to wrap sc->target_mem_cgroup in vmscan code.

The code changes look okay to me, but the changelog and patch subject
could do with some improvement. For example, the type isn't "incorrect"
before and "correct" now, per se, it's just coercing from struct
mem_cgroup * to bool. Could you make it clearer what your intent is in
the patch? If it's that you found the code confusing, you can just
write something like:

mm/vmscan: return target_mem_cgroup from cgroup_reclaim

Previously the code split apart the action of checking whether we are
in cgroup reclaim from getting the target memory cgroup for that
reclaim. This split is confusing and seems unnecessary, since
cgroup_reclaim itself only checks whether sc->target_mem_cgroup is NULL
or not, so merge the two use cases into one by returning the
target_mem_cgroup itself from cgroup_reclaim.
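
To make the coercion point concrete, here is a minimal userspace sketch
of the before/after (simplified stand-in types and hypothetical
_before/_after names for illustration, not the actual kernel code):

    #include <stdbool.h>

    /* Simplified stand-ins for the kernel structures; illustration only. */
    struct mem_cgroup { int id; };
    struct scan_control { struct mem_cgroup *target_mem_cgroup; };

    /* Before: the pointer is implicitly coerced to bool, so callers can
     * only ask "are we in cgroup reclaim?". */
    static bool cgroup_reclaim_before(struct scan_control *sc)
    {
            return sc->target_mem_cgroup;
    }

    /* After: the pointer itself comes back; NULL still tests false, so
     * existing truth-value checks keep working, and callers can also
     * use the returned memcg directly. */
    static struct mem_cgroup *cgroup_reclaim_after(struct scan_control *sc)
    {
            return sc->target_mem_cgroup;
    }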
>Signed-off-by: chenqiwu

After improving the patch title and summary, you can add:

Acked-by: Chris Down

>---
> mm/vmscan.c | 39 +++++++++++++++++++++------------------
> 1 file changed, 21 insertions(+), 18 deletions(-)
>
>diff --git a/mm/vmscan.c b/mm/vmscan.c
>index dca623d..c795fc3 100644
>--- a/mm/vmscan.c
>+++ b/mm/vmscan.c
>@@ -238,7 +238,7 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker)
>         up_write(&shrinker_rwsem);
> }
>
>-static bool cgroup_reclaim(struct scan_control *sc)
>+static struct mem_cgroup *cgroup_reclaim(struct scan_control *sc)
> {
>         return sc->target_mem_cgroup;
> }
>@@ -276,9 +276,9 @@ static void unregister_memcg_shrinker(struct shrinker *shrinker)
> {
> }
>
>-static bool cgroup_reclaim(struct scan_control *sc)
>+static struct mem_cgroup *cgroup_reclaim(struct scan_control *sc)
> {
>-        return false;
>+        return NULL;
> }
>
> static bool writeback_throttling_sane(struct scan_control *sc)
>@@ -984,7 +984,7 @@ static enum page_references page_check_references(struct page *page,
>         int referenced_ptes, referenced_page;
>         unsigned long vm_flags;
>
>-        referenced_ptes = page_referenced(page, 1, sc->target_mem_cgroup,
>+        referenced_ptes = page_referenced(page, 1, cgroup_reclaim(sc),
>                                           &vm_flags);
>         referenced_page = TestClearPageReferenced(page);
>
>@@ -1422,7 +1422,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>                         count_vm_event(PGLAZYFREED);
>                         count_memcg_page_event(page, PGLAZYFREED);
>                 } else if (!mapping || !__remove_mapping(mapping, page, true,
>-                                                         sc->target_mem_cgroup))
>+                                                         cgroup_reclaim(sc)))
>                         goto keep_locked;
>
>                 unlock_page(page);
>@@ -1907,6 +1907,7 @@ static int current_may_throttle(void)
>         enum vm_event_item item;
>         struct pglist_data *pgdat = lruvec_pgdat(lruvec);
>         struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
>+        struct mem_cgroup *target_memcg = cgroup_reclaim(sc);
>         bool stalled = false;
>
>         while (unlikely(too_many_isolated(pgdat, file, sc))) {
>@@ -1933,7 +1934,7 @@ static int current_may_throttle(void)
>         reclaim_stat->recent_scanned[file] += nr_taken;
>
>         item = current_is_kswapd() ? PGSCAN_KSWAPD : PGSCAN_DIRECT;
>-        if (!cgroup_reclaim(sc))
>+        if (!target_memcg)
>                 __count_vm_events(item, nr_scanned);
>         __count_memcg_events(lruvec_memcg(lruvec), item, nr_scanned);
>         spin_unlock_irq(&pgdat->lru_lock);
>@@ -1947,7 +1948,7 @@ static int current_may_throttle(void)
>         spin_lock_irq(&pgdat->lru_lock);
>
>         item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
>-        if (!cgroup_reclaim(sc))
>+        if (!target_memcg)
>                 __count_vm_events(item, nr_reclaimed);
>         __count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
>         reclaim_stat->recent_rotated[0] += stat.nr_activate[0];
>@@ -2041,7 +2042,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
>                         }
>                 }
>
>-                if (page_referenced(page, 0, sc->target_mem_cgroup,
>+                if (page_referenced(page, 0, cgroup_reclaim(sc),
>                                     &vm_flags)) {
>                         nr_rotated += hpage_nr_pages(page);
>                         /*
>@@ -2625,7 +2626,7 @@ static inline bool should_continue_reclaim(struct pglist_data *pgdat,
>
> static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
> {
>-        struct mem_cgroup *target_memcg = sc->target_mem_cgroup;
>+        struct mem_cgroup *target_memcg = cgroup_reclaim(sc);
>         struct mem_cgroup *memcg;
>
>         memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
>@@ -2686,10 +2687,11 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>         struct reclaim_state *reclaim_state = current->reclaim_state;
>         unsigned long nr_reclaimed, nr_scanned;
>         struct lruvec *target_lruvec;
>+        struct mem_cgroup *target_memcg = cgroup_reclaim(sc);
>         bool reclaimable = false;
>         unsigned long file;
>
>-        target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
>+        target_lruvec = mem_cgroup_lruvec(target_memcg, pgdat);
>
> again:
>         memset(&sc->nr, 0, sizeof(sc->nr));
>@@ -2744,7 +2746,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>          * thrashing file LRU becomes infinitely more attractive than
>          * anon pages. Try to detect this based on file LRU size.
>          */
>-        if (!cgroup_reclaim(sc)) {
>+        if (!target_memcg) {
>                 unsigned long total_high_wmark = 0;
>                 unsigned long free, anon;
>                 int z;
>@@ -2782,7 +2784,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>         }
>
>         /* Record the subtree's reclaim efficiency */
>-        vmpressure(sc->gfp_mask, sc->target_mem_cgroup, true,
>+        vmpressure(sc->gfp_mask, target_memcg, true,
>                    sc->nr_scanned - nr_scanned,
>                    sc->nr_reclaimed - nr_reclaimed);
>
>@@ -2833,7 +2835,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>          * stalling in wait_iff_congested().
>          */
>         if ((current_is_kswapd() ||
>-             (cgroup_reclaim(sc) && writeback_throttling_sane(sc))) &&
>+             (target_memcg && writeback_throttling_sane(sc))) &&
>             sc->nr.dirty && sc->nr.dirty == sc->nr.congested)
>                 set_bit(LRUVEC_CONGESTED, &target_lruvec->flags);
>
>@@ -3020,14 +3022,15 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
>         pg_data_t *last_pgdat;
>         struct zoneref *z;
>         struct zone *zone;
>+        struct mem_cgroup *target_memcg = cgroup_reclaim(sc);
> retry:
>         delayacct_freepages_start();
>
>-        if (!cgroup_reclaim(sc))
>+        if (!target_memcg)
>                 __count_zid_vm_events(ALLOCSTALL, sc->reclaim_idx, 1);
>
>         do {
>-                vmpressure_prio(sc->gfp_mask, sc->target_mem_cgroup,
>+                vmpressure_prio(sc->gfp_mask, target_memcg,
>                                 sc->priority);
>                 sc->nr_scanned = 0;
>                 shrink_zones(zonelist, sc);
>@@ -3053,12 +3056,12 @@ static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
>                         continue;
>                 last_pgdat = zone->zone_pgdat;
>
>-                snapshot_refaults(sc->target_mem_cgroup, zone->zone_pgdat);
>+                snapshot_refaults(target_memcg, zone->zone_pgdat);
>
>-                if (cgroup_reclaim(sc)) {
>+                if (target_memcg) {
>                         struct lruvec *lruvec;
>
>-                        lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup,
>+                        lruvec = mem_cgroup_lruvec(target_memcg,
>                                                    zone->zone_pgdat);
>                         clear_bit(LRUVEC_CONGESTED, &lruvec->flags);
>                 }
>--
>1.9.1
>
>
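
For illustration, a hypothetical caller of the merged helper sketched
earlier would collapse the check-then-fetch pattern into a single call
(again an illustrative sketch reusing the stand-in types above, not a
real vmscan function):

    /* One call both answers "are we in cgroup reclaim?" and yields the
     * target memcg, instead of a cgroup_reclaim() check followed by a
     * separate read of sc->target_mem_cgroup. */
    static void reclaim_example(struct scan_control *sc)
    {
            struct mem_cgroup *target_memcg = cgroup_reclaim_after(sc);

            if (!target_memcg) {
                    /* Global (non-cgroup) reclaim path. */
                    return;
            }

            /* Cgroup reclaim path: use target_memcg directly. */
    }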