From: Vladimir Davydov <vdavydov@parallels.com>
To: Michal Hocko <mhocko@suse.cz>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Vlastimil Babka <vbabka@suse.cz>, Mel Gorman <mgorman@suse.de>,
	Rik van Riel <riel@redhat.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH -mm v2] vmscan: move reclaim_state handling to shrink_slab
Date: Thu, 15 Jan 2015 20:07:26 +0300	[thread overview]
Message-ID: <20150115170726.GH11264@esperanza> (raw)
In-Reply-To: <20150115144838.GI7000@dhcp22.suse.cz>

On Thu, Jan 15, 2015 at 03:48:38PM +0100, Michal Hocko wrote:
> On Thu 15-01-15 16:25:16, Vladimir Davydov wrote:
> > 		memcg = mem_cgroup_iter(root, NULL, &reclaim);
> > 		do {
> > 			[...]
> > 			if (memcg && is_classzone)
> > 				shrink_slab(sc->gfp_mask, zone_to_nid(zone),
> > 					    memcg, sc->nr_scanned - scanned,
> > 					    lru_pages);
> > 
> > 			/*
> > 			 * Direct reclaim and kswapd have to scan all memory
> > 			 * cgroups to fulfill the overall scan target for the
> > 			 * zone.
> > 			 *
> > 			 * Limit reclaim, on the other hand, only cares about
> > 			 * nr_to_reclaim pages to be reclaimed and it will
> > 			 * retry with decreasing priority if one round over the
> > 			 * whole hierarchy is not sufficient.
> > 			 */
> > 			if (!global_reclaim(sc) &&
> > 					sc->nr_reclaimed >= sc->nr_to_reclaim) {
> > 				mem_cgroup_iter_break(root, memcg);
> > 				break;
> > 			}
> > 			memcg = mem_cgroup_iter(root, memcg, &reclaim);
> > 		} while (memcg);
> > 
> > 
> > If we can ignore reclaimed slab pages here (?), let's drop this patch.
> 
> I see what you are trying to achieve but can this lead to a serious
> over-reclaim?

I think it can, but only if we shrink an inode with lots of pages
attached to its address space (those pages are also counted in
reclaim_state). In that case we over-reclaim anyway, though.
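
For reference, the path I have in mind is the inode shrinker: when
prune_icache_sb() isolates an inode that still has page cache attached,
it invalidates the whole mapping and credits the freed pages to
reclaim_state. Roughly (paraphrasing fs/inode.c::inode_lru_isolate()
from memory, so the exact lines may differ):

	if (inode_has_buffers(inode) || inode->i_data.nrpages) {
		...
		if (remove_inode_buffers(inode)) {
			unsigned long reap;

			/* drop the page cache attached to this inode */
			reap = invalidate_mapping_pages(&inode->i_data, 0, -1);
			...
			/* freed page cache pages are accounted as "slab" */
			if (current->reclaim_state)
				current->reclaim_state->reclaimed_slab += reap;
		}
		...
	}

So pruning a single bloated inode already frees far more pages than
nr_to_reclaim in one go, which is why I don't think accounting it in
sc->nr_reclaimed would save us from over-reclaim in that scenario.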

I agree that this is too high a risk for such a vague benefit. Let's
drop the patch until we see this problem in real life.
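
FWIW, if it ever turns out we do need the accounting, a minimal
(untested) way to get it without resurrecting the patch would be to
fold reclaim_state into sc->nr_reclaimed right after the per-memcg
shrink_slab() call in the loop quoted above, so that the nr_to_reclaim
check sees slab pages too, e.g.:

	if (memcg && is_classzone) {
		shrink_slab(sc->gfp_mask, zone_to_nid(zone),
			    memcg, sc->nr_scanned - scanned,
			    lru_pages);
		/*
		 * Credit pages freed by the shrinkers to this reclaim
		 * run so that limit reclaim can bail out once
		 * nr_to_reclaim is met.
		 */
		if (current->reclaim_state) {
			sc->nr_reclaimed +=
				current->reclaim_state->reclaimed_slab;
			current->reclaim_state->reclaimed_slab = 0;
		}
	}

That assumes current->reclaim_state is actually set on the memcg
reclaim path, which would need checking.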

Thanks,
Vladimir

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .

Thread overview: 8+ messages
2015-01-15  8:37 [PATCH -mm v2] vmscan: move reclaim_state handling to shrink_slab Vladimir Davydov
2015-01-15 12:58 ` Michal Hocko
2015-01-15 13:25   ` Vladimir Davydov
2015-01-15 14:48     ` Michal Hocko
2015-01-15 17:07       ` Vladimir Davydov [this message]
2015-01-20  7:35       ` Paul E. McKenney
2015-01-20 10:11         ` Michal Hocko
