cgroups.vger.kernel.org archive mirror
From: Michal Hocko <mhocko-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>
To: Chris Down <chris-6Bi1550iOqEnzZ6mRAm98g@public.gmane.org>
Cc: Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>,
	Andrew Morton
	<akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org>,
	Tejun Heo <tj-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org,
	cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	kernel-team-b10kYP2dOMg@public.gmane.org
Subject: Re: [PATCH] mm, memcg: reclaim more aggressively before high allocator throttling
Date: Fri, 29 May 2020 09:31:18 +0200	[thread overview]
Message-ID: <20200529073118.GE4406@dhcp22.suse.cz> (raw)
In-Reply-To: <20200528164848.GB839178-6Bi1550iOqEnzZ6mRAm98g@public.gmane.org>

On Thu 28-05-20 17:48:48, Chris Down wrote:
> Michal Hocko writes:
> > > We send a simple bug fix: bring this instance of reclaim in line with
> > > how everybody else is using the reclaim API, to meet the semantics as
> > > they are intended and documented.
> > 
> > Here is where we are not on the same page though. Once you have identified
> > that the main problem is that the reclaim fails too early to meet the
> > target then the fix would be to enforce that target. I have asked why
> > this hasn't been done and haven't got any real answer for that. Instead
> > what you call "a simple bug fix" has larger consequences which are not
> > really explained in the changelog and they are also not really trivial
> > to see. If the changelog explicitly stated that the proportional memory
> > reclaim is not sufficient because XYZ and the implementation has been
> > changed to instead meet the high limit target then this would be a
> > completely different story and I believe we could have saved some
> > discussion.
> 
> I agree that the changelog can be made more clear. Any objection if I send
> v2 with changelog changes to that effect, then? :-)

Yes, please. And I would highly appreciate having the above addressed,
so that we do not have to scratch our heads over why a particular
design decision was made, or argue about the thinking behind it.

> > > And somehow this is controversial, and we're just changing around user
> > > promises as we see fit for our particular usecase?
> > > 
> > > I don't even understand how the supposed alternate semantics you read
> > > between the lines in the documentation would make for a useful
> > > feature: It may fail to contain a group of offending tasks to the
> > > configured limit, but it will be fair to those tasks while doing so?
> > > 
> > > > But if your really want to push this through then let's do it
> > > > properly at least. memcg->memcg_nr_pages_over_high has only very
> > > > vague meaning if the reclaim target is the high limit.
> > > 
> > > task->memcg_nr_pages_over_high is not vague, it's a best-effort
> > > mechanism to distribute fairness. It's the current task's share of the
> > > cgroup's overage, and it allows us in the majority of situations to
> > > distribute reclaim work and sleeps in proportion to how much the task
> > > is actually at fault.
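[As a toy illustration of that proportional-share idea -- this is not
kernel code, and the task names and page counts are made up:]

```python
# Toy model of proportional reclaim distribution, loosely analogous to
# task->memcg_nr_pages_over_high. Not kernel code; purely illustrative.

def record_overage(overages, task, nr_pages):
    """Each allocating task accumulates its own share of the cgroup's
    overage at charge time."""
    overages[task] = overages.get(task, 0) + nr_pages

def reclaim_shares(overages):
    """Each task then does reclaim work (and sleeps) in proportion to
    how much of the overage it caused."""
    total = sum(overages.values())
    return {task: over / total for task, over in overages.items()}

overages = {}
record_overage(overages, "heavy_allocator", 1500)
record_overage(overages, "light_allocator", 500)
print(reclaim_shares(overages))
# {'heavy_allocator': 0.75, 'light_allocator': 0.25}
```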
> > 
> > Agreed. But this stops being the case as soon as the reclaim target has
> > been reached and new reclaim attempts are enforced because the memcg is
> > still above the high limit. Because then you have a completely different
> > reclaim target - get down to the limit. This would be especially visible
> > with a large memcg_nr_pages_over_high, which could even lead to
> > over-reclaim.
> 
> We actually over-reclaim even before this patch -- this patch doesn't bring
> much new in that regard.
> 
> Tracing try_to_free_pages for a cgroup at the memory.high threshold shows
> that before this change, we sometimes even reclaim on the order of twice the
> number of pages requested. For example, I see cases where we requested 1000
> pages to be reclaimed, but end up reclaiming 2000 in a single reclaim
> attempt.
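[As a toy model of how a single attempt can overshoot like that -- not
kernel code, and the object sizes are made up: reclaim frees whole
objects, and freeing e.g. one inode can drop all of its attached page
cache at once, so the free that crosses the target can overshoot the
request by a lot.]

```python
# Toy model of reclaim overshoot. Not kernel code; the object sizes
# are invented for illustration. Freeing one object (say, an inode)
# releases every page attached to it at once, so the free that crosses
# the target can overshoot the request arbitrarily.

def reclaim(nr_requested, object_pages):
    reclaimed = 0
    for pages in object_pages:
        if reclaimed >= nr_requested:
            break
        reclaimed += pages  # whole object freed in one go
    return reclaimed

# Asked for 1000 pages, but the object that crossed the target carried
# 1300 pages of cache, so 2000 pages are reclaimed -- twice the request.
print(reclaim(1000, [300, 400, 1300]))  # -> 2000
```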

This is interesting and worth looking into. I am aware that we can
potentially reclaim many more pages than requested during icache
reclaim, and IIRC there was a heated discussion about that without any
fix merged in the end.
Do you have any details?

-- 
Michal Hocko
SUSE Labs

Thread overview: 40+ messages
2020-05-20 14:37 [PATCH] mm, memcg: reclaim more aggressively before high allocator throttling Chris Down
2020-05-20 16:07   ` Michal Hocko
2020-05-20 16:51     ` Johannes Weiner
2020-05-20 17:04       ` Michal Hocko
2020-05-20 17:51         ` Johannes Weiner
2020-05-21  7:32             ` Michal Hocko
2020-05-21 13:51                 ` Johannes Weiner
2020-05-21 14:22                   ` Johannes Weiner
2020-05-21 14:35                     ` Michal Hocko
2020-05-21 15:02                         ` Chris Down
2020-05-21 16:38                         ` Johannes Weiner
2020-05-21 17:37                             ` Michal Hocko
2020-05-21 18:45                                 ` Johannes Weiner
2020-05-28 16:31                                     ` Michal Hocko
2020-05-28 16:48                                         ` Chris Down
2020-05-29  7:31                                             ` Michal Hocko [this message]
2020-05-29 10:08                                               ` Chris Down
2020-05-29 10:14                                                   ` Michal Hocko
2020-05-28 20:11                                         ` Johannes Weiner
2020-05-20 20:26       ` Chris Down
2020-05-21  7:19           ` Michal Hocko
2020-05-21 11:27             ` Chris Down
2020-05-21 12:04                 ` Michal Hocko
2020-05-21 12:23                     ` Chris Down
2020-05-21 12:24                         ` Chris Down
2020-05-21 12:37                       ` Michal Hocko
2020-05-21 12:57                         ` Chris Down
2020-05-21 13:05                             ` Chris Down
2020-05-21 13:28                                 ` Michal Hocko
2020-05-21 13:21                             ` Michal Hocko
2020-05-21 13:41                                 ` Chris Down
2020-05-21 13:58                                     ` Michal Hocko
2020-05-21 14:22                                       ` Chris Down
2020-05-21 12:28                 ` Michal Hocko
2020-05-28 18:02 ` Shakeel Butt
2020-05-28 19:48     ` Chris Down
2020-05-28 20:29         ` Johannes Weiner
2020-05-28 21:02             ` Shakeel Butt
2020-05-28 21:14             ` Chris Down
2020-05-29  7:25             ` Michal Hocko
