public inbox for cgroups@vger.kernel.org
From: SeongJae Park <sjpark-vV1OtcyAfmbQT0dZR+AlfA@public.gmane.org>
To: Shakeel Butt <shakeelb-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
Cc: "Johannes Weiner"
	<hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>,
	"Roman Gushchin" <guro-b10kYP2dOMg@public.gmane.org>,
	"Michal Hocko" <mhocko-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>,
	"Yang Shi"
	<yang.shi-KPsoFbNs7GizrGE5bRqYAgC/G2K4zDHf@public.gmane.org>,
	"Greg Thelen" <gthelen-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>,
	"David Rientjes"
	<rientjes-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>,
	"Michal Koutný" <mkoutny-IBi9RG/b67k@public.gmane.org>,
	"Andrew Morton"
	<akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org>,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org,
	cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
Subject: Re: [PATCH] memcg: introduce per-memcg reclaim interface
Date: Thu, 10 Sep 2020 08:36:56 +0200	[thread overview]
Message-ID: <20200910063656.25038-1-sjpark@amazon.com> (raw)
In-Reply-To: <20200909215752.1725525-1-shakeelb-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>

On 2020-09-09T14:57:52-07:00 Shakeel Butt <shakeelb-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:

> Introduce a memcg interface to trigger memory reclaim on a memory cgroup.
> 
> Use cases:
> ----------
> 
> 1) Per-memcg uswapd:
> 
> Usually applications consist of a combination of latency-sensitive and
> latency-tolerant tasks, for example tasks serving user requests vs
> tasks doing data backup for a database application. At the moment the
> kernel does not differentiate between such tasks when the application
> hits its memcg limits, so a latency-sensitive user-facing task can
> potentially get stuck in high reclaim and be throttled by the kernel.
> 
> Similarly there are cases of single-process applications having two
> sets of thread pools, where threads from one pool have high scheduling
> priority and low latency requirements. One concrete example from our
> production is the VMM, which has a high-priority low-latency thread
> pool for the VCPUs and a separate thread pool for stats reporting, I/O
> emulation, health checks and other managerial operations. The kernel
> memory reclaim does not differentiate between a VCPU thread and a
> non-latency-sensitive thread, so a VCPU thread can get stuck in high
> reclaim.
> 
> One way to resolve this issue is to preemptively trigger memory
> reclaim from a latency-tolerant task (uswapd) when the application is
> near its limits. Detecting the 'near the limits' situation is an
> orthogonal problem.
> 
> 2) Proactive reclaim:
> 
> This is similar to the previous use case; the difference is that
> instead of waiting for the application to be near its limit to trigger
> memory reclaim, the memcg is continuously pressured to reclaim a small
> amount of memory. This gives a more accurate and up-to-date workingset
> estimation, as the LRUs are continuously sorted, and can potentially
> provide more deterministic memory overcommit behavior. The memory
> overcommit controller can respond proactively to the changing behavior
> of the running applications instead of being reactive.
> 
> Benefit of user space solution:
> -------------------------------
> 
> 1) More flexibility on who should be charged for the cpu cost of the
> memory reclaim. For proactive reclaim, it makes more sense to
> centralize the overhead, while for uswapd it makes more sense for the
> application to pay for the cpu cost of its own memory reclaim.
> 
> 2) More flexible on dedicating the resources (like cpu). The memory
> overcommit controller can balance the cost between the cpu usage and
> the memory reclaimed.
> 
> 3) Provides a way for applications to keep their LRUs sorted, so that
> under memory pressure better reclaim candidates are selected. This
> also gives a more accurate and up-to-date notion of the working set of
> an application.
> 
> Questions:
> ----------
> 
> 1) Why is memory.high not enough?
> 
> memory.high can be used to trigger reclaim in a memcg and can
> potentially be used for the proactive reclaim as well as the uswapd
> use cases. However there is a big downside to using memory.high: it
> can introduce high reclaim stalls in the target application, as
> allocations from the application's processes or threads can hit the
> temporary memory.high limit.
> 
> Another issue with memory.high is that it is not delegatable. To
> actually use this interface for uswapd, the application has to introduce
> another layer of cgroup on whose memory.high it has write access.
> 
> 2) Why is uswapd safe from self-induced reclaim?
> 
> This is very similar to the scenario of oomd under global memory
> pressure. We can use similar mechanisms to protect uswapd from
> self-induced reclaim, i.e. memory.min and mlock.
> 
> Interface options:
> ------------------
> 
> Introducing a very simple memcg interface 'echo 10M > memory.reclaim' to
> trigger reclaim in the target memory cgroup.
> 
> In the future we might want to reclaim a specific type of memory from
> a memcg, so this interface can be extended to allow that, e.g.
> 
> $ echo 10M [all|anon|file|kmem] > memory.reclaim
> 
> However, that should wait until we have concrete use cases for such
> functionality. Keep things simple for now.
> 
> Signed-off-by: Shakeel Butt <shakeelb-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
> ---
>  Documentation/admin-guide/cgroup-v2.rst |  9 ++++++
>  mm/memcontrol.c                         | 37 +++++++++++++++++++++++++
>  2 files changed, 46 insertions(+)
> 
> diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
> index 6be43781ec7f..58d70b5989d7 100644
> --- a/Documentation/admin-guide/cgroup-v2.rst
> +++ b/Documentation/admin-guide/cgroup-v2.rst
> @@ -1181,6 +1181,15 @@ PAGE_SIZE multiple when read back.
>  	high limit is used and monitored properly, this limit's
>  	utility is limited to providing the final safety net.
>  
> +  memory.reclaim
> +	A write-only file which exists on non-root cgroups.
> +
> +	This is a simple interface to trigger memory reclaim in the
> +	target cgroup. Write the number of bytes to reclaim to this
> +	file and the kernel will try to reclaim that much memory.
> +	Please note that the kernel can over or under reclaim from
> +	the target cgroup.
> +
>    memory.oom.group
>  	A read-write single value file which exists on non-root
>  	cgroups.  The default value is "0".
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 75cd1a1e66c8..2d006c36d7f3 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -6456,6 +6456,38 @@ static ssize_t memory_oom_group_write(struct kernfs_open_file *of,
>  	return nbytes;
>  }
>  
> +static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
> +			      size_t nbytes, loff_t off)
> +{
> +	struct mem_cgroup *memcg = mem_cgroup_from_css(of_css(of));
> +	unsigned int nr_retries = MAX_RECLAIM_RETRIES;
> +	unsigned long nr_to_reclaim, nr_reclaimed = 0;
> +	int err;
> +
> +	buf = strstrip(buf);
> +	err = page_counter_memparse(buf, "", &nr_to_reclaim);
> +	if (err)
> +		return err;
> +
> +	while (nr_reclaimed < nr_to_reclaim) {
> +		unsigned long reclaimed;
> +
> +		if (signal_pending(current))
> +			break;
> +
> +		reclaimed = try_to_free_mem_cgroup_pages(memcg,
> +						nr_to_reclaim - nr_reclaimed,
> +						GFP_KERNEL, true);
> +
> +		if (!reclaimed && !nr_retries--)
> +			break;

Shouldn't the if condition use '||' instead of '&&'?  I think it could be
easier to read if we put the 'nr_retries' condition in the while condition
as below (just my personal preference, though).

    while (nr_reclaimed < nr_to_reclaim && nr_retries--)


Thanks,
SeongJae Park

Thread overview: 28+ messages
2020-09-09 21:57 [PATCH] memcg: introduce per-memcg reclaim interface Shakeel Butt
2020-09-10  6:36   ` SeongJae Park [this message]
2020-09-10 16:10       ` Shakeel Butt
2020-09-10 16:34         ` SeongJae Park
2020-09-21 16:30   ` Michal Hocko
2020-09-21 17:50       ` Shakeel Butt
2020-09-22 11:49           ` Michal Hocko
2020-09-22 15:54               ` Shakeel Butt
2020-09-22 16:55                   ` Michal Hocko
2020-09-22 18:10                       ` Shakeel Butt
2020-09-22 18:31                           ` Michal Hocko
2020-09-22 18:56                             ` Shakeel Butt
2020-09-22 19:08                           ` Michal Hocko
2020-09-22 20:02                               ` Yang Shi
2020-09-22 22:38                               ` Shakeel Butt
2020-09-28 21:02 ` Johannes Weiner
2020-09-29 15:04     ` Michal Hocko
2020-09-29 21:53         ` Johannes Weiner
2020-09-30 15:45             ` Shakeel Butt
2020-10-01 14:31                 ` Johannes Weiner
2020-10-06 16:55                   ` Shakeel Butt
2020-10-08 14:53                       ` Johannes Weiner
2020-10-08 15:55                           ` Shakeel Butt
2020-10-08 21:09                               ` Johannes Weiner
2020-09-30 15:26   ` Shakeel Butt
2020-10-01 15:10       ` Johannes Weiner
2020-10-05 21:59         ` Shakeel Butt
2020-10-08 15:14             ` Johannes Weiner
