public inbox for cgroups@vger.kernel.org
From: CGEL <cgel.zte-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
To: Yang Shi <shy828301-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
Cc: Michal Hocko <mhocko-IBi9RG/b67k@public.gmane.org>,
	Andrew Morton
	<akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org>,
	Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>,
	Matthew Wilcox <willy-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>,
	Roman Gushchin
	<roman.gushchin-fxUVXftIFDnyG1zEObXtfA@public.gmane.org>,
	Shakeel Butt <shakeelb-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>,
	Miaohe Lin <linmiaohe-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>,
	William Kucharski
	<william.kucharski-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>,
	Peter Xu <peterx-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>,
	Hugh Dickins <hughd-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>,
	Vlastimil Babka <vbabka-AlSwsSmVLrQ@public.gmane.org>,
	Muchun Song <songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org>,
	Suren Baghdasaryan
	<surenb-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>,
	Linux Kernel Mailing List
	<linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>,
	Linux MM <linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org>,
	Cgroups <cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>,
	Yang Yang <yang.yang29-Th6q7B73Y6EnDS1+zs4M5A@public.gmane.org>
Subject: Re: [PATCH] mm/memcg: support control THP behaviour in cgroup
Date: Wed, 11 May 2022 02:19:36 +0000	[thread overview]
Message-ID: <627b1d39.1c69fb81.fe952.6426@mx.google.com> (raw)
In-Reply-To: <CAHbLzkqztB+NXVcxtd7bVo7onH6AcMJ3JWCAHHqH3OAdbZsMOQ-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>

On Tue, May 10, 2022 at 12:34:20PM -0700, Yang Shi wrote:
> On Mon, May 9, 2022 at 6:43 PM CGEL <cgel.zte-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> >
> > On Mon, May 09, 2022 at 01:48:39PM +0200, Michal Hocko wrote:
> > > On Mon 09-05-22 11:26:43, CGEL wrote:
> > > > On Mon, May 09, 2022 at 12:00:28PM +0200, Michal Hocko wrote:
> > > > > On Sat 07-05-22 02:05:25, CGEL wrote:
> > > > > [...]
> > > > > > If there are many containers to run on one host, and some of them have high
> > > > > > performance requirements, the administrator could turn on THP for them:
> > > > > > # docker run -it --thp-enabled=always
> > > > > > Then all the processes in those containers will always use THP,
> > > > > > while other containers turn off THP by:
> > > > > > # docker run -it --thp-enabled=never
> > > > >
> > > > > I do not know. The THP config space is already too confusing and complex,
> > > > > and this just adds on top. E.g. is the behaviour of the knob
> > > > > hierarchical? What is the policy if the parent memcg says madvise while
> > > > > the child says always? How does the per-application configuration align
> > > > > with all that (e.g. the memcg policy is madvise but the application says
> > > > > never via prctl while it still uses some madvised regions - e.g. via a library)?
> > > > >
> > > >
> > > > The cgroup THP behaviour is aligned to the host's and totally independent, just
> > > > like /sys/fs/cgroup/memory.swappiness. That means if one cgroup configures 'always'
> > > > for THP, it does not affect the host or other cgroups. This makes it simple for
> > > > users to understand and control.
> > >
> > > All controls in cgroup v2 should be hierarchical. This is really
> > > required for a proper delegation semantic.
> > >
> >
> > Could we align with the semantics of /sys/fs/cgroup/memory.swappiness?
> > Some distributions, like Ubuntu, are still using cgroup v1.
> 
> Other than the enabled flag, how would you handle the defrag flag
> hierarchically? It is much more complicated.

Referring to memory.swappiness for cgroup, this new interface had better be independent.
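
For illustration, here is a minimal sketch of how a container manager could flip
such a per-memcg knob from user space. The file name memory.thp_enabled and the
mode strings are assumptions about the interface this patch proposes; the helper
is just a one-shot write, equivalent to `echo always > .../memory.thp_enabled`:

```c
#include <stdio.h>

/* Write a THP mode string ("always", "madvise" or "never") to a per-cgroup
 * control file.  The file name memory.thp_enabled is an assumption about
 * the interface proposed by this patch, analogous to how a manager writes
 * memory.swappiness today. */
static int set_cgroup_thp(const char *cgroup_file, const char *mode)
{
    FILE *f = fopen(cgroup_file, "w");
    if (!f)
        return -1;
    int ok = fputs(mode, f) >= 0;
    if (fclose(f) != 0)
        ok = 0;
    return ok ? 0 : -1;
}
```

A compose-style manager would call this once per container cgroup, e.g.
set_cgroup_thp("/sys/fs/cgroup/memory/<container>/memory.thp_enabled", "never"),
where the path is hypothetical.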
> >
> > > > If the memcg policy is madvise but the application says never, then just like on
> > > > the host, the result is no THP for that application.
> > > >
> > > > > > By doing this we could improve important containers' performance with a
> > > > > > smaller footprint of THP.
> > > > >
> > > > > Do we really want to provide something like THP-based QoS? To me it
> > > > > sounds like a bad idea, and if the justification is "it might be useful"
> > > > > then I would say no. So you really need to come up with a very good use case
> > > > > to promote this further.
> > > >
> > > > At least on some 5G (communication technology) machines, it's useful to provide
> > > > THP-based QoS. Those 5G machines use a micro-service software architecture; in
> > > > other words, one service application runs in one container.
> > >
> > > I am not really sure I understand. If this is one application per
> > > container (cgroup), then why do you really need a per-group setting?
> > > Is the application a set of different processes which are only very
> > > loosely coupled?
> > >
> > For a micro-service architecture, the application in one container is not a
> > set of loosely coupled processes; it aims to provide one certain service.
> > So different containers mean different services, and different services
> > have different QoS demands.
> >
> > The reason why we need a per-group (per-container) setting is that most
> > containers are managed by compose software, and the compose software provides
> > a UI to decide how to run a container (like setting the swappiness value).
> > For example, docker compose:
> > https://docs.docker.com/compose/#compose-v2-and-the-new-docker-compose-command
> >
> > To make it clearer, let me summarise why containers need this patch:
> >     1. one machine can run different containers;
> >     2. in some scenarios, a container runs only one service inside (which can
> > be only one application);
> >     3. different containers provide different services, and different services
> > have different QoS demands;
> >     4. THP has a big influence on QoS: it is fast for memory access, but eats
> > more memory;
> 
> I have been involved in this kind of topic discussion offline a couple
> of times. But TBH I don't see how you could achieve QoS with this flag.
> THP allocation is *NOT* guaranteed, and the overhead may be quite
> high; it depends on how fragmented the system is.

For THP, the word 'QoS' may be too absolute, so let's describe it in terms of why users
need THP: seeking better memory performance.
Yes, as you said, THP may carry quite some overhead, and madvise() may not be suitable
at times (see PR_SET_THP_DISABLE in https://man7.org/linux/man-pages/man2/prctl.2.html).

So I think this is exactly why we need the patch: give users a method to use
THP over a more precise range (only the performance-sensitive containers) and reduce
overhead.

> 
> >     5. containers are usually managed by compose software, which treats the
> > container as the base management unit;
> >     6. this patch provides a cgroup THP controller, which can be a method to
> > adjust container memory QoS.
> >
> > > > The container becomes the suitable management unit, not the whole host.
> > > > And some performance-sensitive containers badly need THP to provide
> > > > low-latency communication. But if we use THP with 'always', it will
> > > > consume more memory (on our machine, about 10% of total memory). And
> > > > unnecessary huge pages will increase memory pressure, add latency to
> > > > minor page faults, and add overhead when splitting huge pages or
> > > > coalescing normal-sized pages into huge pages.
> > >
> > > It is still not really clear to me how you ensure that the whole
> > > workload in the said container has the same THP requirements.
> > > --
> > > Michal Hocko
> > > SUSE Labs
