public inbox for cgroups@vger.kernel.org
From: Waiman Long <longman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
To: Ivan Babrou <ivan-lDpJ742SOEtZroRs9YW3xA@public.gmane.org>
Cc: Shakeel Butt <shakeelb-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>,
	cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	Linux MM <linux-mm-Bw31MaZKKs3YtjvyW6yDsg@public.gmane.org>,
	kernel-team <kernel-team-lDpJ742SOEtZroRs9YW3xA@public.gmane.org>,
	Johannes Weiner <hannes-druUgvl0LCNAfugRpC6u6w@public.gmane.org>,
	Michal Hocko <mhocko-DgEjT+Ai2ygdnm+yROfE0A@public.gmane.org>,
	Roman Gushchin
	<roman.gushchin-fxUVXftIFDnyG1zEObXtfA@public.gmane.org>,
	Muchun Song <muchun.song-fxUVXftIFDnyG1zEObXtfA@public.gmane.org>,
	Andrew Morton
	<akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b@public.gmane.org>,
	linux-kernel
	<linux-kernel-u79uwXL29TY76Z2rM5mHXA@public.gmane.org>
Subject: Re: Expensive memory.stat + cpu.stat reads
Date: Fri, 14 Jul 2023 13:23:42 -0400	[thread overview]
Message-ID: <fea3587a-ca6a-6930-bd3d-c4f7f330be67@redhat.com> (raw)
In-Reply-To: <CABWYdi2iWYT0sHpK74W6=Oz6HA_3bAqKQd4h+amK0n3T3nge6g-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>

On 7/13/23 19:25, Ivan Babrou wrote:
> On Mon, Jul 10, 2023 at 5:44 PM Waiman Long <longman-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
>> On 7/10/23 19:21, Ivan Babrou wrote:
>>> On Wed, Jul 5, 2023 at 11:20 PM Shakeel Butt <shakeelb-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
>>>> On Fri, Jun 30, 2023 at 04:22:28PM -0700, Ivan Babrou wrote:
>>>>> Hello,
>>>>>
>>>>> We're seeing CPU load issues with cgroup stats retrieval. I made a
>>>>> public gist with all the details, including the repro code (which
>>>>> unfortunately requires heavily loaded hardware) and some flamegraphs:
>>>>>
>>>>> * https://gist.github.com/bobrik/5ba58fb75a48620a1965026ad30a0a13
>>>>>
>>>>> I'll repeat the gist of that gist here. Our repro has the following
>>>>> output after a warm-up run:
>>>>>
>>>>> completed:  5.17s [manual / mem-stat + cpu-stat]
>>>>> completed:  5.59s [manual / cpu-stat + mem-stat]
>>>>> completed:  0.52s [manual / mem-stat]
>>>>> completed:  0.04s [manual / cpu-stat]
>>>>>
>>>>> The first two lines do effectively the following:
>>>>>
>>>>> for _ in $(seq 1 1000); do cat /sys/fs/cgroup/system.slice/memory.stat \
>>>>> /sys/fs/cgroup/system.slice/cpu.stat > /dev/null; done
>>>>>
>>>>> The latter two are the same thing, but via two loops:
>>>>>
>>>>> for _ in $(seq 1 1000); do cat /sys/fs/cgroup/system.slice/cpu.stat \
>>>>> > /dev/null; done
>>>>> for _ in $(seq 1 1000); do cat /sys/fs/cgroup/system.slice/memory.stat \
>>>>> > /dev/null; done
>>>>> As you might've noticed from the output, splitting the loop into two
>>>>> makes the code run 10x faster. This isn't great, because most
>>>>> monitoring software likes to get all stats for one service before
>>>>> reading the stats for the next one, which maps to the slow and
>>>>> expensive way of doing this.
>>>>>
>>>>> We're running Linux v6.1 (the output is from v6.1.25) with no patches
>>>>> that touch the cgroup or mm subsystems, so you can assume vanilla
>>>>> kernel.
>>>>>
>>>>> From the flamegraph it just looks like rstat flushing takes longer. I
>>>>> used the following flags on an AMD EPYC 7642 system (our usual pick,
>>>>> cpu-clock, was blaming spinlock irqrestore, which was questionable):
>>>>>
>>>>> perf record -e cycles -g --call-graph fp -F 999 -- /tmp/repro
>>>>>
>>>>> Naturally, there are two questions that arise:
>>>>>
>>>>> * Is this expected (I guess not, but good to be sure)?
>>>>> * What can we do to make this better?
>>>>>
>>>>> I am happy to try out patches or to do some tracing to help understand
>>>>> this better.
>>>> Hi Ivan,
>>>>
>>>> Thanks a lot, as always, for reporting this. This is not expected and
>>>> should be fixed. Is the issue easy to repro, or is a specific workload or
>>>> high load/traffic required? Can you repro this with the latest Linus
>>>> tree? Also, do you see any difference in the root cgroup's cgroup.stat
>>>> between when the issue happens and the good state?
>>> I'm afraid there's no easy way to reproduce. We see it from time to
>>> time in different locations. The one that I was looking at for the
>>> initial email does not reproduce it anymore:
>> My understanding of mem-stat and cpu-stat is that they are independent
>> of each other. In theory, reading one shouldn't affect the performance
>> of reading the other. Since you are reading mem-stat and cpu-stat
>> repeatedly in a loop, it is likely that all the data are in the cache
>> most of the time, resulting in very fast processing times. If it happens
>> that the memory locations of the mem-stat and cpu-stat data are such
>> that reading one causes the other's data to be flushed out of the
>> cache and re-read from memory, you could see a significant
>> performance regression.
>>
>> It is one of the possible causes, but I may be wrong.
> Do you think it's somewhat similar to how iterating over a matrix by rows
> is faster than by columns, due to sequential vs. strided memory access?
>
> * https://stackoverflow.com/q/9936132
> * https://en.wikipedia.org/wiki/Row-_and_column-major_order
> * https://en.wikipedia.org/wiki/Loop_interchange

Yes, it is similar to what is being described in those articles.
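
One quick way to check that on your repro might be to compare cache miss
counters between the combined and the split loops, e.g. something like the
sketch below. This is just an illustration: the generic cache events may or
may not be wired up on that EPYC, and the paths/iteration counts simply
mirror your reproducer.

# Combined loop: memory.stat + cpu.stat read back to back each iteration.
perf stat -e cache-references,cache-misses -- sh -c \
  'for _ in $(seq 1 1000); do
     cat /sys/fs/cgroup/system.slice/memory.stat \
         /sys/fs/cgroup/system.slice/cpu.stat > /dev/null
   done'

# Split loops: each stat file read 1000 times on its own.
perf stat -e cache-references,cache-misses -- sh -c \
  'for _ in $(seq 1 1000); do
     cat /sys/fs/cgroup/system.slice/cpu.stat > /dev/null
   done
   for _ in $(seq 1 1000); do
     cat /sys/fs/cgroup/system.slice/memory.stat > /dev/null
   done'

A clearly higher miss rate in the first run would support the cache
eviction theory; similar numbers would point at something else, e.g. the
amount of rstat flush work itself.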


>
> I've had a similar suspicion and it would be good to confirm whether
> it's that or something else. I can probably collect perf counters for
> different runs, but I'm not sure which ones I'll need.
>
> In a similar vein, if we could come up with a tracepoint that would
> tell us the amount of work done (or any other relevant metric that
> would help) during rstat flushing, I can certainly collect that
> information as well for every reading combination.

The perf-c2c tool may be able to help. The thing to look for is how often
the data comes from the caches vs. direct memory loads/stores.
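
Something like the following might be a starting point. Note that perf c2c
relies on hardware memory access sampling (PEBS on Intel, IBS on AMD), so
whether it works on that EPYC depends on the kernel and perf versions;
/tmp/repro is just your reproducer from above.

# Sample memory accesses while the repro runs, then look at how the loads
# were satisfied (L1/LLC hit vs. DRAM) and at any cacheline contention.
perf c2c record -- /tmp/repro
perf c2c report --stats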

Cheers,
Longman

