From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@kernel.org>,
bugzilla-daemon@bugzilla.kernel.org, frolvlad@gmail.com,
linux-mm@kvack.org, Vladimir Davydov <vdavydov.dev@gmail.com>
Subject: Re: [Bug 190841] New: [REGRESSION] Intensive Memory CGroup removal leads to high load average 10+
Date: Thu, 5 Jan 2017 16:22:52 -0500
Message-ID: <20170105212252.GA17613@cmpxchg.org>
In-Reply-To: <20170104173037.7e501fdfee9ec21f0a3a5d55@linux-foundation.org>
On Wed, Jan 04, 2017 at 05:30:37PM -0800, Andrew Morton wrote:
> > My simplified workflow looks like this:
> >
> > 1. Create a Memory CGroup with memory limit
> > 2. Exec a child process
> > 3. Add the child process PID into the Memory CGroup
> > 4. Wait for the child process to finish
> > 5. Remove the Memory CGroup
> >
> > The child processes usually run less than 0.1 seconds, but I have lots of them.
> > Normally, I could run over 10000 child processes per minute, but with newer
> > kernels, I can only do 400-500 executions per minute, and my system becomes
> > extremely sluggish (the only indicator of the weirdness I found is an unusually
> > high load average, which sometimes goes over 250!).
> >
> > Here is a simple reproduction script:
> >
> > #!/bin/sh
> > CGROUP_BASE=/sys/fs/cgroup/memory/qq
> >
> > for i in $(seq 1000); do
> >     echo "Iteration #$i"
> >     sh -c "
> >         mkdir '$CGROUP_BASE'
> >         sh -c 'echo \$$ > $CGROUP_BASE/tasks ; sleep 0.0'
> >         rmdir '$CGROUP_BASE' || true
> >     "
> > done
> > # ===
You're not even running anything concurrently. While I agree with
Michal that cgroup creation and destruction are not the fastest paths,
a load of 250 from a single-threaded testcase is silly.
We recently had a load spike issue with the on-demand memcg slab
cache duplication, but that should have happened in 4.6 already. I
don't see anything suspicious going into memcontrol.c after 4.6.
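For reference, a quick way to eyeball that in a kernel git tree
(assuming the v4.6 tag is present in your checkout) is:

    git log --oneline v4.6.. -- mm/memcontrol.c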
When the load is high like this, can you check with ps what the
blocked tasks are?
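Something like this should show them, i.e. tasks stuck in
uninterruptible sleep (exact ps column names can vary between
versions, so treat this as a sketch):

    ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /^D/'

D-state tasks are what drive the load average up, and the wchan
column gives a hint where in the kernel they are blocked.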
A run with perf record -a might also give us an idea of whether
cycles are going to the wrong place.
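Roughly, adjusting the duration so it covers a few iterations of
your script:

    perf record -a -g -- sleep 30
    perf report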
I'll try to reproduce this once I have access to my test machine again
next week.
Thread overview: 9+ messages
[not found] <bug-190841-27@https.bugzilla.kernel.org/>
2017-01-05 1:30 ` [Bug 190841] New: [REGRESSION] Intensive Memory CGroup removal leads to high load average 10+ Andrew Morton
2017-01-05 12:33 ` Michal Hocko
2017-01-05 20:26 ` Vladyslav Frolov
2017-01-06 14:08 ` Michal Hocko
2017-01-05 21:22 ` Johannes Weiner [this message]
2017-01-06 16:28 ` Vladimir Davydov
2017-01-12 13:55 ` Vladyslav Frolov
2017-01-12 15:33 ` Vladimir Davydov
2017-12-26 20:40 ` Vladyslav Frolov