From: Mike Galbraith <bitbucket@online.de>
To: Lei Wen <adrian.wenl@gmail.com>
Cc: Lei Wen <leiwen@marvell.com>,
Peter Zijlstra <peterz@infradead.org>,
mingo@redhat.com, preeti.lkml@gmail.com,
daniel.lezcano@linaro.org, viresh.kumar@linaro.org,
xjian@marvell.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] sched: keep quiescent cpu out of idle balance loop
Date: Fri, 21 Feb 2014 09:34:51 +0100
Message-ID: <1392971691.5451.84.camel@marge.simpson.net>
In-Reply-To: <CALZhoSTUZpL5th8HuN_pMe7UHt1PMVO7h9_DsChyz8eW3vctvw@mail.gmail.com>
On Fri, 2014-02-21 at 15:28 +0800, Lei Wen wrote:
> Actually, what I have experimented with is:
> 1. set the top cpuset to disable load balance
> 2. set cpus 0-2 to "system", and enable its load balance
> 3. set cpu 3 to "rt" and disable its load balance
Exactly as I do it, the pertinent part of my cheezy script being...
  # ...and fire up the shield
  cset shield --userset=rtcpus --cpu=${START_CPU}-${END_CPU} --kthread=on

  # If cpuset wasn't previously mounted (no obnoxious systemd),
  # we just mounted it.  Find the mount point.
  if [ -z "$CPUSET_ROOT" ]; then
      CPUSET_ROOT=$(grep cpuset /proc/mounts | cut -d ' ' -f2)
      if [ -z "$CPUSET_ROOT" ]; then
          # If it's not mounted now, bail.
          echo "EEK, cpuset is not mounted!"
          exit 1
      else
          # ok, check for cgroup mount
          if [ -f ${CPUSET_ROOT}/cpuset.cpus ]; then
              CPUSET_PREFIX=cpuset.
          fi
      fi
  fi

  echo 0 > ${CPUSET_ROOT}/${CPUSET_PREFIX}sched_load_balance
  echo 1 > ${CPUSET_ROOT}/system/${CPUSET_PREFIX}sched_load_balance
  echo 0 > ${CPUSET_ROOT}/rtcpus/${CPUSET_PREFIX}sched_load_balance
  echo 0 > ${CPUSET_ROOT}/rtcpus/${CPUSET_PREFIX}sched_relax_domain_level
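Tasks then land in the shield the usual cset way, e.g. (the pid below is
just a placeholder):

  # Park a task in the shielded set (12345 is a placeholder pid).
  cset shield --shield --pid=12345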
> With this setup, the root span always covers [0-2], as seen
> by cpus 0-2, which you also mentioned.
> And it is true that if I disable load balance, I see the span
> masks get merged.
>
> So how about the change below?
>
> +	if (!this_rq()->sd)
> +		return;
>
> Supposing an isolated cpu does lose its sd, could you help
> confirm it from crash too?
Yeah, isolated CPUs have no sd connectivity. I sent Peter a patchlet
offline showing what I do to keep nohz at bay.
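You can eyeball it from userspace too if CONFIG_SCHED_DEBUG is set; a
quick sketch, assuming cpu 3 is the isolated one per your setup:

  # Assumes CONFIG_SCHED_DEBUG, and cpu 3 as the isolated CPU.
  # A CPU attached to no sched domain should show an empty directory...
  ls /proc/sys/kernel/sched_domain/cpu3/
  # ...while a balanced CPU lists its domains (domain0, domain1, ...).
  ls /proc/sys/kernel/sched_domain/cpu0/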
> Or do you think it is wrong to do the merge when the system group
> disables load balance?
I think the construction stuff works fine, and !->sd is the perfect cue
to tell various things to keep their grubby mitts off of a CPU.
-Mike
Thread overview: 15+ messages
2014-02-19 5:20 [PATCH] sched: keep quiescent cpu out of idle balance loop Lei Wen
2014-02-19 9:04 ` Peter Zijlstra
2014-02-20 2:42 ` Lei Wen
2014-02-20 8:50 ` Peter Zijlstra
2014-02-20 9:15 ` Lei Wen
2014-02-20 9:17 ` Lei Wen
2014-02-20 12:04 ` Peter Zijlstra
2014-02-20 12:23 ` Peter Zijlstra
2014-02-21 2:23 ` [PATCH v2] " Lei Wen
2014-02-21 5:51 ` Mike Galbraith
2014-02-21 7:28 ` Lei Wen
2014-02-21 8:34 ` Mike Galbraith [this message]
2014-02-21 9:15 ` [PATCH v3] " Lei Wen
2014-02-21 9:41 ` Mike Galbraith
2014-02-21 9:15 ` [PATCH v2] " Mike Galbraith