From: Mel Gorman <mgorman@techsingularity.net>
To: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Daniel Lezcano <daniel.lezcano@linaro.org>,
Marcelo Tosatti <mtosatti@redhat.com>,
Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Linux PM <linux-pm@vger.kernel.org>
Subject: Re: [PATCH] cpuidle: Allow configuration of the polling interval before cpuidle enters a c-state
Date: Fri, 27 Nov 2020 10:53:22 +0000
Message-ID: <20201127105322.GO3371@techsingularity.net>
In-Reply-To: <20201126203151.GM3371@techsingularity.net>
On Thu, Nov 26, 2020 at 08:31:51PM +0000, Mel Gorman wrote:
> > > and it is reasonable behaviour but it should be tunable.
> >
> > Only if there is no way to cover all of the relevant use cases in a
> > generally acceptable way without adding more module params etc.
> >
> > In this particular case, it should be possible to determine a polling
> > limit acceptable to everyone.
> >
>
> Potentially yes. cpuidle is not my strong suit but it could try
> adapting the polling interval, similar to how the menu governor tries
> to guess the typical interval. Basically it would have to pick a
> polling interval between 2 and TICK_NSEC. Superficially, if a task is
> queued before polling finishes, decrease the interval and increase it
> otherwise. That is a mess though because then it may be polling for
> ages with nothing arriving. It would have to start tracking when the
> CPU exited idle to see if polling is even worthwhile.
>
> I felt that starting with anything that tried adapting the polling
> interval based on heuristics would meet higher resistance than making it
> tunable. Hence, make it tunable so at least the problem can be addressed
> when it's encountered.
>
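For what it's worth, the heuristic quoted above would look very roughly
like the following. This is an untested sketch, every name in it is
invented and it is not a proposal:

	/*
	 * Hypothetical sketch only: halve the poll window when a task
	 * arrived before polling finished, double it when the poll
	 * timed out, clamped to the [2, TICK_NSEC] range noted above.
	 */
	static u64 poll_limit_ns = TICK_NSEC / 2;

	static void adapt_poll_limit(bool task_arrived)
	{
		if (task_arrived)
			poll_limit_ns = max_t(u64, 2, poll_limit_ns >> 1);
		else
			poll_limit_ns = min_t(u64, TICK_NSEC, poll_limit_ns << 1);
	}

Even this trivial version illustrates the mess: with nothing arriving,
it keeps widening the window towards TICK_NSEC.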
I looked at this again and determining a "polling limit acceptable
to everyone" looks like reimplementing haltpoll in the core or adding
haltpoll-like logic to each governor. I doubt that'll be a popular
approach.
The C1 exit latency as a hint is definitely too low though. I checked
one of the test machines to see what the granularity of the time checks
in poll_idle() is at boot time, by timing something like this:
	unsigned int i;

	/* one relax period between poll_idle() time checks */
	for (i = 0; i < POLL_IDLE_RELAX_COUNT; i++)
		cpu_relax();
This takes roughly 1100ns on a test machine where the C1 exit latency
is 2000ns. Let's say you have a basic pair of tasks communicating over
a pipe on the same machine (e.g. perf bench pipe). The round-trip time
is roughly 7000ns, so a poll window based on the C1 exit latency
expires long before the wakeup arrives, meaning that polling is almost
never useful even for a basic workload.
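For completeness, the measurement was gathered along these lines. This
is a sketch rather than the exact instrumentation, and using
local_clock() for the timestamps is an assumption on my part:

	/* Hypothetical boot-time measurement of one relax period */
	u64 start, end;
	unsigned int i;

	start = local_clock();
	for (i = 0; i < POLL_IDLE_RELAX_COUNT; i++)
		cpu_relax();
	end = local_clock();
	pr_info("poll_idle() relax period: %llu ns\n", end - start);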
--
Mel Gorman
SUSE Labs