From: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
To: Robert Hancock <hancockrwd@gmail.com>
Cc: Andreas Mohr <andi@lisas.de>,
Corrado Zoccolo <czoccolo@gmail.com>,
LKML <linux-kernel@vger.kernel.org>,
linux-acpi@vger.kernel.org
Subject: Re: Dynamic configure max_cstate
Date: Fri, 31 Jul 2009 15:06:46 +0800
Message-ID: <1249024006.2560.735.camel@ymzhang>
In-Reply-To: <4A726844.7040505@gmail.com>
On Thu, 2009-07-30 at 21:43 -0600, Robert Hancock wrote:
> On 07/28/2009 04:11 AM, Andreas Mohr wrote:
> > Hi,
> >
> > On Tue, Jul 28, 2009 at 05:00:35PM +0800, Zhang, Yanmin wrote:
> >> I tried different clocksources. For example, I could get a better (30%) result with
> >> hpet. With hpet, cpu utilization is about 5~8%, and the function hpet_read() uses too
> >> much cpu time. With tsc, cpu utilization is about 2~3%. I think higher cpu utilization
> >> causes fewer C-state transitions.
> >>
> >> With idle=poll, the result is about 10% better than with hpet. With idle=poll,
> >> I found no difference in results among the clocksources.
> >
> > IOW, this seems to clearly point to ACPI Cx causing it.
> >
> > Both Corrado and I have been thinking that one should try skipping all
> > higher-latency ACPI Cx states whenever there's an ongoing I/O request for which an
> > immediate reply interrupt is expected.
> >
> > I've been investigating this a bit, and interesting parts would perhaps include
> > . kernel/pm_qos_params.c
> > . drivers/cpuidle/governors/menu.c (which acts on the ACPI _cx state
> > structs as configured by drivers/acpi/processor_idle.c)
> > . and e.g. the wait_for_completion_timeout() part in drivers/ata/libata-core.c
> > (or other sources in case of other disk I/O mechanisms)
> >
> > One way to do some quick (and dirty!!) testing would be to set a flag
> > before calling wait_for_completion_timeout(), test for this flag in
> > drivers/cpuidle/governors/menu.c, and then skip deeper Cx states
> > conditionally.
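> >
> > Completely untested sketch of what that could look like; the flag
> > name and the 10 usec cutoff are invented, and a real patch would
> > need per-CPU handling and proper memory barriers:
> >
> > /* shared declaration, visible to both sides: */
> > extern atomic_t io_reply_expected; /* nonzero while a fast reply is pending */
> >
> > /* drivers/ata/libata-core.c, around the existing wait: */
> > atomic_inc(&io_reply_expected);
> > rc = wait_for_completion_timeout(&wait, msecs_to_jiffies(timeout));
> > atomic_dec(&io_reply_expected);
> >
> > /* drivers/cpuidle/governors/menu.c, in the state selection loop: */
> > for (i = CPUIDLE_DRIVER_STATE_START; i < dev->state_count; i++) {
> >         struct cpuidle_state *s = &dev->states[i];
> >
> >         /* quick hack: skip deeper states while a reply is imminent */
> >         if (atomic_read(&io_reply_expected) && s->exit_latency > 10)
> >                 break;
> >         ...
> > }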
> >
> > As a very quick test, I ran a
> > while :; do :; done
> > loop in a shell reniced to 19 (to keep my CPU out of ACPI idle),
> > but bonnie -s 100 results initially looked promising yet turned out to
> > be inconsistent. The real way to test this would be idle=poll.
> > My test system was an Athlon XP with /proc/acpi/processor/CPU0/power
> > latencies of 000 and 100 (the maximum allowed value, BTW) for C1/C2.
> >
> > If the wait_for_completion_timeout() flag testing turns out to help,
> > then one might want to use the pm_qos infrastructure to indicate
> > these conditions; however, it might be too bloated for such a
> > purpose, and a relatively simple (read: fast) boolean flag mechanism
> > could be better.
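> >
> > For comparison, here is roughly how the same hint would look through
> > the existing pm_qos API (the 2.6.3x interface from
> > include/linux/pm_qos_params.h; the requirement name "libata-io" and
> > the 10 usec value are made up):
> >
> > #include <linux/pm_qos_params.h>
> >
> > /* before issuing the command: cap acceptable wakeup latency */
> > pm_qos_add_requirement(PM_QOS_CPU_DMA_LATENCY, "libata-io", 10);
> >
> > rc = wait_for_completion_timeout(&wait, msecs_to_jiffies(timeout));
> >
> > /* drop the constraint once the reply (or timeout) has arrived */
> > pm_qos_remove_requirement(PM_QOS_CPU_DMA_LATENCY, "libata-io");
> >
> > The string-keyed add/remove on every single request is part of what
> > could make this too bloated for a per-I/O fast path.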
> >
> > Plus one could then create a helper function which figures out a
> > "pretty fast" Cx state (independent of specific latency times!).
> > But when introducing this mechanism, take care not to ignore the
> > requirements defined by pm_qos settings!
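> >
> > Such a helper might look roughly like this; everything except the
> > pm_qos getter is an invented name, and the "pretty fast" cutoff is
> > left as a parameter:
> >
> > /* pick the deepest C-state that is still "pretty fast" and does
> >  * not violate the current pm_qos CPU latency requirement */
> > static int pick_fast_cstate(struct cpuidle_device *dev, s32 fast_usec)
> > {
> >         s32 qos = pm_qos_requirement(PM_QOS_CPU_DMA_LATENCY);
> >         int i, best = CPUIDLE_DRIVER_STATE_START;
> >
> >         for (i = CPUIDLE_DRIVER_STATE_START; i < dev->state_count; i++) {
> >                 struct cpuidle_state *s = &dev->states[i];
> >
> >                 if (s->exit_latency > fast_usec || s->exit_latency > qos)
> >                         break;
> >                 best = i;
> >         }
> >         return best;
> > }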
> >
> > Oh, and about the places which submit I/O requests where one would have to
> > set this flag: are they in any way correlated with the scheduler's I/O wait
> > accounting? Would the I/O wait mechanism be a place to more easily and centrally
> > indicate that we're expecting a request to complete "very soon"?
> > OTOH, I/O requests may have vastly differing delay expectations,
> > so only replies expected in the short term should be flagged;
> > otherwise we're wasting lots of ACPI deep-idle opportunities.
>
> Did the results show a big difference in performance between maximum C2
> and maximum C3?
No big difference. I tried different max C-states via processor.max_cstate.
Mostly, processor.max_cstate=1 gave results similar to idle=poll.
> The thing with C3 is that it will likely interfere
> with bus-master DMA activity, as the CPU has to wake up at
> least partially before the SATA controller can complete DMA operations,
> which will likely stall the controller for some period of time. There
> would be an argument for avoiding deep C-states which can't
> handle snooping while I/O is in progress and DMA will shortly be occurring.
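>
> A rough sketch of that idea, loosely modelled on the existing bm_check
> logic in drivers/acpi/processor_idle.c (the helper name is invented;
> the register accessors are the 2.6.3x-era ACPI ones):
>
> /* before entering C3 or deeper: fall back if bus masters were active */
> static int acpi_idle_dma_safe_state(int target)
> {
>         u32 bm_status = 0;
>
>         /* sticky status bit, set whenever a bus master used the bus */
>         acpi_get_register(ACPI_BITREG_BUS_MASTER_STATUS, &bm_status);
>         if (bm_status) {
>                 /* clear it so the next idle entry sees fresh activity */
>                 acpi_set_register(ACPI_BITREG_BUS_MASTER_STATUS, 1);
>                 return ACPI_STATE_C2; /* C2 still snoops DMA traffic */
>         }
>         return target; /* no recent DMA activity: deep state is fine */
> }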
Thread overview: 21+ messages
2009-07-27 5:30 Dynamic configure max_cstate Zhang, Yanmin
2009-07-27 7:33 ` Andreas Mohr
2009-07-28 2:42 ` Zhang, Yanmin
2009-07-28 7:20 ` Corrado Zoccolo
2009-07-28 9:00 ` Zhang, Yanmin
2009-07-28 10:11 ` Andreas Mohr
2009-07-28 14:03 ` Andreas Mohr
2009-07-28 17:35 ` ok, now would this be useful? (Re: Dynamic configure max_cstate) Andreas Mohr
2009-07-29 8:20 ` Dynamic configure max_cstate Zhang, Yanmin
2009-07-31 3:43 ` Robert Hancock
2009-07-31 7:06 ` Zhang, Yanmin [this message]
2009-07-31 8:07 ` Andreas Mohr
2009-07-31 14:40 ` Andi Kleen
2009-07-31 14:56 ` Michael S. Zick
2009-07-31 17:37 ` Pallipadi, Venkatesh
2009-07-31 15:14 ` Len Brown
2009-07-30 6:28 ` Zhang, Yanmin
2009-07-28 19:25 ` Len Brown
2009-07-29 0:17 ` Len Brown
2009-07-29 8:00 ` Andreas Mohr
2009-07-28 19:47 ` Len Brown