From: George Dunlap <george.dunlap@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>,
	Dario Faggioli <dario.faggioli@citrix.com>
Cc: george.dunlap@eu.citrix.com, edgar.iglesias@xilinx.com,
	julien.grall@arm.com, xen-devel@lists.xen.org
Subject: Re: Xen on ARM IRQ latency and scheduler overhead
Date: Mon, 20 Feb 2017 11:04:37 +0000	[thread overview]
Message-ID: <ad43d06c-8536-a0f0-6e5e-fa17b8273d4a@citrix.com> (raw)
In-Reply-To: <alpine.DEB.2.10.1702171615030.9566@sstabellini-ThinkPad-X260>

On 18/02/17 00:41, Stefano Stabellini wrote:
> On Fri, 17 Feb 2017, Dario Faggioli wrote:
>> On Thu, 2017-02-09 at 16:54 -0800, Stefano Stabellini wrote:
>>> These are the results, in nanosec:
>>>
>>>                         AVG     MIN     MAX     WARM MAX
>>>
>>> NODEBUG no WFI          1890    1800    3170    2070
>>> NODEBUG WFI             4850    4810    7030    4980
>>> NODEBUG no WFI credit2  2217    2090    3420    2650
>>> NODEBUG WFI credit2     8080    7890    10320   8300
>>>
>>> DEBUG no WFI            2252    2080    3320    2650
>>> DEBUG WFI               6500    6140    8520    8130
>>> DEBUG WFI, credit2      8050    7870    10680   8450
>>>
>>> As you can see, depending on whether the guest issues a WFI or not
>>> while waiting for interrupts, the results change significantly.
>>> Interestingly, credit2 does worse than credit1 in this area.
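
For reference, the gap between the two cases comes down to how the
guest-side measurement loop waits for the interrupt. Something like
this (purely illustrative, made-up names, not Stefano's actual app):

static volatile int irq_fired;      /* set from the IRQ handler */

static void wait_wfi(void)
{
    while ( !irq_fired )
        asm volatile ( "wfi" ::: "memory" );  /* trapped by Xen: the vCPU
                                               * blocks and must be woken
                                               * and rescheduled */
}

static void wait_poll(void)
{
    while ( !irq_fired )
        ;                                     /* busy-wait: the vCPU never
                                               * leaves the pCPU */
}

The WFI variant exercises the whole block/wake/schedule path, which is
why the scheduler figures so prominently in those numbers.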
>>>
>> I did some measuring myself, on x86, with different tools. So,
>> cyclictest is basically something very similar to Stefano's app.
>>
>> I've run it both within Dom0 and inside a guest. I also ran a Xen
>> build (in this case, only inside the guest).
>>
>>> We are down to 2000-3000ns. Then, I started investigating the
>>> scheduler. I measured how long it takes to run "vcpu_unblock":
>>> 1050ns, which is significant. I don't know what is causing the
>>> remaining 1000-2000ns, but I bet on another scheduler function. Do
>>> you have any suggestions on which one?
>>>
>> So, vcpu_unblock() calls vcpu_wake(), which then invokes the
>> scheduler's wakeup related functions.
>>
>> If you time vcpu_unblock(), from beginning to end of the function, you
>> actually capture quite a few things. E.g., the scheduler lock is taken
>> inside vcpu_wake(), so you're basically including time spent waiting
>> on the lock in the estimation.
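
To make that concrete, the path being timed looks roughly like this
(heavily simplified, paraphrased from common/schedule.c of around that
time; exact names and details may differ):

void vcpu_unblock(struct vcpu *v)
{
    if ( !test_and_clear_bit(_VPF_blocked, &v->pause_flags) )
        return;

    /* ... event channel polling cleanup elided ... */

    vcpu_wake(v);
}

void vcpu_wake(struct vcpu *v)
{
    unsigned long flags;

    /* Any spinning here is included in a vcpu_unblock() measurement. */
    spinlock_t *lock = vcpu_schedule_lock_irqsave(v, &flags);

    if ( vcpu_runnable(v) )
        /* Per-scheduler hook, e.g. csched_vcpu_wake() / csched2_vcpu_wake(). */
        SCHED_OP(VCPU2OP(v), wake, v);

    vcpu_schedule_unlock_irqrestore(lock, flags, v);
}

Timing SCHED_OP(wake) on its own, as done below, therefore separates
the scheduler's own work from lock waiting and the generic wrapper.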
>>
>> That is probably ok (as in, lock contention definitely is something
>> relevant to latency), but it is expected for things to be rather
>> different between Credit1 and Credit2.
>>
>> I've, OTOH, tried to time SCHED_OP(wake) and SCHED_OP(do_schedule),
>> and here are the results. Numbers are in cycles (I've used RDTSC)
>> and, to make sure I obtain consistent and comparable numbers, I've
>> set the frequency scaling governor to performance.
>>
>> Dom0, [performance]
>>
>>               cyclictest 1us    cyclictest 1ms   cyclictest 100ms
>> (cycles)      Credit1  Credit2  Credit1  Credit2  Credit1  Credit2
>> wakeup-avg       2429     2035     1980     1633     2535     1979
>> wakeup-max      14577   113682    15153   203136    12285   115164
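
For anyone wanting to collect numbers like the above, here is a
standalone sketch of the technique (RDTSC around the operation of
interest, tracking average and max). It is illustrative only, not
Dario's actual instrumentation; inside Xen one would wrap the real
SCHED_OP() calls instead of a stub:

#include <stdint.h>
#include <stdio.h>

static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    asm volatile ( "lfence; rdtsc" : "=a" (lo), "=d" (hi) );
    return ((uint64_t)hi << 32) | lo;
}

static void op_under_test(void)
{
    /* Stand-in for SCHED_OP(wake) or SCHED_OP(do_schedule). */
    asm volatile ( "" ::: "memory" );
}

int main(void)
{
    uint64_t sum = 0, max = 0;
    const unsigned int iters = 100000;

    for ( unsigned int i = 0; i < iters; i++ )
    {
        uint64_t t0 = rdtsc();
        op_under_test();
        uint64_t d = rdtsc() - t0;

        sum += d;
        if ( d > max )
            max = d;
    }

    printf("avg %llu max %llu cycles\n",
           (unsigned long long)(sum / iters), (unsigned long long)max);

    return 0;
}
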
> 
> I am not that familiar with the x86 side of things, but the 113682 and
> 203136 look worrisome, especially considering that credit1 doesn't have
> them.

Dario,

Do you reckon those 'MAX' values could be the load balancer running
(both for credit1 and credit2)?

 -George

