From: Arianna Avanzini <avanzini.arianna@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: xen-devel@lists.xenproject.org, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [RFC PATCH v1] Replace tasklets with per-cpu implementation.
Date: Fri, 29 Aug 2014 22:58:06 +0200	[thread overview]
Message-ID: <5400E95E.8080808@gmail.com>
In-Reply-To: <1409162329-6094-1-git-send-email-konrad.wilk@oracle.com>

> Hey,
>
> With the Xen 4.5 feature freeze being right on the doorstep I am not
> expecting this to go in as:
>  1) It touches core code,
>  2) It has never been tested on ARM,

Sorry to intrude - for what it's worth, the patchset works on my setup. I am
running Xen from the development repository, plus this patchset, with a Linux
3.15 dom0 (linux-sunxi) on a Cubieboard2.


>  3) It is an RFC right now.
>
> With those expectations out of the way, I am submitting for review
> an overhaul of the tasklet code. We had found that on one large
> machine with a small number of guests (12), the steal time for idle
> guests was excessively high. Further debugging revealed that the
> global tasklet lock was being taken across all sockets at an
> excessively high rate - to the point that 1/10th of a guest's idle
> time was consumed by it (and actually accounted for as RUNNING state!).
>
> The ideal situation to reproduce this behavior is:
>  1) Allocate twelve guests with one to four SR-IOV VFs each.
>  2) Have half of them (six) heavily use the SR-IOV VF devices.
>  3) Monitor the rest (which are idle) and despair.
>
> As I discovered under the hood, we have two tasklets that are
> scheduled and executed quite often - the VIRQ_TIMER one,
> assert_evtchn_irq_tasklet, and the one in charge of injecting
> a PCI interrupt into the guest: hvm_do_IRQ_dpci.
>
> The 'hvm_do_IRQ_dpci' tasklet is the one that is most often
> scheduled and run. The performance bottleneck comes from the fact
> that we take the same spinlock three times: in tasklet_schedule,
> when we are about to execute the tasklet, and when we are done
> executing it.
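
For reference, a minimal sketch of the locking pattern described above
(simplified and illustrative - not the actual code in
xen/common/tasklet.c):

    /* One global lock serializes every tasklet operation on all CPUs. */
    static DEFINE_SPINLOCK(tasklet_lock);
    static LIST_HEAD(tasklet_list);

    void tasklet_schedule(struct tasklet *t)
    {
        unsigned long flags;

        spin_lock_irqsave(&tasklet_lock, flags);    /* lock taken: 1 */
        list_add_tail(&t->list, &tasklet_list);
        spin_unlock_irqrestore(&tasklet_lock, flags);
        raise_softirq(TASKLET_SOFTIRQ);
    }

    static void do_tasklet(void)
    {
        struct tasklet *t;
        unsigned long flags;

        spin_lock_irqsave(&tasklet_lock, flags);    /* lock taken: 2 */
        t = list_entry(tasklet_list.next, struct tasklet, list);
        list_del(&t->list);
        spin_unlock_irqrestore(&tasklet_lock, flags);

        t->func(t->data);                           /* run unlocked */

        spin_lock_irqsave(&tasklet_lock, flags);    /* lock taken: 3 */
        /* Mark the tasklet idle; re-raise the softirq if more queued. */
        spin_unlock_irqrestore(&tasklet_lock, flags);
    }

With every CPU hammering the same lock (and cache line) three times per
tasklet, contention grows with socket count.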
>
> This patchset throws away the global list and lock for all
> tasklets. Instead there are two per-cpu lists: one for softirq
> tasklets, and one for tasklets run when the scheduler decides to.
> There is still a global list and lock for cross-CPU tasklet
> scheduling - which thankfully happens rarely (microcode updates
> and hypercall continuations).
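
A rough sketch of the data layout that implies (the variable names here
are hypothetical, not necessarily those used in the patches):

    /* Per-cpu lists: only the owning CPU touches them, so no lock. */
    static DEFINE_PER_CPU(struct list_head, softirq_tasklet_list);
    static DEFINE_PER_CPU(struct list_head, sched_tasklet_list);

    /* Cross-CPU scheduling still funnels through one global list and
     * lock, but only in the rare cases mentioned above. */
    static LIST_HEAD(cross_cpu_tasklet_list);
    static DEFINE_SPINLOCK(cross_cpu_lock);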
>
> Insertion into and removal from the lists is done with interrupts
> disabled - only for short bursts of time. The existing behavior of
> executing only one tasklet per softirq iteration is also preserved
> (the Linux code would run through all of its pending tasklets).
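
And a sketch of the resulting fast path (again hypothetical code, using
Xen's this_cpu() accessor; the real patches differ in detail):

    void percpu_tasklet_schedule(struct tasklet *t)
    {
        unsigned long flags;

        /* A brief local IRQ-off section replaces the global lock. */
        local_irq_save(flags);
        list_add_tail(&t->list, &this_cpu(softirq_tasklet_list));
        local_irq_restore(flags);
        raise_softirq(TASKLET_SOFTIRQ);
    }

    static void tasklet_softirq_action(void)
    {
        struct list_head *list = &this_cpu(softirq_tasklet_list);
        struct tasklet *t;
        unsigned long flags;

        local_irq_save(flags);
        if ( list_empty(list) )
        {
            local_irq_restore(flags);
            return;
        }
        t = list_entry(list->next, struct tasklet, list);
        list_del(&t->list);
        local_irq_restore(flags);

        t->func(t->data);

        /* One tasklet per iteration: if more remain, re-raise the
         * softirq instead of looping (Linux would drain the list). */
        if ( !list_empty(list) )
            raise_softirq(TASKLET_SOFTIRQ);
    }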
>
> The performance benefits of this patch were astounding and
> removed the issues we saw. It also decreased the latency of
> delivering an interrupt to a guest.
>
> In terms of the patchset, I chose a path in which:
>  0) The first patch fixes the performance bug we saw and is
>     easy to backport.
>  1) It is bisectable.
>  2) If something breaks, it should be fairly easy to figure
>     out which patch broke it.
>  3) It is split up in a somewhat unusual fashion: scaffolding
>     code is added to keep things working (at some point the old
>     and the new implementations both exist and are used), and is
>     later removed. This is how Greg KH added kref and kobjects
>     to the kernel a long time ago, and it worked - so I figured
>     I would borrow from that workflow.
>
> I would appreciate feedback from the maintainers if they
> would like this to be organized better.
>
>  xen/common/tasklet.c      | 305 +++++++++++++++++++++++++++++++++-------------
>  xen/include/xen/tasklet.h |  52 +++++++-
>  2 files changed, 271 insertions(+), 86 deletions(-)
>
> Konrad Rzeszutek Wilk (5):
>       tasklet: Introduce per-cpu tasklet for softirq.
>       tasklet: Add cross CPU feeding of per-cpu tasklets.
>       tasklet: Remove the old-softirq implementation.
>       tasklet: Introduce per-cpu tasklet for schedule tasklet.
>       tasklet: Remove the scaffolding.

Thread overview: 15+ messages
2014-08-29 20:58 Arianna Avanzini [this message]
2014-09-02 20:10 ` [RFC PATCH v1] Replace tasklets with per-cpu implementation Konrad Rzeszutek Wilk
  -- strict thread matches above, loose matches on Subject: below --
2014-08-27 17:58 Konrad Rzeszutek Wilk
2014-08-28 12:39 ` Jan Beulich
2014-08-29 13:46   ` Konrad Rzeszutek Wilk
2014-08-29 14:10     ` Jan Beulich
2014-09-02 20:28       ` Konrad Rzeszutek Wilk
2014-09-03  8:03         ` Jan Beulich
2014-09-08 19:01           ` Konrad Rzeszutek Wilk
2014-09-09  9:01             ` Jan Beulich
2014-09-09 14:37               ` Konrad Rzeszutek Wilk
2014-09-09 16:37                 ` Jan Beulich
2014-09-10 16:03                   ` Konrad Rzeszutek Wilk
2014-09-10 16:25                     ` Jan Beulich
2014-09-10 16:35                       ` Konrad Rzeszutek Wilk
