xen-devel.lists.xenproject.org archive mirror
From: Justin Acker <ackerj67@yahoo.com>
To: Ian Campbell <ian.campbell@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: xhci_hcd interrupt affinity in Dom0/DomU limited to single interrupt
Date: Wed, 2 Sep 2015 17:12:01 +0000 (UTC)	[thread overview]
Message-ID: <852262210.472799.1441213921652.JavaMail.yahoo@mail.yahoo.com> (raw)
In-Reply-To: <1441201779.26292.206.camel@citrix.com>




      From: Ian Campbell <ian.campbell@citrix.com>
 To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Justin Acker <ackerj67@yahoo.com> 
Cc: "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>; "xen-devel@lists.xen.org" <xen-devel@lists.xen.org> 
 Sent: Wednesday, September 2, 2015 9:49 AM
 Subject: Re: [Xen-devel] xhci_hcd interrupt affinity in Dom0/DomU limited to single interrupt
   
On Wed, 2015-09-02 at 08:53 -0400, Konrad Rzeszutek Wilk wrote:
> On Tue, Sep 01, 2015 at 11:09:38PM +0000, Justin Acker wrote:
> > 
> >      From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >  To: Justin Acker <ackerj67@yahoo.com> 
> > Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>; 
> > boris.ostrovsky@oracle.com 
> >  Sent: Tuesday, September 1, 2015 4:56 PM
> >  Subject: Re: [Xen-devel] xhci_hcd interrupt affinity in Dom0/DomU 
> > limited to single interrupt
> >    
> > On Tue, Sep 01, 2015 at 05:39:46PM +0000, Justin Acker wrote:
> > > Taking this to the dev list from users. 
> > > 
> > > Is there a way to force or enable pirq delivery to a set of CPUs, 
> > > rather than a single device being assigned a single pirq, so that 
> > > its interrupts can be distributed across multiple CPUs? I believe 
> > > the device drivers do support multiple queues when run natively, 
> > > without Dom0 loaded. The device in question is the xhci_hcd driver, 
> > > for which I/O transfers seem to be slowed when Dom0 is loaded. The 
> > > behavior seems to carry over to the DomU if passthrough is enabled. 
> > > I found some similar threads, but most relate to Ethernet 
> > > controllers. I tried some of the x2apic and x2apic_phys dom0 kernel 
> > > arguments, but none distributed the pirqs. Based on my reading 
> > > about IRQs under Xen, I think pinning the pirqs to cpu0 is done to 
> > > avoid an interrupt storm. I tried irqbalance; when 
> > > configured/adjusted it will balance individual pirqs across CPUs, 
> > > but it will not spread a single interrupt source across multiple CPUs.
> > 
> > Yes. You can do it with smp affinity:
> > 
> > https://cs.uwaterloo.ca/~brecht/servers/apic/SMP-affinity.txt
> > Yes, this does allow for assigning a specific interrupt to a single 
> > cpu, but it will not spread the interrupt load across a defined group 
> > or all cpus. Is it possible to define a range of CPUs or spread the 
> > interrupt load for a device across all cpus as it does with a native 
> > kernel without the Dom0 loaded?
> 
> It should be. Did you try giving it a mask that puts the interrupts on
> all the CPUs (0xf)?
> > 
> > I don't follow the "behavior seems to pass through to the DomU if pass 
> > through is enabled" ?
> > The device interrupts are limited to a single pirq if the device is 
> > used directly in the Dom0. If the device is passed through to a DomU - 
> > i.e. the xhci_hcd controller - then the DomU cannot spread the 
> > interrupt load across the cpus in the VM. 
> 
> Why? How are you seeing this? The method by which you use smp affinity
> should be exactly the same.
> 
> And it looks to me that the device has a single pirq as well when booting
> bare metal, right?
> 
> So the issue here is that you want to spread the interrupt delivery
> across all of the CPUs. The smp_affinity mask should do it. Did you try
> modifying it by hand (you may want to kill irqbalance when you do this,
> just to make sure it does not write its own values back in)?
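The by-hand procedure described above would look roughly like this (a sketch only: IRQ number 78 is the xhci_hcd IRQ mentioned later in this thread, the irqbalance service name is an assumption, and the write requires root):

```shell
# Pin IRQ 78 (xhci_hcd in this thread) to CPUs 0-7 by hand.
irq=78    # IRQ number, as listed in /proc/interrupts
mask=ff   # hex CPU bitmask: ff selects CPUs 0-7

# Stop irqbalance first so it cannot overwrite the mask behind our back.
systemctl stop irqbalance 2>/dev/null || killall irqbalance 2>/dev/null || true

# Write the mask, then read it back to confirm (requires root).
if [ -w "/proc/irq/$irq/smp_affinity" ]; then
    echo "$mask" > "/proc/irq/$irq/smp_affinity"
    cat "/proc/irq/$irq/smp_affinity"
fi
```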

It sounds then like the real issue is that under native irqbalance is
writing smp_affinity values with potentially multiple bits set while under
Xen it is only setting a single bit?

Justin, is the contents of /proc/irq/<IRQ>/smp_affinity for the IRQ in
question under Native and Xen consistent with that supposition?
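For comparison, a hex smp_affinity value can be decoded into the CPU list it selects with a small shell helper like the one below (illustrative only, not a standard tool; comma-separated masks such as "ff,ffffffff" are handled by stripping the commas first):

```shell
# Decode a /proc/irq/<N>/smp_affinity hex mask into the CPUs it selects.
mask_to_cpus() {
    local mask=$((16#${1//,/}))   # strip commas, parse as hexadecimal
    local cpu=0 out=""
    while [ "$mask" -ne 0 ]; do
        if [ $((mask & 1)) -eq 1 ]; then
            out="$out $cpu"       # bit set: this CPU may receive the IRQ
        fi
        mask=$((mask >> 1))
        cpu=$((cpu + 1))
    done
    echo "${out# }"
}

mask_to_cpus ff   # -> 0 1 2 3 4 5 6 7 (all eight CPUs)
mask_to_cpus 1    # -> 0 (only CPU 0)
```

A mask with a single bit set means the interrupt can only ever be delivered to that one CPU, which is the symptom under discussion.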


Ian, I think the mask is the same in both cases. With irqbalance enabled, the interrupts are mapped, seemingly at random, to various CPUs, but only one CPU per interrupt in all cases.

With irqbalance disabled at boot, and the same kernel version used for both Dom0 and bare metal:
With Dom0 loaded:
cat /proc/irq/78/smp_affinity
ff

Baremetal kernel:
cat /proc/irq/27/smp_affinity
ff
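Going the other way, the mask one would write into smp_affinity can be built from a list of CPU numbers with a simple bit-OR (again just an illustrative helper):

```shell
# Build a hex smp_affinity mask from a list of CPU numbers.
cpus_to_mask() {
    local mask=0 cpu
    for cpu in "$@"; do
        mask=$((mask | (1 << cpu)))   # set the bit for each CPU
    done
    printf '%x\n' "$mask"
}

cpus_to_mask 0 1 2 3 4 5 6 7   # -> ff (matches the masks above)
cpus_to_mask 0 2               # -> 5
```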


Ian.



  


Thread overview: 22+ messages
     [not found] <1441121643.26292.63.camel@citrix.com>
     [not found] ` <800613365.4285959.1441128848192.JavaMail.yahoo@mail.yahoo.com>
2015-09-01 17:39   ` xhci_hcd interrupt affinity in Dom0/DomU limited to single interrupt Justin Acker
2015-09-01 20:56     ` Konrad Rzeszutek Wilk
2015-09-01 21:38       ` Boris Ostrovsky
2015-09-01 23:09       ` Justin Acker
2015-09-02 12:53         ` Konrad Rzeszutek Wilk
2015-09-02 13:49           ` Ian Campbell
2015-09-02 17:12             ` Justin Acker [this message]
2015-09-02 17:02           ` Justin Acker
2015-09-02 13:47     ` David Vrabel
2015-09-02 17:25       ` Justin Acker
2015-09-02 17:35         ` David Vrabel
     [not found] <55E6C83402000078000D7CF5@prv-mh.provo.novell.com>
     [not found] ` <1981596850.505327.1441214239184.JavaMail.yahoo@mail.yahoo.com>
2015-09-03 10:15   ` Jan Beulich
2015-09-03 12:04     ` Justin Acker
2015-09-03 15:04       ` Jan Beulich
2015-09-03 16:52         ` Justin Acker
2015-09-04  7:41           ` Jan Beulich
2015-09-08 16:02             ` Justin Acker
2015-09-09  6:48               ` Jan Beulich
2015-09-10 16:20                 ` Justin Acker
2015-09-11 10:03                   ` Jan Beulich
2015-09-16 20:31                     ` Justin Acker
2015-09-21 12:53                       ` Jan Beulich
