From: Justin Acker <ackerj67@yahoo.com>
Subject: Re: xhci_hcd interrupt affinity in Dom0/DomU limited to single interrupt
Date: Wed, 2 Sep 2015 17:12:01 +0000 (UTC)
To: Ian Campbell <ian.campbell@citrix.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>

From: Ian Campbell <ian.campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Justin Acker <ackerj67@yahoo.com>
Cc: "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>; "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Sent: Wednesday, September 2, 2015 9:49 AM
Subject: Re: [Xen-devel] xhci_hcd interrupt affinity in Dom0/DomU limited to single interrupt

On Wed, 2015-09-02 at 08:53 -0400, Konrad Rzeszutek Wilk wrote:
> On Tue, Sep 01, 2015 at 11:09:38PM +0000, Justin Acker wrote:
> >
> > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > To: Justin Acker <ackerj67@yahoo.com>
> > Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>; boris.ostrovsky@oracle.com
> > Sent: Tuesday, September 1, 2015 4:56 PM
> > Subject: Re: [Xen-devel] xhci_hcd interrupt affinity in Dom0/DomU
> > limited to single interrupt
> >
> > On Tue, Sep 01, 2015 at 05:39:46PM +0000, Justin Acker wrote:
> > > Taking this to the dev list from users.
> > >
> > > Is there a way to force or enable pirq delivery to a set of cpus, as
> > > opposed to a device being assigned a single pirq, so that its
> > > interrupt load can be distributed across multiple cpus? I believe the
> > > device drivers do support multiple queues when run natively without
> > > the Dom0 loaded. The device in question is the xhci_hcd driver, for
> > > which I/O transfers seem to be slower when the Dom0 is loaded. The
> > > behavior seems to pass through to the DomU if pass through is
> > > enabled. I found some similar threads, but most relate to Ethernet
> > > controllers. I tried some of the x2apic and x2apic_phys dom0 kernel
> > > arguments, but none distributed the pirqs. Based on my reading about
> > > IRQs under Xen, I think pinning the pirqs to cpu0 is done to avoid an
> > > interrupt storm. I tried irqbalance, and when configured/adjusted it
> > > will balance individual pirqs, but not multiple interrupts.
> >
> > Yes. You can do it with smp affinity:
> >
> > https://cs.uwaterloo.ca/~brecht/servers/apic/SMP-affinity.txt
> > Yes, this does allow for assigning a specific interrupt to a single
> > cpu, but it will not spread the interrupt load across a defined group
> > or all cpus. Is it possible to define a range of CPUs or spread the
> > interrupt load for a device across all cpus, as it does with a native
> > kernel without the Dom0 loaded?
>
> It should be. Did you try giving it a mask that puts the interrupts on
> all the CPUs (0xf)?
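
To make that concrete - a minimal sketch, assuming the xhci_hcd interrupt
is IRQ 78 as in the output further down (smp_affinity is a hex CPU bitmap,
so 0xf covers CPUs 0-3 and 0xff covers CPUs 0-7):

  echo f > /proc/irq/78/smp_affinity   # as root: allow delivery on CPUs 0-3
  cat /proc/irq/78/smp_affinity        # read back to confirm it stuck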
> >
> > I don't follow the "behavior seems to pass through to the DomU if pass
> > through is enabled" ?
> > The device interrupts are limited to a single pirq if the device is
> > used directly in the Dom0. If the device is passed through to a DomU -
> > i.e. the xhci_hcd controller - then the DomU cannot spread the
> > interrupt load across the cpus in the VM.
>
> Why? How are you seeing this? The method by which you use smp affinity
> should be exactly the same.
>
> And it looks to me that the device has a single pirq as well when booting
> as baremetal, right?
>
> So the issue here is that you want to spread the interrupt delivery to
> happen across all of the CPUs. The smp_affinity mask should do it. Did you
> try modifying it by hand? (You may want to kill irqbalance when you do
> this, just to make sure it does not write its own values in.)
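
A minimal sketch of that test, assuming the xhci_hcd interrupt is IRQ 78
and that irqbalance runs as a systemd service (both system-specific):

  systemctl stop irqbalance             # stop it rewriting the mask
  echo ff > /proc/irq/78/smp_affinity   # allow delivery on CPUs 0-7
  grep xhci /proc/interrupts            # per-CPU counts show where IRQs land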

It sounds, then, like the real issue is that under native Linux irqbalance
writes smp_affinity values with potentially multiple bits set, while under
Xen it only sets a single bit?

Justin, are the contents of /proc/irq/<IRQ>/smp_affinity for the IRQ in
question under native and Xen consistent with that supposition?



Ian, I think the mask is the same in both cases. With irqbalance enabled,
the interrupts are mapped - seemingly at random - to various cpus, but only
one cpu per interrupt in all cases.

The following is with irqbalance disabled at boot and the same kernel
version used for both Dom0 and baremetal.

With Dom0 loaded:
cat /proc/irq/78/smp_affinity
ff

Baremetal kernel:
cat /proc/irq/27/smp_affinity
ff
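
Since the masks match, a way to see where the interrupts actually land on
each setup is to watch the per-CPU counters while generating USB I/O
(assuming the device shows up as xhci_hcd in /proc/interrupts):

  watch -n1 "grep -e CPU -e xhci /proc/interrupts"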


Ian.


