From: David Woodhouse <dwmw2@infradead.org>
To: "Roger Pau Monné" <roger.pau@citrix.com>, paul@xen.org
Cc: xen-devel@lists.xenproject.org,
'Eslam Elnikety' <elnikety@amazon.de>,
'Andrew Cooper' <andrew.cooper3@citrix.com>,
'Shan Haitao' <haitao.shan@intel.com>,
'Jan Beulich' <jbeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v2] x86/hvm: re-work viridian APIC assist code
Date: Fri, 14 Aug 2020 15:13:51 +0100 [thread overview]
Message-ID: <999f185404fcedc03d8cf1bd1f47a492219b8e9b.camel@infradead.org> (raw)
In-Reply-To: <20200813094555.GF975@Air-de-Roger>
On Thu, 2020-08-13 at 11:45 +0200, Roger Pau Monné wrote:
> > The loop appears to be there to handle the case where multiple
> > devices assigned to a domain have MSIs programmed with the same
> > dest/vector... which seems like an odd thing for a guest to do but I
> > guess it is at liberty to do it. Does it matter whether they are
> > maskable or not?
>
> Such a configuration would never work properly, as lapic vectors are
> edge triggered and thus can't safely be shared between devices?
>
> I think the iteration is there in order to get the hvm_pirq_dpci
> struct that injected that specific vector, so that you can perform the
> ack if required. Having lapic EOI callbacks should simplify this, as you
> can pass a hvm_pirq_dpci when injecting a vector, and that would be
> forwarded to the EOI callback, so there should be no need to iterate
> over the list of hvm_pirq_dpci for a domain.
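For concreteness, such a per-vector callback might look roughly like the
sketch below. The names (vector_eoi_cb, inject_with_eoi_cb, handle_eoi)
are invented for illustration and are not taken from Roger's actual
series; the point is only that the EOI path becomes a direct O(1)
dispatch rather than a locked list walk:

    /*
     * Illustrative sketch only: invented names, not Roger's patches.
     * Idea: when a vector is injected on behalf of a pirq_dpci, record
     * a callback plus an opaque pointer in a per-vector table, and have
     * the EOI path call that directly instead of walking the domain's
     * pirq_dpci list under d->event_lock.
     */
    #define NR_VECTORS 256

    struct vcpu;
    struct hvm_pirq_dpci;                /* opaque for this sketch */

    typedef void (*eoi_callback_t)(struct vcpu *v, unsigned int vector,
                                   void *data);

    struct vector_eoi_cb {
        eoi_callback_t cb;
        void *data;                      /* e.g. the injecting hvm_pirq_dpci */
    };

    /* One slot per vector; in reality this would live in struct vlapic. */
    static struct vector_eoi_cb eoi_cb[NR_VECTORS];

    /* Injection side: register the callback along with the vector. */
    static void inject_with_eoi_cb(struct vcpu *v, unsigned int vector,
                                   eoi_callback_t cb,
                                   struct hvm_pirq_dpci *pirq_dpci)
    {
        eoi_cb[vector].cb = cb;
        eoi_cb[vector].data = pirq_dpci;
        /* ...then set the vector in IRR as the normal injection path does. */
    }

    /* EOI side: O(1) dispatch, no domain-global lock, no list iteration. */
    static void handle_eoi(struct vcpu *v, unsigned int vector)
    {
        struct vector_eoi_cb *e = &eoi_cb[vector];

        if ( e->cb )
            e->cb(v, vector, e->data);
    }
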
If we didn't have the loop — or more to the point if we didn't grab the
domain-global d->event_lock that protects it — then I wouldn't even
care about optimising the whole thing away for the modern MSI case.
It isn't the act of not doing any work in the _hvm_dpci_msi_eoi()
function that takes the time. It's that domain-global lock, and to a
lesser extent the retpoline-stalled indirect call from pt_pirq_iterate().
I suppose with Roger's series, we'll still suffer the retpoline stall
for a callback that ultimately does nothing, but it's nowhere near as
expensive as the lock.
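For reference, the expensive path I mean looks roughly like this,
paraphrased from memory rather than copied verbatim from
xen/drivers/passthrough/io.c, so treat the details as approximate:
every MSI EOI takes the domain-global event_lock and visits every
hvm_pirq_dpci of the domain through an indirect, retpoline-stalled
callback, even when nothing matches and no useful work is done.

    /*
     * Paraphrase of the existing EOI path being discussed, not a
     * verbatim copy of the Xen source.
     */
    static int _hvm_dpci_msi_eoi(struct domain *d,
                                 struct hvm_pirq_dpci *pirq_dpci, void *arg)
    {
        int vector = (long)arg;

        /* In the common case nothing matches, so this is pure overhead. */
        if ( (pirq_dpci->flags & HVM_IRQ_DPCI_MACH_MSI) &&
             pirq_dpci->gmsi.gvec == vector )
        {
            /* ...match the destination and ack/EOI the originating IRQ... */
        }

        return 0;
    }

    void hvm_dpci_msi_eoi(struct domain *d, int vector)
    {
        spin_lock(&d->event_lock);              /* the domain-global lock */
        pt_pirq_iterate(d, _hvm_dpci_msi_eoi,   /* indirect call per entry */
                        (void *)(long)vector);
        spin_unlock(&d->event_lock);
    }
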
Thread overview: 20+ messages
2018-01-18 15:10 [PATCH v2] x86/hvm: re-work viridian APIC assist code Paul Durrant
2018-01-18 16:21 ` Jan Beulich
2018-01-18 16:27 ` Paul Durrant
2018-08-24 23:38 ` David Woodhouse
2018-09-03 10:12 ` Paul Durrant
2018-09-04 20:31 ` David Woodhouse
2018-09-05 9:36 ` Paul Durrant
2018-09-05 9:40 ` David Woodhouse
2018-09-05 9:43 ` Paul Durrant
2018-09-05 10:40 ` Paul Durrant
2018-09-05 10:45 ` David Woodhouse
2018-09-05 10:48 ` Paul Durrant
2020-08-11 13:25 ` [Xen-devel] " David Woodhouse
2020-08-12 13:43 ` Roger Pau Monné
2020-08-13 8:10 ` Paul Durrant
2020-08-13 9:45 ` Roger Pau Monné
2020-08-14 14:13 ` David Woodhouse [this message]
2020-08-14 14:41 ` Roger Pau Monné
2020-08-19 7:12 ` Jan Beulich
2020-08-19 8:26 ` Roger Pau Monné