From: "Michael S. Tsirkin" <mst@redhat.com>
To: Gleb Natapov <gleb@redhat.com>
Cc: Avi Kivity <avi@redhat.com>,
kvm@vger.kernel.org, Jan Kiszka <jan.kiszka@web.de>
Subject: Re: [RFC PATCH 0/2] irq destination caching prototype
Date: Mon, 13 Aug 2012 14:22:14 +0300
Message-ID: <20120813112214.GB16801@redhat.com>
In-Reply-To: <20120813111241.GY3341@redhat.com>
On Mon, Aug 13, 2012 at 02:12:41PM +0300, Gleb Natapov wrote:
> On Mon, Aug 13, 2012 at 02:03:51PM +0300, Avi Kivity wrote:
> > On 08/13/2012 02:01 PM, Gleb Natapov wrote:
> > >>
> > >> Actually this is overkill. Suppose we use an apicid->vcpu translation
> > >> cache instead? Then we retain O(1) behaviour with no need for a huge cache.
> > >>
> > > Not sure I follow.
> >
> > Unicast MSIs and IPIs can be sped up by looking up the vcpu by its
> > apic id in a static lookup table (changed only when the guest
> > updates an apicid or a vcpu is added).
> >
> To check that an MSI/IPI is unicast you need to check several things:
> delivery mode, shorthand, dest mode, and vector; in short, everything but
> level. This is exactly what kvm_irq_delivery_to_apic() does. Caching
> apicid->vcpu is not enough; caching (delivery mode, shorthand, dest mode,
> vector)->vcpu is, and that is exactly what the patch does for irq
> routing entries.
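[For illustration, a cache keyed the way Gleb describes might look like
the sketch below: everything but the level bit participates in the
lookup, and a miss falls back to the full kvm_irq_delivery_to_apic()
scan. This is a minimal user-space sketch; the struct and function
names are hypothetical, not taken from the patch.]

#include <stdint.h>
#include <stddef.h>

struct kvm_vcpu;			/* opaque here; defined inside KVM */

/* One cached destination, keyed on everything but level. */
struct irq_dest_cache {
	uint8_t delivery_mode;
	uint8_t shorthand;
	uint8_t dest_mode;
	uint8_t vector;
	struct kvm_vcpu *vcpu;		/* cached unicast target, NULL if invalid */
};

/* Hit only when every field of the key matches. */
static struct kvm_vcpu *irq_dest_cache_lookup(const struct irq_dest_cache *c,
					      uint8_t delivery_mode,
					      uint8_t shorthand,
					      uint8_t dest_mode,
					      uint8_t vector)
{
	if (c->vcpu &&
	    c->delivery_mode == delivery_mode &&
	    c->shorthand == shorthand &&
	    c->dest_mode == dest_mode &&
	    c->vector == vector)
		return c->vcpu;
	return NULL;			/* miss: do the full scan and refill */
}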
At least for MSI I think it is simple. Here's the relevant code from
my old patch:
+static bool kvm_msi_is_multicast(unsigned dest, int dest_mode)
+{
+	if (dest_mode == 0)
+		/* Physical mode: 0xff is the broadcast destination. */
+		return dest == 0xff;
+	else
+		/* Logical mode: multicast iff more than one bit is set. */
+		return dest & (dest - 1);
+}
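[A sketch of how the check above could gate Avi's fast path: when the
MSI is not multicast, resolve the destination from a static
apicid->vcpu table and skip the full scan. This is a hypothetical
user-space illustration, not the actual patch; the tables would be
rebuilt whenever the guest updates an APIC ID or a vcpu is added, and
real logical-mode handling would consult the guest's logical
destination registers rather than a flat bit->vcpu table.]

struct kvm_vcpu;				/* opaque here; defined inside KVM */

static struct kvm_vcpu *apicid_to_vcpu[256];	/* physical mode: indexed by APIC ID */
static struct kvm_vcpu *logical_bit_to_vcpu[8];	/* flat logical mode: one slot per mask bit */

static struct kvm_vcpu *msi_fast_dest(unsigned dest, int dest_mode)
{
	dest &= 0xff;			/* the MSI destination field is 8 bits */
	if (kvm_msi_is_multicast(dest, dest_mode))
		return NULL;		/* multicast/broadcast: take the full scan */
	if (dest_mode == 0)
		/* Physical mode: dest is the APIC ID itself. */
		return apicid_to_vcpu[dest];
	if (dest == 0)
		return NULL;		/* empty logical mask: nothing to deliver to */
	/* Logical flat mode with exactly one bit set: index by that bit. */
	return logical_bit_to_vcpu[__builtin_ctz(dest)];
}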
> --
> Gleb.
Thread overview: 34+ messages
2012-08-13 9:16 [RFC PATCH 0/2] irq destination caching prototype Gleb Natapov
2012-08-13 9:16 ` [RFC PATCH 1/2] Call irq_rt callback under rcu_read_lock() Gleb Natapov
2012-08-13 9:16 ` [RFC PATCH 2/2] Cache msi irq destination Gleb Natapov
2012-08-13 9:32 ` Avi Kivity
2012-08-13 9:34 ` Gleb Natapov
2012-08-13 9:34 ` [RFC PATCH 0/2] irq destination caching prototype Michael S. Tsirkin
2012-08-13 9:36 ` Gleb Natapov
2012-08-13 9:46 ` Michael S. Tsirkin
2012-08-13 9:48 ` Gleb Natapov
2012-08-13 9:36 ` Avi Kivity
2012-08-13 10:12 ` Michael S. Tsirkin
2012-08-13 10:16 ` Gleb Natapov
2012-08-13 10:21 ` Avi Kivity
2012-08-13 10:24 ` Gleb Natapov
2012-08-13 10:31 ` Avi Kivity
2012-08-13 10:35 ` Gleb Natapov
2012-08-13 10:38 ` Michael S. Tsirkin
2012-08-13 10:58 ` Avi Kivity
2012-08-13 11:01 ` Gleb Natapov
2012-08-13 11:03 ` Avi Kivity
2012-08-13 11:12 ` Gleb Natapov
2012-08-13 11:22 ` Michael S. Tsirkin [this message]
2012-08-13 11:29 ` Gleb Natapov
2012-08-13 11:43 ` Gleb Natapov
2012-08-13 12:14 ` Avi Kivity
2012-08-13 11:30 ` Avi Kivity
2012-08-13 11:41 ` Gleb Natapov
2012-08-13 12:13 ` Avi Kivity
2012-08-13 12:59 ` Michael S. Tsirkin
2012-08-13 11:19 ` Michael S. Tsirkin
2012-08-13 9:43 ` Avi Kivity
2012-08-13 9:51 ` Michael S. Tsirkin
2012-08-13 9:53 ` Gleb Natapov
2012-08-13 10:33 ` Gleb Natapov