From: Gleb Natapov <gleb@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: kvm@vger.kernel.org, Jan Kiszka <jan.kiszka@siemens.com>
Subject: Re: [PATCHv4 2/2] kvm: deliver msi interrupts from irq handler
Date: Wed, 28 Nov 2012 14:45:09 +0200	[thread overview]
Message-ID: <20121128124509.GD928@redhat.com> (raw)
In-Reply-To: <20121128122245.GA16255@redhat.com>

On Wed, Nov 28, 2012 at 02:22:45PM +0200, Michael S. Tsirkin wrote:
> On Wed, Nov 28, 2012 at 02:13:01PM +0200, Gleb Natapov wrote:
> > On Wed, Nov 28, 2012 at 01:56:16PM +0200, Michael S. Tsirkin wrote:
> > > On Wed, Nov 28, 2012 at 01:43:34PM +0200, Gleb Natapov wrote:
> > > > On Wed, Oct 17, 2012 at 06:06:06PM +0200, Michael S. Tsirkin wrote:
> > > > > We can deliver certain interrupts, notably MSI,
> > > > > from atomic context.  Use kvm_set_irq_inatomic,
> > > > > to implement an irq handler for msi.
> > > > > 
> > > > > This reduces the pressure on scheduler in case
> > > > > where host and guest irq share a host cpu.
> > > > > 
> > > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > > ---
> > > > >  virt/kvm/assigned-dev.c | 36 ++++++++++++++++++++++++++----------
> > > > >  1 file changed, 26 insertions(+), 10 deletions(-)
> > > > > 
> > > > > diff --git a/virt/kvm/assigned-dev.c b/virt/kvm/assigned-dev.c
> > > > > index 23a41a9..3642239 100644
> > > > > --- a/virt/kvm/assigned-dev.c
> > > > > +++ b/virt/kvm/assigned-dev.c
> > > > > @@ -105,6 +105,15 @@ static irqreturn_t kvm_assigned_dev_thread_intx(int irq, void *dev_id)
> > > > >  }
> > > > >  
> > > > >  #ifdef __KVM_HAVE_MSI
> > > > > +static irqreturn_t kvm_assigned_dev_msi(int irq, void *dev_id)
> > > > > +{
> > > > > +	struct kvm_assigned_dev_kernel *assigned_dev = dev_id;
> > > > > +	int ret = kvm_set_irq_inatomic(assigned_dev->kvm,
> > > > > +				       assigned_dev->irq_source_id,
> > > > > +				       assigned_dev->guest_irq, 1);
> > > > Why not use kvm_set_msi_inatomic() and drop kvm_set_irq_inatomic() from
> > > > previous patch? 
> > > 
> > > kvm_set_msi_inatomic needs a routing entry, and
> > > we don't have the routing entry at this level.
> > > 
> > Yes, right. BTW, will this interface be used only for legacy assigned
> > devices, or will there be other users too?
> 
> I think long term we should convert irqfd to this too.
> 
VFIO uses irqfd, no? So why does legacy device assignment need that code
to achieve parity with VFIO? Also, why long term? What are the
complications?

> > > Further, the guest irq might not be an MSI: a host MSI can cause
> > > guest intx injection, I think; we need to bounce it to the thread
> > > as we did earlier.
> > Ah, so msi in kvm_assigned_dev_msi() is about host msi?
> 
> Yes.
> 
> > Can host be intx
> > but guest msi?
> 
> No.
> 
> > You seem not to handle this case. Also, injecting intx via the
> > ioapic is the same as injecting MSI. The format and capabilities
> > of the irq message are essentially the same.
> 
> Absolutely. So we will be able to extend this to intx long term.
> The difference is that, unlike msi, intx can (and does) have
> multiple entries per GSI. I have not yet figured out how to report
> and handle failure in the case where one of these can be injected
> in atomic context and another can't. There's likely an easy way,
> but it can be a follow-up patch I think.
I prefer to figure that out before introducing the interface. Hmm, we
could get rid of the vcpu loop in the PIC (easily done by checking
kvm_apic_accept_pic_intr() during apic configuration and keeping a
global extint vcpu) and then sort the irq routing entries so that the
ioapic entry comes first, since only ioapic injection can fail.

> 
> > 
> > > 
> > > > > +	return unlikely(ret == -EWOULDBLOCK) ? IRQ_WAKE_THREAD : IRQ_HANDLED;
> > > > > +}
> > > > > +
> > > > >  static irqreturn_t kvm_assigned_dev_thread_msi(int irq, void *dev_id)
> > > > >  {
> > > > >  	struct kvm_assigned_dev_kernel *assigned_dev = dev_id;
> > > > > @@ -117,6 +126,23 @@ static irqreturn_t kvm_assigned_dev_thread_msi(int irq, void *dev_id)
> > > > >  #endif
> > > > >  
> > > > >  #ifdef __KVM_HAVE_MSIX
> > > > > +static irqreturn_t kvm_assigned_dev_msix(int irq, void *dev_id)
> > > > > +{
> > > > > +	struct kvm_assigned_dev_kernel *assigned_dev = dev_id;
> > > > > +	int index = find_index_from_host_irq(assigned_dev, irq);
> > > > > +	u32 vector;
> > > > > +	int ret = 0;
> > > > > +
> > > > > +	if (index >= 0) {
> > > > > +		vector = assigned_dev->guest_msix_entries[index].vector;
> > > > > +		ret = kvm_set_irq_inatomic(assigned_dev->kvm,
> > > > > +					   assigned_dev->irq_source_id,
> > > > > +					   vector, 1);
> > > > > +	}
> > > > > +
> > > > > +	return unlikely(ret == -EWOULDBLOCK) ? IRQ_WAKE_THREAD : IRQ_HANDLED;
> > > > > +}
> > > > > +
> > > > >  static irqreturn_t kvm_assigned_dev_thread_msix(int irq, void *dev_id)
> > > > >  {
> > > > >  	struct kvm_assigned_dev_kernel *assigned_dev = dev_id;
> > > > > @@ -334,11 +360,6 @@ static int assigned_device_enable_host_intx(struct kvm *kvm,
> > > > >  }
> > > > >  
> > > > >  #ifdef __KVM_HAVE_MSI
> > > > > -static irqreturn_t kvm_assigned_dev_msi(int irq, void *dev_id)
> > > > > -{
> > > > > -	return IRQ_WAKE_THREAD;
> > > > > -}
> > > > > -
> > > > >  static int assigned_device_enable_host_msi(struct kvm *kvm,
> > > > >  					   struct kvm_assigned_dev_kernel *dev)
> > > > >  {
> > > > > @@ -363,11 +384,6 @@ static int assigned_device_enable_host_msi(struct kvm *kvm,
> > > > >  #endif
> > > > >  
> > > > >  #ifdef __KVM_HAVE_MSIX
> > > > > -static irqreturn_t kvm_assigned_dev_msix(int irq, void *dev_id)
> > > > > -{
> > > > > -	return IRQ_WAKE_THREAD;
> > > > > -}
> > > > > -
> > > > >  static int assigned_device_enable_host_msix(struct kvm *kvm,
> > > > >  					    struct kvm_assigned_dev_kernel *dev)
> > > > >  {
> > > > > -- 
> > > > > MST
> > > > 
> > > > --
> > > > 			Gleb.
> > 
> > --
> > 			Gleb.

--
			Gleb.

Thread overview: 16+ messages
2012-10-17 16:05 [PATCHv4 0/2] kvm: direct msix injection Michael S. Tsirkin
2012-10-17 16:06 ` [PATCHv4 1/2] kvm: add kvm_set_irq_inatomic Michael S. Tsirkin
2012-10-17 16:06 ` [PATCHv4 2/2] kvm: deliver msi interrupts from irq handler Michael S. Tsirkin
2012-11-28 11:43   ` Gleb Natapov
2012-11-28 11:56     ` Michael S. Tsirkin
2012-11-28 12:13       ` Gleb Natapov
2012-11-28 12:22         ` Michael S. Tsirkin
2012-11-28 12:45           ` Gleb Natapov [this message]
2012-11-28 13:25             ` Michael S. Tsirkin
2012-11-28 13:38               ` Gleb Natapov
2012-11-28 15:25                 ` Michael S. Tsirkin
2012-11-28 15:25                   ` Gleb Natapov
2012-11-21 19:26 ` [PATCHv4 0/2] kvm: direct msix injection Michael S. Tsirkin
2012-11-28  4:34   ` Alex Williamson
2012-11-28 11:19     ` Michael S. Tsirkin
2012-12-05 13:12 ` Gleb Natapov