From: c.dall@virtualopensystems.com (Christoffer Dall)
To: linux-arm-kernel@lists.infradead.org
Subject: [PATCH 09/10] ARM: KVM: vgic: reduce the number of vcpu kicks
Date: Sat, 15 Sep 2012 11:38:28 -0400	[thread overview]
Message-ID: <20120915153828.21545.82376.stgit@ubuntu> (raw)
In-Reply-To: <20120915153657.21545.3972.stgit@ubuntu>

From: Marc Zyngier <marc.zyngier@arm.com>

If we have level interrupts already programmed to fire on a vcpu,
there is no reason to kick it after injecting a new interrupt,
as we're guaranteed to exit when the level interrupt is EOIed
(VGIC_LR_EOI is set).

The exit will force a reload of the VGIC, injecting the new interrupts.
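
To make the race explicit, here is a minimal, self-contained C11 sketch of
the idea. The names and types are illustrative stand-ins, not the kernel's;
the real logic is the kvm_arch_vcpu_should_kick() change and the list
register handling in the patch below.

#include <stdatomic.h>
#include <stdio.h>

enum demo_mode { OUTSIDE_GUEST_MODE, IN_GUEST_MODE, EXITING_GUEST_MODE };

struct demo_vcpu {
	_Atomic int mode;		/* stand-in for vcpu->mode */
	atomic_int irq_active_count;	/* level IRQs latched with EOI set */
};

/* Queue side: a level interrupt is placed in a list register with the
 * EOI maintenance bit set, so a future exit is guaranteed. */
static void demo_latch_level_irq(struct demo_vcpu *vcpu)
{
	atomic_fetch_add(&vcpu->irq_active_count, 1);
}

/* Maintenance side: the guest EOIed the level interrupt; drop the count
 * and make the decrement visible before re-entering the guest. */
static void demo_maintenance_eoi(struct demo_vcpu *vcpu)
{
	atomic_fetch_sub(&vcpu->irq_active_count, 1);
	atomic_thread_fence(memory_order_seq_cst);
}

/* Injection side: decide whether an IPI is needed.
 * Returns 1 to kick, 0 if an exit is already guaranteed. */
static int demo_should_kick(struct demo_vcpu *vcpu)
{
	int in_guest = IN_GUEST_MODE;
	int exiting = EXITING_GUEST_MODE;

	/* Request an exit (mimics kvm_vcpu_exiting_guest_mode()). */
	if (!atomic_compare_exchange_strong(&vcpu->mode, &in_guest,
					    EXITING_GUEST_MODE))
		return 0;	/* not running in guest mode */

	/* A level interrupt with EOI notification is in flight, so the
	 * vcpu will exit on its own; put it back into guest mode and
	 * skip the IPI. If it raced and already left guest mode, the
	 * cmpxchg fails and we kick as before. */
	if (atomic_load(&vcpu->irq_active_count) != 0 &&
	    atomic_compare_exchange_strong(&vcpu->mode, &exiting,
					   IN_GUEST_MODE))
		return 0;

	return 1;
}

int main(void)
{
	struct demo_vcpu vcpu = { IN_GUEST_MODE, 0 };

	printf("no level IRQ in flight -> kick: %d\n", demo_should_kick(&vcpu));

	vcpu.mode = IN_GUEST_MODE;	/* vcpu re-entered the guest */
	demo_latch_level_irq(&vcpu);
	printf("level IRQ in flight    -> kick: %d\n", demo_should_kick(&vcpu));

	demo_maintenance_eoi(&vcpu);	/* guest EOIs, counter drops back */
	return 0;
}

The second compare-and-exchange is what keeps this safe: skipping the kick
is only allowed while the vcpu is still in guest mode with a level
interrupt latched; any other interleaving falls back to sending the IPI.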

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
---
 arch/arm/include/asm/kvm_vgic.h |   10 ++++++++++
 arch/arm/kvm/arm.c              |   10 +++++++++-
 arch/arm/kvm/vgic.c             |   10 ++++++++--
 3 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/arch/arm/include/asm/kvm_vgic.h b/arch/arm/include/asm/kvm_vgic.h
index c5601f2..d722541 100644
--- a/arch/arm/include/asm/kvm_vgic.h
+++ b/arch/arm/include/asm/kvm_vgic.h
@@ -213,6 +213,9 @@ struct vgic_cpu {
 	u32		vgic_elrsr[2];	/* Saved only */
 	u32		vgic_apr;
 	u32		vgic_lr[64];	/* A15 has only 4... */
+
+	/* Number of level-triggered interrupts in progress */
+	atomic_t	irq_active_count;
 #endif
 };
 
@@ -249,6 +252,8 @@ bool vgic_handle_mmio(struct kvm_vcpu *vcpu, struct kvm_run *run,
 		      struct kvm_exit_mmio *mmio);
 
 #define irqchip_in_kernel(k)	(!!((k)->arch.vgic.vctrl_base))
+#define vgic_active_irq(v)	(atomic_read(&(v)->arch.vgic_cpu.irq_active_count) != 0)
+
 #else
 static inline int kvm_vgic_hyp_init(void)
 {
@@ -285,6 +290,11 @@ static inline int irqchip_in_kernel(struct kvm *kvm)
 {
 	return 0;
 }
+
+static inline int vgic_active_irq(struct kvm_vcpu *vcpu)
+{
+	return 0;
+}
 #endif
 
 #endif
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index fdd69ef..b959863 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -95,7 +95,15 @@ int kvm_arch_hardware_enable(void *garbage)
 
 int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
 {
-	return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
+	if (kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE) {
+		if (vgic_active_irq(vcpu) &&
+		    cmpxchg(&vcpu->mode, EXITING_GUEST_MODE, IN_GUEST_MODE) == EXITING_GUEST_MODE)
+			return 0;
+
+		return 1;
+	}
+
+	return 0;
 }
 
 void kvm_arch_hardware_disable(void *garbage)
diff --git a/arch/arm/kvm/vgic.c b/arch/arm/kvm/vgic.c
index fc2a138..63fe0dd 100644
--- a/arch/arm/kvm/vgic.c
+++ b/arch/arm/kvm/vgic.c
@@ -674,8 +674,10 @@ static bool vgic_queue_irq(struct kvm_vcpu *vcpu, u8 sgi_source_id, int irq)
 		kvm_debug("LR%d piggyback for IRQ%d %x\n", lr, irq, vgic_cpu->vgic_lr[lr]);
 		BUG_ON(!test_bit(lr, vgic_cpu->lr_used));
 		vgic_cpu->vgic_lr[lr] |= VGIC_LR_PENDING_BIT;
-		if (is_level)
+		if (is_level) {
 			vgic_cpu->vgic_lr[lr] |= VGIC_LR_EOI;
+			atomic_inc(&vgic_cpu->irq_active_count);
+		}
 		return true;
 	}
 
@@ -687,8 +689,10 @@ static bool vgic_queue_irq(struct kvm_vcpu *vcpu, u8 sgi_source_id, int irq)
 
 	kvm_debug("LR%d allocated for IRQ%d %x\n", lr, irq, sgi_source_id);
 	vgic_cpu->vgic_lr[lr] = MK_LR_PEND(sgi_source_id, irq);
-	if (is_level)
+	if (is_level) {
 		vgic_cpu->vgic_lr[lr] |= VGIC_LR_EOI;
+		atomic_inc(&vgic_cpu->irq_active_count);
+	}
 
 	vgic_cpu->vgic_irq_lr_map[irq] = lr;
 	clear_bit(lr, (unsigned long *)vgic_cpu->vgic_elrsr);
@@ -963,6 +967,8 @@ static irqreturn_t vgic_maintenance_handler(int irq, void *data)
 
 			vgic_bitmap_set_irq_val(&dist->irq_active,
 						vcpu->vcpu_id, irq, 0);
+			atomic_dec(&vgic_cpu->irq_active_count);
+			smp_mb();
 			vgic_cpu->vgic_lr[lr] &= ~VGIC_LR_EOI;
 			writel_relaxed(vgic_cpu->vgic_lr[lr],
 				       dist->vctrl_base + GICH_LR0 + (lr << 2));

