public inbox for kvm@vger.kernel.org
From: ehrhardt@linux.vnet.ibm.com
To: kvm@vger.kernel.org, avi@redhat.com
Cc: ehrhardt@linux.vnet.ibm.com, borntraeger@de.ibm.com,
	cotte@de.ibm.com, heiko.carstens@de.ibm.com,
	schwidefsky@de.ibm.com, mtosatti@redhat.com
Subject: [PATCH 3/3] kvm-s390: streamline memslot handling - rebased
Date: Tue,  2 Jun 2009 16:26:11 +0200
Message-ID: <1243952771-32428-4-git-send-email-ehrhardt@linux.vnet.ibm.com>
In-Reply-To: <1243952771-32428-1-git-send-email-ehrhardt@linux.vnet.ibm.com>

From: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>

As requested, this is a rebased patch on top of the already applied v3 of the
patch series.

*updates to applied version*
- ensure the wait_on_bit waiter is notified (see the handshake sketch after
  this list)
- ensure all pending vcpu requests are dropped when a vcpu is freed
- kick out only scheduled vcpus (kicking vcpus that are not running is
  superfluous, and the wait might hang forever on them)
- kvm_arch_set_memory_region waits until the bit is consumed by the vcpu
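
The waiter/waker handshake mentioned above boils down to the following
pattern. This is a minimal sketch built on the 2.6.30-era bit-wait primitives
(wait_on_bit, wake_up_bit, smp_mb__after_clear_bit); REQ_BIT and the
requester_side/consumer_side helpers are illustrative only and not part of
the patch:

	/* needs <linux/wait.h>, <linux/sched.h>, <linux/bitops.h> */
	#define REQ_BIT	0	/* stand-in for KVM_REQ_MMU_RELOAD */

	static int wait_bit_schedule(void *word)
	{
		schedule();	/* give up the cpu until the waker runs */
		return 0;	/* not interrupted, keep waiting */
	}

	/* requester: raise the request and block until it is consumed */
	static void requester_side(unsigned long *requests)
	{
		if (!test_and_set_bit(REQ_BIT, requests)) {
			/* the real code also kicks the vcpu out of guest
			 * context here so the bit is consumed promptly */
			wait_on_bit(requests, REQ_BIT, wait_bit_schedule,
				    TASK_UNINTERRUPTIBLE);
		}
	}

	/* consumer: clear the request and wake a possible waiter */
	static void consumer_side(unsigned long *requests)
	{
		clear_bit(REQ_BIT, requests);
		smp_mb__after_clear_bit();	/* clear before waitqueue check */
		wake_up_bit(requests, REQ_BIT);
	}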

This patch relocates the variables kvm-s390 uses to track the guest memory
address and size. As discussed, dropping these variables at struct kvm_arch
level allows us to use the common vcpu->requests based mechanism to reload the
guest memory setup when it changes, e.g. via set_memory_region.
The kick mechanism introduced earlier in this series is used to ensure that
running vcpus leave guest state to pick up the update.
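
Condensed, the flow added to kvm_arch_set_memory_region by the diff below
looks like this for every existing vcpu (sketch only, mirroring the hunk
further down; ACTION_VCPUREQUEST_ON_STOP comes from patch 1/3 of this series):

	if (test_and_set_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests))
		continue;	/* a reload is already pending for this vcpu */
	if (vcpu->cpu == -1)
		continue;	/* not scheduled, it sees the bit on next entry */
	/* kick the vcpu out of guest context ... */
	kvm_s390_inject_sigp_stop(vcpu, ACTION_VCPUREQUEST_ON_STOP);
	/* ... and wait until it has cleared the bit and issued wake_up_bit() */
	wait_on_bit(&vcpu->requests, KVM_REQ_MMU_RELOAD,
		    wait_bit_schedule, TASK_UNINTERRUPTIBLE);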


Signed-off-by: Christian Ehrhardt <ehrhardt@linux.vnet.ibm.com>
---

[diffstat]
 arch/s390/kvm/kvm-s390.c |   27 ++++++++++++++++++++-------
 arch/s390/kvm/kvm-s390.h |    7 +++++++
 virt/kvm/kvm_main.c      |    4 ++++
 3 files changed, 31 insertions(+), 7 deletions(-)

Index: kvm/arch/s390/kvm/kvm-s390.c
===================================================================
--- kvm.orig/arch/s390/kvm/kvm-s390.c
+++ kvm/arch/s390/kvm/kvm-s390.c
@@ -674,6 +674,12 @@ long kvm_arch_vcpu_ioctl(struct file *fi
 	return -EINVAL;
 }
 
+static int wait_bit_schedule(void *word)
+{
+	schedule();
+	return 0;
+}
+
 /* Section: memory related */
 int kvm_arch_set_memory_region(struct kvm *kvm,
 				struct kvm_userspace_memory_region *mem,
@@ -681,6 +687,7 @@ int kvm_arch_set_memory_region(struct kv
 				int user_alloc)
 {
 	int i;
+	struct kvm_vcpu *vcpu;
 
 	/* A few sanity checks. We can have exactly one memory slot which has
 	   to start at guest virtual zero and which has to be located at a
@@ -706,13 +713,19 @@ int kvm_arch_set_memory_region(struct kv
 
 	/* request update of sie control block for all available vcpus */
 	for (i = 0; i < KVM_MAX_VCPUS; ++i) {
-		if (kvm->vcpus[i]) {
-			if (test_and_set_bit(KVM_REQ_MMU_RELOAD,
-						&kvm->vcpus[i]->requests))
-				continue;
-			kvm_s390_inject_sigp_stop(kvm->vcpus[i],
-						  ACTION_VCPUREQUEST_ON_STOP);
-		}
+		vcpu = kvm->vcpus[i];
+		if (!vcpu)
+			continue;
+
+		if (test_and_set_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests))
+			continue;
+
+		if (vcpu->cpu == -1)
+			continue;
+
+		kvm_s390_inject_sigp_stop(vcpu, ACTION_VCPUREQUEST_ON_STOP);
+		wait_on_bit(&vcpu->requests, KVM_REQ_MMU_RELOAD,
+			    wait_bit_schedule, TASK_UNINTERRUPTIBLE);
 	}
 
 	return 0;
Index: kvm/arch/s390/kvm/kvm-s390.h
===================================================================
--- kvm.orig/arch/s390/kvm/kvm-s390.h
+++ kvm/arch/s390/kvm/kvm-s390.h
@@ -92,6 +92,13 @@ static inline unsigned long kvm_s390_han
 	if (!vcpu->requests)
 		return 0;
 
+	/* requests that can be handled at all levels */
+	if (test_and_clear_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests)) {
+		smp_mb__after_clear_bit();
+		wake_up_bit(&vcpu->requests, KVM_REQ_MMU_RELOAD);
+		kvm_s390_vcpu_set_mem(vcpu);
+	}
+
 	return vcpu->requests;
 }
 
Index: kvm/virt/kvm/kvm_main.c
===================================================================
--- kvm.orig/virt/kvm/kvm_main.c
+++ kvm/virt/kvm/kvm_main.c
@@ -1682,6 +1682,10 @@ static int kvm_vcpu_release(struct inode
 {
 	struct kvm_vcpu *vcpu = filp->private_data;
 
+	clear_bit(KVM_REQ_MMU_RELOAD, &vcpu->requests);
+	smp_mb__after_clear_bit();
+	wake_up_bit(&vcpu->requests, KVM_REQ_MMU_RELOAD);
+
 	kvm_put_kvm(vcpu->kvm);
 	return 0;
 }


Thread overview: 12+ messages
2009-06-02 14:26 [PATCH 0/3] kvm-s390: revised version of kvm-s390 guest memory handling - rebased ehrhardt
2009-06-02 14:26 ` [PATCH 1/3] kvm-s390: infrastructure to kick vcpus out of guest state " ehrhardt
2009-06-02 14:26 ` [PATCH 2/3] kvm-s390: update vcpu->cpu " ehrhardt
2009-06-02 14:26 ` ehrhardt [this message]
2009-06-05 20:53   ` [PATCH 3/3] kvm-s390: streamline memslot handling " Marcelo Tosatti
2009-06-08 10:51     ` Christian Ehrhardt
2009-06-08 11:10       ` Avi Kivity
2009-06-08 12:05         ` Christian Ehrhardt
2009-06-08 12:09           ` Avi Kivity
2009-06-09  0:56       ` Marcelo Tosatti
2009-06-14 12:04         ` Avi Kivity
2009-06-15 13:47           ` Christian Ehrhardt
