From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 24 Dec 2010 15:21:35 +0200
From: "Michael S. Tsirkin"
Message-ID: <20101224132135.GD24424@redhat.com>
References: <20101202120213.GA2454@redhat.com> <20101216095140.GB19495@redhat.com> <20101216144010.GA25333@redhat.com> <20101224092710.GA23271@redhat.com>
Subject: [Qemu-devel] Re: [PATCH 05/21] virtio: modify save/load handler to handle inuse variable.
List-Id: qemu-devel.nongnu.org
To: Yoshiaki Tamura
Cc: aliguori@us.ibm.com, dlaor@redhat.com, ananth@in.ibm.com, kvm@vger.kernel.org,
 ohmura.kei@lab.ntt.co.jp, Marcelo Tosatti, qemu-devel@nongnu.org,
 vatsa@linux.vnet.ibm.com, avi@redhat.com, psuriset@linux.vnet.ibm.com,
 stefanha@linux.vnet.ibm.com

On Fri, Dec 24, 2010 at 08:42:19PM +0900, Yoshiaki Tamura wrote:
> 2010/12/24 Michael S. Tsirkin :
> > On Fri, Dec 17, 2010 at 12:59:58AM +0900, Yoshiaki Tamura wrote:
> >> 2010/12/16 Michael S. Tsirkin :
> >> > On Thu, Dec 16, 2010 at 11:28:46PM +0900, Yoshiaki Tamura wrote:
> >> >> 2010/12/16 Michael S. Tsirkin :
> >> >> > On Thu, Dec 16, 2010 at 04:36:16PM +0900, Yoshiaki Tamura wrote:
> >> >> >> 2010/12/3 Yoshiaki Tamura :
> >> >> >> > 2010/12/2 Michael S. Tsirkin :
> >> >> >> >> On Wed, Dec 01, 2010 at 05:03:43PM +0900, Yoshiaki Tamura wrote:
> >> >> >> >>> 2010/11/28 Michael S. Tsirkin :
> >> >> >> >>> > On Sun, Nov 28, 2010 at 08:27:58PM +0900, Yoshiaki Tamura wrote:
> >> >> >> >>> >> 2010/11/28 Michael S. Tsirkin :
> >> >> >> >>> >> > On Thu, Nov 25, 2010 at 03:06:44PM +0900, Yoshiaki Tamura wrote:
> >> >> >> >>> >> >> Modify inuse type to uint16_t, let save/load to handle, and revert
> >> >> >> >>> >> >> last_avail_idx with inuse if there are outstanding emulation.
> >> >> >> >>> >> >>
> >> >> >> >>> >> >> Signed-off-by: Yoshiaki Tamura
> >> >> >> >>> >> >
> >> >> >> >>> >> > This changes migration format, so it will break compatibility with
> >> >> >> >>> >> > existing drivers. More generally, I think migrating internal
> >> >> >> >>> >> > state that is not guest visible is always a mistake
> >> >> >> >>> >> > as it ties migration format to an internal implementation
> >> >> >> >>> >> > (yes, I know we do this sometimes, but we should at least
> >> >> >> >>> >> > try not to add such cases).  I think the right thing to do in this case
> >> >> >> >>> >> > is to flush outstanding
> >> >> >> >>> >> > work when vm is stopped.  Then, we are guaranteed that inuse is 0.
> >> >> >> >>> >> > I sent patches that do this for virtio net and block.
> >> >> >> >>> >>
> >> >> >> >>> >> Could you give me the link of your patches?  I'd like to test
> >> >> >> >>> >> whether they work with Kemari upon failover.  If they do, I'm
> >> >> >> >>> >> happy to drop this patch.
> >> >> >> >>> >>
> >> >> >> >>> >> Yoshi
> >> >> >> >>> >
> >> >> >> >>> > Look for this:
> >> >> >> >>> > stable migration image on a stopped vm
> >> >> >> >>> > sent on:
> >> >> >> >>> > Wed, 24 Nov 2010 17:52:49 +0200
> >> >> >> >>>
> >> >> >> >>> Thanks for the info.
> >> >> >> >>>
> >> >> >> >>> However, the patch series above didn't solve the issue.  In
> >> >> >> >>> case of Kemari, inuse is mostly > 0 because it queues the
> >> >> >> >>> output, and while last_avail_idx gets incremented
> >> >> >> >>> immediately, not sending inuse makes the state inconsistent
> >> >> >> >>> between Primary and Secondary.
> >> >> >> >>
> >> >> >> >> Hmm. Can we simply avoid incrementing last_avail_idx?
> >> >> >> >
> >> >> >> > I think we can calculate or prepare an internal last_avail_idx,
> >> >> >> > and update the external when inuse is decremented.  I'll try
> >> >> >> > whether it works w/ w/o Kemari.
> >> >> >>
> >> >> >> Hi Michael,
> >> >> >>
> >> >> >> Could you please take a look at the following patch?
> >> >> >
> >> >> > Which version is this against?
> >> >>
> >> >> Oops.  It should be very old.
> >> >> 67f895bfe69f323b427b284430b6219c8a62e8d4
> >> >>
> >> >> >> commit 36ee7910059e6b236fe9467a609f5b4aed866912
> >> >> >> Author: Yoshiaki Tamura
> >> >> >> Date:   Thu Dec 16 14:50:54 2010 +0900
> >> >> >>
> >> >> >>     virtio: update last_avail_idx when inuse is decreased.
> >> >> >>
> >> >> >>     Signed-off-by: Yoshiaki Tamura
> >> >> >
> >> >> > It would be better to have a commit description explaining why a change
> >> >> > is made, and why it is correct, not just repeating what can be seen from
> >> >> > the diff anyway.
> >> >>
> >> >> Sorry for being lazy here.
> >> >>
> >> >> >> diff --git a/hw/virtio.c b/hw/virtio.c
> >> >> >> index c8a0fc6..6688c02 100644
> >> >> >> --- a/hw/virtio.c
> >> >> >> +++ b/hw/virtio.c
> >> >> >> @@ -237,6 +237,7 @@ void virtqueue_flush(VirtQueue *vq, unsigned int count)
> >> >> >>      wmb();
> >> >> >>      trace_virtqueue_flush(vq, count);
> >> >> >>      vring_used_idx_increment(vq, count);
> >> >> >> +    vq->last_avail_idx += count;
> >> >> >>      vq->inuse -= count;
> >> >> >>  }
> >> >> >>
> >> >> >> @@ -385,7 +386,7 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
> >> >> >>      unsigned int i, head, max;
> >> >> >>      target_phys_addr_t desc_pa = vq->vring.desc;
> >> >> >>
> >> >> >> -    if (!virtqueue_num_heads(vq, vq->last_avail_idx))
> >> >> >> +    if (!virtqueue_num_heads(vq, vq->last_avail_idx + vq->inuse))
> >> >> >>          return 0;
> >> >> >>
> >> >> >>      /* When we start there are none of either input nor output. */
> >> >> >> @@ -393,7 +394,7 @@ int virtqueue_pop(VirtQueue *vq, VirtQueueElement *elem)
> >> >> >>
> >> >> >>      max = vq->vring.num;
> >> >> >>
> >> >> >> -    i = head = virtqueue_get_head(vq, vq->last_avail_idx++);
> >> >> >> +    i = head = virtqueue_get_head(vq, vq->last_avail_idx + vq->inuse);
> >> >> >>
> >> >> >>      if (vring_desc_flags(desc_pa, i) & VRING_DESC_F_INDIRECT) {
> >> >> >>          if (vring_desc_len(desc_pa, i) % sizeof(VRingDesc)) {
> >> >> >>
> >> >> >
> >> >> > Hmm, will virtio_queue_empty be wrong now? What about virtqueue_avail_bytes?
> >> >>
> >> >> I think there are two problems.
> >> >>
> >> >> 1. When to update last_avail_idx.
> >> >> 2. The ordering issue you're mentioning below.
> >> >>
> >> >> The patch above is only trying to address 1 because last time you
> >> >> mentioned that modifying last_avail_idx upon save may break the
> >> >> guest, which I agree.  If virtio_queue_empty and
> >> >> virtqueue_avail_bytes are only used internally, meaning invisible
> >> >> to the guest, I guess the approach above can be applied too.
> >> >
> >> > So IMHO 2 is the real issue. This is what was problematic
> >> > with the save patch, otherwise of course changes in save
> >> > are better than changes all over the codebase.
> >>
> >> All right.  Then let's focus on 2 first.
> >>
> >> >> > Previous patch version sure looked simpler, and this seems functionally
> >> >> > equivalent, so my question still stands: here it is rephrased in a
> >> >> > different way:
> >> >> >
> >> >> >        assume that we have in avail ring 2 requests at start of ring: A and B in this order
> >> >> >
> >> >> >        host pops A, then B, then completes B and flushes
> >> >> >
> >> >> >        now with this patch last_avail_idx will be 1, and then
> >> >> >        remote will get it, it will execute B again. As a result
> >> >> >        B will complete twice, and apparently A will never complete.
> >> >> >
> >> >> >
> >> >> > This is what I was saying below: assuming that there are
> >> >> > outstanding requests when we migrate, there is no way
> >> >> > a single index can be enough to figure out which requests
> >> >> > need to be handled and which are in flight already.
> >> >> >
> >> >> > We must add some kind of bitmask to tell us which is which.
> >> >>
> >> >> I should understand why this inversion can happen before solving
> >> >> the issue.
> >> >
> >> > It's a fundamental thing in virtio.
> >> > I think it is currently only likely to happen with block, I think tap
> >> > currently completes things in order.  In any case relying on this in the
> >> > frontend is a mistake.
> >> >
> >> >>  Currently, how are you making virtio-net to flush
> >> >> every request for live migration?  Is it qemu_aio_flush()?
> >> >
> >> > Think so.
> >>
> >> If qemu_aio_flush() is responsible for flushing the outstanding
> >> virtio-net requests, I'm wondering why it's a problem for Kemari.
> >> As I described in the previous message, Kemari queues the
> >> requests first.  So in your example above, it should start with
> >>
> >> virtio-net: last_avail_idx 0 inuse 2
> >> event-tap: {A,B}
> >>
> >> As you know, the requests are still in order because the net
> >> layer initiates them in order.  Not about completing.
> >>
> >> In the first synchronization, the status above is transferred.  In
> >> the next synchronization, the status will be as follows.
> >>
> >> virtio-net: last_avail_idx 1 inuse 1
> >> event-tap: {B}
> >
> > OK, this answers the ordering question.
>
> Glad to hear that!
>
> > Another question: at this point we transfer this status: both
> > event-tap and virtio ring have the command B,
> > so the remote will have:
> >
> > virtio-net: inuse 0
> > event-tap: {B}
> >
> > Is this right? This already seems to be a problem as when B completes
> > inuse will go negative?
>
> I think the state above is wrong.  inuse 0 means there shouldn't be
> any requests in event-tap.  Note that the callback is called only
> when event-tap flushes the requests.
>
> > Next it seems that the remote virtio will resubmit B to event-tap. The
> > remote will then have:
> >
> > virtio-net: inuse 1
> > event-tap: {B, B}
> >
> > This looks kind of wrong ... will two packets go out?
>
> No. Currently, we're just replaying the requests with pio/mmio.
> In the situation above, it should be,
>
> virtio-net: inuse 1
> event-tap: {B}
>
> >> Why? Because Kemari flushes the first virtio-net request using
> >> qemu_aio_flush() before each synchronization.  If
> >> qemu_aio_flush() doesn't guarantee the order, what you pointed
> >> out should be problematic.  So in the final synchronization, the
> >> state should be,
> >>
> >> virtio-net: last_avail_idx 2 inuse 0
> >> event-tap: {}
> >>
> >> where A,B were completed in order.
> >>
> >> Yoshi
> >
> >
> > It might be better to discuss block because that's where
> > requests can complete out of order.
>
> It's the same as net.  We queue requests and call bdrv_flush per
> sending requests to the block.  So there shouldn't be any
> inversion.
>
> > So let me see if I understand:
> > - each command passed to event tap is queued by it,
> >   it is not passed directly to the backend
> > - later requests are passed to the backend,
> >   always in the same order that they were submitted
> > - each synchronization point flushes all requests
> >   passed to the backend so far
> > - each synchronization transfers all requests not passed to the backend,
> >   to the remote, and they are replayed there
>
> Correct.
>
> > Now to analyse this for correctness I am looking at the original patch
> > because it is smaller so easier to analyse and I think it is
> > functionally equivalent, correct me if I am wrong in this.
>
> So you think decreasing last_avail_idx upon save is better than
> updating it in the callback?

If this is correct, of the two equivalent approaches the one that
only touches save/load seems superior.

> > So the reason there's no out of order issue is this
> > (and might be a good thing to put in commit log
> > or a comment somewhere):
>
> I've done some in the latest patch.  Please point it out if it
> wasn't enough.
>
> > At point of save callback event tap has flushed commands
> > passed to the backend already. Thus at the point of
> > the save callback if a command has completed
> > all previous commands have been flushed and completed.
> >
> >
> > Therefore inuse is
> > in fact the # of requests passed to event tap but not yet
> > passed to the backend (for non-event tap case all commands are
> > passed to the backend immediately and because of this
> > inuse is 0) and these are the last inuse commands submitted.
> >
> >
> > Right?
>
> Yep.
>
> > Now a question:
> >
> > When we pass last_used_index - inuse to the remote,
> > the remote virtio will resubmit the request.
> > Since request is also passed by event tap, we get
> > the request twice, why is this not a problem?
>
> It's not a problem because event-tap currently replays with
> pio/mmio only, as I mentioned above.  Although event-tap receives
> information about the queued requests, it won't pass it to the
> backend.  The reason is the problem in setting the callbacks
> which are specific to devices on the secondary.  These are
> pointers, and even worse, are usually static functions, which
> event-tap has no way to restore it upon failover.  I do want to
> change event-tap replay to be this way in the future; pio/mmio
> replay is implemented for now.
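
For illustration, here is a minimal, self-contained C sketch of the
ordering problem discussed above.  It is not QEMU code: the ring size,
the request names A and B, and the per-entry bitmap are assumptions
invented for the example.  With out-of-order completion,
last_avail_idx - inuse lands on B, so a secondary resuming from that
index would run B a second time and never complete A, whereas a
per-entry bitmap records exactly which entries are still in flight.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define RING_SIZE 4

int main(void)
{
    uint16_t last_avail_idx = 0;              /* next avail entry to pop */
    uint16_t inuse = 0;                       /* popped but not yet completed */
    bool outstanding[RING_SIZE] = { false };  /* per-entry "in flight" bitmap */

    /* Host pops request A (entry 0), then request B (entry 1). */
    outstanding[last_avail_idx % RING_SIZE] = true; last_avail_idx++; inuse++;
    outstanding[last_avail_idx % RING_SIZE] = true; last_avail_idx++; inuse++;

    /* B completes and is flushed first, i.e. out of order. */
    outstanding[1] = false;
    inuse--;

    /* A single index can only say "resume from here"; it lands on B. */
    printf("single-index resume point: entry %u\n",
           (unsigned)(uint16_t)(last_avail_idx - inuse));

    /* The bitmap names exactly the entries that are still in flight. */
    for (unsigned i = 0; i < RING_SIZE; i++) {
        if (outstanding[i]) {
            printf("entry %u (request A) is still outstanding\n", i);
        }
    }
    return 0;
}
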
>
> Thanks,
>
> Yoshi
>
> >
> >
> >> >
> >> >> >
> >> >> >> >
> >> >> >> >>
> >> >> >> >>>  I'm wondering why
> >> >> >> >>> last_avail_idx is OK to send but not inuse.
> >> >> >> >>
> >> >> >> >> last_avail_idx is at some level a mistake, it exposes part of
> >> >> >> >> our internal implementation, but it does *also* express
> >> >> >> >> a guest observable state.
> >> >> >> >>
> >> >> >> >> Here's the problem that it solves: just looking at the rings in virtio
> >> >> >> >> there is no way to detect that a specific request has already been
> >> >> >> >> completed. And the protocol forbids completing the same request twice.
> >> >> >> >>
> >> >> >> >> Our implementation always starts processing the requests
> >> >> >> >> in order, and since we flush outstanding requests
> >> >> >> >> before save, it works to just tell the remote 'process only requests
> >> >> >> >> after this place'.
> >> >> >> >>
> >> >> >> >> But there's no such requirement in the virtio protocol,
> >> >> >> >> so to be really generic we could add a bitmask of valid avail
> >> >> >> >> ring entries that did not complete yet. This would be
> >> >> >> >> the exact representation of the guest observable state.
> >> >> >> >> In practice we have rings of up to 512 entries.
> >> >> >> >> That's 64 bytes per ring, not a lot at all.
> >> >> >> >>
> >> >> >> >> However, if we ever do change the protocol to send the bitmask,
> >> >> >> >> we would need some code to resubmit requests
> >> >> >> >> out of order, so it's not trivial.
> >> >> >> >>
> >> >> >> >> Another minor mistake with last_avail_idx is that it has
> >> >> >> >> some redundancy: the high bits in the index
> >> >> >> >> (> vq size) are not necessary as they can be
> >> >> >> >> got from avail idx.  There's a consistency check
> >> >> >> >> in load but we really should try to use formats
> >> >> >> >> that are always consistent.
> >> >> >> >>
> >> >> >> >>> The following patch does the same thing as original, yet
> >> >> >> >>> keeps the format of the virtio.  It shouldn't break live
> >> >> >> >>> migration either because inuse should be 0.
> >> >> >> >>>
> >> >> >> >>> Yoshi
> >> >> >> >>
> >> >> >> >> Question is, can you flush to make inuse 0 in kemari too?
> >> >> >> >> And if not, how do you handle the fact that some requests
> >> >> >> >> are in flight on the primary?
> >> >> >> >
> >> >> >> > Although we try flushing requests one by one making inuse 0,
> >> >> >> > there are cases when it failovers to the secondary when inuse
> >> >> >> > isn't 0.  We handle these in-flight requests on the primary by
> >> >> >> > replaying on the secondary.
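
As a rough sketch of the bitmask idea quoted above (this is not the QEMU
migration format; struct vq_state, vq_mark and the constants are invented
for illustration), one bit per avail-ring entry is enough: 512 entries fit
in 64 bytes per ring, and on load the remote would resubmit only the
entries still set, possibly out of order.

#include <stdio.h>
#include <stdint.h>

#define VQ_MAX_SIZE  512                  /* ring sizes of up to 512 entries */
#define BITMAP_BYTES (VQ_MAX_SIZE / 8)    /* 512 bits -> 64 bytes per ring */

/* Hypothetical per-queue state: one bit per avail entry that was popped
 * but has not completed yet. */
struct vq_state {
    unsigned num;                         /* actual ring size */
    uint8_t outstanding[BITMAP_BYTES];
};

static void vq_mark(struct vq_state *vq, unsigned idx, int in_flight)
{
    unsigned slot = idx % vq->num;

    if (in_flight) {
        vq->outstanding[slot / 8] |= 1u << (slot % 8);
    } else {
        vq->outstanding[slot / 8] &= ~(1u << (slot % 8));
    }
}

int main(void)
{
    struct vq_state vq = { .num = VQ_MAX_SIZE };

    vq_mark(&vq, 0, 1);   /* request A popped, still in flight */
    vq_mark(&vq, 1, 1);   /* request B popped ...               */
    vq_mark(&vq, 1, 0);   /* ... and completed out of order     */

    /* On save, this bitmap would be sent instead of a single index; on
     * load, the receiver walks it and resubmits only the entries still
     * set, which may require out-of-order resubmission as noted above. */
    printf("bitmap to migrate per ring: %d bytes\n", BITMAP_BYTES);
    return 0;
}
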
> >> >> >> >
> >> >> >> >>
> >> >> >> >>> diff --git a/hw/virtio.c b/hw/virtio.c
> >> >> >> >>> index c8a0fc6..875c7ca 100644
> >> >> >> >>> --- a/hw/virtio.c
> >> >> >> >>> +++ b/hw/virtio.c
> >> >> >> >>> @@ -664,12 +664,16 @@ void virtio_save(VirtIODevice *vdev, QEMUFile *f)
> >> >> >> >>>      qemu_put_be32(f, i);
> >> >> >> >>>
> >> >> >> >>>      for (i = 0; i < VIRTIO_PCI_QUEUE_MAX; i++) {
> >> >> >> >>> +        uint16_t last_avail_idx;
> >> >> >> >>> +
> >> >> >> >>>          if (vdev->vq[i].vring.num == 0)
> >> >> >> >>>              break;
> >> >> >> >>>
> >> >> >> >>> +        last_avail_idx = vdev->vq[i].last_avail_idx - vdev->vq[i].inuse;
> >> >> >> >>> +
> >> >> >> >>>          qemu_put_be32(f, vdev->vq[i].vring.num);
> >> >> >> >>>          qemu_put_be64(f, vdev->vq[i].pa);
> >> >> >> >>> -        qemu_put_be16s(f, &vdev->vq[i].last_avail_idx);
> >> >> >> >>> +        qemu_put_be16s(f, &last_avail_idx);
> >> >> >> >>>          if (vdev->binding->save_queue)
> >> >> >> >>>              vdev->binding->save_queue(vdev->binding_opaque, i, f);
> >> >> >> >>>      }
> >> >> >> >>>
> >> >> >> >>>
> >> >> >> >>
> >> >> >> >> This looks wrong to me.  Requests can complete in any order, can they
> >> >> >> >> not?  So if request 0 did not complete and request 1 did,
> >> >> >> >> you send avail - inuse and on the secondary you will process and
> >> >> >> >> complete request 1 the second time, crashing the guest.
> >> >> >> >
> >> >> >> > In case of Kemari, no.  We sit between devices and net/block, and
> >> >> >> > queue the requests.  After completing each transaction, we flush
> >> >> >> > the requests one by one.  So there won't be completion inversion,
> >> >> >> > and therefore it won't be visible to the guest.
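
A toy model of the in-order queueing described above may also help; it is
not the Kemari/event-tap implementation, and every name in it is invented.
Because requests are flushed strictly in submission order, the un-flushed
requests are always the last inuse entries, which is exactly the assumption
behind transferring last_avail_idx - inuse.

#include <stdio.h>
#include <stdint.h>

#define QUEUE_MAX 16

/* Toy event-tap: guest requests are queued in submission order and are
 * flushed to the backend strictly from the head of the queue, so
 * completions cannot be reordered. */
struct event_tap {
    uint16_t req[QUEUE_MAX];   /* avail-ring indexes of queued requests */
    unsigned head, tail;
};

static void tap_queue(struct event_tap *t, uint16_t avail_idx)
{
    t->req[t->tail++ % QUEUE_MAX] = avail_idx;
}

static void tap_flush_one(struct event_tap *t, uint16_t *inuse)
{
    if (t->head == t->tail) {
        return;
    }
    /* The completed request is always the oldest outstanding one, so
     * last_avail_idx - inuse still names the first un-flushed entry. */
    printf("flushed request for avail entry %u\n",
           (unsigned)t->req[t->head++ % QUEUE_MAX]);
    (*inuse)--;
}

int main(void)
{
    struct event_tap tap = { .head = 0, .tail = 0 };
    uint16_t last_avail_idx = 0, inuse = 0;

    /* Guest submits A then B; both are queued, neither is flushed yet. */
    tap_queue(&tap, last_avail_idx); last_avail_idx++; inuse++;   /* A */
    tap_queue(&tap, last_avail_idx); last_avail_idx++; inuse++;   /* B */

    /* One synchronization point: flush exactly one request, in order. */
    tap_flush_one(&tap, &inuse);

    /* The state sent to the secondary resumes at entry 1, i.e. B only. */
    printf("last_avail_idx=%u inuse=%u -> resume at entry %u\n",
           (unsigned)last_avail_idx, (unsigned)inuse,
           (unsigned)(uint16_t)(last_avail_idx - inuse));
    return 0;
}

If the backend completed out of order (as block can), the flushed request
would no longer always be the head of the queue and this bookkeeping would
break, which is the case the bitmask sketched earlier is meant to cover.
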
> >> >> >> >
> >> >> >> > Yoshi
> >> >> >> >
> >> >> >> >>
> >> >> >> >>>
> >> >> >> >>> >
> >> >> >> >>> >> >
> >> >> >> >>> >> >> ---
> >> >> >> >>> >> >>  hw/virtio.c |    8 +++++++-
> >> >> >> >>> >> >>  1 files changed, 7 insertions(+), 1 deletions(-)
> >> >> >> >>> >> >>
> >> >> >> >>> >> >> diff --git a/hw/virtio.c b/hw/virtio.c
> >> >> >> >>> >> >> index 849a60f..5509644 100644
> >> >> >> >>> >> >> --- a/hw/virtio.c
> >> >> >> >>> >> >> +++ b/hw/virtio.c
> >> >> >> >>> >> >> @@ -72,7 +72,7 @@ struct VirtQueue
> >> >> >> >>> >> >>      VRing vring;
> >> >> >> >>> >> >>      target_phys_addr_t pa;
> >> >> >> >>> >> >>      uint16_t last_avail_idx;
> >> >> >> >>> >> >> -    int inuse;
> >> >> >> >>> >> >> +    uint16_t inuse;
> >> >> >> >>> >> >>      uint16_t vector;
> >> >> >> >>> >> >>      void (*handle_output)(VirtIODevice *vdev, VirtQueue *vq);
> >> >> >> >>> >> >>      VirtIODevice *vdev;
> >> >> >> >>> >> >> @@ -671,6 +671,7 @@ void virtio_save(VirtIODevice *vdev, QEMUFile *f)
> >> >> >> >>> >> >>          qemu_put_be32(f, vdev->vq[i].vring.num);
> >> >> >> >>> >> >>          qemu_put_be64(f, vdev->vq[i].pa);
> >> >> >> >>> >> >>          qemu_put_be16s(f, &vdev->vq[i].last_avail_idx);
> >> >> >> >>> >> >> +        qemu_put_be16s(f, &vdev->vq[i].inuse);
> >> >> >> >>> >> >>          if (vdev->binding->save_queue)
> >> >> >> >>> >> >>              vdev->binding->save_queue(vdev->binding_opaque, i, f);
> >> >> >> >>> >> >>      }
> >> >> >> >>> >> >> @@ -711,6 +712,11 @@ int virtio_load(VirtIODevice *vdev, QEMUFile *f)
> >> >> >> >>> >> >>          vdev->vq[i].vring.num = qemu_get_be32(f);
> >> >> >> >>> >> >>          vdev->vq[i].pa = qemu_get_be64(f);
> >> >> >> >>> >> >>          qemu_get_be16s(f, &vdev->vq[i].last_avail_idx);
> >> >> >> >>> >> >> +        qemu_get_be16s(f, &vdev->vq[i].inuse);
> >> >> >> >>> >> >> +
> >> >> >> >>> >> >> +        /* revert last_avail_idx if there are outstanding emulation. */
> >> >> >> >>> >> >> +        vdev->vq[i].last_avail_idx -= vdev->vq[i].inuse;
> >> >> >> >>> >> >> +        vdev->vq[i].inuse = 0;
> >> >> >> >>> >> >>
> >> >> >> >>> >> >>          if (vdev->vq[i].pa) {
> >> >> >> >>> >> >>              virtqueue_init(&vdev->vq[i]);
> >> >> >> >>> >> >> --
> >> >> >> >>> >> >> 1.7.1.2