From: Don Slutz
Subject: Re: [BUGFIX][PATCH 3/4] hvm_save_one: return correct data.
Date: Sun, 15 Dec 2013 12:19:13 -0500
Message-ID: <52ADE491.5040708@terremark.com>
In-Reply-To: <52ADDE15.8010408@citrix.com>
To: Andrew Cooper
Cc: Keir Fraser, Ian Campbell, Stefano Stabellini, Ian Jackson, Don Slutz, Jan Beulich, xen-devel
List-Id: xen-devel@lists.xenproject.org
On 12/15/13 11:51, Andrew Cooper wrote:
On 15/12/2013 00:29, Don Slutz wrote:

I think I have corrected all coding errors (please check again) and made all the requested changes.  I did add the Reviewed-by (not sure if I should, since this changes a large part of the patch, but the changes are all what Jan asked for).

I have unit tested it and it appears to work the same as the previous version (as expected).

Here is the new version, also attached.

From e0e8f5246ba492b153884cea93bfe753f1b0782e Mon Sep 17 00:00:00 2001
From: Don Slutz <dslutz@verizon.com>
Date: Tue, 12 Nov 2013 08:22:53 -0500
Subject: [PATCH v2 3/4] hvm_save_one: return correct data.

It is possible that hvm_sr_handlers[typecode].save does not use all
the provided room.  In that case, using:

   instance * hvm_sr_handlers[typecode].size

does not select the correct instance.  Add code to search for the
correct instance.

Signed-off-by: Don Slutz <dslutz@verizon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

> but this fares no better at selecting the correct subset in the case that less data than hvm_sr_handlers[typecode].size is written by hvm_sr_handlers[typecode].save.

True, but the inverse is the case here; .save writes 'n' blocks of 'size' bytes.  From the loop above:

    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
        for_each_vcpu(d, v)
            sz += hvm_sr_handlers[typecode].size;
    else
        sz = hvm_sr_handlers[typecode].size;

so sz is in multiples of 'size'.  Normally sz == ctxt.cur.  With some offline vcpus it writes fewer 'size' blocks.
> It always increments by 'size' bytes, and will only copy the data back if the bytes under desc->instance happen to match the instance we are looking for.

The only time it does not find one is for an offline vcpu.  Try out the unit test code in patch #1 on an unchanged Xen; it should not display anything.  Then offline a cpu in a domU (echo 0 > /sys/devices/system/cpu/cpu1/online).  With 3 vcpus, it will report an error.

   -Don Slutz
> The only solution I can see is that for the per-vcpu records, the save functions get refactored to take an instance ID, and only save their specific instance.
>
> I shall have a go at this and see how invasive it is.
>
> ~Andrew

---
 xen/common/hvm/save.c | 28 +++++++++++++++++++++-------
 1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/xen/common/hvm/save.c b/xen/common/hvm/save.c
index de76ada..6aaea6f 100644
--- a/xen/common/hvm/save.c
+++ b/xen/common/hvm/save.c
@@ -112,13 +112,27 @@ int hvm_save_one(struct domain *d, uint16_t typecode, uint16_t instance,
                d->domain_id, typecode);
         rv = -EFAULT;
     }
-    else if ( copy_to_guest(handle,
-                            ctxt.data
-                            + (instance * hvm_sr_handlers[typecode].size)
-                            + sizeof (struct hvm_save_descriptor),
-                            hvm_sr_handlers[typecode].size
-                            - sizeof (struct hvm_save_descriptor)) )
-        rv = -EFAULT;
+    else
+    {
+        uint32_t off;
+
+        rv = -EBADSLT;
+        for ( off = 0; off < ctxt.cur; off += hvm_sr_handlers[typecode].size )
+        {
+            const struct hvm_save_descriptor *desc = (void *)&ctxt.data[off];
+
+            if ( instance == desc->instance )
+            {
+                rv = 0;
+                if ( copy_to_guest(handle,
+                                   ctxt.data + off + sizeof(*desc),
+                                   hvm_sr_handlers[typecode].size
+                                   - sizeof(*desc)) )
+                    rv = -EFAULT;
+                break;
+            }
+        }
+    }

     xfree(ctxt.data);
     return rv;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel