From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Julien Grall <julien.grall@arm.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [PATCH for-4.9 1/4] tools/libxc: Tolerate specific zero-content records in migration v2 streams
Date: Thu, 30 Mar 2017 17:32:31 +0100
Message-ID: <1490891554-28597-2-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1490891554-28597-1-git-send-email-andrew.cooper3@citrix.com>

The migration v2 save code was written to avoid sending data records with no
content, as such records serve no purpose but incur a performance hit.  The
restore code sanity-checks this expectation.

Under some circumstances (most notably, on AMD hardware with Debug Extensions,
and a PV guest kernel which is not using the feature), the save code would
generate a record with no content, which trips the sanity check in the restore
code.

As the stream is otherwise fine, tolerate these records and avoid failing the
migration.
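
For reference, a "zero-content" record is one whose body carries nothing
beyond its type-specific sub-header.  Roughly, the layouts involved are as
below (abbreviated; see tools/libxc/xc_sr_stream_format.h for the full
definitions).  All of the X86_PV_VCPU_{BASIC,EXTENDED,XSAVE,MSRS} records
share the vcpu sub-header:

  struct xc_sr_rhdr
  {
      uint32_t type;      /* REC_TYPE_* */
      uint32_t length;    /* Body length in octets, excluding this header. */
  };

  struct xc_sr_rec_x86_pv_vcpu_hdr
  {
      uint32_t vcpu_id;
      uint32_t _res1;
      uint8_t context[];  /* Empty record <=> length covers no blob bytes. */
  };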

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Wei Liu <wei.liu2@citrix.com>
---
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Julien Grall <julien.grall@arm.com>

This needs backporting to Xen 4.6 and fixes XEN-5.
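
For the HVM side, the equivalent case is an HVM_PARAMS record with a count
of 0.  Roughly (again abbreviated from xc_sr_stream_format.h):

  struct xc_sr_rec_hvm_params_entry
  {
      uint64_t index;
      uint64_t value;
  };

  struct xc_sr_rec_hvm_params
  {
      uint32_t count;     /* count == 0 => empty record, now tolerated. */
      uint32_t _res1;
      struct xc_sr_rec_hvm_params_entry param[0];
  };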
---
 tools/libxc/xc_sr_restore_x86_hvm.c | 25 ++++++++++++++++++++++---
 tools/libxc/xc_sr_restore_x86_pv.c  | 18 +++++++++++++++---
 2 files changed, 37 insertions(+), 6 deletions(-)

diff --git a/tools/libxc/xc_sr_restore_x86_hvm.c b/tools/libxc/xc_sr_restore_x86_hvm.c
index 49d22c7..1dca853 100644
--- a/tools/libxc/xc_sr_restore_x86_hvm.c
+++ b/tools/libxc/xc_sr_restore_x86_hvm.c
@@ -39,13 +39,32 @@ static int handle_hvm_params(struct xc_sr_context *ctx,
     unsigned int i;
     int rc;
 
-    if ( rec->length < sizeof(*hdr)
-         || rec->length < sizeof(*hdr) + hdr->count * sizeof(*entry) )
+    if ( rec->length < sizeof(*hdr) )
     {
-        ERROR("hvm_params record is too short");
+        ERROR("HVM_PARAMS record truncated: length %u, header size %zu",
+              rec->length, sizeof(*hdr));
         return -1;
     }
 
+    if ( rec->length < (sizeof(*hdr) + hdr->count * sizeof(*entry)) )
+    {
+        ERROR("HVM_PARAMS record truncated: header %zu, count %u, "
+              "expected len %zu, got %u",
+              sizeof(*hdr), hdr->count, hdr->count * sizeof(*entry),
+              rec->length);
+        return -1;
+    }
+
+    /*
+     * Tolerate empty records.  Older sending sides used to accidentally
+     * generate them.
+     */
+    if ( hdr->count == 0 )
+    {
+        DBGPRINTF("Skipping empty HVM_PARAMS record\n");
+        return 0;
+    }
+
     for ( i = 0; i < hdr->count; i++, entry++ )
     {
         switch ( entry->index )
diff --git a/tools/libxc/xc_sr_restore_x86_pv.c b/tools/libxc/xc_sr_restore_x86_pv.c
index bc604b3..50e25c1 100644
--- a/tools/libxc/xc_sr_restore_x86_pv.c
+++ b/tools/libxc/xc_sr_restore_x86_pv.c
@@ -753,15 +753,27 @@ static int handle_x86_pv_vcpu_blob(struct xc_sr_context *ctx,
     }
 
     /* Confirm that there is a complete header. */
-    if ( rec->length <= sizeof(*vhdr) )
+    if ( rec->length < sizeof(*vhdr) )
     {
-        ERROR("%s record truncated: length %u, min %zu",
-              rec_name, rec->length, sizeof(*vhdr) + 1);
+        ERROR("%s record truncated: length %u, header size %zu",
+              rec_name, rec->length, sizeof(*vhdr));
         goto out;
     }
 
     blobsz = rec->length - sizeof(*vhdr);
 
+    /*
+     * Tolerate empty records.  Older sending sides used to accidentally
+     * generate them.
+     */
+    if ( blobsz == 0 )
+    {
+        DBGPRINTF("Skipping empty %s record for vcpu %u\n",
+                  rec_type_to_str(rec->type), vhdr->vcpu_id);
+        rc = 0;
+        goto out;
+    }
+
     /* Check that the vcpu id is within range. */
     if ( vhdr->vcpu_id >= ctx->x86_pv.restore.nr_vcpus )
     {
-- 
2.1.4

