From: Ankur Arora <ankur.a.arora@oracle.com>
To: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Cc: jgross@suse.com, boris.ostrovsky@oracle.com,
Ankur Arora <ankur.a.arora@oracle.com>
Subject: [PATCH 0/5] xen/pvh*: Support > 32 VCPUs at restore
Date: Fri, 2 Jun 2017 17:05:57 -0700
Message-ID: <1496448362-26558-1-git-send-email-ankur.a.arora@oracle.com>
This patch series fixes a bunch of issues in the xen_vcpu setup
logic.
Simplify xen_vcpu related code: code refactoring in advance of the
rest of the patch series.
Support > 32 VCPUs at restore: unify all vcpu restore logic in
xen_vcpu_restore() and support > 32 VCPUs for PVH*.
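For reference, a rough sketch of the idea behind patch 2 (not the patch
itself; the function name restore_vcpu_info() is made up for
illustration): at restore, re-register vcpu_info for every possible CPU
rather than only the first MAX_VIRT_CPUS, via VCPUOP_register_vcpu_info:

static void restore_vcpu_info(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		struct vcpu_register_vcpu_info info;
		struct vcpu_info *vcpup = &per_cpu(xen_vcpu_info, cpu);

		info.mfn = arbitrary_virt_to_mfn(vcpup);
		info.offset = offset_in_page(vcpup);

		if (HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info,
				       cpu, &info)) {
			/*
			 * CPUs < MAX_VIRT_CPUS can fall back to their
			 * shared_info vcpu_info slot; the rest cannot.
			 */
			BUG_ON(cpu >= MAX_VIRT_CPUS);
		}
	}
}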
Remove vcpu info placement from restore (!SMP): some pv_ops are
marked RO after init, so let's not redo xen_setup_vcpu_info_placement
at restore.
Handle xen_vcpu_setup() failure in hotplug: handle vcpu_info
registration failures by propagating them from the cpuhp-prepare
callback back up to the cpuhp logic.
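A sketch of how the hotplug path changes (illustrative only, assuming
the existing cpuhp prepare callback and an xen_vcpu_setup() that now
returns an error code):

static int xen_cpu_up_prepare(unsigned int cpu)
{
	int rc;

	/*
	 * vcpu_info registration (VCPUOP_register_vcpu_info) can fail;
	 * return the error so the cpuhp core aborts onlining this CPU
	 * instead of continuing with a half-initialized vcpu.
	 */
	rc = xen_vcpu_setup(cpu);
	if (rc)
		return rc;

	return 0;
}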
Handle xen_vcpu_setup() failure at boot: pull CPUs (> MAX_VIRT_CPUS)
down if we fall back to xen_have_vcpu_info_placement = 0.
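And a sketch of the boot-time fallback in patch 5 (the name
xen_vcpu_info_fallback() is made up for illustration): once we give up
on vcpu_info placement, CPUs beyond MAX_VIRT_CPUS have no vcpu_info at
all and must not be brought up:

void xen_vcpu_info_fallback(void)
{
	unsigned int cpu;

	xen_have_vcpu_info_placement = 0;

	for_each_possible_cpu(cpu) {
		if (cpu < MAX_VIRT_CPUS)
			continue;
		/* No shared_info slot for this CPU: take it out of play. */
		set_cpu_possible(cpu, false);
		set_cpu_present(cpu, false);
	}
}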
Tested with various combinations of PV/PVHv2/PVHVM save/restore
and CPU hot-add/hot-remove. Also tested by simulating failures of
VCPUOP_register_vcpu_info.
Please review.
Ankur Arora (5):
xen/vcpu: Simplify xen_vcpu related code
xen/pvh*: Support > 32 VCPUs at domain restore
xen/pv: Fix OOPS on restore for a PV, !SMP domain
xen/vcpu: Handle xen_vcpu_setup() failure in hotplug
xen/vcpu: Handle xen_vcpu_setup() failure at boot
arch/x86/xen/enlighten.c | 154 +++++++++++++++++++++++++++++++------------
arch/x86/xen/enlighten_hvm.c | 33 ++++------
arch/x86/xen/enlighten_pv.c | 87 +++++++++++-------------
arch/x86/xen/smp.c | 31 +++++++++
arch/x86/xen/smp.h | 2 +
arch/x86/xen/smp_hvm.c | 14 +++-
arch/x86/xen/smp_pv.c | 6 +-
arch/x86/xen/suspend_hvm.c | 11 +---
arch/x86/xen/xen-ops.h | 3 +-
include/xen/xen-ops.h | 2 +
10 files changed, 218 insertions(+), 125 deletions(-)
--
2.7.4