From: Chao Gao <chao.gao@intel.com>
To: xen-devel@lists.xen.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
Wei Liu <wei.liu2@citrix.com>,
George Dunlap <george.dunlap@eu.citrix.com>,
Andrew Cooper <andrew.cooper3@citrix.com>,
Ian Jackson <ian.jackson@eu.citrix.com>, Tim Deegan <tim@xen.org>,
Paul Durrant <paul.durrant@citrix.com>,
Jan Beulich <jbeulich@suse.com>, Chao Gao <chao.gao@intel.com>
Subject: [RFC Patch v4 0/8] Extend resources to support more vcpus in single VM
Date: Wed, 6 Dec 2017 15:50:06 +0800 [thread overview]
Message-ID: <1512546614-9937-1-git-send-email-chao.gao@intel.com> (raw)
This series is based on Paul Durrant's "x86: guest resource mapping"
(https://lists.xenproject.org/archives/html/xen-devel/2017-11/msg01735.html)
and "add vIOMMU support with irq remapping function of virtual VT-d"
(https://lists.xenproject.org/archives/html/xen-devel/2017-11/msg01063.html).
To support more vCPUs in an HVM guest, this series removes the vCPU-count
constraints imposed by several components:
1. IOREQ server: currently only one IOREQ page is used, which limits
the maximum number of vCPUs to 128.
2. libacpi: no x2APIC entries are built in the MADT and SRAT.
3. The size of pre-allocated shadow memory.
4. The way we boot up APs.
This series is an RFC because:
1. I am not sure whether the changes in patch 2 are acceptable.
2. It depends on our vIOMMU patches, which are still under review.
Changes since v3:
- Responded to Wei's and Roger's comments.
- Support multiple IOREQ pages; see patches 1 and 2.
- Boot APs via broadcast; see patch 4.
- Unify the computation of lapic_id.
- Add x2APIC entries to the SRAT.
- Increase shadow memory according to the maximum number of vCPUs of the HVM guest.
Changes since v2:
1) Increase the page pool size when setting the maximum number of vCPUs.
2) Allocate the MADT table size according to the APIC ID of each vCPU.
3) Fix some coding style issues.
Changes since v1:
1) Increase the HAP page pool according to the vCPU count.
2) Use the "Processor" syntax to define vCPUs with APIC IDs < 255
and the "Device" syntax for the remaining vCPUs in the ACPI DSDT.
3) Use XAPIC structures for vCPUs with APIC IDs < 255
and x2APIC structures for the remaining vCPUs in the ACPI MADT.
This patchset extends several resources (e.g. event channels, HAP memory
and so on) to support more vCPUs in a single VM.
Chao Gao (6):
ioreq: remove most 'buf' parameter from static functions
ioreq: bump the number of IOREQ page to 4 pages
xl/acpi: unify the computation of lapic_id
hvmloader: boot cpu through broadcast
x86/hvm: bump the number of pages of shadow memory
x86/hvm: bump the maximum number of vcpus to 512
Lan Tianyu (2):
Tool/ACPI: DSDT extension to support more vcpus
hvmload: Add x2apic entry support in the MADT and SRAT build
tools/firmware/hvmloader/apic_regs.h | 4 +
tools/firmware/hvmloader/config.h | 3 +-
tools/firmware/hvmloader/smp.c | 64 ++++++++++++--
tools/libacpi/acpi2_0.h | 25 +++++-
tools/libacpi/build.c | 57 +++++++++---
tools/libacpi/libacpi.h | 9 ++
tools/libacpi/mk_dsdt.c | 40 +++++++--
tools/libxc/include/xc_dom.h | 2 +-
tools/libxc/xc_dom_x86.c | 6 +-
tools/libxl/libxl_x86_acpi.c | 2 +-
xen/arch/x86/hvm/hvm.c | 1 +
xen/arch/x86/hvm/ioreq.c | 150 ++++++++++++++++++++++----------
xen/arch/x86/mm/hap/hap.c | 2 +-
xen/arch/x86/mm/shadow/common.c | 2 +-
xen/include/asm-x86/hvm/domain.h | 6 +-
xen/include/public/hvm/hvm_info_table.h | 2 +-
xen/include/public/hvm/ioreq.h | 2 +
xen/include/public/hvm/params.h | 8 +-
18 files changed, 303 insertions(+), 82 deletions(-)
--
1.8.3.1
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel