From: Chao Gao <chao.gao@intel.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: tim@xen.org, sstabellini@kernel.org, wei.liu2@citrix.com,
George.Dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
ian.jackson@eu.citrix.com, xen-devel@lists.xen.org,
roger.pau@citrix.com
Subject: Re: [RFC Patch v4 8/8] x86/hvm: bump the maximum number of vcpus to 512
Date: Thu, 1 Mar 2018 15:11:24 +0800
Message-ID: <20180301071123.GA82788@skl-4s-chao.sh.intel.com>
In-Reply-To: <5A97ADD50200007800128C08@prv-mh.provo.novell.com>
On Thu, Mar 01, 2018 at 12:37:57AM -0700, Jan Beulich wrote:
>>>> Chao Gao <chao.gao@intel.com> 03/01/18 7:34 AM >>>
>>On Mon, Feb 26, 2018 at 09:10:33AM -0700, Jan Beulich wrote:
>>>Again - here we're talking about implementation limits, not
>>>bottlenecks. So in this context all I'm interested in is whether
>>>(and if so which) implementation limit remains. If an (almost)
>>>arbitrary number is fine, perhaps we'll want to have a Kconfig
>>>option.
>>
>>Do you think struct hvm_info_table would count as an implementation
>>limit? To keep that struct within a single page, HVM_MAX_VCPUS has to
>>stay below a certain value, roughly (PAGE_SIZE * 8). Even supposing
>>that is the only implementation limit, I don't think it is reasonable
>>to set HVM_MAX_VCPUS to that value, because we have no hardware to
>>test with - even Xeon Phi isn't capable. The value can be bumped once
>>some method verifies that a guest works with more vcpus. For now I
>>prefer 288 over 512 or any other value.
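
(To illustrate the constraint I mean - a rough, self-contained sketch
only, assuming hvm_info_table keeps roughly its current shape, i.e. a
small fixed header followed by a one-bit-per-vcpu vcpu_online bitmap;
field names are approximate:)

/* Illustrative only: approximates the public hvm_info_table layout. */
#include <stdint.h>

#define PAGE_SIZE     4096
#define HVM_MAX_VCPUS 512    /* the value being proposed */

struct hvm_info_table {
    char     signature[8];   /* "HVM INFO" */
    uint32_t length;
    uint8_t  checksum;
    uint8_t  apic_mode;
    uint32_t nr_vcpus;
    /* One bit per possible vcpu, so this array grows with HVM_MAX_VCPUS. */
    uint8_t  vcpu_online[(HVM_MAX_VCPUS + 7) / 8];
};

/*
 * The table has to fit in the single page reserved for it, so the
 * bitmap caps HVM_MAX_VCPUS at a little under PAGE_SIZE * 8 bits.
 */
_Static_assert(sizeof(struct hvm_info_table) <= PAGE_SIZE,
               "hvm_info_table no longer fits in one page");

That is where the (PAGE_SIZE * 8) figure above comes from.
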
>
>Whether going beyond PAGE_SIZE with the structure size is acceptable is
>a valid item to think about, but I don't think there's any implied
>limit from that. But - did you read my and George's subsequent reply at
>all?

Yes, I did, but somehow I didn't clearly understand the distinction.
Sorry for that.

>You continue to mix up supported (because of being able to test) limits
>with implementation ones. Even Jürgen's suggestion to take NR_CPUS as
>the limit is not very reasonable - PV guests have an implementation
>limit of (iirc) 8192. Once again - if there's no sensible upper limit
>imposed by the implementation, consider introducing a Kconfig option to
>pick the limit.
Got it.
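
Something along these lines is what I have in mind (a sketch only:
CONFIG_MAX_HVM_VCPUS is a made-up symbol name, not an existing option,
and where exactly such a build-time check would live is a separate
question):

/*
 * Hypothetical Kconfig entry (symbol name made up):
 *
 *   config MAX_HVM_VCPUS
 *       int "Maximum number of vCPUs per HVM guest"
 *       range 1 4096
 *       default 128
 */

#define PAGE_SIZE 4096

/* Derive the limit from the (hypothetical) Kconfig symbol, with a fallback. */
#ifdef CONFIG_MAX_HVM_VCPUS
# define HVM_MAX_VCPUS CONFIG_MAX_HVM_VCPUS
#else
# define HVM_MAX_VCPUS 128
#endif

/*
 * Whatever value is chosen, fail the build if the vcpu_online bitmap
 * alone would no longer fit in the page holding hvm_info_table (the
 * real bound is slightly tighter because of the fixed header).
 */
_Static_assert(HVM_MAX_VCPUS <= PAGE_SIZE * 8,
               "HVM_MAX_VCPUS too large for hvm_info_table's page");
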
Thanks
Chao