From: Chao Gao <chao.gao@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: "Tim Deegan" <tim@xen.org>,
	"Stefano Stabellini" <sstabellini@kernel.org>,
	"Wei Liu" <wei.liu2@citrix.com>,
	"George Dunlap" <George.Dunlap@eu.citrix.com>,
	"Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Ian Jackson" <ian.jackson@eu.citrix.com>,
	xen-devel@lists.xen.org, "Roger Pau Monné" <roger.pau@citrix.com>
Subject: Re: [RFC Patch v4 8/8] x86/hvm: bump the maximum number of vcpus to 512
Date: Thu, 1 Mar 2018 13:21:26 +0800
Message-ID: <20180301052124.GA74072@skl-4s-chao.sh.intel.com>
In-Reply-To: <5A943F8902000078001ABDE2@prv-mh.provo.novell.com>

On Mon, Feb 26, 2018 at 09:10:33AM -0700, Jan Beulich wrote:
>>>> On 26.02.18 at 14:11, <chao.gao@intel.com> wrote:
>> On Mon, Feb 26, 2018 at 01:26:42AM -0700, Jan Beulich wrote:
>>>>>> On 23.02.18 at 19:11, <roger.pau@citrix.com> wrote:
>>>> On Wed, Dec 06, 2017 at 03:50:14PM +0800, Chao Gao wrote:
>>>>> Signed-off-by: Chao Gao <chao.gao@intel.com>
>>>>> ---
>>>>>  xen/include/public/hvm/hvm_info_table.h | 2 +-
>>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>> 
>>>>> diff --git a/xen/include/public/hvm/hvm_info_table.h b/xen/include/public/hvm/hvm_info_table.h
>>>>> index 08c252e..6833a4c 100644
>>>>> --- a/xen/include/public/hvm/hvm_info_table.h
>>>>> +++ b/xen/include/public/hvm/hvm_info_table.h
>>>>> @@ -32,7 +32,7 @@
>>>>>  #define HVM_INFO_PADDR       ((HVM_INFO_PFN << 12) + HVM_INFO_OFFSET)
>>>>>  
>>>>>  /* Maximum we can support with current vLAPIC ID mapping. */
>>>>> -#define HVM_MAX_VCPUS        128
>>>>> +#define HVM_MAX_VCPUS        512
>>>> 
>>>> Wow, that looks like a pretty big jump. I certainly don't have access
>>>> to any box with this number of vCPUs, so that's going to be quite hard
>>>> to test. What's the reasoning behind this bump? Is hardware with 512
>>>> ways expected soon-ish?
>>>> 
>>>> Also osstest is not even able to test the current limit, so I would
>>>> maybe bump this to 256, but as I have expressed on other occasions I don't
>>>> feel comfortable with having a number of vCPUs that the current test
>>>> system doesn't have hardware to test with.
>>>
>>>I think implementation limit and supported limit need to be clearly
>>>distinguished here. Therefore I'd put the question the other way
>>>around: What's causing the limit to be 512, rather than 1024,
>>>4096, or even 4G-1 (x2APIC IDs are 32 bits wide, after all)?
>> 
>> TBH, I have no idea. When I chose a value, what came to my mind was
>> that it should be at least 288, because Intel has the Xeon Phi platform,
>> which has 288 physical threads, and some customers want to use this new
>> platform for HPC cloud. Furthermore, they request support for a big VM
>> to which almost all computing and device resources are assigned; they
>> just use virtualization technology to manage the machines. Given that,
>> I chose 512 because I feel much better when the limit is a power of 2.
>> 
>> You are asking: as these patches remove limitations imposed by some
>> components, which one is the next bottleneck and how many vcpus does it
>> allow? Maybe it would be the use case. No one is requesting support for
>> more than 288 at this moment. So which value do you prefer, 288 or
>> 512? Or do you think I should find the next bottleneck in Xen's
>> implementation?
>
>Again - here we're talking about implementation limits, not
>bottlenecks. So in this context all I'm interested in is whether
>(and if so which) implementation limit remains. If an (almost)
>arbitrary number is fine, perhaps we'll want to have a Kconfig
>option.

Do you think struct hvm_info_table would be an implementation limit?
To keep this struct within a single page, HVM_MAX_VCPUS has to stay
below some value, roughly (PAGE_SIZE * 8). Supposing that is the only
remaining implementation limit, I don't think it is reasonable to set
HVM_MAX_VCPUS to that value, because we don't have hardware to test
with; even Xeon Phi isn't capable. The value can be bumped later, once
some method verifies that a guest works with more vcpus. For now I
prefer 288 over 512 or any other value.
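
To make the arithmetic concrete, here is a rough sketch of the
constraint I have in mind. The field layout is only my recollection of
the public header (abbreviated), and the compile-time check is purely
illustrative, not something from the tree:

    /* Rough sketch: why hvm_info_table size scales with HVM_MAX_VCPUS. */
    #include <stdint.h>

    #define PAGE_SIZE      4096
    #define HVM_MAX_VCPUS  512

    struct hvm_info_table {
        char     signature[8];        /* "HVM INFO" */
        uint32_t length;
        uint8_t  checksum;
        uint8_t  apic_mode;
        uint32_t nr_vcpus;
        uint8_t  reserved[8];
        /* One bit per possible vcpu: the only part that grows with the limit. */
        uint8_t  vcpu_online[(HVM_MAX_VCPUS + 7) / 8];
    };

    /* Illustrative check: the whole table has to fit in a single page. */
    _Static_assert(sizeof(struct hvm_info_table) <= PAGE_SIZE,
                   "hvm_info_table no longer fits in one page");

With HVM_MAX_VCPUS at 512 the bitmap is only 64 bytes, so this
particular limit is nowhere near binding; the real constraint is what
we can actually test.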

>
>I'm also curious - do Phis not come in multi-socket configs? It's
>my understanding that 288 is the count for a single socket.

Currently we don't have multi-socket Xeon Phi. But it's hard to say for future products.

Thanks
Chao
