From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Chao Gao' <chao.gao@intel.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wei.liu2@citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"Tim (Xen.org)" <tim@xen.org>,
	George Dunlap <George.Dunlap@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages
Date: Wed, 6 Dec 2017 16:10:27 +0000
Message-ID: <5cf06a5713b0402b8ad1d1a69a7d77f0@AMSPEX02CL03.citrite.net>
In-Reply-To: <20171206090213.GA23898@op-computing>

> -----Original Message-----
> From: Chao Gao [mailto:chao.gao@intel.com]
> Sent: 06 December 2017 09:02
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: xen-devel@lists.xen.org; Tim (Xen.org) <tim@xen.org>;
> Stefano Stabellini <sstabellini@kernel.org>;
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Jan Beulich <jbeulich@suse.com>;
> George Dunlap <George.Dunlap@citrix.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>;
> Wei Liu <wei.liu2@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>
> Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages
> 
> On Wed, Dec 06, 2017 at 03:04:11PM +0000, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Chao Gao [mailto:chao.gao@intel.com]
> >> Sent: 06 December 2017 07:50
> >> To: xen-devel@lists.xen.org
> >> Cc: Chao Gao <chao.gao@intel.com>; Paul Durrant <Paul.Durrant@citrix.com>;
> >> Tim (Xen.org) <tim@xen.org>; Stefano Stabellini <sstabellini@kernel.org>;
> >> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; Jan Beulich <jbeulich@suse.com>;
> >> George Dunlap <George.Dunlap@citrix.com>; Andrew Cooper <Andrew.Cooper3@citrix.com>;
> >> Wei Liu <wei.liu2@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>
> >> Subject: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages
> >>
> >> One 4KB page holds at most 128 'ioreq_t' structures. In order to remove
> >> the vcpu count constraint imposed by a single IOREQ page, bump the number
> >> of IOREQ pages to 4. With this patch, multiple pages can be used as IOREQ
> >> pages.
> >>
> >> Basically, this patch extends the 'ioreq' field in struct hvm_ioreq_server
> >> to an array. All accesses to the 'ioreq' field, such as 's->ioreq', are
> >> replaced with the FOR_EACH_IOREQ_PAGE macro.
> >>
> >> In order to access an IOREQ page, QEMU should get the gmfn and map it into
> >> its virtual address space.
> >
> >No. There's no need to extend the 'legacy' mechanism of using magic page
> >gfns. You should only handle the case where the mfns are allocated on
> >demand (see the call to hvm_ioreq_server_alloc_pages() in
> >hvm_get_ioreq_server_frame()). The number of guest vcpus is known at this
> >point, so the correct number of pages can be allocated. If the creator of
> >the ioreq server attempts to use the legacy hvm_get_ioreq_server_info()
> >and the guest has >128 vcpus, then the call should fail.
> 
> Great suggestion. I will introduce a new dmop, a variant of
> hvm_get_ioreq_server_frame(), for the creator to get an array of gfns and
> the size of the array. The legacy interface will report an error if more
> than one IOREQ page is needed.

You don't need a new dmop for mapping, I think. The mem op to map ioreq
server frames should work. All you should need to do is update
hvm_get_ioreq_server_frame() to deal with an index > 1, and provide some
means for the ioreq server creator to convert the number of guest vcpus
into the correct number of pages to map. (That might need a new dm op.)
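
To make the sizing concrete, here is a minimal sketch of the arithmetic,
assuming the 32-byte ioreq_t layout of the public ABI; the macro and
helper names below are illustrative, not Xen's actual API:

#include <stdio.h>

/* One 4K page holds 4096 / 32 == 128 ioreq_t slots, one per vcpu. */
#define PAGE_SIZE       4096u
#define IOREQ_T_SIZE    32u   /* assumed sizeof(ioreq_t) */
#define IOREQS_PER_PAGE (PAGE_SIZE / IOREQ_T_SIZE)   /* == 128 */

/* Hypothetical helper: round up to the number of IOREQ pages an
 * emulator must map for a guest with nr_vcpus vcpus. */
static unsigned int ioreq_pages_for_vcpus(unsigned int nr_vcpus)
{
    return (nr_vcpus + IOREQS_PER_PAGE - 1) / IOREQS_PER_PAGE;
}

int main(void)
{
    /* 128 vcpus still fit in the single legacy page; 129 need two. */
    printf("128 vcpus -> %u page(s)\n", ioreq_pages_for_vcpus(128));
    printf("129 vcpus -> %u page(s)\n", ioreq_pages_for_vcpus(129));
    printf("512 vcpus -> %u page(s)\n", ioreq_pages_for_vcpus(512));
    return 0;
}

The emulator would then map that many frame indices, which is why
hvm_get_ioreq_server_frame() has to cope with indices beyond the first
ioreq page.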

  Paul

> 
> Thanks
> Chao
