From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Jan Beulich' <JBeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
Andrew Cooper <Andrew.Cooper3@citrix.com>,
"Tim (Xen.org)" <tim@xen.org>,
Ian Jackson <Ian.Jackson@citrix.com>,
"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v9 06/11] x86/hvm/ioreq: add a new mappable resource type...
Date: Tue, 10 Oct 2017 14:45:02 +0000 [thread overview]
Message-ID: <75e919eac68c40e59a26eae9831b3640@AMSPEX02CL03.citrite.net> (raw)
In-Reply-To: <59DBAFE20200007800184135@prv-mh.provo.novell.com>
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 09 October 2017 16:21
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Ian Jackson
> <Ian.Jackson@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>; xen-
> devel@lists.xenproject.org; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>
> Subject: Re: [PATCH v9 06/11] x86/hvm/ioreq: add a new mappable resource
> type...
>
> >>> On 06.10.17 at 14:25, <paul.durrant@citrix.com> wrote:
> > @@ -288,6 +301,61 @@ static int hvm_map_ioreq_gfn(struct
> hvm_ioreq_server *s, bool buf)
> > return rc;
> > }
> >
> > +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
> > +{
> > + struct domain *currd = current->domain;
> > + struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> > +
> > + if ( iorp->page )
> > + {
> > + /*
> > + * If a guest frame has already been mapped (which may happen
> > + * on demand if hvm_get_ioreq_server_info() is called), then
> > + * allocating a page is not permitted.
> > + */
> > + if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
> > + return -EPERM;
> > +
> > + return 0;
> > + }
> > +
> > + /*
> > + * Allocated IOREQ server pages are assigned to the emulating
> > + * domain, not the target domain. This is because the emulator is
> > + * likely to be destroyed after the target domain has been torn
> > + * down, and we must use MEMF_no_refcount otherwise page
> allocation
> > + * could fail if the emulating domain has already reached its
> > + * maximum allocation.
> > + */
> > + iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
>
> Whichever domain you assign the page to, you need to prevent
> it becoming usable as e.g. a page table or descriptor table page.
> IOW I think you're missing a get_page_type(..., PGT_writable)
> here, with the put_page() on the free path below then needing
> to become put_page_and_type().
Ok.
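Something like the following is what I have in mind - a standalone sketch of the pairing you describe (take a writable type reference when the page is allocated, drop it on the free path), using stubbed-out reference counts rather than the real struct page_info, so it is illustrative only:

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Illustrative stand-ins for Xen's page_info refcounting: a real page
 * carries a general reference count and a type count; holding a
 * PGT_writable type reference pins the page so it cannot be re-typed
 * as e.g. a page table or descriptor table page.
 */
struct page_info {
    unsigned int count;      /* general references */
    unsigned int type_count; /* writable-type references */
};

static struct page_info *alloc_ioreq_page(void)
{
    struct page_info *pg = calloc(1, sizeof(*pg));

    if ( !pg )
        return NULL;

    pg->count = 1;      /* reference taken by the allocation */
    pg->type_count = 1; /* i.e. get_page_type(pg, PGT_writable_page) */
    return pg;
}

static void free_ioreq_page(struct page_info *pg)
{
    /*
     * Mirrors put_page_and_type(): drop the type reference first,
     * then the general reference; free only when the latter hits 0.
     */
    assert(pg->type_count > 0 && pg->count > 0);
    pg->type_count--;        /* put_page_type() */
    if ( --pg->count == 0 )  /* put_page() */
        free(pg);
}
```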
>
> > @@ -784,6 +885,45 @@ int hvm_get_ioreq_server_info(struct domain *d,
> ioservid_t id,
> > return rc;
> > }
> >
> > +int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
> > + unsigned int idx, mfn_t *mfn)
> > +{
> > + struct hvm_ioreq_server *s;
> > + int rc;
> > +
> > + spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> > +
> > + if ( id == DEFAULT_IOSERVID )
> > + return -EOPNOTSUPP;
> > +
> > + s = get_ioreq_server(d, id);
> > +
> > + ASSERT(!IS_DEFAULT(s));
> > +
> > + rc = hvm_ioreq_server_alloc_pages(s);
> > + if ( rc )
> > + goto out;
> > +
> > + if ( idx == 0 )
>
> switch() ?
Yes, although if idx can exceed 1 in future (which would be necessary to support greater numbers of vCPUs) then I guess it may change back.
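For illustration, the switch() form would look something like this - a self-contained sketch using placeholder page handles rather than the real hvm_ioreq_server fields, with the default case leaving room for further indices:

```c
#include <errno.h>

/* Placeholder stand-ins for s->bufioreq.page and s->ioreq.page. */
struct ioreq_server_stub {
    void *bufioreq_page; /* idx == 0 */
    void *ioreq_page;    /* idx == 1 */
};

/*
 * Returns 0 and sets *page on success, -EINVAL for an out-of-range
 * index. Extra cases can be added if idx ever exceeds 1 (e.g. to
 * support more vCPUs than fit in a single ioreq page).
 */
static int get_server_frame(const struct ioreq_server_stub *s,
                            unsigned int idx, void **page)
{
    switch ( idx )
    {
    case 0:
        *page = s->bufioreq_page;
        return 0;

    case 1:
        *page = s->ioreq_page;
        return 0;

    default:
        return -EINVAL;
    }
}
```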
>
> > @@ -3866,6 +3867,27 @@ int xenmem_add_to_physmap_one(
> > return rc;
> > }
> >
> > +int xenmem_acquire_ioreq_server(struct domain *d, unsigned int id,
> > + unsigned long frame,
> > + unsigned long nr_frames,
> > + unsigned long mfn_list[])
> > +{
> > + unsigned int i;
> > +
> > + for ( i = 0; i < nr_frames; i++ )
> > + {
> > + mfn_t mfn;
> > + int rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
>
> Coming back to the question of the size of the "frame" interface
> structure field, note how you silently truncate the upper 32 bits
> here.
OK. For this resource type I can't see 64 bits being needed, but I'll carry them through.
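The truncation is easy to show in isolation - a hypothetical minimal reproduction (not the actual Xen signatures) of a 64-bit frame index passed through a 32-bit parameter versus one carried through at full width:

```c
#include <stdint.h>

/*
 * Narrow parameter: the upper 32 bits of the caller's 64-bit frame
 * value are silently dropped by the implicit conversion, which is
 * what happens when a uint64_t interface field is passed through a
 * 32-bit function argument.
 */
static uint64_t through_narrow(uint32_t frame)
{
    return frame;
}

/* Widened parameter: the full 64-bit value survives the call. */
static uint64_t through_wide(uint64_t frame)
{
    return frame;
}
```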
>
> > --- a/xen/include/public/memory.h
> > +++ b/xen/include/public/memory.h
> > @@ -609,15 +609,26 @@ struct xen_mem_acquire_resource {
> > domid_t domid;
> > /* IN - the type of resource */
> > uint16_t type;
> > +
> > +#define XENMEM_resource_ioreq_server 0
> > +
> > /*
> > * IN - a type-specific resource identifier, which must be zero
> > * unless stated otherwise.
> > + *
> > + * type == XENMEM_resource_ioreq_server -> id == ioreq server id
> > */
> > uint32_t id;
> > /* IN - number of (4K) frames of the resource to be mapped */
> > uint32_t nr_frames;
> > uint32_t pad;
> > - /* IN - the index of the initial frame to be mapped */
> > + /* IN - the index of the initial frame to be mapped
> > + *
> > + * type == XENMEM_resource_ioreq_server -> frame == 0 -> bufioreq
> > + * page
> > + * frame == 1 -> ioreq
> > + * page
> > + */
>
> Long comment or not I think you want to introduce constants
> for these two numbers.
>
Yes, that would probably be better, although increasing the number of supported vCPUs may mean that >1 becomes valid in future.
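One possibility for the constants (the names here are purely illustrative; whatever lands in the public header may differ) would be to make the synchronous ioreq page index a macro taking an offset, so extra pages for more vCPUs slot in naturally:

```c
/*
 * Illustrative frame-index constants for XENMEM_resource_ioreq_server.
 * Frame 0 is the buffered ioreq page; synchronous ioreq pages follow,
 * so a second synchronous page for additional vCPUs would simply be
 * IOREQ_SERVER_FRAME_IOREQ(1), and so on.
 */
#define IOREQ_SERVER_FRAME_BUFIOREQ  0
#define IOREQ_SERVER_FRAME_IOREQ(n)  (1 + (n))
```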
Paul
> Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Thread overview: 36+ messages
2017-10-06 12:25 [PATCH v9 00/11] x86: guest resource mapping Paul Durrant
2017-10-06 12:25 ` [PATCH v9 01/11] x86/hvm/ioreq: maintain an array of ioreq servers rather than a list Paul Durrant
2017-10-09 12:40 ` Jan Beulich
2017-10-09 12:45 ` Paul Durrant
2017-10-06 12:25 ` [PATCH v9 02/11] x86/hvm/ioreq: simplify code and use consistent naming Paul Durrant
2017-10-06 12:25 ` [PATCH v9 03/11] x86/hvm/ioreq: use gfn_t in struct hvm_ioreq_page Paul Durrant
2017-10-06 12:25 ` [PATCH v9 04/11] x86/hvm/ioreq: defer mapping gfns until they are actually requested Paul Durrant
2017-10-09 12:45 ` Jan Beulich
2017-10-09 12:47 ` Paul Durrant
2017-10-06 12:25 ` [PATCH v9 05/11] x86/mm: add HYPERVISOR_memory_op to acquire guest resources Paul Durrant
2017-10-09 13:05 ` Jan Beulich
2017-10-10 13:26 ` Paul Durrant
2017-10-11 8:20 ` Jan Beulich
2017-10-09 14:23 ` Jan Beulich
2017-10-10 14:10 ` Paul Durrant
2017-10-10 14:37 ` Paul Durrant
2017-10-11 8:30 ` Jan Beulich
2017-10-11 8:38 ` Paul Durrant
2017-10-11 8:48 ` Jan Beulich
2017-10-06 12:25 ` [PATCH v9 06/11] x86/hvm/ioreq: add a new mappable resource type Paul Durrant
2017-10-09 15:20 ` Jan Beulich
2017-10-10 14:45 ` Paul Durrant [this message]
2017-10-11 8:35 ` Jan Beulich
2017-10-06 12:25 ` [PATCH v9 07/11] x86/mm: add an extra command to HYPERVISOR_mmu_update Paul Durrant
2017-10-09 15:44 ` Jan Beulich
2017-10-06 12:25 ` [PATCH v9 08/11] tools/libxenforeignmemory: add support for resource mapping Paul Durrant
2017-10-06 12:25 ` [PATCH v9 09/11] tools/libxenforeignmemory: reduce xenforeignmemory_restrict code footprint Paul Durrant
2017-10-06 12:25 ` [PATCH v9 10/11] common: add a new mappable resource type: XENMEM_resource_grant_table Paul Durrant
2017-10-10 10:25 ` Jan Beulich
2017-10-10 16:01 ` Paul Durrant
2017-10-11 8:47 ` Jan Beulich
2017-10-11 8:54 ` Paul Durrant
2017-10-11 9:43 ` Jan Beulich
2017-10-11 9:54 ` Paul Durrant
2017-10-11 10:12 ` Jan Beulich
2017-10-06 12:25 ` [PATCH v9 11/11] tools/libxenctrl: use new xenforeignmemory API to seed grant table Paul Durrant