From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Jan Beulich' <JBeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
Wei Liu <wei.liu2@citrix.com>,
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
Andrew Cooper <Andrew.Cooper3@citrix.com>,
"Tim (Xen.org)" <tim@xen.org>,
George Dunlap <George.Dunlap@citrix.com>,
Ian Jackson <Ian.Jackson@citrix.com>,
"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v9 05/11] x86/mm: add HYPERVISOR_memory_op to acquire guest resources
Date: Tue, 10 Oct 2017 13:26:22 +0000
Message-ID: <c9c409345edd498fadf0d748f539eaf9@AMSPEX02CL03.citrite.net>
In-Reply-To: <59DB903F0200007800183F04@prv-mh.provo.novell.com>
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 09 October 2017 14:06
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; George Dunlap <George.Dunlap@citrix.com>; Ian
> Jackson <Ian.Jackson@citrix.com>; Stefano Stabellini
> <sstabellini@kernel.org>; xen-devel@lists.xenproject.org; Konrad Rzeszutek
> Wilk <konrad.wilk@oracle.com>; Tim (Xen.org) <tim@xen.org>
> Subject: Re: [PATCH v9 05/11] x86/mm: add HYPERVISOR_memory_op to
> acquire guest resources
>
> >>> On 06.10.17 at 14:25, <paul.durrant@citrix.com> wrote:
> > @@ -395,6 +397,39 @@ int compat_memory_op(unsigned int cmd,
> XEN_GUEST_HANDLE_PARAM(void) compat)
> > }
> > #endif
> >
> > + case XENMEM_acquire_resource:
> > + {
> > + xen_ulong_t *gmfn_list = (xen_ulong_t *)(nat.mar + 1);
> > +
> > + if ( copy_from_guest(&cmp.mar, compat, 1) ||
> > + !compat_handle_okay(cmp.mar.gmfn_list,
> > + cmp.mar.nr_frames) )
> > + return -EFAULT;
> > +
> > + if ( sizeof(*gmfn_list) * cmp.mar.nr_frames >
> > + COMPAT_ARG_XLAT_SIZE - sizeof(*nat.mar) )
> > + return -E2BIG;
>
> With the actual handler accepting no more than 2 entries, this is
> certainly good enough for now, but since larger arrays could be
> handled here, perhaps a comment would help clarify why this is
> sufficient at the moment.
Ok.
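Perhaps something along these lines (just a sketch; the wording of the
comment is open to suggestion):

    /*
     * The handler currently accepts no more than 2 frames (see
     * acquire_resource()), so this translation buffer cannot
     * overflow in practice. Revisit this check if that limit is
     * ever raised.
     */
    if ( sizeof(*gmfn_list) * cmp.mar.nr_frames >
         COMPAT_ARG_XLAT_SIZE - sizeof(*nat.mar) )
        return -E2BIG;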
>
> > @@ -535,6 +570,23 @@ int compat_memory_op(unsigned int cmd,
> XEN_GUEST_HANDLE_PARAM(void) compat)
> > rc = -EFAULT;
> > break;
> >
> > + case XENMEM_acquire_resource:
> > + {
> > + xen_ulong_t *gmfn_list = (xen_ulong_t *)(nat.mar + 1);
> > +
> > + for ( i = 0; i < cmp.mar.nr_frames; i++ )
> > + {
> > + compat_ulong_t gmfn = gmfn_list[i];
> > +
> > + if ( gmfn != gmfn_list[i] )
> > + return -ERANGE;
> > +
> > + if ( __copy_to_compat_offset(cmp.mar.gmfn_list, i,
> > + &gmfn, 1) )
> > + return -EFAULT;
>
> While I consider it acceptable to leave somewhat inconsistent state
> in this latter case (as it's under guest control to avoid the
> situation), I'm not sure the -ERANGE return above wouldn't better be
> accompanied by undoing the operation. Undoing in both cases would
> become imperative once set_foreign_p2m_entry() acquires proper page
> references.
Ok. I'll try to make sure things get left in a consistent state.
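E.g. by checking every entry for truncation before copying any of them
back, so that an -ERANGE exit leaves nothing partially written (a
sketch):

    /* Reject any truncating entry up front... */
    for ( i = 0; i < cmp.mar.nr_frames; i++ )
        if ( (compat_ulong_t)gmfn_list[i] != gmfn_list[i] )
            return -ERANGE;

    /* ...then copy back, knowing no entry can fail with -ERANGE. */
    for ( i = 0; i < cmp.mar.nr_frames; i++ )
    {
        compat_ulong_t gmfn = gmfn_list[i];

        if ( __copy_to_compat_offset(cmp.mar.gmfn_list, i, &gmfn, 1) )
            return -EFAULT;
    }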
>
> > --- a/xen/common/memory.c
> > +++ b/xen/common/memory.c
> > @@ -965,6 +965,67 @@ static long xatp_permission_check(struct domain
> *d, unsigned int space)
> > return xsm_add_to_physmap(XSM_TARGET, current->domain, d);
> > }
> >
> > +#ifdef CONFIG_X86
>
> Could you try to (a) have only a single such #ifdef and (b) reduce
> its scope as much as possible? Even if for now the code is dead on
> ARM, making sure it continues to compile there would be helpful.
>
Ok. I don't have an ARM machine to build on, though, so I can't tell whether anything is broken there.
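One way to keep it compiling there might be to narrow the #ifdef to
just the p2m call itself (a sketch; I'd need an ARM build to confirm):

    #ifdef CONFIG_X86
                rc = set_foreign_p2m_entry(currd, gfn, _mfn(mfn_list[i]));
    #else
                rc = -EOPNOTSUPP; /* no mappable resource types on ARM yet */
    #endif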
> > +static int acquire_resource(const xen_mem_acquire_resource_t *xmar)
> > +{
> > + struct domain *d, *currd = current->domain;
> > + unsigned long mfn_list[2];
> > + int rc;
> > +
> > + if ( xmar->nr_frames == 0 || xmar->pad != 0 )
> > + return -EINVAL;
> > +
> > + if ( xmar->nr_frames > ARRAY_SIZE(mfn_list) )
> > + return -E2BIG;
> > +
> > + d = rcu_lock_domain_by_any_id(xmar->domid);
> > + if ( d == NULL )
> > + return -ESRCH;
> > +
> > + rc = xsm_domain_memory_map(XSM_TARGET, d);
> > + if ( rc )
> > + goto out;
> > +
> > + switch ( xmar->type )
> > + {
> > + default:
> > + rc = -EOPNOTSUPP;
> > + break;
> > + }
> > +
> > + if ( rc )
> > + goto out;
> > +
> > + if ( !paging_mode_translate(currd) )
> > + {
> > + if ( copy_to_guest_offset(xmar->gmfn_list, 0, mfn_list,
> > + xmar->nr_frames) )
>
> Just copy_to_guest()?
>
Ok, if that is preferable.
> > + rc = -EFAULT;
> > + }
> > + else
> > + {
> > + unsigned int i;
> > +
> > + for ( i = 0; i < xmar->nr_frames; i++ )
> > + {
> > + xen_pfn_t gfn;
> > +
> > + rc = -EFAULT;
> > + if ( copy_from_guest_offset(&gfn, xmar->gmfn_list, i, 1) )
> > + goto out;
> > +
> > + rc = set_foreign_p2m_entry(currd, gfn, _mfn(mfn_list[i]));
> > + if ( rc )
> > + goto out;
>
> Perhaps partial success indication would be necessary here, so
> the caller knows what to undo later. Or, alternatively (as said
> for the compat wrapper), you may want/need to clean up yourself.
Ok.
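Probably by unwinding whatever has already been mapped before returning
the error, e.g. (a sketch; assuming guest_physmap_remove_page() is the
right inverse of set_foreign_p2m_entry() here):

    for ( i = 0; i < xmar->nr_frames; i++ )
    {
        xen_pfn_t gfn;

        rc = -EFAULT;
        if ( copy_from_guest_offset(&gfn, xmar->gmfn_list, i, 1) )
            break;

        rc = set_foreign_p2m_entry(currd, gfn, _mfn(mfn_list[i]));
        if ( rc )
            break;
    }

    if ( rc )
    {
        /* Unwind the mappings established before the failure. */
        while ( i-- != 0 )
        {
            xen_pfn_t gfn;

            if ( !copy_from_guest_offset(&gfn, xmar->gmfn_list, i, 1) )
                guest_physmap_remove_page(currd, _gfn(gfn),
                                          _mfn(mfn_list[i]), 0);
        }
    }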
>
> > --- a/xen/include/public/memory.h
> > +++ b/xen/include/public/memory.h
> > @@ -599,6 +599,36 @@ struct xen_reserved_device_memory_map {
> > typedef struct xen_reserved_device_memory_map
> xen_reserved_device_memory_map_t;
> > DEFINE_XEN_GUEST_HANDLE(xen_reserved_device_memory_map_t);
> >
> > +/*
> > + * Get the pages for a particular guest resource, so that they can be
> > + * mapped directly by a tools domain.
> > + */
> > +#define XENMEM_acquire_resource 28
> > +struct xen_mem_acquire_resource {
> > + /* IN - the domain whose resource is to be mapped */
> > + domid_t domid;
> > + /* IN - the type of resource */
> > + uint16_t type;
> > + /*
> > + * IN - a type-specific resource identifier, which must be zero
> > + * unless stated otherwise.
> > + */
> > + uint32_t id;
> > + /* IN - number of (4K) frames of the resource to be mapped */
> > + uint32_t nr_frames;
> > + uint32_t pad;
> > + /* IN - the index of the initial frame to be mapped */
> > + uint64_aligned_t frame;
>
> Does this really need to be 64 bits wide?
>
I'd prefer not to limit it to 32 bits: with 4K frames a 32-bit index caps the frame space at 16TB, and we may want to use this hypercall to map something larger than that.
> > + /* IN/OUT - If the tools domain is PV then, upon return, gmfn_list
> > + * will be populated with the MFNs of the resource.
> > + * If the tools domain is HVM then it is expected that, on
> > + * entry, gmfn_list will be populated with a list of GFNs
> > + * that will be mapped to the MFNs of the resource.
> > + */
> > + XEN_GUEST_HANDLE(xen_ulong_t) gmfn_list;
>
> Why not xen_pfn_t?
I've debated this with myself a few times. I'm not convinced that something that could be an mfn or a gfn should have a xen_pfn_t type.
Paul
>
> Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel