From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: xen-devel@lists.xenproject.org, Andrew Jones <drjones@redhat.com>,
David Vrabel <david.vrabel@citrix.com>,
Jan Beulich <JBeulich@suse.com>
Subject: Re: [PATCH RFC/WIPv2 1/6] Introduce XENMEM_transfer operation
Date: Wed, 24 Sep 2014 16:33:06 +0100 [thread overview]
Message-ID: <5422E432.7050809@citrix.com> (raw)
In-Reply-To: <87a95p6oht.fsf@vitty.brq.redhat.com>
On 24/09/14 16:13, Vitaly Kuznetsov wrote:
> Andrew Cooper <andrew.cooper3@citrix.com> writes:
>
>> On 24/09/14 15:20, Vitaly Kuznetsov wrote:
>>> The new operation reassigns pages from one domain to the other, mapping
>>> them at exactly the same GFNs in the destination domain. Pages mapped
>>> more than once (e.g. granted pages) are copied instead.
>>>
>>> Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
>>> ---
>>> xen/common/memory.c | 178 ++++++++++++++++++++++++++++++++++++++++++++
>>> xen/include/public/memory.h | 32 +++++++-
>>> 2 files changed, 209 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/xen/common/memory.c b/xen/common/memory.c
>>> index 2e3225d..653e117 100644
>>> --- a/xen/common/memory.c
>>> +++ b/xen/common/memory.c
>>> @@ -578,6 +578,180 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
>>> return rc;
>>> }
>>>
>>> +static long memory_transfer(XEN_GUEST_HANDLE_PARAM(xen_memory_transfer_t) arg)
>>> +{
>>> + long rc = 0;
>>> + struct xen_memory_transfer trans;
>>> + struct domain *source_d, *dest_d;
>>> + unsigned long mfn, gmfn, last_gmfn;
>>> + p2m_type_t p2mt;
>>> + struct page_info *page, *new_page;
>>> + char *sp, *dp;
>>> + int copying;
>>> +
>>> + if ( copy_from_guest(&trans, arg, 1) )
>>> + return -EFAULT;
>>> +
>>> + source_d = rcu_lock_domain_by_any_id(trans.source_domid);
>>> + if ( source_d == NULL )
>>> + {
>>> + rc = -ESRCH;
>>> + goto fail_early;
>>> + }
>>> +
>>> + if ( source_d->is_dying )
>>> + {
>>> + rc = -EINVAL;
>>> + rcu_unlock_domain(source_d);
>>> + goto fail_early;
>>> + }
>>> +
>>> + dest_d = rcu_lock_domain_by_any_id(trans.dest_domid);
>>> + if ( dest_d == NULL )
>>> + {
>>> + rc = -ESRCH;
>>> + rcu_unlock_domain(source_d);
>>> + goto fail_early;
>>> + }
>>> +
>>> + if ( dest_d->is_dying )
>>> + {
>>> + rc = -EINVAL;
>>> + goto fail;
>>> + }
>>> +
>>> + last_gmfn = trans.gmfn_start + trans.gmfn_count;
>>> + for ( gmfn = trans.gmfn_start; gmfn < last_gmfn; gmfn++ )
>>> + {
>>> + page = get_page_from_gfn(source_d, gmfn, &p2mt, 0);
>>> + if ( !page )
>>> + {
>>> + continue;
>>> + }
>>> +
>>> + mfn = page_to_mfn(page);
>>> + if ( !mfn_valid(mfn) )
>>> + {
>>> + put_page(page);
>>> + continue;
>>> + }
>>> +
>>> + copying = 0;
>>> +
>>> + if ( is_xen_heap_mfn(mfn) )
>>> + {
>>> + put_page(page);
>>> + continue;
>>> + }
>>> +
>>> + /* Page table always worth copying */
>>> + if ( (page->u.inuse.type_info & PGT_l4_page_table) ||
>>> + (page->u.inuse.type_info & PGT_l3_page_table) ||
>>> + (page->u.inuse.type_info & PGT_l2_page_table) ||
>>> + (page->u.inuse.type_info & PGT_l1_page_table) )
>>> + copying = 1;
>> How can copying pagetables like this ever work? You will end up with an
>> L4 belonging to the new domain pointing to L3s owned by the old domain.
>>
>> Even if you change the ownership of the pages pointed to by the L1s, as
>> soon as the old domain is torn down, the new domain's pagetables will be
>> freed heap pages.
> Yes, I'm aware it is broken, and that's actually why I sent this RFC: in
> my PATCH 0/6 letter the main question was what the best approach here is
> with regards to PV. If we want to avoid copying and updating these pages,
> we can do it while killing the original domain (so instead of this
> _transfer op we'll have a special 'domain kill' op).

Ah - I had not taken that meaning from your 0/6.

Xen has no knowledge whatsoever of a PV domain's p2m table (other than
holding a reference to it for toolstack/domain use). This knowledge
lives exclusively in the toolstack and guest.
As a result, I would say that a hypercall like this cannot possibly be
made to work for PV guests without some PV architectural changes in Xen.
~Andrew
Thread overview: 12+ messages
2014-09-24 14:20 [PATCH RFC/WIPv2 0/6] toolstack-based approach to pvhvm guest kexec Vitaly Kuznetsov
2014-09-24 14:20 ` [PATCH RFC/WIPv2 1/6] Introduce XENMEM_transfer operation Vitaly Kuznetsov
2014-09-24 15:07 ` Andrew Cooper
2014-09-24 15:13 ` Vitaly Kuznetsov
2014-09-24 15:33 ` Andrew Cooper [this message]
2014-09-24 14:20 ` [PATCH RFC/WIPv2 2/6] libxc: support " Vitaly Kuznetsov
2014-09-24 14:20 ` [PATCH RFC/WIPv2 3/6] libxc: introduce soft reset Vitaly Kuznetsov
2014-09-24 14:20 ` [PATCH RFC/WIPv2 4/6] xen: Introduce SHUTDOWN_soft_reset shutdown reason Vitaly Kuznetsov
2014-09-24 14:20 ` [PATCH RFC/WIPv2 5/6] libxl: support " Vitaly Kuznetsov
2014-09-24 14:20 ` [PATCH RFC/WIPv2 6/6] libxl: soft reset support Vitaly Kuznetsov
2014-09-24 15:23 ` [PATCH RFC/WIPv2 0/6] toolstack-based approach to pvhvm guest kexec Ian Campbell
2014-09-24 15:37 ` Vitaly Kuznetsov