From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Ian Campbell <ian.campbell@citrix.com>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [PATCH 2/4] tools/libxc: Use an explicit check for PV MSRs in xc_domain_save()
Date: Thu, 5 Jun 2014 16:57:32 +0100	[thread overview]
Message-ID: <5390936C.5010308@citrix.com> (raw)
In-Reply-To: <1401983546.15729.150.camel@hastur.hellion.org.uk>

On 05/06/14 16:52, Ian Campbell wrote:
> On Wed, 2014-06-04 at 18:26 +0100, Andrew Cooper wrote:
>> Migrating PV domains using MSRs is not supported.  This uses the new
>> XEN_DOMCTL_get_vcpu_msrs and will fail the migration with an explicit error.
>>
>> This is an improvement upon the current failure of
>>   "No extended context for VCPUxx (ENOBUFS)"
>>
>> Support for migrating PV domains which are using MSRs will be included in the
>> migration v2 work.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
>> ---
>>  tools/libxc/xc_domain_save.c |   38 ++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 38 insertions(+)
>>
>> diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c
>> index acf3685..7ef5183 100644
>> --- a/tools/libxc/xc_domain_save.c
>> +++ b/tools/libxc/xc_domain_save.c
>> @@ -1995,6 +1995,44 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
>>              goto out;
>>          }
>>  
>> +        /* Check there are no PV MSRs in use. */
>> +        domctl.cmd = XEN_DOMCTL_get_vcpu_msrs;
>> +        domctl.domain = dom;
>> +        memset(&domctl.u, 0, sizeof(domctl.u));
>> +        domctl.u.vcpu_msrs.vcpu = i;
>> +        if ( xc_domctl(xch, &domctl) < 0 )
>> +        {
>> +            PERROR("Error querying maximum number of MSRs for VCPU%d", i);
>> +            goto out;
>> +        }
>> +
>> +        if ( domctl.u.vcpu_msrs.msr_count )
>> +        {
>> +            buffer = xc_hypercall_buffer_alloc(xch, buffer,
>> +                                               domctl.u.vcpu_msrs.msr_count *
>> +                                               sizeof(xen_domctl_vcpu_msr_t));
>> +            if ( !buffer )
>> +            {
>> +                PERROR("Insufficient memory for getting MSRs for VCPU%d", i);
>> +                goto out;
>> +            }
>> +            set_xen_guest_handle(domctl.u.vcpu_msrs.msrs, buffer);
>> +
>> +            if ( xc_domctl(xch, &domctl) < 0 )
>> +            {
>> +                PERROR("Error querying MSRs for VCPU%d", i);
>> +                goto out;
>> +            }
>> +
>> +            xc_hypercall_buffer_free(xch, buffer);
>> +            if ( domctl.u.vcpu_msrs.msr_count )
> I'm obviously missing something.
>
> You first call it with a NULL buffer to get
> domctl.u.vcpu_msrs.msr_count. Then if msr_count is non-zero you allocate
> a buffer and ask Xen to fill it. If it turns out that Xen did actually
> fill the buffer with something then you consider that an error.

Correct.  This is the "ask once for the size, a second time for the
content" style used in other domctls.

>
> Can you not just error out on the basis of the initial msr_count?
>
> Ian.
>

No.

To avoid a race with the vcpu touching a new MSR between the two
hypercalls, Xen must return the maximum possible msr_count in response
to the size query, so the toolstack is guaranteed to allocate a large
enough buffer.

Otherwise, the second hypercall could fail because of an undersized
buffer, despite the toolstack having queried for the size beforehand.

~Andrew

Thread overview: 19+ messages
2014-06-04 17:26 [PATCH 0/4] Fixes to several domctls for migration Andrew Cooper
2014-06-04 17:26 ` [PATCH 1/4] x86/domctl: Implement XEN_DOMCTL_{get, set}_vcpu_msrs Andrew Cooper
2014-06-05 12:46   ` Jan Beulich
2014-06-05 13:01     ` Andrew Cooper
2014-06-05 13:33       ` Jan Beulich
2014-06-06 14:53         ` Andrew Cooper
2014-06-06 15:09           ` Jan Beulich
2014-06-06 15:28             ` Andrew Cooper
2014-06-04 17:26 ` [PATCH 2/4] tools/libxc: Use an explicit check for PV MSRs in xc_domain_save() Andrew Cooper
2014-06-05 13:41   ` Jan Beulich
2014-06-05 15:52   ` Ian Campbell
2014-06-05 15:57     ` Andrew Cooper [this message]
2014-06-06  9:15       ` Ian Campbell
2014-06-06  9:44         ` Andrew Cooper
2014-06-06  9:48           ` Ian Campbell
2014-06-04 17:26 ` [PATCH 3/4] x86/domctl: Remove PV MSR parts of XEN_DOMCTL_[gs]et_ext_vcpucontext Andrew Cooper
2014-06-05  7:52   ` Frediano Ziglio
2014-06-05  9:25     ` Andrew Cooper
2014-06-04 17:26 ` [PATCH 4/4] x86/domctl: Two functional fixes to XEN_DOMCTL_[gs]etvcpuextstate Andrew Cooper
