From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Cooper
Subject: Re: [PATCH 2/4] tools/libxc: Use an explicit check for PV MSRs in
 xc_domain_save()
Date: Thu, 5 Jun 2014 16:57:32 +0100
Message-ID: <5390936C.5010308@citrix.com>
References: <1401902794-15542-1-git-send-email-andrew.cooper3@citrix.com>
 <1401902794-15542-3-git-send-email-andrew.cooper3@citrix.com>
 <1401983546.15729.150.camel@hastur.hellion.org.uk>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <1401983546.15729.150.camel@hastur.hellion.org.uk>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Ian Campbell
Cc: Ian Jackson, Jan Beulich, Xen-devel
List-Id: xen-devel@lists.xenproject.org

On 05/06/14 16:52, Ian Campbell wrote:
> On Wed, 2014-06-04 at 18:26 +0100, Andrew Cooper wrote:
>> Migrating PV domains which are using MSRs is not supported.  This patch
>> uses the new XEN_DOMCTL_get_vcpu_msrs and fails the migration with an
>> explicit error.
>>
>> This is an improvement upon the current failure of
>> "No extended context for VCPUxx (ENOBUFS)".
>>
>> Support for migrating PV domains which are using MSRs will be included
>> in the migration v2 work.
>>
>> Signed-off-by: Andrew Cooper
>> CC: Jan Beulich
>> CC: Ian Jackson
>> ---
>>  tools/libxc/xc_domain_save.c |   38 ++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 38 insertions(+)
>>
>> diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c
>> index acf3685..7ef5183 100644
>> --- a/tools/libxc/xc_domain_save.c
>> +++ b/tools/libxc/xc_domain_save.c
>> @@ -1995,6 +1995,44 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
>>              goto out;
>>          }
>>
>> +        /* Check there are no PV MSRs in use. */
>> +        domctl.cmd = XEN_DOMCTL_get_vcpu_msrs;
>> +        domctl.domain = dom;
>> +        memset(&domctl.u, 0, sizeof(domctl.u));
>> +        domctl.u.vcpu_msrs.vcpu = i;
>> +
>> +        if ( xc_domctl(xch, &domctl) < 0 )
>> +        {
>> +            PERROR("Error querying maximum number of MSRs for VCPU%d", i);
>> +            goto out;
>> +        }
>> +
>> +        if ( domctl.u.vcpu_msrs.msr_count )
>> +        {
>> +            buffer = xc_hypercall_buffer_alloc(xch, buffer,
>> +                                               domctl.u.vcpu_msrs.msr_count *
>> +                                               sizeof(xen_domctl_vcpu_msr_t));
>> +            if ( !buffer )
>> +            {
>> +                PERROR("Insufficient memory for getting MSRs for VCPU%d", i);
>> +                goto out;
>> +            }
>> +            set_xen_guest_handle(domctl.u.vcpu_msrs.msrs, buffer);
>> +
>> +            if ( xc_domctl(xch, &domctl) < 0 )
>> +            {
>> +                PERROR("Error querying MSRs for VCPU%d", i);
>> +                goto out;
>> +            }
>> +
>> +            xc_hypercall_buffer_free(xch, buffer);
>> +            if ( domctl.u.vcpu_msrs.msr_count )
>
> I'm obviously missing something.
>
> You first call it with a NULL buffer to get
> domctl.u.vcpu_msrs.msr_count.  Then, if msr_count is non-zero, you
> allocate a buffer and ask Xen to fill it.  If it turns out that Xen did
> actually fill the buffer with something, then you consider that an error.

Correct.  This is the "ask once for the size, ask again for the content"
style used by other domctls.

> Can you not just error out on the basis of the initial msr_count?
>
> Ian.

No.  To avoid a race condition with the vcpu touching a new MSR between
the two hypercalls, Xen must return the maximum possible msr_count with
the request for size, so that the toolstack can guarantee to allocate a
large enough buffer.  Otherwise, the second hypercall could fail because
of an undersized buffer, despite the toolstack having queried for the
size beforehand.

The initial msr_count is therefore an upper bound, not the number of MSRs
actually in use; only the count returned by the second hypercall tells us
whether the domain really has MSRs that would need migrating.

~Andrew
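[Editor's sketch] The "ask once for the size, ask again for the content" pattern discussed in this thread can be illustrated in isolation. The snippet below is a minimal, self-contained analogy, not Xen code: get_vcpu_msrs(), MAX_POSSIBLE_MSRS, query_msrs() and the dummy MSR values are all hypothetical stand-ins for the real XEN_DOMCTL_get_vcpu_msrs interface. The key point it demonstrates is the one Andrew makes: the size query must report a maximum, so a vcpu touching new MSRs between the two calls cannot outgrow the caller's buffer.

```c
#include <assert.h>
#include <stdlib.h>

/* Upper bound the mock "hypervisor" reports for the size query. */
#define MAX_POSSIBLE_MSRS 8

static int current_msrs = 3;   /* MSRs the vcpu is using right now */

/*
 * Mock of the hypervisor side.  First call (buf == NULL): report the
 * MAXIMUM possible count, not the current one.  Second call: fill buf
 * and report the count actually written, failing if buf is undersized.
 */
static int get_vcpu_msrs(unsigned int *count, unsigned long *buf)
{
    if ( !buf )
    {
        *count = MAX_POSSIBLE_MSRS;
        return 0;
    }

    if ( *count < (unsigned int)current_msrs )
        return -1;                       /* undersized buffer */

    for ( int i = 0; i < current_msrs; i++ )
        buf[i] = 0xc0000080ul + i;       /* dummy MSR indices */

    *count = current_msrs;
    return 0;
}

/* Mock of the toolstack side: ask once for size, again for content. */
static int query_msrs(unsigned int *out_count)
{
    unsigned int count = 0;
    unsigned long *buf;
    int rc;

    if ( get_vcpu_msrs(&count, NULL) )   /* size query: returns the max */
        return -1;

    buf = calloc(count, sizeof(*buf));
    if ( !buf )
        return -1;

    current_msrs = 5;  /* simulate the vcpu touching new MSRs in between */

    rc = get_vcpu_msrs(&count, buf);     /* content query: cannot overflow */
    free(buf);
    if ( rc )
        return -1;

    *out_count = count;
    return 0;
}
```

Because the size query returned 8 (the maximum) rather than 3 (the count at that instant), the buffer still fits the 5 MSRs present by the time of the second call; reporting the instantaneous count instead would have made the second hypercall fail exactly as Andrew describes.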