From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Cooper
Subject: Re: [BUGFIX][PATCH 3/4] hvm_save_one: return correct data.
Date: Sun, 15 Dec 2013 19:06:00 +0000
Message-ID: <52ADFD98.3060407@citrix.com>
In-Reply-To: <52ADF7F7.9060007@terremark.com>
To: Don Slutz
Cc: Keir Fraser, Ian Campbell, Stefano Stabellini, Ian Jackson, Jan Beulich, xen-devel
List-Id: xen-devel@lists.xenproject.org

On 15/12/2013 18:41, Don Slutz wrote:
> On 12/15/13 13:11, Andrew Cooper wrote:
>> On 15/12/2013 17:42, Don Slutz wrote:
>>>
>>> is the final part of this one.  So I do not find any code that does
>>> what you are wondering about.
>>>
>>>    -Don
>>>
>>
>> HVM_CPU_XSAVE_SIZE() changes depending on which xsave features have
>> ever been enabled by a vcpu (size is proportional to the contents of
>> v->arch.xcr0_accum).
>> It is not guaranteed to be the same for each vcpu in a domain
>> (although it will almost certainly be the same for any recognisable OS).
>>
> Ah, I see.
>
> Well, hvm_save_one, hvm_save_size, and hvm_save all expect that
> hvm_sr_handlers[typecode].size has the max size.  I do not see that
> being true for XSAVE.

hvm_sr_handlers[typecode].size does need to be the maximum possible
size.  That does not mean that the maximum amount of data will actually
be written.

So long as the load on the far side can read the
somewhat-shorter-than-maximum save record, it doesn't matter (except
for hvm_save_one).  hvm_save_size specifically needs to return the
maximum possible size, so the toolstack can allocate a big enough
buffer.  xc_domain_save() does correctly deal with Xen handing back
less than the maximum when actually saving the domain.

>> Jan's new generic MSR save record will also write less than the
>> maximum if it can.
>>
> This looks to be Jan's patch:
>
> http://lists.xen.org/archives/html/xen-devel/2013-12/msg02061.html
>
> It does look to set hvm_sr_handlers[typecode].size to the max size.
>
> And it looks like the code I did in patch #4 would actually fix this
> issue, since it now uses the length stored in the save descriptor to
> find each instance.
>
> Jan has some questions about patch #4, so what to do about it is still
> pending.
>
> Clearly I can merge #3 and #4 into one patch.
>
>    -Don Slutz
>> ~Andrew

As I said, to fix this newest problem I am experimenting with splitting
the per-dom and per-vcpu save handlers, and making good progress.  It
does mean that the fix for #3 would be much simpler.

I shall send out a very RFC series as soon as I can.

~Andrew