From: Don Slutz <dslutz@verizon.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Keir Fraser <keir@xen.org>,
Ian Campbell <ian.campbell@citrix.com>,
Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
Ian Jackson <ian.jackson@eu.citrix.com>,
Don Slutz <dslutz@verizon.com>, Jan Beulich <JBeulich@suse.com>,
xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [BUGFIX][PATCH 3/4] hvm_save_one: return correct data.
Date: Sun, 15 Dec 2013 14:23:26 -0500
Message-ID: <52AE01AE.70901@terremark.com>
In-Reply-To: <52ADFD98.3060407@citrix.com>
On 12/15/13 14:06, Andrew Cooper wrote:
> On 15/12/2013 18:41, Don Slutz wrote:
>> On 12/15/13 13:11, Andrew Cooper wrote:
>>> On 15/12/2013 17:42, Don Slutz wrote:
>>>>
>>>> is the final part of this one. So I do not find any code that does
>>>> what you are wondering about.
>>>>
>>>> -Don
>>>>
>>>
>>> HVM_CPU_XSAVE_SIZE() changes depending on which xsave features have
>>> ever been enabled by a vcpu (size is proportional to the contents of
>>> v->arch.xcr0_accum). It is not guaranteed to be the same for each
>>> vcpu in a domain (although it will almost certainly be the same for
>>> any recognisable OS).
>>>
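For concreteness, a minimal sketch of that size dependence. The macro shape and xstate_ctxt_size() follow the Xen source of this era, but it is a fragment for illustration, not the exact code:

    /* Sketch: the XSAVE save record is a fixed header plus a variable
     * save area, sized by which xsave features the vcpu has ever
     * enabled (accumulated in v->arch.xcr0_accum). */
    #define HVM_CPU_XSAVE_SIZE(xcr0) \
        (offsetof(struct hvm_hw_cpu_xsave, save_area) + \
         xstate_ctxt_size(xcr0))

    /*
     * Two vcpus in one domain can therefore yield records of different
     * lengths, while hvm_sr_handlers[typecode].size must still hold
     * the maximum of them all.
     */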
>> Ah, I see.
>>
>> Well, hvm_save_one, hvm_save_size, and hvm_save all expect that
>> hvm_sr_handlers[typecode].size holds the maximum size. I do not see
>> that being true for XSAVE.
>
> hvm_sr_handlers[typecode].size does need to be the maximum possible
> size, but that does not mean the maximum amount of data will be
> written.
>
> So long as the load on the far side can read the
> somewhat-shorter-than-maximum save record, it doesn't matter (except
> for hvm_save_one). hvm_save_size specifically needs to return the
> maximum possible size, so the toolstack can allocate a big enough
> buffer. xc_domain_save() does correctly deal with Xen handing back
> less than the maximum when actually saving the domain.
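To illustrate why the per-type maximum matters there, a simplified sketch of the hvm_save_size() calculation. The handler fields and HVMSR_PER_VCPU follow the source of this era, but header/footer overhead and error handling are omitted:

    /* Sketch: sum each registered handler's *maximum* per-instance
     * size, once per vcpu for per-vcpu records, so the toolstack can
     * allocate a buffer that is always big enough. The actual save
     * may write less than this. */
    size_t sz = 0;
    unsigned int i;
    struct vcpu *v;

    for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
    {
        if ( hvm_sr_handlers[i].save == NULL )
            continue;
        if ( hvm_sr_handlers[i].kind == HVMSR_PER_VCPU )
            for_each_vcpu ( d, v )
                sz += hvm_sr_handlers[i].size;  /* max for this vcpu */
        else
            sz += hvm_sr_handlers[i].size;      /* max for the domain */
    }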
>
>>> Jan's new generic MSR save record will also write less than the
>>> maximum if it can.
>>>
>> This looks to be Jan's patch:
>>
>> http://lists.xen.org/archives/html/xen-devel/2013-12/msg02061.html
>>
>> It does look to set hvm_sr_handlers[typecode].size to the maximum
>> size.
>>
>> And it looks like the code I did in patch #4 would actually fix this
>> issue, since it now uses the length stored in the save descriptor to
>> find each instance.
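That descriptor-based walk can be sketched as below; struct hvm_save_descriptor is the real public layout from xen/include/public/hvm/save.h, while find_record() is an illustrative reconstruction, not the literal patch #4 code:

    #include <stddef.h>
    #include <stdint.h>

    /* Each save record starts with this descriptor. */
    struct hvm_save_descriptor {
        uint16_t typecode;   /* which record type follows */
        uint16_t instance;   /* e.g. vcpu id, or PIC 0/1 */
        uint32_t length;     /* payload length, descriptor excluded */
    };

    /* Sketch: locate (typecode, instance) by advancing through the
     * buffer using each record's own stored length, so records shorter
     * than the registered maximum are handled naturally. */
    static const uint8_t *find_record(const uint8_t *buf, size_t len,
                                      uint16_t typecode, uint16_t instance)
    {
        const uint8_t *p = buf, *end = buf + len;

        while ( p + sizeof(struct hvm_save_descriptor) <= end )
        {
            const struct hvm_save_descriptor *d = (const void *)p;

            if ( d->typecode == typecode && d->instance == instance )
                return p;                    /* descriptor found */
            p += sizeof(*d) + d->length;     /* skip to next record */
        }
        return NULL;                         /* not present */
    }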
>>
>> Jan has some questions about patch #4, so what to do about it is
>> still pending.
>>
>> Clearly I can merge #3 and #4 into one patch.
>>
>> -Don Slutz
>>> ~Andrew
>>
>
> As I said, to fix this newest problem I am experimenting with
> splitting the per-domain and per-vcpu save handlers, and making good
> progress. It does mean that the fix for #3 would be much simpler.
>
> I shall send out a very RFC series as soon as I can.
>
> ~Andrew
Great, I look forward to seeing them.
-Don Slutz