From: Alexey Kardashevskiy <aik@ozlabs.ru>
To: Alexander Graf <agraf@suse.de>, qemu-devel@nongnu.org
Cc: Paolo Bonzini <pbonzini@redhat.com>, qemu-ppc@nongnu.org
Subject: Re: [Qemu-devel] [PATCH 2/2] spapr_nvram: Enable migration
Date: Thu, 25 Sep 2014 20:06:40 +1000 [thread overview]
Message-ID: <5423E930.6070704@ozlabs.ru> (raw)
In-Reply-To: <5423E3C6.4070708@suse.de>
On 09/25/2014 07:43 PM, Alexander Graf wrote:
>
>
> On 25.09.14 09:02, Alexey Kardashevskiy wrote:
>> The only case when sPAPR NVRAM migrates now is if it is backed by a file and
>> copy-storage migration is performed.
>>
>> This enables RAM copy of NVRAM even if NVRAM is backed by a file.
>>
>> This defines a VMSTATE descriptor for NVRAM device so the memory copy
>> of NVRAM can migrate and be written to a backing file on the destination
>> if one is provided.
>>
>> Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
>> ---
>> hw/nvram/spapr_nvram.c | 68 +++++++++++++++++++++++++++++++++++++++++++-------
>> 1 file changed, 59 insertions(+), 9 deletions(-)
>>
>> diff --git a/hw/nvram/spapr_nvram.c b/hw/nvram/spapr_nvram.c
>> index 6a72ef4..254009e 100644
>> --- a/hw/nvram/spapr_nvram.c
>> +++ b/hw/nvram/spapr_nvram.c
>> @@ -76,15 +76,20 @@ static void rtas_nvram_fetch(PowerPCCPU *cpu, sPAPREnvironment *spapr,
>> return;
>> }
>>
>> + assert(nvram->buf);
>> +
>> membuf = cpu_physical_memory_map(buffer, &len, 1);
>> +
>> + alen = len;
>> if (nvram->drive) {
>> alen = bdrv_pread(nvram->drive, offset, membuf, len);
>> + if (alen > 0) {
>> + memcpy(nvram->buf + offset, membuf, alen);
>
> Why?
This way I do not need a pre_save hook and I keep buf in sync with the
file. If I implemented pre_save, buf would serve two purposes - it would
either be the NVRAM itself (if there is no backing file; it exists for the
guest's lifetime) or a migration copy (it exists between pre_save and
post_load and is then disposed of). Two quite different uses of the same
thing confuse me. But - I do not mind doing it your way, no big deal;
should I?
>> + }
>> } else {
>> - assert(nvram->buf);
>> -
>> memcpy(membuf, nvram->buf + offset, len);
>> - alen = len;
>> }
>> +
>> cpu_physical_memory_unmap(membuf, len, 1, len);
>>
>> rtas_st(rets, 0, (alen < len) ? RTAS_OUT_HW_ERROR : RTAS_OUT_SUCCESS);
>> @@ -122,14 +127,15 @@ static void rtas_nvram_store(PowerPCCPU *cpu, sPAPREnvironment *spapr,
>> }
>>
>> membuf = cpu_physical_memory_map(buffer, &len, 0);
>> +
>> + alen = len;
>> if (nvram->drive) {
>> alen = bdrv_pwrite(nvram->drive, offset, membuf, len);
>> - } else {
>> - assert(nvram->buf);
>> -
>> - memcpy(nvram->buf + offset, membuf, len);
>> - alen = len;
>> }
>> +
>> + assert(nvram->buf);
>> + memcpy(nvram->buf + offset, membuf, len);
>> +
>> cpu_physical_memory_unmap(membuf, len, 0, len);
>>
>> rtas_st(rets, 0, (alen < len) ? RTAS_OUT_HW_ERROR : RTAS_OUT_SUCCESS);
>> @@ -144,9 +150,10 @@ static int spapr_nvram_init(VIOsPAPRDevice *dev)
>> nvram->size = bdrv_getlength(nvram->drive);
>> } else {
>> nvram->size = DEFAULT_NVRAM_SIZE;
>> - nvram->buf = g_malloc0(nvram->size);
>> }
>>
>> + nvram->buf = g_malloc0(nvram->size);
>> +
>> if ((nvram->size < MIN_NVRAM_SIZE) || (nvram->size > MAX_NVRAM_SIZE)) {
>> fprintf(stderr, "spapr-nvram must be between %d and %d bytes in size\n",
>> MIN_NVRAM_SIZE, MAX_NVRAM_SIZE);
>> @@ -166,6 +173,48 @@ static int spapr_nvram_devnode(VIOsPAPRDevice *dev, void *fdt, int node_off)
>> return fdt_setprop_cell(fdt, node_off, "#bytes", nvram->size);
>> }
>>
>> +static int spapr_nvram_pre_load(void *opaque)
>> +{
>> + sPAPRNVRAM *nvram = VIO_SPAPR_NVRAM(opaque);
>> +
>> + g_free(nvram->buf);
>> + nvram->buf = NULL;
>> + nvram->size = 0;
>> +
>> + return 0;
>> +}
>> +
>> +static int spapr_nvram_post_load(void *opaque, int version_id)
>> +{
>> + sPAPRNVRAM *nvram = VIO_SPAPR_NVRAM(opaque);
>> +
>> + if (nvram->drive) {
>> + int alen = bdrv_pwrite(nvram->drive, 0, nvram->buf, nvram->size);
>
> In the file backed case you're already overwriting the disk backed
> version, no?
Yes. Is that bad?
> Also, couldn't you just do the copy and provisioning of buf in a
> pre_save hook?
I can do that too, I just do not see why it would be a lot better though :)
--
Alexey
Thread overview:
2014-09-25 7:02 [Qemu-devel] [PATCH 0/2] spapr_nvram: Support migration Alexey Kardashevskiy
2014-09-25 7:02 ` [Qemu-devel] [PATCH 1/2] vmstate: Allow dynamic allocation for VBUFFER during migration Alexey Kardashevskiy
2014-09-25 7:02 ` [Qemu-devel] [PATCH 2/2] spapr_nvram: Enable migration Alexey Kardashevskiy
2014-09-25 9:43 ` Alexander Graf
2014-09-25 10:06 ` Alexey Kardashevskiy [this message]
2014-09-26 2:31 ` David Gibson
2014-09-26 2:53 ` Alexey Kardashevskiy
2014-09-29 8:30 ` Alexander Graf
-- strict thread matches above, loose matches on Subject: below --
2014-10-02 9:56 [Qemu-devel] [PATCH 0/2] Here are 2 patches to enable sPAPR NVRAM migration Alexey Kardashevskiy
2014-10-02 9:56 ` [Qemu-devel] [PATCH 2/2] spapr_nvram: Enable migration Alexey Kardashevskiy
2014-10-03 4:11 ` David Gibson