From: Max Reitz <mreitz@redhat.com>
To: Zhang Haoyu <zhanghy@sangfor.com>, qemu-devel <qemu-devel@nongnu.org>
Cc: Kevin Wolf <kwolf@redhat.com>,
qemu-trivial <qemu-trivial@nongnu.org>,
Stefan Hajnoczi <stefanha@redhat.com>
Subject: Re: [Qemu-devel] [PATCH] snapshot: use local variable to bdrv_pwrite_sync L1 table
Date: Tue, 21 Oct 2014 12:53:40 +0200 [thread overview]
Message-ID: <54463B34.3090203@redhat.com> (raw)
In-Reply-To: <201410211849166022861@sangfor.com>
On 2014-10-21 at 12:49, Zhang Haoyu wrote:
>>> Use a local variable for the L1 table passed to bdrv_pwrite_sync(),
>>> so there is no need to convert the cached L1 table between
>>> big-endian and host byte order.
>>>
>>> Signed-off-by: Zhang Haoyu <zhanghy@sangfor.com>
>>> ---
>>> block/qcow2-refcount.c | 22 +++++++---------------
>>> 1 file changed, 7 insertions(+), 15 deletions(-)
>>>
>>> diff --git a/block/qcow2-refcount.c b/block/qcow2-refcount.c
>>> index 2bcaaf9..8b318e8 100644
>>> --- a/block/qcow2-refcount.c
>>> +++ b/block/qcow2-refcount.c
>>> @@ -881,7 +881,6 @@ int qcow2_update_snapshot_refcount(BlockDriverState *bs,
>>> {
>>> BDRVQcowState *s = bs->opaque;
>>> uint64_t *l1_table, *l2_table, l2_offset, offset, l1_size2;
>>> - bool l1_allocated = false;
>>> int64_t old_offset, old_l2_offset;
>>> int i, j, l1_modified = 0, nb_csectors, refcount;
>>> int ret;
>>> @@ -889,6 +888,11 @@ int qcow2_update_snapshot_refcount(BlockDriverState *bs,
>>> l2_table = NULL;
>>> l1_table = NULL;
>> Please remove this assignment; thanks to this hunk we don't need it anymore.
> OK.
>>> l1_size2 = l1_size * sizeof(uint64_t);
>>> + l1_table = g_try_malloc0(align_offset(l1_size2, 512));
>> I wanted to propose using qemu_try_blockalign(), but since it'd require
>> a memset() afterwards, it gets rather ugly.
>>
>> Could you at least replace 512 by BDRV_SECTOR_SIZE, and maybe even
>> align_offset() by ROUND_UP()? We should probably do the latter in all of
>> the qcow2 code, though, I think it's just there because it has been
>> around since before there was a ROUND_UP()...
>>
> Good, I will replace 512 with BDRV_SECTOR_SIZE, and replace align_offset with ROUND_UP.
>>> + if (l1_size2 && l1_table == NULL) {
>>> + ret = -ENOMEM;
>>> + goto fail;
>>> + }
>>>
>>> s->cache_discards = true;
>>>
>>> @@ -896,13 +900,6 @@ int qcow2_update_snapshot_refcount(BlockDriverState *bs,
>>> * l1_table_offset when it is the current s->l1_table_offset! Be careful
>>> * when changing this! */
>>> if (l1_table_offset != s->l1_table_offset) {
>>> - l1_table = g_try_malloc0(align_offset(l1_size2, 512));
>>> - if (l1_size2 && l1_table == NULL) {
>>> - ret = -ENOMEM;
>>> - goto fail;
>>> - }
>>> - l1_allocated = true;
>>> -
>>> ret = bdrv_pread(bs->file, l1_table_offset, l1_table, l1_size2);
>>> if (ret < 0) {
>>> goto fail;
>>> @@ -912,8 +909,7 @@ int qcow2_update_snapshot_refcount(BlockDriverState *bs,
>>> be64_to_cpus(&l1_table[i]);
>>> } else {
>>> assert(l1_size == s->l1_size);
>>> - l1_table = s->l1_table;
>>> - l1_allocated = false;
>>> + memcpy(l1_table, s->l1_table, l1_size2);
>>> }
>>>
>>> for(i = 0; i < l1_size; i++) {
>>> @@ -1055,12 +1051,8 @@ fail:
>> I don't think it will change a lot, but could you wrap the
>> "s->cache_discards = false; qcow2_process_discards(bs, ret);" in an "if
>> (s->cache_discards)"? You have introduced a case where s->cache_discards
>> was still false, so we don't need to call qcow2_process_discards() then
>> (which will hopefully return immediately, but well...).
> s->cache_discards is initially set to true in qcow2_update_snapshot_refcount();
> where is s->cache_discards set back to false?
It is? I only see it set to true after the "if (l1_size2 && l1_table ==
NULL)" conditional block.
Well, okay, I don't know about the callers of qcow2_process_discards(),
so they may have set s->cache_discards to true and then expect this
function to always call qcow2_process_discards() and set
s->cache_discards to false. Okay, then let's just keep it as it is.
Max
> Or do you mean that s->cache_discards should be set to false
> after g_try_malloc0(align_offset(l1_size2, 512)) fails?
>
>>> }
>>>
>>> ret = bdrv_pwrite_sync(bs->file, l1_table_offset, l1_table, l1_size2);
>>> -
>>> - for (i = 0; i < l1_size; i++) {
>>> - be64_to_cpus(&l1_table[i]);
>>> - }
>>> }
>>> - if (l1_allocated)
>>> + if (l1_table)
>>> g_free(l1_table);
>> Just drop the condition. g_free(l1_table); is enough.
>>
> OK.
>>> return ret;
>>> }
>> The change itself is good, it just needs some polishing.
>>
>> Max
2014-10-21 8:04 [Qemu-devel] [PATCH] snapshot: use local variable to bdrv_pwrite_sync L1 table Zhang Haoyu
2014-10-21 9:24 ` Max Reitz
2014-10-21 10:49 ` [Qemu-devel] [PATCH] snapshot: use local variable to bdrv_pwrite_sync L1 table Zhang Haoyu
2014-10-21 10:53 ` Max Reitz [this message]
2014-10-22 22:18 ` [Qemu-devel] [PATCH] snapshot: use local variable to bdrv_pwrite_sync L1 table Eric Blake
2014-10-23 1:02 ` Gonglei
2014-10-23 7:03 ` Kevin Wolf
2014-10-23 7:12 ` Max Reitz