From: Max Reitz <mreitz@redhat.com>
To: Eric Blake <eblake@redhat.com>, qemu-devel@nongnu.org
Cc: Kevin Wolf <kwolf@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>
Subject: Re: [Qemu-devel] [PATCH] iotests: Test non-self-referential qcow2 refblocks
Date: Mon, 02 Mar 2015 11:31:22 -0500
Message-ID: <54F4905A.8050601@redhat.com>
In-Reply-To: <5481EEB7.9030308@redhat.com>

On 2014-12-05 at 12:43, Eric Blake wrote:
> On 12/05/2014 09:53 AM, Max Reitz wrote:
>> It is easy to create only self-referential refblocks, but there are
>> cases where that is impossible. This adds a test for two of those cases
>> (combined in a single test case).
>>
>> Suggested-by: Eric Blake <eblake@redhat.com>
>> Signed-off-by: Max Reitz <mreitz@redhat.com>
>> ---
>> This patch depends on version 4 (or hopefully any later version) of my
>> "qcow2: Support refcount orders != 4" series.
>> ---
>>   tests/qemu-iotests/115     | 95 ++++++++++++++++++++++++++++++++++++++++++++++
>>   tests/qemu-iotests/115.out |  8 ++++
>>   tests/qemu-iotests/group   |  1 +
>>   3 files changed, 104 insertions(+)
>>   create mode 100755 tests/qemu-iotests/115
>>   create mode 100644 tests/qemu-iotests/115.out
>> +
>> +# One refblock can describe (with cluster_size=512 and refcount_bits=64)
>> +# 512/8 = 64 clusters, therefore the L1 table should cover 128 clusters, which
>> +# equals 128 * (512/8) = 8192 entries (actually, 8192 - 512/8 = 8129 would
> This math is slightly off; you really only need 127 consecutive clusters
> to guarantee that you have an aligned 64 clusters somewhere in the mix.
>   Also, 8192 - 512/8 = 8128 (you are missing the +1 L2 entry that is
> necessary to ensure the rollover to 128 clusters).  The real minimum of
> L2 entries to provoke the situation we are after is thus 127 * (512/8) -
> 512/8 + 1 = 8065...
>
>> +# suffice, but it does not really matter). 8192 L2 tables can in turn describe
> ...at any rate, your conclusion that it does not really matter is
> correct; and rounding up to the actual power of 2 is both easier to
> explain and more likely to happen in practice (I'm not going to go out
> of my way to create a 255.9M guest image, after all).
>
> So, whether or not you want to tweak the comment wording, the test
> itself is correct.

Once again, I'm super-fast with my responses.

You're right. However, since the wording is not "This is the minimum for 
getting a non-self-referential refblock" but more like "If we do this, 
we are sure to get a non-self-referential refblock", I'll just leave it 
as it is.
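
For anyone who wants to double-check the arithmetic, here is a quick
shell sketch (the variable names are illustrative, not taken from the
test itself):

  cluster_size=512
  refcount_bits=64
  # One refblock is one cluster; each refcount entry is
  # refcount_bits/8 bytes, so one refblock covers 512/8 = 64 clusters:
  per_refblock=$((cluster_size / (refcount_bits / 8)))
  echo "$per_refblock"                              # 64
  # The rounded-up figure the test comment uses:
  echo $((128 * per_refblock))                      # 8192
  # Eric's exact minimum: 127 consecutive clusters guarantee an
  # aligned run of 64, minus one refblock's worth of entries, plus
  # the +1 entry that forces the rollover:
  echo $((127 * per_refblock - per_refblock + 1))   # 8065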

> Reviewed-by: Eric Blake <eblake@redhat.com>

Thanks!
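
P.S.: Purely for illustration, an image with the geometry discussed
above could be created roughly like this. This is not the recipe from
test 115 itself, and the refcount_bits creation option assumes the
"qcow2: Support refcount orders != 4" series this patch depends on:

  # Not the actual test; just the image geometry under discussion.
  qemu-img create -f qcow2 -o cluster_size=512,refcount_bits=64 \
      scratch.qcow2 256M
  # Filling the image forces enough refblock allocations that some
  # refblocks must describe clusters other than themselves:
  qemu-io -c 'write 0 256M' scratch.qcow2
  qemu-img check scratch.qcow2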
