From: Max Reitz <mreitz@redhat.com>
To: Kevin Wolf <kwolf@redhat.com>
Cc: Nir Soffer <nsoffer@redhat.com>,
	Alberto Garcia <berto@igalia.com>,
	qemu-devel@nongnu.org, qemu-block@nongnu.org
Subject: Re: [PATCH v2 4/4] qcow2: Use BDRV_SECTOR_SIZE instead of the hardcoded value
Date: Fri, 17 Jan 2020 12:01:06 +0100
Message-ID: <f0de9a2b-6ce2-22a5-8bfe-526100fcd544@redhat.com>
In-Reply-To: <20200117095538.GB7394@dhcp-200-226.str.redhat.com>


On 17.01.20 10:55, Kevin Wolf wrote:
> On 17.01.2020 10:12, Max Reitz wrote:
>> On 17.01.20 00:26, Alberto Garcia wrote:
>>> On Tue 14 Jan 2020 03:15:48 PM CET, Max Reitz wrote:
>>>>> @@ -219,7 +219,7 @@ static int l2_load(BlockDriverState *bs, uint64_t offset,
>>>>>   * Writes one sector of the L1 table to the disk (can't update single entries
>>>>>   * and we really don't want bdrv_pread to perform a read-modify-write)
>>>>>   */
>>>>> -#define L1_ENTRIES_PER_SECTOR (512 / 8)
>>>>> +#define L1_ENTRIES_PER_SECTOR (BDRV_SECTOR_SIZE / 8)
>>>>>  int qcow2_write_l1_entry(BlockDriverState *bs, int l1_index)
>>>>
>>>> Here it’s because the comment is wrong: “Can’t update single entries” –
>>>> yes, we can.  We’d just have to do a bdrv_pwrite() to a single entry.
>>>
>>> What's the point of qcow2_write_l1_entry() then?
>>
>> I think the point was that we couldn’t, for a long time, because the
>> block layer only provided sector-granularity access.  This function
>> simply was never changed when the block layer gained the ability to do
>> byte-granularity I/O.
>>
>> (We’d still need this function, but only for the endian swap, I think.)
> 
> We still can't do byte-granularity writes with O_DIRECT, because that's
> a kernel requirement.

Ah, yes.  But that makes BDRV_SECTOR_SIZE the wrong choice: the
alignment we actually have to honor is whatever bs->file requires, not
a fixed 512 bytes.
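
(For illustration only: the byte-granularity single-entry update that
the quoted comment calls impossible would basically look like the
sketch below.  Whether it avoids an RMW then depends on
bs->file->bl.request_alignment, not on BDRV_SECTOR_SIZE.  The local
variable names are made up; only bdrv_pwrite_sync(), cpu_to_be64() and
the BDRVQcow2State fields are real.)

    /* Hypothetical sketch, not actual qcow2 code: write just one
     * big-endian L1 entry.  Without O_DIRECT, request_alignment is
     * usually 1 and this goes through as-is; with O_DIRECT it is the
     * host sector size, so the block layer has to bounce-buffer and
     * do the read-modify-write internally. */
    uint64_t entry_offset = s->l1_table_offset + 8 * (uint64_t) l1_index;
    uint64_t be_entry = cpu_to_be64(s->l1_table[l1_index]);

    ret = bdrv_pwrite_sync(bs->file, entry_offset, &be_entry,
                           sizeof(be_entry));
    if (ret < 0) {
        return ret;
    }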

> The comment explains that we don't want to do a RMW cycle to write a
> single entry because that would be slower than just writing a whole
> sector. I think this is still accurate. Maybe we should change the
> comment to say "can't necessarily update". (The part that looks really
> wrong in the comment is "bdrv_pread", that should be "bdrv_pwrite"...)

Hm.  But we wouldn’t do an RMW cycle without O_DIRECT, would we?

> Now, what's wrong about the logic to avoid the RMW is that it assumes
> a fixed required alignment of 512. What it should do is look at
> bs->file->bl.request_alignment and round accordingly.

Yes.
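
Something like this, perhaps?  (Rough, untested sketch only; the
function name and most of the helpers mirror the existing
qcow2_write_l1_entry(), but the request_alignment handling is just my
reading of what you suggest.)

    int qcow2_write_l1_entry(BlockDriverState *bs, int l1_index)
    {
        BDRVQcow2State *s = bs->opaque;
        /* Round to what bs->file actually requires instead of assuming
         * 512-byte sectors; L1 entries are 8 bytes each. */
        uint32_t align = MAX(bs->file->bl.request_alignment, 8);
        int entries_per_chunk = align / 8;
        int start = QEMU_ALIGN_DOWN(l1_index, entries_per_chunk);
        g_autofree uint64_t *buf = g_new0(uint64_t, entries_per_chunk);
        int i, ret;

        for (i = 0; i < entries_per_chunk && start + i < s->l1_size; i++) {
            buf[i] = cpu_to_be64(s->l1_table[start + i]);
        }

        ret = qcow2_pre_write_overlap_check(bs, QCOW2_OL_ACTIVE_L1,
                                            s->l1_table_offset + 8 * start,
                                            align, false);
        if (ret < 0) {
            return ret;
        }

        BLKDBG_EVENT(bs->file, BLKDBG_L1_UPDATE);
        ret = bdrv_pwrite_sync(bs->file, s->l1_table_offset + 8 * start,
                               buf, align);
        if (ret < 0) {
            return ret;
        }

        return 0;
    }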

>>>>> @@ -3836,7 +3837,7 @@ qcow2_co_copy_range_from(BlockDriverState *bs,
>>>>>          case QCOW2_CLUSTER_NORMAL:
>>>>>              child = s->data_file;
>>>>>              copy_offset += offset_into_cluster(s, src_offset);
>>>>> -            if ((copy_offset & 511) != 0) {
>>>>> +            if (!QEMU_IS_ALIGNED(copy_offset, BDRV_SECTOR_SIZE)) {
>>>>
>>>> Hm.  I don’t get this one.
>>>
>>> Checking the code (e.g. block_copy_do_copy()) it seems that the whole
>>> chunk must be cluster aligned, so I don't get this one either.
>>
>> Hm, how did you get to block_copy_do_copy()?  That’s part of the
>> block-copy infrastructure that’s only used for the backup job, as far as
>> I’m aware.  It’s different from copy_range.
>>
>> I don’t see any limitation for copy_range.  I suppose maybe it doesn’t
>> work for anything that isn’t aligned to physical sectors?  But the qcow2
>> driver shouldn’t care about that.
>>
>> One thing’s for sure, the raw driver doesn’t care about it.
> 
> I don't understand this one either.

Good. :-)

Max

