From: Max Reitz <mreitz@redhat.com>
To: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>,
qemu-block@nongnu.org
Cc: Kevin Wolf <kwolf@redhat.com>, Eric Blake <eblake@redhat.com>,
qemu-devel@nongnu.org
Subject: Re: [PATCH v2 2/6] block: block-status cache for data regions
Date: Thu, 24 Jun 2021 13:11:45 +0200
Message-ID: <a933812a-1015-d3d1-051c-e5f430fedcc6@redhat.com>
In-Reply-To: <6796cd47-6a05-69e3-fa75-1fb25d7a6931@virtuozzo.com>
On 24.06.21 12:05, Vladimir Sementsov-Ogievskiy wrote:
> 23.06.2021 18:01, Max Reitz wrote:
>> As we have attempted before
>> (https://lists.gnu.org/archive/html/qemu-devel/2019-01/msg06451.html,
>> "file-posix: Cache lseek result for data regions";
>> https://lists.nongnu.org/archive/html/qemu-block/2021-02/msg00934.html,
>> "file-posix: Cache next hole"), this patch seeks to reduce the number of
>> SEEK_DATA/HOLE operations the file-posix driver has to perform. The
>> main difference is that this time it is implemented as part of the
>> general block layer code.
>>
>> The problem we face is that on some filesystems or in some
>> circumstances, SEEK_DATA/HOLE is unreasonably slow. Given the
>> implementation is outside of qemu, there is little we can do about its
>> performance.
>>
>> We have already introduced the want_zero parameter to
>> bdrv_co_block_status() to reduce the number of SEEK_DATA/HOLE calls
>> unless we really want zero information; but sometimes we do want that
>> information, because for files that consist largely of zero areas,
>> special-casing those areas can give large performance boosts. So the
>> real problem is with files that consist largely of data, so that
>> inquiring the block status does not gain us much performance, but where
>> such an inquiry itself takes a lot of time.
>>
>> To address this, we want to cache data regions. Most of the time, when
>> bad performance is reported, it is in places where the image is iterated
>> over from start to end (qemu-img convert or the mirror job), so a simple
>> yet effective solution is to cache only the current data region.
>>
>> (Note that only caching data regions but not zero regions means that
>> returning false information from the cache is not catastrophic: Treating
>> zeroes as data is fine. While we try to invalidate the cache on zero
>> writes and discards, such incongruences may still occur when there are
>> other processes writing to the image.)
>>
>> We only use the cache for nodes without children (i.e. protocol nodes),
>> because that is where the problem is: Drivers that rely on block-status
>> implementations outside of qemu (e.g. SEEK_DATA/HOLE).
>>
>> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/307
>> Signed-off-by: Max Reitz <mreitz@redhat.com>
>
> I'm new to RCU, so my review may not be reliable...
Yeah, well, same here. :)
>> ---
>>   include/block/block_int.h | 47 ++++++++++++++++++++++
>>   block.c                   | 84 +++++++++++++++++++++++++++++++++++++++
>>   block/io.c                | 61 ++++++++++++++++++++++++++--
>>   3 files changed, 189 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/block/block_int.h b/include/block/block_int.h
>> index a8f9598102..fcb599dd1c 100644
>> --- a/include/block/block_int.h
>> +++ b/include/block/block_int.h
>> @@ -832,6 +832,22 @@ struct BdrvChild {
>>      QLIST_ENTRY(BdrvChild) next_parent;
>>  };
>> +/*
>> + * Allows bdrv_co_block_status() to cache one data region for a
>> + * protocol node.
>> + *
>> + * @valid: Whether the cache is valid (should be accessed with atomic
>> + * functions so this can be reset by RCU readers)
>> + * @data_start: Offset where we know (or strongly assume) is data
>> + * @data_end: Offset where the data region ends (which is not necessarily
>> + *            the start of a zeroed region)
>> + */
>> +typedef struct BdrvBlockStatusCache {
>> +    bool valid;
>> +    int64_t data_start;
>> +    int64_t data_end;
>> +} BdrvBlockStatusCache;
>> +
>> struct BlockDriverState {
>>      /* Protected by big QEMU lock or read-only after opening.  No special
>>       * locking needed during I/O...
>> @@ -997,6 +1013,11 @@ struct BlockDriverState {
>>      /* BdrvChild links to this node may never be frozen */
>>      bool never_freeze;
>> +
>> +    /* Lock for block-status cache RCU writers */
>> +    CoMutex bsc_modify_lock;
>> +    /* Always non-NULL, but must only be dereferenced under an RCU read guard */
>> +    BdrvBlockStatusCache *block_status_cache;
>>  };
>> struct BlockBackendRootState {
>> @@ -1422,4 +1443,30 @@ static inline BlockDriverState *bdrv_primary_bs(BlockDriverState *bs)
>> */
>> void bdrv_drain_all_end_quiesce(BlockDriverState *bs);
>> +/**
>> + * Check whether the given offset is in the cached block-status data
>> + * region.
>> + *
>> + * If it is, and @pnum is not NULL, *pnum is set to
>> + * `bsc.data_end - offset`, i.e. how many bytes, starting from
>> + * @offset, are data (according to the cache).
>> + * Otherwise, *pnum is not touched.
>> + */
>> +bool bdrv_bsc_is_data(BlockDriverState *bs, int64_t offset, int64_t *pnum);
>> +
>> +/**
>> + * If [offset, offset + bytes) overlaps with the currently cached
>> + * block-status region, invalidate the cache.
>> + *
>> + * (To be used by I/O paths that cause data regions to be zero or
>> + * holes.)
>> + */
>> +void bdrv_bsc_invalidate_range(BlockDriverState *bs,
>> +                               int64_t offset, int64_t bytes);
>> +
>> +/**
>> + * Mark the range [offset, offset + bytes) as a data region.
>> + */
>> +void bdrv_bsc_fill(BlockDriverState *bs, int64_t offset, int64_t bytes);
>> +
>> #endif /* BLOCK_INT_H */
>> diff --git a/block.c b/block.c
>> index 3f456892d0..9ab9459f7a 100644
>> --- a/block.c
>> +++ b/block.c
>> @@ -49,6 +49,8 @@
>> #include "qemu/timer.h"
>> #include "qemu/cutils.h"
>> #include "qemu/id.h"
>> +#include "qemu/range.h"
>> +#include "qemu/rcu.h"
>> #include "block/coroutines.h"
>> #ifdef CONFIG_BSD
>> @@ -398,6 +400,9 @@ BlockDriverState *bdrv_new(void)
>>      qemu_co_queue_init(&bs->flush_queue);
>> +    qemu_co_mutex_init(&bs->bsc_modify_lock);
>> +    bs->block_status_cache = g_new0(BdrvBlockStatusCache, 1);
>> +
>>      for (i = 0; i < bdrv_drain_all_count; i++) {
>>          bdrv_drained_begin(bs);
>>      }
>> @@ -4635,6 +4640,8 @@ static void bdrv_close(BlockDriverState *bs)
>>      bs->explicit_options = NULL;
>>      qobject_unref(bs->full_open_options);
>>      bs->full_open_options = NULL;
>> +    g_free(bs->block_status_cache);
>> +    bs->block_status_cache = NULL;
>>      bdrv_release_named_dirty_bitmaps(bs);
>>      assert(QLIST_EMPTY(&bs->dirty_bitmaps));
>> @@ -7590,3 +7597,80 @@ BlockDriverState *bdrv_backing_chain_next(BlockDriverState *bs)
>>  {
>>      return bdrv_skip_filters(bdrv_cow_bs(bdrv_skip_filters(bs)));
>>  }
>> +
>> +/**
>> + * Check whether [offset, offset + bytes) overlaps with the cached
>> + * block-status data region.
>> + *
>> + * If so, and @pnum is not NULL, set *pnum to `bsc.data_end - offset`,
>> + * which is what bdrv_bsc_is_data()'s interface needs.
>> + * Otherwise, *pnum is not touched.
>> + */
>> +static bool bdrv_bsc_range_overlaps_locked(BlockDriverState *bs,
>> +                                           int64_t offset, int64_t bytes,
>> +                                           int64_t *pnum)
>> +{
>> +    BdrvBlockStatusCache *bsc = bs->block_status_cache;
>
> Shouldn't this use qatomic_rcu_read()?
Oh, right, probably so.
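
For the record, I suppose the top of bdrv_bsc_range_overlaps_locked()
would then look something like this (just a sketch):

    /* Called with the RCU read lock held */
    BdrvBlockStatusCache *bsc = qatomic_rcu_read(&bs->block_status_cache);

i.e. the pointer load gets the read barrier it needs, and the RCU read
guard held by all callers then keeps *bsc alive even if a writer swaps
the pointer and frees the old cache after its grace period.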
>
>> +    bool overlaps;
>> +
>> +    overlaps =
>> +        qatomic_read(&bsc->valid) &&
>
> Hmm. Why do you need atomic access? I thought that after getting the
> RCU pointer, we are safe to read the fields.
>
> Ah, I see, you want to also set it in RCU-reader code...
>
> Isn't it better just to do a normal RCU write and set the pointer to
> NULL when the cache becomes invalid? I think keeping a heap-allocated
> structure with valid=false inside doesn't make much sense.
It does, because this way I don’t need an expensive RCU write.
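
To spell out the comparison (a sketch of the two invalidation
variants, using the names from this patch):

    /* Flag variant: any RCU reader may invalidate the cache with a
     * single atomic store; no allocation, no grace period */
    qatomic_set(&bs->block_status_cache->valid, false);

    /* NULL-pointer variant: a full RCU update, i.e. take
     * bsc_modify_lock, swap the pointer, wait out a grace period,
     * then free the old cache */
    old_bsc = bs->block_status_cache;
    qatomic_rcu_set(&bs->block_status_cache, NULL);
    synchronize_rcu();
    g_free(old_bsc);

(Plus, with the latter, every reader would have to check the pointer
for NULL before dereferencing it.)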
>> +        ranges_overlap(offset, bytes, bsc->data_start,
>> +                       bsc->data_end - bsc->data_start);
>> +
>> +    if (overlaps && pnum) {
>> +        *pnum = bsc->data_end - offset;
>> +    }
>> +
>> +    return overlaps;
>> +}
>> +
>> +/**
>> + * See block_int.h for this function's documentation.
>> + */
>> +bool bdrv_bsc_is_data(BlockDriverState *bs, int64_t offset, int64_t *pnum)
>> +{
>> +    bool overlaps;
>> +
>> +    WITH_RCU_READ_LOCK_GUARD() {
>> +        overlaps = bdrv_bsc_range_overlaps_locked(bs, offset, 1, pnum);
>> +    }
>> +
>> +    return overlaps;
>> +}
>
> This could be written more simply, I think:
>
> RCU_READ_LOCK_GUARD();
> return bdrv_bsc_range_overlaps_locked(..);
Hm, I’ll see whether it grows on me. I kind of like the explicit scope,
even if it’s longer.
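
(For reference, with your suggestion the whole function would collapse
to:

    bool bdrv_bsc_is_data(BlockDriverState *bs, int64_t offset, int64_t *pnum)
    {
        RCU_READ_LOCK_GUARD();
        return bdrv_bsc_range_overlaps_locked(bs, offset, 1, pnum);
    }

which is indeed shorter; the guard just silently covers the rest of
the function instead of marking the critical section.)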
>> +
>> +/**
>> + * See block_int.h for this function's documentation.
>> + */
>> +void bdrv_bsc_invalidate_range(BlockDriverState *bs,
>> +                               int64_t offset, int64_t bytes)
>> +{
>> +    WITH_RCU_READ_LOCK_GUARD() {
>> +        if (bdrv_bsc_range_overlaps_locked(bs, offset, bytes, NULL)) {
>> +            qatomic_set(&bs->block_status_cache->valid, false);
>> +        }
>> +    }
>> +}
>
> Same here: why not use RCU_READ_LOCK_GUARD()?
>
>> +
>> +/**
>> + * See block_int.h for this function's documentation.
>> + */
>> +void bdrv_bsc_fill(BlockDriverState *bs, int64_t offset, int64_t bytes)
>> +{
>> +    BdrvBlockStatusCache *new_bsc = g_new(BdrvBlockStatusCache, 1);
>> +    BdrvBlockStatusCache *old_bsc;
>> +
>> +    *new_bsc = (BdrvBlockStatusCache) {
>> +        .valid = true,
>> +        .data_start = offset,
>> +        .data_end = offset + bytes,
>> +    };
>> +
>> +    WITH_QEMU_LOCK_GUARD(&bs->bsc_modify_lock) {
>> +        old_bsc = bs->block_status_cache;
>> +        qatomic_rcu_set(&bs->block_status_cache, new_bsc);
>> +        synchronize_rcu();
>
> Interesting that until now, synchronize_rcu() has been used only in
> tests... (I tried to find examples of RCU writes in the code.)
Well, as far as I understood the docs, synchronize_rcu() can be used
as an alternative to call_rcu(). I didn't want to use call_rcu(),
because it requires embedding an rcu_head struct in the protected
object... Now that I look closer at the docs, they say "it is better"
to release all locks before synchronize_rcu(), including the BQL.
Perhaps I should give call_rcu() a try after all.
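
That is, roughly (untested sketch; call_rcu()/g_free_rcu() require an
rcu_head embedded in the protected struct):

    typedef struct BdrvBlockStatusCache {
        struct rcu_head rcu;

        bool valid;
        int64_t data_start;
        int64_t data_end;
    } BdrvBlockStatusCache;

    void bdrv_bsc_fill(BlockDriverState *bs, int64_t offset, int64_t bytes)
    {
        ...
        WITH_QEMU_LOCK_GUARD(&bs->bsc_modify_lock) {
            old_bsc = bs->block_status_cache;
            qatomic_rcu_set(&bs->block_status_cache, new_bsc);
        }

        /* Free the old cache once all current readers are done,
         * without blocking this coroutine for a grace period */
        g_free_rcu(old_bsc, rcu);
    }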
Max
>
>
>> +    }
>> +
>> +    g_free(old_bsc);
>> +}
>> diff --git a/block/io.c b/block/io.c
>> index 323854d063..85fa449bf9 100644
>> --- a/block/io.c
>> +++ b/block/io.c
>> @@ -1878,6 +1878,9 @@ static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,
>>          return -ENOTSUP;
>>      }
>> +    /* Invalidate the cached block-status data range if this write overlaps */
>> +    bdrv_bsc_invalidate_range(bs, offset, bytes);
>> +
>>      assert(alignment % bs->bl.request_alignment == 0);
>>      head = offset % alignment;
>>      tail = (offset + bytes) % alignment;
>> @@ -2442,9 +2445,58 @@ static int coroutine_fn bdrv_co_block_status(BlockDriverState *bs,
>>      aligned_bytes = ROUND_UP(offset + bytes, align) - aligned_offset;
>>      if (bs->drv->bdrv_co_block_status) {
>> -        ret = bs->drv->bdrv_co_block_status(bs, want_zero, aligned_offset,
>> -                                            aligned_bytes, pnum, &local_map,
>> -                                            &local_file);
>> +        bool from_cache = false;
>> +
>> +        /*
>> +         * Use the block-status cache only for protocol nodes: Format
>> +         * drivers are generally quick to inquire the status, but protocol
>> +         * drivers often need to get information from outside of qemu, so
>> +         * we do not have control over the actual implementation.  There
>> +         * have been cases where inquiring the status took an unreasonably
>> +         * long time, and we can do nothing in qemu to fix it.
>> +         * This is especially problematic for images with large data areas,
>> +         * because finding the few holes in them and giving them special
>> +         * treatment does not gain much performance.  Therefore, we try to
>> +         * cache the last-identified data region.
>> +         *
>> +         * Second, limiting ourselves to protocol nodes allows us to assume
>> +         * the block status for data regions to be DATA | OFFSET_VALID, and
>> +         * that the host offset is the same as the guest offset.
>> +         *
>> +         * Note that it is possible that external writers zero parts of
>> +         * the cached regions without the cache being invalidated, and so
>> +         * we may report zeroes as data.  This is not catastrophic,
>> +         * however, because reporting zeroes as data is fine.
>> +         */
>> +        if (QLIST_EMPTY(&bs->children)) {
>> +            if (bdrv_bsc_is_data(bs, aligned_offset, pnum)) {
>> +                ret = BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID;
>> +                local_file = bs;
>> +                local_map = aligned_offset;
>> +
>> +                from_cache = true;
>> +            }
>> +        }
>> +
>> +        if (!from_cache) {
>> +            ret = bs->drv->bdrv_co_block_status(bs, want_zero, aligned_offset,
>> +                                                aligned_bytes, pnum, &local_map,
>> +                                                &local_file);
>> +
>> +            /*
>> +             * Note that checking QLIST_EMPTY(&bs->children) is also done when
>> +             * the cache is queried above.  Technically, we do not need to check
>> +             * it here; the worst that can happen is that we fill the cache for
>> +             * non-protocol nodes, and then it is never used.  However, filling
>> +             * the cache requires an RCU update, so double check here to avoid
>> +             * such an update if possible.
>> +             */
>> +            if (ret == (BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID) &&
>> +                QLIST_EMPTY(&bs->children))
>> +            {
>> +                bdrv_bsc_fill(bs, aligned_offset, *pnum);
>> +            }
>> +        }
>>      } else {
>>          /* Default code for filters */
>> @@ -2997,6 +3049,9 @@ int coroutine_fn bdrv_co_pdiscard(BdrvChild *child, int64_t offset,
>>          return 0;
>>      }
>> +    /* Invalidate the cached block-status data range if this discard overlaps */
>> +    bdrv_bsc_invalidate_range(bs, offset, bytes);
>> +
>>      /* Discard is advisory, but some devices track and coalesce
>>       * unaligned requests, so we must pass everything down rather than
>>       * round here.  Still, most devices will just silently ignore
>>
>
>