From: Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
To: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>,
	"qemu-block@nongnu.org" <qemu-block@nongnu.org>
Cc: "kwolf@redhat.com" <kwolf@redhat.com>,
	Denis Lunev <den@virtuozzo.com>,
	"jsnow@redhat.com" <jsnow@redhat.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"mreitz@redhat.com" <mreitz@redhat.com>
Subject: Re: [PATCH v2 6/7] block/block-copy: reduce intersecting request lock
Date: Thu, 30 Jan 2020 19:24:19 +0300
Message-ID: <d451b4b9-27b9-bc43-afb9-eecc682c5586@virtuozzo.com>
In-Reply-To: <61f94246-4f74-c9ba-05a5-0da1c88be541@virtuozzo.com>



On 30/01/2020 16:45, Vladimir Sementsov-Ogievskiy wrote:
> 29.01.2020 23:05, Andrey Shinkevich wrote:
>>
>>
>> On 27/11/2019 21:08, Vladimir Sementsov-Ogievskiy wrote:
>>> Currently, the block_copy operation locks the whole requested region. But
>>> there is no reason to lock clusters that are already copied; that only
>>> disturbs other parallel block_copy requests.
>>>
>>> Let's instead do the following:
>>>
>>> Lock only the sub-region we are going to operate on. Then, after
>>> copying all dirty sub-regions, we should wait for intersecting block-copy
>>> requests; if they failed, we should retry these newly dirtied clusters.
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>> ---
>>>    block/block-copy.c | 116 +++++++++++++++++++++++++++++++++++++--------
>>>    1 file changed, 95 insertions(+), 21 deletions(-)
>>>
>>> diff --git a/block/block-copy.c b/block/block-copy.c
>>> index 20068cd699..aca44b13fb 100644
>>> --- a/block/block-copy.c
>>> +++ b/block/block-copy.c
>>> @@ -39,29 +39,62 @@ static BlockCopyInFlightReq *block_copy_find_inflight_req(BlockCopyState *s,
>>>        return NULL;
>>>    }
>>> -static void coroutine_fn block_copy_wait_inflight_reqs(BlockCopyState *s,
>>> -                                                       int64_t offset,
>>> -                                                       int64_t bytes)
>>> +/*
>>> + * If there are no intersecting requests return false. Otherwise, wait for the
>>> + * first found intersecting request to finish and return true.
>>> + */
>>> +static bool coroutine_fn block_copy_wait_one(BlockCopyState *s, int64_t start,
>>> +                                             int64_t end)
>>>    {
>>> -    BlockCopyInFlightReq *req;
>>> +    BlockCopyInFlightReq *req = block_copy_find_inflight_req(s, start, end);
>>> -    while ((req = block_copy_find_inflight_req(s, offset, bytes))) {
>>> -        qemu_co_queue_wait(&req->wait_queue, NULL);
>>> +    if (!req) {
>>> +        return false;
>>>        }
>>> +
>>> +    qemu_co_queue_wait(&req->wait_queue, NULL);
>>> +
>>> +    return true;
>>>    }
>>> +/* Called only on full-dirty region */
>>>    static void block_copy_inflight_req_begin(BlockCopyState *s,
>>>                                              BlockCopyInFlightReq *req,
>>>                                              int64_t offset, int64_t bytes)
>>>    {
>>> +    assert(!block_copy_find_inflight_req(s, offset, bytes));
>>> +
>>> +    bdrv_reset_dirty_bitmap(s->copy_bitmap, offset, bytes);
>>> +
>>>        req->offset = offset;
>>>        req->bytes = bytes;
>>>        qemu_co_queue_init(&req->wait_queue);
>>>        QLIST_INSERT_HEAD(&s->inflight_reqs, req, list);
>>>    }
>>> -static void coroutine_fn block_copy_inflight_req_end(BlockCopyInFlightReq *req)
>>> +static void coroutine_fn block_copy_inflight_req_shrink(BlockCopyState *s,
>>> +        BlockCopyInFlightReq *req, int64_t new_bytes)
>>>    {
>>> +    if (new_bytes == req->bytes) {
>>> +        return;
>>> +    }
>>> +
>>> +    assert(new_bytes > 0 && new_bytes < req->bytes);
>>> +
>>> +    bdrv_set_dirty_bitmap(s->copy_bitmap,
>>> +                          req->offset + new_bytes, req->bytes - new_bytes);
>>> +
>>> +    req->bytes = new_bytes;
>>> +    qemu_co_queue_restart_all(&req->wait_queue);
>>
>> Won't we get performance degradation from frequent calls to that function?
> 
> Why do you think so? In I/O, most of the performance depends on disk speed
> and on how we organize the request sequence. The whole original series
> shows a performance improvement.
> 
> This patch reduces the lock around the request, locking only the part we
> are working on now; this is for better interactivity. After calling
> block-status, the request is shrunk to possibly unlock other requests
> waiting on the tail of our request. Do you have a better suggestion for
> this synchronization?
> 

I cannot answer right away. One needs to measure the performance in each
case.
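
For reference, here is how I read the pattern Vladimir describes, as a
simplified, self-contained pthread sketch (made-up names, plain mutex and
condition variable instead of the actual coroutine CoQueue; this is not the
QEMU code): after block-status the locked region is shrunk and the waiters
are woken, so a request that only intersected the released tail can proceed.

    /* Illustrative only: hypothetical names, not the actual QEMU code. */
    #include <assert.h>
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct Req {
        int64_t offset;
        int64_t bytes;              /* length of the locked region */
        pthread_mutex_t lock;
        pthread_cond_t wait_queue;  /* stands in for the request's CoQueue */
    } Req;

    /* Does [offset, offset + bytes) intersect the region locked by @req? */
    static bool req_intersects(Req *req, int64_t offset, int64_t bytes)
    {
        return offset < req->offset + req->bytes &&
               req->offset < offset + bytes;
    }

    /*
     * After block-status we know only the first @new_bytes will be handled:
     * shrink the locked region and wake the waiters, so that requests which
     * intersected only the released tail can go on immediately.
     */
    static void req_shrink(Req *req, int64_t new_bytes)
    {
        pthread_mutex_lock(&req->lock);
        if (new_bytes != req->bytes) {
            assert(new_bytes > 0 && new_bytes < req->bytes);
            req->bytes = new_bytes;
            pthread_cond_broadcast(&req->wait_queue);
        }
        pthread_mutex_unlock(&req->lock);
    }

    int main(void)
    {
        Req req = {
            .offset = 0, .bytes = 1024 * 1024,
            .lock = PTHREAD_MUTEX_INITIALIZER,
            .wait_queue = PTHREAD_COND_INITIALIZER,
        };

        /* A request for [512K, 1M) intersects the locked region now... */
        printf("before shrink: %d\n",
               req_intersects(&req, 512 * 1024, 512 * 1024));
        /* ...but not after the lock is shrunk to the first 256K. */
        req_shrink(&req, 256 * 1024);
        printf("after shrink:  %d\n",
               req_intersects(&req, 512 * 1024, 512 * 1024));
        return 0;
    }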

Andrey
>>
>>> +}
>>> +
>>> +static void coroutine_fn block_copy_inflight_req_end(BlockCopyState *s,
>>> +                                                     BlockCopyInFlightReq *req,
>>> +                                                     int ret)
>>> +{
>>> +    if (ret < 0) {
>>> +        bdrv_set_dirty_bitmap(s->copy_bitmap, req->offset, req->bytes);
>>> +    }
>>>        QLIST_REMOVE(req, list);
>>>        qemu_co_queue_restart_all(&req->wait_queue);
>>>    }
>>> @@ -344,12 +377,19 @@ int64_t block_copy_reset_unallocated(BlockCopyState *s,
>>>        return ret;
>>>    }
>>> -int coroutine_fn block_copy(BlockCopyState *s,
>>> -                            int64_t offset, uint64_t bytes,
>>> -                            bool *error_is_read)
>>> +/*
>>> + * block_copy_dirty_clusters
>>> + *
>>> + * Copy dirty clusters in @start/@bytes range.
>>
>> %s/start/offset/ ?
>>
>>> + * Returns 1 if dirty clusters found and successfully copied, 0 if no dirty
>>> + * clusters found and -errno on failure.
>>> + */
>>> +static int coroutine_fn block_copy_dirty_clusters(BlockCopyState *s,
>>> +                                                  int64_t offset, int64_t bytes,
>>> +                                                  bool *error_is_read)
>>>    {
>>>        int ret = 0;
>>> -    BlockCopyInFlightReq req;
>>> +    bool found_dirty = false;
>>>        /*
>>>         * block_copy() user is responsible for keeping source and target in same
>>> @@ -361,10 +401,8 @@ int coroutine_fn block_copy(BlockCopyState *s,
>>>        assert(QEMU_IS_ALIGNED(offset, s->cluster_size));
>>>        assert(QEMU_IS_ALIGNED(bytes, s->cluster_size));
>>> -    block_copy_wait_inflight_reqs(s, offset, bytes);
>>> -    block_copy_inflight_req_begin(s, &req, offset, bytes);
>>> -
>>>        while (bytes) {
>>> +        BlockCopyInFlightReq req;
>>>            int64_t next_zero, cur_bytes, status_bytes;
>>>            if (!bdrv_dirty_bitmap_get(s->copy_bitmap, offset)) {
>>> @@ -374,6 +412,8 @@ int coroutine_fn block_copy(BlockCopyState *s,
>>>                continue; /* already copied */
>>>            }
>>> +        found_dirty = true;
>>> +
>>>            cur_bytes = MIN(bytes, s->copy_size);
>>>            next_zero = bdrv_dirty_bitmap_next_zero(s->copy_bitmap, offset,
>>> @@ -383,10 +423,12 @@ int coroutine_fn block_copy(BlockCopyState *s,
>>>                assert(next_zero < offset + cur_bytes); /* no need to do MIN() */
>>>                cur_bytes = next_zero - offset;
>>>            }
>>> +        block_copy_inflight_req_begin(s, &req, offset, cur_bytes);
>>>            ret = block_copy_block_status(s, offset, cur_bytes, &status_bytes);
>>> +        block_copy_inflight_req_shrink(s, &req, status_bytes);
>>>            if (s->skip_unallocated && !(ret & BDRV_BLOCK_ALLOCATED)) {
>>> -            bdrv_reset_dirty_bitmap(s->copy_bitmap, offset, status_bytes);
>>> +            block_copy_inflight_req_end(s, &req, 0);
>>>                s->progress_reset_callback(s->progress_opaque);
>>>                trace_block_copy_skip_range(s, offset, status_bytes);
>>>                offset += status_bytes;
>>> @@ -398,15 +440,13 @@ int coroutine_fn block_copy(BlockCopyState *s,
>>>            trace_block_copy_process(s, offset);
>>> -        bdrv_reset_dirty_bitmap(s->copy_bitmap, offset, cur_bytes);
>>> -
>>>            co_get_from_shres(s->mem, cur_bytes);
>>>            ret = block_copy_do_copy(s, offset, cur_bytes, ret & BDRV_BLOCK_ZERO,
>>>                                     error_is_read);
>>>            co_put_to_shres(s->mem, cur_bytes);
>>> +        block_copy_inflight_req_end(s, &req, ret);
>>>            if (ret < 0) {
>>> -            bdrv_set_dirty_bitmap(s->copy_bitmap, offset, cur_bytes);
>>> -            break;
>>> +            return ret;
>>>            }
>>>            s->progress_bytes_callback(cur_bytes, s->progress_opaque);
>>> @@ -414,7 +454,41 @@ int coroutine_fn block_copy(BlockCopyState *s,
>>>            bytes -= cur_bytes;
>>>        }
>>> -    block_copy_inflight_req_end(&req);
>>> +    return found_dirty;
>>> +}
>>> -    return ret;
>>> +int coroutine_fn block_copy(BlockCopyState *s, int64_t start, uint64_t bytes,
>>> +                            bool *error_is_read)
>>> +{
>>> +    while (true) {
>>> +        int ret = block_copy_dirty_clusters(s, start, bytes, error_is_read);
>>> +
>>> +        if (ret < 0) {
>>> +            /*
>>> +             * IO operation failed, which means the whole block_copy request
>>> +             * failed.
>>> +             */
>>> +            return ret;
>>> +        }
>>> +        if (ret) {
>>> +            /*
>>> +             * Something was copied, which means that there were yield points
>>> +             * and some new dirty bits may appered (due to failed parallel
>>> +             * block-copy requests).
>>> +             */
>>> +            continue;
>>> +        }
>>> +
>>> +        /*
>>> +         * Here ret == 0, which means that there is no dirty clusters in
>>
>> there is no dirty cluster in
>>
>>> +         * requested region.
>>> +         */
>>> +
>>> +        if (!block_copy_wait_one(s, start, bytes)) {
>>> +            /* No dirty bits and nothing to wait: the whole request is done */
>>> +            break;
>>> +        }
>>> +    }
>>> +
>>> +    return 0;
>>>    }
>>>
>>
> 
> 
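
To summarize the control flow for myself: after this patch, block_copy() is
roughly the loop below. This is a condensed, stubbed restatement of the diff
above (hypothetical helper names, toy stubs), not the actual implementation.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy stub: report dirty clusters on the first pass only. */
    static int copy_dirty_clusters(int64_t start, int64_t bytes)
    {
        static bool first = true;
        (void)start; (void)bytes;
        if (first) {
            first = false;
            return 1;       /* found and copied dirty clusters */
        }
        return 0;           /* nothing dirty left */
    }

    /* Toy stub: pretend there are no intersecting in-flight requests. */
    static bool wait_one_intersecting(int64_t start, int64_t bytes)
    {
        (void)start; (void)bytes;
        return false;
    }

    static int block_copy_sketch(int64_t start, int64_t bytes)
    {
        for (;;) {
            int ret = copy_dirty_clusters(start, bytes);
            if (ret < 0) {
                return ret;     /* I/O error: the whole request fails */
            }
            if (ret > 0) {
                continue;       /* we yielded; new dirty bits may have appeared */
            }
            /* ret == 0: no dirty clusters left in the requested range */
            if (!wait_one_intersecting(start, bytes)) {
                return 0;       /* nothing to wait for: the request is complete */
            }
            /* an intersecting request finished (and may have failed): re-check */
        }
    }

    int main(void)
    {
        printf("block_copy_sketch returned %d\n", block_copy_sketch(0, 1 << 20));
        return 0;
    }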

-- 
With the best regards,
Andrey Shinkevich


