From: Liang Li <liliangleo@didiglobal.com>
To: qemu-block@nongnu.org, qemu-devel@nongnu.org
Cc: John Snow <jsnow@redhat.com>, Kevin Wolf <kwolf@redhat.com>,
Max Reitz <mreitz@redhat.com>,
Wen Congyang <wencongyang2@huawei.com>,
Xie Changlong <xiechanglong.d@gmail.com>,
Markus Armbruster <armbru@redhat.com>,
Eric Blake <eblake@redhat.com>, Fam Zheng <fam@euphon.net>
Subject: [Qemu-devel] [PATCH 0/2] buffer and delay backup COW write operation
Date: Sun, 28 Apr 2019 18:01:17 +0800
Message-ID: <20190428100052.GA63525@localhost>
If the backup target is a slow device like Ceph RBD, the backup
process can seriously hurt guest block write I/O performance. This is
caused by a drawback of the COW mechanism: when the guest overwrites
an area that has not yet been backed up, the guest write can only
complete after the old data has been written to the backup target.
The impact can be relieved by buffering the data read from the backup
source and writing it to the backup target later, so the guest block
write I/O can be serviced promptly. Areas that are never overwritten
are processed as before, without buffering, so in most cases a very
large buffer is not needed.
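
To make the idea concrete, here is a minimal, standalone sketch of the
buffer-and-delay approach. It is not the code from this series; all type,
function and constant names (CowBuffer, cow_before_guest_write,
read_from_source, write_to_target, CLUSTER_SIZE, ...) are illustrative
stand-ins for the real QEMU block-layer machinery, and the I/O helpers are
stubbed out so the sketch compiles on its own.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define CLUSTER_SIZE (64 * 1024)

typedef struct CowChunk {
    int64_t offset;                /* guest offset of the saved data     */
    uint8_t *data;                 /* copy of the data being overwritten */
    struct CowChunk *next;
} CowChunk;

typedef struct CowBuffer {
    CowChunk *head, *tail;         /* FIFO of chunks awaiting write-out  */
    size_t used;                   /* bytes currently buffered           */
    size_t limit;                  /* cap, e.g. 1 GB                     */
} CowBuffer;

/* Stand-ins for real block-layer I/O on the source and the slow target. */
static int read_from_source(int64_t offset, uint8_t *buf, size_t len)
{
    (void)offset; memset(buf, 0, len); return 0;
}
static int write_to_target(int64_t offset, const uint8_t *buf, size_t len)
{
    (void)offset; (void)buf; (void)len; return 0;
}

/*
 * Called before a guest write to a not-yet-backed-up cluster.  Instead of
 * writing the old data to the backup target synchronously (the slow COW
 * path), keep a copy in memory and let the guest write proceed at once.
 */
static int cow_before_guest_write(CowBuffer *cb, int64_t offset)
{
    uint8_t *old = malloc(CLUSTER_SIZE);
    CowChunk *c = NULL;

    if (!old || read_from_source(offset, old, CLUSTER_SIZE) < 0) {
        free(old);
        return -1;
    }
    if (cb->used + CLUSTER_SIZE <= cb->limit) {
        c = malloc(sizeof(*c));
    }
    if (!c) {
        /* Buffer full (or allocation failed): fall back to the old COW path. */
        int ret = write_to_target(offset, old, CLUSTER_SIZE);
        free(old);
        return ret;
    }
    c->offset = offset;
    c->data = old;
    c->next = NULL;
    if (cb->tail) {
        cb->tail->next = c;
    } else {
        cb->head = c;
    }
    cb->tail = c;
    cb->used += CLUSTER_SIZE;
    return 0;                      /* guest write completes without waiting */
}

/* Deferred write-out, run by the backup job when the target can keep up. */
static void cow_buffer_drain(CowBuffer *cb)
{
    while (cb->head) {
        CowChunk *c = cb->head;
        write_to_target(c->offset, c->data, CLUSTER_SIZE);
        cb->head = c->next;
        if (!cb->head) {
            cb->tail = NULL;
        }
        cb->used -= CLUSTER_SIZE;
        free(c->data);
        free(c);
    }
}

With this scheme the guest-visible cost of a COW is a memory copy rather
than a synchronous write to the slow target, which is what the latency
drop from ~5702 us to ~918 us in the results below reflects.
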
An fio test was run in the guest while the backup was in progress; the
results show an obvious performance improvement from buffering.
Test result (1 GB buffer):
========================
fio settings:
[random-writers]
ioengine=libaio
iodepth=8
rw=randwrite
bs=32k
direct=1
size=1G
numjobs=1
result:
                       IOPS   avg latency
  no backup:          19389       410 us
  backup:              1402      5702 us
  backup w/ buffer:    8684       918 us
==============================================
Cc: John Snow <jsnow@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Wen Congyang <wencongyang2@huawei.com>
Cc: Xie Changlong <xiechanglong.d@gmail.com>
Cc: Markus Armbruster <armbru@redhat.com>
Cc: Eric Blake <eblake@redhat.com>
Cc: Fam Zheng <fam@euphon.net>
Liang Li (2):
backup: buffer COW request and delay the write operation
qapi: add interface for setting backup cow buffer size
block/backup.c | 118 +++++++++++++++++++++++++++++++++++++++++-----
block/replication.c | 2 +-
blockdev.c | 5 ++
include/block/block_int.h | 2 +
qapi/block-core.json | 5 ++
5 files changed, 118 insertions(+), 14 deletions(-)
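
With patch 2 applied, the buffer limit would be configurable when starting
the backup job via QMP. The example below is illustrative only:
'drive-backup' is the existing QMP command, but the exact name of the new
buffer-size property is defined by patch 2 and is not shown in this cover
letter, so "cow-buffer-size" here is just a placeholder.

{ "execute": "drive-backup",
  "arguments": { "device": "drive0",
                 "target": "rbd:backup-pool/vm1-backup",
                 "sync": "full",
                 "cow-buffer-size": 1073741824 } }
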
--
2.14.1