From: Fam Zheng <famz@redhat.com>
To: Li Feng <lifeng1519@gmail.com>
Cc: fengli@smartx.com, Kevin Wolf <kwolf@redhat.com>,
Max Reitz <mreitz@redhat.com>,
"open list:NVMe Block Driver" <qemu-block@nongnu.org>,
"open list:All patches CC here" <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] [PATCH] block/nvme: optimize the performance of nvme driver based on vfio-pci
Date: Fri, 2 Nov 2018 14:47:47 +0800 [thread overview]
Message-ID: <20181102064747.GA21032@magic> (raw)
In-Reply-To: <20181101103807.25862-1-lifeng1519@gmail.com>
On Thu, 11/01 18:38, Li Feng wrote:
> When the IO size is larger than 2 pages, we shift every entry in the
> pagelist forward by one, which is inefficient.
>
> This is a simple benchmark result:
>
> Before:
> $ qemu-io -c 'write 0 1G' nvme://0000:00:04.0/1
>
> wrote 1073741824/1073741824 bytes at offset 0
> 1 GiB, 1 ops; 0:00:02.41 (424.504 MiB/sec and 0.4146 ops/sec)
>
> $ qemu-io -c 'read 0 1G' nvme://0000:00:04.0/1
>
> read 1073741824/1073741824 bytes at offset 0
> 1 GiB, 1 ops; 0:00:02.03 (503.055 MiB/sec and 0.4913 ops/sec)
>
> After:
> $ qemu-io -c 'write 0 1G' nvme://0000:00:04.0/1
>
> wrote 1073741824/1073741824 bytes at offset 0
> 1 GiB, 1 ops; 0:00:02.17 (471.517 MiB/sec and 0.4605 ops/sec)
>
> $ qemu-io -c 'read 0 1G' nvme://0000:00:04.0/1
>
> read 1073741824/1073741824 bytes at offset 0
> 1 GiB, 1 ops; 0:00:01.94 (526.770 MiB/sec and 0.5144 ops/sec)
>
> Signed-off-by: Li Feng <lifeng1519@gmail.com>
> ---
> block/nvme.c | 16 ++++++----------
> 1 file changed, 6 insertions(+), 10 deletions(-)
>
> diff --git a/block/nvme.c b/block/nvme.c
> index 29294038fc..982097b5b1 100644
> --- a/block/nvme.c
> +++ b/block/nvme.c
> @@ -837,7 +837,7 @@ try_map:
> }
>
> for (j = 0; j < qiov->iov[i].iov_len / s->page_size; j++) {
> - pagelist[entries++] = iova + j * s->page_size;
> + pagelist[entries++] = cpu_to_le64(iova + j * s->page_size);
> }
> trace_nvme_cmd_map_qiov_iov(s, i, qiov->iov[i].iov_base,
> qiov->iov[i].iov_len / s->page_size);
> @@ -850,20 +850,16 @@ try_map:
> case 0:
> abort();
> case 1:
> - cmd->prp1 = cpu_to_le64(pagelist[0]);
> + cmd->prp1 = pagelist[0];
> cmd->prp2 = 0;
> break;
> case 2:
> - cmd->prp1 = cpu_to_le64(pagelist[0]);
> - cmd->prp2 = cpu_to_le64(pagelist[1]);;
> + cmd->prp1 = pagelist[0];
> + cmd->prp2 = pagelist[1];
> break;
> default:
> - cmd->prp1 = cpu_to_le64(pagelist[0]);
> - cmd->prp2 = cpu_to_le64(req->prp_list_iova);
> - for (i = 0; i < entries - 1; ++i) {
> - pagelist[i] = cpu_to_le64(pagelist[i + 1]);
> - }
> - pagelist[entries - 1] = 0;
> + cmd->prp1 = pagelist[0];
> + cmd->prp2 = cpu_to_le64(req->prp_list_iova + sizeof(uint64_t));
> break;
> }
> trace_nvme_cmd_map_qiov(s, cmd, req, qiov, entries);
> --
> 2.11.0
>
Nice! Thanks. I've queued the patch.
Fam
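
For readers skimming the diff, the layout trick can be sketched in
isolation. This is a minimal standalone illustration, not the QEMU
code: NvmeCmdSketch, setup_prps_shift() and setup_prps_offset() are
invented names, and cpu_to_le64() is stubbed assuming a little-endian
host. The point is that pagelist[] holds one IOVA per page, entry 0
doubles as PRP1, and entries 1..n-1 already form a valid PRP list, so
PRP2 can point one uint64_t past the start of the DMA'd buffer instead
of shifting every entry left by one.

/* Sketch: avoiding the O(n) shift when building an NVMe PRP list. */
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

#define cpu_to_le64(x) (x)  /* identity stub: little-endian host assumed */

typedef struct {
    uint64_t prp1;
    uint64_t prp2;
} NvmeCmdSketch;

/* Old approach: convert late, then shift the whole list left by one. */
static void setup_prps_shift(NvmeCmdSketch *cmd, uint64_t *pagelist,
                             int entries, uint64_t prp_list_iova)
{
    cmd->prp1 = cpu_to_le64(pagelist[0]);
    cmd->prp2 = cpu_to_le64(prp_list_iova);
    for (int i = 0; i < entries - 1; i++) {   /* O(n) stores per request */
        pagelist[i] = cpu_to_le64(pagelist[i + 1]);
    }
    pagelist[entries - 1] = 0;
}

/* New approach: entries are stored little-endian as the list is built,
 * so pagelist[1..n-1] is already a valid PRP list; just offset the IOVA. */
static void setup_prps_offset(NvmeCmdSketch *cmd, const uint64_t *pagelist,
                              uint64_t prp_list_iova)
{
    cmd->prp1 = pagelist[0];  /* already little-endian */
    cmd->prp2 = cpu_to_le64(prp_list_iova + sizeof(uint64_t));
}

int main(void)
{
    uint64_t pagelist[4] = { 0x1000, 0x2000, 0x3000, 0x4000 };
    uint64_t prp_list_iova = 0x9000;  /* pretend pagelist[] is mapped here */
    NvmeCmdSketch cmd;

    setup_prps_offset(&cmd, pagelist, prp_list_iova);
    printf("new: prp1=0x%" PRIx64 " prp2=0x%" PRIx64 "\n", cmd.prp1, cmd.prp2);

    setup_prps_shift(&cmd, pagelist, 4, prp_list_iova);
    printf("old: prp1=0x%" PRIx64 " prp2=0x%" PRIx64 "\n", cmd.prp1, cmd.prp2);
    return 0;
}

The saving is exactly that shift loop: for the 1 GiB requests in the
benchmark above, assuming 4 KiB pages, the old code performed roughly
260k extra stores per I/O.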