From: Fam Zheng <famz@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,
Keith Busch <keith.busch@intel.com>,
qemu-block@nongnu.org, Kevin Wolf <kwolf@redhat.com>,
Max Reitz <mreitz@redhat.com>, Karl Rister <krister@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v3 2/6] block: Add VFIO based NVMe driver
Date: Wed, 12 Jul 2017 10:14:48 +0800
Message-ID: <20170712013231.GC7449@lemon>
In-Reply-To: <20170710145526.GG14195@stefanha-x1.localdomain>
On Mon, 07/10 15:55, Stefan Hajnoczi wrote:
> On Wed, Jul 05, 2017 at 09:36:31PM +0800, Fam Zheng wrote:
> > +static bool nvme_identify(BlockDriverState *bs, int namespace, Error **errp)
> > +{
> > +    BDRVNVMeState *s = bs->opaque;
> > +    uint8_t *resp;
> > +    int r;
> > +    uint64_t iova;
> > +    NvmeCmd cmd = {
> > +        .opcode = NVME_ADM_CMD_IDENTIFY,
> > +        .cdw10 = cpu_to_le32(0x1),
> > +    };
> > +
> > +    resp = qemu_try_blockalign0(bs, 4096);
>
> Is it possible to use struct NvmeIdCtrl to make this code clearer and
> eliminate the hardcoded sizes/offsets?
Yes, will do.
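Roughly like this (just a sketch; it assumes NvmeIdCtrl/NvmeIdNs definitions
with mdts, nn, vwc and nsze members, so the exact field names may still
change when the spec header is wired up):

    NvmeIdCtrl *idctrl = (NvmeIdCtrl *)resp;
    NvmeIdNs *idns = (NvmeIdNs *)resp;

    /* After the Identify Controller command: */
    if (le32_to_cpu(idctrl->nn) < namespace) {
        error_setg(errp, "Invalid namespace");
        goto fail;
    }
    s->write_cache = idctrl->vwc & 0x1;   /* single byte, no byte swap needed */
    s->max_transfer = (idctrl->mdts ? 1 << idctrl->mdts : 0) * s->page_size;

    /* After the Identify Namespace command: */
    s->nsze = le64_to_cpu(idns->nsze);

so the hardcoded 77/516/525 offsets and the raw pointer casts go away.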
>
> > +    if (!resp) {
> > +        error_setg(errp, "Cannot allocate buffer for identify response");
> > +        return false;
> > +    }
> > +    r = nvme_vfio_dma_map(s->vfio, resp, 4096, true, &iova);
> > +    if (r) {
> > +        error_setg(errp, "Cannot map buffer for DMA");
> > +        goto fail;
> > +    }
> > +    cmd.prp1 = cpu_to_le64(iova);
> > +
> > +    if (nvme_cmd_sync(bs, s->queues[0], &cmd)) {
> > +        error_setg(errp, "Failed to identify controller");
> > +        goto fail;
> > +    }
> > +
> > +    if (le32_to_cpu(*(uint32_t *)&resp[516]) < namespace) {
> > +        error_setg(errp, "Invalid namespace");
> > +        goto fail;
> > +    }
> > +    s->write_cache = le32_to_cpu(resp[525]) & 0x1;
> > +    s->max_transfer = (resp[77] ? 1 << resp[77] : 0) * s->page_size;
> > +    /* For now the page list buffer per command is one page, to hold at most
> > +     * s->page_size / sizeof(uint64_t) entries. */
> > +    s->max_transfer = MIN_NON_ZERO(s->max_transfer,
> > +                                   s->page_size / sizeof(uint64_t) * s->page_size);
> > +
> > +    memset((char *)resp, 0, 4096);
> > +
> > +    cmd.cdw10 = 0;
> > +    cmd.nsid = namespace;
> > +    if (nvme_cmd_sync(bs, s->queues[0], &cmd)) {
> > +        error_setg(errp, "Failed to identify namespace");
> > +        goto fail;
> > +    }
> > +
> > +    s->nsze = le64_to_cpu(*(uint64_t *)&resp[0]);
> > +
> > +    nvme_vfio_dma_unmap(s->vfio, resp);
> > +    qemu_vfree(resp);
> > +    return true;
> > +fail:
> > +    qemu_vfree(resp);
> > +    return false;
>
> nvme_vfio_dma_unmap() is not called in the error path.
Will fix.
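Probably by splitting the error label so that errors hit before the mapping
succeeds skip the unmap, with the later goto sites changed to fail_unmap
(sketch only, the final shape may differ):

        nvme_vfio_dma_unmap(s->vfio, resp);
        qemu_vfree(resp);
        return true;
    fail_unmap:
        /* reached only by gotos after nvme_vfio_dma_map() succeeded */
        nvme_vfio_dma_unmap(s->vfio, resp);
    fail:
        /* reached when allocation or the mapping itself failed */
        qemu_vfree(resp);
        return false;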
>
> > +static coroutine_fn int nvme_cmd_map_qiov(BlockDriverState *bs, NvmeCmd *cmd,
> > +                                          NVMeRequest *req, QEMUIOVector *qiov)
> > +{
> > +    BDRVNVMeState *s = bs->opaque;
> > +    uint64_t *pagelist = req->prp_list_page;
> > +    int i, j, r;
> > +    int entries = 0;
> > +
> > +    assert(qiov->size);
> > +    assert(QEMU_IS_ALIGNED(qiov->size, s->page_size));
> > +    assert(qiov->size / s->page_size <= s->page_size / sizeof(uint64_t));
> > +    for (i = 0; i < qiov->niov; ++i) {
> > +        bool retry = true;
> > +        uint64_t iova;
> > +        qemu_co_mutex_lock(&s->dma_map_lock);
> > +try_map:
> > +        r = nvme_vfio_dma_map(s->vfio,
> > +                              qiov->iov[i].iov_base,
> > +                              qiov->iov[i].iov_len,
> > +                              true, &iova);
> > +        if (r == -ENOMEM && retry) {
> > +            retry = false;
> > +            trace_nvme_dma_flush_queue_wait(s);
> > +            if (s->inflight) {
> > +                trace_nvme_dma_map_flush(s);
> > +                qemu_co_queue_wait(&s->dma_flush_queue, &s->dma_map_lock);
> > +            } else {
> > +                r = nvme_vfio_dma_reset_temporary(s->vfio);
> > +                if (r) {
> > +                    return r;
>
> dma_map_lock is held here!
Oops, will fix.
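The minimal fix is to drop the lock before bailing out (sketch):

            } else {
                r = nvme_vfio_dma_reset_temporary(s->vfio);
                if (r) {
                    qemu_co_mutex_unlock(&s->dma_map_lock);
                    return r;
                }
            }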
>
> > +static int nvme_co_prw(BlockDriverState *bs, uint64_t offset, uint64_t bytes,
> > +                       QEMUIOVector *qiov, bool is_write, int flags)
> > +{
> > +    BDRVNVMeState *s = bs->opaque;
> > +    int r;
> > +    uint8_t *buf = NULL;
> > +    QEMUIOVector local_qiov;
> > +
> > +    assert(QEMU_IS_ALIGNED(offset, s->page_size));
> > +    assert(QEMU_IS_ALIGNED(bytes, s->page_size));
> > +    assert(bytes <= s->max_transfer);
>
> Who guarantees max_transfer? I think request alignment is enforced by
> block/io.c but there is no generic max_transfer handling code, so this
> assertion can be triggered by the guest. Please handle it as a genuine
> request error instead of using an assertion.
Generic max_transfer handling has been in place since commit
04ed95f4843281e292d93018d56d4b14705f9f2c; see the code around max_transfer
in block/io.c:bdrv_aligned_*.
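In other words, bdrv_aligned_preadv()/bdrv_aligned_pwritev() already split a
guest request into bs->bl.max_transfer sized chunks before calling into the
driver, so the assertion cannot be triggered by the guest. Heavily
simplified (paraphrased, not the literal block/io.c code):

    max_transfer = QEMU_ALIGN_DOWN(MIN_NON_ZERO(bs->bl.max_transfer, INT_MAX),
                                   align);
    bytes_remaining = bytes;
    while (bytes_remaining) {
        int num = MIN(bytes_remaining, max_transfer);
        QEMUIOVector local_qiov;

        /* Carve the next chunk out of the caller's qiov and issue it. */
        qemu_iovec_init(&local_qiov, qiov->niov);
        qemu_iovec_concat(&local_qiov, qiov, bytes - bytes_remaining, num);
        ret = bdrv_driver_preadv(bs, offset + bytes - bytes_remaining, num,
                                 &local_qiov, 0);
        qemu_iovec_destroy(&local_qiov);
        if (ret < 0) {
            break;
        }
        bytes_remaining -= num;
    }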
>
> > +static int nvme_reopen_prepare(BDRVReopenState *reopen_state,
> > +                               BlockReopenQueue *queue, Error **errp)
> > +{
> > +    return 0;
> > +}
>
> What is the purpose of this dummy .bdrv_reopen_prepare() implementation?
This is necessary for block jobs to work; other drivers provide dummy
implementations as well.
Fam