From: Ross Zwisler <ross.zwisler@linux.intel.com>
To: Dave Jiang <dave.jiang@intel.com>
Cc: vinod.koul@intel.com, dmaengine@vger.kernel.org,
linux-nvdimm@lists.01.org
Subject: Re: [PATCH v2 5/5] libnvdimm: add DMA support for pmem blk-mq
Date: Thu, 3 Aug 2017 14:20:40 -0600 [thread overview]
Message-ID: <20170803202039.GB18341@linux.intel.com> (raw)
In-Reply-To: <150169928551.59677.14690799553760064519.stgit@djiang5-desk3.ch.intel.com>
On Wed, Aug 02, 2017 at 11:41:25AM -0700, Dave Jiang wrote:
> Adding DMA support for pmem blk reads. This provides a significant CPU
> utilization reduction for large memory reads while maintaining good
> performance. DMAs are triggered based on a test against
> bio_multiple_segment(), so small I/Os (4k or less?) are still performed
> by the CPU in order to reduce latency. By default the pmem driver will
> use blk-mq with DMA.
>
> Numbers below are measured against pmem simulated via DRAM using
> memmap=NN!SS. DMA engine used is the ioatdma on Intel Skylake Xeon
> platform. Keep in mind the performance for actual persistent memory
> will differ.
> Fio 2.21 was used.
>
> 64k: 1 task queuedepth=1
> CPU Read: 7631 MB/s 99.7% CPU DMA Read: 2415 MB/s 54% CPU
> CPU Write: 3552 MB/s 100% CPU DMA Write 2173 MB/s 54% CPU
>
> 64k: 16 tasks queuedepth=16
> CPU Read: 36800 MB/s 1593% CPU DMA Read: 29100 MB/s 607% CPU
> CPU Write 20900 MB/s 1589% CPU DMA Write: 23400 MB/s 585% CPU
>
> 2M: 1 task queuedepth=1
> CPU Read: 6013 MB/s 99.3% CPU DMA Read: 7986 MB/s 59.3% CPU
> CPU Write: 3579 MB/s 100% CPU DMA Write: 5211 MB/s 58.3% CPU
>
> 2M: 16 tasks queuedepth=16
> CPU Read: 18100 MB/s 1588% CPU DMA Read: 21300 MB/s 180.9% CPU
> CPU Write: 14100 MB/s 1594% CPU DMA Write: 20400 MB/s 446.9% CPU
>
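The exact fio invocation isn't shown in the post; for reference, the 64k/16-task run above corresponds to a job file along these lines (the device path /dev/pmem0, the size, and the libaio engine are assumptions, not taken from the patch):

```ini
; hypothetical fio 2.21 job approximating the "64k: 16 tasks queuedepth=16" read case
[pmem-read]
ioengine=libaio
direct=1
rw=read
bs=64k
numjobs=16
iodepth=16
filename=/dev/pmem0
size=1g
```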
> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
<>
> +static int pmem_handle_cmd_dma(struct pmem_cmd *cmd, bool is_write)
> +{
...
> +err_set_unmap:
> + dmaengine_unmap_put(unmap);
> +err_unmap_buffer:
> + dma_unmap_page(dev, dma_addr, len, dir);
> +err_unmap_sg:
> + if (dir == DMA_TO_DEVICE)
> + dir = DMA_FROM_DEVICE;
> + else
> + dir = DMA_TO_DEVICE;
> + dma_unmap_sg(dev, cmd->sg, cmd->sg_nents, dir);
> + dmaengine_unmap_put(unmap);
> +err:
> + blk_mq_end_request(cmd->rq, -ENXIO);
Should this be:
blk_mq_end_request(cmd->rq, rc);
?
Otherwise:
Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> + return rc;
> +}
> +
_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm
2017-08-02 18:40 [PATCH v2 0/5] Adding blk-mq and DMA support to pmem block driver Dave Jiang
2017-08-02 18:41 ` [PATCH v2 1/5] dmaengine: ioatdma: revert 7618d035 to allow sharing of DMA channels Dave Jiang
2017-08-02 18:41 ` [PATCH v2 2/5] dmaengine: ioatdma: dma_prep_memcpy_sg support Dave Jiang
2017-08-02 18:41 ` [PATCH v2 3/5] dmaengine: add SG support to dmaengine_unmap Dave Jiang
2017-08-02 18:41 ` [PATCH v2 4/5] libnvdimm: Adding blk-mq support to the pmem driver Dave Jiang
2017-08-03 20:04 ` Ross Zwisler
2017-08-02 18:41 ` [PATCH v2 5/5] libnvdimm: add DMA support for pmem blk-mq Dave Jiang
2017-08-02 19:22 ` Sinan Kaya
2017-08-02 20:52 ` Dave Jiang
2017-08-02 21:10 ` Sinan Kaya
2017-08-02 21:13 ` Dave Jiang
2017-08-03 5:01 ` Vinod Koul
2017-08-03 5:11 ` Jiang, Dave
2017-08-03 5:28 ` Vinod Koul
2017-08-03 5:36 ` Jiang, Dave
2017-08-03 8:59 ` Vinod Koul
2017-08-03 14:36 ` Jiang, Dave
2017-08-03 15:55 ` Vinod Koul
2017-08-03 16:14 ` Dan Williams
2017-08-03 17:07 ` Dave Jiang
2017-08-03 18:35 ` Allen Hubbe
2017-08-16 16:50 ` Vinod Koul
2017-08-16 17:06 ` Dan Williams
2017-08-16 17:16 ` Dave Jiang
2017-08-16 17:20 ` Dan Williams
2017-08-16 17:27 ` Dave Jiang
2017-08-18 5:35 ` Vinod Koul
2017-08-03 20:20 ` Ross Zwisler [this message]