public inbox for linux-nvdimm@lists.01.org
From: Dave Jiang <dave.jiang@intel.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: "Koul, Vinod" <vinod.koul@intel.com>,
	Sinan Kaya <okaya@codeaurora.org>,
	"dmaengine@vger.kernel.org" <dmaengine@vger.kernel.org>,
	"linux-nvdimm@lists.01.org" <linux-nvdimm@lists.01.org>
Subject: Re: [PATCH v2 5/5] libnvdimm: add DMA support for pmem blk-mq
Date: Wed, 16 Aug 2017 10:27:17 -0700	[thread overview]
Message-ID: <5a4ed56e-73af-d300-2575-20e7cde4ff01@intel.com> (raw)
In-Reply-To: <CAPcyv4iwH87NjzjtakCj+as8tTVEiMu=Ge=5v8NzNP1Ed0H_yQ@mail.gmail.com>



On 08/16/2017 10:20 AM, Dan Williams wrote:
> On Wed, Aug 16, 2017 at 10:16 AM, Dave Jiang <dave.jiang@intel.com> wrote:
>>
>>
>> On 08/16/2017 10:06 AM, Dan Williams wrote:
>>> On Wed, Aug 16, 2017 at 9:50 AM, Vinod Koul <vinod.koul@intel.com> wrote:
>>>> On Thu, Aug 03, 2017 at 09:14:13AM -0700, Dan Williams wrote:
>>>>>>>>>>>>>>>> Do we need a new API / new function, or new capability?
>>>>>>>>>>>>>>> Hmmm...you are right. I wonder if we need something like DMA_SG cap....
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Unfortunately, DMA_SG means something else. Maybe, we need DMA_MEMCPY_SG
>>>>>>>>>>>>>> to be similar with DMA_MEMSET_SG.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I'm ok with that if Vinod is.
>>>>>>>>>>>>
>>>>>>>>>>>> So what exactly is the ask here, are you trying to do MEMCPY or SG or MEMSET
>>>>>>>>>>>> or all :). We should have done bitfields for this though...
>>>>>>>>>>>
>>>>>>>>>>> Add DMA_MEMCPY_SG to transaction type.
>>>>>>>>>>
>>>>>>>>>> Not MEMSET right, then why not use DMA_SG, DMA_SG is supposed for
>>>>>>>>>> scatterlist to scatterlist copy which is used to check for
>>>>>>>>>> device_prep_dma_sg() calls
>>>>>>>>>>
>>>>>>>>> Right. But we are doing flat buffer to/from scatterlist, not sg to sg. So
>>>>>>>>> we need something separate from what DMA_SG is used for.
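For illustration, the flat-buffer-to-scatterlist copy being discussed can be mocked in plain userspace C. The struct and function names below are hypothetical stand-ins, not the kernel's scatterlist API; this is just a sketch of the data movement a DMA_MEMCPY_SG-style operation would describe:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical, simplified stand-in for a kernel scatterlist entry. */
struct mock_sg {
	void *addr;
	size_t len;
};

/*
 * Copy a flat source buffer into a scatterlist, segment by segment --
 * the buffer-to-sg direction that DMA_SG (sg-to-sg) does not cover.
 * Returns the number of bytes copied.
 */
static size_t buf_to_sg(struct mock_sg *sg, int nents,
			const void *buf, size_t len)
{
	size_t off = 0;

	for (int i = 0; i < nents && off < len; i++) {
		size_t n = sg[i].len < len - off ? sg[i].len : len - off;

		memcpy(sg[i].addr, (const char *)buf + off, n);
		off += n;
	}
	return off;
}
```

The reverse (scatterlist to flat buffer) is symmetric, which is why a single new transaction type could cover both directions.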
>>>>>>>>
>>>>>>>> Hmm, it's SG-and-buffer and it's memcpy, so should we call it DMA_SG_BUFFER?
>>>>>>>> Since it is not memset (or is it?) I would not call it memset. Or maybe we
>>>>>>>> should also change DMA_SG to DMA_SG_SG to make it terribly clear :D
>>>>>>>
>>>>>>> I can create patches for both.
>>>>>>
>>>>>> Great, anyone who disagrees or can give better names :)
>>>>>
>>>>> All my suggestions would involve a lot more work. If we had infinite
>>>>> time we'd stop with the per-operation-type entry points and make this
>>>>> look like a typical driver sub-system that takes commands like
>>>>> block-devices or usb, but perhaps that ship has sailed.
>>>>
>>>> Can you elaborate on this :)
>>>>
>>>> I have been thinking about the need to redo the API. So lets discuss :)
>>>
>>> The high level is straightforward; the devil is in the details. Define
>>> a generic DMA command object, perhaps 'struct dma_io' (certainly not
>>> 'struct dma_async_tx_descriptor'), and have just one entry point
>>> per driver. That 'struct dma_io' would carry a generic command number,
>>> a target address, and a scatterlist. The driver entry point would then
>>> convert the command into the hardware command format and place it on
>>> the submission queue. The basic driving design principle is to convert
>>> all the current function-pointer complexity of the prep_* routines into
>>> data-structure complexity in the common command format.
>>>
>>> This trades off some efficiency because now you need to write the
>>> generic command and write the descriptor, but I think if the operation
>>> is worth offloading those conversion costs must already be in the
>>> noise.
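To make the idea above concrete, here is a userspace sketch of what a generic command object with a single driver entry point might look like. Everything here ('struct dma_io', its fields, dma_io_submit) is hypothetical, invented for illustration; a real driver would translate the command into its hardware descriptor format instead of the memcpy/memset simulation below:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical generic command object in the spirit of 'struct dma_io'. */
enum dma_io_cmd { DMA_IO_MEMCPY, DMA_IO_MEMSET };

struct dma_io_seg {
	void *addr;
	size_t len;
};

struct dma_io {
	enum dma_io_cmd cmd;	/* generic command number */
	void *target;		/* target address */
	struct dma_io_seg *sg;	/* scatterlist describing the source */
	int nents;
	uint8_t fill;		/* pattern, when cmd == DMA_IO_MEMSET */
};

/*
 * Single per-driver entry point: decode the generic command and carry
 * it out (here simulated in software; a driver would build and queue
 * a hardware descriptor instead).
 */
static int dma_io_submit(struct dma_io *io)
{
	char *dst = io->target;

	for (int i = 0; i < io->nents; i++) {
		if (io->cmd == DMA_IO_MEMCPY)
			memcpy(dst, io->sg[i].addr, io->sg[i].len);
		else
			memset(dst, io->fill, io->sg[i].len);
		dst += io->sg[i].len;
	}
	return 0;
}
```

The point of the shape: adding a new operation type means adding an enum value and a decode branch (data-structure complexity), not a new prep_* function pointer in every driver.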
>>
>> Vinod, if you want to look at existing examples, take a look at the
>> block layer request queue, or even better blk-mq. I think this is
>> pretty close to what Dan is envisioning. Also, it's probably time we
>> looked into supporting hotplug for DMA engines? Maybe this will
>> make it easier to do so. I'm willing to help, and I'm hoping it will
>> make things easier for me for the next-gen hardware.
> 
> Yes, device hotplug is a good one to add to the list. We didn't have
> 'struct percpu_ref' when dmaengine started, that would make hotplug
> support easier to handle without coarse locking.
> 
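The lifetime problem percpu_ref would solve can be mocked in userspace with a plain atomic count standing in for the per-cpu reference. The names below are illustrative, not the kernel API, and this toy ignores the tryget-vs-kill race that the real 'struct percpu_ref' machinery is designed to close:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Toy stand-in for percpu_ref-style channel lifetime tracking. */
struct mock_chan {
	atomic_int refs;	/* in-flight submissions */
	atomic_bool dying;	/* set when the channel is being unplugged */
};

/* Submitters take a reference only while the channel is live. */
static bool chan_tryget(struct mock_chan *c)
{
	if (atomic_load(&c->dying))
		return false;
	atomic_fetch_add(&c->refs, 1);
	return true;
}

static void chan_put(struct mock_chan *c)
{
	atomic_fetch_sub(&c->refs, 1);
}

/* Hot-unplug: mark the channel dying, report whether it has drained. */
static bool chan_unplug(struct mock_chan *c)
{
	atomic_store(&c->dying, true);
	return atomic_load(&c->refs) == 0;
}
```

With per-cpu counters behind the same get/put interface, submitters avoid bouncing a shared cache line, which is what makes hotplug cheap for the fast path.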

And also perhaps hotplug of hardware queues / channels. Future hardware may
have reconfigurable queues that are dynamic in number.

Thread overview: 28+ messages
2017-08-02 18:40 [PATCH v2 0/5] Adding blk-mq and DMA support to pmem block driver Dave Jiang
2017-08-02 18:41 ` [PATCH v2 1/5] dmaengine: ioatdma: revert 7618d035 to allow sharing of DMA channels Dave Jiang
2017-08-02 18:41 ` [PATCH v2 2/5] dmaengine: ioatdma: dma_prep_memcpy_sg support Dave Jiang
2017-08-02 18:41 ` [PATCH v2 3/5] dmaengine: add SG support to dmaengine_unmap Dave Jiang
2017-08-02 18:41 ` [PATCH v2 4/5] libnvdimm: Adding blk-mq support to the pmem driver Dave Jiang
2017-08-03 20:04   ` Ross Zwisler
2017-08-02 18:41 ` [PATCH v2 5/5] libnvdimm: add DMA support for pmem blk-mq Dave Jiang
2017-08-02 19:22   ` Sinan Kaya
2017-08-02 20:52     ` Dave Jiang
2017-08-02 21:10       ` Sinan Kaya
2017-08-02 21:13         ` Dave Jiang
2017-08-03  5:01           ` Vinod Koul
2017-08-03  5:11             ` Jiang, Dave
2017-08-03  5:28               ` Vinod Koul
2017-08-03  5:36                 ` Jiang, Dave
2017-08-03  8:59                   ` Vinod Koul
2017-08-03 14:36                     ` Jiang, Dave
2017-08-03 15:55                       ` Vinod Koul
2017-08-03 16:14                         ` Dan Williams
2017-08-03 17:07                           ` Dave Jiang
2017-08-03 18:35                             ` Allen Hubbe
2017-08-16 16:50                           ` Vinod Koul
2017-08-16 17:06                             ` Dan Williams
2017-08-16 17:16                               ` Dave Jiang
2017-08-16 17:20                                 ` Dan Williams
2017-08-16 17:27                                   ` Dave Jiang [this message]
2017-08-18  5:35                                 ` Vinod Koul
2017-08-03 20:20   ` Ross Zwisler
