public inbox for linux-nvme@lists.infradead.org
From: Niklas Cassel <cassel@kernel.org>
To: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Cc: "Damien Le Moal" <dlemoal@kernel.org>,
	"Vinod Koul" <vkoul@kernel.org>,
	linux-nvme@lists.infradead.org, "Christoph Hellwig" <hch@lst.de>,
	"Keith Busch" <kbusch@kernel.org>,
	"Sagi Grimberg" <sagi@grimberg.me>,
	linux-pci@vger.kernel.org, "Krzysztof Wilczyński" <kw@linux.com>,
	"Kishon Vijay Abraham I" <kishon@kernel.org>,
	"Bjorn Helgaas" <bhelgaas@google.com>,
	"Lorenzo Pieralisi" <lpieralisi@kernel.org>,
	"Rick Wertenbroek" <rick.wertenbroek@gmail.com>
Subject: Re: [PATCH v4 17/18] nvmet: New NVMe PCI endpoint target driver
Date: Tue, 17 Dec 2024 16:59:50 +0100	[thread overview]
Message-ID: <Z2Gf9lv6hLROjM8e@ryzen> (raw)
In-Reply-To: <20241217090129.6dodrgi4tn7l3cod@thinkpad>

On Tue, Dec 17, 2024 at 02:31:29PM +0530, Manivannan Sadhasivam wrote:
> On Tue, Dec 17, 2024 at 11:51:44AM +0530, Manivannan Sadhasivam wrote:
> > On Tue, Dec 17, 2024 at 10:57:02AM +0530, Vinod Koul wrote:
> > > On 16-12-24, 11:12, Damien Le Moal wrote:
> > > > On 2024/12/16 8:35, Vinod Koul wrote:
> > > > > Hi Niklas,
> > > > > 
> > > > > On 13-12-24, 17:59, Niklas Cassel wrote:
> > > > >> Hello Vinod,
> > > > >>
> > > > >> I am a bit confused about the usage of the dmaengine API, and I hope that you
> > > > >> could help make me slightly less confused :)
> > > > > 
> > > > > Sure thing!
> > > > > 
> > > > >> If you look at the nvmet_pciep_epf_dma_transfer() function below, it takes a
> > > > >> mutex around the dmaengine_slave_config(), dmaengine_prep_slave_single(),
> > > > >> dmaengine_submit(), dma_sync_wait(), and dmaengine_terminate_sync() calls.
> > > > >>
> > > > >> I really wish that we would remove this mutex, to get better performance.
> > > > >>
> > > > >>
> > > > >> If I look at e.g. the drivers/dma/dw-edma/dw-edma-core.c driver, I can see
> > > > >> that dmaengine_prep_slave_single() (which will call
> > > > >> device_prep_slave_sg(.., .., 1, .., .., ..)) allocates a new
> > > > >> dma_async_tx_descriptor for each function call.
> > > > >>
> > > > >> I can see that device_prep_slave_sg() (dw_edma_device_prep_slave_sg()) will
> > > > >> call dw_edma_device_transfer() which will call vchan_tx_prep(), which adds
> > > > >> the descriptor to the tail of a list.
> > > > >>
> > > > >> I can also see that dw_edma_done_interrupt() will automatically start the
> > > > >> transfer of the next descriptor (using vchan_next_desc()).
> > > > >>
> > > > >> So this looks like it is supposed to be asynchronous... however, if we simply
> > > > >> remove the mutex, we get IOMMU errors, most likely because the DMA writes to
> > > > >> an incorrect address.
> > > > >>
> > > > >> It looks like this is because dmaengine_prep_slave_single() really requires
> > > > >> dmaengine_slave_config() for each transfer. (Since we are supplying a src_addr
> > > > >> in the sconf that we are supplying to dmaengine_slave_config().)
> > > > >>
> > > > >> (i.e. we can't call dmaengine_slave_config() while a DMA transfer is active.)
> > > > >>
> > > > >> So while this API is supposed to be async, to me it looks like it can only
> > > > >> be used in a synchronous manner... But that seems like a really weird design.
> > > > >>
> > > > >> Am I missing something obvious here?
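[Editor's note: the mutex-guarded sequence described above can be sketched roughly as follows. This is a hypothetical outline for illustration only, not the actual nvmet_pciep_epf_dma_transfer() code; the names "epf", "dma_lock", "pci_bus_addr" and "local_dma_addr" are placeholders.]

```c
/* Hypothetical outline, not the actual driver code: the per-transfer
 * PCI bus address has to be passed through the channel-wide slave
 * config, which is why the whole sequence must be serialized. */
struct dma_slave_config sconf = {
	.direction = DMA_DEV_TO_MEM,
	.src_addr  = pci_bus_addr,	/* different for every transfer */
};
struct dma_async_tx_descriptor *desc;
dma_cookie_t cookie;

mutex_lock(&epf->dma_lock);

dmaengine_slave_config(chan, &sconf);
desc = dmaengine_prep_slave_single(chan, local_dma_addr, len,
				   DMA_DEV_TO_MEM, DMA_CTRL_ACK);
cookie = dmaengine_submit(desc);
dma_sync_wait(chan, cookie);
dmaengine_terminate_sync(chan);

mutex_unlock(&epf->dma_lock);
```

[Without the mutex, a second caller's dmaengine_slave_config() can overwrite src_addr before the first caller's descriptor is prepared, which would match the IOMMU faults described above.]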
> > > > > 
> > > > > Yes, I feel NVMe is being treated as a slave transfer, which it might not be.
> > > > > This API was designed for peripherals like i2c/spi etc. where we have a
> > > > > hardware address to read/write to. So the dma_slave_config would pass on
> > > > > the transfer details for the peripheral, like address, width of the FIFO,
> > > > > depth etc. These are setup-time configuration, so you call it once for a
> > > > > channel and then prepare the descriptor, submit... and repeat prepare and
> > > > > submit...
> > > > > 
> > > > > I suspect that since you are passing an address which keeps changing in
> > > > > the dma_slave_config, you need to guard that and the prep_slave_single()
> > > > > call, as while preparing the descriptor the driver would look up what was
> > > > > set up in the configuration.
> > > > > 
> > > > > I suggest using the prep_memcpy() API instead and passing the source and
> > > > > destination; then there is no need to lock the calls...
> > > > 
> > > > Vinod,
> > > > 
> > > > Thank you for the information. However, I think we can use this only if the DMA
> > > > controller driver implements the device_prep_dma_memcpy operation, no?
> > > > In our case, the DWC EDMA driver does not seem to implement it.
> > > 
> > > It should be added in that case.
> > > 
> > > Before that, the bigger question is: should NVMe be a slave transfer or a
> > > memcpy? Was driver support the reason why the slave transfer was used here?
> > > 
> > > As I said, slave is for peripherals which have a static FIFO to
> > > send/receive data from; NVMe sounds like a memory transfer to me. Is that
> > > a right assumption?
> > > 
> > 
> > My understanding is that DMA_MEMCPY is for local DDR transfers, i.e., src and dst
> > are local addresses, and DMA_SLAVE is for transfers between remote and local
> > addresses. I haven't looked into the NVMe EPF driver yet, but it should do
> > its transfers between remote and local addresses. This is similar to the MHI
> > EPF driver as well.
> > 
> 
> I had an offline discussion with Vinod and he clarified that the DMA_SLAVE
> implementation is supposed to be used by clients with a fixed FIFO. That's why
> 'struct dma_slave_config' has options to configure maxburst, addr_width,
> port_window_size, etc. So these are not applicable for our use case, where we
> are just carrying out transfers between remote and local DDR.
> 
> So we should be implementing prep_memcpy() in dw-edma driver and use DMA_MEMCPY
> in client drivers.
> 
> @Niklas: Since you asked this question, are you volunteering to implement
> prep_memcpy() in dw-edma driver? ;)

I did implement something that seems to be working.

It would be nice if you could help test it using your MHI EPF driver.
(You need to convert it to use _prep_memcpy() in order to be able to test.)
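[Editor's note: for reference, the client-side conversion amounts to something like the sketch below. This is hedged, illustrative code: "dst_dma_addr", "src_dma_addr" and "len" are placeholders, and the exact flags are a guess. The point is that with dmaengine_prep_dma_memcpy() the source and destination travel with each descriptor, so no per-transfer dmaengine_slave_config() call — and hence no mutex — is needed.]

```c
struct dma_async_tx_descriptor *desc;
dma_cookie_t cookie;

/* Both addresses are carried by the descriptor itself, so concurrent
 * submitters no longer race on a channel-wide slave config. */
desc = dmaengine_prep_dma_memcpy(chan, dst_dma_addr, src_dma_addr,
				 len, DMA_CTRL_ACK | DMA_PREP_INTERRUPT);
if (!desc)
	return -EIO;

cookie = dmaengine_submit(desc);
dma_async_issue_pending(chan);
```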


Kind regards,
Niklas



Thread overview: 45+ messages
2024-12-12 11:34 [PATCH v4 00/18] NVMe PCI endpoint target driver Damien Le Moal
2024-12-12 11:34 ` [PATCH v4 01/18] nvme: Move opcode string helper functions declarations Damien Le Moal
2024-12-12 11:34 ` [PATCH v4 02/18] nvmet: Add vendor_id and subsys_vendor_id subsystem attributes Damien Le Moal
2024-12-12 11:34 ` [PATCH v4 03/18] nvmet: Export nvmet_update_cc() and nvmet_cc_xxx() helpers Damien Le Moal
2024-12-12 11:34 ` [PATCH v4 04/18] nvmet: Introduce nvmet_get_cmd_effects_admin() Damien Le Moal
2024-12-12 11:34 ` [PATCH v4 05/18] nvmet: Add drvdata field to struct nvmet_ctrl Damien Le Moal
2024-12-12 11:34 ` [PATCH v4 06/18] nvme: Add PCI transport type Damien Le Moal
2024-12-12 11:34 ` [PATCH v4 07/18] nvmet: Improve nvmet_alloc_ctrl() interface and implementation Damien Le Moal
2024-12-12 11:34 ` [PATCH v4 08/18] nvmet: Introduce nvmet_req_transfer_len() Damien Le Moal
2024-12-12 11:34 ` [PATCH v4 09/18] nvmet: Introduce nvmet_sq_create() and nvmet_cq_create() Damien Le Moal
2024-12-12 11:34 ` [PATCH v4 10/18] nvmet: Add support for I/O queue management admin commands Damien Le Moal
2024-12-12 11:34 ` [PATCH v4 11/18] nvmet: Do not require SGL for PCI target controller commands Damien Le Moal
2024-12-12 11:34 ` [PATCH v4 12/18] nvmet: Introduce get/set_feature controller operations Damien Le Moal
2024-12-12 11:34 ` [PATCH v4 13/18] nvmet: Implement host identifier set feature support Damien Le Moal
2024-12-12 18:50   ` Bjorn Helgaas
2024-12-12 11:34 ` [PATCH v4 14/18] nvmet: Implement interrupt coalescing " Damien Le Moal
2024-12-12 11:34 ` [PATCH v4 15/18] nvmet: Implement interrupt config " Damien Le Moal
2024-12-12 11:34 ` [PATCH v4 16/18] nvmet: Implement arbitration " Damien Le Moal
2024-12-12 11:34 ` [PATCH v4 17/18] nvmet: New NVMe PCI endpoint target driver Damien Le Moal
2024-12-12 18:55   ` Bjorn Helgaas
2024-12-14  5:52     ` Damien Le Moal
2024-12-13 16:59   ` Niklas Cassel
2024-12-16 16:35     ` Vinod Koul
2024-12-16 19:12       ` Damien Le Moal
2024-12-17  5:27         ` Vinod Koul
2024-12-17  6:21           ` Manivannan Sadhasivam
2024-12-17  9:01             ` Manivannan Sadhasivam
2024-12-17 15:59               ` Niklas Cassel [this message]
2024-12-17 16:04                 ` [PATCH 1/3] dmaengine: dw-edma: Add support for DMA_MEMCPY Niklas Cassel
2024-12-17 16:04                   ` [PATCH 2/3] PCI: endpoint: pci-epf-test: Use private DMA_MEMCPY channel Niklas Cassel
2024-12-17 16:04                   ` [PATCH 3/3] debug prints - DO NOT MERGE Niklas Cassel
2024-12-18 18:37                 ` [PATCH v4 17/18] nvmet: New NVMe PCI endpoint target driver Manivannan Sadhasivam
2024-12-18 18:01       ` Niklas Cassel
2024-12-17  8:53   ` Manivannan Sadhasivam
2024-12-17 14:35     ` Damien Le Moal
2024-12-17 16:41       ` Manivannan Sadhasivam
2024-12-17 17:03         ` Damien Le Moal
2024-12-17 17:17           ` Manivannan Sadhasivam
2024-12-19  5:47         ` Christoph Hellwig
2024-12-19  5:45       ` Christoph Hellwig
2024-12-12 11:34 ` [PATCH v4 18/18] Documentation: Document the " Damien Le Moal
2024-12-12 18:48   ` Bjorn Helgaas
2024-12-17 17:30   ` Manivannan Sadhasivam
2024-12-17 17:40     ` Damien Le Moal
2024-12-16  6:07 ` [PATCH v4 00/18] " Manivannan Sadhasivam
