From: Jason Gunthorpe <jgg@nvidia.com>
To: Leon Romanovsky <leon@kernel.org>
Cc: Doug Ledford <dledford@redhat.com>,
Adit Ranadive <aditr@vmware.com>,
Ariel Elior <aelior@marvell.com>,
Bernard Metzler <bmt@zurich.ibm.com>,
Christian Benvenuti <benve@cisco.com>,
Christoph Hellwig <hch@infradead.org>,
Dennis Dalessandro <dennis.dalessandro@intel.com>,
Devesh Sharma <devesh.sharma@broadcom.com>,
Faisal Latif <faisal.latif@intel.com>,
"Gal Pressman" <galpress@amazon.com>,
Lijun Ou <oulijun@huawei.com>, <linux-rdma@vger.kernel.org>,
Michal Kalderon <mkalderon@marvell.com>,
"Mike Marciniszyn" <mike.marciniszyn@intel.com>,
Naresh Kumar PBS <nareshkumar.pbs@broadcom.com>,
Nelson Escobar <neescoba@cisco.com>,
"Parav Pandit" <parav@nvidia.com>,
Parvi Kaustubhi <pkaustub@cisco.com>,
"Potnuri Bharat Teja" <bharat@chelsio.com>,
Selvin Xavier <selvin.xavier@broadcom.com>,
Shiraz Saleem <shiraz.saleem@intel.com>,
Somnath Kotur <somnath.kotur@broadcom.com>,
Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>,
VMware PV-Drivers <pv-drivers@vmware.com>,
Weihang Li <liweihang@huawei.com>,
"Wei Hu(Xavier)" <huwei87@hisilicon.com>,
Yishai Hadas <yishaih@nvidia.com>,
Zhu Yanjun <yanjunz@nvidia.com>
Subject: Re: [PATCH rdma-next v4] RDMA: Explicitly pass in the dma_device to ib_register_device
Date: Fri, 9 Oct 2020 12:55:26 -0300
Message-ID: <20201009155526.GA540955@nvidia.com>
In-Reply-To: <20201008082752.275846-1-leon@kernel.org>
On Thu, Oct 08, 2020 at 11:27:52AM +0300, Leon Romanovsky wrote:
> From: Jason Gunthorpe <jgg@nvidia.com>
>
> The code in setup_dma_device has become rather convoluted; move all of
> this to the drivers. Drivers now pass in a DMA-capable struct device
> which will be used to set up DMA, or drivers must fully configure the
> ibdev for DMA and pass in NULL.
>
> Other than setting the masks in rvt, all drivers were doing this
> already anyhow.
>
> mthca, mlx4 and mlx5 were already setting the maximum DMA segment size
> based on their hardware limits in:
> __mthca_init_one()
> dma_set_max_seg_size (1G)
>
> __mlx4_init_one()
> dma_set_max_seg_size (1G)
>
> mlx5_pci_init()
> set_dma_caps()
> dma_set_max_seg_size (2G)
>
> Other non-software drivers (except usnic) had their limit extended to
> UINT_MAX [1] instead of the previous 2G:
>
> [1] https://lore.kernel.org/linux-rdma/20200924114940.GE9475@nvidia.com/
> Suggested-by: Christoph Hellwig <hch@infradead.org>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> Signed-off-by: Parav Pandit <parav@nvidia.com>
> Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> ---
> Changelog:
> v4:
> * Deleted dma_virt_op assignments and masks in rvt, siw and rxe drivers
> v3: https://lore.kernel.org/linux-rdma/20201007070641.3552647-1-leon@kernel.org
> * Changed hardcoded max_segment_size to use dma_set_max_seg_size for
> RXE and SIW.
> * Protected dma_virt_ops from linkage failure without CONFIG_DMA_OPS.
> * Removed not needed mask setting in RVT.
> v2: https://lore.kernel.org/linux-rdma/20201006073229.2347811-1-leon@kernel.org
> * Simplified setup_dma_device() by removing extra if()s over various
> WARN_ON()s.
> v1: https://lore.kernel.org/linux-rdma/20201005110050.1703618-1-leon@kernel.org
> * Moved dma_set_max_seg_size() to be part of the drivers and increased
> the limit to UINT_MAX.
> ---
> drivers/infiniband/core/device.c | 65 +++++--------------
> drivers/infiniband/hw/bnxt_re/main.c | 3 +-
> drivers/infiniband/hw/cxgb4/provider.c | 4 +-
> drivers/infiniband/hw/efa/efa_main.c | 4 +-
> drivers/infiniband/hw/hns/hns_roce_main.c | 3 +-
> drivers/infiniband/hw/i40iw/i40iw_verbs.c | 3 +-
> drivers/infiniband/hw/mlx4/main.c | 3 +-
> drivers/infiniband/hw/mlx5/main.c | 2 +-
> drivers/infiniband/hw/mthca/mthca_provider.c | 2 +-
> drivers/infiniband/hw/ocrdma/ocrdma_main.c | 4 +-
> drivers/infiniband/hw/qedr/main.c | 3 +-
> drivers/infiniband/hw/usnic/usnic_ib_main.c | 3 +-
> .../infiniband/hw/vmw_pvrdma/pvrdma_main.c | 4 +-
> drivers/infiniband/sw/rdmavt/vt.c | 6 +-
> drivers/infiniband/sw/rxe/rxe_verbs.c | 9 +--
> drivers/infiniband/sw/siw/siw_main.c | 8 +--
> include/rdma/ib_verbs.h | 3 +-
> 17 files changed, 52 insertions(+), 77 deletions(-)
Applied to for-next, thanks
Jason
Thread overview: 4+ messages
2020-10-08 8:27 [PATCH rdma-next v4] RDMA: Explicitly pass in the dma_device to ib_register_device Leon Romanovsky
2020-10-08 13:56 ` Christoph Hellwig
2020-10-08 14:55 ` Leon Romanovsky
2020-10-09 15:55 ` Jason Gunthorpe [this message]