From: "Saleem, Shiraz" <shiraz.saleem@intel.com>
To: Jason Gunthorpe <jgg@nvidia.com>,
"Williams, Dan J" <dan.j.williams@intel.com>
Cc: Leon Romanovsky <leon@kernel.org>,
"dledford@redhat.com" <dledford@redhat.com>,
"kuba@kernel.org" <kuba@kernel.org>,
"davem@davemloft.net" <davem@davemloft.net>,
"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
"gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"Ertman, David M" <david.m.ertman@intel.com>,
"Nguyen, Anthony L" <anthony.l.nguyen@intel.com>,
"Ismail, Mustafa" <mustafa.ismail@intel.com>,
"Samudrala, Sridhar" <sridhar.samudrala@intel.com>,
"Patil, Kiran" <kiran.patil@intel.com>
Subject: RE: [PATCH 07/22] RDMA/irdma: Register an auxiliary driver and implement private channel OPs
Date: Tue, 2 Feb 2021 19:42:11 +0000
Message-ID: <4720390ef608423dac481d813e8b8a62@intel.com>
In-Reply-To: <20210202171454.GX4247@nvidia.com>
> Subject: Re: [PATCH 07/22] RDMA/irdma: Register an auxiliary driver and
> implement private channel OPs
>
> On Mon, Feb 01, 2021 at 05:06:58PM -0800, Dan Williams wrote:
> > On Mon, Feb 1, 2021 at 4:40 PM Saleem, Shiraz <shiraz.saleem@intel.com> wrote:
> > >
> > > > Subject: Re: [PATCH 07/22] RDMA/irdma: Register an auxiliary
> > > > driver and implement private channel OPs
> > > >
> > > > On Sat, Jan 30, 2021 at 01:19:36AM +0000, Saleem, Shiraz wrote:
> > > > > > Subject: Re: [PATCH 07/22] RDMA/irdma: Register an auxiliary
> > > > > > driver and implement private channel OPs
> > > > > >
> > > > > > On Wed, Jan 27, 2021 at 07:16:41PM -0400, Jason Gunthorpe wrote:
> > > > > > > On Wed, Jan 27, 2021 at 10:17:56PM +0000, Saleem, Shiraz wrote:
> > > > > > >
> > > > > > > > Even with another core PCI driver, there still needs to be
> > > > > > > > private communication channel between the aux rdma driver
> > > > > > > > and this PCI driver to pass things like QoS updates.
> > > > > > >
> > > > > > > Data pushed from the core driver to its aux drivers should
> > > > > > > either be done through new callbacks in a struct
> > > > > > > device_driver or by having a notifier chain scheme from the core driver.
> > > > > >
> > > > > > Right, and internal to driver/core device_lock will protect
> > > > > > from parallel probe/remove and PCI flows.
> > > > > >
> > > > >
> > > > > OK. We will hold the device_lock while issuing the .ops
> > > > > callbacks from core driver.
> > > > > This should solve our synchronization issue.
> > > > >
> > > > > There have been a few discussions in this thread. And I would
> > > > > like to be clear on what to do.
> > > > >
> > > > > So we will,
> > > > >
> > > > > 1. Remove .open/.close, .peer_register/.peer_unregister
> > > > > 2. Protect ops callbacks issued from core driver to the aux driver
> > > > >    with device_lock
> > > >
> > > > A notifier chain is probably better, honestly.
> > > >
> > > > Especially since you don't want to split the netdev side, a
> > > > notifier chain can be used by both cases equally.
> > > >
> > >
> > > The device_lock seems to be a simple solution to this synchronization problem.
> > > May I ask what makes the notifier scheme better to solve this?
> > >
> >
> > Only loosely following the arguments here, but one of the requirements
> > of the driver-op scheme is that the notifying agent needs to know the
> > target device. With the notifier-chain approach the target device
> > becomes anonymous to the notifier agent.
>
> Yes, and you need to have an aux device in the first place. The netdev side has
> neither of these things.
But we do. The ice PCI driver is the thing spawning the aux device, and we are trying to do
something directed here, specifically between the ice PCI driver and the irdma aux driver.
From the comment above, the notifier chain approach seems less directed, and better suited for
broadcasting events from the core driver to multiple registered subscribers.
I think there is still going to be a need for some ops even if we were to use notifier chains,
such as ones that need a ret_code.
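For illustration, here is a rough sketch of the two models. All the names below (iidc_event,
iidc_auxiliary_ops, .event_handler, core_notify_*) are made up for this example and are not the
actual iidc.h definitions in this series:

/*
 * Rough sketch only; iidc_event, iidc_auxiliary_ops and .event_handler are
 * placeholder names, not the real iidc.h definitions.
 */
#include <linux/auxiliary_bus.h>
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/notifier.h>
#include <linux/types.h>

struct iidc_event {
	u32 type;
};

/* Directed op: the core driver targets one specific aux driver and gets a ret_code back. */
struct iidc_auxiliary_ops {
	int (*event_handler)(struct auxiliary_device *adev,
			     struct iidc_event *event);
};

static int core_notify_one(struct auxiliary_device *adev,
			   const struct iidc_auxiliary_ops *ops,
			   struct iidc_event *event)
{
	int ret;

	/* device_lock() serializes against parallel aux probe/remove */
	device_lock(&adev->dev);
	ret = ops->event_handler ? ops->event_handler(adev, event) : -EOPNOTSUPP;
	device_unlock(&adev->dev);

	return ret;
}

/* Notifier chain: the core driver broadcasts to every registered subscriber. */
static BLOCKING_NOTIFIER_HEAD(core_event_chain);

static void core_notify_all(struct iidc_event *event)
{
	blocking_notifier_call_chain(&core_event_chain, event->type, event);
}

The first form is what we want for things like QoS updates, where the core driver needs to know
whether the aux driver accepted the change; the second form has no single return code to act on.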
> I think it would be a bit odd to have extensive callbacks that are for RDMA only, that suggests
> something in the core API is not general enough.
>
Yes, there are some domain-specific ops. But that is within the boundary of how the aux bus should be used, no?
https://www.kernel.org/doc/html/latest/driver-api/auxiliary_bus.html
"The auxiliary_driver can also be encapsulated inside custom drivers that make the core device's functionality extensible by adding additional domain-specific ops as follows:"
struct my_driver {
	struct auxiliary_driver auxiliary_drv;
	const struct my_ops ops;
};
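As a rough sketch of how that pattern could look on our side (iidc_auxiliary_drv, iidc_core_ops
and qos_change below are placeholder names for illustration, not the actual irdma/ice
definitions):

/* Placeholder names only; not the actual iidc/irdma definitions. */
#include <linux/auxiliary_bus.h>
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/kernel.h>

struct iidc_core_ops {
	/* e.g. a QoS/TC update pushed from the ice PCI driver to irdma */
	int (*qos_change)(struct auxiliary_device *adev, u8 tc_map);
};

/* The irdma aux driver wraps auxiliary_driver with its domain-specific ops. */
struct iidc_auxiliary_drv {
	struct auxiliary_driver adrv;
	struct iidc_core_ops ops;
};

/* Core (ice PCI) driver side: find the bound aux driver's ops and call them
 * under device_lock so aux probe/remove cannot race with the callback.
 */
static int core_push_qos_change(struct auxiliary_device *adev, u8 tc_map)
{
	struct iidc_auxiliary_drv *iadrv;
	int ret = -ENODEV;

	device_lock(&adev->dev);
	if (adev->dev.driver) {
		iadrv = container_of(to_auxiliary_drv(adev->dev.driver),
				     struct iidc_auxiliary_drv, adrv);
		if (iadrv->ops.qos_change)
			ret = iadrv->ops.qos_change(adev, tc_map);
	}
	device_unlock(&adev->dev);

	return ret;
}

Broadcast-style events could still go over a notifier chain alongside this, but the directed path
is what carries a ret_code back to the core driver.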
Thread overview: 80+ messages
2021-01-22 23:48 [PATCH 00/22] Add Intel Ethernet Protocol Driver for RDMA (irdma) Shiraz Saleem
2021-01-22 23:48 ` [PATCH 01/22] iidc: Introduce iidc.h Shiraz Saleem
2021-01-22 23:48 ` [PATCH 02/22] ice: Initialize RDMA support Shiraz Saleem
2021-01-22 23:48 ` [PATCH 03/22] ice: Implement iidc operations Shiraz Saleem
2021-01-22 23:48 ` [PATCH 04/22] ice: Register auxiliary device to provide RDMA Shiraz Saleem
2021-01-25 19:09 ` Jason Gunthorpe
2021-02-05 15:23 ` Saleem, Shiraz
2021-02-05 15:27 ` Jason Gunthorpe
2021-01-22 23:48 ` [PATCH 05/22] i40e: Prep i40e header for aux bus conversion Shiraz Saleem
2021-01-22 23:48 ` [PATCH 06/22] i40e: Register auxiliary devices to provide RDMA Shiraz Saleem
2021-01-22 23:48 ` [PATCH 07/22] RDMA/irdma: Register an auxiliary driver and implement private channel OPs Shiraz Saleem
2021-01-24 13:45 ` Leon Romanovsky
2021-01-25 13:28 ` Jason Gunthorpe
2021-01-25 20:52 ` Jakub Kicinski
2021-01-26 0:39 ` Saleem, Shiraz
2021-01-26 0:47 ` Jason Gunthorpe
2021-01-26 0:57 ` Keller, Jacob E
2021-01-26 1:01 ` Jacob Keller
2021-01-26 1:10 ` Jason Gunthorpe
2021-01-27 0:42 ` Saleem, Shiraz
2021-01-27 2:44 ` Jason Gunthorpe
2021-01-30 1:19 ` Saleem, Shiraz
2021-01-26 5:29 ` Leon Romanovsky
2021-01-26 22:07 ` Jacob Keller
2021-01-27 1:02 ` Saleem, Shiraz
2021-01-26 0:39 ` Saleem, Shiraz
2021-01-25 18:42 ` Jason Gunthorpe
2021-01-26 0:42 ` Saleem, Shiraz
2021-01-26 0:59 ` Jason Gunthorpe
2021-01-27 0:41 ` Saleem, Shiraz
2021-01-27 12:18 ` Leon Romanovsky
2021-01-27 22:17 ` Saleem, Shiraz
2021-01-27 23:16 ` Jason Gunthorpe
2021-01-28 5:41 ` Leon Romanovsky
2021-01-30 1:19 ` Saleem, Shiraz
2021-02-01 6:09 ` Leon Romanovsky
2021-02-01 19:18 ` Jason Gunthorpe
2021-02-02 0:40 ` Saleem, Shiraz
2021-02-02 1:06 ` Dan Williams
2021-02-02 17:14 ` Jason Gunthorpe
2021-02-02 19:42 ` Saleem, Shiraz [this message]
2021-02-02 23:17 ` Jason Gunthorpe
2021-02-01 19:21 ` Jason Gunthorpe
2021-01-26 5:37 ` Leon Romanovsky
2021-01-30 1:19 ` Saleem, Shiraz
2021-02-01 19:19 ` Jason Gunthorpe
2021-01-25 19:16 ` Jason Gunthorpe
2021-01-30 1:19 ` Saleem, Shiraz
2021-01-22 23:48 ` [PATCH 08/22] RDMA/irdma: Implement device initialization definitions Shiraz Saleem
2021-01-22 23:48 ` [PATCH 09/22] RDMA/irdma: Implement HW Admin Queue OPs Shiraz Saleem
2021-01-25 19:23 ` Jason Gunthorpe
2021-01-27 0:41 ` Saleem, Shiraz
2021-01-27 2:41 ` Jason Gunthorpe
2021-01-30 1:18 ` Saleem, Shiraz
2021-01-22 23:48 ` [PATCH 10/22] RDMA/irdma: Add HMC backing store setup functions Shiraz Saleem
2021-01-22 23:48 ` [PATCH 11/22] RDMA/irdma: Add privileged UDA queue implementation Shiraz Saleem
2021-01-22 23:48 ` [PATCH 12/22] RDMA/irdma: Add QoS definitions Shiraz Saleem
2021-01-22 23:48 ` [PATCH 13/22] RDMA/irdma: Add connection manager Shiraz Saleem
2021-01-22 23:48 ` [PATCH 14/22] RDMA/irdma: Add PBLE resource manager Shiraz Saleem
2021-01-22 23:48 ` [PATCH 15/22] RDMA/irdma: Implement device supported verb APIs Shiraz Saleem
2021-01-24 14:18 ` Leon Romanovsky
2021-01-27 1:04 ` Saleem, Shiraz
2021-01-22 23:48 ` [PATCH 16/22] RDMA/irdma: Add RoCEv2 UD OP support Shiraz Saleem
2021-01-22 23:48 ` [PATCH 17/22] RDMA/irdma: Add user/kernel shared libraries Shiraz Saleem
2021-01-22 23:48 ` [PATCH 18/22] RDMA/irdma: Add miscellaneous utility definitions Shiraz Saleem
2021-01-25 19:37 ` Jason Gunthorpe
2021-01-22 23:48 ` [PATCH 19/22] RDMA/irdma: Add dynamic tracing for CM Shiraz Saleem
2021-01-22 23:48 ` [PATCH 20/22] RDMA/irdma: Add ABI definitions Shiraz Saleem
2021-01-25 19:45 ` Jason Gunthorpe
2021-01-30 1:18 ` Saleem, Shiraz
2021-02-01 19:21 ` Jason Gunthorpe
2021-02-05 20:12 ` Saleem, Shiraz
2021-01-22 23:48 ` [PATCH 21/22] RDMA/irdma: Add irdma Kconfig/Makefile and remove i40iw Shiraz Saleem
2021-01-25 18:50 ` Jason Gunthorpe
2021-01-26 0:39 ` Saleem, Shiraz
2021-01-26 5:47 ` Leon Romanovsky
2021-01-22 23:48 ` [PATCH 22/22] RDMA/irdma: Update MAINTAINERS file Shiraz Saleem
2021-01-25 13:29 ` [PATCH 00/22] Add Intel Ethernet Protocol Driver for RDMA (irdma) Jason Gunthorpe
2021-01-25 18:44 ` Jason Gunthorpe
2021-01-26 0:39 ` Saleem, Shiraz