public inbox for linux-rdma@vger.kernel.org
From: Greg KH <gregkh@linuxfoundation.org>
To: Jason Gunthorpe <jgg@mellanox.com>
Cc: Jeff Kirsher <jeffrey.t.kirsher@intel.com>,
	"davem@davemloft.net" <davem@davemloft.net>,
	"dledford@redhat.com" <dledford@redhat.com>,
	Tony Nguyen <anthony.l.nguyen@intel.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
	"nhorman@redhat.com" <nhorman@redhat.com>,
	"sassmann@redhat.com" <sassmann@redhat.com>,
	"poswald@suse.com" <poswald@suse.com>,
	"mustafa.ismail@intel.com" <mustafa.ismail@intel.com>,
	"shiraz.saleem@intel.com" <shiraz.saleem@intel.com>,
	Dave Ertman <david.m.ertman@intel.com>,
	Andrew Bowers <andrewx.bowers@intel.com>
Subject: Re: [net-next 1/3] ice: Initialize and register platform device to provide RDMA
Date: Thu, 4 Jul 2019 15:46:12 +0200	[thread overview]
Message-ID: <20190704134612.GB10963@kroah.com> (raw)
In-Reply-To: <20190704124824.GK3401@mellanox.com>

On Thu, Jul 04, 2019 at 12:48:29PM +0000, Jason Gunthorpe wrote:
> On Thu, Jul 04, 2019 at 02:42:47PM +0200, Greg KH wrote:
> > On Thu, Jul 04, 2019 at 12:37:33PM +0000, Jason Gunthorpe wrote:
> > > On Thu, Jul 04, 2019 at 02:29:50PM +0200, Greg KH wrote:
> > > > On Thu, Jul 04, 2019 at 12:16:41PM +0000, Jason Gunthorpe wrote:
> > > > > On Wed, Jul 03, 2019 at 07:12:50PM -0700, Jeff Kirsher wrote:
> > > > > > From: Tony Nguyen <anthony.l.nguyen@intel.com>
> > > > > > 
> > > > > > The RDMA block does not advertise on the PCI bus or any other bus.
> > > > > > Thus the ice driver needs to provide access to the RDMA hardware block
> > > > > > via a virtual bus; utilize the platform bus to provide this access.
> > > > > > 
> > > > > > This patch initializes the driver to support RDMA as well as creates
> > > > > > and registers a platform device for the RDMA driver to register to. At
> > > > > > this point the driver is fully initialized to register a platform
> > > > > > driver; however, it cannot yet register, as the ops have not
> > > > > > been implemented.
> > > > > 
> > > > > I think you need Greg's ack on all this driver stuff - particularly
> > > > > that a platform_device is OK.
> > > > 
> > > > A platform_device is almost NEVER ok.
> > > > 
> > > > Don't abuse it, make a real device on a real bus.  If you don't have a
> > > > real bus and just need to create a device to hang other things off of,
> > > > then use the virtual one, that's what it is there for.
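A rough sketch of the class-less device being suggested here, assuming `root_device_register()` is the mechanism meant (the "example" name is purely illustrative):

```c
/*
 * Sketch only: a class-less device to hang other devices off of,
 * instead of a fake platform_device. The "example" name is made up.
 */
#include <linux/device.h>
#include <linux/err.h>
#include <linux/module.h>

static struct device *example_root;

static int __init example_init(void)
{
	/* Shows up under /sys/devices/, with no bus attached */
	example_root = root_device_register("example");
	return PTR_ERR_OR_ZERO(example_root);
}

static void __exit example_exit(void)
{
	root_device_unregister(example_root);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");
```

Child devices can then set `example_root` as their parent so they appear under it in sysfs.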
> > > 
> > > Ideally I'd like to see all the RDMA drivers that connect to ethernet
> > > drivers use some similar scheme.
> > 
> > Why?  They should be attached to a "real" device, why make any up?
> 
> ? A "real" device, like struct pci_dev, can only bind to one
> driver. How can we bind it concurrently to net, rdma, scsi, etc?

MFD was designed for this very problem.
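For concreteness, a rough sketch of what the MFD approach looks like from a PCI driver's probe path; the cell names here are illustrative, not taken from the real ice driver:

```c
/*
 * Sketch only: exposing one PCI function as multiple sub-devices
 * via the MFD core. Cell names ("ice-eth", "ice-rdma") are made up.
 */
#include <linux/mfd/core.h>
#include <linux/pci.h>
#include <linux/platform_device.h>

static const struct mfd_cell ice_cells[] = {
	{ .name = "ice-eth"  },		/* netdev function driver binds here */
	{ .name = "ice-rdma" },		/* RDMA function driver binds here */
};

static int ice_mfd_probe(struct pci_dev *pdev,
			 const struct pci_device_id *id)
{
	int err;

	err = pcim_enable_device(pdev);
	if (err)
		return err;

	/*
	 * Creates one child platform device per cell, parented to the
	 * PCI device; each child binds its own function driver, so the
	 * net and RDMA drivers can attach concurrently.
	 */
	return mfd_add_devices(&pdev->dev, PLATFORM_DEVID_AUTO,
			       ice_cells, ARRAY_SIZE(ice_cells),
			       NULL, 0, NULL);
}
```

The children registered by `mfd_add_devices()` are indeed platform devices underneath, but the MFD core handles their lifetime and keeps them parented to the real PCI device.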

> > > This is for a PCI device that plugs into multiple subsystems in the
> > > kernel, ie it has net driver functionality, rdma functionality, some
> > > even have SCSI functionality
> > 
> > Sounds like a MFD device, why aren't you using that functionality
> > instead?
> 
> This was also my advice, but in another email Jeff says:
> 
>   MFD architecture was also considered, and we selected the simpler
>   platform model. Supporting a MFD architecture would require an
>   additional MFD core driver, individual platform netdev, RDMA function
>   drivers, and stripping a large portion of the netdev drivers into
>   MFD core. The sub-devices registered by MFD core for function
>   drivers are indeed platform devices.  

So, "mfd is too hard, let's abuse a platform device" is ok?

People have been wanting to do MFD drivers for PCI devices for a long
time, it's about time someone actually did the work for it, I bet it
will not be all that complex if tiny embedded drivers can do it :)

thanks,

greg k-h


Thread overview: 11+ messages
     [not found] <20190704021252.15534-1-jeffrey.t.kirsher@intel.com>
2019-07-04 12:15 ` [net-next 0/3][pull request] Intel Wired LAN ver Updates 2019-07-03 Jason Gunthorpe
     [not found] ` <20190704021252.15534-2-jeffrey.t.kirsher@intel.com>
2019-07-04 12:16   ` [net-next 1/3] ice: Initialize and register platform device to provide RDMA Jason Gunthorpe
2019-07-04 12:29     ` Greg KH
2019-07-04 12:37       ` Jason Gunthorpe
2019-07-04 12:42         ` Greg KH
2019-07-04 12:48           ` Jason Gunthorpe
2019-07-04 13:46             ` Greg KH [this message]
2019-07-04 13:53               ` Jason Gunthorpe
2019-07-05 16:33               ` Saleem, Shiraz
2019-07-06  8:25                 ` Greg KH
2019-07-06 16:03                   ` Saleem, Shiraz
