From: Jason Gunthorpe <jgg@ziepe.ca>
To: Shiraz Saleem <shiraz.saleem@intel.com>
Cc: dledford@redhat.com, davem@davemloft.net,
	linux-rdma@vger.kernel.org, netdev@vger.kernel.org,
	mustafa.ismail@intel.com, jeffrey.t.kirsher@intel.com
Subject: Re: [RFC v1 01/19] net/i40e: Add peer register/unregister to struct i40e_netdev_priv
Date: Fri, 15 Feb 2019 10:22:33 -0700	[thread overview]
Message-ID: <20190215172233.GC30706@ziepe.ca> (raw)
In-Reply-To: <20190215171107.6464-2-shiraz.saleem@intel.com>

On Fri, Feb 15, 2019 at 11:10:48AM -0600, Shiraz Saleem wrote:
> Expose the register/unregister function pointers in struct
> i40e_netdev_priv, which is accessible via the netdev_priv() interface
> in the RDMA driver. On a netdev notification in the RDMA driver,
> the appropriate LAN driver register/unregister function is invoked
> through struct i40e_netdev_priv.
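
A rough sketch of the pattern the quoted commit message describes, with
hypothetical names (peer_register/peer_unregister, irdma_netdev_event)
standing in for whatever the patch actually uses:

#include <linux/netdevice.h>
#include <linux/notifier.h>

/* Hypothetical callbacks the LAN driver would expose in its priv. */
struct i40e_netdev_priv {
	int (*peer_register)(struct net_device *netdev);
	void (*peer_unregister)(struct net_device *netdev);
	/* existing i40e private state follows */
};

static int irdma_netdev_event(struct notifier_block *nb,
			      unsigned long event, void *ptr)
{
	struct net_device *netdev = netdev_notifier_info_to_dev(ptr);
	struct i40e_netdev_priv *np;

	/*
	 * A real implementation must first verify that this netdev
	 * belongs to i40e; netdev_priv() on a foreign netdev is garbage.
	 */
	np = netdev_priv(netdev);

	switch (event) {
	case NETDEV_REGISTER:
		if (np->peer_register)
			np->peer_register(netdev);
		break;
	case NETDEV_UNREGISTER:
		if (np->peer_unregister)
			np->peer_unregister(netdev);
		break;
	}
	return NOTIFY_DONE;
}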

Why? Later patches add an entire device_add()-based mechanism. Why do
you need two attachment paths?

The RDMA driver should bind to the device that device_add() created and
reliably get the netdev from there. It should not listen to netdev
notifiers for attachment.
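
For reference, a minimal sketch of what that could look like; the names
(i40e_peer_dev, i40e_add_peer_dev, irdma_probe) are hypothetical and the
bus match/probe wiring is omitted for brevity:

#include <linux/device.h>
#include <linux/netdevice.h>

/* Hypothetical bus the peer devices and the RDMA driver share. */
static struct bus_type i40e_peer_bus = {
	.name = "i40e_peer",
};

struct i40e_peer_dev {
	struct device dev;
	struct net_device *netdev;	/* valid for the device's lifetime */
};

/* LAN driver side: publish a child device that carries the netdev. */
static int i40e_add_peer_dev(struct i40e_peer_dev *peer,
			     struct device *parent,
			     struct net_device *netdev)
{
	device_initialize(&peer->dev);
	peer->dev.parent = parent;
	peer->dev.bus = &i40e_peer_bus;
	dev_set_name(&peer->dev, "%s-rdma", netdev_name(netdev));
	peer->netdev = netdev;
	return device_add(&peer->dev);
}

/* RDMA driver side: bound by the bus core, the netdev comes for free. */
static int irdma_probe(struct device *dev)
{
	struct i40e_peer_dev *peer =
		container_of(dev, struct i40e_peer_dev, dev);

	/* set up the ib_device against peer->netdev here */
	return 0;
}

Binding through the driver model gives ordered probe/remove and
refcounted device lifetimes, which is exactly what the open-coded
notifier versions tend to get wrong.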

It would be excellent if you could make this more general, as pretty
much every single RDMA driver has some open-coded (and often wrongly
locked) version of this attachment process.

This series is very big, so if you can see a way to build a general
attachment scheme around device_add() and friends, it would make a
great precursor series.

Jason

Thread overview: 55+ messages
2019-02-15 17:10 [RFC v1 00/19] Add unified Intel Ethernet RDMA driver (irdma) Shiraz Saleem
2019-02-15 17:10 ` [RFC v1 01/19] net/i40e: Add peer register/unregister to struct i40e_netdev_priv Shiraz Saleem
2019-02-15 17:22   ` Jason Gunthorpe [this message]
2019-02-21  2:19     ` Saleem, Shiraz
2019-02-21 19:35       ` Jason Gunthorpe
2019-02-22 20:13         ` Ertman, David M
2019-02-22 20:23           ` Jason Gunthorpe
2019-03-13  2:11             ` Jeff Kirsher
2019-03-13 13:28               ` Jason Gunthorpe
2019-05-10 13:31                 ` Shiraz Saleem
2019-05-10 18:17                   ` Jason Gunthorpe
2019-02-15 17:10 ` [RFC v1 02/19] net/ice: Create framework for VSI queue context Shiraz Saleem
2019-02-15 17:10 ` [RFC v1 03/19] net/ice: Add support for ice peer devices and drivers Shiraz Saleem
2019-02-15 17:10 ` [RFC v1 04/19] RDMA/irdma: Add driver framework definitions Shiraz Saleem
2019-02-24 15:02   ` Gal Pressman
2019-02-26 21:08     ` Saleem, Shiraz
2019-02-15 17:10 ` [RFC v1 05/19] RDMA/irdma: Implement device initialization definitions Shiraz Saleem
2019-02-15 17:10 ` [RFC v1 06/19] RDMA/irdma: Implement HW Admin Queue OPs Shiraz Saleem
2019-02-15 17:10 ` [RFC v1 07/19] RDMA/irdma: Add HMC backing store setup functions Shiraz Saleem
2019-02-15 17:10 ` [RFC v1 08/19] RDMA/irdma: Add privileged UDA queue implementation Shiraz Saleem
2019-02-24 11:42   ` Gal Pressman
2019-02-15 17:10 ` [RFC v1 09/19] RDMA/irdma: Add QoS definitions Shiraz Saleem
2019-02-15 17:10 ` [RFC v1 10/19] RDMA/irdma: Add connection manager Shiraz Saleem
2019-02-24 11:21   ` Gal Pressman
2019-02-25 18:46     ` Jason Gunthorpe
2019-02-26 21:07       ` Saleem, Shiraz
2019-02-15 17:10 ` [RFC v1 11/19] RDMA/irdma: Add PBLE resource manager Shiraz Saleem
2019-02-27  6:58   ` Leon Romanovsky
2019-02-15 17:10 ` [RFC v1 12/19] RDMA/irdma: Implement device supported verb APIs Shiraz Saleem
2019-02-15 17:35   ` Jason Gunthorpe
2019-02-15 22:19     ` Shiraz Saleem
2019-02-15 22:32       ` Jason Gunthorpe
2019-02-20 14:52     ` Saleem, Shiraz
2019-02-20 16:51       ` Jason Gunthorpe
2019-02-24 14:35   ` Gal Pressman
2019-02-25 18:50     ` Jason Gunthorpe
2019-02-26 21:09       ` Saleem, Shiraz
2019-02-26 21:09     ` Saleem, Shiraz
2019-02-27  7:31       ` Gal Pressman
2019-02-15 17:11 ` [RFC v1 13/19] RDMA/irdma: Add RoCEv2 UD OP support Shiraz Saleem
2019-02-27  6:50   ` Leon Romanovsky
2019-02-15 17:11 ` [RFC v1 14/19] RDMA/irdma: Add user/kernel shared libraries Shiraz Saleem
2019-02-15 17:11 ` [RFC v1 15/19] RDMA/irdma: Add miscellaneous utility definitions Shiraz Saleem
2019-02-15 17:47   ` Jason Gunthorpe
2019-02-20  7:51     ` Leon Romanovsky
2019-02-20 14:53     ` Saleem, Shiraz
2019-02-20 16:53       ` Jason Gunthorpe
2019-02-15 17:11 ` [RFC v1 16/19] RDMA/irdma: Add dynamic tracing for CM Shiraz Saleem
2019-02-15 17:11 ` [RFC v1 17/19] RDMA/irdma: Add ABI definitions Shiraz Saleem
2019-02-15 17:16   ` Jason Gunthorpe
2019-02-20 14:52     ` Saleem, Shiraz
2019-02-20 16:50       ` Jason Gunthorpe
2019-02-15 17:11 ` [RFC v1 18/19] RDMA/irdma: Add Kconfig and Makefile Shiraz Saleem
2019-02-15 17:11 ` [RFC v1 19/19] RDMA/irdma: Update MAINTAINERS file Shiraz Saleem
2019-02-15 17:20 ` [RFC v1 00/19] Add unified Intel Ethernet RDMA driver (irdma) Jason Gunthorpe
