From: Jason Gunthorpe <jgg@mellanox.com>
To: Jason Wang <jasowang@redhat.com>
Cc: mst@redhat.com, linux-kernel@vger.kernel.org,
kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
netdev@vger.kernel.org, maxime.coquelin@redhat.com,
cunming.liang@intel.com, zhihong.wang@intel.com,
rob.miller@broadcom.com, xiao.w.wang@intel.com,
lingshan.zhu@intel.com, eperezma@redhat.com, lulu@redhat.com,
parav@mellanox.com, kevin.tian@intel.com, stefanha@redhat.com,
rdunlap@infradead.org, hch@infradead.org, aadam@redhat.com,
jiri@mellanox.com, shahafs@mellanox.com, hanand@xilinx.com,
mhabets@solarflare.com, gdawar@xilinx.com, saugatm@xilinx.com,
vmireyno@marvell.com, Bie Tiwei <tiwei.bie@intel.com>
Subject: Re: [PATCH V6 8/8] virtio: Intel IFC VF driver for VDPA
Date: Thu, 19 Mar 2020 10:02:39 -0300 [thread overview]
Message-ID: <20200319130239.GW13183@mellanox.com> (raw)
In-Reply-To: <30359bae-d66a-0311-0028-d7d33b8295f2@redhat.com>
On Thu, Mar 19, 2020 at 04:14:37PM +0800, Jason Wang wrote:
>
> On 2020/3/18 8:22 PM, Jason Gunthorpe wrote:
> > On Wed, Mar 18, 2020 at 04:03:27PM +0800, Jason Wang wrote:
> > > From: Zhu Lingshan <lingshan.zhu@intel.com>
> > > +
> > > +static int ifcvf_vdpa_attach(struct ifcvf_adapter *adapter)
> > > +{
> > > + int ret;
> > > +
> > > + adapter->vdpa_dev = vdpa_alloc_device(adapter->dev, adapter->dev,
> > > + &ifc_vdpa_ops);
> > > + if (IS_ERR(adapter->vdpa_dev)) {
> > > + IFCVF_ERR(adapter->dev, "Failed to init ifcvf on vdpa bus");
> > > + put_device(&adapter->vdpa_dev->dev);
> > > + return -ENODEV;
> > > + }
> > The point of having an alloc call is so that the driver's
> > ifcvf_adapter memory could be placed in the same struct - e.g. use
> > container_of to flip between them, and have a kref for both memories.
> >
> > It seems really weird to have an alloc followed immediately by
> > register.
>
>
> I admit the ifcvf_adapter is not correctly ref-counted. What you suggest
> should work, but it looks to me that the following is cleaner, since the
> members of ifcvf_adapter are all related to the PCI device, not vDPA itself.
I've done it both ways (e.g. tpm is as you describe, ib uses alloc).
I tend to prefer the alloc method today: allowing the driver memory to
have a proper refcount makes the driver structure usable with RCU and
allows simple solutions to some tricky cases. It is also a bit hard to
switch to this later.
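As a concrete sketch of what I mean (hedged: the member names and the
idea of sizing the allocation to the containing driver structure are
assumptions from this discussion, not the posted patch):

struct ifcvf_adapter {
	struct vdpa_device vdpa;	/* must be the first member */
	struct pci_dev *pdev;
	/* ... remaining PCI-related driver state ... */
};

static inline struct ifcvf_adapter *vdpa_to_adapter(struct vdpa_device *vdpa)
{
	/* One allocation, one refcount: the vdpa_device's embedded
	 * struct device kref now covers the whole ifcvf_adapter, which
	 * is what makes the driver memory safe to use with RCU. */
	return container_of(vdpa, struct ifcvf_adapter, vdpa);
}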
> - keep the current layout of ifcvf_adapter
> - merge vdpa_alloc_device() and vdpa_register_device()
> - use devres to bind the ifcvf_adapter refcount/lifecycle to the
> underlying PCI device
This is almost what tpm does. Keep in mind that the devm lifetime ends
only slightly past the driver remove call, so remove must still revoke
all external references to the memory.
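Roughly like this (a sketch assuming the devres layout from your list;
the probe/remove bodies are illustrative only):

static int ifcvf_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct ifcvf_adapter *adapter;

	/* Freed by devres automatically, shortly after ifcvf_remove()
	 * returns. */
	adapter = devm_kzalloc(&pdev->dev, sizeof(*adapter), GFP_KERNEL);
	if (!adapter)
		return -ENOMEM;
	pci_set_drvdata(pdev, adapter);
	/* ... hardware setup, vDPA device alloc/register ... */
	return 0;
}

static void ifcvf_remove(struct pci_dev *pdev)
{
	struct ifcvf_adapter *adapter = pci_get_drvdata(pdev);

	/* Every external reference must be revoked here, before devres
	 * frees the adapter memory behind us. */
	vdpa_unregister_device(adapter->vdpa_dev);
}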
Merging alloc and register rarely works out: register must be the very
last thing done, and in anything but the most trivial of subsystems and
drivers you usually need the subsystem pointer to do pre-registration
setup.
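That is, the usual shape is (sketch; ifcvf_init_hw() is a made-up
placeholder for the pre-registration setup):

	vdpa = vdpa_alloc_device(...);
	if (IS_ERR(vdpa))
		return PTR_ERR(vdpa);

	/* Pre-registration setup that already needs the subsystem
	 * pointer: IRQs, BAR mappings, feature negotiation, etc. */
	err = ifcvf_init_hw(vdpa);
	if (err)
		goto err_put;

	/* Register as the very last step; after this the ops can be
	 * invoked at any time. */
	err = vdpa_register_device(vdpa);
	if (err)
		goto err_put;
	return 0;

err_put:
	put_device(&vdpa->dev);
	return err;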
> If we go for the container_of method, we probably need
>
> - accept the size of the parent structure in vdpa_alloc_device() and
> mandate that vdpa_device be the first member of ifcvf_adapter
> - provide a way to free the resources of the parent structure when the
> vDPA device is destroyed
Yep. netdev and rdma work this way with a free memory callback in the
existing ops structures.
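For vDPA that could look something like this (a sketch only: the 'free'
op is hypothetical here, with net_device's priv_destructor as the
netdev precedent):

struct vdpa_config_ops {
	/* ... existing ops ... */

	/* Hypothetical: invoked from the vdpa_device release function,
	 * i.e. only once the last reference has been dropped. */
	void (*free)(struct vdpa_device *vdev);
};

static void ifcvf_vdpa_free(struct vdpa_device *vdev)
{
	struct ifcvf_adapter *adapter =
		container_of(vdev, struct ifcvf_adapter, vdpa);

	/* Tear down driver-private resources; the containing memory is
	 * freed by the vdpa core right after this returns. */
	ifcvf_free_irq(adapter);	/* hypothetical helper */
}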
Jason