From: Yi Liu <yi.l.liu@intel.com>
To: alex.williamson@redhat.com, jgg@nvidia.com, kevin.tian@intel.com
Cc: joro@8bytes.org, robin.murphy@arm.com, cohuck@redhat.com,
eric.auger@redhat.com, nicolinc@nvidia.com, kvm@vger.kernel.org,
mjrosato@linux.ibm.com, chao.p.peng@linux.intel.com,
yi.l.liu@intel.com, yi.y.sun@linux.intel.com, peterx@redhat.com,
jasowang@redhat.com, shameerali.kolothum.thodi@huawei.com,
lulu@redhat.com, suravee.suthikulpanit@amd.com,
intel-gvt-dev@lists.freedesktop.org,
intel-gfx@lists.freedesktop.org, linux-s390@vger.kernel.org,
xudong.hao@intel.com, yan.y.zhao@intel.com,
terrence.xu@intel.com, yanting.jiang@intel.com,
zhenzhong.duan@intel.com, clegoate@redhat.com
Subject: [PATCH v13 08/22] vfio: Add cdev_device_open_cnt to vfio_group
Date: Fri, 16 Jun 2023 02:39:32 -0700 [thread overview]
Message-ID: <20230616093946.68711-9-yi.l.liu@intel.com> (raw)
In-Reply-To: <20230616093946.68711-1-yi.l.liu@intel.com>
Add a cdev_device_open_cnt field to vfio_group to count the devices that
are opened via the cdev path. The cdev path increments and decrements this
count, and the group path checks it to achieve mutual exclusion with the
cdev path. With this, only one path (group path or cdev path) can claim
DMA ownership, which prevents devices within the same group from being
opened via different paths.
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Terrence Xu <terrence.xu@intel.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Tested-by: Matthew Rosato <mjrosato@linux.ibm.com>
Tested-by: Yanting Jiang <yanting.jiang@intel.com>
Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
---
drivers/vfio/group.c | 33 +++++++++++++++++++++++++++++++++
drivers/vfio/vfio.h | 3 +++
2 files changed, 36 insertions(+)
diff --git a/drivers/vfio/group.c b/drivers/vfio/group.c
index 088dd34c8931..2751d61689c4 100644
--- a/drivers/vfio/group.c
+++ b/drivers/vfio/group.c
@@ -383,6 +383,33 @@ static long vfio_group_fops_unl_ioctl(struct file *filep,
}
}
+int vfio_device_block_group(struct vfio_device *device)
+{
+ struct vfio_group *group = device->group;
+ int ret = 0;
+
+ mutex_lock(&group->group_lock);
+ if (group->opened_file) {
+ ret = -EBUSY;
+ goto out_unlock;
+ }
+
+ group->cdev_device_open_cnt++;
+
+out_unlock:
+ mutex_unlock(&group->group_lock);
+ return ret;
+}
+
+void vfio_device_unblock_group(struct vfio_device *device)
+{
+ struct vfio_group *group = device->group;
+
+ mutex_lock(&group->group_lock);
+ group->cdev_device_open_cnt--;
+ mutex_unlock(&group->group_lock);
+}
+
static int vfio_group_fops_open(struct inode *inode, struct file *filep)
{
struct vfio_group *group =
@@ -405,6 +432,11 @@ static int vfio_group_fops_open(struct inode *inode, struct file *filep)
goto out_unlock;
}
+ if (group->cdev_device_open_cnt) {
+ ret = -EBUSY;
+ goto out_unlock;
+ }
+
/*
* Do we need multiple instances of the group open? Seems not.
*/
@@ -479,6 +511,7 @@ static void vfio_group_release(struct device *dev)
mutex_destroy(&group->device_lock);
mutex_destroy(&group->group_lock);
WARN_ON(group->iommu_group);
+ WARN_ON(group->cdev_device_open_cnt);
ida_free(&vfio.group_ida, MINOR(group->dev.devt));
kfree(group);
}
diff --git a/drivers/vfio/vfio.h b/drivers/vfio/vfio.h
index 4478a1e77a5e..ae7dd2ca14b9 100644
--- a/drivers/vfio/vfio.h
+++ b/drivers/vfio/vfio.h
@@ -84,8 +84,11 @@ struct vfio_group {
struct blocking_notifier_head notifier;
struct iommufd_ctx *iommufd;
spinlock_t kvm_ref_lock;
+ unsigned int cdev_device_open_cnt;
};
+int vfio_device_block_group(struct vfio_device *device);
+void vfio_device_unblock_group(struct vfio_device *device);
int vfio_device_set_group(struct vfio_device *device,
enum vfio_group_type type);
void vfio_device_remove_group(struct vfio_device *device);
--
2.34.1