* [PATCH 1/8] vfio-pci: Fix vfio_pci_ioeventfd() to return int
2022-08-17 16:07 [PATCH 0/8] Break up ioctl dispatch functions to one function per ioctl Jason Gunthorpe
@ 2022-08-17 16:07 ` Jason Gunthorpe
2022-08-29 0:30 ` Tian, Kevin
2022-08-17 16:07 ` [PATCH 2/8] vfio-pci: Break up vfio_pci_core_ioctl() into one function per ioctl Jason Gunthorpe
` (6 subsequent siblings)
7 siblings, 1 reply; 26+ messages in thread
From: Jason Gunthorpe @ 2022-08-17 16:07 UTC (permalink / raw)
To: Alex Williamson, Cornelia Huck, kvm, llvm, Nathan Chancellor,
Nick Desaulniers, Tom Rix
This only returns 0 or -ERRNO, it should return int like all the other
ioctl dispatch functions.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/vfio/pci/vfio_pci_priv.h | 4 ++--
drivers/vfio/pci/vfio_pci_rdwr.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/vfio/pci/vfio_pci_priv.h b/drivers/vfio/pci/vfio_pci_priv.h
index 4830fb01a1caa2..58b8d34c162cd6 100644
--- a/drivers/vfio/pci/vfio_pci_priv.h
+++ b/drivers/vfio/pci/vfio_pci_priv.h
@@ -48,8 +48,8 @@ static inline ssize_t vfio_pci_vga_rw(struct vfio_pci_core_device *vdev,
}
#endif
-long vfio_pci_ioeventfd(struct vfio_pci_core_device *vdev, loff_t offset,
- uint64_t data, int count, int fd);
+int vfio_pci_ioeventfd(struct vfio_pci_core_device *vdev, loff_t offset,
+ uint64_t data, int count, int fd);
int vfio_pci_init_perm_bits(void);
void vfio_pci_uninit_perm_bits(void);
diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c b/drivers/vfio/pci/vfio_pci_rdwr.c
index d5e9883c1eee10..e352a033b4aef7 100644
--- a/drivers/vfio/pci/vfio_pci_rdwr.c
+++ b/drivers/vfio/pci/vfio_pci_rdwr.c
@@ -412,8 +412,8 @@ static void vfio_pci_ioeventfd_thread(void *opaque, void *unused)
vfio_pci_ioeventfd_do_write(ioeventfd, ioeventfd->test_mem);
}
-long vfio_pci_ioeventfd(struct vfio_pci_core_device *vdev, loff_t offset,
- uint64_t data, int count, int fd)
+int vfio_pci_ioeventfd(struct vfio_pci_core_device *vdev, loff_t offset,
+ uint64_t data, int count, int fd)
{
struct pci_dev *pdev = vdev->pdev;
loff_t pos = offset & VFIO_PCI_OFFSET_MASK;
--
2.37.2
* RE: [PATCH 1/8] vfio-pci: Fix vfio_pci_ioeventfd() to return int
2022-08-17 16:07 ` [PATCH 1/8] vfio-pci: Fix vfio_pci_ioeventfd() to return int Jason Gunthorpe
@ 2022-08-29 0:30 ` Tian, Kevin
0 siblings, 0 replies; 26+ messages in thread
From: Tian, Kevin @ 2022-08-29 0:30 UTC (permalink / raw)
To: Jason Gunthorpe, Alex Williamson, Cornelia Huck,
kvm@vger.kernel.org, llvm@lists.linux.dev, Nathan Chancellor,
Nick Desaulniers, Tom Rix
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Thursday, August 18, 2022 12:07 AM
>
> This only returns 0 or -ERRNO, it should return int like all the other
> ioctl dispatch functions.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
* [PATCH 2/8] vfio-pci: Break up vfio_pci_core_ioctl() into one function per ioctl
2022-08-17 16:07 [PATCH 0/8] Break up ioctl dispatch functions to one function per ioctl Jason Gunthorpe
2022-08-17 16:07 ` [PATCH 1/8] vfio-pci: Fix vfio_pci_ioeventfd() to return int Jason Gunthorpe
@ 2022-08-17 16:07 ` Jason Gunthorpe
2022-08-29 0:31 ` Tian, Kevin
2022-08-29 0:37 ` Tian, Kevin
2022-08-17 16:07 ` [PATCH 3/8] vfio-pci: Re-indent what was vfio_pci_core_ioctl() Jason Gunthorpe
` (5 subsequent siblings)
7 siblings, 2 replies; 26+ messages in thread
From: Jason Gunthorpe @ 2022-08-17 16:07 UTC (permalink / raw)
To: Alex Williamson, Cornelia Huck, kvm, llvm, Nathan Chancellor,
Nick Desaulniers, Tom Rix
500 lines is a bit long for a single function, move the bodies of each
ioctl into separate functions and leave behind a switch statement to
dispatch them. This patch just adds the function declarations and does not
fix the indenting. The next patch will restore the indenting.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/vfio/pci/vfio_pci_core.c | 97 ++++++++++++++++++++++----------
1 file changed, 68 insertions(+), 29 deletions(-)
diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 04180a0836cc90..6094f237552b35 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -689,21 +689,15 @@ int vfio_pci_register_dev_region(struct vfio_pci_core_device *vdev,
}
EXPORT_SYMBOL_GPL(vfio_pci_register_dev_region);
-long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
- unsigned long arg)
+static int vfio_pci_ioctl_get_info(struct vfio_pci_core_device *vdev,
+ void __user *arg)
{
- struct vfio_pci_core_device *vdev =
- container_of(core_vdev, struct vfio_pci_core_device, vdev);
- unsigned long minsz;
-
- if (cmd == VFIO_DEVICE_GET_INFO) {
+ unsigned long minsz = offsetofend(struct vfio_device_info, num_irqs);
struct vfio_device_info info;
struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
unsigned long capsz;
int ret;
- minsz = offsetofend(struct vfio_device_info, num_irqs);
-
/* For backward compatibility, cannot require this */
capsz = offsetofend(struct vfio_iommu_type1_info, cap_offset);
@@ -752,15 +746,17 @@ long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
return copy_to_user((void __user *)arg, &info, minsz) ?
-EFAULT : 0;
+}
- } else if (cmd == VFIO_DEVICE_GET_REGION_INFO) {
+static int vfio_pci_ioctl_get_region_info(struct vfio_pci_core_device *vdev,
+ void __user *arg)
+{
+ unsigned long minsz = offsetofend(struct vfio_region_info, offset);
struct pci_dev *pdev = vdev->pdev;
struct vfio_region_info info;
struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
int i, ret;
- minsz = offsetofend(struct vfio_region_info, offset);
-
if (copy_from_user(&info, (void __user *)arg, minsz))
return -EFAULT;
@@ -897,12 +893,14 @@ long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
return copy_to_user((void __user *)arg, &info, minsz) ?
-EFAULT : 0;
+}
- } else if (cmd == VFIO_DEVICE_GET_IRQ_INFO) {
+static int vfio_pci_ioctl_get_irq_info(struct vfio_pci_core_device *vdev,
+ void __user *arg)
+{
+ unsigned long minsz = offsetofend(struct vfio_irq_info, count);
struct vfio_irq_info info;
- minsz = offsetofend(struct vfio_irq_info, count);
-
if (copy_from_user(&info, (void __user *)arg, minsz))
return -EFAULT;
@@ -933,15 +931,17 @@ long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
return copy_to_user((void __user *)arg, &info, minsz) ?
-EFAULT : 0;
+}
- } else if (cmd == VFIO_DEVICE_SET_IRQS) {
+static int vfio_pci_ioctl_set_irqs(struct vfio_pci_core_device *vdev,
+ void __user *arg)
+{
+ unsigned long minsz = offsetofend(struct vfio_irq_set, count);
struct vfio_irq_set hdr;
u8 *data = NULL;
int max, ret = 0;
size_t data_size = 0;
- minsz = offsetofend(struct vfio_irq_set, count);
-
if (copy_from_user(&hdr, (void __user *)arg, minsz))
return -EFAULT;
@@ -968,8 +968,11 @@ long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
kfree(data);
return ret;
+}
- } else if (cmd == VFIO_DEVICE_RESET) {
+static int vfio_pci_ioctl_reset(struct vfio_pci_core_device *vdev,
+ void __user *arg)
+{
int ret;
if (!vdev->reset_works)
@@ -993,16 +996,20 @@ long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
up_write(&vdev->memory_lock);
return ret;
+}
- } else if (cmd == VFIO_DEVICE_GET_PCI_HOT_RESET_INFO) {
+static int
+vfio_pci_ioctl_get_pci_hot_reset_info(struct vfio_pci_core_device *vdev,
+ void __user *arg)
+{
+ unsigned long minsz =
+ offsetofend(struct vfio_pci_hot_reset_info, count);
struct vfio_pci_hot_reset_info hdr;
struct vfio_pci_fill_info fill = { 0 };
struct vfio_pci_dependent_device *devices = NULL;
bool slot = false;
int ret = 0;
- minsz = offsetofend(struct vfio_pci_hot_reset_info, count);
-
if (copy_from_user(&hdr, (void __user *)arg, minsz))
return -EFAULT;
@@ -1066,8 +1073,12 @@ long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
kfree(devices);
return ret;
+}
- } else if (cmd == VFIO_DEVICE_PCI_HOT_RESET) {
+static int vfio_pci_ioctl_pci_hot_reset(struct vfio_pci_core_device *vdev,
+ void __user *arg)
+{
+ unsigned long minsz = offsetofend(struct vfio_pci_hot_reset, count);
struct vfio_pci_hot_reset hdr;
int32_t *group_fds;
struct file **files;
@@ -1075,8 +1086,6 @@ long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
bool slot = false;
int file_idx, count = 0, ret = 0;
- minsz = offsetofend(struct vfio_pci_hot_reset, count);
-
if (copy_from_user(&hdr, (void __user *)arg, minsz))
return -EFAULT;
@@ -1160,12 +1169,15 @@ long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
kfree(files);
return ret;
- } else if (cmd == VFIO_DEVICE_IOEVENTFD) {
+}
+
+static int vfio_pci_ioctl_ioeventfd(struct vfio_pci_core_device *vdev,
+ void __user *arg)
+{
+ unsigned long minsz = offsetofend(struct vfio_device_ioeventfd, fd);
struct vfio_device_ioeventfd ioeventfd;
int count;
- minsz = offsetofend(struct vfio_device_ioeventfd, fd);
-
if (copy_from_user(&ioeventfd, (void __user *)arg, minsz))
return -EFAULT;
@@ -1182,8 +1194,35 @@ long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
return vfio_pci_ioeventfd(vdev, ioeventfd.offset,
ioeventfd.data, count, ioeventfd.fd);
+}
+
+long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
+ unsigned long arg)
+{
+ struct vfio_pci_core_device *vdev =
+ container_of(core_vdev, struct vfio_pci_core_device, vdev);
+ void __user *uarg = (void __user *)arg;
+
+ switch (cmd) {
+ case VFIO_DEVICE_GET_INFO:
+ return vfio_pci_ioctl_get_info(vdev, uarg);
+ case VFIO_DEVICE_GET_IRQ_INFO:
+ return vfio_pci_ioctl_get_irq_info(vdev, uarg);
+ case VFIO_DEVICE_GET_PCI_HOT_RESET_INFO:
+ return vfio_pci_ioctl_get_pci_hot_reset_info(vdev, uarg);
+ case VFIO_DEVICE_GET_REGION_INFO:
+ return vfio_pci_ioctl_get_region_info(vdev, uarg);
+ case VFIO_DEVICE_IOEVENTFD:
+ return vfio_pci_ioctl_ioeventfd(vdev, uarg);
+ case VFIO_DEVICE_PCI_HOT_RESET:
+ return vfio_pci_ioctl_pci_hot_reset(vdev, uarg);
+ case VFIO_DEVICE_RESET:
+ return vfio_pci_ioctl_reset(vdev, uarg);
+ case VFIO_DEVICE_SET_IRQS:
+ return vfio_pci_ioctl_set_irqs(vdev, uarg);
+ default:
+ return -ENOTTY;
}
- return -ENOTTY;
}
EXPORT_SYMBOL_GPL(vfio_pci_core_ioctl);
--
2.37.2
* RE: [PATCH 2/8] vfio-pci: Break up vfio_pci_core_ioctl() into one function per ioctl
2022-08-17 16:07 ` [PATCH 2/8] vfio-pci: Break up vfio_pci_core_ioctl() into one function per ioctl Jason Gunthorpe
@ 2022-08-29 0:31 ` Tian, Kevin
2022-08-29 0:37 ` Tian, Kevin
1 sibling, 0 replies; 26+ messages in thread
From: Tian, Kevin @ 2022-08-29 0:31 UTC (permalink / raw)
To: Jason Gunthorpe, Alex Williamson, Cornelia Huck,
kvm@vger.kernel.org, llvm@lists.linux.dev, Nathan Chancellor,
Nick Desaulniers, Tom Rix
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Thursday, August 18, 2022 12:07 AM
>
> 500 lines is a bit long for a single function, move the bodies of each
> ioctl into separate functions and leave behind a switch statement to
> dispatch them. This patch just adds the function declarations and does not
> fix the indenting. The next patch will restore the indenting.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
* RE: [PATCH 2/8] vfio-pci: Break up vfio_pci_core_ioctl() into one function per ioctl
2022-08-17 16:07 ` [PATCH 2/8] vfio-pci: Break up vfio_pci_core_ioctl() into one function per ioctl Jason Gunthorpe
2022-08-29 0:31 ` Tian, Kevin
@ 2022-08-29 0:37 ` Tian, Kevin
1 sibling, 0 replies; 26+ messages in thread
From: Tian, Kevin @ 2022-08-29 0:37 UTC (permalink / raw)
To: Jason Gunthorpe, Alex Williamson, Cornelia Huck,
kvm@vger.kernel.org, llvm@lists.linux.dev, Nathan Chancellor,
Nick Desaulniers, Tom Rix
> From: Tian, Kevin
> Sent: Monday, August 29, 2022 8:31 AM
>
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Thursday, August 18, 2022 12:07 AM
> >
> > 500 lines is a bit long for a single function, move the bodies of each
> > ioctl into separate functions and leave behind a switch statement to
> > dispatch them. This patch just adds the function declarations and does not
> > fix the indenting. The next patch will restore the indenting.
> >
> > Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
>
> Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Ah, quick typo. Should be:
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
* [PATCH 3/8] vfio-pci: Re-indent what was vfio_pci_core_ioctl()
2022-08-17 16:07 [PATCH 0/8] Break up ioctl dispatch functions to one function per ioctl Jason Gunthorpe
2022-08-17 16:07 ` [PATCH 1/8] vfio-pci: Fix vfio_pci_ioeventfd() to return int Jason Gunthorpe
2022-08-17 16:07 ` [PATCH 2/8] vfio-pci: Break up vfio_pci_core_ioctl() into one function per ioctl Jason Gunthorpe
@ 2022-08-17 16:07 ` Jason Gunthorpe
2022-08-29 0:32 ` Tian, Kevin
2022-08-29 0:37 ` Tian, Kevin
2022-08-17 16:07 ` [PATCH 4/8] vfio-pci: Replace 'void __user *' with proper types in the ioctl functions Jason Gunthorpe
` (4 subsequent siblings)
7 siblings, 2 replies; 26+ messages in thread
From: Jason Gunthorpe @ 2022-08-17 16:07 UTC (permalink / raw)
To: Alex Williamson, Cornelia Huck, kvm, llvm, Nathan Chancellor,
Nick Desaulniers, Tom Rix
Done mechanically with:
$ git clang-format-14 -i --lines 675:1210 drivers/vfio/pci/vfio_pci_core.c
And manually reflow the multi-line comments clang-format doesn't fix.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/vfio/pci/vfio_pci_core.c | 711 +++++++++++++++----------------
1 file changed, 349 insertions(+), 362 deletions(-)
diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index 6094f237552b35..a0a855a52bbf1d 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -693,309 +693,300 @@ static int vfio_pci_ioctl_get_info(struct vfio_pci_core_device *vdev,
void __user *arg)
{
unsigned long minsz = offsetofend(struct vfio_device_info, num_irqs);
- struct vfio_device_info info;
- struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
- unsigned long capsz;
- int ret;
+ struct vfio_device_info info;
+ struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
+ unsigned long capsz;
+ int ret;
- /* For backward compatibility, cannot require this */
- capsz = offsetofend(struct vfio_iommu_type1_info, cap_offset);
+ /* For backward compatibility, cannot require this */
+ capsz = offsetofend(struct vfio_iommu_type1_info, cap_offset);
- if (copy_from_user(&info, (void __user *)arg, minsz))
- return -EFAULT;
+ if (copy_from_user(&info, (void __user *)arg, minsz))
+ return -EFAULT;
- if (info.argsz < minsz)
- return -EINVAL;
+ if (info.argsz < minsz)
+ return -EINVAL;
- if (info.argsz >= capsz) {
- minsz = capsz;
- info.cap_offset = 0;
- }
+ if (info.argsz >= capsz) {
+ minsz = capsz;
+ info.cap_offset = 0;
+ }
- info.flags = VFIO_DEVICE_FLAGS_PCI;
+ info.flags = VFIO_DEVICE_FLAGS_PCI;
- if (vdev->reset_works)
- info.flags |= VFIO_DEVICE_FLAGS_RESET;
+ if (vdev->reset_works)
+ info.flags |= VFIO_DEVICE_FLAGS_RESET;
- info.num_regions = VFIO_PCI_NUM_REGIONS + vdev->num_regions;
- info.num_irqs = VFIO_PCI_NUM_IRQS;
+ info.num_regions = VFIO_PCI_NUM_REGIONS + vdev->num_regions;
+ info.num_irqs = VFIO_PCI_NUM_IRQS;
- ret = vfio_pci_info_zdev_add_caps(vdev, &caps);
- if (ret && ret != -ENODEV) {
- pci_warn(vdev->pdev, "Failed to setup zPCI info capabilities\n");
- return ret;
- }
+ ret = vfio_pci_info_zdev_add_caps(vdev, &caps);
+ if (ret && ret != -ENODEV) {
+ pci_warn(vdev->pdev,
+ "Failed to setup zPCI info capabilities\n");
+ return ret;
+ }
- if (caps.size) {
- info.flags |= VFIO_DEVICE_FLAGS_CAPS;
- if (info.argsz < sizeof(info) + caps.size) {
- info.argsz = sizeof(info) + caps.size;
- } else {
- vfio_info_cap_shift(&caps, sizeof(info));
- if (copy_to_user((void __user *)arg +
- sizeof(info), caps.buf,
- caps.size)) {
- kfree(caps.buf);
- return -EFAULT;
- }
- info.cap_offset = sizeof(info);
+ if (caps.size) {
+ info.flags |= VFIO_DEVICE_FLAGS_CAPS;
+ if (info.argsz < sizeof(info) + caps.size) {
+ info.argsz = sizeof(info) + caps.size;
+ } else {
+ vfio_info_cap_shift(&caps, sizeof(info));
+ if (copy_to_user((void __user *)arg + sizeof(info),
+ caps.buf, caps.size)) {
+ kfree(caps.buf);
+ return -EFAULT;
}
-
- kfree(caps.buf);
+ info.cap_offset = sizeof(info);
}
- return copy_to_user((void __user *)arg, &info, minsz) ?
- -EFAULT : 0;
+ kfree(caps.buf);
+ }
+
+ return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
}
static int vfio_pci_ioctl_get_region_info(struct vfio_pci_core_device *vdev,
void __user *arg)
{
unsigned long minsz = offsetofend(struct vfio_region_info, offset);
- struct pci_dev *pdev = vdev->pdev;
- struct vfio_region_info info;
- struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
- int i, ret;
+ struct pci_dev *pdev = vdev->pdev;
+ struct vfio_region_info info;
+ struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
+ int i, ret;
- if (copy_from_user(&info, (void __user *)arg, minsz))
- return -EFAULT;
+ if (copy_from_user(&info, (void __user *)arg, minsz))
+ return -EFAULT;
- if (info.argsz < minsz)
- return -EINVAL;
+ if (info.argsz < minsz)
+ return -EINVAL;
- switch (info.index) {
- case VFIO_PCI_CONFIG_REGION_INDEX:
- info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
- info.size = pdev->cfg_size;
- info.flags = VFIO_REGION_INFO_FLAG_READ |
- VFIO_REGION_INFO_FLAG_WRITE;
+ switch (info.index) {
+ case VFIO_PCI_CONFIG_REGION_INDEX:
+ info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
+ info.size = pdev->cfg_size;
+ info.flags = VFIO_REGION_INFO_FLAG_READ |
+ VFIO_REGION_INFO_FLAG_WRITE;
+ break;
+ case VFIO_PCI_BAR0_REGION_INDEX ... VFIO_PCI_BAR5_REGION_INDEX:
+ info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
+ info.size = pci_resource_len(pdev, info.index);
+ if (!info.size) {
+ info.flags = 0;
break;
- case VFIO_PCI_BAR0_REGION_INDEX ... VFIO_PCI_BAR5_REGION_INDEX:
- info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
- info.size = pci_resource_len(pdev, info.index);
- if (!info.size) {
- info.flags = 0;
- break;
- }
+ }
- info.flags = VFIO_REGION_INFO_FLAG_READ |
- VFIO_REGION_INFO_FLAG_WRITE;
- if (vdev->bar_mmap_supported[info.index]) {
- info.flags |= VFIO_REGION_INFO_FLAG_MMAP;
- if (info.index == vdev->msix_bar) {
- ret = msix_mmappable_cap(vdev, &caps);
- if (ret)
- return ret;
- }
+ info.flags = VFIO_REGION_INFO_FLAG_READ |
+ VFIO_REGION_INFO_FLAG_WRITE;
+ if (vdev->bar_mmap_supported[info.index]) {
+ info.flags |= VFIO_REGION_INFO_FLAG_MMAP;
+ if (info.index == vdev->msix_bar) {
+ ret = msix_mmappable_cap(vdev, &caps);
+ if (ret)
+ return ret;
}
+ }
- break;
- case VFIO_PCI_ROM_REGION_INDEX:
- {
- void __iomem *io;
- size_t size;
- u16 cmd;
+ break;
+ case VFIO_PCI_ROM_REGION_INDEX: {
+ void __iomem *io;
+ size_t size;
+ u16 cmd;
+
+ info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
+ info.flags = 0;
+
+ /* Report the BAR size, not the ROM size */
+ info.size = pci_resource_len(pdev, info.index);
+ if (!info.size) {
+ /* Shadow ROMs appear as PCI option ROMs */
+ if (pdev->resource[PCI_ROM_RESOURCE].flags &
+ IORESOURCE_ROM_SHADOW)
+ info.size = 0x20000;
+ else
+ break;
+ }
- info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
- info.flags = 0;
+ /*
+ * Is it really there? Enable memory decode for implicit access
+ * in pci_map_rom().
+ */
+ cmd = vfio_pci_memory_lock_and_enable(vdev);
+ io = pci_map_rom(pdev, &size);
+ if (io) {
+ info.flags = VFIO_REGION_INFO_FLAG_READ;
+ pci_unmap_rom(pdev, io);
+ } else {
+ info.size = 0;
+ }
+ vfio_pci_memory_unlock_and_restore(vdev, cmd);
- /* Report the BAR size, not the ROM size */
- info.size = pci_resource_len(pdev, info.index);
- if (!info.size) {
- /* Shadow ROMs appear as PCI option ROMs */
- if (pdev->resource[PCI_ROM_RESOURCE].flags &
- IORESOURCE_ROM_SHADOW)
- info.size = 0x20000;
- else
- break;
- }
+ break;
+ }
+ case VFIO_PCI_VGA_REGION_INDEX:
+ if (!vdev->has_vga)
+ return -EINVAL;
- /*
- * Is it really there? Enable memory decode for
- * implicit access in pci_map_rom().
- */
- cmd = vfio_pci_memory_lock_and_enable(vdev);
- io = pci_map_rom(pdev, &size);
- if (io) {
- info.flags = VFIO_REGION_INFO_FLAG_READ;
- pci_unmap_rom(pdev, io);
- } else {
- info.size = 0;
- }
- vfio_pci_memory_unlock_and_restore(vdev, cmd);
+ info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
+ info.size = 0xc0000;
+ info.flags = VFIO_REGION_INFO_FLAG_READ |
+ VFIO_REGION_INFO_FLAG_WRITE;
- break;
- }
- case VFIO_PCI_VGA_REGION_INDEX:
- if (!vdev->has_vga)
- return -EINVAL;
+ break;
+ default: {
+ struct vfio_region_info_cap_type cap_type = {
+ .header.id = VFIO_REGION_INFO_CAP_TYPE,
+ .header.version = 1
+ };
- info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
- info.size = 0xc0000;
- info.flags = VFIO_REGION_INFO_FLAG_READ |
- VFIO_REGION_INFO_FLAG_WRITE;
-
- break;
- default:
- {
- struct vfio_region_info_cap_type cap_type = {
- .header.id = VFIO_REGION_INFO_CAP_TYPE,
- .header.version = 1 };
+ if (info.index >= VFIO_PCI_NUM_REGIONS + vdev->num_regions)
+ return -EINVAL;
+ info.index = array_index_nospec(
+ info.index, VFIO_PCI_NUM_REGIONS + vdev->num_regions);
- if (info.index >=
- VFIO_PCI_NUM_REGIONS + vdev->num_regions)
- return -EINVAL;
- info.index = array_index_nospec(info.index,
- VFIO_PCI_NUM_REGIONS +
- vdev->num_regions);
+ i = info.index - VFIO_PCI_NUM_REGIONS;
- i = info.index - VFIO_PCI_NUM_REGIONS;
+ info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
+ info.size = vdev->region[i].size;
+ info.flags = vdev->region[i].flags;
- info.offset = VFIO_PCI_INDEX_TO_OFFSET(info.index);
- info.size = vdev->region[i].size;
- info.flags = vdev->region[i].flags;
+ cap_type.type = vdev->region[i].type;
+ cap_type.subtype = vdev->region[i].subtype;
- cap_type.type = vdev->region[i].type;
- cap_type.subtype = vdev->region[i].subtype;
+ ret = vfio_info_add_capability(&caps, &cap_type.header,
+ sizeof(cap_type));
+ if (ret)
+ return ret;
- ret = vfio_info_add_capability(&caps, &cap_type.header,
- sizeof(cap_type));
+ if (vdev->region[i].ops->add_capability) {
+ ret = vdev->region[i].ops->add_capability(
+ vdev, &vdev->region[i], &caps);
if (ret)
return ret;
-
- if (vdev->region[i].ops->add_capability) {
- ret = vdev->region[i].ops->add_capability(vdev,
- &vdev->region[i], &caps);
- if (ret)
- return ret;
- }
- }
}
+ }
+ }
- if (caps.size) {
- info.flags |= VFIO_REGION_INFO_FLAG_CAPS;
- if (info.argsz < sizeof(info) + caps.size) {
- info.argsz = sizeof(info) + caps.size;
- info.cap_offset = 0;
- } else {
- vfio_info_cap_shift(&caps, sizeof(info));
- if (copy_to_user((void __user *)arg +
- sizeof(info), caps.buf,
- caps.size)) {
- kfree(caps.buf);
- return -EFAULT;
- }
- info.cap_offset = sizeof(info);
+ if (caps.size) {
+ info.flags |= VFIO_REGION_INFO_FLAG_CAPS;
+ if (info.argsz < sizeof(info) + caps.size) {
+ info.argsz = sizeof(info) + caps.size;
+ info.cap_offset = 0;
+ } else {
+ vfio_info_cap_shift(&caps, sizeof(info));
+ if (copy_to_user((void __user *)arg + sizeof(info),
+ caps.buf, caps.size)) {
+ kfree(caps.buf);
+ return -EFAULT;
}
-
- kfree(caps.buf);
+ info.cap_offset = sizeof(info);
}
- return copy_to_user((void __user *)arg, &info, minsz) ?
- -EFAULT : 0;
+ kfree(caps.buf);
+ }
+
+ return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
}
static int vfio_pci_ioctl_get_irq_info(struct vfio_pci_core_device *vdev,
void __user *arg)
{
unsigned long minsz = offsetofend(struct vfio_irq_info, count);
- struct vfio_irq_info info;
+ struct vfio_irq_info info;
- if (copy_from_user(&info, (void __user *)arg, minsz))
- return -EFAULT;
+ if (copy_from_user(&info, (void __user *)arg, minsz))
+ return -EFAULT;
- if (info.argsz < minsz || info.index >= VFIO_PCI_NUM_IRQS)
- return -EINVAL;
+ if (info.argsz < minsz || info.index >= VFIO_PCI_NUM_IRQS)
+ return -EINVAL;
- switch (info.index) {
- case VFIO_PCI_INTX_IRQ_INDEX ... VFIO_PCI_MSIX_IRQ_INDEX:
- case VFIO_PCI_REQ_IRQ_INDEX:
+ switch (info.index) {
+ case VFIO_PCI_INTX_IRQ_INDEX ... VFIO_PCI_MSIX_IRQ_INDEX:
+ case VFIO_PCI_REQ_IRQ_INDEX:
+ break;
+ case VFIO_PCI_ERR_IRQ_INDEX:
+ if (pci_is_pcie(vdev->pdev))
break;
- case VFIO_PCI_ERR_IRQ_INDEX:
- if (pci_is_pcie(vdev->pdev))
- break;
- fallthrough;
- default:
- return -EINVAL;
- }
+ fallthrough;
+ default:
+ return -EINVAL;
+ }
- info.flags = VFIO_IRQ_INFO_EVENTFD;
+ info.flags = VFIO_IRQ_INFO_EVENTFD;
- info.count = vfio_pci_get_irq_count(vdev, info.index);
+ info.count = vfio_pci_get_irq_count(vdev, info.index);
- if (info.index == VFIO_PCI_INTX_IRQ_INDEX)
- info.flags |= (VFIO_IRQ_INFO_MASKABLE |
- VFIO_IRQ_INFO_AUTOMASKED);
- else
- info.flags |= VFIO_IRQ_INFO_NORESIZE;
+ if (info.index == VFIO_PCI_INTX_IRQ_INDEX)
+ info.flags |=
+ (VFIO_IRQ_INFO_MASKABLE | VFIO_IRQ_INFO_AUTOMASKED);
+ else
+ info.flags |= VFIO_IRQ_INFO_NORESIZE;
- return copy_to_user((void __user *)arg, &info, minsz) ?
- -EFAULT : 0;
+ return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
}
static int vfio_pci_ioctl_set_irqs(struct vfio_pci_core_device *vdev,
void __user *arg)
{
unsigned long minsz = offsetofend(struct vfio_irq_set, count);
- struct vfio_irq_set hdr;
- u8 *data = NULL;
- int max, ret = 0;
- size_t data_size = 0;
+ struct vfio_irq_set hdr;
+ u8 *data = NULL;
+ int max, ret = 0;
+ size_t data_size = 0;
- if (copy_from_user(&hdr, (void __user *)arg, minsz))
- return -EFAULT;
+ if (copy_from_user(&hdr, (void __user *)arg, minsz))
+ return -EFAULT;
- max = vfio_pci_get_irq_count(vdev, hdr.index);
+ max = vfio_pci_get_irq_count(vdev, hdr.index);
- ret = vfio_set_irqs_validate_and_prepare(&hdr, max,
- VFIO_PCI_NUM_IRQS, &data_size);
- if (ret)
- return ret;
+ ret = vfio_set_irqs_validate_and_prepare(&hdr, max, VFIO_PCI_NUM_IRQS,
+ &data_size);
+ if (ret)
+ return ret;
- if (data_size) {
- data = memdup_user((void __user *)(arg + minsz),
- data_size);
- if (IS_ERR(data))
- return PTR_ERR(data);
- }
+ if (data_size) {
+ data = memdup_user((void __user *)(arg + minsz), data_size);
+ if (IS_ERR(data))
+ return PTR_ERR(data);
+ }
- mutex_lock(&vdev->igate);
+ mutex_lock(&vdev->igate);
- ret = vfio_pci_set_irqs_ioctl(vdev, hdr.flags, hdr.index,
- hdr.start, hdr.count, data);
+ ret = vfio_pci_set_irqs_ioctl(vdev, hdr.flags, hdr.index, hdr.start,
+ hdr.count, data);
- mutex_unlock(&vdev->igate);
- kfree(data);
+ mutex_unlock(&vdev->igate);
+ kfree(data);
- return ret;
+ return ret;
}
static int vfio_pci_ioctl_reset(struct vfio_pci_core_device *vdev,
void __user *arg)
{
- int ret;
+ int ret;
- if (!vdev->reset_works)
- return -EINVAL;
+ if (!vdev->reset_works)
+ return -EINVAL;
- vfio_pci_zap_and_down_write_memory_lock(vdev);
+ vfio_pci_zap_and_down_write_memory_lock(vdev);
- /*
- * This function can be invoked while the power state is non-D0.
- * If pci_try_reset_function() has been called while the power
- * state is non-D0, then pci_try_reset_function() will
- * internally set the power state to D0 without vfio driver
- * involvement. For the devices which have NoSoftRst-, the
- * reset function can cause the PCI config space reset without
- * restoring the original state (saved locally in
- * 'vdev->pm_save').
- */
- vfio_pci_set_power_state(vdev, PCI_D0);
+ /*
+ * This function can be invoked while the power state is non-D0. If
+ * pci_try_reset_function() has been called while the power state is
+ * non-D0, then pci_try_reset_function() will internally set the power
+ * state to D0 without vfio driver involvement. For the devices which
+ * have NoSoftRst-, the reset function can cause the PCI config space
+ * reset without restoring the original state (saved locally in
+ * 'vdev->pm_save').
+ */
+ vfio_pci_set_power_state(vdev, PCI_D0);
- ret = pci_try_reset_function(vdev->pdev);
- up_write(&vdev->memory_lock);
+ ret = pci_try_reset_function(vdev->pdev);
+ up_write(&vdev->memory_lock);
- return ret;
+ return ret;
}
static int
@@ -1004,196 +995,192 @@ vfio_pci_ioctl_get_pci_hot_reset_info(struct vfio_pci_core_device *vdev,
{
unsigned long minsz =
offsetofend(struct vfio_pci_hot_reset_info, count);
- struct vfio_pci_hot_reset_info hdr;
- struct vfio_pci_fill_info fill = { 0 };
- struct vfio_pci_dependent_device *devices = NULL;
- bool slot = false;
- int ret = 0;
+ struct vfio_pci_hot_reset_info hdr;
+ struct vfio_pci_fill_info fill = { 0 };
+ struct vfio_pci_dependent_device *devices = NULL;
+ bool slot = false;
+ int ret = 0;
- if (copy_from_user(&hdr, (void __user *)arg, minsz))
- return -EFAULT;
+ if (copy_from_user(&hdr, (void __user *)arg, minsz))
+ return -EFAULT;
- if (hdr.argsz < minsz)
- return -EINVAL;
+ if (hdr.argsz < minsz)
+ return -EINVAL;
- hdr.flags = 0;
+ hdr.flags = 0;
- /* Can we do a slot or bus reset or neither? */
- if (!pci_probe_reset_slot(vdev->pdev->slot))
- slot = true;
- else if (pci_probe_reset_bus(vdev->pdev->bus))
- return -ENODEV;
+ /* Can we do a slot or bus reset or neither? */
+ if (!pci_probe_reset_slot(vdev->pdev->slot))
+ slot = true;
+ else if (pci_probe_reset_bus(vdev->pdev->bus))
+ return -ENODEV;
- /* How many devices are affected? */
- ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
- vfio_pci_count_devs,
- &fill.max, slot);
- if (ret)
- return ret;
+ /* How many devices are affected? */
+ ret = vfio_pci_for_each_slot_or_bus(vdev->pdev, vfio_pci_count_devs,
+ &fill.max, slot);
+ if (ret)
+ return ret;
- WARN_ON(!fill.max); /* Should always be at least one */
+ WARN_ON(!fill.max); /* Should always be at least one */
- /*
- * If there's enough space, fill it now, otherwise return
- * -ENOSPC and the number of devices affected.
- */
- if (hdr.argsz < sizeof(hdr) + (fill.max * sizeof(*devices))) {
- ret = -ENOSPC;
- hdr.count = fill.max;
- goto reset_info_exit;
- }
+ /*
+ * If there's enough space, fill it now, otherwise return -ENOSPC and
+ * the number of devices affected.
+ */
+ if (hdr.argsz < sizeof(hdr) + (fill.max * sizeof(*devices))) {
+ ret = -ENOSPC;
+ hdr.count = fill.max;
+ goto reset_info_exit;
+ }
- devices = kcalloc(fill.max, sizeof(*devices), GFP_KERNEL);
- if (!devices)
- return -ENOMEM;
+ devices = kcalloc(fill.max, sizeof(*devices), GFP_KERNEL);
+ if (!devices)
+ return -ENOMEM;
- fill.devices = devices;
+ fill.devices = devices;
- ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
- vfio_pci_fill_devs,
- &fill, slot);
+ ret = vfio_pci_for_each_slot_or_bus(vdev->pdev, vfio_pci_fill_devs,
+ &fill, slot);
- /*
- * If a device was removed between counting and filling,
- * we may come up short of fill.max. If a device was
- * added, we'll have a return of -EAGAIN above.
- */
- if (!ret)
- hdr.count = fill.cur;
+ /*
+ * If a device was removed between counting and filling, we may come up
+ * short of fill.max. If a device was added, we'll have a return of
+ * -EAGAIN above.
+ */
+ if (!ret)
+ hdr.count = fill.cur;
reset_info_exit:
- if (copy_to_user((void __user *)arg, &hdr, minsz))
- ret = -EFAULT;
+ if (copy_to_user((void __user *)arg, &hdr, minsz))
+ ret = -EFAULT;
- if (!ret) {
- if (copy_to_user((void __user *)(arg + minsz), devices,
- hdr.count * sizeof(*devices)))
- ret = -EFAULT;
- }
+ if (!ret) {
+ if (copy_to_user((void __user *)(arg + minsz), devices,
+ hdr.count * sizeof(*devices)))
+ ret = -EFAULT;
+ }
- kfree(devices);
- return ret;
+ kfree(devices);
+ return ret;
}
static int vfio_pci_ioctl_pci_hot_reset(struct vfio_pci_core_device *vdev,
void __user *arg)
{
unsigned long minsz = offsetofend(struct vfio_pci_hot_reset, count);
- struct vfio_pci_hot_reset hdr;
- int32_t *group_fds;
- struct file **files;
- struct vfio_pci_group_info info;
- bool slot = false;
- int file_idx, count = 0, ret = 0;
-
- if (copy_from_user(&hdr, (void __user *)arg, minsz))
- return -EFAULT;
+ struct vfio_pci_hot_reset hdr;
+ int32_t *group_fds;
+ struct file **files;
+ struct vfio_pci_group_info info;
+ bool slot = false;
+ int file_idx, count = 0, ret = 0;
- if (hdr.argsz < minsz || hdr.flags)
- return -EINVAL;
+ if (copy_from_user(&hdr, (void __user *)arg, minsz))
+ return -EFAULT;
- /* Can we do a slot or bus reset or neither? */
- if (!pci_probe_reset_slot(vdev->pdev->slot))
- slot = true;
- else if (pci_probe_reset_bus(vdev->pdev->bus))
- return -ENODEV;
+ if (hdr.argsz < minsz || hdr.flags)
+ return -EINVAL;
- /*
- * We can't let userspace give us an arbitrarily large
- * buffer to copy, so verify how many we think there
- * could be. Note groups can have multiple devices so
- * one group per device is the max.
- */
- ret = vfio_pci_for_each_slot_or_bus(vdev->pdev,
- vfio_pci_count_devs,
- &count, slot);
- if (ret)
- return ret;
+ /* Can we do a slot or bus reset or neither? */
+ if (!pci_probe_reset_slot(vdev->pdev->slot))
+ slot = true;
+ else if (pci_probe_reset_bus(vdev->pdev->bus))
+ return -ENODEV;
- /* Somewhere between 1 and count is OK */
- if (!hdr.count || hdr.count > count)
- return -EINVAL;
+ /*
+ * We can't let userspace give us an arbitrarily large buffer to copy,
+ * so verify how many we think there could be. Note groups can have
+ * multiple devices so one group per device is the max.
+ */
+ ret = vfio_pci_for_each_slot_or_bus(vdev->pdev, vfio_pci_count_devs,
+ &count, slot);
+ if (ret)
+ return ret;
- group_fds = kcalloc(hdr.count, sizeof(*group_fds), GFP_KERNEL);
- files = kcalloc(hdr.count, sizeof(*files), GFP_KERNEL);
- if (!group_fds || !files) {
- kfree(group_fds);
- kfree(files);
- return -ENOMEM;
- }
+ /* Somewhere between 1 and count is OK */
+ if (!hdr.count || hdr.count > count)
+ return -EINVAL;
- if (copy_from_user(group_fds, (void __user *)(arg + minsz),
- hdr.count * sizeof(*group_fds))) {
- kfree(group_fds);
- kfree(files);
- return -EFAULT;
- }
+ group_fds = kcalloc(hdr.count, sizeof(*group_fds), GFP_KERNEL);
+ files = kcalloc(hdr.count, sizeof(*files), GFP_KERNEL);
+ if (!group_fds || !files) {
+ kfree(group_fds);
+ kfree(files);
+ return -ENOMEM;
+ }
- /*
- * For each group_fd, get the group through the vfio external
- * user interface and store the group and iommu ID. This
- * ensures the group is held across the reset.
- */
- for (file_idx = 0; file_idx < hdr.count; file_idx++) {
- struct file *file = fget(group_fds[file_idx]);
+ if (copy_from_user(group_fds, (void __user *)(arg + minsz),
+ hdr.count * sizeof(*group_fds))) {
+ kfree(group_fds);
+ kfree(files);
+ return -EFAULT;
+ }
- if (!file) {
- ret = -EBADF;
- break;
- }
+ /*
+ * For each group_fd, get the group through the vfio external user
+ * interface and store the group and iommu ID. This ensures the group
+ * is held across the reset.
+ */
+ for (file_idx = 0; file_idx < hdr.count; file_idx++) {
+ struct file *file = fget(group_fds[file_idx]);
- /* Ensure the FD is a vfio group FD.*/
- if (!vfio_file_iommu_group(file)) {
- fput(file);
- ret = -EINVAL;
- break;
- }
+ if (!file) {
+ ret = -EBADF;
+ break;
+ }
- files[file_idx] = file;
+ /* Ensure the FD is a vfio group FD.*/
+ if (!vfio_file_iommu_group(file)) {
+ fput(file);
+ ret = -EINVAL;
+ break;
}
- kfree(group_fds);
+ files[file_idx] = file;
+ }
- /* release reference to groups on error */
- if (ret)
- goto hot_reset_release;
+ kfree(group_fds);
+
+ /* release reference to groups on error */
+ if (ret)
+ goto hot_reset_release;
- info.count = hdr.count;
- info.files = files;
+ info.count = hdr.count;
+ info.files = files;
- ret = vfio_pci_dev_set_hot_reset(vdev->vdev.dev_set, &info);
+ ret = vfio_pci_dev_set_hot_reset(vdev->vdev.dev_set, &info);
hot_reset_release:
- for (file_idx--; file_idx >= 0; file_idx--)
- fput(files[file_idx]);
+ for (file_idx--; file_idx >= 0; file_idx--)
+ fput(files[file_idx]);
- kfree(files);
- return ret;
+ kfree(files);
+ return ret;
}
static int vfio_pci_ioctl_ioeventfd(struct vfio_pci_core_device *vdev,
void __user *arg)
{
unsigned long minsz = offsetofend(struct vfio_device_ioeventfd, fd);
- struct vfio_device_ioeventfd ioeventfd;
- int count;
+ struct vfio_device_ioeventfd ioeventfd;
+ int count;
- if (copy_from_user(&ioeventfd, (void __user *)arg, minsz))
- return -EFAULT;
+ if (copy_from_user(&ioeventfd, (void __user *)arg, minsz))
+ return -EFAULT;
- if (ioeventfd.argsz < minsz)
- return -EINVAL;
+ if (ioeventfd.argsz < minsz)
+ return -EINVAL;
- if (ioeventfd.flags & ~VFIO_DEVICE_IOEVENTFD_SIZE_MASK)
- return -EINVAL;
+ if (ioeventfd.flags & ~VFIO_DEVICE_IOEVENTFD_SIZE_MASK)
+ return -EINVAL;
- count = ioeventfd.flags & VFIO_DEVICE_IOEVENTFD_SIZE_MASK;
+ count = ioeventfd.flags & VFIO_DEVICE_IOEVENTFD_SIZE_MASK;
- if (hweight8(count) != 1 || ioeventfd.fd < -1)
- return -EINVAL;
+ if (hweight8(count) != 1 || ioeventfd.fd < -1)
+ return -EINVAL;
- return vfio_pci_ioeventfd(vdev, ioeventfd.offset,
- ioeventfd.data, count, ioeventfd.fd);
+ return vfio_pci_ioeventfd(vdev, ioeventfd.offset, ioeventfd.data, count,
+ ioeventfd.fd);
}
long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
--
2.37.2
^ permalink raw reply related [flat|nested] 26+ messages in thread

* RE: [PATCH 3/8] vfio-pci: Re-indent what was vfio_pci_core_ioctl()
2022-08-17 16:07 ` [PATCH 3/8] vfio-pci: Re-indent what was vfio_pci_core_ioctl() Jason Gunthorpe
@ 2022-08-29 0:32 ` Tian, Kevin
2022-08-29 0:37 ` Tian, Kevin
1 sibling, 0 replies; 26+ messages in thread
From: Tian, Kevin @ 2022-08-29 0:32 UTC (permalink / raw)
To: Jason Gunthorpe, Alex Williamson, Cornelia Huck,
kvm@vger.kernel.org, llvm@lists.linux.dev, Nathan Chancellor,
Nick Desaulniers, Tom Rix
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Thursday, August 18, 2022 12:07 AM
>
> Done mechanically with:
>
> $ git clang-format-14 -i --lines 675:1210 drivers/vfio/pci/vfio_pci_core.c
>
> And manually reflow the multi-line comments clang-format doesn't fix.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
* RE: [PATCH 3/8] vfio-pci: Re-indent what was vfio_pci_core_ioctl()
2022-08-17 16:07 ` [PATCH 3/8] vfio-pci: Re-indent what was vfio_pci_core_ioctl() Jason Gunthorpe
2022-08-29 0:32 ` Tian, Kevin
@ 2022-08-29 0:37 ` Tian, Kevin
1 sibling, 0 replies; 26+ messages in thread
From: Tian, Kevin @ 2022-08-29 0:37 UTC (permalink / raw)
To: Jason Gunthorpe, Alex Williamson, Cornelia Huck,
kvm@vger.kernel.org, llvm@lists.linux.dev, Nathan Chancellor,
Nick Desaulniers, Tom Rix
> From: Tian, Kevin
> Sent: Monday, August 29, 2022 8:33 AM
>
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Thursday, August 18, 2022 12:07 AM
> >
> > Done mechanically with:
> >
> > $ git clang-format-14 -i --lines 675:1210 drivers/vfio/pci/vfio_pci_core.c
> >
> > And manually reflow the multi-line comments clang-format doesn't fix.
> >
> > Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
>
> Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Ah, quick typo. Should be:
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
* [PATCH 4/8] vfio-pci: Replace 'void __user *' with proper types in the ioctl functions
2022-08-17 16:07 [PATCH 0/8] Break up ioctl dispatch functions to one function per ioctl Jason Gunthorpe
` (2 preceding siblings ...)
2022-08-17 16:07 ` [PATCH 3/8] vfio-pci: Re-indent what was vfio_pci_core_ioctl() Jason Gunthorpe
@ 2022-08-17 16:07 ` Jason Gunthorpe
2022-08-29 0:35 ` Tian, Kevin
2022-08-29 0:38 ` Tian, Kevin
2022-08-17 16:07 ` [PATCH 5/8] vfio: Fold VFIO_GROUP_GET_DEVICE_FD into vfio_group_get_device_fd() Jason Gunthorpe
` (3 subsequent siblings)
7 siblings, 2 replies; 26+ messages in thread
From: Jason Gunthorpe @ 2022-08-17 16:07 UTC (permalink / raw)
To: Alex Williamson, Cornelia Huck, kvm, llvm, Nathan Chancellor,
Nick Desaulniers, Tom Rix
This makes the code clearer and replaces a few places trying to access a
flex array with an actual flex array.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/vfio/pci/vfio_pci_core.c | 58 +++++++++++++++-----------------
1 file changed, 28 insertions(+), 30 deletions(-)
diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
index a0a855a52bbf1d..050b9d4b8c290c 100644
--- a/drivers/vfio/pci/vfio_pci_core.c
+++ b/drivers/vfio/pci/vfio_pci_core.c
@@ -690,7 +690,7 @@ int vfio_pci_register_dev_region(struct vfio_pci_core_device *vdev,
EXPORT_SYMBOL_GPL(vfio_pci_register_dev_region);
static int vfio_pci_ioctl_get_info(struct vfio_pci_core_device *vdev,
- void __user *arg)
+ struct vfio_device_info __user *arg)
{
unsigned long minsz = offsetofend(struct vfio_device_info, num_irqs);
struct vfio_device_info info;
@@ -701,7 +701,7 @@ static int vfio_pci_ioctl_get_info(struct vfio_pci_core_device *vdev,
/* For backward compatibility, cannot require this */
capsz = offsetofend(struct vfio_iommu_type1_info, cap_offset);
- if (copy_from_user(&info, (void __user *)arg, minsz))
+ if (copy_from_user(&info, arg, minsz))
return -EFAULT;
if (info.argsz < minsz)
@@ -733,22 +733,21 @@ static int vfio_pci_ioctl_get_info(struct vfio_pci_core_device *vdev,
info.argsz = sizeof(info) + caps.size;
} else {
vfio_info_cap_shift(&caps, sizeof(info));
- if (copy_to_user((void __user *)arg + sizeof(info),
- caps.buf, caps.size)) {
+ if (copy_to_user(arg + 1, caps.buf, caps.size)) {
kfree(caps.buf);
return -EFAULT;
}
- info.cap_offset = sizeof(info);
+ info.cap_offset = sizeof(*arg);
}
kfree(caps.buf);
}
- return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
+ return copy_to_user(arg, &info, minsz) ? -EFAULT : 0;
}
static int vfio_pci_ioctl_get_region_info(struct vfio_pci_core_device *vdev,
- void __user *arg)
+ struct vfio_region_info __user *arg)
{
unsigned long minsz = offsetofend(struct vfio_region_info, offset);
struct pci_dev *pdev = vdev->pdev;
@@ -756,7 +755,7 @@ static int vfio_pci_ioctl_get_region_info(struct vfio_pci_core_device *vdev,
struct vfio_info_cap caps = { .buf = NULL, .size = 0 };
int i, ret;
- if (copy_from_user(&info, (void __user *)arg, minsz))
+ if (copy_from_user(&info, arg, minsz))
return -EFAULT;
if (info.argsz < minsz)
@@ -875,27 +874,26 @@ static int vfio_pci_ioctl_get_region_info(struct vfio_pci_core_device *vdev,
info.cap_offset = 0;
} else {
vfio_info_cap_shift(&caps, sizeof(info));
- if (copy_to_user((void __user *)arg + sizeof(info),
- caps.buf, caps.size)) {
+ if (copy_to_user(arg + 1, caps.buf, caps.size)) {
kfree(caps.buf);
return -EFAULT;
}
- info.cap_offset = sizeof(info);
+ info.cap_offset = sizeof(*arg);
}
kfree(caps.buf);
}
- return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
+ return copy_to_user(arg, &info, minsz) ? -EFAULT : 0;
}
static int vfio_pci_ioctl_get_irq_info(struct vfio_pci_core_device *vdev,
- void __user *arg)
+ struct vfio_irq_info __user *arg)
{
unsigned long minsz = offsetofend(struct vfio_irq_info, count);
struct vfio_irq_info info;
- if (copy_from_user(&info, (void __user *)arg, minsz))
+ if (copy_from_user(&info, arg, minsz))
return -EFAULT;
if (info.argsz < minsz || info.index >= VFIO_PCI_NUM_IRQS)
@@ -923,11 +921,11 @@ static int vfio_pci_ioctl_get_irq_info(struct vfio_pci_core_device *vdev,
else
info.flags |= VFIO_IRQ_INFO_NORESIZE;
- return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
+ return copy_to_user(arg, &info, minsz) ? -EFAULT : 0;
}
static int vfio_pci_ioctl_set_irqs(struct vfio_pci_core_device *vdev,
- void __user *arg)
+ struct vfio_irq_set __user *arg)
{
unsigned long minsz = offsetofend(struct vfio_irq_set, count);
struct vfio_irq_set hdr;
@@ -935,7 +933,7 @@ static int vfio_pci_ioctl_set_irqs(struct vfio_pci_core_device *vdev,
int max, ret = 0;
size_t data_size = 0;
- if (copy_from_user(&hdr, (void __user *)arg, minsz))
+ if (copy_from_user(&hdr, arg, minsz))
return -EFAULT;
max = vfio_pci_get_irq_count(vdev, hdr.index);
@@ -946,7 +944,7 @@ static int vfio_pci_ioctl_set_irqs(struct vfio_pci_core_device *vdev,
return ret;
if (data_size) {
- data = memdup_user((void __user *)(arg + minsz), data_size);
+ data = memdup_user(&arg->data, data_size);
if (IS_ERR(data))
return PTR_ERR(data);
}
@@ -989,9 +987,9 @@ static int vfio_pci_ioctl_reset(struct vfio_pci_core_device *vdev,
return ret;
}
-static int
-vfio_pci_ioctl_get_pci_hot_reset_info(struct vfio_pci_core_device *vdev,
- void __user *arg)
+static int vfio_pci_ioctl_get_pci_hot_reset_info(
+ struct vfio_pci_core_device *vdev,
+ struct vfio_pci_hot_reset_info __user *arg)
{
unsigned long minsz =
offsetofend(struct vfio_pci_hot_reset_info, count);
@@ -1001,7 +999,7 @@ vfio_pci_ioctl_get_pci_hot_reset_info(struct vfio_pci_core_device *vdev,
bool slot = false;
int ret = 0;
- if (copy_from_user(&hdr, (void __user *)arg, minsz))
+ if (copy_from_user(&hdr, arg, minsz))
return -EFAULT;
if (hdr.argsz < minsz)
@@ -1051,11 +1049,11 @@ vfio_pci_ioctl_get_pci_hot_reset_info(struct vfio_pci_core_device *vdev,
hdr.count = fill.cur;
reset_info_exit:
- if (copy_to_user((void __user *)arg, &hdr, minsz))
+ if (copy_to_user(arg, &hdr, minsz))
ret = -EFAULT;
if (!ret) {
- if (copy_to_user((void __user *)(arg + minsz), devices,
+ if (copy_to_user(&arg->devices, devices,
hdr.count * sizeof(*devices)))
ret = -EFAULT;
}
@@ -1065,7 +1063,7 @@ vfio_pci_ioctl_get_pci_hot_reset_info(struct vfio_pci_core_device *vdev,
}
static int vfio_pci_ioctl_pci_hot_reset(struct vfio_pci_core_device *vdev,
- void __user *arg)
+ struct vfio_pci_hot_reset __user *arg)
{
unsigned long minsz = offsetofend(struct vfio_pci_hot_reset, count);
struct vfio_pci_hot_reset hdr;
@@ -1075,7 +1073,7 @@ static int vfio_pci_ioctl_pci_hot_reset(struct vfio_pci_core_device *vdev,
bool slot = false;
int file_idx, count = 0, ret = 0;
- if (copy_from_user(&hdr, (void __user *)arg, minsz))
+ if (copy_from_user(&hdr, arg, minsz))
return -EFAULT;
if (hdr.argsz < minsz || hdr.flags)
@@ -1109,7 +1107,7 @@ static int vfio_pci_ioctl_pci_hot_reset(struct vfio_pci_core_device *vdev,
return -ENOMEM;
}
- if (copy_from_user(group_fds, (void __user *)(arg + minsz),
+ if (copy_from_user(group_fds, arg->group_fds,
hdr.count * sizeof(*group_fds))) {
kfree(group_fds);
kfree(files);
@@ -1159,13 +1157,13 @@ static int vfio_pci_ioctl_pci_hot_reset(struct vfio_pci_core_device *vdev,
}
static int vfio_pci_ioctl_ioeventfd(struct vfio_pci_core_device *vdev,
- void __user *arg)
+ struct vfio_device_ioeventfd __user *arg)
{
unsigned long minsz = offsetofend(struct vfio_device_ioeventfd, fd);
struct vfio_device_ioeventfd ioeventfd;
int count;
- if (copy_from_user(&ioeventfd, (void __user *)arg, minsz))
+ if (copy_from_user(&ioeventfd, arg, minsz))
return -EFAULT;
if (ioeventfd.argsz < minsz)
@@ -1214,7 +1212,7 @@ long vfio_pci_core_ioctl(struct vfio_device *core_vdev, unsigned int cmd,
EXPORT_SYMBOL_GPL(vfio_pci_core_ioctl);
static int vfio_pci_core_feature_token(struct vfio_device *device, u32 flags,
- void __user *arg, size_t argsz)
+ uuid_t __user *arg, size_t argsz)
{
struct vfio_pci_core_device *vdev =
container_of(device, struct vfio_pci_core_device, vdev);
--
2.37.2
* RE: [PATCH 4/8] vfio-pci: Replace 'void __user *' with proper types in the ioctl functions
2022-08-17 16:07 ` [PATCH 4/8] vfio-pci: Replace 'void __user *' with proper types in the ioctl functions Jason Gunthorpe
@ 2022-08-29 0:35 ` Tian, Kevin
2022-08-29 0:38 ` Tian, Kevin
1 sibling, 0 replies; 26+ messages in thread
From: Tian, Kevin @ 2022-08-29 0:35 UTC (permalink / raw)
To: Jason Gunthorpe, Alex Williamson, Cornelia Huck,
kvm@vger.kernel.org, llvm@lists.linux.dev, Nathan Chancellor,
Nick Desaulniers, Tom Rix
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Thursday, August 18, 2022 12:07 AM
>
> This makes the code clearer and replaces a few places trying to access a
> flex array with an actual flex array.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Kevin Tian <kevin.tian@intel.com>
* RE: [PATCH 4/8] vfio-pci: Replace 'void __user *' with proper types in the ioctl functions
2022-08-17 16:07 ` [PATCH 4/8] vfio-pci: Replace 'void __user *' with proper types in the ioctl functions Jason Gunthorpe
2022-08-29 0:35 ` Tian, Kevin
@ 2022-08-29 0:38 ` Tian, Kevin
1 sibling, 0 replies; 26+ messages in thread
From: Tian, Kevin @ 2022-08-29 0:38 UTC (permalink / raw)
To: Jason Gunthorpe, Alex Williamson, Cornelia Huck,
kvm@vger.kernel.org, llvm@lists.linux.dev, Nathan Chancellor,
Nick Desaulniers, Tom Rix
> From: Tian, Kevin
> Sent: Monday, August 29, 2022 8:35 AM
>
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Thursday, August 18, 2022 12:07 AM
> >
> > This makes the code clearer and replaces a few places trying to access a
> > flex array with an actual flex array.
> >
> > Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
>
> Signed-off-by: Kevin Tian <kevin.tian@intel.com>
Ah, quick typo. Should be:
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
* [PATCH 5/8] vfio: Fold VFIO_GROUP_GET_DEVICE_FD into vfio_group_get_device_fd()
2022-08-17 16:07 [PATCH 0/8] Break up ioctl dispatch functions to one function per ioctl Jason Gunthorpe
` (3 preceding siblings ...)
2022-08-17 16:07 ` [PATCH 4/8] vfio-pci: Replace 'void __user *' with proper types in the ioctl functions Jason Gunthorpe
@ 2022-08-17 16:07 ` Jason Gunthorpe
2022-08-29 0:36 ` Tian, Kevin
2022-08-17 16:07 ` [PATCH 6/8] vfio: Fold VFIO_GROUP_SET_CONTAINER into vfio_group_set_container() Jason Gunthorpe
` (2 subsequent siblings)
7 siblings, 1 reply; 26+ messages in thread
From: Jason Gunthorpe @ 2022-08-17 16:07 UTC (permalink / raw)
To: Alex Williamson, Cornelia Huck, kvm, llvm, Nathan Chancellor,
Nick Desaulniers, Tom Rix
No reason to split it up like this, just have one function to process the
ioctl.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/vfio/vfio_main.c | 24 +++++++++++-------------
1 file changed, 11 insertions(+), 13 deletions(-)
diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index 7cb56c382c97a2..3afef45b8d1a26 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -1178,14 +1178,21 @@ static struct file *vfio_device_open(struct vfio_device *device)
return ERR_PTR(ret);
}
-static int vfio_group_get_device_fd(struct vfio_group *group, char *buf)
+static int vfio_group_ioctl_get_device_fd(struct vfio_group *group,
+ char __user *arg)
{
struct vfio_device *device;
struct file *filep;
+ char *buf;
int fdno;
int ret;
+ buf = strndup_user(arg, PAGE_SIZE);
+ if (IS_ERR(buf))
+ return PTR_ERR(buf);
+
device = vfio_device_get_from_name(group, buf);
+ kfree(buf);
if (IS_ERR(device))
return PTR_ERR(device);
@@ -1215,9 +1222,12 @@ static long vfio_group_fops_unl_ioctl(struct file *filep,
unsigned int cmd, unsigned long arg)
{
struct vfio_group *group = filep->private_data;
+ void __user *uarg = (void __user *)arg;
long ret = -ENOTTY;
switch (cmd) {
+ case VFIO_GROUP_GET_DEVICE_FD:
+ return vfio_group_ioctl_get_device_fd(group, uarg);
case VFIO_GROUP_GET_STATUS:
{
struct vfio_group_status status;
@@ -1267,18 +1277,6 @@ static long vfio_group_fops_unl_ioctl(struct file *filep,
ret = vfio_group_unset_container(group);
up_write(&group->group_rwsem);
break;
- case VFIO_GROUP_GET_DEVICE_FD:
- {
- char *buf;
-
- buf = strndup_user((const char __user *)arg, PAGE_SIZE);
- if (IS_ERR(buf))
- return PTR_ERR(buf);
-
- ret = vfio_group_get_device_fd(group, buf);
- kfree(buf);
- break;
- }
}
return ret;
--
2.37.2
* RE: [PATCH 5/8] vfio: Fold VFIO_GROUP_GET_DEVICE_FD into vfio_group_get_device_fd()
2022-08-17 16:07 ` [PATCH 5/8] vfio: Fold VFIO_GROUP_GET_DEVICE_FD into vfio_group_get_device_fd() Jason Gunthorpe
@ 2022-08-29 0:36 ` Tian, Kevin
0 siblings, 0 replies; 26+ messages in thread
From: Tian, Kevin @ 2022-08-29 0:36 UTC (permalink / raw)
To: Jason Gunthorpe, Alex Williamson, Cornelia Huck,
kvm@vger.kernel.org, llvm@lists.linux.dev, Nathan Chancellor,
Nick Desaulniers, Tom Rix
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Thursday, August 18, 2022 12:07 AM
>
> No reason to split it up like this, just have one function to process the
> ioctl.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
* [PATCH 6/8] vfio: Fold VFIO_GROUP_SET_CONTAINER into vfio_group_set_container()
2022-08-17 16:07 [PATCH 0/8] Break up ioctl dispatch functions to one function per ioctl Jason Gunthorpe
` (4 preceding siblings ...)
2022-08-17 16:07 ` [PATCH 5/8] vfio: Fold VFIO_GROUP_GET_DEVICE_FD into vfio_group_get_device_fd() Jason Gunthorpe
@ 2022-08-17 16:07 ` Jason Gunthorpe
2022-08-29 0:39 ` Tian, Kevin
2022-08-17 16:07 ` [PATCH 7/8] vfio: Follow the naming pattern for vfio_group_ioctl_unset_container() Jason Gunthorpe
2022-08-17 16:07 ` [PATCH 8/8] vfio: Split VFIO_GROUP_GET_STATUS into a function Jason Gunthorpe
7 siblings, 1 reply; 26+ messages in thread
From: Jason Gunthorpe @ 2022-08-17 16:07 UTC (permalink / raw)
To: Alex Williamson, Cornelia Huck, kvm, llvm, Nathan Chancellor,
Nick Desaulniers, Tom Rix
No reason to split it up like this, just have one function to process the
ioctl. Move the lock into the function as well to avoid having a lockdep
annotation.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/vfio/vfio_main.c | 51 +++++++++++++++++++---------------------
1 file changed, 24 insertions(+), 27 deletions(-)
diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index 3afef45b8d1a26..f7b02d3fd3108b 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -980,47 +980,54 @@ static int vfio_group_unset_container(struct vfio_group *group)
return 0;
}
-static int vfio_group_set_container(struct vfio_group *group, int container_fd)
+static int vfio_group_ioctl_set_container(struct vfio_group *group,
+ int __user *arg)
{
struct fd f;
struct vfio_container *container;
struct vfio_iommu_driver *driver;
+ int container_fd;
int ret = 0;
- lockdep_assert_held_write(&group->group_rwsem);
-
- if (group->container || WARN_ON(group->container_users))
- return -EINVAL;
-
if (group->type == VFIO_NO_IOMMU && !capable(CAP_SYS_RAWIO))
return -EPERM;
+ if (get_user(container_fd, arg))
+ return -EFAULT;
+ if (container_fd < 0)
+ return -EINVAL;
f = fdget(container_fd);
if (!f.file)
return -EBADF;
/* Sanity check, is this really our fd? */
if (f.file->f_op != &vfio_fops) {
- fdput(f);
- return -EINVAL;
+ ret = -EINVAL;
+ goto out_fdput;
}
-
container = f.file->private_data;
WARN_ON(!container); /* fget ensures we don't race vfio_release */
+ down_write(&group->group_rwsem);
+
+ if (group->container || WARN_ON(group->container_users)) {
+ ret = -EINVAL;
+ goto out_unlock_group;
+ }
+
down_write(&container->group_lock);
/* Real groups and fake groups cannot mix */
if (!list_empty(&container->group_list) &&
container->noiommu != (group->type == VFIO_NO_IOMMU)) {
ret = -EPERM;
- goto unlock_out;
+ goto out_unlock_container;
}
if (group->type == VFIO_IOMMU) {
ret = iommu_group_claim_dma_owner(group->iommu_group, f.file);
if (ret)
- goto unlock_out;
+ goto out_unlock_container;
}
driver = container->iommu_driver;
@@ -1032,7 +1039,7 @@ static int vfio_group_set_container(struct vfio_group *group, int container_fd)
if (group->type == VFIO_IOMMU)
iommu_group_release_dma_owner(
group->iommu_group);
- goto unlock_out;
+ goto out_unlock_container;
}
}
@@ -1044,8 +1051,11 @@ static int vfio_group_set_container(struct vfio_group *group, int container_fd)
/* Get a reference on the container and mark a user within the group */
vfio_container_get(container);
-unlock_out:
+out_unlock_container:
up_write(&container->group_lock);
+out_unlock_group:
+ up_write(&group->group_rwsem);
+out_fdput:
fdput(f);
return ret;
}
@@ -1258,20 +1268,7 @@ static long vfio_group_fops_unl_ioctl(struct file *filep,
break;
}
case VFIO_GROUP_SET_CONTAINER:
- {
- int fd;
-
- if (get_user(fd, (int __user *)arg))
- return -EFAULT;
-
- if (fd < 0)
- return -EINVAL;
-
- down_write(&group->group_rwsem);
- ret = vfio_group_set_container(group, fd);
- up_write(&group->group_rwsem);
- break;
- }
+ return vfio_group_ioctl_set_container(group, uarg);
case VFIO_GROUP_UNSET_CONTAINER:
down_write(&group->group_rwsem);
ret = vfio_group_unset_container(group);
--
2.37.2
2022-08-17 16:07 ` [PATCH 6/8] vfio: Fold VFIO_GROUP_SET_CONTAINER into vfio_group_set_container() Jason Gunthorpe
@ 2022-08-29 0:39 ` Tian, Kevin
0 siblings, 0 replies; 26+ messages in thread
From: Tian, Kevin @ 2022-08-29 0:39 UTC (permalink / raw)
To: Jason Gunthorpe, Alex Williamson, Cornelia Huck,
kvm@vger.kernel.org, llvm@lists.linux.dev, Nathan Chancellor,
Nick Desaulniers, Tom Rix
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Thursday, August 18, 2022 12:07 AM
>
> No reason to split it up like this, just have one function to process the
> ioctl. Move the lock into the function as well to avoid having a lockdep
> annotation.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
* [PATCH 7/8] vfio: Follow the naming pattern for vfio_group_ioctl_unset_container()
2022-08-17 16:07 [PATCH 0/8] Break up ioctl dispatch functions to one function per ioctl Jason Gunthorpe
` (5 preceding siblings ...)
2022-08-17 16:07 ` [PATCH 6/8] vfio: Fold VFIO_GROUP_SET_CONTAINER into vfio_group_set_container() Jason Gunthorpe
@ 2022-08-17 16:07 ` Jason Gunthorpe
2022-08-29 0:41 ` Tian, Kevin
2022-08-17 16:07 ` [PATCH 8/8] vfio: Split VFIO_GROUP_GET_STATUS into a function Jason Gunthorpe
7 siblings, 1 reply; 26+ messages in thread
From: Jason Gunthorpe @ 2022-08-17 16:07 UTC (permalink / raw)
To: Alex Williamson, Cornelia Huck, kvm, llvm, Nathan Chancellor,
Nick Desaulniers, Tom Rix
Make it clear that this is the body of the ioctl - keep the mutex outside
the function since this function doesn't have and wouldn't benefit from
error unwind.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/vfio/vfio_main.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index f7b02d3fd3108b..78957f45c37a34 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -968,7 +968,7 @@ static void __vfio_group_unset_container(struct vfio_group *group)
* the group, we know that still exists, therefore the only valid
* transition here is 1->0.
*/
-static int vfio_group_unset_container(struct vfio_group *group)
+static int vfio_group_ioctl_unset_container(struct vfio_group *group)
{
lockdep_assert_held_write(&group->group_rwsem);
@@ -1271,7 +1271,7 @@ static long vfio_group_fops_unl_ioctl(struct file *filep,
return vfio_group_ioctl_set_container(group, uarg);
case VFIO_GROUP_UNSET_CONTAINER:
down_write(&group->group_rwsem);
- ret = vfio_group_unset_container(group);
+ ret = vfio_group_ioctl_unset_container(group);
up_write(&group->group_rwsem);
break;
}
--
2.37.2
2022-08-17 16:07 ` [PATCH 7/8] vfio: Follow the naming pattern for vfio_group_ioctl_unset_container() Jason Gunthorpe
@ 2022-08-29 0:41 ` Tian, Kevin
2022-08-29 15:28 ` Jason Gunthorpe
0 siblings, 1 reply; 26+ messages in thread
From: Tian, Kevin @ 2022-08-29 0:41 UTC (permalink / raw)
To: Jason Gunthorpe, Alex Williamson, Cornelia Huck,
kvm@vger.kernel.org, llvm@lists.linux.dev, Nathan Chancellor,
Nick Desaulniers, Tom Rix
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Thursday, August 18, 2022 12:07 AM
>
> Make it clear that this is the body of the ioctl - keep the mutex outside
> the function since this function doesn't have and wouldn't benefit from
> error unwind.
but doing so makes unset_container() unpaired with set_container(), and
the only handler doing additional work in the main ioctl body.
I'd prefer moving the mutex inside unset_container() for better readability.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
> drivers/vfio/vfio_main.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
> index f7b02d3fd3108b..78957f45c37a34 100644
> --- a/drivers/vfio/vfio_main.c
> +++ b/drivers/vfio/vfio_main.c
> @@ -968,7 +968,7 @@ static void __vfio_group_unset_container(struct
> vfio_group *group)
> * the group, we know that still exists, therefore the only valid
> * transition here is 1->0.
> */
> -static int vfio_group_unset_container(struct vfio_group *group)
> +static int vfio_group_ioctl_unset_container(struct vfio_group *group)
> {
> lockdep_assert_held_write(&group->group_rwsem);
>
> @@ -1271,7 +1271,7 @@ static long vfio_group_fops_unl_ioctl(struct file
> *filep,
> return vfio_group_ioctl_set_container(group, uarg);
> case VFIO_GROUP_UNSET_CONTAINER:
> down_write(&group->group_rwsem);
> - ret = vfio_group_unset_container(group);
> + ret = vfio_group_ioctl_unset_container(group);
> up_write(&group->group_rwsem);
> break;
> }
> --
> 2.37.2
2022-08-29 0:41 ` Tian, Kevin
@ 2022-08-29 15:28 ` Jason Gunthorpe
2022-08-30 16:42 ` Alex Williamson
0 siblings, 1 reply; 26+ messages in thread
From: Jason Gunthorpe @ 2022-08-29 15:28 UTC (permalink / raw)
To: Tian, Kevin
Cc: Alex Williamson, Cornelia Huck, kvm@vger.kernel.org,
llvm@lists.linux.dev, Nathan Chancellor, Nick Desaulniers,
Tom Rix
On Mon, Aug 29, 2022 at 12:41:30AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Thursday, August 18, 2022 12:07 AM
> >
> > Make it clear that this is the body of the ioctl - keep the mutex outside
> > the function since this function doesn't have and wouldn't benefit from
> > error unwind.
>
> but doing so makes unset_container() unpaired with set_container(), and
> the only handler doing additional work in the main ioctl body.
>
> I'd prefer moving the mutex inside unset_container() for better readability.
Yes, I tried both ways and ended up here since adding the goto unwind
was kind of ungainly for this function. Don't mind either way
The functions are not intended as strict pairs, they are ioctl
dispatch functions.
Jason
* Re: [PATCH 7/8] vfio: Follow the naming pattern for vfio_group_ioctl_unset_container()
2022-08-29 15:28 ` Jason Gunthorpe
@ 2022-08-30 16:42 ` Alex Williamson
2022-08-30 16:51 ` Jason Gunthorpe
0 siblings, 1 reply; 26+ messages in thread
From: Alex Williamson @ 2022-08-30 16:42 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Tian, Kevin, Cornelia Huck, kvm@vger.kernel.org,
llvm@lists.linux.dev, Nathan Chancellor, Nick Desaulniers,
Tom Rix
On Mon, 29 Aug 2022 12:28:07 -0300
Jason Gunthorpe <jgg@nvidia.com> wrote:
> On Mon, Aug 29, 2022 at 12:41:30AM +0000, Tian, Kevin wrote:
> > > From: Jason Gunthorpe <jgg@nvidia.com>
> > > Sent: Thursday, August 18, 2022 12:07 AM
> > >
> > > Make it clear that this is the body of the ioctl - keep the mutex outside
> > > the function since this function doesn't have and wouldn't benefit from
> > > error unwind.
> >
> > but doing so makes unset_container() unpaired with set_container(), and
> > the only handler doing additional work in the main ioctl body.
> >
> > I'd prefer moving the mutex inside unset_container() for better readability.
>
> Yes, I tried both ways and ended up here since adding the goto unwind
> was kind of ungainly for this function. Don't mind either way
>
> The functions are not intended as strict pairs, they are ioctl
> dispatch functions.
The lockdep annotation seems sufficient, but what about simply
prefixing the unset ioctl function with underscores to reinforce the
locking requirement, as done by the called function
__vfio_group_unset_container()? Thanks,
Alex
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH 7/8] vfio: Follow the naming pattern for vfio_group_ioctl_unset_container()
2022-08-30 16:42 ` Alex Williamson
@ 2022-08-30 16:51 ` Jason Gunthorpe
2022-08-31 17:48 ` Alex Williamson
0 siblings, 1 reply; 26+ messages in thread
From: Jason Gunthorpe @ 2022-08-30 16:51 UTC (permalink / raw)
To: Alex Williamson
Cc: Tian, Kevin, Cornelia Huck, kvm@vger.kernel.org,
llvm@lists.linux.dev, Nathan Chancellor, Nick Desaulniers,
Tom Rix
On Tue, Aug 30, 2022 at 10:42:31AM -0600, Alex Williamson wrote:
> On Mon, 29 Aug 2022 12:28:07 -0300
> Jason Gunthorpe <jgg@nvidia.com> wrote:
>
> > On Mon, Aug 29, 2022 at 12:41:30AM +0000, Tian, Kevin wrote:
> > > > From: Jason Gunthorpe <jgg@nvidia.com>
> > > > Sent: Thursday, August 18, 2022 12:07 AM
> > > >
> > > > Make it clear that this is the body of the ioctl - keep the mutex outside
> > > > the function since this function doesn't have and wouldn't benefit from
> > > > error unwind.
> > >
> > > but doing so make unset_container() unpair with set_container() and
> > > be the only one doing additional things in main ioctl body.
> > >
> > > I'd prefer to moving mutex inside unset_container() for better readability.
> >
> > Yes, I tried both ways and ended up here since adding the goto unwind
> > was kind of ungainly for this function. Don't mind either way
> >
> > The functions are not intended as strict pairs, they are ioctl
> > dispatch functions.
>
> The lockdep annotation seems sufficient, but what about simply
> prefixing the unset ioctl function with underscores to reinforce the
> locking requirement, as done by the called function
> __vfio_group_unset_container()? Thanks,
Could do, but IMHO that is not a popular convention in VFIO
(anymore?).
I've been adding lockdep annotations, not adding __ to indicate
the function is called with some kind of lock.
In my tree __vfio_group_get_from_iommu() is the only one left using
that style. unset_container was turned into
vfio_container_detach_group() in another series.
Conversely, we have __vfio_register_dev(), where I guess the __ indicates
it is internal or wrapped, not some statement about locking. I think this
has become the more popular usage of __ generally in the kernel.
So I will do a v2 with the extra goto unwind then.
Are there any other remarks before I do it?
Thanks,
Jason
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH 7/8] vfio: Follow the naming pattern for vfio_group_ioctl_unset_container()
2022-08-30 16:51 ` Jason Gunthorpe
@ 2022-08-31 17:48 ` Alex Williamson
0 siblings, 0 replies; 26+ messages in thread
From: Alex Williamson @ 2022-08-31 17:48 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Tian, Kevin, Cornelia Huck, kvm@vger.kernel.org,
llvm@lists.linux.dev, Nathan Chancellor, Nick Desaulniers,
Tom Rix
On Tue, 30 Aug 2022 13:51:18 -0300
Jason Gunthorpe <jgg@nvidia.com> wrote:
> On Tue, Aug 30, 2022 at 10:42:31AM -0600, Alex Williamson wrote:
> > On Mon, 29 Aug 2022 12:28:07 -0300
> > Jason Gunthorpe <jgg@nvidia.com> wrote:
> >
> > > On Mon, Aug 29, 2022 at 12:41:30AM +0000, Tian, Kevin wrote:
> > > > > From: Jason Gunthorpe <jgg@nvidia.com>
> > > > > Sent: Thursday, August 18, 2022 12:07 AM
> > > > >
> > > > > Make it clear that this is the body of the ioctl - keep the mutex outside
> > > > > the function since this function doesn't have and wouldn't benefit from
> > > > > error unwind.
> > > >
> > > > but doing so make unset_container() unpair with set_container() and
> > > > be the only one doing additional things in main ioctl body.
> > > >
> > > > I'd prefer to moving mutex inside unset_container() for better readability.
> > >
> > > Yes, I tried both ways and ended up here since adding the goto unwind
> > > was kind of ungainly for this function. Don't mind either way
> > >
> > > The functions are not intended as strict pairs, they are ioctl
> > > dispatch functions.
> >
> > The lockdep annotation seems sufficient, but what about simply
> > prefixing the unset ioctl function with underscores to reinforce the
> > locking requirement, as done by the called function
> > __vfio_group_unset_container()? Thanks,
>
> Could do, but IMHO, that is not a popular convention in VFIO
> (anymore?).
>
> I've been adding lockdep annotations, not adding __ to indicate
> the function is called with some kind of lock.
>
> In my tree __vfio_group_get_from_iommu() is the only one left using
> that style. uset_container was turned into
> vfio_container_detatch_group() in aother series
>
> Conversly we have __vfio_register_dev() which I guess __ indicates is
> internal or wrappered, not some statement about locking. I think this
> has become the more popular usage of the __ generally in the kernel
>
> So I will do a v2 with the extra goto unwind then
>
> Are there any other remarks before I do it?
None from me. The pile of things relying on this series is growing, so
it looks like time to push v2. Thanks,
Alex
^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCH 8/8] vfio: Split VFIO_GROUP_GET_STATUS into a function
2022-08-17 16:07 [PATCH 0/8] Break up ioctl dispatch functions to one function per ioctl Jason Gunthorpe
` (6 preceding siblings ...)
2022-08-17 16:07 ` [PATCH 7/8] vfio: Follow the naming pattern for vfio_group_ioctl_unset_container() Jason Gunthorpe
@ 2022-08-17 16:07 ` Jason Gunthorpe
2022-08-26 23:43 ` Jason Gunthorpe
2022-08-29 0:43 ` Tian, Kevin
7 siblings, 2 replies; 26+ messages in thread
From: Jason Gunthorpe @ 2022-08-17 16:07 UTC (permalink / raw)
To: Alex Williamson, Cornelia Huck, kvm, llvm, Nathan Chancellor,
Nick Desaulniers, Tom Rix
This is the last sizable implementation in vfio_group_fops_unl_ioctl(),
move it to a function so vfio_group_fops_unl_ioctl() is emptied out.
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
drivers/vfio/vfio_main.c | 55 ++++++++++++++++++++--------------------
1 file changed, 27 insertions(+), 28 deletions(-)
diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
index 78957f45c37a34..6f96e6d07a5e98 100644
--- a/drivers/vfio/vfio_main.c
+++ b/drivers/vfio/vfio_main.c
@@ -1227,6 +1227,32 @@ static int vfio_group_ioctl_get_device_fd(struct vfio_group *group,
vfio_device_put(device);
return ret;
}
+static int vfio_group_ioctl_get_status(struct vfio_group *group,
+ struct vfio_group_status __user *arg)
+{
+ unsigned long minsz = offsetofend(struct vfio_group_status, flags);
+ struct vfio_group_status status;
+
+ if (copy_from_user(&status, arg, minsz))
+ return -EFAULT;
+
+ if (status.argsz < minsz)
+ return -EINVAL;
+
+ status.flags = 0;
+
+ down_read(&group->group_rwsem);
+ if (group->container)
+ status.flags |= VFIO_GROUP_FLAGS_CONTAINER_SET |
+ VFIO_GROUP_FLAGS_VIABLE;
+ else if (!iommu_group_dma_owner_claimed(group->iommu_group))
+ status.flags |= VFIO_GROUP_FLAGS_VIABLE;
+ up_read(&group->group_rwsem);
+
+ if (copy_to_user(arg, &status, minsz))
+ return -EFAULT;
+ return 0;
+}
static long vfio_group_fops_unl_ioctl(struct file *filep,
unsigned int cmd, unsigned long arg)
@@ -1239,34 +1265,7 @@ static long vfio_group_fops_unl_ioctl(struct file *filep,
case VFIO_GROUP_GET_DEVICE_FD:
return vfio_group_ioctl_get_device_fd(group, uarg);
case VFIO_GROUP_GET_STATUS:
- {
- struct vfio_group_status status;
- unsigned long minsz;
-
- minsz = offsetofend(struct vfio_group_status, flags);
-
- if (copy_from_user(&status, (void __user *)arg, minsz))
- return -EFAULT;
-
- if (status.argsz < minsz)
- return -EINVAL;
-
- status.flags = 0;
-
- down_read(&group->group_rwsem);
- if (group->container)
- status.flags |= VFIO_GROUP_FLAGS_CONTAINER_SET |
- VFIO_GROUP_FLAGS_VIABLE;
- else if (!iommu_group_dma_owner_claimed(group->iommu_group))
- status.flags |= VFIO_GROUP_FLAGS_VIABLE;
- up_read(&group->group_rwsem);
-
- if (copy_to_user((void __user *)arg, &status, minsz))
- return -EFAULT;
-
- ret = 0;
- break;
- }
+ return vfio_group_ioctl_get_status(group, uarg);
case VFIO_GROUP_SET_CONTAINER:
return vfio_group_ioctl_set_container(group, uarg);
case VFIO_GROUP_UNSET_CONTAINER:
--
2.37.2
^ permalink raw reply related [flat|nested] 26+ messages in thread
* Re: [PATCH 8/8] vfio: Split VFIO_GROUP_GET_STATUS into a function
2022-08-17 16:07 ` [PATCH 8/8] vfio: Split VFIO_GROUP_GET_STATUS into a function Jason Gunthorpe
@ 2022-08-26 23:43 ` Jason Gunthorpe
2022-08-30 16:43 ` Alex Williamson
2022-08-29 0:43 ` Tian, Kevin
1 sibling, 1 reply; 26+ messages in thread
From: Jason Gunthorpe @ 2022-08-26 23:43 UTC (permalink / raw)
To: Alex Williamson, Cornelia Huck, kvm, llvm, Nathan Chancellor,
Nick Desaulniers, Tom Rix
On Wed, Aug 17, 2022 at 01:07:25PM -0300, Jason Gunthorpe wrote:
> This is the last sizable implementation in vfio_group_fops_unl_ioctl(),
> move it to a function so vfio_group_fops_unl_ioctl() is emptied out.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
> drivers/vfio/vfio_main.c | 55 ++++++++++++++++++++--------------------
> 1 file changed, 27 insertions(+), 28 deletions(-)
>
> diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
> index 78957f45c37a34..6f96e6d07a5e98 100644
> --- a/drivers/vfio/vfio_main.c
> +++ b/drivers/vfio/vfio_main.c
> @@ -1227,6 +1227,32 @@ static int vfio_group_ioctl_get_device_fd(struct vfio_group *group,
> vfio_device_put(device);
> return ret;
> }
> +static int vfio_group_ioctl_get_status(struct vfio_group *group,
> + struct vfio_group_status __user *arg)
There is a missing blank line after the } here
Jason
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH 8/8] vfio: Split VFIO_GROUP_GET_STATUS into a function
2022-08-26 23:43 ` Jason Gunthorpe
@ 2022-08-30 16:43 ` Alex Williamson
0 siblings, 0 replies; 26+ messages in thread
From: Alex Williamson @ 2022-08-30 16:43 UTC (permalink / raw)
To: Jason Gunthorpe
Cc: Cornelia Huck, kvm, llvm, Nathan Chancellor, Nick Desaulniers,
Tom Rix
On Fri, 26 Aug 2022 20:43:07 -0300
Jason Gunthorpe <jgg@nvidia.com> wrote:
> On Wed, Aug 17, 2022 at 01:07:25PM -0300, Jason Gunthorpe wrote:
> > This is the last sizable implementation in vfio_group_fops_unl_ioctl(),
> > move it to a function so vfio_group_fops_unl_ioctl() is emptied out.
> >
> > Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> > ---
> > drivers/vfio/vfio_main.c | 55 ++++++++++++++++++++--------------------
> > 1 file changed, 27 insertions(+), 28 deletions(-)
> >
> > diff --git a/drivers/vfio/vfio_main.c b/drivers/vfio/vfio_main.c
> > index 78957f45c37a34..6f96e6d07a5e98 100644
> > --- a/drivers/vfio/vfio_main.c
> > +++ b/drivers/vfio/vfio_main.c
> > @@ -1227,6 +1227,32 @@ static int vfio_group_ioctl_get_device_fd(struct vfio_group *group,
> > vfio_device_put(device);
> > return ret;
> > }
> > +static int vfio_group_ioctl_get_status(struct vfio_group *group,
> > + struct vfio_group_status __user *arg)
>
> There is a missing blank line after the } here
Fixed locally if there's no other need for a respin. Thanks,
Alex
^ permalink raw reply [flat|nested] 26+ messages in thread
* RE: [PATCH 8/8] vfio: Split VFIO_GROUP_GET_STATUS into a function
2022-08-17 16:07 ` [PATCH 8/8] vfio: Split VFIO_GROUP_GET_STATUS into a function Jason Gunthorpe
2022-08-26 23:43 ` Jason Gunthorpe
@ 2022-08-29 0:43 ` Tian, Kevin
1 sibling, 0 replies; 26+ messages in thread
From: Tian, Kevin @ 2022-08-29 0:43 UTC (permalink / raw)
To: Jason Gunthorpe, Alex Williamson, Cornelia Huck,
kvm@vger.kernel.org, llvm@lists.linux.dev, Nathan Chancellor,
Nick Desaulniers, Tom Rix
> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Thursday, August 18, 2022 12:07 AM
>
> This is the last sizable implementation in vfio_group_fops_unl_ioctl(),
> move it to a function so vfio_group_fops_unl_ioctl() is emptied out.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
^ permalink raw reply [flat|nested] 26+ messages in thread