virtualization.lists.linux-foundation.org archive mirror
* [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
@ 2024-09-20 14:05 Srujana Challa
  2024-09-20 14:05 ` [PATCH v2 1/2] vhost-vdpa: introduce module parameter for no-IOMMU mode Srujana Challa
                   ` (2 more replies)
  0 siblings, 3 replies; 27+ messages in thread
From: Srujana Challa @ 2024-09-20 14:05 UTC (permalink / raw)
  To: virtualization, kvm; +Cc: mst, jasowang, eperezma, ndabilpuram, jerinj

This patchset introduces support for an UNSAFE, no-IOMMU mode in the
vhost-vdpa driver. When enabled, this mode provides no device isolation,
no DMA translation, no host kernel protection, and cannot be used for
device assignment to virtual machines. It requires RAWIO permissions
and will taint the kernel.

This mode requires enabling the "enable_vhost_vdpa_unsafe_noiommu_mode"
option on the vhost-vdpa driver and also negotiating the feature flag
VHOST_BACKEND_F_NOIOMMU. This mode would be useful to get
better performance on specific low-end machines and can be leveraged
by embedded platforms where applications run in a controlled environment.

The first patch introduces the module parameter
"enable_vhost_vdpa_unsafe_noiommu_mode", while the second patch
introduces the feature flag VHOST_BACKEND_F_NOIOMMU to the vhost-vdpa
driver to support NO-IOMMU mode.

This feature flag indicates to userspace that the driver can
operate in NO-IOMMU mode. If the flag is not present, userspace should
assume NO-IOMMU mode is unsupported and must not proceed.

v1->v2:
- Introduced new feature flag to vhost backend features for negotiating
  NO-IOMMU feature.

Srujana Challa (2):
  vhost-vdpa: introduce module parameter for no-IOMMU mode
  vhost-vdpa: introduce NO-IOMMU backend feature bit

 drivers/vhost/vdpa.c             | 39 +++++++++++++++++++++++++++++++-
 include/uapi/linux/vhost_types.h |  2 ++
 2 files changed, 40 insertions(+), 1 deletion(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 27+ messages in thread

* [PATCH v2 1/2] vhost-vdpa: introduce module parameter for no-IOMMU mode
  2024-09-20 14:05 [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode Srujana Challa
@ 2024-09-20 14:05 ` Srujana Challa
  2024-09-21 17:28   ` kernel test robot
  2024-09-20 14:05 ` [PATCH v2 2/2] vhost-vdpa: introduce NO-IOMMU backend feature bit Srujana Challa
  2024-10-01  8:47 ` [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode Christoph Hellwig
  2 siblings, 1 reply; 27+ messages in thread
From: Srujana Challa @ 2024-09-20 14:05 UTC (permalink / raw)
  To: virtualization, kvm; +Cc: mst, jasowang, eperezma, ndabilpuram, jerinj

This commit introduces the module parameter
"enable_vhost_vdpa_unsafe_noiommu_mode" to enable an UNSAFE, no-IOMMU
mode in the vhost-vdpa driver. When enabled, this mode provides no
device isolation, no DMA translation, no host kernel protection, and
cannot be used for device assignment to virtual machines.
It requires RAWIO permissions and will taint the kernel.

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/vhost/vdpa.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index 5a49b5a6d496..b3085189ea4a 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -36,6 +36,11 @@ enum {
 
 #define VHOST_VDPA_IOTLB_BUCKETS 16
 
+bool vhost_vdpa_noiommu;
+module_param_named(enable_vhost_vdpa_unsafe_noiommu_mode,
+		   vhost_vdpa_noiommu, bool, 0644);
+MODULE_PARM_DESC(enable_vhost_vdpa_unsafe_noiommu_mode, "Enable UNSAFE, no-IOMMU mode.  This mode provides no device isolation, no DMA translation, no host kernel protection, cannot be used for device assignment to virtual machines, requires RAWIO permissions, and will taint the kernel.  If you do not know what this is for, step away. (default: false)");
+
 struct vhost_vdpa_as {
 	struct hlist_node hash_link;
 	struct vhost_iotlb iotlb;
@@ -60,6 +65,7 @@ struct vhost_vdpa {
 	struct vdpa_iova_range range;
 	u32 batch_asid;
 	bool suspended;
+	bool noiommu_en;
 };
 
 static DEFINE_IDA(vhost_vdpa_ida);
@@ -910,6 +916,10 @@ static void vhost_vdpa_general_unmap(struct vhost_vdpa *v,
 {
 	struct vdpa_device *vdpa = v->vdpa;
 	const struct vdpa_config_ops *ops = vdpa->config;
+
+	if (v->noiommu_en)
+		return;
+
 	if (ops->dma_map) {
 		ops->dma_unmap(vdpa, asid, map->start, map->size);
 	} else if (ops->set_map == NULL) {
@@ -1003,6 +1013,9 @@ static int vhost_vdpa_map(struct vhost_vdpa *v, struct vhost_iotlb *iotlb,
 	if (r)
 		return r;
 
+	if (v->noiommu_en)
+		goto skip_map;
+
 	if (ops->dma_map) {
 		r = ops->dma_map(vdpa, asid, iova, size, pa, perm, opaque);
 	} else if (ops->set_map) {
@@ -1018,6 +1031,7 @@ static int vhost_vdpa_map(struct vhost_vdpa *v, struct vhost_iotlb *iotlb,
 		return r;
 	}
 
+skip_map:
 	if (!vdpa->use_va)
 		atomic64_add(PFN_DOWN(size), &dev->mm->pinned_vm);
 
@@ -1321,12 +1335,26 @@ static int vhost_vdpa_alloc_domain(struct vhost_vdpa *v)
 	struct vdpa_device *vdpa = v->vdpa;
 	const struct vdpa_config_ops *ops = vdpa->config;
 	struct device *dma_dev = vdpa_get_dma_dev(vdpa);
+	struct iommu_domain *domain;
 	int ret;
 
 	/* Device want to do DMA by itself */
 	if (ops->set_map || ops->dma_map)
 		return 0;
 
+	domain = iommu_get_domain_for_dev(dma_dev);
+	if (!domain && vhost_vdpa_noiommu) {
+		if (!capable(CAP_SYS_RAWIO)) {
+			dev_warn_once(&v->dev, "No RAWIO permissions, couldn't support NOIOMMU\n");
+			return -ENOTSUPP;
+		}
+		add_taint(TAINT_USER, LOCKDEP_STILL_OK);
+		dev_warn(&v->dev, "Adding kernel taint for noiommu on device\n");
+		v->noiommu_en = true;
+		return 0;
+	}
+	v->noiommu_en = false;
+
 	if (!device_iommu_capable(dma_dev, IOMMU_CAP_CACHE_COHERENCY)) {
 		dev_warn_once(&v->dev,
 			      "Failed to allocate domain, device is not IOMMU cache coherent capable\n");
-- 
2.25.1



* [PATCH v2 2/2] vhost-vdpa: introduce NO-IOMMU backend feature bit
  2024-09-20 14:05 [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode Srujana Challa
  2024-09-20 14:05 ` [PATCH v2 1/2] vhost-vdpa: introduce module parameter for no-IOMMU mode Srujana Challa
@ 2024-09-20 14:05 ` Srujana Challa
  2024-09-24  7:43   ` Jason Wang
  2024-10-01  8:47 ` [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode Christoph Hellwig
  2 siblings, 1 reply; 27+ messages in thread
From: Srujana Challa @ 2024-09-20 14:05 UTC (permalink / raw)
  To: virtualization, kvm; +Cc: mst, jasowang, eperezma, ndabilpuram, jerinj

This patch introduces the VHOST_BACKEND_F_NOIOMMU feature flag.
This flag allows userspace to identify whether the driver can operate
without an IOMMU, providing more flexibility in environments where an
IOMMU is not available or not desired.

Key changes include:
    - Addition of the VHOST_BACKEND_F_NOIOMMU feature flag.
    - Updates to vhost_vdpa_unlocked_ioctl to handle the NO-IOMMU
      feature.
The NO-IOMMU mode is enabled if:
    - The vdpa device lacks an IOMMU domain.
    - The system has the required RAWIO permissions.
    - The vdpa device explicitly supports NO-IOMMU mode.

This feature flag indicates to userspace that the driver can
operate in NO-IOMMU mode. If the flag is absent, userspace should
assume NO-IOMMU mode is unsupported and must not proceed without an IOMMU.

Signed-off-by: Srujana Challa <schalla@marvell.com>
---
 drivers/vhost/vdpa.c             | 11 ++++++++++-
 include/uapi/linux/vhost_types.h |  2 ++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
index b3085189ea4a..de47349eceff 100644
--- a/drivers/vhost/vdpa.c
+++ b/drivers/vhost/vdpa.c
@@ -797,7 +797,8 @@ static long vhost_vdpa_unlocked_ioctl(struct file *filep,
 				 BIT_ULL(VHOST_BACKEND_F_IOTLB_PERSIST) |
 				 BIT_ULL(VHOST_BACKEND_F_SUSPEND) |
 				 BIT_ULL(VHOST_BACKEND_F_RESUME) |
-				 BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK)))
+				 BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK) |
+				 BIT_ULL(VHOST_BACKEND_F_NOIOMMU)))
 			return -EOPNOTSUPP;
 		if ((features & BIT_ULL(VHOST_BACKEND_F_SUSPEND)) &&
 		     !vhost_vdpa_can_suspend(v))
@@ -814,6 +815,12 @@ static long vhost_vdpa_unlocked_ioctl(struct file *filep,
 		if ((features & BIT_ULL(VHOST_BACKEND_F_IOTLB_PERSIST)) &&
 		     !vhost_vdpa_has_persistent_map(v))
 			return -EOPNOTSUPP;
+		if ((features & BIT_ULL(VHOST_BACKEND_F_NOIOMMU)) &&
+		    !v->noiommu_en)
+			return -EOPNOTSUPP;
+		if (!(features & BIT_ULL(VHOST_BACKEND_F_NOIOMMU)) &&
+		    v->noiommu_en)
+			return -EOPNOTSUPP;
 		vhost_set_backend_features(&v->vdev, features);
 		return 0;
 	}
@@ -871,6 +878,8 @@ static long vhost_vdpa_unlocked_ioctl(struct file *filep,
 			features |= BIT_ULL(VHOST_BACKEND_F_DESC_ASID);
 		if (vhost_vdpa_has_persistent_map(v))
 			features |= BIT_ULL(VHOST_BACKEND_F_IOTLB_PERSIST);
+		if (v->noiommu_en)
+			features |= BIT_ULL(VHOST_BACKEND_F_NOIOMMU);
 		features |= vhost_vdpa_get_backend_features(v);
 		if (copy_to_user(featurep, &features, sizeof(features)))
 			r = -EFAULT;
diff --git a/include/uapi/linux/vhost_types.h b/include/uapi/linux/vhost_types.h
index d7656908f730..dda673c3456a 100644
--- a/include/uapi/linux/vhost_types.h
+++ b/include/uapi/linux/vhost_types.h
@@ -192,5 +192,7 @@ struct vhost_vdpa_iova_range {
 #define VHOST_BACKEND_F_DESC_ASID    0x7
 /* IOTLB don't flush memory mapping across device reset */
 #define VHOST_BACKEND_F_IOTLB_PERSIST  0x8
+/* Enables the device to operate in NO-IOMMU mode as well */
+#define VHOST_BACKEND_F_NOIOMMU  0x9
 
 #endif
-- 
2.25.1



* Re: [PATCH v2 1/2] vhost-vdpa: introduce module parameter for no-IOMMU mode
  2024-09-20 14:05 ` [PATCH v2 1/2] vhost-vdpa: introduce module parameter for no-IOMMU mode Srujana Challa
@ 2024-09-21 17:28   ` kernel test robot
  0 siblings, 0 replies; 27+ messages in thread
From: kernel test robot @ 2024-09-21 17:28 UTC (permalink / raw)
  To: Srujana Challa, virtualization, kvm
  Cc: oe-kbuild-all, mst, jasowang, eperezma, ndabilpuram, jerinj

Hi Srujana,

kernel test robot noticed the following build warnings:

[auto build test WARNING on mst-vhost/linux-next]
[also build test WARNING on linus/master v6.11 next-20240920]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Srujana-Challa/vhost-vdpa-introduce-module-parameter-for-no-IOMMU-mode/20240920-220751
base:   https://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git linux-next
patch link:    https://lore.kernel.org/r/20240920140530.775307-2-schalla%40marvell.com
patch subject: [PATCH v2 1/2] vhost-vdpa: introduce module parameter for no-IOMMU mode
config: x86_64-randconfig-122-20240921 (https://download.01.org/0day-ci/archive/20240922/202409220134.x63FmsHx-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240922/202409220134.x63FmsHx-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202409220134.x63FmsHx-lkp@intel.com/

sparse warnings: (new ones prefixed by >>)
>> drivers/vhost/vdpa.c:39:6: sparse: sparse: symbol 'vhost_vdpa_noiommu' was not declared. Should it be static?

vim +/vhost_vdpa_noiommu +39 drivers/vhost/vdpa.c

    38	
  > 39	bool vhost_vdpa_noiommu;
    40	module_param_named(enable_vhost_vdpa_unsafe_noiommu_mode,
    41			   vhost_vdpa_noiommu, bool, 0644);
    42	MODULE_PARM_DESC(enable_vhost_vdpa_unsafe_noiommu_mode, "Enable UNSAFE, no-IOMMU mode.  This mode provides no device isolation, no DMA translation, no host kernel protection, cannot be used for device assignment to virtual machines, requires RAWIO permissions, and will taint the kernel.  If you do not know what this is for, step away. (default: false)");
    43	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


* Re: [PATCH v2 2/2] vhost-vdpa: introduce NO-IOMMU backend feature bit
  2024-09-20 14:05 ` [PATCH v2 2/2] vhost-vdpa: introduce NO-IOMMU backend feature bit Srujana Challa
@ 2024-09-24  7:43   ` Jason Wang
  2024-09-24 10:01     ` [EXTERNAL] " Srujana Challa
  0 siblings, 1 reply; 27+ messages in thread
From: Jason Wang @ 2024-09-24  7:43 UTC (permalink / raw)
  To: Srujana Challa; +Cc: virtualization, kvm, mst, eperezma, ndabilpuram, jerinj

On Fri, Sep 20, 2024 at 10:05 PM Srujana Challa <schalla@marvell.com> wrote:
>
> This patch introduces the VHOST_BACKEND_F_NOIOMMU feature flag.
> This flag allows userspace to identify if the driver can operate
> without an IOMMU, providing more flexibility in environments where
> IOMMU is not available or desired.
>
> Key changes include:
>     - Addition of the VHOST_BACKEND_F_NOIOMMU feature flag.
>     - Updates to vhost_vdpa_unlocked_ioctl to handle the NO-IOMMU
>       feature.
> The NO-IOMMU mode is enabled if:
>     - The vdpa device lacks an IOMMU domain.
>     - The system has the required RAWIO permissions.
>     - The vdpa device explicitly supports NO-IOMMU mode.
>
> This feature flag indicates to userspace that the driver can safely
> operate in NO-IOMMU mode. If the flag is absent, userspace should
> assume NO-IOMMU mode is unsupported and take appropriate actions.

This seems contradictory to what you said in patch 1

"""
When enabled, this mode provides no
device isolation, no DMA translation, no host kernel protection, and
cannot be used for device assignment to virtual machines.
"""

And I wonder what "appropriate actions" userspace could take?
Generally, the IOMMU concept should be hidden from userspace.

Thanks

>
> Signed-off-by: Srujana Challa <schalla@marvell.com>
> ---
>  drivers/vhost/vdpa.c             | 11 ++++++++++-
>  include/uapi/linux/vhost_types.h |  2 ++
>  2 files changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> index b3085189ea4a..de47349eceff 100644
> --- a/drivers/vhost/vdpa.c
> +++ b/drivers/vhost/vdpa.c
> @@ -797,7 +797,8 @@ static long vhost_vdpa_unlocked_ioctl(struct file *filep,
>                                  BIT_ULL(VHOST_BACKEND_F_IOTLB_PERSIST) |
>                                  BIT_ULL(VHOST_BACKEND_F_SUSPEND) |
>                                  BIT_ULL(VHOST_BACKEND_F_RESUME) |
> -                                BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK)))
> +                                BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK) |
> +                                BIT_ULL(VHOST_BACKEND_F_NOIOMMU)))
>                         return -EOPNOTSUPP;
>                 if ((features & BIT_ULL(VHOST_BACKEND_F_SUSPEND)) &&
>                      !vhost_vdpa_can_suspend(v))
> @@ -814,6 +815,12 @@ static long vhost_vdpa_unlocked_ioctl(struct file *filep,
>                 if ((features & BIT_ULL(VHOST_BACKEND_F_IOTLB_PERSIST)) &&
>                      !vhost_vdpa_has_persistent_map(v))
>                         return -EOPNOTSUPP;
> +               if ((features & BIT_ULL(VHOST_BACKEND_F_NOIOMMU)) &&
> +                   !v->noiommu_en)
> +                       return -EOPNOTSUPP;
> +               if (!(features & BIT_ULL(VHOST_BACKEND_F_NOIOMMU)) &&
> +                   v->noiommu_en)
> +                       return -EOPNOTSUPP;
>                 vhost_set_backend_features(&v->vdev, features);
>                 return 0;
>         }
> @@ -871,6 +878,8 @@ static long vhost_vdpa_unlocked_ioctl(struct file *filep,
>                         features |= BIT_ULL(VHOST_BACKEND_F_DESC_ASID);
>                 if (vhost_vdpa_has_persistent_map(v))
>                         features |= BIT_ULL(VHOST_BACKEND_F_IOTLB_PERSIST);
> +               if (v->noiommu_en)
> +                       features |= BIT_ULL(VHOST_BACKEND_F_NOIOMMU);
>                 features |= vhost_vdpa_get_backend_features(v);
>                 if (copy_to_user(featurep, &features, sizeof(features)))
>                         r = -EFAULT;
> diff --git a/include/uapi/linux/vhost_types.h b/include/uapi/linux/vhost_types.h
> index d7656908f730..dda673c3456a 100644
> --- a/include/uapi/linux/vhost_types.h
> +++ b/include/uapi/linux/vhost_types.h
> @@ -192,5 +192,7 @@ struct vhost_vdpa_iova_range {
>  #define VHOST_BACKEND_F_DESC_ASID    0x7
>  /* IOTLB don't flush memory mapping across device reset */
>  #define VHOST_BACKEND_F_IOTLB_PERSIST  0x8
> +/* Enables the device to operate in NO-IOMMU mode as well */
> +#define VHOST_BACKEND_F_NOIOMMU  0x9
>
>  #endif
> --
> 2.25.1
>



* RE: [EXTERNAL] Re: [PATCH v2 2/2] vhost-vdpa: introduce NO-IOMMU backend feature bit
  2024-09-24  7:43   ` Jason Wang
@ 2024-09-24 10:01     ` Srujana Challa
  0 siblings, 0 replies; 27+ messages in thread
From: Srujana Challa @ 2024-09-24 10:01 UTC (permalink / raw)
  To: Jason Wang
  Cc: virtualization@lists.linux.dev, kvm@vger.kernel.org,
	mst@redhat.com, eperezma@redhat.com, Nithin Kumar Dabilpuram,
	Jerin Jacob

> On Fri, Sep 20, 2024 at 10:05 PM Srujana Challa <schalla@marvell.com>
> wrote:
> >
> > This patch introduces the VHOST_BACKEND_F_NOIOMMU feature flag.
> > This flag allows userspace to identify if the driver can operate
> > without an IOMMU, providing more flexibility in environments where
> > IOMMU is not available or desired.
> >
> > Key changes include:
> >     - Addition of the VHOST_BACKEND_F_NOIOMMU feature flag.
> >     - Updates to vhost_vdpa_unlocked_ioctl to handle the NO-IOMMU
> >       feature.
> > The NO-IOMMU mode is enabled if:
> >     - The vdpa device lacks an IOMMU domain.
> >     - The system has the required RAWIO permissions.
> >     - The vdpa device explicitly supports NO-IOMMU mode.
> >
> > This feature flag indicates to userspace that the driver can safely
> > operate in NO-IOMMU mode. If the flag is absent, userspace should
> > assume NO-IOMMU mode is unsupported and take appropriate actions.
>
> This seems contradictory to what you said in patch 1
> 
> """
> When enabled, this mode provides no
> device isolation, no DMA translation, no host kernel protection, and cannot be
> used for device assignment to virtual machines.
> """
> 
> And I wonder what "appropriate actions" could the userspace take?
I apologize; I should have stated that this feature flag indicates to userspace that the driver
can operate in NO-IOMMU mode. If the flag is absent, userspace should treat NO-IOMMU mode
as unsupported and refrain from proceeding when no IOMMU is present.

> Generally, the IOMMU concept should be hidden from the userspace.
A userspace framework such as DPDK can determine the presence of an IOMMU, enabling it to
choose between IOVA and PA addressing as needed.  Since this is aimed at embedded use cases,
for better security both the kernel vDPA driver and the userspace application should have to
explicitly establish support for NO-IOMMU, as with vfio-noiommu.
Thanks.
> Thanks
> 
> >
> > Signed-off-by: Srujana Challa <schalla@marvell.com>
> > ---
> >  drivers/vhost/vdpa.c             | 11 ++++++++++-
> >  include/uapi/linux/vhost_types.h |  2 ++
> >  2 files changed, 12 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c index
> > b3085189ea4a..de47349eceff 100644
> > --- a/drivers/vhost/vdpa.c
> > +++ b/drivers/vhost/vdpa.c
> > @@ -797,7 +797,8 @@ static long vhost_vdpa_unlocked_ioctl(struct file
> *filep,
> >                                  BIT_ULL(VHOST_BACKEND_F_IOTLB_PERSIST) |
> >                                  BIT_ULL(VHOST_BACKEND_F_SUSPEND) |
> >                                  BIT_ULL(VHOST_BACKEND_F_RESUME) |
> > -
> BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK)))
> > +                                BIT_ULL(VHOST_BACKEND_F_ENABLE_AFTER_DRIVER_OK)
> |
> > +                                BIT_ULL(VHOST_BACKEND_F_NOIOMMU)))
> >                         return -EOPNOTSUPP;
> >                 if ((features & BIT_ULL(VHOST_BACKEND_F_SUSPEND)) &&
> >                      !vhost_vdpa_can_suspend(v)) @@ -814,6 +815,12 @@
> > static long vhost_vdpa_unlocked_ioctl(struct file *filep,
> >                 if ((features & BIT_ULL(VHOST_BACKEND_F_IOTLB_PERSIST)) &&
> >                      !vhost_vdpa_has_persistent_map(v))
> >                         return -EOPNOTSUPP;
> > +               if ((features & BIT_ULL(VHOST_BACKEND_F_NOIOMMU)) &&
> > +                   !v->noiommu_en)
> > +                       return -EOPNOTSUPP;
> > +               if (!(features & BIT_ULL(VHOST_BACKEND_F_NOIOMMU)) &&
> > +                   v->noiommu_en)
> > +                       return -EOPNOTSUPP;
> >                 vhost_set_backend_features(&v->vdev, features);
> >                 return 0;
> >         }
> > @@ -871,6 +878,8 @@ static long vhost_vdpa_unlocked_ioctl(struct file
> *filep,
> >                         features |= BIT_ULL(VHOST_BACKEND_F_DESC_ASID);
> >                 if (vhost_vdpa_has_persistent_map(v))
> >                         features |=
> > BIT_ULL(VHOST_BACKEND_F_IOTLB_PERSIST);
> > +               if (v->noiommu_en)
> > +                       features |= BIT_ULL(VHOST_BACKEND_F_NOIOMMU);
> >                 features |= vhost_vdpa_get_backend_features(v);
> >                 if (copy_to_user(featurep, &features, sizeof(features)))
> >                         r = -EFAULT;
> > diff --git a/include/uapi/linux/vhost_types.h
> > b/include/uapi/linux/vhost_types.h
> > index d7656908f730..dda673c3456a 100644
> > --- a/include/uapi/linux/vhost_types.h
> > +++ b/include/uapi/linux/vhost_types.h
> > @@ -192,5 +192,7 @@ struct vhost_vdpa_iova_range {
> >  #define VHOST_BACKEND_F_DESC_ASID    0x7
> >  /* IOTLB don't flush memory mapping across device reset */  #define
> > VHOST_BACKEND_F_IOTLB_PERSIST  0x8
> > +/* Enables the device to operate in NO-IOMMU mode as well */ #define
> > +VHOST_BACKEND_F_NOIOMMU  0x9
> >
> >  #endif
> > --
> > 2.25.1
> >



* Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-09-20 14:05 [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode Srujana Challa
  2024-09-20 14:05 ` [PATCH v2 1/2] vhost-vdpa: introduce module parameter for no-IOMMU mode Srujana Challa
  2024-09-20 14:05 ` [PATCH v2 2/2] vhost-vdpa: introduce NO-IOMMU backend feature bit Srujana Challa
@ 2024-10-01  8:47 ` Christoph Hellwig
  2024-10-14 13:18   ` [EXTERNAL] " Srujana Challa
  2 siblings, 1 reply; 27+ messages in thread
From: Christoph Hellwig @ 2024-10-01  8:47 UTC (permalink / raw)
  To: Srujana Challa
  Cc: virtualization, kvm, mst, jasowang, eperezma, ndabilpuram, jerinj

On Fri, Sep 20, 2024 at 07:35:28PM +0530, Srujana Challa wrote:
> This patchset introduces support for an UNSAFE, no-IOMMU mode in the
> vhost-vdpa driver. When enabled, this mode provides no device isolation,
> no DMA translation, no host kernel protection, and cannot be used for
> device assignment to virtual machines. It requires RAWIO permissions
> and will taint the kernel.
> 
> This mode requires enabling the "enable_vhost_vdpa_unsafe_noiommu_mode"
> option on the vhost-vdpa driver and also negotiate the feature flag
> VHOST_BACKEND_F_NOIOMMU. This mode would be useful to get
> better performance on specific low-end machines and can be leveraged
> by embedded platforms where applications run in controlled environment.

... and is completely broken and dangerous.



* RE: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-10-01  8:47 ` [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode Christoph Hellwig
@ 2024-10-14 13:18   ` Srujana Challa
  2024-10-15  3:48     ` Christoph Hellwig
  0 siblings, 1 reply; 27+ messages in thread
From: Srujana Challa @ 2024-10-14 13:18 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: virtualization@lists.linux.dev, kvm@vger.kernel.org,
	mst@redhat.com, jasowang@redhat.com, eperezma@redhat.com,
	Nithin Kumar Dabilpuram, Jerin Jacob

> On Fri, Sep 20, 2024 at 07:35:28PM +0530, Srujana Challa wrote:
> > This patchset introduces support for an UNSAFE, no-IOMMU mode in the
> > vhost-vdpa driver. When enabled, this mode provides no device
> > isolation, no DMA translation, no host kernel protection, and cannot
> > be used for device assignment to virtual machines. It requires RAWIO
> > permissions and will taint the kernel.
> >
> > This mode requires enabling the
> "enable_vhost_vdpa_unsafe_noiommu_mode"
> > option on the vhost-vdpa driver and also negotiate the feature flag
> > VHOST_BACKEND_F_NOIOMMU. This mode would be useful to get better
> > performance on specific low-end machines and can be leveraged by
> > embedded platforms where applications run in controlled environment.
> 
> ... and is completely broken and dangerous.
Based on the discussions in this thread https://www.spinics.net/lists/kvm/msg357569.html,
we have decided to proceed with this implementation. Could you please share any
alternative ideas or suggestions you might have?
Thanks.  


* Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-10-14 13:18   ` [EXTERNAL] " Srujana Challa
@ 2024-10-15  3:48     ` Christoph Hellwig
  2024-10-16 17:28       ` Srujana Challa
  2024-10-16 17:41       ` Michael S. Tsirkin
  0 siblings, 2 replies; 27+ messages in thread
From: Christoph Hellwig @ 2024-10-15  3:48 UTC (permalink / raw)
  To: Srujana Challa
  Cc: Christoph Hellwig, virtualization@lists.linux.dev,
	kvm@vger.kernel.org, mst@redhat.com, jasowang@redhat.com,
	eperezma@redhat.com, Nithin Kumar Dabilpuram, Jerin Jacob

On Mon, Oct 14, 2024 at 01:18:01PM +0000, Srujana Challa wrote:
> > On Fri, Sep 20, 2024 at 07:35:28PM +0530, Srujana Challa wrote:
> > > This patchset introduces support for an UNSAFE, no-IOMMU mode in the
> > > vhost-vdpa driver. When enabled, this mode provides no device
> > > isolation, no DMA translation, no host kernel protection, and cannot
> > > be used for device assignment to virtual machines. It requires RAWIO
> > > permissions and will taint the kernel.
> > >
> > > This mode requires enabling the
> > "enable_vhost_vdpa_unsafe_noiommu_mode"
> > > option on the vhost-vdpa driver and also negotiate the feature flag
> > > VHOST_BACKEND_F_NOIOMMU. This mode would be useful to get better
> > > performance on specific low-end machines and can be leveraged by
> > > embedded platforms where applications run in controlled environment.
> > 
> > ... and is completely broken and dangerous.
> Based on the discussions in this thread https://www.spinics.net/lists/kvm/msg357569.html,
> we have decided to proceed with this implementation. Could you please share any
> alternative ideas or suggestions you might have?

Don't do this.  It is inherently unsafe and dangerous, and there is no
valid reason to implement it.

Double-Nacked-by: Christoph Hellwig <hch@lst.de>


* RE: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-10-15  3:48     ` Christoph Hellwig
@ 2024-10-16 17:28       ` Srujana Challa
  2024-10-17  6:19         ` Christoph Hellwig
  2024-10-16 17:41       ` Michael S. Tsirkin
  1 sibling, 1 reply; 27+ messages in thread
From: Srujana Challa @ 2024-10-16 17:28 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: virtualization@lists.linux.dev, kvm@vger.kernel.org,
	mst@redhat.com, jasowang@redhat.com, eperezma@redhat.com,
	Nithin Kumar Dabilpuram, Jerin Jacob

> On Mon, Oct 14, 2024 at 01:18:01PM +0000, Srujana Challa wrote:
> > > On Fri, Sep 20, 2024 at 07:35:28PM +0530, Srujana Challa wrote:
> > > > This patchset introduces support for an UNSAFE, no-IOMMU mode in
> > > > the vhost-vdpa driver. When enabled, this mode provides no device
> > > > isolation, no DMA translation, no host kernel protection, and
> > > > cannot be used for device assignment to virtual machines. It
> > > > requires RAWIO permissions and will taint the kernel.
> > > >
> > > > This mode requires enabling the
> > > "enable_vhost_vdpa_unsafe_noiommu_mode"
> > > > option on the vhost-vdpa driver and also negotiate the feature
> > > > flag VHOST_BACKEND_F_NOIOMMU. This mode would be useful to get
> > > > better performance on specific low-end machines and can be
> > > > leveraged by embedded platforms where applications run in controlled
> environment.
> > >
> > > ... and is completely broken and dangerous.
> > Based on the discussions in this thread
> > https://www.spinics.net/lists/kvm/msg357569.html, we have decided to
> > proceed with this implementation. Could you please share any
> > alternative ideas or suggestions you might have?
> 
> Don't do this.  It is inherently unsafe and dangerous, and there is no valid
> reason to implement it.
> 
> Double-Nacked-by: Christoph Hellwig <hch@lst.de>

When using the DPDK virtio user PMD, we’ve noticed a significant 70%
performance improvement when IOMMU is disabled on specific low-end
x86 machines. This performance improvement can be particularly
advantageous for embedded platforms where applications operate in
controlled environments. Therefore, we believe supporting the intel_iommu=off
mode is beneficial.

Thanks. 


* Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-10-15  3:48     ` Christoph Hellwig
  2024-10-16 17:28       ` Srujana Challa
@ 2024-10-16 17:41       ` Michael S. Tsirkin
  2024-10-17  6:16         ` Christoph Hellwig
  1 sibling, 1 reply; 27+ messages in thread
From: Michael S. Tsirkin @ 2024-10-16 17:41 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Srujana Challa, virtualization@lists.linux.dev,
	kvm@vger.kernel.org, jasowang@redhat.com, eperezma@redhat.com,
	Nithin Kumar Dabilpuram, Jerin Jacob

On Mon, Oct 14, 2024 at 08:48:27PM -0700, Christoph Hellwig wrote:
> On Mon, Oct 14, 2024 at 01:18:01PM +0000, Srujana Challa wrote:
> > > On Fri, Sep 20, 2024 at 07:35:28PM +0530, Srujana Challa wrote:
> > > > This patchset introduces support for an UNSAFE, no-IOMMU mode in the
> > > > vhost-vdpa driver. When enabled, this mode provides no device
> > > > isolation, no DMA translation, no host kernel protection, and cannot
> > > > be used for device assignment to virtual machines. It requires RAWIO
> > > > permissions and will taint the kernel.
> > > >
> > > > This mode requires enabling the
> > > "enable_vhost_vdpa_unsafe_noiommu_mode"
> > > > option on the vhost-vdpa driver and also negotiate the feature flag
> > > > VHOST_BACKEND_F_NOIOMMU. This mode would be useful to get better
> > > > performance on specific low-end machines and can be leveraged by
> > > > embedded platforms where applications run in controlled environment.
> > > 
> > > ... and is completely broken and dangerous.
> > Based on the discussions in this thread https://www.spinics.net/lists/kvm/msg357569.html,
> > we have decided to proceed with this implementation. Could you please share any
> > alternative ideas or suggestions you might have?
> 
> Don't do this.  It is inherently unsafe and dangerous and there is no
> valid reason to implement it.
> 
> Double-Nacked-by: Christoph Hellwig <hch@lst.de>

It's basically because vfio does, so we have to follow suit.

-- 
MST


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-10-16 17:41       ` Michael S. Tsirkin
@ 2024-10-17  6:16         ` Christoph Hellwig
  2024-10-20  0:16           ` Michael S. Tsirkin
  0 siblings, 1 reply; 27+ messages in thread
From: Christoph Hellwig @ 2024-10-17  6:16 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Christoph Hellwig, Srujana Challa, virtualization@lists.linux.dev,
	kvm@vger.kernel.org, jasowang@redhat.com, eperezma@redhat.com,
	Nithin Kumar Dabilpuram, Jerin Jacob

On Wed, Oct 16, 2024 at 01:41:51PM -0400, Michael S. Tsirkin wrote:
> It's basically because vfio does, so we have to follow suit.

That's a very bold argument, especially without any rationale of

 a) why you need to match the feature set
 b) how even adding it to vfio was a good idea


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-10-16 17:28       ` Srujana Challa
@ 2024-10-17  6:19         ` Christoph Hellwig
  2024-10-17  8:53           ` Srujana Challa
  0 siblings, 1 reply; 27+ messages in thread
From: Christoph Hellwig @ 2024-10-17  6:19 UTC (permalink / raw)
  To: Srujana Challa
  Cc: Christoph Hellwig, virtualization@lists.linux.dev,
	kvm@vger.kernel.org, mst@redhat.com, jasowang@redhat.com,
	eperezma@redhat.com, Nithin Kumar Dabilpuram, Jerin Jacob

On Wed, Oct 16, 2024 at 05:28:27PM +0000, Srujana Challa wrote:
> When using the DPDK virtio user PMD, we’ve noticed a significant 70%
> performance improvement when IOMMU is disabled on specific low-end
> x86 machines. This performance improvement can be particularly
> advantageous for embedded platforms where applications operate in
> controlled environments. Therefore, we believe supporting the intel_iommu=off
> mode is beneficial.

While making the system completely unsafe to use.  Maybe you should
fix your stack to use the iommu more intelligently instead?


^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-10-17  6:19         ` Christoph Hellwig
@ 2024-10-17  8:53           ` Srujana Challa
  2024-10-18  4:54             ` Jason Wang
  2024-10-18  5:24             ` Christoph Hellwig
  0 siblings, 2 replies; 27+ messages in thread
From: Srujana Challa @ 2024-10-17  8:53 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: virtualization@lists.linux.dev, kvm@vger.kernel.org,
	mst@redhat.com, jasowang@redhat.com, eperezma@redhat.com,
	Nithin Kumar Dabilpuram, Jerin Jacob

> Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-
> IOMMU mode
> 
> On Wed, Oct 16, 2024 at 05:28:27PM +0000, Srujana Challa wrote:
> > When using the DPDK virtio user PMD, we’ve noticed a significant 70%
> > performance improvement when IOMMU is disabled on specific low-end
> > x86 machines. This performance improvement can be particularly
> > advantageous for embedded platforms where applications operate in
> > controlled environments. Therefore, we believe supporting the
> > intel_iommu=off mode is beneficial.
> 
> While making the system completely unsafe to use.  Maybe you should fix
> your stack to use the iommu more intelligently instead?

We observed better performance with "intel_iommu=on" in high-end x86 machines,
indicating that the performance limitations are specific to low-end x86 hardware.
This presents a trade-off between performance and security. Since intel_iommu
is enabled by default, users who prioritize security over performance do not need to
disable this option.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-10-17  8:53           ` Srujana Challa
@ 2024-10-18  4:54             ` Jason Wang
  2024-10-18  5:24             ` Christoph Hellwig
  1 sibling, 0 replies; 27+ messages in thread
From: Jason Wang @ 2024-10-18  4:54 UTC (permalink / raw)
  To: Srujana Challa
  Cc: Christoph Hellwig, virtualization@lists.linux.dev,
	kvm@vger.kernel.org, mst@redhat.com, eperezma@redhat.com,
	Nithin Kumar Dabilpuram, Jerin Jacob

On Thu, Oct 17, 2024 at 4:53 PM Srujana Challa <schalla@marvell.com> wrote:
>
> > Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-
> > IOMMU mode
> >
> > On Wed, Oct 16, 2024 at 05:28:27PM +0000, Srujana Challa wrote:
> > > When using the DPDK virtio user PMD, we’ve noticed a significant 70%
> > > performance improvement when IOMMU is disabled on specific low-end
> > > x86 machines. This performance improvement can be particularly
> > > advantageous for embedded platforms where applications operate in
> > > controlled environments. Therefore, we believe supporting the
> > > intel_iommu=off mode is beneficial.
> >
> > While making the system completely unsafe to use.  Maybe you should fix
> > your stack to use the iommu more intelligently instead?
>
> We observed better performance with "intel_iommu=on" in high-end x86 machines,
> indicating that the performance limitations are specific to low-end x86 hardware.

Do you have any analysis on why Intel IOMMU is slow on "low-end" x86
hardware? Anyhow, we can ask Intel IOMMU experts to help here.

Thanks

> This presents a trade-off between performance and security. Since intel_iommu
> is enabled by default, users who prioritize security over performance do not need to
> disable this option.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-10-17  8:53           ` Srujana Challa
  2024-10-18  4:54             ` Jason Wang
@ 2024-10-18  5:24             ` Christoph Hellwig
  2024-10-18 13:08               ` Srujana Challa
  1 sibling, 1 reply; 27+ messages in thread
From: Christoph Hellwig @ 2024-10-18  5:24 UTC (permalink / raw)
  To: Srujana Challa
  Cc: Christoph Hellwig, virtualization@lists.linux.dev,
	kvm@vger.kernel.org, mst@redhat.com, jasowang@redhat.com,
	eperezma@redhat.com, Nithin Kumar Dabilpuram, Jerin Jacob

On Thu, Oct 17, 2024 at 08:53:08AM +0000, Srujana Challa wrote:
> We observed better performance with "intel_iommu=on" in high-end x86 machines,
> indicating that the performance limitations are specific to low-end x86 hardware.

What does "low-end" vs "high-end" mean?  Atom vs other cores?

> This presents a trade-off between performance and security. Since intel_iommu
> is enabled by default, users who prioritize security over performance do not need to
> disable this option.

Either way, just disabling essential protection because it is slow
is a no-go.  We'll need to either fix your issues, or you need to use
more suitable hardware for your workload.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-10-18  5:24             ` Christoph Hellwig
@ 2024-10-18 13:08               ` Srujana Challa
  0 siblings, 0 replies; 27+ messages in thread
From: Srujana Challa @ 2024-10-18 13:08 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: virtualization@lists.linux.dev, kvm@vger.kernel.org,
	mst@redhat.com, jasowang@redhat.com, eperezma@redhat.com,
	Nithin Kumar Dabilpuram, Jerin Jacob

> Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-
> IOMMU mode
> 
> On Thu, Oct 17, 2024 at 08:53:08AM +0000, Srujana Challa wrote:
> > We observed better performance with "intel_iommu=on" in high-end x86
> > machines, indicating that the performance limitations are specific to low-
> end x86 hardware.
> 
> What does "low-end" vs "high-end" mean?  Atom vs other cores?
 High-end: Intel(R) Xeon(R) Platinum 8462Y+, 64 cores
 Low-end:  13th Gen Intel(R) Core(TM) i9-13900K, 32 cores
> > This presents a trade-off between performance and security. Since
> > intel_iommu is enabled by default, users who prioritize security over
> > performance do not need to disable this option.
> 
> Either way, just disabling essential protection because it is slow is a no-go.
> We'll need to either fix your issues, or you need to use more suitable
> hardware for your workload.
I disagree. Why pay more for hardware if the use case does not demand it?
For example, in an embedded environment where only trusted applications are
running. The same thing is done for the VFIO scheme. I don't understand your
reservation about a mode you are not planning to use, when the default stays
protected. There are a lot of kernel options which make the right trade-off
between parameters like performance, security, power and HW cost.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-10-17  6:16         ` Christoph Hellwig
@ 2024-10-20  0:16           ` Michael S. Tsirkin
  2024-10-23  6:58             ` Christoph Hellwig
  0 siblings, 1 reply; 27+ messages in thread
From: Michael S. Tsirkin @ 2024-10-20  0:16 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Srujana Challa, virtualization@lists.linux.dev,
	kvm@vger.kernel.org, jasowang@redhat.com, eperezma@redhat.com,
	Nithin Kumar Dabilpuram, Jerin Jacob

On Wed, Oct 16, 2024 at 11:16:08PM -0700, Christoph Hellwig wrote:
> On Wed, Oct 16, 2024 at 01:41:51PM -0400, Michael S. Tsirkin wrote:
> > It's basically because vfio does, so we have to follow suit.
> 
> That's a very bold argument, especially without any rationale of
> 
>  a) why you need to match the feature set

Because people want to move from some vendor specific solution with vfio
to a standard vdpa compatible one with vdpa.

We could just block them since we don't support tainted kernels anyway,
but it's a step in the right direction since it allows moving to
virtio, and the kernel is tainted so no big support costs (although
QE costs do exist, as with any code).

>  b) how even adding it to vfio was agood idea

That ship has sailed.

-- 
MST


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-10-20  0:16           ` Michael S. Tsirkin
@ 2024-10-23  6:58             ` Christoph Hellwig
  2024-10-23  8:19               ` Michael S. Tsirkin
  0 siblings, 1 reply; 27+ messages in thread
From: Christoph Hellwig @ 2024-10-23  6:58 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Christoph Hellwig, Srujana Challa, virtualization@lists.linux.dev,
	kvm@vger.kernel.org, jasowang@redhat.com, eperezma@redhat.com,
	Nithin Kumar Dabilpuram, Jerin Jacob

On Sat, Oct 19, 2024 at 08:16:44PM -0400, Michael S. Tsirkin wrote:
> Because people want to move from some vendor specific solution with vfio
> to a standard vdpa compatible one with vdpa.

So now you have a want for new use cases and you turn that into a must
for supporting completely insecure and dangerous crap.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-10-23  6:58             ` Christoph Hellwig
@ 2024-10-23  8:19               ` Michael S. Tsirkin
  2024-10-24  9:05                 ` Christoph Hellwig
  0 siblings, 1 reply; 27+ messages in thread
From: Michael S. Tsirkin @ 2024-10-23  8:19 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Srujana Challa, virtualization@lists.linux.dev,
	kvm@vger.kernel.org, jasowang@redhat.com, eperezma@redhat.com,
	Nithin Kumar Dabilpuram, Jerin Jacob

On Tue, Oct 22, 2024 at 11:58:19PM -0700, Christoph Hellwig wrote:
> On Sat, Oct 19, 2024 at 08:16:44PM -0400, Michael S. Tsirkin wrote:
> > Because people want to move from some vendor specific solution with vfio
> > to a standard vdpa compatible one with vdpa.
> 
> So now you have a want for new use cases and you turn that into a must
> for supporting completely insecure and dangerous crap.

Nope.

kernel is tainted -> unsupported

whoever supports tainted kernels is already in dangerous waters.

-- 
MST


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-10-23  8:19               ` Michael S. Tsirkin
@ 2024-10-24  9:05                 ` Christoph Hellwig
  2024-11-06 12:38                   ` Srujana Challa
  2024-11-06 18:14                   ` Michael S. Tsirkin
  0 siblings, 2 replies; 27+ messages in thread
From: Christoph Hellwig @ 2024-10-24  9:05 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Christoph Hellwig, Srujana Challa, virtualization@lists.linux.dev,
	kvm@vger.kernel.org, jasowang@redhat.com, eperezma@redhat.com,
	Nithin Kumar Dabilpuram, Jerin Jacob, Greg Kroah-Hartman

On Wed, Oct 23, 2024 at 04:19:02AM -0400, Michael S. Tsirkin wrote:
> On Tue, Oct 22, 2024 at 11:58:19PM -0700, Christoph Hellwig wrote:
> > On Sat, Oct 19, 2024 at 08:16:44PM -0400, Michael S. Tsirkin wrote:
> > > Because people want to move from some vendor specific solution with vfio
> > > to a standard vdpa compatible one with vdpa.
> > 
> > So now you have a want for new use cases and you turn that into a must
> > for supporting completely insecure and dangerous crap.
> 
> Nope.
> 
> kernel is tainted -> unsupported
> 
> whoever supports tainted kernels is already in dangerous waters.

That's not a carte blanche for doing whatever crazy stuff you
want.

And if you don't trust me I'll add Greg who has a very clear opinion
on IOMMU-bypassing user I/O hooks in the style of the uio driver as
well I think :)

^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-10-24  9:05                 ` Christoph Hellwig
@ 2024-11-06 12:38                   ` Srujana Challa
  2024-11-06 15:45                     ` Christoph Hellwig
  2024-11-06 18:11                     ` Michael S. Tsirkin
  2024-11-06 18:14                   ` Michael S. Tsirkin
  1 sibling, 2 replies; 27+ messages in thread
From: Srujana Challa @ 2024-11-06 12:38 UTC (permalink / raw)
  To: Christoph Hellwig, Michael S. Tsirkin
  Cc: virtualization@lists.linux.dev, kvm@vger.kernel.org,
	jasowang@redhat.com, eperezma@redhat.com, Nithin Kumar Dabilpuram,
	Jerin Jacob, Greg Kroah-Hartman

> Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-
> IOMMU mode
> 
> On Wed, Oct 23, 2024 at 04:19:02AM -0400, Michael S. Tsirkin wrote:
> > On Tue, Oct 22, 2024 at 11:58:19PM -0700, Christoph Hellwig wrote:
> > > On Sat, Oct 19, 2024 at 08:16:44PM -0400, Michael S. Tsirkin wrote:
> > > > Because people want to move from some vendor specific solution
> > > > with vfio to a standard vdpa compatible one with vdpa.
> > >
> > > So now you have a want for new use cases and you turn that into a
> > > must for supporting completely insecure and dangerous crap.
> >
> > Nope.
> >
> > kernel is tainted -> unsupported
> >
> > whoever supports tainted kernels is already in dangerous waters.
> 
> That's not a carte blanche for doing whatever crazy stuff you want.
> 
> And if you don't trust me I'll add Greg who has a very clear opinion on
> IOMMU-bypassing user I/O hooks in the style of the uio driver as well I think
> :)

It is going in circles, let me give the summary,
Issue: We need to address the lack of no-IOMMU support in the vhost vDPA driver for better performance.
Measured Performance: On the machine "13th Gen Intel(R) Core(TM) i9-13900K, 32 Cores", we observed
a performance improvement of 70 - 80% with intel_iommu=off when we run high-throughput network
packet processing.
Rationale for Fix: High-end machines which give better performance with IOMMU are very expensive,
and certain use cases, such as embedded environments running trusted applications, do not require
the security features provided by the IOMMU.
Initial Approach: We initially considered a driver-based solution, specifically integrating no-IOMMU
support into Marvell’s octep-vdpa driver.
Initial Community Feedback: The community suggested adopting a VFIO-like scheme to make the solution
more generic and widely applicable.
Decision Point: Should we pursue a generic approach for no-IOMMU support in the vhost vDPA driver,
or should we implement a driver-specific solution?

Thanks,
Srujana.

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-11-06 12:38                   ` Srujana Challa
@ 2024-11-06 15:45                     ` Christoph Hellwig
  2024-11-07  6:08                       ` Srujana Challa
  2024-11-06 18:11                     ` Michael S. Tsirkin
  1 sibling, 1 reply; 27+ messages in thread
From: Christoph Hellwig @ 2024-11-06 15:45 UTC (permalink / raw)
  To: Srujana Challa
  Cc: Christoph Hellwig, Michael S. Tsirkin,
	virtualization@lists.linux.dev, kvm@vger.kernel.org,
	jasowang@redhat.com, eperezma@redhat.com, Nithin Kumar Dabilpuram,
	Jerin Jacob, Greg Kroah-Hartman

On Wed, Nov 06, 2024 at 12:38:02PM +0000, Srujana Challa wrote:
> It is going in circles, let me give the summary,
> Issue: We need to address the lack of no-IOMMU support in the vhost vDPA driver for better performance.
> Measured Performance: On the machine "13th Gen Intel(R) Core(TM) i9-13900K, 32 Cores", we observed

Looks like you are going in circles indeed.  Lack of performance is never
a reason to disable the basic memory safety for userspace drivers.

The (also quite bad) reason why vfio-noiommu was added was to support
hardware entirely without an IOMMU.

There is absolutely no reason to add kernel code for new methods of
unsafe userspace I/O without an IOMMU ever.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-11-06 12:38                   ` Srujana Challa
  2024-11-06 15:45                     ` Christoph Hellwig
@ 2024-11-06 18:11                     ` Michael S. Tsirkin
  1 sibling, 0 replies; 27+ messages in thread
From: Michael S. Tsirkin @ 2024-11-06 18:11 UTC (permalink / raw)
  To: Srujana Challa
  Cc: Christoph Hellwig, virtualization@lists.linux.dev,
	kvm@vger.kernel.org, jasowang@redhat.com, eperezma@redhat.com,
	Nithin Kumar Dabilpuram, Jerin Jacob, Greg Kroah-Hartman

On Wed, Nov 06, 2024 at 12:38:02PM +0000, Srujana Challa wrote:
> > Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-
> > IOMMU mode
> > 
> > On Wed, Oct 23, 2024 at 04: 19: 02AM -0400, Michael S. Tsirkin wrote: > On
> > Tue, Oct 22, 2024 at 11: 58: 19PM -0700, Christoph Hellwig wrote: > > On Sat,
> > Oct 19, 2024 at 08: 16: 44PM -0400, Michael S. Tsirkin wrote: > > > Because
> > 
> > On Wed, Oct 23, 2024 at 04:19:02AM -0400, Michael S. Tsirkin wrote:
> > > On Tue, Oct 22, 2024 at 11:58:19PM -0700, Christoph Hellwig wrote:
> > > > On Sat, Oct 19, 2024 at 08:16:44PM -0400, Michael S. Tsirkin wrote:
> > > > > Because people want to move from some vendor specific solution
> > > > > with vfio to a standard vdpa compatible one with vdpa.
> > > >
> > > > So now you have a want for new use cases and you turn that into a
> > > > must for supporting completely insecure and dangerous crap.
> > >
> > > Nope.
> > >
> > > kernel is tainted -> unsupported
> > >
> > > whoever supports tainted kernels is already in dangerous waters.
> > 
> > That's not a carte blanche for doing whatever crazy stuff you want.
> > 
> > And if you don't trust me I'll add Greg who has a very clear opinion on
> > IOMMU-bypassing user I/O hooks in the style of the uio driver as well I think
> > :)
> 
> It is going in circles, let me give the summary,
> Issue: We need to address the lack of no-IOMMU support in the vhost vDPA driver for better performance.
> Measured Performance: On the machine "13th Gen Intel(R) Core(TM) i9-13900K, 32 Cores", we observed
> a performance improvement of 70 - 80% with intel_iommu=off when we run high-throughput network
> packet processing.
> Rationale for Fix: High-end machines which gives better performance with IOMMU are very expensive,
> and certain use cases, such as embedded environment and trusted applications, do not require
> the security features provided by IOMMU.
> Initial Approach: We initially considered a driver-based solution, specifically integrating no-IOMMU
> support into Marvell’s octep-vdpa driver.
> Initial Community Feedback: The community suggested adopting a VFIO-like scheme to make the solution
> more generic and widely applicable.
> Decision Point: Should we pursue a generic approach for no-IOMMU support in the vhost vDPA driver,
> or should we implement a driver-specific solution?
> 
> Thanks,
> Srujana.

This point does not matter for Christoph.

-- 
MST


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-10-24  9:05                 ` Christoph Hellwig
  2024-11-06 12:38                   ` Srujana Challa
@ 2024-11-06 18:14                   ` Michael S. Tsirkin
  1 sibling, 0 replies; 27+ messages in thread
From: Michael S. Tsirkin @ 2024-11-06 18:14 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Srujana Challa, virtualization@lists.linux.dev,
	kvm@vger.kernel.org, jasowang@redhat.com, eperezma@redhat.com,
	Nithin Kumar Dabilpuram, Jerin Jacob, Greg Kroah-Hartman

On Thu, Oct 24, 2024 at 02:05:43AM -0700, Christoph Hellwig wrote:
> On Wed, Oct 23, 2024 at 04:19:02AM -0400, Michael S. Tsirkin wrote:
> > On Tue, Oct 22, 2024 at 11:58:19PM -0700, Christoph Hellwig wrote:
> > > On Sat, Oct 19, 2024 at 08:16:44PM -0400, Michael S. Tsirkin wrote:
> > > > Because people want to move from some vendor specific solution with vfio
> > > > to a standard vdpa compatible one with vdpa.
> > > 
> > > So now you have a want for new use cases and you turn that into a must
> > > for supporting completely insecure and dangerous crap.
> > 
> > Nope.
> > 
> > kernel is tainted -> unsupported
> > 
> > whoever supports tainted kernels is already in dangerous waters.
> 
> That's not a carte blanche for doing whatever crazy stuff you
> want.
> 
> And if you don't trust me I'll add Greg who has a very clear opinion
> on IOMMU-bypassing user I/O hooks in the style of the uio driver as
> well I think :)

As a supporter of one of uio drivers, I agree with him.

-- 
MST


^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-11-06 15:45                     ` Christoph Hellwig
@ 2024-11-07  6:08                       ` Srujana Challa
  2024-11-12  7:11                         ` Srujana Challa
  0 siblings, 1 reply; 27+ messages in thread
From: Srujana Challa @ 2024-11-07  6:08 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Michael S. Tsirkin, virtualization@lists.linux.dev,
	kvm@vger.kernel.org, jasowang@redhat.com, eperezma@redhat.com,
	Nithin Kumar Dabilpuram, Jerin Jacob, Greg Kroah-Hartman

> Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-
> IOMMU mode
> 
> On Wed, Nov 06, 2024 at 12:38:02PM +0000, Srujana Challa wrote:
> > It is going in circles, let me give the summary,
> > Issue: We need to address the lack of no-IOMMU support in the vhost vDPA
> driver for better performance.
> > Measured Performance: On the machine "13th Gen Intel(R) Core(TM)
> > i9-13900K, 32 Cores", we observed
> 
> Looks like you are going in circles indeed.  Lack of performance is never a
> reason to disable the basic memory safety for userspace drivers.
If security is a priority, the IOMMU should stay enabled. In that case,
this patch wouldn’t cause any issues, right?
> 
> The (also quite bad) reason why vfio-noiommu was added was to support
> hardware entirely without an IOMMU.
> 
> There is absolutely no reason to add kernel code for new methods of unsafe
> userspace I/O without an IOMMU ever.


^ permalink raw reply	[flat|nested] 27+ messages in thread

* RE: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode
  2024-11-07  6:08                       ` Srujana Challa
@ 2024-11-12  7:11                         ` Srujana Challa
  0 siblings, 0 replies; 27+ messages in thread
From: Srujana Challa @ 2024-11-12  7:11 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: Michael S. Tsirkin, virtualization@lists.linux.dev,
	kvm@vger.kernel.org, jasowang@redhat.com, eperezma@redhat.com,
	Nithin Kumar Dabilpuram, Jerin Jacob, Greg Kroah-Hartman

> Subject: RE: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for NO-
> IOMMU mode
> 
> > Subject: Re: [EXTERNAL] Re: [PATCH v2 0/2] vhost-vdpa: Add support for
> > NO- IOMMU mode
> >
> > On Wed, Nov 06, 2024 at 12:38:02PM +0000, Srujana Challa wrote:
> > > It is going in circles, let me give the summary,
> > > Issue: We need to address the lack of no-IOMMU support in the vhost
> > > vDPA
> > driver for better performance.
> > > Measured Performance: On the machine "13th Gen Intel(R) Core(TM)
> > > i9-13900K, 32 Cores", we observed
> >
> > Looks like you are going in circles indeed.  Lack of performance is
> > never a reason to disable the basic memory safety for userspace drivers.
> If security is a priority, the IOMMU should stay enabled. In that case, this
> patch wouldn’t cause any issues, right?
Another important aspect of this patch is that it enables vhost-vdpa
functionality on machines that do not support an IOMMU at all, such as the
Intel® Core™ i5-2320 CPU @ 3.00GHz. I believe this enhancement is essential
as it broadens the compatibility of our system, allowing users with embedded
or less advanced hardware to benefit from vhost-vdpa features.
Thanks.

> >
> > The (also quite bad) reason why vfio-noiommu was added was to support
> > hardware entirely without an IOMMU.
> >
> > There is absolutely no reason to add kernel code for new methods of
> > unsafe userspace I/O without an IOMMU ever.


^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread, other threads:[~2024-11-12  7:11 UTC | newest]

Thread overview: 27+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-09-20 14:05 [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode Srujana Challa
2024-09-20 14:05 ` [PATCH v2 1/2] vhost-vdpa: introduce module parameter for no-IOMMU mode Srujana Challa
2024-09-21 17:28   ` kernel test robot
2024-09-20 14:05 ` [PATCH v2 2/2] vhost-vdpa: introduce NO-IOMMU backend feature bit Srujana Challa
2024-09-24  7:43   ` Jason Wang
2024-09-24 10:01     ` [EXTERNAL] " Srujana Challa
2024-10-01  8:47 ` [PATCH v2 0/2] vhost-vdpa: Add support for NO-IOMMU mode Christoph Hellwig
2024-10-14 13:18   ` [EXTERNAL] " Srujana Challa
2024-10-15  3:48     ` Christoph Hellwig
2024-10-16 17:28       ` Srujana Challa
2024-10-17  6:19         ` Christoph Hellwig
2024-10-17  8:53           ` Srujana Challa
2024-10-18  4:54             ` Jason Wang
2024-10-18  5:24             ` Christoph Hellwig
2024-10-18 13:08               ` Srujana Challa
2024-10-16 17:41       ` Michael S. Tsirkin
2024-10-17  6:16         ` Christoph Hellwig
2024-10-20  0:16           ` Michael S. Tsirkin
2024-10-23  6:58             ` Christoph Hellwig
2024-10-23  8:19               ` Michael S. Tsirkin
2024-10-24  9:05                 ` Christoph Hellwig
2024-11-06 12:38                   ` Srujana Challa
2024-11-06 15:45                     ` Christoph Hellwig
2024-11-07  6:08                       ` Srujana Challa
2024-11-12  7:11                         ` Srujana Challa
2024-11-06 18:11                     ` Michael S. Tsirkin
2024-11-06 18:14                   ` Michael S. Tsirkin

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).