From: Nicolin Chen <nicolinc@nvidia.com>
To: "Tian, Kevin" <kevin.tian@intel.com>
Cc: Jason Gunthorpe <jgg@nvidia.com>,
"corbet@lwn.net" <corbet@lwn.net>,
"will@kernel.org" <will@kernel.org>,
"joro@8bytes.org" <joro@8bytes.org>,
"suravee.suthikulpanit@amd.com" <suravee.suthikulpanit@amd.com>,
"robin.murphy@arm.com" <robin.murphy@arm.com>,
"dwmw2@infradead.org" <dwmw2@infradead.org>,
"baolu.lu@linux.intel.com" <baolu.lu@linux.intel.com>,
"shuah@kernel.org" <shuah@kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
"linux-arm-kernel@lists.infradead.org"
<linux-arm-kernel@lists.infradead.org>,
"linux-kselftest@vger.kernel.org"
<linux-kselftest@vger.kernel.org>,
"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
"eric.auger@redhat.com" <eric.auger@redhat.com>,
"jean-philippe@linaro.org" <jean-philippe@linaro.org>,
"mdf@kernel.org" <mdf@kernel.org>,
"mshavit@google.com" <mshavit@google.com>,
"shameerali.kolothum.thodi@huawei.com"
<shameerali.kolothum.thodi@huawei.com>,
"smostafa@google.com" <smostafa@google.com>,
"ddutile@redhat.com" <ddutile@redhat.com>,
"Liu, Yi L" <yi.l.liu@intel.com>,
"patches@lists.linux.dev" <patches@lists.linux.dev>
Subject: Re: [PATCH v5 08/14] iommufd/viommu: Add iommufd_viommu_report_event helper
Date: Wed, 22 Jan 2025 11:54:52 -0800 [thread overview]
Message-ID: <Z5FNDOqkiPq90c16@nvidia.com> (raw)
In-Reply-To: <BN9PR11MB527600A5B8DC271075936A918CE12@BN9PR11MB5276.namprd11.prod.outlook.com>
On Wed, Jan 22, 2025 at 09:33:35AM +0000, Tian, Kevin wrote:
> > From: Nicolin Chen <nicolinc@nvidia.com>
> > Sent: Wednesday, January 22, 2025 3:16 PM
> >
> > On Tue, Jan 21, 2025 at 08:21:28PM -0400, Jason Gunthorpe wrote:
> > > On Tue, Jan 21, 2025 at 01:40:05PM -0800, Nicolin Chen wrote:
> > > > > There is also the minor detail of what happens if the hypervisor HW
> > > > > queue overflows - I don't know the answer here. It is security
> > > > > concerning since the VM can spam DMA errors at high rate. :|
> > > >
> > > > In my view, the hypervisor queue is the vHW queue for the VM, so
> > > > it should act like a HW, which means it's up to the guest kernel
> > > > driver that handles the high rate DMA errors..
> > >
> > > I'm mainly wondering what happens if the single physical kernel
> > > event queue overflows because it is DOS'd by a VM and the hypervisor
> > > cannot drain it fast enough?
> > >
> > > I haven't looked closely but is there some kind of rate limiting or
> > > otherwise to mitigate DOS attacks on the shared event queue from VMs?
> >
> > SMMUv3 reads the event out of the physical kernel event queue,
> > and adds that to faultq or veventq or prints it out. So, it'd
> > not overflow because of DOS? And all other drivers should do
> > the same?
> >
>
> "add that to faultq or eventq" could take time, or the irqthread
> could be preempted for various reasons, so there is always a
> window within which an overflow condition could occur because
> the smmu driver cannot fetch pending events in time.
Oh, I see..
> On VT-d the driver could disable reporting non-recoverable fault
> for a given device via a control bit in the PASID entry, but I didn't
> see a similar knob for PRQ.
ARM has an event-suppressing CD.R bit to disable event recording
for a device. However, the stage-1 CD is controlled by the guest
kernel or the VMM, which has control of that RAM..
ARM seems to also have an interesting event-merging feature:

    STE.MEV, bit [83]
    Merge Events arising from terminated transactions from this stream.
      0b0  Do not merge similar fault records
      0b1  Permit similar fault records to be merged
    The SMMU might be able to reduce the usage of the Event queue by
    coalescing fault records that share the same page granule of address,
    access type and SubstreamID.
    Setting MEV == 1 does not guarantee that faults will be coalesced.
    Setting MEV == 0 causes a physical SMMU to prevent coalescing of
    fault records, however, a hypervisor might not honour this setting
    if it deems a guest to be too verbose.
    Note: Software must expect, and be able to deal with, coalesced fault
    records even when MEV == 0.
Yet, the driver doesn't seem to set it at this moment.
Thanks
Nicolin