From: Nicolin Chen <nicolinc@nvidia.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: <kevin.tian@intel.com>, <corbet@lwn.net>, <will@kernel.org>,
<joro@8bytes.org>, <suravee.suthikulpanit@amd.com>,
<robin.murphy@arm.com>, <dwmw2@infradead.org>,
<baolu.lu@linux.intel.com>, <shuah@kernel.org>,
<linux-kernel@vger.kernel.org>, <iommu@lists.linux.dev>,
<linux-arm-kernel@lists.infradead.org>,
<linux-kselftest@vger.kernel.org>, <linux-doc@vger.kernel.org>,
<eric.auger@redhat.com>, <jean-philippe@linaro.org>,
<mdf@kernel.org>, <mshavit@google.com>,
<shameerali.kolothum.thodi@huawei.com>, <smostafa@google.com>,
<ddutile@redhat.com>, <yi.l.liu@intel.com>,
<patches@lists.linux.dev>
Subject: Re: [PATCH v5 08/14] iommufd/viommu: Add iommufd_viommu_report_event helper
Date: Tue, 21 Jan 2025 11:55:16 -0800
Message-ID: <Z4/7pGx6F1mpAUuV@nvidia.com>
In-Reply-To: <20250121183611.GY5556@nvidia.com>
On Tue, Jan 21, 2025 at 02:36:11PM -0400, Jason Gunthorpe wrote:
> On Mon, Jan 20, 2025 at 12:52:09PM -0800, Nicolin Chen wrote:
> > The counter of the number of events in the vEVENTQ could decrease
> > when userspace reads the queue. But you were saying "the number of
> > events that were sent into the queue", which is like a PROD index
> > that would keep growing but reset to 0 after UINT_MAX?
>
> yes
Ack. Then I think we should name it "index", alongside a "counter"
indicating the number of events currently in the queue. Or perhaps
a pair of consumer and producer indexes that wrap around at a limit.
> > > IOMMU_VEVENTQ_STATE_OVERFLOW with a 0 length event is seen if events
> > > have been lost and no subsequent events are present. It exists to
> > > ensure timely delivery of the overflow event to userspace. counter
> > > will be the sequence number of the next successful event.
> >
> > So userspace should first read the header to decide whether or not
> > to read a vEVENT. If the header indicates an overflow, it should skip
> > the vEVENT struct and read the next header?
>
> Yes, but there won't be a next header. overflow would always be the
> last thing in a read() response. If there is another event then
> overflow is indicated by non-monotonic count.
I am not 100% sure why "overflow would always be the last thing
in a read() response". I thought the kernel should immediately
report an overflow to user space when the vEVENTQ overflows.
Yet, thinking about this once again: user space actually has its
own queue. So there's probably no point in telling it about an
overflow the moment the kernel vEVENTQ overflows; it only needs
to know once its own user queue overflows, after it has read the
entire vEVENTQ, so it can trigger a vHW event/irq to the VM?
> > > If events are lost in the middle of the queue then flags will remain 0
> > > but counter will become non-monotonic. A counter delta > 1 indicates
> > > that many events have been lost.
> >
> > I don't quite get the "no subsequent events" v.s. "in the middle of
> > the queue"..
>
> I mean to suppress specific overflow events to userspace if the
> counter already fully indicates overflow.
>
> The purpose of the overflow event is specifically, and only, to
> indicate immediately that an overflow occurred at the end of the queue,
> and no additional events have been pushed since the overflow.
>
> Without this we could lose an event and userspace may not realize
> it for a long time.
I see. Because there are no further new events, there would be no
new index to indicate a gap. Thus, we need an overflow node.
> > The producer is the driver calling iommufd_viommu_report_event that
> > only produces a single vEVENT at a time. When the number of vEVENTs
> > in the vEVENTQ hits the @veventq_depth, it won't insert new vEVENTs
> > but add an overflow (or exception) node to the head of deliver list
> > and increase the producer index so the next vEVENT that can find an
> > empty space in the queue will have an index with a gap (delta >= 1)?
>
> Yes, but each new overflow should move the single preallocated
> overflow node back to the end of the list, and the read side should
> skip the overflow node if it is not the last entry in the list
If the number of events in the queue drops below @veventq_depth as
userspace consumes events from the queue, I think a new
iommufd_viommu_report_event call should delete the overflow node
from the end of the list, right? Since it can just insert a new
event into the list with a non-monotonic index.
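A rough model of the producer-side policy being discussed: a single
preallocated overflow marker is (re)placed at the tail whenever the
queue is full, and dropped again once a slot frees up, because the
index gap alone then marks the loss. This is a simplified sketch
with invented names, not the actual kernel list handling:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical queue state; overflow_tail stands in for the single
 * preallocated overflow node on the deliver list. */
struct veventq {
	unsigned int depth;      /* max events queued (@veventq_depth) */
	unsigned int num_events; /* events currently queued */
	unsigned int prod;       /* free-running producer index */
	bool overflow_tail;      /* overflow marker at end of list */
};

/* Report one event; returns true if it was queued, false if dropped. */
static bool veventq_report(struct veventq *q)
{
	q->prod++;	/* every event consumes an index, queued or not */
	if (q->num_events >= q->depth) {
		q->overflow_tail = true; /* (re)move marker to the tail */
		return false;
	}
	q->overflow_tail = false; /* index gap already marks any loss */
	q->num_events++;
	return true;
}
```

Under this model, the marker only survives when a drop was the most
recent activity, matching "overflow is always the last thing read".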
Thanks!
Nicolin