From: Nicolin Chen <nicolinc@nvidia.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: <kevin.tian@intel.com>, <corbet@lwn.net>, <will@kernel.org>,
<joro@8bytes.org>, <suravee.suthikulpanit@amd.com>,
<robin.murphy@arm.com>, <dwmw2@infradead.org>,
<baolu.lu@linux.intel.com>, <shuah@kernel.org>,
<linux-kernel@vger.kernel.org>, <iommu@lists.linux.dev>,
<linux-arm-kernel@lists.infradead.org>,
<linux-kselftest@vger.kernel.org>, <linux-doc@vger.kernel.org>,
<eric.auger@redhat.com>, <jean-philippe@linaro.org>,
<mdf@kernel.org>, <mshavit@google.com>,
<shameerali.kolothum.thodi@huawei.com>, <smostafa@google.com>,
<ddutile@redhat.com>, <yi.l.liu@intel.com>,
<patches@lists.linux.dev>
Subject: Re: [PATCH v6 05/14] iommufd: Add IOMMUFD_OBJ_VEVENTQ and IOMMUFD_CMD_VEVENTQ_ALLOC
Date: Tue, 18 Feb 2025 09:47:50 -0800
Message-ID: <Z7THxrq/6sYP/AIi@nvidia.com>
In-Reply-To: <20250218152959.GB4099685@nvidia.com>
On Tue, Feb 18, 2025 at 11:29:59AM -0400, Jason Gunthorpe wrote:
> On Fri, Jan 24, 2025 at 04:30:34PM -0800, Nicolin Chen wrote:
> > +	list_add_tail(&vevent->node, &eventq->deliver);
> > +	vevent->on_list = true;
> > +	vevent->header.sequence = atomic_read(&veventq->sequence);
> > +	if (atomic_read(&veventq->sequence) == INT_MAX)
> > +		atomic_set(&veventq->sequence, 0);
> > +	else
> > +		atomic_inc(&veventq->sequence);
> > +	spin_unlock(&eventq->lock);
>
> This is all locked, so we don't need veventq->sequence to be an atomic?
>
> The bounding can be done with some simple math:
>
> veventq->sequence = (veventq->sequence + 1) & INT_MAX;
Ack. Perhaps we can reuse eventq->lock to fence @num_events too.
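Something like this, I guess (just a sketch of the report path with a
plain int sequence, assuming both @sequence and @num_events stay under
eventq->lock):

	/* eventq->lock is held here, so a plain int sequence is enough */
	list_add_tail(&vevent->node, &eventq->deliver);
	vevent->on_list = true;
	vevent->header.sequence = veventq->sequence;
	/* Bound the counter to [0, INT_MAX] with the simple math above */
	veventq->sequence = (veventq->sequence + 1) & INT_MAX;
	spin_unlock(&eventq->lock);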
> > +static struct iommufd_vevent *
> > +iommufd_veventq_deliver_fetch(struct iommufd_veventq *veventq)
> > +{
> > +	struct iommufd_eventq *eventq = &veventq->common;
> > +	struct list_head *list = &eventq->deliver;
> > +	struct iommufd_vevent *vevent = NULL;
> > +
> > +	spin_lock(&eventq->lock);
> > +	if (!list_empty(list)) {
> > +		vevent = list_first_entry(list, struct iommufd_vevent, node);
> > +		list_del(&vevent->node);
> > +		vevent->on_list = false;
> > +	}
> > +	/* Make a copy of the overflow node for copy_to_user */
> > +	if (vevent == &veventq->overflow) {
> > +		vevent = kzalloc(sizeof(*vevent), GFP_ATOMIC);
> > +		if (vevent)
> > +			memcpy(vevent, &veventq->overflow, sizeof(*vevent));
> > +	}
>
> This error handling is wonky: if we can't allocate, then we shouldn't
> have done the list_del. Just return NULL, which will cause
> iommufd_veventq_fops_read() to exit, and userspace will try again.
OK.
We have two cases to support here:
1) Normal vevent node -- list_del and return the node.
2) Overflow node -- list_del and return a copy.
I think we can do:
	if (!list_empty(list)) {
		struct iommufd_vevent *next;

		next = list_first_entry(list, struct iommufd_vevent, node);
		if (next == &veventq->overflow) {
			/* Make a copy of the overflow node for copy_to_user */
			vevent = kzalloc(sizeof(*vevent), GFP_ATOMIC);
			if (!vevent)
				goto out_unlock;
		}
		list_del(&next->node);
		if (vevent)
			memcpy(vevent, next, sizeof(*vevent));
		else
			vevent = next;
	}
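FWIW, putting that back into the helper (untested; keeping the on_list
clearing from the current version), the whole thing would look roughly
like:

static struct iommufd_vevent *
iommufd_veventq_deliver_fetch(struct iommufd_veventq *veventq)
{
	struct iommufd_eventq *eventq = &veventq->common;
	struct list_head *list = &eventq->deliver;
	struct iommufd_vevent *vevent = NULL;

	spin_lock(&eventq->lock);
	if (!list_empty(list)) {
		struct iommufd_vevent *next;

		next = list_first_entry(list, struct iommufd_vevent, node);
		if (next == &veventq->overflow) {
			/* Make a copy of the overflow node for copy_to_user */
			vevent = kzalloc(sizeof(*vevent), GFP_ATOMIC);
			if (!vevent)
				goto out_unlock;
		}
		list_del(&next->node);
		next->on_list = false;
		if (vevent)
			memcpy(vevent, next, sizeof(*vevent));
		else
			vevent = next;
	}
out_unlock:
	spin_unlock(&eventq->lock);
	return vevent;
}

Returning NULL on the kzalloc failure leaves the overflow node on the
list untouched, so the next read() simply retries it.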
> > @@ -403,6 +531,10 @@ static int iommufd_eventq_fops_release(struct inode *inode, struct file *filep)
> >  {
> >  	struct iommufd_eventq *eventq = filep->private_data;
> >
> > +	if (eventq->obj.type == IOMMUFD_OBJ_VEVENTQ) {
> > +		atomic_set(&eventq_to_veventq(eventq)->sequence, 0);
> > +		atomic_set(&eventq_to_veventq(eventq)->num_events, 0);
> > +	}
>
> Why? We are about to free the memory?
Ack. I was thinking about a re-entry via open(), but release() means
the fd is gone for good and user space can't open the same eventq
again, so there's no point resetting anything here.
> > +int iommufd_veventq_alloc(struct iommufd_ucmd *ucmd)
> > +{
> > +	struct iommu_veventq_alloc *cmd = ucmd->cmd;
> > +	struct iommufd_veventq *veventq;
> > +	struct iommufd_viommu *viommu;
> > +	int fdno;
> > +	int rc;
> > +
> > +	if (cmd->flags || cmd->type == IOMMU_VEVENTQ_TYPE_DEFAULT)
> > +		return -EOPNOTSUPP;
> > +	if (!cmd->veventq_depth)
> > +		return -EINVAL;
>
> Check __reserved for 0 too
Kevin is suggesting a 32-bit flag field, so I think we can drop
the __reserved in that case.
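However that lands, the checks at the top would then reduce to
something like (just sketching the idea; the exact layout depends on
how Kevin's suggestion gets applied):

	/* A 32-bit flags field leaves no separate __reserved to validate */
	if (cmd->flags || cmd->type == IOMMU_VEVENTQ_TYPE_DEFAULT)
		return -EOPNOTSUPP;
	if (!cmd->veventq_depth)
		return -EINVAL;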
Thanks
Nicolin