From: Alex Mastro <amastro@fb.com>
To: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Alex Mastro <amastro@fb.com>,
Alex Williamson <alex.williamson@redhat.com>,
Kevin Tian <kevin.tian@intel.com>,
"Bjorn Helgaas" <bhelgaas@google.com>,
David Reiss <dreiss@meta.com>, Joerg Roedel <joro@8bytes.org>,
Keith Busch <kbusch@kernel.org>,
Leon Romanovsky <leon@kernel.org>,
Li Zhe <lizhe.67@bytedance.com>,
Mahmoud Adam <mngyadam@amazon.de>,
Philipp Stanner <pstanner@redhat.com>,
Robin Murphy <robin.murphy@arm.com>,
Vivek Kasireddy <vivek.kasireddy@intel.com>,
"Will Deacon" <will@kernel.org>,
Yunxiang Li <Yunxiang.Li@amd.com>, <linux-kernel@vger.kernel.org>,
<iommu@lists.linux.dev>, <kvm@vger.kernel.org>
Subject: Re: [TECH TOPIC] vfio, iommufd: Enabling user space drivers to vend more granular access to client processes
Date: Fri, 19 Sep 2025 09:13:04 -0700
Message-ID: <20250919161305.417717-1-amastro@fb.com>
In-Reply-To: <20250918225739.GS1326709@ziepe.ca>
On Thu, Sep 18, 2025 at 07:57:39PM -0300, Jason Gunthorpe wrote:
> I'm having a somewhat hard time wrapping my head around the security
> model that says you trust your related processes not to use DMA in a way
> that is hostile to their peers, but you don't trust them not to issue
> hostile ioctls..
Ah, yea. In my original message, I should have emphasized that vending the
entire vfio device fd confers access to inappropriate ioctls *in addition to*
inappropriate BAR regions that the client should be restricted from accessing.
Assuming we make headway on dma_buf_ops.mmap, granting a client process access
to a dma-buf's worth of BAR space does not feel spiritually different from
granting it to a peer device. The onus is on the combination of driver + device
policy to constrain the side effects of foreign access to the exposed BAR
sub-regions.
Please let me know if I misunderstood your meaning.
> IIRC VFIO should allow partial BAR mappings, so the client process can
> robustly have a subset mapped if you trust it to perform the unix
> SCM_RIGHTS/mapping ioctl/close() sequence.
Yes -- we actually already do this today. The USD just tells the client "these
are the specific (offset, length) pairs within the vfio device fd you should
mmap". Those intervals are slices within BARs.
> > Instead of vending the VFIO device fd to the client process, the USD could bind
> > the necessary BAR regions to a dma-buf fd and share that with the client. If
> > VFIO supported dma_buf_ops.mmap, the client could mmap those into its address
> > space.
>
> I wouldn't object to this, I think it is not too complicated at all.
That's encouraging to hear! Thank you.
> What I've been thinking is if the vending process could "dup" the FD
> and permanently attach a BPF program to the new FD that sits right
> after ioctl. The BPF program would inspect each ioctl when it is
> issued and enforce whatever policy the vending process wants.
This seems totally reasonable to me.
> What would give me a lot of pause is your proposal where we effectively
> have the kernel enforce some arbitrary policy, and I know from
> experience there will be endless asks for more and more policy
> options.
Agreed. If we can engineer BPF to interact with those ioctls and hoist these
kinds of policy decisions up into user space, I can't argue with that.
> I don't think viommu is really related to this, viommu is more about
> multiple physical devices.
Ack. I wasn't sure how much to read into the "representing a slice of the
physical IOMMU instance" comment [1].
[1] https://docs.kernel.org/userspace-api/iommufd.html
Thanks,
Alex
Thread overview: 12+ messages
2025-09-18 21:44 [TECH TOPIC] vfio, iommufd: Enabling user space drivers to vend more granular access to client processes Alex Mastro
2025-09-18 22:57 ` Jason Gunthorpe
2025-09-18 23:24 ` Keith Busch
2025-09-19 7:00 ` Tian, Kevin
2025-09-19 11:58 ` Jason Gunthorpe
2025-09-22 9:14 ` Mostafa Saleh
2025-09-22 17:46 ` Alex Mastro
2025-09-22 17:51 ` Jason Gunthorpe
2025-09-19 11:56 ` Jason Gunthorpe
2025-09-19 15:57 ` Alex Williamson
2025-09-19 17:14 ` Alex Mastro
2025-09-19 16:13 ` Alex Mastro [this message]