From: Alex Williamson <alex.williamson@redhat.com>
To: Jason Gunthorpe <jgg@ziepe.ca>
Cc: lizhe.67@bytedance.com, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, muchun.song@linux.dev,
	Peter Xu <peterx@redhat.com>
Subject: Re: [PATCH] vfio/type1: optimize vfio_pin_pages_remote() for hugetlbfs folio
Date: Fri, 16 May 2025 08:55:20 -0600
Message-ID: <20250516085520.4f9477ea.alex.williamson@redhat.com>
In-Reply-To: <20250516141816.GB530183@ziepe.ca>

On Fri, 16 May 2025 11:18:16 -0300
Jason Gunthorpe <jgg@ziepe.ca> wrote:

> On Thu, May 15, 2025 at 03:19:46PM -0600, Alex Williamson wrote:
> > On Tue, 13 May 2025 11:57:30 +0800
> > lizhe.67@bytedance.com wrote:
> >   
> > > From: Li Zhe <lizhe.67@bytedance.com>
> > > 
> > > When vfio_pin_pages_remote() is called with a range of addresses that
> > > includes hugetlbfs folios, the function currently performs individual
> > > statistics counting operations for each page. This can lead to significant
> > > performance overheads, especially when dealing with large ranges of pages.
> > > 
> > > This patch optimizes this process by batching the statistics counting
> > > operations.
> > > 
> > > The performance test results for completing the 8G VFIO IOMMU DMA mapping,
> > > obtained through trace-cmd, are as follows. In this case, the 8G virtual
> > > address space has been mapped to physical memory using hugetlbfs with
> > > pagesize=2M.
> > > 
> > > Before this patch:
> > > funcgraph_entry:      # 33813.703 us |  vfio_pin_map_dma();
> > > 
> > > After this patch:
> > > funcgraph_entry:      # 15635.055 us |  vfio_pin_map_dma();
> > > 
> > > Signed-off-by: Li Zhe <lizhe.67@bytedance.com>
> > > ---
> > >  drivers/vfio/vfio_iommu_type1.c | 49 +++++++++++++++++++++++++++++++++
> > >  1 file changed, 49 insertions(+)  
> > 
> > Hi,
> > 
> > Thanks for looking at improvements in this area...  
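
If I'm reading the diff right, the batching boils down to something
like the sketch below; the helper names are made-up stand-ins for
illustration, not what the patch actually adds:

/*
 * Illustrative only -- not the code from the patch.  Instead of
 * updating the pinned-page statistics once per 4K page, account every
 * page of a large (hugetlbfs) folio that falls inside the gup batch in
 * a single step.
 */
static void account_pinned_batched(struct vfio_dma *dma,
				   struct page **pages, long npages)
{
	long i = 0;

	while (i < npages) {
		struct folio *folio = page_folio(pages[i]);
		long batch = 1;

		/* Pages of a large folio are contiguous; fold them together. */
		if (folio_test_large(folio))
			batch = min_t(long, npages - i,
				      folio_nr_pages(folio) -
				      folio_page_idx(folio, pages[i]));

		vfio_account_pinned(dma, batch);	/* hypothetical helper */
		i += batch;
	}
}
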
> 
> Why not just use iommufd? Doesn't it already do all these
> optimizations?

We don't have feature parity yet (P2P DMA), we don't have libvirt
support, and many users are on kernels or product stacks where iommufd
isn't available yet.

> Indeed today you can use iommufd with a memfd handle which should
> return the huge folios directly from the hugetlbfs and we never
> iterate with 4K pages.
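
For anyone following along, the userspace side of that is roughly the
sketch below.  It only shows the hugetlb memfd setup; the iommufd
file-backed map ioctl is only described in a comment because its exact
layout depends on the kernel version:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 8UL << 30;	/* 8G, as in the test above */

	/* Hugetlb-backed memfd: the kernel allocates whole huge folios
	 * (OR in MFD_HUGE_2MB to force 2M pages explicitly). */
	int fd = memfd_create("guest-ram", MFD_CLOEXEC | MFD_HUGETLB);

	if (fd < 0 || ftruncate(fd, len) < 0) {
		perror("memfd");
		return 1;
	}

	/*
	 * Type1 path: mmap() the fd and hand the VA range to
	 * VFIO_IOMMU_MAP_DMA, which pins and accounts page by page.
	 * iommufd path on recent kernels: pass the fd itself via the
	 * file-backed IOAS map ioctl, so the huge folios are consumed
	 * directly with no per-4K iteration.
	 */
	void *va = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	if (va == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	memset(va, 0, len);	/* fault the range in */
	return 0;
}
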

Good to know we won't need to revisit this for iommufd, and maybe a
"good enough" optimization here, without tackling the batch size or gup
page pointers, is enough to hold users over until then.  Thanks,

Alex


Thread overview: 6+ messages
2025-05-13  3:57 [PATCH] vfio/type1: optimize vfio_pin_pages_remote() for hugetlbfs folio lizhe.67
2025-05-15 21:19 ` Alex Williamson
2025-05-16  8:16   ` lizhe.67
2025-05-16 14:17     ` Alex Williamson
2025-05-16 14:18   ` Jason Gunthorpe
2025-05-16 14:55     ` Alex Williamson [this message]
