From: Daniel Vetter <daniel.vetter@ffwll.ch>
To: Huan Yang <link@vivo.com>
Cc: "Sumit Semwal" <sumit.semwal@linaro.org>,
"Benjamin Gaignard" <benjamin.gaignard@collabora.com>,
"Brian Starkey" <Brian.Starkey@arm.com>,
"John Stultz" <jstultz@google.com>,
"T.J. Mercier" <tjmercier@google.com>,
"Christian König" <christian.koenig@amd.com>,
linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org,
opensource.kernel@vivo.com
Subject: Re: [PATCH v2 0/5] Introduce DMA_HEAP_ALLOC_AND_READ_FILE heap flag
Date: Mon, 5 Aug 2024 19:53:30 +0200 [thread overview]
Message-ID: <ZrERmndxBS5xUvuE@phenom.ffwll.local> (raw)
In-Reply-To: <4e83734a-d0cf-4f8a-9731-d370e1064d65@vivo.com>
On Thu, Aug 01, 2024 at 10:53:45AM +0800, Huan Yang wrote:
>
> 在 2024/8/1 4:46, Daniel Vetter 写道:
> > On Tue, Jul 30, 2024 at 08:04:04PM +0800, Huan Yang wrote:
> > > 在 2024/7/30 17:05, Huan Yang 写道:
> > > > 在 2024/7/30 16:56, Daniel Vetter 写道:
> > > > >
> > > > > On Tue, Jul 30, 2024 at 03:57:44PM +0800, Huan Yang wrote:
> > > > > > UDMA-BUF step:
> > > > > > 1. memfd_create
> > > > > > 2. open file(buffer/direct)
> > > > > > 3. udmabuf create
> > > > > > 4. mmap memfd
> > > > > > 5. read file into memfd vaddr
> > > > > Yeah this is really slow and the worst way to do it. You absolutely
> > > > > want to start _all_ the io before you start creating the dma-buf,
> > > > > ideally with everything running in parallel. But just starting the
> > > > > direct I/O with async and then creating the udmabuf should be a lot
> > > > > faster and avoid
> > > > That's great. Let me rephrase it, and please correct me if I'm wrong.
> > > >
> > > > UDMA-BUF step:
> > > > 1. memfd_create
> > > > 2. mmap memfd
> > > > 3. open file(buffer/direct)
> > > > 4. start thread to async read
> > > > 5. udmabuf create
> > > >
> > > > With this, we can improve performance.
> > > I just tested it. The steps are:
> > >
> > > UDMA-BUF step:
> > > 1. memfd_create
> > > 2. mmap memfd
> > > 3. open file(buffer/direct)
> > > 4. start thread to async read
> > > 5. udmabuf create
> > >
> > > 6. join (wait for the read thread)
> > >
> > > Reading a 3G file, all steps cost 1,527,103,431 ns. That's great.
> > Ok that's almost the throughput of your patch set, which I think is close
> > enough. The remaining difference is probably just the mmap overhead, not
> > sure whether/how we can do direct i/o to an fd directly ... in principle
> > it's possible for any file that uses the standard pagecache.
>
> Yes. As for mmap: IMO, since we already get and pin all the folios, all
> the PFNs are known by the time the udmabuf is created.
>
> So I think faulting pages in through mmap can't save memory; it only
> increases the access cost. (Maybe it saves a little page-table memory.)
>
> I want to send a patchset that removes that path, makes the code operate
> on folios more directly (and removes the unpin list), and includes some
> fixes.
>
> I'll send it once my testing looks good.
>
>
> About fd-based operations for direct I/O: maybe use sendfile or
> copy_file_range?
>
> sendfile goes through a pipe buffer internally, and its performance was
> low when I tested it.
>
> copy_file_range can't work here because the fds are not on the same file
> system.
>
> So I can't find another way to do it. Can someone give some suggestions?
Yeah direct I/O to pagecache without an mmap might be too niche to be
supported. Maybe io_uring has something, but I'd guess it's as unlikely as
anything else.
-Sima
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch