From: Zhu Yanjun <yanjun.zhu@linux.dev>
To: Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
intel-gfx@lists.freedesktop.org, linux-afs@lists.infradead.org,
linux-nfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
netdev@vger.kernel.org
Subject: Re: [PATCH 03/13] scatterlist: Add sg_set_folio()
Date: Fri, 18 Aug 2023 15:05:14 +0800 [thread overview]
Message-ID: <a1ad6a41-edd0-1201-c537-68693d5b70e6@linux.dev> (raw)
In-Reply-To: <ZMbZVjMaIeI1DSj9@casper.infradead.org>
On 2023/7/31 5:42, Matthew Wilcox wrote:
> On Sun, Jul 30, 2023 at 09:57:06PM +0800, Zhu Yanjun wrote:
>> On 2023/7/30 19:18, Matthew Wilcox wrote:
>>> On Sun, Jul 30, 2023 at 07:01:26PM +0800, Zhu Yanjun wrote:
>>>> Does the following function have a folio version?
>>>>
>>>> "
>>>> int sg_alloc_append_table_from_pages(struct sg_append_table *sgt_append,
>>>> 		struct page **pages, unsigned int n_pages, unsigned int offset,
>>>> 		unsigned long size, unsigned int max_segment,
>>>> 		unsigned int left_pages, gfp_t gfp_mask)
>>>> "
>>> No -- I haven't needed to convert anything that uses
>>> sg_alloc_append_table_from_pages() yet. It doesn't look like it should
>>> be _too_ hard to add a folio version.
>> This function is used in many places, so it needs a folio version.
> It's not used in very many places. But the first one that I see it used
> (drivers/infiniband/core/umem.c), you can't do a straightforward folio
> conversion:
>
> 	pinned = pin_user_pages_fast(cur_base,
> 				     min_t(unsigned long, npages,
> 					   PAGE_SIZE / sizeof(struct page *)),
> 				     gup_flags, page_list);
> ...
> 	ret = sg_alloc_append_table_from_pages(
> 		&umem->sgt_append, page_list, pinned, 0,
> 		pinned << PAGE_SHIFT, ib_dma_max_seg_size(device),
> 		npages, GFP_KERNEL);
>
> That can't be converted to folios. The GUP might start in the middle of
> the folio, and we have no way to communicate that.
>
> This particular usage really needs the phyr work that Jason is doing so
> we can efficiently communicate physically contiguous ranges from GUP
> to sg.
Hi, Matthew
Thanks. For the function below, there seems to be no folio replacement for
vmalloc_to_page(). vmalloc_to_page() calls virt_to_page() to get the page;
ultimately the following is evaluated:
"
(mem_map + ((pfn) - ARCH_PFN_OFFSET))
"
I could not find a folio counterpart to vmalloc_to_page(). Likewise, there is
no folio function replacing dma_map_page(), which calls dma_map_page_attrs().
Or should these two functions not be replaced with folio functions?
int irdma_map_vm_page_list(struct irdma_hw *hw, void *va, dma_addr_t *pg_dma,
			   u32 pg_cnt)
{
	struct page *vm_page;
	int i;
	u8 *addr;

	addr = (u8 *)(uintptr_t)va;
	for (i = 0; i < pg_cnt; i++) {
		vm_page = vmalloc_to_page(addr);
		if (!vm_page)
			goto err;

		pg_dma[i] = dma_map_page(hw->device, vm_page, 0, PAGE_SIZE,
					 DMA_BIDIRECTIONAL);
		if (dma_mapping_error(hw->device, pg_dma[i]))
			goto err;

		addr += PAGE_SIZE;
	}

	return 0;

err:
	irdma_unmap_vm_page_list(hw, pg_dma, i);
	return -ENOMEM;
}
Thanks,
Zhu Yanjun
>> Another question: once folios are used, I want to measure the resulting
>> performance.
>>
>> How should I run tests to measure the performance?
> You know what you're working on ... I wouldn't know how best to test
> your code.
Thread overview: 20+ messages
2023-06-21 16:45 [PATCH 00/13] Remove pagevecs Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 01/13] afs: Convert pagevec to folio_batch in afs_extend_writeback() Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 02/13] mm: Add __folio_batch_release() Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 03/13] scatterlist: Add sg_set_folio() Matthew Wilcox (Oracle)
2023-07-30 11:01 ` Zhu Yanjun
2023-07-30 11:18 ` Matthew Wilcox
2023-07-30 13:57 ` Zhu Yanjun
2023-07-30 21:42 ` Matthew Wilcox
2023-08-18 7:05 ` Zhu Yanjun [this message]
2023-06-21 16:45 ` [PATCH 04/13] i915: Convert shmem_sg_free_table() to use a folio_batch Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 05/13] drm: Convert drm_gem_put_pages() " Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 06/13] mm: Remove check_move_unevictable_pages() Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 07/13] pagevec: Rename fbatch_count() Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 08/13] i915: Convert i915_gpu_error to use a folio_batch Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 09/13] net: Convert sunrpc from pagevec to folio_batch Matthew Wilcox (Oracle)
2023-06-21 17:50 ` Chuck Lever
2023-06-21 16:45 ` [PATCH 10/13] mm: Remove struct pagevec Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 11/13] mm: Rename invalidate_mapping_pagevec to mapping_try_invalidate Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 12/13] mm: Remove references to pagevec Matthew Wilcox (Oracle)
2023-06-21 16:45 ` [PATCH 13/13] mm: Remove unnecessary pagevec includes Matthew Wilcox (Oracle)