From: "Mika Penttilä" <mpenttil@redhat.com>
To: Balbir Singh <balbirs@nvidia.com>, linux-mm@kvack.org
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
"Karol Herbst" <kherbst@redhat.com>,
"Lyude Paul" <lyude@redhat.com>,
"Danilo Krummrich" <dakr@kernel.org>,
"David Airlie" <airlied@gmail.com>,
"Simona Vetter" <simona@ffwll.ch>,
"Jérôme Glisse" <jglisse@redhat.com>,
"Shuah Khan" <shuah@kernel.org>,
"David Hildenbrand" <david@redhat.com>,
"Barry Song" <baohua@kernel.org>,
"Baolin Wang" <baolin.wang@linux.alibaba.com>,
"Ryan Roberts" <ryan.roberts@arm.com>,
"Matthew Wilcox" <willy@infradead.org>,
"Peter Xu" <peterx@redhat.com>, "Zi Yan" <ziy@nvidia.com>,
"Kefeng Wang" <wangkefeng.wang@huawei.com>,
"Jane Chu" <jane.chu@oracle.com>,
"Alistair Popple" <apopple@nvidia.com>,
"Donet Tom" <donettom@linux.ibm.com>
Subject: Re: [v1 resend 08/12] mm/thp: add split during migration support
Date: Fri, 4 Jul 2025 09:43:54 +0300
Message-ID: <715fc271-1af3-4061-b217-e3d6e32849c6@redhat.com>
In-Reply-To: <e1889eb8-d2d9-4d97-b9ae-e50158442945@redhat.com>
On 7/4/25 08:17, Mika Penttilä wrote:
> On 7/4/25 02:35, Balbir Singh wrote:
>> Support splitting pages during THP zone device migration as needed.
>> A common case is that, after setup, the destination is unable to
>> allocate MIGRATE_PFN_COMPOUND pages during migration.
>>
>> Add a new routine migrate_vma_split_pages() to support the splitting
>> of already isolated pages. The pages being migrated are already unmapped
>> and marked for migration during setup (via unmap). folio_split() and
>> __split_unmapped_folio() take an additional isolated argument, to avoid
>> unmapping and remapping these pages and unlocking/putting the folio.
>>
>> Cc: Karol Herbst <kherbst@redhat.com>
>> Cc: Lyude Paul <lyude@redhat.com>
>> Cc: Danilo Krummrich <dakr@kernel.org>
>> Cc: David Airlie <airlied@gmail.com>
>> Cc: Simona Vetter <simona@ffwll.ch>
>> Cc: "Jérôme Glisse" <jglisse@redhat.com>
>> Cc: Shuah Khan <shuah@kernel.org>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Barry Song <baohua@kernel.org>
>> Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
>> Cc: Ryan Roberts <ryan.roberts@arm.com>
>> Cc: Matthew Wilcox <willy@infradead.org>
>> Cc: Peter Xu <peterx@redhat.com>
>> Cc: Zi Yan <ziy@nvidia.com>
>> Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
>> Cc: Jane Chu <jane.chu@oracle.com>
>> Cc: Alistair Popple <apopple@nvidia.com>
>> Cc: Donet Tom <donettom@linux.ibm.com>
>>
>> Signed-off-by: Balbir Singh <balbirs@nvidia.com>
>> ---
>> include/linux/huge_mm.h | 11 ++++++--
>> mm/huge_memory.c | 54 ++++++++++++++++++++-----------------
>> mm/migrate_device.c | 59 ++++++++++++++++++++++++++++++++---------
>> 3 files changed, 85 insertions(+), 39 deletions(-)
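For orientation, the decision point this adds sits in
__migrate_device_pages(). A minimal sketch of the new logic, simplified
from the mm/migrate_device.c hunk below (error handling and the
insert-page fallback omitted):

	struct page *page = migrate_pfn_to_page(src_pfns[i]);
	struct folio *folio = page_folio(page);

	if ((src_pfns[i] & MIGRATE_PFN_COMPOUND) &&
	    !(dst_pfns[i] & MIGRATE_PFN_COMPOUND)) {
		/*
		 * The source is a THP but the destination could not
		 * allocate a compound page: split the already-unmapped
		 * source folio and fan src_pfns[] out into per-page
		 * entries.
		 */
		nr = 1 << folio_order(folio);
		addr = migrate->start + i * PAGE_SIZE;
		migrate_vma_split_pages(migrate, i, addr, folio);
	}
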
>>
>> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
>> index 65a1bdf29bb9..5f55a754e57c 100644
>> --- a/include/linux/huge_mm.h
>> +++ b/include/linux/huge_mm.h
>> @@ -343,8 +343,8 @@ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long add
>> vm_flags_t vm_flags);
>>
>> bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins);
>> -int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>> - unsigned int new_order);
>> +int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>> + unsigned int new_order, bool isolated);
>> int min_order_for_split(struct folio *folio);
>> int split_folio_to_list(struct folio *folio, struct list_head *list);
>> bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>> @@ -353,6 +353,13 @@ bool non_uniform_split_supported(struct folio *folio, unsigned int new_order,
>> bool warns);
>> int folio_split(struct folio *folio, unsigned int new_order, struct page *page,
>> struct list_head *list);
>> +
>> +static inline int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>> + unsigned int new_order)
>> +{
>> + return __split_huge_page_to_list_to_order(page, list, new_order, false);
>> +}
>> +
>> /*
>> * try_folio_split - try to split a @folio at @page using non uniform split.
>> * @folio: folio to be split
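Existing callers are unaffected by the rename: they keep the
three-argument form and pick up isolated == false through the static
inline wrapper above, e.g.:

	/* unchanged call site elsewhere in mm/ */
	ret = split_huge_page_to_list_to_order(page, list, new_order);

	/* ...which now expands to */
	ret = __split_huge_page_to_list_to_order(page, list, new_order, false);
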
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index d55e36ae0c39..e00ddfed22fa 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3424,15 +3424,6 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
>> new_folio->mapping = folio->mapping;
>> new_folio->index = folio->index + i;
>>
>> - /*
>> - * page->private should not be set in tail pages. Fix up and warn once
>> - * if private is unexpectedly set.
>> - */
>> - if (unlikely(new_folio->private)) {
>> - VM_WARN_ON_ONCE_PAGE(true, new_head);
>> - new_folio->private = NULL;
>> - }
>> -
>> if (folio_test_swapcache(folio))
>> new_folio->swap.val = folio->swap.val + i;
>>
>> @@ -3519,7 +3510,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> struct page *split_at, struct page *lock_at,
>> struct list_head *list, pgoff_t end,
>> struct xa_state *xas, struct address_space *mapping,
>> - bool uniform_split)
>> + bool uniform_split, bool isolated)
>> {
>> struct lruvec *lruvec;
>> struct address_space *swap_cache = NULL;
>> @@ -3643,8 +3634,9 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> percpu_ref_get_many(&release->pgmap->ref,
>> (1 << new_order) - 1);
>>
>> - lru_add_split_folio(origin_folio, release, lruvec,
>> - list);
>> + if (!isolated)
>> + lru_add_split_folio(origin_folio, release,
>> + lruvec, list);
>>
>> /* Some pages can be beyond EOF: drop them from cache */
>> if (release->index >= end) {
>> @@ -3697,6 +3689,12 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>> if (nr_dropped)
>> shmem_uncharge(mapping->host, nr_dropped);
>>
>> + /*
>> + * Don't remap and unlock isolated folios
>> + */
>> + if (isolated)
>> + return ret;
>> +
>> remap_page(origin_folio, 1 << order,
>> folio_test_anon(origin_folio) ?
>> RMP_USE_SHARED_ZEROPAGE : 0);
>> @@ -3790,6 +3788,7 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>> * @lock_at: a page within @folio to be left locked to caller
>> * @list: after-split folios will be put on it if non NULL
>> * @uniform_split: perform uniform split or not (non-uniform split)
>> + * @isolated: The pages are already unmapped
>> *
>> * It calls __split_unmapped_folio() to perform uniform and non-uniform split.
>> * It is in charge of checking whether the split is supported or not and
>> @@ -3800,7 +3799,7 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
>> */
>> static int __folio_split(struct folio *folio, unsigned int new_order,
>> struct page *split_at, struct page *lock_at,
>> - struct list_head *list, bool uniform_split)
>> + struct list_head *list, bool uniform_split, bool isolated)
>> {
>> struct deferred_split *ds_queue = get_deferred_split_queue(folio);
>> XA_STATE(xas, &folio->mapping->i_pages, folio->index);
>> @@ -3846,14 +3845,16 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>> * is taken to serialise against parallel split or collapse
>> * operations.
>> */
>> - anon_vma = folio_get_anon_vma(folio);
>> - if (!anon_vma) {
>> - ret = -EBUSY;
>> - goto out;
>> + if (!isolated) {
>> + anon_vma = folio_get_anon_vma(folio);
>> + if (!anon_vma) {
>> + ret = -EBUSY;
>> + goto out;
>> + }
>> + anon_vma_lock_write(anon_vma);
>> }
>> end = -1;
>> mapping = NULL;
>> - anon_vma_lock_write(anon_vma);
>> } else {
>> unsigned int min_order;
>> gfp_t gfp;
>> @@ -3920,7 +3921,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>> goto out_unlock;
>> }
>>
>> - unmap_folio(folio);
>> + if (!isolated)
>> + unmap_folio(folio);
>>
>> /* block interrupt reentry in xa_lock and spinlock */
>> local_irq_disable();
>> @@ -3973,14 +3975,15 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>>
>> ret = __split_unmapped_folio(folio, new_order,
>> split_at, lock_at, list, end, &xas, mapping,
>> - uniform_split);
>> + uniform_split, isolated);
>> } else {
>> spin_unlock(&ds_queue->split_queue_lock);
>> fail:
>> if (mapping)
>> xas_unlock(&xas);
>> local_irq_enable();
>> - remap_page(folio, folio_nr_pages(folio), 0);
>> + if (!isolated)
>> + remap_page(folio, folio_nr_pages(folio), 0);
>> ret = -EAGAIN;
>> }
>>
>> @@ -4046,12 +4049,13 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>> * Returns -EINVAL when trying to split to an order that is incompatible
>> * with the folio. Splitting to order 0 is compatible with all folios.
>> */
>> -int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>> - unsigned int new_order)
>> +int __split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
>> + unsigned int new_order, bool isolated)
>> {
>> struct folio *folio = page_folio(page);
>>
>> - return __folio_split(folio, new_order, &folio->page, page, list, true);
>> + return __folio_split(folio, new_order, &folio->page, page, list, true,
>> + isolated);
>> }
>>
>> /*
>> @@ -4080,7 +4084,7 @@ int folio_split(struct folio *folio, unsigned int new_order,
>> struct page *split_at, struct list_head *list)
>> {
>> return __folio_split(folio, new_order, split_at, &folio->page, list,
>> - false);
>> + false, false);
>> }
>>
>> int min_order_for_split(struct folio *folio)
>> diff --git a/mm/migrate_device.c b/mm/migrate_device.c
>> index 41d0bd787969..acd2f03b178d 100644
>> --- a/mm/migrate_device.c
>> +++ b/mm/migrate_device.c
>> @@ -813,6 +813,24 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>> src[i] &= ~MIGRATE_PFN_MIGRATE;
>> return 0;
>> }
>> +
>> +static void migrate_vma_split_pages(struct migrate_vma *migrate,
>> + unsigned long idx, unsigned long addr,
>> + struct folio *folio)
>> +{
>> + unsigned long i;
>> + unsigned long pfn;
>> + unsigned long flags;
>> +
>> + folio_get(folio);
>> + split_huge_pmd_address(migrate->vma, addr, true);
>> + __split_huge_page_to_list_to_order(folio_page(folio, 0), NULL, 0, true);
> We already have a reference to the folio, so why is folio_get() needed?
>
> Splitting the page already splits the PMD for anon folios, so why is there a split_huge_pmd_address() call?
Oh, I see:
+ if (!isolated)
+ unmap_folio(folio);
which explains the explicit split_huge_pmd_address(migrate->vma, addr, true) call.
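Presumably: in the normal (!isolated) path, unmap_folio() ends up in
try_to_migrate()/try_to_unmap() with TTU_SPLIT_HUGE_PMD, which splits
the PMD mapping as a side effect. A rough sketch of the difference,
assuming that reading of the code:

	/* normal split path (isolated == false): */
	unmap_folio(folio);	/* migration entries + PMD split */

	/*
	 * isolated path: the folio was already unmapped during
	 * migrate_vma_setup(), so unmap_folio() is skipped and the PMD
	 * covering @addr must be split explicitly:
	 */
	split_huge_pmd_address(migrate->vma, addr, true);
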
Still, why the folio_get(folio)?
>
>> + migrate->src[idx] &= ~MIGRATE_PFN_COMPOUND;
>> + flags = migrate->src[idx] & ((1UL << MIGRATE_PFN_SHIFT) - 1);
>> + pfn = migrate->src[idx] >> MIGRATE_PFN_SHIFT;
>> + for (i = 1; i < HPAGE_PMD_NR; i++)
>> + migrate->src[i+idx] = migrate_pfn(pfn + i) | flags;
>> +}
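For readers following the pfn arithmetic above: a migrate PFN entry
packs the pfn above MIGRATE_PFN_SHIFT with the flag bits below it
(per include/linux/migrate.h, slightly simplified):

	/*
	 *	entry = (pfn << MIGRATE_PFN_SHIFT) | flags
	 *
	 * so the loop above re-encodes one entry per tail page, reusing
	 * the head entry's flag bits after MIGRATE_PFN_COMPOUND has been
	 * cleared.
	 */
	static inline unsigned long migrate_pfn(unsigned long pfn)
	{
		return (pfn << MIGRATE_PFN_SHIFT) | MIGRATE_PFN_VALID;
	}
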
>> #else /* !CONFIG_ARCH_ENABLE_THP_MIGRATION */
>> static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>> unsigned long addr,
>> @@ -822,6 +840,11 @@ static int migrate_vma_insert_huge_pmd_page(struct migrate_vma *migrate,
>> {
>> return 0;
>> }
>> +
>> +static void migrate_vma_split_pages(struct migrate_vma *migrate,
>> + unsigned long idx, unsigned long addr,
>> + struct folio *folio)
>> +{}
>> #endif
>>
>> /*
>> @@ -971,8 +994,9 @@ static void __migrate_device_pages(unsigned long *src_pfns,
>> struct migrate_vma *migrate)
>> {
>> struct mmu_notifier_range range;
>> - unsigned long i;
>> + unsigned long i, j;
>> bool notified = false;
>> + unsigned long addr;
>>
>> for (i = 0; i < npages; ) {
>> struct page *newpage = migrate_pfn_to_page(dst_pfns[i]);
>> @@ -1014,12 +1038,16 @@ static void __migrate_device_pages(unsigned long *src_pfns,
>> (!(dst_pfns[i] & MIGRATE_PFN_COMPOUND))) {
>> nr = HPAGE_PMD_NR;
>> src_pfns[i] &= ~MIGRATE_PFN_COMPOUND;
>> - src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
>> - goto next;
>> + } else {
>> + nr = 1;
>> }
>>
>> - migrate_vma_insert_page(migrate, addr, &dst_pfns[i],
>> - &src_pfns[i]);
>> + for (j = 0; j < nr && i + j < npages; j++) {
>> + src_pfns[i+j] |= MIGRATE_PFN_MIGRATE;
>> + migrate_vma_insert_page(migrate,
>> + addr + j * PAGE_SIZE,
>> + &dst_pfns[i+j], &src_pfns[i+j]);
>> + }
>> goto next;
>> }
>>
>> @@ -1041,7 +1069,9 @@ static void __migrate_device_pages(unsigned long *src_pfns,
>> MIGRATE_PFN_COMPOUND);
>> goto next;
>> }
>> - src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
>> + nr = 1 << folio_order(folio);
>> + addr = migrate->start + i * PAGE_SIZE;
>> + migrate_vma_split_pages(migrate, i, addr, folio);
>> } else if ((src_pfns[i] & MIGRATE_PFN_MIGRATE) &&
>> (dst_pfns[i] & MIGRATE_PFN_COMPOUND) &&
>> !(src_pfns[i] & MIGRATE_PFN_COMPOUND)) {
>> @@ -1076,12 +1106,17 @@ static void __migrate_device_pages(unsigned long *src_pfns,
>> BUG_ON(folio_test_writeback(folio));
>>
>> if (migrate && migrate->fault_page == page)
>> - extra_cnt = 1;
>> - r = folio_migrate_mapping(mapping, newfolio, folio, extra_cnt);
>> - if (r != MIGRATEPAGE_SUCCESS)
>> - src_pfns[i] &= ~MIGRATE_PFN_MIGRATE;
>> - else
>> - folio_migrate_flags(newfolio, folio);
>> + extra_cnt++;
>> + for (j = 0; j < nr && i + j < npages; j++) {
>> + folio = page_folio(migrate_pfn_to_page(src_pfns[i+j]));
>> + newfolio = page_folio(migrate_pfn_to_page(dst_pfns[i+j]));
>> +
>> + r = folio_migrate_mapping(mapping, newfolio, folio, extra_cnt);
>> + if (r != MIGRATEPAGE_SUCCESS)
>> + src_pfns[i+j] &= ~MIGRATE_PFN_MIGRATE;
>> + else
>> + folio_migrate_flags(newfolio, folio);
>> + }
>> next:
>> i += nr;
>> }