From: "Lorenzo Stoakes (Oracle)" <ljs@kernel.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Jonathan Corbet <corbet@lwn.net>,
Clemens Ladisch <clemens@ladisch.de>,
Arnd Bergmann <arnd@arndb.de>,
Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
"K . Y . Srinivasan" <kys@microsoft.com>,
Haiyang Zhang <haiyangz@microsoft.com>,
Wei Liu <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
Long Li <longli@microsoft.com>,
Alexander Shishkin <alexander.shishkin@linux.intel.com>,
Maxime Coquelin <mcoquelin.stm32@gmail.com>,
Alexandre Torgue <alexandre.torgue@foss.st.com>,
Miquel Raynal <miquel.raynal@bootlin.com>,
Richard Weinberger <richard@nod.at>,
Vignesh Raghavendra <vigneshr@ti.com>,
Bodo Stroesser <bostroesser@gmail.com>,
"Martin K . Petersen" <martin.petersen@oracle.com>,
David Howells <dhowells@redhat.com>,
Marc Dionne <marc.dionne@auristor.com>,
Alexander Viro <viro@zeniv.linux.org.uk>,
Christian Brauner <brauner@kernel.org>, Jan Kara <jack@suse.cz>,
David Hildenbrand <david@kernel.org>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
Vlastimil Babka <vbabka@kernel.org>,
Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>, Jann Horn <jannh@google.com>,
Pedro Falcato <pfalcato@suse.de>,
linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
linux-hyperv@vger.kernel.org,
linux-stm32@st-md-mailman.stormreply.com,
linux-arm-kernel@lists.infradead.org,
linux-mtd@lists.infradead.org, linux-staging@lists.linux.dev,
linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
linux-afs@lists.infradead.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org, Ryan Roberts <ryan.roberts@arm.com>
Subject: [PATCH v4 20/21] mm: add mmap_action_map_kernel_pages[_full]()
Date: Fri, 20 Mar 2026 22:39:46 +0000
Message-ID: <926ac961690d856e67ec847bee2370ab3c6b9046.1774045440.git.ljs@kernel.org>
In-Reply-To: <cover.1774045440.git.ljs@kernel.org>
A user can invoke mmap_action_map_kernel_pages() to specify that the
mapping should map a given number of kernel pages, provided in an array,
into userland starting at a specified virtual address.
In order to implement this, adjust mmap_action_prepare() to be able to
return an error code, as it makes sense to assert as early as possible
that the specified parameters are valid. The prepare step also updates
the VMA flags to include VMA_MIXEDMAP_BIT as necessary.
This provides an mmap_prepare equivalent of vm_insert_pages(). We
additionally update the existing vm_insert_pages() code to use
range_in_vma() and add a new range_in_vma_desc() helper function for the
mmap_prepare case, sharing the code between the two in range_is_subset().
We add both mmap_action_map_kernel_pages() and
mmap_action_map_kernel_pages_full() to allow for both partial and full VMA
mappings.
We update the documentation to reflect the new features.
Finally, we update the VMA tests to reflect the changes.
Reviewed-by: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
---
Documentation/filesystems/mmap_prepare.rst | 8 ++
include/linux/mm.h | 95 +++++++++++++++++++++-
include/linux/mm_types.h | 7 ++
mm/memory.c | 42 +++++++++-
mm/util.c | 7 ++
tools/testing/vma/include/dup.h | 7 ++
6 files changed, 160 insertions(+), 6 deletions(-)
diff --git a/Documentation/filesystems/mmap_prepare.rst b/Documentation/filesystems/mmap_prepare.rst
index 14bb057be564..82c99c95ad85 100644
--- a/Documentation/filesystems/mmap_prepare.rst
+++ b/Documentation/filesystems/mmap_prepare.rst
@@ -156,5 +156,13 @@ pointer. These are:
* mmap_action_simple_ioremap() - Sets up an I/O remap from a specified
physical address and over a specified length.
+* mmap_action_map_kernel_pages() - Maps a specified array of `struct page`
+ pointers in the VMA from a specific offset.
+
+* mmap_action_map_kernel_pages_full() - Maps a specified array of `struct
+ page` pointers over the entire VMA. The caller must ensure there are
+ sufficient entries in the page array to cover the entire range of the
+ described VMA.
+
**NOTE:** The ``action`` field should never normally be manipulated directly,
rather you ought to use one of these helpers.
diff --git a/include/linux/mm.h b/include/linux/mm.h
index df8fa6e6402b..6f0a3edb24e1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2912,7 +2912,7 @@ static inline bool folio_maybe_mapped_shared(struct folio *folio)
* The caller must add any reference (e.g., from folio_try_get()) it might be
* holding itself to the result.
*
- * Returns the expected folio refcount.
+ * Returns: the expected folio refcount.
*/
static inline int folio_expected_ref_count(const struct folio *folio)
{
@@ -4364,6 +4364,45 @@ static inline void mmap_action_simple_ioremap(struct vm_area_desc *desc,
action->type = MMAP_SIMPLE_IO_REMAP;
}
+/**
+ * mmap_action_map_kernel_pages - helper for mmap_prepare hook to specify that
+ * @nr_pages kernel pages contained in the @pages array should be mapped to
+ * userland starting at virtual address @start.
+ * @desc: The VMA descriptor for the VMA requiring kernel pages to be mapped.
+ * @start: The virtual address from which to map them.
+ * @pages: An array of struct page pointers describing the memory to map.
+ * @nr_pages: The number of entries in the @pages array.
+ */
+static inline void mmap_action_map_kernel_pages(struct vm_area_desc *desc,
+ unsigned long start, struct page **pages,
+ unsigned long nr_pages)
+{
+ struct mmap_action *action = &desc->action;
+
+ action->type = MMAP_MAP_KERNEL_PAGES;
+ action->map_kernel.start = start;
+ action->map_kernel.pages = pages;
+ action->map_kernel.nr_pages = nr_pages;
+ action->map_kernel.pgoff = desc->pgoff;
+}
+
+/**
+ * mmap_action_map_kernel_pages_full - helper for mmap_prepare hook to specify that
+ * kernel pages contained in the @pages array should be mapped to userland
+ * from @desc->start to @desc->end.
+ * @desc: The VMA descriptor for the VMA requiring kernel pages to be mapped.
+ * @pages: An array of struct page pointers describing the memory to map.
+ *
+ * The caller must ensure that @pages contains sufficient entries to cover the
+ * entire range described by @desc.
+ */
+static inline void mmap_action_map_kernel_pages_full(struct vm_area_desc *desc,
+ struct page **pages)
+{
+ mmap_action_map_kernel_pages(desc, desc->start, pages,
+ vma_desc_pages(desc));
+}
+
int mmap_action_prepare(struct vm_area_desc *desc);
int mmap_action_complete(struct vm_area_struct *vma,
struct mmap_action *action);
@@ -4380,10 +4419,59 @@ static inline struct vm_area_struct *find_exact_vma(struct mm_struct *mm,
return vma;
}
+/**
+ * range_is_subset - Is the specified inner range a subset of the outer range?
+ * @outer_start: The start of the outer range.
+ * @outer_end: The exclusive end of the outer range.
+ * @inner_start: The start of the inner range.
+ * @inner_end: The exclusive end of the inner range.
+ *
+ * Returns: %true if [inner_start, inner_end) is a subset of [outer_start,
+ * outer_end), otherwise %false.
+ */
+static inline bool range_is_subset(unsigned long outer_start,
+ unsigned long outer_end,
+ unsigned long inner_start,
+ unsigned long inner_end)
+{
+ return outer_start <= inner_start && inner_end <= outer_end;
+}
+
+/**
+ * range_in_vma - is the specified [@start, @end) range a subset of the VMA?
+ * @vma: The VMA against which we want to check [@start, @end).
+ * @start: The start of the range we wish to check.
+ * @end: The exclusive end of the range we wish to check.
+ *
+ * Returns: %true if [@start, @end) is a subset of [@vma->vm_start,
+ * @vma->vm_end), %false otherwise.
+ */
static inline bool range_in_vma(const struct vm_area_struct *vma,
unsigned long start, unsigned long end)
{
- return (vma && vma->vm_start <= start && end <= vma->vm_end);
+ if (!vma)
+ return false;
+
+ return range_is_subset(vma->vm_start, vma->vm_end, start, end);
+}
+
+/**
+ * range_in_vma_desc - is the specified [@start, @end) range a subset of the VMA
+ * described by @desc, a VMA descriptor?
+ * @desc: The VMA descriptor against which we want to check [@start, @end).
+ * @start: The start of the range we wish to check.
+ * @end: The exclusive end of the range we wish to check.
+ *
+ * Returns: %true if [@start, @end) is a subset of [@desc->start, @desc->end),
+ * %false otherwise.
+ */
+static inline bool range_in_vma_desc(const struct vm_area_desc *desc,
+ unsigned long start, unsigned long end)
+{
+ if (!desc)
+ return false;
+
+ return range_is_subset(desc->start, desc->end, start, end);
}
#ifdef CONFIG_MMU
@@ -4427,6 +4515,9 @@ int remap_pfn_range(struct vm_area_struct *vma, unsigned long addr,
int vm_insert_page(struct vm_area_struct *, unsigned long addr, struct page *);
int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
struct page **pages, unsigned long *num);
+int map_kernel_pages_prepare(struct vm_area_desc *desc);
+int map_kernel_pages_complete(struct vm_area_struct *vma,
+ struct mmap_action *action);
int vm_map_pages(struct vm_area_struct *vma, struct page **pages,
unsigned long num);
int vm_map_pages_zero(struct vm_area_struct *vma, struct page **pages,
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index d60eefde1db8..f9face579072 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -815,6 +815,7 @@ enum mmap_action_type {
MMAP_REMAP_PFN, /* Remap PFN range. */
MMAP_IO_REMAP_PFN, /* I/O remap PFN range. */
MMAP_SIMPLE_IO_REMAP, /* I/O remap with guardrails. */
+ MMAP_MAP_KERNEL_PAGES, /* Map kernel page range from array. */
};
/*
@@ -833,6 +834,12 @@ struct mmap_action {
phys_addr_t start_phys_addr;
unsigned long size;
} simple_ioremap;
+ struct {
+ unsigned long start;
+ struct page **pages;
+ unsigned long nr_pages;
+ pgoff_t pgoff;
+ } map_kernel;
};
enum mmap_action_type type;
diff --git a/mm/memory.c b/mm/memory.c
index b3bcc21af20a..53ef8ef3d04a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2484,13 +2484,14 @@ static int insert_pages(struct vm_area_struct *vma, unsigned long addr,
int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
struct page **pages, unsigned long *num)
{
- const unsigned long end_addr = addr + (*num * PAGE_SIZE) - 1;
+ const unsigned long nr_pages = *num;
+ const unsigned long end = addr + PAGE_SIZE * nr_pages;
- if (addr < vma->vm_start || end_addr >= vma->vm_end)
+ if (!range_in_vma(vma, addr, end))
return -EFAULT;
if (!(vma->vm_flags & VM_MIXEDMAP)) {
- BUG_ON(mmap_read_trylock(vma->vm_mm));
- BUG_ON(vma->vm_flags & VM_PFNMAP);
+ VM_WARN_ON_ONCE(mmap_read_trylock(vma->vm_mm));
+ VM_WARN_ON_ONCE(vma->vm_flags & VM_PFNMAP);
vm_flags_set(vma, VM_MIXEDMAP);
}
/* Defer page refcount checking till we're about to map that page. */
@@ -2498,6 +2499,39 @@ int vm_insert_pages(struct vm_area_struct *vma, unsigned long addr,
}
EXPORT_SYMBOL(vm_insert_pages);
+int map_kernel_pages_prepare(struct vm_area_desc *desc)
+{
+ const struct mmap_action *action = &desc->action;
+ const unsigned long addr = action->map_kernel.start;
+ unsigned long nr_pages, end;
+
+ if (!vma_desc_test(desc, VMA_MIXEDMAP_BIT)) {
+ VM_WARN_ON_ONCE(mmap_read_trylock(desc->mm));
+ VM_WARN_ON_ONCE(vma_desc_test(desc, VMA_PFNMAP_BIT));
+ vma_desc_set_flags(desc, VMA_MIXEDMAP_BIT);
+ }
+
+ nr_pages = action->map_kernel.nr_pages;
+ end = addr + PAGE_SIZE * nr_pages;
+ if (!range_in_vma_desc(desc, addr, end))
+ return -EFAULT;
+
+ return 0;
+}
+EXPORT_SYMBOL(map_kernel_pages_prepare);
+
+int map_kernel_pages_complete(struct vm_area_struct *vma,
+ struct mmap_action *action)
+{
+ unsigned long nr_pages;
+
+ nr_pages = action->map_kernel.nr_pages;
+ return insert_pages(vma, action->map_kernel.start,
+ action->map_kernel.pages,
+ &nr_pages, vma->vm_page_prot);
+}
+EXPORT_SYMBOL(map_kernel_pages_complete);
+
/**
* vm_insert_page - insert single page into user vma
* @vma: user vma to map to
diff --git a/mm/util.c b/mm/util.c
index 5ae20876ef2c..f063fd4de1e8 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1448,6 +1448,8 @@ int mmap_action_prepare(struct vm_area_desc *desc)
return io_remap_pfn_range_prepare(desc);
case MMAP_SIMPLE_IO_REMAP:
return simple_ioremap_prepare(desc);
+ case MMAP_MAP_KERNEL_PAGES:
+ return map_kernel_pages_prepare(desc);
}
WARN_ON_ONCE(1);
@@ -1475,6 +1477,9 @@ int mmap_action_complete(struct vm_area_struct *vma,
case MMAP_REMAP_PFN:
err = remap_pfn_range_complete(vma, action);
break;
+ case MMAP_MAP_KERNEL_PAGES:
+ err = map_kernel_pages_complete(vma, action);
+ break;
case MMAP_IO_REMAP_PFN:
case MMAP_SIMPLE_IO_REMAP:
/* Should have been delegated. */
@@ -1495,6 +1500,7 @@ int mmap_action_prepare(struct vm_area_desc *desc)
case MMAP_REMAP_PFN:
case MMAP_IO_REMAP_PFN:
case MMAP_SIMPLE_IO_REMAP:
+ case MMAP_MAP_KERNEL_PAGES:
WARN_ON_ONCE(1); /* nommu cannot handle these. */
break;
}
@@ -1514,6 +1520,7 @@ int mmap_action_complete(struct vm_area_struct *vma,
case MMAP_REMAP_PFN:
case MMAP_IO_REMAP_PFN:
case MMAP_SIMPLE_IO_REMAP:
+ case MMAP_MAP_KERNEL_PAGES:
WARN_ON_ONCE(1); /* nommu cannot handle this. */
err = -EINVAL;
diff --git a/tools/testing/vma/include/dup.h b/tools/testing/vma/include/dup.h
index f5f7c45f1808..60f0b15638d0 100644
--- a/tools/testing/vma/include/dup.h
+++ b/tools/testing/vma/include/dup.h
@@ -454,6 +454,7 @@ enum mmap_action_type {
MMAP_REMAP_PFN, /* Remap PFN range. */
MMAP_IO_REMAP_PFN, /* I/O remap PFN range. */
MMAP_SIMPLE_IO_REMAP, /* I/O remap with guardrails. */
+ MMAP_MAP_KERNEL_PAGES, /* Map kernel page range from an array. */
};
/*
@@ -472,6 +473,12 @@ struct mmap_action {
phys_addr_t start_phys_addr;
unsigned long size;
} simple_ioremap;
+ struct {
+ unsigned long start;
+ struct page **pages;
+ unsigned long nr_pages;
+ pgoff_t pgoff;
+ } map_kernel;
};
enum mmap_action_type type;
--
2.53.0