From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
Peter Zijlstra <peterz@infradead.org>
Cc: linux-arch@vger.kernel.org, Waiman Long <Waiman.Long@hp.com>,
Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
Gleb Natapov <gleb@redhat.com>,
kvm@vger.kernel.org, Scott J Norton <scott.norton@hp.com>,
x86@kernel.org, Paolo Bonzini <paolo.bonzini@gmail.com>,
linux-kernel@vger.kernel.org,
virtualization@lists.linux-foundation.org,
Chegu Vinod <chegu_vinod@hp.com>,
David Vrabel <david.vrabel@citrix.com>,
Oleg Nesterov <oleg@redhat.com>,
xen-devel@lists.xenproject.org,
Boris Ostrovsky <boris.ostrovsky@oracle.com>,
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
Linus Torvalds <torvalds@linux-foundation.org>
Subject: [PATCH v10 11/19] qspinlock: Split the MCS queuing code into a separate slowerpath
Date: Wed, 7 May 2014 11:01:39 -0400
Message-ID: <1399474907-22206-12-git-send-email-Waiman.Long@hp.com>
In-Reply-To: <1399474907-22206-1-git-send-email-Waiman.Long@hp.com>

With the pending addition of more code to support unfair locks and PV
spinlocks, the slowpath function has grown complex enough that the
scratch registers available on x86-64 are no longer sufficient, so
callee-saved (non-scratch) registers have to be used as well. The
downside is that those registers must be saved and restored in the
prolog and epilog of the slowpath function, which slows down the
nominally faster pending bit and trylock code path at the beginning
of that function.

This patch separates the actual MCS queuing code out into a slowerpath
function. That avoids slowing down the pending bit and trylock code
path, at the cost of a small amount of additional overhead on the MCS
queuing code path.
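
To make the intent concrete, below is a minimal standalone sketch of the
same pattern (illustrative only, not the qspinlock code; the toy_* names,
the simple 0/1 lock word and the sched_yield() stand-in for queuing are
all assumptions made for the example): the hot entry point stays small
enough that the compiler needs no callee-saved registers there, while the
register-hungry work is pushed into a separate noinline function whose
prolog/epilog cost is only paid on real contention.

#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>

struct toy_lock {
	atomic_uint val;		/* 0 = unlocked, 1 = locked */
};

static bool toy_trylock(struct toy_lock *lock)
{
	unsigned int expected = 0;

	/* A single compare-and-swap; cheap enough to live in the fast path. */
	return atomic_compare_exchange_strong(&lock->val, &expected, 1);
}

/*
 * The "slowerpath": the complex spinning/queuing work is kept out of
 * line so that its register pressure never taxes the caller's prolog.
 */
static __attribute__((noinline)) void toy_lock_slowerpath(struct toy_lock *lock)
{
	while (!toy_trylock(lock))
		sched_yield();		/* stand-in for MCS queuing + cpu_relax() */
}

void toy_lock_acquire(struct toy_lock *lock)
{
	/* Fast path: one trylock attempt, no callee-saved registers needed. */
	if (toy_trylock(lock))
		return;

	toy_lock_slowerpath(lock);
}

void toy_lock_release(struct toy_lock *lock)
{
	atomic_store(&lock->val, 0);
}

With this shape an uncontended caller pays only for the single atomic,
and any push/pop of callee-saved registers that the queuing logic forces
on the compiler is confined to toy_lock_slowerpath(), which is what the
patch achieves with queue_spin_lock_slowerpath().
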
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
kernel/locking/qspinlock.c | 120 +++++++++++++++++++++++++------------------
1 files changed, 70 insertions(+), 50 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 10e87e1..a14241e 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -335,57 +335,23 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
}
/**
- * queue_spin_lock_slowpath - acquire the queue spinlock
+ * queue_spin_lock_slowerpath - a slower path for acquiring queue spinlock
* @lock: Pointer to queue spinlock structure
- * @val: Current value of the queue spinlock 32-bit word
- *
- * (queue tail, pending bit, lock bit)
- *
- * fast : slow : unlock
- * : :
- * uncontended (0,0,0) -:--> (0,0,1) ------------------------------:--> (*,*,0)
- * : | ^--------.------. / :
- * : v \ \ | :
- * pending : (0,1,1) +--> (0,1,0) \ | :
- * : | ^--' | | :
- * : v | | :
- * uncontended : (n,x,y) +--> (n,0,0) --' | :
- * queue : | ^--' | :
- * : v | :
- * contended : (*,x,y) +--> (*,0,0) ---> (*,0,1) -' :
- * queue : ^--' :
- *
- * The pending bit processing is in the trylock_pending() function
- * whereas the uncontended and contended queue processing is in the
- * queue_spin_lock_slowpath() function.
+ * @val : Current value of the queue spinlock 32-bit word
+ * @node: Pointer to the queue node
+ * @tail: The tail code
*
+ * The slowerpath is split out from the slowpath so that the saving and
+ * restoring of callee-saved (non-scratch) registers needed by the more
+ * complex unfair and PV spinlock code does not slow down the nominally
+ * faster pending bit and trylock code path. For that reason, this
+ * function is not inlined.
*/
-void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
+static noinline void queue_spin_lock_slowerpath(struct qspinlock *lock,
+ u32 val, struct qnode *node, u32 tail)
{
- struct qnode *prev, *next, *node;
- u32 old, tail;
- int idx;
-
- BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
-
- if (trylock_pending(lock, &val))
- return; /* Lock acquired */
-
- node = this_cpu_ptr(&qnodes[0]);
- idx = node->mcs.count++;
- tail = encode_tail(smp_processor_id(), idx);
-
- node += idx;
- node->qhead = 0;
- node->mcs.next = NULL;
-
- /*
- * We touched a (possibly) cold cacheline in the per-cpu queue node;
- * attempt the trylock once more in the hope someone let go while we
- * weren't watching.
- */
- if (queue_spin_trylock(lock))
- goto release;
+ struct qnode *prev, *next;
+ u32 old;
/*
* we already touched the queueing cacheline; don't bother with pending
@@ -442,7 +408,7 @@ retry_queue_wait:
}
old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL);
if (old == val)
- goto release; /* No contention */
+ return; /* No contention */
else if (old & _Q_LOCKED_MASK)
goto retry_queue_wait;
@@ -450,14 +416,68 @@ retry_queue_wait:
}
/*
- * contended path; wait for next, release.
+ * contended path; wait for next, return.
*/
while (!(next = (struct qnode *)ACCESS_ONCE(node->mcs.next)))
arch_mutex_cpu_relax();
arch_mcs_spin_unlock_contended(&next->qhead);
+}
+
+/**
+ * queue_spin_lock_slowpath - acquire the queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ * @val: Current value of the queue spinlock 32-bit word
+ *
+ * (queue tail, pending bit, lock bit)
+ *
+ * fast : slow : unlock
+ * : :
+ * uncontended (0,0,0) -:--> (0,0,1) ------------------------------:--> (*,*,0)
+ * : | ^--------.------. / :
+ * : v \ \ | :
+ * pending : (0,1,1) +--> (0,1,0) \ | :
+ * : | ^--' | | :
+ * : v | | :
+ * uncontended : (n,x,y) +--> (n,0,0) --' | :
+ * queue : | ^--' | :
+ * : v | :
+ * contended : (*,x,y) +--> (*,0,0) ---> (*,0,1) -' :
+ * queue : ^--' :
+ *
+ * This slowpath now contains only the faster pending bit and trylock
+ * code paths; the slower queuing code is in the slowerpath function.
+ *
+ * The pending bit processing is in the trylock_pending() function
+ * whereas the uncontended and contended queue processing is in the
+ * queue_spin_lock_slowerpath() function.
+ */
+void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
+{
+ struct qnode *node;
+ u32 tail, idx;
+
+ BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
+
+ if (trylock_pending(lock, &val))
+ return; /* Lock acquired */
+
+ node = this_cpu_ptr(&qnodes[0]);
+ idx = node->mcs.count++;
+ tail = encode_tail(smp_processor_id(), idx);
+
+ node += idx;
+ node->qhead = 0;
+ node->mcs.next = NULL;
+
+ /*
+ * We touched a (possibly) cold cacheline in the per-cpu queue node;
+ * attempt the trylock once more in the hope someone let go while we
+ * weren't watching.
+ */
+ if (!queue_spin_trylock(lock))
+ queue_spin_lock_slowerpath(lock, val, node, tail);
-release:
/*
* release the node
*/
--
1.7.1