From: Sean Christopherson <seanjc@google.com>
To: Marc Zyngier <maz@kernel.org>,
Oliver Upton <oliver.upton@linux.dev>,
Tianrui Zhao <zhaotianrui@loongson.cn>,
Bibo Mao <maobibo@loongson.cn>,
Huacai Chen <chenhuacai@kernel.org>,
Madhavan Srinivasan <maddy@linux.ibm.com>,
Anup Patel <anup@brainfault.org>, Paul Walmsley <pjw@kernel.org>,
Palmer Dabbelt <palmer@dabbelt.com>,
Albert Ou <aou@eecs.berkeley.edu>,
Christian Borntraeger <borntraeger@linux.ibm.com>,
Janosch Frank <frankja@linux.ibm.com>,
Claudio Imbrenda <imbrenda@linux.ibm.com>,
Sean Christopherson <seanjc@google.com>,
Paolo Bonzini <pbonzini@redhat.com>,
"Kirill A. Shutemov" <kas@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
kvm@vger.kernel.org, loongarch@lists.linux.dev,
linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org,
x86@kernel.org, linux-coco@lists.linux.dev,
linux-kernel@vger.kernel.org, Ira Weiny <ira.weiny@intel.com>,
Kai Huang <kai.huang@intel.com>,
Michael Roth <michael.roth@amd.com>,
Yan Zhao <yan.y.zhao@intel.com>,
Vishal Annapurve <vannapurve@google.com>,
Rick Edgecombe <rick.p.edgecombe@intel.com>,
Ackerley Tng <ackerleytng@google.com>,
Binbin Wu <binbin.wu@linux.intel.com>
Subject: [PATCH v3 15/25] KVM: TDX: ADD pages to the TD image while populating mirror EPT entries
Date: Thu, 16 Oct 2025 17:32:33 -0700
Message-ID: <20251017003244.186495-16-seanjc@google.com>
In-Reply-To: <20251017003244.186495-1-seanjc@google.com>
When populating the initial memory image for a TDX guest, ADD pages to the
TD as part of establishing the mappings in the mirror EPT, as opposed to
creating the mappings and then doing ADD after the fact. Doing ADD in the
S-EPT callbacks eliminates the need to track "premapped" pages, as the
mirror EPT (M-EPT) and S-EPT are always synchronized, e.g. if ADD fails,
KVM reverts to the previous M-EPT entry (guaranteed to be !PRESENT).
Eliminating the hole where the M-EPT can have a mapping that doesn't exist
in the S-EPT in turn obviates the need to handle errors that are unique to
encountering a missing S-EPT entry (see tdx_is_sept_zap_err_due_to_premap()).
Keeping the M-EPT and S-EPT synchronized also eliminates the need to check
for unconsumed "premap" entries during tdx_td_finalize(), as there simply
can't be any such entries. Dropping that check in particular reduces the
overall cognitive load, as the management of nr_premapped with respect
to removal of S-EPT entries is _very_ subtle. E.g. successful removal of an S-EPT
entry after it completed ADD doesn't adjust nr_premapped, but it's not
clear why that's "ok" but having half-baked entries is not (it's not truly
"ok" in that removing pages from the image will likely prevent the guest
from booting, but from KVM's perspective it's "ok").
Doing ADD in the S-EPT path requires passing an argument via a scratch
field, but the current approach of tracking the number of "premapped"
pages effectively does the same. And the "premapped" counter is much more
dangerous, as it doesn't have a singular lock to protect its usage, since
nr_premapped can be modified as soon as mmu_lock is dropped, at least in
theory. I.e. nr_premapped is guarded by slots_lock, but only for "happy"
paths.
Note, this approach was used/tried at various points in TDX development,
but was ultimately discarded due to a desire to avoid stashing temporary
state in kvm_tdx. But as above, KVM ended up with such state anyways,
and fully committing to using temporary state provides better access
rules (100% guarded by slots_lock), and makes several edge cases flat out
impossible.
Note #2, continue to extend the measurement outside of mmu_lock, as it's
a slow operation (typically 16 SEAMCALLs per page whose data is included
in the measurement), and doesn't *need* to be done under mmu_lock, e.g.
for consistency purposes. However, MR.EXTEND isn't _that_ slow, e.g.
~1ms latency to measure a full page, so if it needs to be done under
mmu_lock in the future, e.g. because KVM gains a flow that can remove
S-EPT entries during KVM_TDX_INIT_MEM_REGION, then extending the
measurement can also be moved into the S-EPT mapping path (again, only if
absolutely necessary). P.S. _If_ MR.EXTEND is moved into the S-EPT path,
take care not to return an error up the stack if TDH_MR_EXTEND fails, as
removing the M-EPT entry but not the S-EPT entry would result in
inconsistent state!
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
arch/x86/kvm/vmx/tdx.c | 114 ++++++++++++++---------------------------
arch/x86/kvm/vmx/tdx.h | 8 ++-
2 files changed, 45 insertions(+), 77 deletions(-)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index f4bab75d3ffb..76030461c8f7 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1583,6 +1583,32 @@ void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa);
}
+static int tdx_mem_page_add(struct kvm *kvm, gfn_t gfn, enum pg_level level,
+ kvm_pfn_t pfn)
+{
+ struct kvm_tdx *kvm_tdx = to_kvm_tdx(kvm);
+ u64 err, entry, level_state;
+ gpa_t gpa = gfn_to_gpa(gfn);
+
+ lockdep_assert_held(&kvm->slots_lock);
+
+ if (KVM_BUG_ON(kvm->arch.pre_fault_allowed, kvm) ||
+ KVM_BUG_ON(!kvm_tdx->page_add_src, kvm))
+ return -EIO;
+
+ err = tdh_mem_page_add(&kvm_tdx->td, gpa, pfn_to_page(pfn),
+ kvm_tdx->page_add_src, &entry, &level_state);
+ if (unlikely(tdx_operand_busy(err)))
+ return -EBUSY;
+
+ if (KVM_BUG_ON(err, kvm)) {
+ pr_tdx_error_2(TDH_MEM_PAGE_ADD, err, entry, level_state);
+ return -EIO;
+ }
+
+ return 0;
+}
+
static int tdx_mem_page_aug(struct kvm *kvm, gfn_t gfn,
enum pg_level level, kvm_pfn_t pfn)
{
@@ -1624,19 +1650,10 @@ static int tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
/*
* If the TD isn't finalized/runnable, then userspace is initializing
- * the VM image via KVM_TDX_INIT_MEM_REGION. Increment the number of
- * pages that need to be mapped and initialized via TDH.MEM.PAGE.ADD.
- * KVM_TDX_FINALIZE_VM checks the counter to ensure all mapped pages
- * have been added to the image, to prevent running the TD with a
- * valid mapping in the mirror EPT, but not in the S-EPT.
+ * the VM image via KVM_TDX_INIT_MEM_REGION; ADD the page to the TD.
*/
- if (unlikely(kvm_tdx->state != TD_STATE_RUNNABLE)) {
- if (KVM_BUG_ON(kvm->arch.pre_fault_allowed, kvm))
- return -EIO;
-
- atomic64_inc(&kvm_tdx->nr_premapped);
- return 0;
- }
+ if (unlikely(kvm_tdx->state != TD_STATE_RUNNABLE))
+ return tdx_mem_page_add(kvm, gfn, level, pfn);
return tdx_mem_page_aug(kvm, gfn, level, pfn);
}
@@ -1662,39 +1679,6 @@ static int tdx_sept_link_private_spt(struct kvm *kvm, gfn_t gfn,
return 0;
}
-/*
- * Check if the error returned from a SEPT zap SEAMCALL is due to that a page is
- * mapped by KVM_TDX_INIT_MEM_REGION without tdh_mem_page_add() being called
- * successfully.
- *
- * Since tdh_mem_sept_add() must have been invoked successfully before a
- * non-leaf entry present in the mirrored page table, the SEPT ZAP related
- * SEAMCALLs should not encounter err TDX_EPT_WALK_FAILED. They should instead
- * find TDX_EPT_ENTRY_STATE_INCORRECT due to an empty leaf entry found in the
- * SEPT.
- *
- * Further check if the returned entry from SEPT walking is with RWX permissions
- * to filter out anything unexpected.
- *
- * Note: @level is pg_level, not the tdx_level. The tdx_level extracted from
- * level_state returned from a SEAMCALL error is the same as that passed into
- * the SEAMCALL.
- */
-static int tdx_is_sept_zap_err_due_to_premap(struct kvm_tdx *kvm_tdx, u64 err,
- u64 entry, int level)
-{
- if (!err || kvm_tdx->state == TD_STATE_RUNNABLE)
- return false;
-
- if (err != (TDX_EPT_ENTRY_STATE_INCORRECT | TDX_OPERAND_ID_RCX))
- return false;
-
- if ((is_last_spte(entry, level) && (entry & VMX_EPT_RWX_MASK)))
- return false;
-
- return true;
-}
-
static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
enum pg_level level, struct page *page)
{
@@ -1714,12 +1698,6 @@ static int tdx_sept_zap_private_spte(struct kvm *kvm, gfn_t gfn,
err = tdh_mem_range_block(&kvm_tdx->td, gpa, tdx_level, &entry, &level_state);
tdx_no_vcpus_enter_stop(kvm);
}
- if (tdx_is_sept_zap_err_due_to_premap(kvm_tdx, err, entry, level)) {
- if (KVM_BUG_ON(atomic64_dec_return(&kvm_tdx->nr_premapped) < 0, kvm))
- return -EIO;
-
- return 0;
- }
if (KVM_BUG_ON(err, kvm)) {
pr_tdx_error_2(TDH_MEM_RANGE_BLOCK, err, entry, level_state);
@@ -2825,12 +2803,6 @@ static int tdx_td_finalize(struct kvm *kvm, struct kvm_tdx_cmd *cmd)
if (!is_hkid_assigned(kvm_tdx) || kvm_tdx->state == TD_STATE_RUNNABLE)
return -EINVAL;
- /*
- * Pages are pending for KVM_TDX_INIT_MEM_REGION to issue
- * TDH.MEM.PAGE.ADD().
- */
- if (atomic64_read(&kvm_tdx->nr_premapped))
- return -EINVAL;
cmd->hw_error = tdh_mr_finalize(&kvm_tdx->td);
if (tdx_operand_busy(cmd->hw_error))
@@ -3127,6 +3099,9 @@ static int tdx_gmem_post_populate(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
struct page *src_page;
int ret, i;
+ if (KVM_BUG_ON(kvm_tdx->page_add_src, kvm))
+ return -EIO;
+
/*
* Get the source page if it has been faulted in. Return failure if the
* source page has been swapped out or unmapped in primary memory.
@@ -3137,22 +3112,14 @@ static int tdx_gmem_post_populate(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
if (ret != 1)
return -ENOMEM;
+ kvm_tdx->page_add_src = src_page;
ret = kvm_tdp_mmu_map_private_pfn(arg->vcpu, gfn, pfn);
- if (ret < 0)
- goto out;
+ kvm_tdx->page_add_src = NULL;
- ret = 0;
- err = tdh_mem_page_add(&kvm_tdx->td, gpa, pfn_to_page(pfn),
- src_page, &entry, &level_state);
- if (err) {
- ret = unlikely(tdx_operand_busy(err)) ? -EBUSY : -EIO;
- goto out;
- }
+ put_page(src_page);
- KVM_BUG_ON(atomic64_dec_return(&kvm_tdx->nr_premapped) < 0, kvm);
-
- if (!(arg->flags & KVM_TDX_MEASURE_MEMORY_REGION))
- goto out;
+ if (ret || !(arg->flags & KVM_TDX_MEASURE_MEMORY_REGION))
+ return ret;
/*
* Note, MR.EXTEND can fail if the S-EPT mapping is somehow removed
@@ -3165,14 +3132,11 @@ static int tdx_gmem_post_populate(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
err = tdh_mr_extend(&kvm_tdx->td, gpa + i, &entry, &level_state);
if (KVM_BUG_ON(err, kvm)) {
pr_tdx_error_2(TDH_MR_EXTEND, err, entry, level_state);
- ret = -EIO;
- goto out;
+ return -EIO;
}
}
-out:
- put_page(src_page);
- return ret;
+ return 0;
}
static int tdx_vcpu_init_mem_region(struct kvm_vcpu *vcpu, struct kvm_tdx_cmd *cmd)
diff --git a/arch/x86/kvm/vmx/tdx.h b/arch/x86/kvm/vmx/tdx.h
index ca39a9391db1..1b00adbbaf77 100644
--- a/arch/x86/kvm/vmx/tdx.h
+++ b/arch/x86/kvm/vmx/tdx.h
@@ -36,8 +36,12 @@ struct kvm_tdx {
struct tdx_td td;
- /* For KVM_TDX_INIT_MEM_REGION. */
- atomic64_t nr_premapped;
+ /*
+ * Scratch pointer used to pass the source page to tdx_mem_page_add.
+ * Protected by slots_lock, and non-NULL only when mapping a private
+ * pfn via tdx_gmem_post_populate().
+ */
+ struct page *page_add_src;
/*
* Prevent vCPUs from TD entry to ensure SEPT zap related SEAMCALLs do
--
2.51.0.858.gf9c4a03a3a-goog