From: Peter Xu <peterx@redhat.com>
To: James Houghton <jthoughton@google.com>
Cc: Mike Kravetz <mike.kravetz@oracle.com>,
Muchun Song <songmuchun@bytedance.com>,
David Hildenbrand <david@redhat.com>,
David Rientjes <rientjes@google.com>,
Axel Rasmussen <axelrasmussen@google.com>,
Mina Almasry <almasrymina@google.com>,
Zach O'Keefe <zokeefe@google.com>,
Manish Mishra <manish.mishra@nutanix.com>,
Naoya Horiguchi <naoya.horiguchi@nec.com>,
"Dr . David Alan Gilbert" <dgilbert@redhat.com>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
Vlastimil Babka <vbabka@suse.cz>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Miaohe Lin <linmiaohe@huawei.com>, Yang Shi <shy828301@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 35/46] hugetlb: add MADV_COLLAPSE for hugetlb
Date: Tue, 17 Jan 2023 16:06:40 -0500
Message-ID: <Y8cN4G0ICoSSggS+@x1n>
In-Reply-To: <20230105101844.1893104-36-jthoughton@google.com>
On Thu, Jan 05, 2023 at 10:18:33AM +0000, James Houghton wrote:
> This is a necessary extension to the UFFDIO_CONTINUE changes. When
> userspace finishes mapping an entire hugepage with UFFDIO_CONTINUE, the
> kernel has no mechanism to automatically collapse the page table to map
> the whole hugepage normally. We require userspace to inform us that they
> would like the mapping to be collapsed; they do this with MADV_COLLAPSE.
>
> If userspace has not mapped all of a hugepage with UFFDIO_CONTINUE, but
> only some, hugetlb_collapse will cause the requested range to be mapped
> as if it were UFFDIO_CONTINUE'd already. The effects of any
> UFFDIO_WRITEPROTECT calls may be undone by a call to MADV_COLLAPSE for
> intersecting address ranges.
>
> This commit is co-opting the same madvise mode that has been introduced
> to synchronously collapse THPs. The function that does THP collapsing
> has been renamed to madvise_collapse_thp.
>
> As with the rest of the high-granularity mapping support, MADV_COLLAPSE
> is only supported for shared VMAs right now.
>
> MADV_COLLAPSE has the same synchronization as huge_pmd_unshare.
>
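(For context, the userspace flow being described is roughly the below.
This is an illustrative sketch only, assuming <sys/mman.h>,
<sys/ioctl.h> and <linux/userfaultfd.h>: the uffd registration and
fault handling are omitted, the 4K step assumes a 4K base page size,
and `addr', `hugepage_size' and `uffd' are placeholder names.)

	/* UFFDIO_CONTINUE each base page of the huge page, then collapse. */
	for (unsigned long off = 0; off < hugepage_size; off += 4096) {
		struct uffdio_continue cont = {
			.range = { .start = addr + off, .len = 4096 },
		};

		ioctl(uffd, UFFDIO_CONTINUE, &cont);
	}
	madvise((void *)addr, hugepage_size, MADV_COLLAPSE);
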
> Signed-off-by: James Houghton <jthoughton@google.com>
> ---
> include/linux/huge_mm.h | 12 +--
> include/linux/hugetlb.h | 8 ++
> mm/hugetlb.c | 164 ++++++++++++++++++++++++++++++++++++++++
> mm/khugepaged.c | 4 +-
> mm/madvise.c | 18 ++++-
> 5 files changed, 197 insertions(+), 9 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index a1341fdcf666..5d1e3c980f74 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -218,9 +218,9 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
>
> int hugepage_madvise(struct vm_area_struct *vma, unsigned long *vm_flags,
> int advice);
> -int madvise_collapse(struct vm_area_struct *vma,
> - struct vm_area_struct **prev,
> - unsigned long start, unsigned long end);
> +int madvise_collapse_thp(struct vm_area_struct *vma,
> + struct vm_area_struct **prev,
> + unsigned long start, unsigned long end);
> void vma_adjust_trans_huge(struct vm_area_struct *vma, unsigned long start,
> unsigned long end, long adjust_next);
> spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma);
> @@ -367,9 +367,9 @@ static inline int hugepage_madvise(struct vm_area_struct *vma,
> return -EINVAL;
> }
>
> -static inline int madvise_collapse(struct vm_area_struct *vma,
> - struct vm_area_struct **prev,
> - unsigned long start, unsigned long end)
> +static inline int madvise_collapse_thp(struct vm_area_struct *vma,
> + struct vm_area_struct **prev,
> + unsigned long start, unsigned long end)
> {
> return -EINVAL;
> }
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index c8524ac49b24..e1baf939afb6 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -1298,6 +1298,8 @@ bool hugetlb_hgm_eligible(struct vm_area_struct *vma);
> int hugetlb_alloc_largest_pte(struct hugetlb_pte *hpte, struct mm_struct *mm,
> struct vm_area_struct *vma, unsigned long start,
> unsigned long end);
> +int hugetlb_collapse(struct mm_struct *mm, struct vm_area_struct *vma,
> + unsigned long start, unsigned long end);
> #else
> static inline bool hugetlb_hgm_enabled(struct vm_area_struct *vma)
> {
> @@ -1318,6 +1320,12 @@ int hugetlb_alloc_largest_pte(struct hugetlb_pte *hpte, struct mm_struct *mm,
> {
> return -EINVAL;
> }
> +static inline
> +int hugetlb_collapse(struct mm_struct *mm, struct vm_area_struct *vma,
> + unsigned long start, unsigned long end)
> +{
> + return -EINVAL;
> +}
> #endif
>
> static inline spinlock_t *huge_pte_lock(struct hstate *h,
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 5b6215e03fe1..388c46c7e77a 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -7852,6 +7852,170 @@ int hugetlb_alloc_largest_pte(struct hugetlb_pte *hpte, struct mm_struct *mm,
> return 0;
> }
>
> +static bool hugetlb_hgm_collapsable(struct vm_area_struct *vma)
> +{
> + if (!hugetlb_hgm_eligible(vma))
> + return false;
> + if (!vma->vm_private_data) /* vma lock required for collapsing */
> + return false;
> + return true;
> +}
> +
> +/*
> + * Collapse the address range from @start to @end to be mapped optimally.
> + *
> + * This is only valid for shared mappings. The main use case for this function
> + * is following UFFDIO_CONTINUE. If a user UFFDIO_CONTINUEs an entire hugepage
> + * by calling UFFDIO_CONTINUE once for each 4K region, the kernel doesn't know
> + * to collapse the mapping after the final UFFDIO_CONTINUE. Instead, we leave
> + * it up to userspace to tell us to do so, via MADV_COLLAPSE.
> + *
> + * Any holes in the mapping will be filled. If there is no page in the
> + * pagecache for a region we're collapsing, the PTEs will be cleared.
> + *
> + * If high-granularity PTEs are uffd-wp markers, those markers will be dropped.
> + */
> +int hugetlb_collapse(struct mm_struct *mm, struct vm_area_struct *vma,
> + unsigned long start, unsigned long end)
> +{
> + struct hstate *h = hstate_vma(vma);
> + struct address_space *mapping = vma->vm_file->f_mapping;
> + struct mmu_notifier_range range;
> + struct mmu_gather tlb;
> + unsigned long curr = start;
> + int ret = 0;
> + struct page *hpage, *subpage;
> + pgoff_t idx;
> + bool writable = vma->vm_flags & VM_WRITE;
> + bool shared = vma->vm_flags & VM_SHARED;
> + struct hugetlb_pte hpte;
> + pte_t entry;
> +
> + /*
> + * This is only supported for shared VMAs, because we need to look up
> + * the page to use for any PTEs we end up creating.
> + */
> + if (!shared)
> + return -EINVAL;
> +
> + /* If HGM is not enabled, there is nothing to collapse. */
> + if (!hugetlb_hgm_enabled(vma))
> + return 0;
> +
> + /*
> + * We lost the VMA lock after splitting, so we can't safely collapse.
> + * We could improve this in the future (like take the mmap_lock for
> + * writing and try again), but for now just fail with ENOMEM.
> + */
> + if (unlikely(!hugetlb_hgm_collapsable(vma)))
> + return -ENOMEM;
> +
> + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm,
> + start, end);
> + mmu_notifier_invalidate_range_start(&range);
> + tlb_gather_mmu(&tlb, mm);
> +
> + /*
> + * Grab the VMA lock and mapping sem for writing. This will prevent
> + * concurrent high-granularity page table walks, so that we can safely
> + * collapse and free page tables.
> + *
> + * This is the same locking that huge_pmd_unshare requires.
> + */
> + hugetlb_vma_lock_write(vma);
> + i_mmap_lock_write(vma->vm_file->f_mapping);
> +
> + while (curr < end) {
> + ret = hugetlb_alloc_largest_pte(&hpte, mm, vma, curr, end);
> + if (ret)
> + goto out;
> +
> + entry = huge_ptep_get(hpte.ptep);
> +
> + /*
> + * There is no work to do if the PTE doesn't point to page
> + * tables.
> + */
> + if (!pte_present(entry))
> + goto next_hpte;
> + if (hugetlb_pte_present_leaf(&hpte, entry))
> + goto next_hpte;
> +
> + idx = vma_hugecache_offset(h, vma, curr);
> + hpage = find_get_page(mapping, idx);
> +
> + if (hpage && !HPageMigratable(hpage)) {
> + /*
> + * Don't collapse a mapping to a page that is pending
> + * a migration. Migration swap entries may have been
> + * placed in the page table.
> + */
> + ret = -EBUSY;
> + put_page(hpage);
> + goto out;
> + }
> +
> + if (hpage && PageHWPoison(hpage)) {
> + /*
> + * Don't collapse a mapping to a page that is
> + * hwpoisoned.
> + */
> + ret = -EHWPOISON;
> + put_page(hpage);
> + /*
> + * By setting ret to -EHWPOISON, if nothing else
> + * happens, we will tell userspace that we couldn't
> + * fully collapse everything due to poison.
> + *
> + * Skip this page, and continue to collapse the rest
> + * of the mapping.
> + */
> + curr = (curr & huge_page_mask(h)) + huge_page_size(h);
> + continue;
> + }
> +
> + /*
> + * Clear all the PTEs, and drop ref/mapcounts
> + * (on tlb_finish_mmu).
> + */
> + __unmap_hugepage_range(&tlb, vma, curr,
> + curr + hugetlb_pte_size(&hpte),
> + NULL,
> + ZAP_FLAG_DROP_MARKER);
> + /* Free the PTEs. */
> + hugetlb_free_pgd_range(&tlb,
> + curr, curr + hugetlb_pte_size(&hpte),
> + curr, curr + hugetlb_pte_size(&hpte));
> + if (!hpage) {
> + huge_pte_clear(mm, curr, hpte.ptep,
> + hugetlb_pte_size(&hpte));
> + goto next_hpte;
> + }
> +
> + page_dup_file_rmap(hpage, true);
> +
> + subpage = hugetlb_find_subpage(h, hpage, curr);
> + entry = make_huge_pte_with_shift(vma, subpage,
> + writable, hpte.shift);
> + set_huge_pte_at(mm, curr, hpte.ptep, entry);
> +next_hpte:
> + curr += hugetlb_pte_size(&hpte);
> +
> + if (curr < end) {
> + /* Don't hold the VMA lock for too long. */
> + hugetlb_vma_unlock_write(vma);
> + cond_resched();
> + hugetlb_vma_lock_write(vma);
The intention is good here, but IIUC this will cause the vma lock to be
taken after the i_mmap_rwsem, which can cause circular deadlocks. To do
this properly we'll need to also release the i_mmap_rwsem.
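For example (an untested sketch, just to illustrate the ordering: drop
the innermost lock first, then re-take both in the original order so
the vma lock is never acquired while holding the i_mmap_rwsem):

	i_mmap_unlock_write(vma->vm_file->f_mapping);
	hugetlb_vma_unlock_write(vma);
	cond_resched();
	hugetlb_vma_lock_write(vma);
	i_mmap_lock_write(vma->vm_file->f_mapping);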
However, that may make the resched() logic overly complicated.
Meanwhile, for 2M huge pages I think this will trigger once for each 2M
range, which can be too fine grained, so the "curr < end" check looks a
bit too aggressive.
The other thing I noticed is that the long period of mmu notifier
invalidation between start -> end will (in a real-life VM context)
cause vcpu threads to spin.
I _think_ it's because is_page_fault_stale() (during a vmexit following
a kvm page fault) always reports true for the whole duration of
MADV_COLLAPSE when it is called upon a large range, so even if we
release both locks here it may not help tremendously for the VM
migration use case, because of the long-standing mmu notifier
invalidation procedure.
To summarize: I think a simpler starting version of hugetlb
MADV_COLLAPSE could drop this "if" block entirely, and let the
userspace app decide the step size of COLLAPSE?
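E.g. something like the below on the userspace side (assuming
<sys/mman.h> and <errno.h>), where the 1G step size and the names
`base' and `len' are purely illustrative:

	const size_t step = 1UL << 30;	/* step size chosen by the app */
	char *p = base, *end = base + len;

	while (p < end) {
		size_t chunk = (size_t)(end - p) < step ?
			       (size_t)(end - p) : step;

		/*
		 * One MADV_COLLAPSE call per chunk; errors (e.g.
		 * EHWPOISON for a poisoned page) can then be handled
		 * per-chunk instead of failing the whole range.
		 */
		if (madvise(p, chunk, MADV_COLLAPSE) && errno != EHWPOISON)
			break;
		p += chunk;
	}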
> + }
> + }
> +out:
> + i_mmap_unlock_write(vma->vm_file->f_mapping);
> + hugetlb_vma_unlock_write(vma);
> + tlb_finish_mmu(&tlb);
> + mmu_notifier_invalidate_range_end(&range);
> + return ret;
> +}
> +
> #endif /* CONFIG_HUGETLB_HIGH_GRANULARITY_MAPPING */
--
Peter Xu