From: Yichen Wang <yichen.wang@bytedance.com>
To: Fabiano Rosas <farosas@suse.de>
Cc: "Peter Xu" <peterx@redhat.com>,
"Dr. David Alan Gilbert" <dave@treblig.org>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Marc-André Lureau" <marcandre.lureau@redhat.com>,
"Daniel P. Berrangé" <berrange@redhat.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Eric Blake" <eblake@redhat.com>,
"Markus Armbruster" <armbru@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
"Cornelia Huck" <cohuck@redhat.com>,
qemu-devel@nongnu.org, "Hao Xiang" <hao.xiang@linux.dev>,
"Liu, Yuan1" <yuan1.liu@intel.com>,
"Shivam Kumar" <shivam.kumar1@nutanix.com>,
"Ho-Ren (Jack) Chuang" <horenchuang@bytedance.com>
Subject: Re: [External] Re: [PATCH v7 00/12] Use Intel DSA accelerator to offload zero page checking in multifd live migration.
Date: Mon, 25 Nov 2024 20:43:55 -0800 [thread overview]
Message-ID: <CAHObMVZFZ3Jg0dTk--XV2i-tMuOFwa0Lk3B9Hp3PGe6e1TK5rg@mail.gmail.com> (raw)
In-Reply-To: <87h682syfv.fsf@suse.de>
On Tue, Nov 19, 2024 at 1:31 PM Fabiano Rosas <farosas@suse.de> wrote:
>
> Yichen Wang <yichen.wang@bytedance.com> writes:
>
> > v7
> > * Rebase on top of f0a5a31c33a8109061c2493e475c8a2f4d022432;
> > * Fix a bug that crashed QEMU when DSA initialization failed;
> > * Use a more generalized accel-path to support other accelerators;
> > * Remove multifd-packet-size from the parameter list;
> >
> > v6
> > * Rebase on top of 838fc0a8769d7cc6edfe50451ba4e3368395f5c1;
> > * Refactor code to have clean history on all commits;
> > * Add comments on DSA specific defines about how the value is picked;
> > * Address all comments from v5 reviews about api defines, questions, etc.;
> >
> > v5
> > * Rebase on top of 39a032cea23e522268519d89bb738974bc43b6f6.
> > * Rename struct definitions with typedef and CamelCase names;
> > * Add build and runtime checks about DSA accelerator;
> > * Address all comments from v4 reviews about typos, licenses, comments,
> > error reporting, etc.
> >
> > v4
> > * Rebase on top of 85b597413d4370cb168f711192eaef2eb70535ac.
> > * A separate "multifd zero page checking" patchset was split from this
> > patchset's v3 and got merged into master. v4 re-applied the remaining
> > commits on top of that patchset, refactored and re-tested.
> > https://lore.kernel.org/all/20240311180015.3359271-1-hao.xiang@linux.dev/
> > * There was some feedback from v3 that I may have overlooked.
> >
> > v3
> > * Rebase on top of 7425b6277f12e82952cede1f531bfc689bf77fb1.
> > * Fix error/warning from checkpatch.pl
> > * Fix use-after-free bug when multifd-dsa-accel option is not set.
> > * Handle error from dsa_init and correctly propagate the error.
> > * Remove unnecessary call to dsa_stop.
> > * Detect availability of DSA feature at compile time.
> > * Implement a generic batch_task structure and a DSA-specific one, dsa_batch_task.
> > * Remove all exit() calls and propagate errors correctly.
> > * Use bytes instead of page count to configure multifd-packet-size option.
> >
> > v2
> > * Rebase on top of 3e01f1147a16ca566694b97eafc941d62fa1e8d8.
> > * Leave Juan's changes in their original form instead of squashing them.
> > * Add a new commit to refactor the multifd_send_thread function to
> > prepare for introducing the DSA offload functionality.
> > * Use page count to configure multifd-packet-size option.
> > * Don't use the FLAKY flag in DSA tests.
> > * Test if the DSA integration test is set up correctly and skip the
> > test if not.
> > * Fix broken link in the previous patch cover letter.
> >
> > * Background:
> >
> > I posted an RFC about DSA offloading in QEMU:
> > https://patchew.org/QEMU/20230529182001.2232069-1-hao.xiang@bytedance.com/
> >
> > This patchset implements DSA offloading of zero page checking in the
> > multifd live migration code path.
> >
> > * Overview:
> >
> > Intel Data Streaming Accelerator (DSA) was introduced in Intel's 4th
> > generation Xeon server, aka Sapphire Rapids.
> > https://cdrdv2-public.intel.com/671116/341204-intel-data-streaming-accelerator-spec.pdf
> > https://www.intel.com/content/www/us/en/content-details/759709/intel-data-streaming-accelerator-user-guide.html
> > One of the things DSA can do is offload memory comparison workloads from
> > the CPU to the accelerator hardware. This patchset implements a solution
> > to offload QEMU's zero page checking from the CPU to DSA accelerator
> > hardware. We gain two benefits from this change:
> > 1. Reduced CPU usage in the multifd live migration workflow across all
> > use cases.
> > 2. Reduced total migration time in some use cases.
> >
> > * Design:
> >
> > These are the logical steps to perform DSA offloading (a minimal
> > sketch follows the list):
> > 1. Configure DSA accelerators and create DSA work queues that can be
> > opened from user space via the idxd driver.
> > 2. Map DSA's work queue into the user space address space.
> > 3. Fill an in-memory task descriptor to describe the memory operation.
> > 4. Use the dedicated CPU instruction _enqcmd to queue a task descriptor
> > to the work queue.
> > 5. Poll the task descriptor's completion status field until the task
> > completes.
> > 6. Check the return status.
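> >
> > Below is a minimal C sketch of steps 3-6 (illustrative only, not the
> > patchset's actual code). It assumes a shared work queue whose portal
> > has already been mmap()ed from a /dev/dsa/wqX.Y character device, and
> > it uses the descriptor layout from the linux/idxd.h uapi header:
> >
> >     #include <stdint.h>
> >     #include <linux/idxd.h>
> >     #include <immintrin.h>  /* _enqcmd, _mm_pause; build with -menqcmd */
> >
> >     /* Returns 1 if buf is all zeroes, 0 if not, -1 on a DSA error. */
> >     static int dsa_buffer_is_zero(void *wq_portal, const void *buf,
> >                                   uint32_t len)
> >     {
> >         struct dsa_completion_record comp
> >             __attribute__((aligned(32))) = { 0 };
> >         struct dsa_hw_desc desc = { 0 };
> >
> >         /* Step 3: describe a "compare against pattern" operation. */
> >         desc.opcode = DSA_OPCODE_COMPVAL;
> >         desc.flags = IDXD_OP_FLAG_RCR | IDXD_OP_FLAG_CRAV;
> >         desc.src_addr = (uintptr_t)buf;
> >         desc.xfer_size = len;
> >         desc.comp_pattern = 0;            /* zero page check */
> >         desc.completion_addr = (uintptr_t)&comp;
> >
> >         /* Step 4: ENQCMD returns nonzero while the queue is full. */
> >         while (_enqcmd(wq_portal, &desc)) {
> >             _mm_pause();
> >         }
> >
> >         /* Step 5: poll the completion record; its status field is
> >          * declared volatile in the uapi header. */
> >         while (comp.status == DSA_COMP_NONE) {
> >             _mm_pause();
> >         }
> >
> >         /* Step 6: on success, result == 0 means every byte matched
> >          * the all-zero pattern. */
> >         if (comp.status != DSA_COMP_SUCCESS) {
> >             return -1;    /* e.g. page fault: fall back to the CPU */
> >         }
> >         return comp.result == 0;
> >     }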
> >
> > The memory operation is now done entirely by the accelerator hardware,
> > but the new workflow introduces overheads: the extra CPU cost of
> > preparing and submitting the task descriptors, and the extra CPU cost
> > of polling for completion. The design is built around minimizing these
> > two overheads.
> >
> > 1. In order to reduce the overhead of task preparation and submission,
> > we use batch descriptors. A batch descriptor contains N individual
> > zero page checking tasks, where the default N is 128 (default packet
> > size / page size), and we can increase N by setting the packet size
> > via a new migration option.
> > 2. Each multifd sender thread prepares and submits batch tasks to the
> > DSA hardware and waits on a synchronization object for task completion.
> > Whenever a DSA task is submitted, the task structure is added to a
> > thread-safe queue. It is safe for multiple multifd sender threads to
> > submit tasks concurrently.
> > 3. Multiple DSA hardware devices can be used. During multifd initialization,
> > every sender thread will be assigned a DSA device to work with. We
> > use a round-robin scheme to evenly distribute the work across all used
> > DSA devices.
> > 4. Use a dedicated thread, dsa_completion, to busy-poll for all DSA
> > task completions (see the sketch after this list). The thread keeps
> > dequeuing DSA tasks from the thread-safe queue and blocks when there
> > is no outstanding DSA task. When polling for completion of a DSA task,
> > the thread uses the CPU instruction _mm_pause between the iterations
> > of the busy loop to save some CPU power and free up core resources for
> > the sibling hyperthread.
> > 5. The DSA accelerator can encounter errors. The most common error is
> > a page fault. We have tested letting the device handle page faults,
> > but performance is poor. Right now, if DSA hits a page fault, we fall
> > back to the CPU to complete the rest of the work. The CPU fallback is
> > done in the multifd sender thread.
> > 6. Added a new migration option multifd-dsa-accel to set the DSA device
> > path. If set, the multifd workflow will leverage the DSA devices for
> > offloading.
> > 7. Added a new migration option multifd-normal-page-ratio to make
> > multifd live migration easier to test. Setting a normal page ratio
> > makes live migration recognize that fraction of zero pages as normal
> > pages and send the entire payload over the network. This option is
> > useful when we want to send a large network payload and analyze
> > throughput.
> > 8. Added a new migration option multifd-packet-size. This can increase
> > the number of pages being zero page checked and sent over the network.
> > The extra synchronization between the sender threads and the
> > dsa_completion thread is an overhead; using a larger packet size can
> > reduce that overhead.
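> >
> > A hypothetical sketch of the dsa_completion loop described in item 4;
> > the DsaBatchTask layout and the dsa_task_queue_pop() helper are
> > illustrative names, not the patchset's actual API:
> >
> >     #include "qemu/osdep.h"
> >     #include "qemu/thread.h"    /* QemuSemaphore, qemu_sem_post */
> >     #include <linux/idxd.h>     /* struct dsa_completion_record */
> >     #include <emmintrin.h>      /* _mm_pause */
> >
> >     typedef struct DsaBatchTask {
> >         struct dsa_completion_record batch_completion
> >             __attribute__((aligned(32)));
> >         QemuSemaphore sem_task_complete;
> >     } DsaBatchTask;
> >
> >     /* Hypothetical helper: blocks while no task is outstanding and
> >      * returns NULL as the shutdown signal. */
> >     DsaBatchTask *dsa_task_queue_pop(void);
> >
> >     static void *dsa_completion_thread(void *opaque)
> >     {
> >         DsaBatchTask *task;
> >
> >         while ((task = dsa_task_queue_pop())) {
> >             /* Busy-poll the batch completion record; _mm_pause()
> >              * cedes pipeline resources to the sibling hyperthread. */
> >             while (task->batch_completion.status == DSA_COMP_NONE) {
> >                 _mm_pause();
> >             }
> >             /* Wake the multifd sender thread waiting on this batch. */
> >             qemu_sem_post(&task->sem_task_complete);
> >         }
> >         return NULL;
> >     }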
> >
> > * Performance:
> >
> > We use two Intel 4th generation Xeon servers for testing.
> >
> > Architecture: x86_64
> > CPU(s): 192
> > Thread(s) per core: 2
> > Core(s) per socket: 48
> > Socket(s): 2
> > NUMA node(s): 2
> > Vendor ID: GenuineIntel
> > CPU family: 6
> > Model: 143
> > Model name: Intel(R) Xeon(R) Platinum 8457C
> > Stepping: 8
> > CPU MHz: 2538.624
> > CPU max MHz: 3800.0000
> > CPU min MHz: 800.0000
> >
> > We perform multifd live migration with the setup below:
> > 1. The VM has 100GB of memory.
> > 2. Use the new migration option multifd-normal-page-ratio to control
> > the total size of the payload sent over the network.
> > 3. Use 8 multifd channels.
> > 4. Use TCP for live migration.
> > 5. Use the CPU to perform zero page checking as the baseline.
> > 6. Use one DSA device to offload zero page checking to compare with the baseline.
> > 7. Use "perf sched record" and "perf sched timehist" to analyze CPU usage.
> >
> > A) Scenario 1: 50% (50GB) normal pages on a 100GB VM.
> >
> > CPU usage
> >
> > |---------------|---------------|---------------|---------------|
> > | |comm |runtime(msec) |totaltime(msec)|
> > |---------------|---------------|---------------|---------------|
> > |Baseline |live_migration |5657.58 | |
> > | |multifdsend_0 |3931.563 | |
> > | |multifdsend_1 |4405.273 | |
> > | |multifdsend_2 |3941.968 | |
> > | |multifdsend_3 |5032.975 | |
> > | |multifdsend_4 |4533.865 | |
> > | |multifdsend_5 |4530.461 | |
> > | |multifdsend_6 |5171.916 | |
> > | |multifdsend_7 |4722.769 |41922 |
> > |---------------|---------------|---------------|---------------|
> > |DSA |live_migration |6129.168 | |
> > | |multifdsend_0 |2954.717 | |
> > | |multifdsend_1 |2766.359 | |
> > | |multifdsend_2 |2853.519 | |
> > | |multifdsend_3 |2740.717 | |
> > | |multifdsend_4 |2824.169 | |
> > | |multifdsend_5 |2966.908 | |
> > | |multifdsend_6 |2611.137 | |
> > | |multifdsend_7 |3114.732 | |
> > | |dsa_completion |3612.564 |32568 |
> > |---------------|---------------|---------------|---------------|
> >
> > Baseline total runtime is calculated by adding up the runtime of all
> > multifdsend_X threads and the live_migration thread. DSA offloading
> > total runtime is calculated by adding up the runtime of all
> > multifdsend_X threads, the live_migration thread, and the
> > dsa_completion thread. 41922 msec vs 32568 msec runtime: that is a
> > 22% saving in total CPU usage.
> >
> > Latency
> > |---------------|---------------|---------------|---------------|---------------|---------------|
> > | |total time |down time |throughput |transferred-ram|total-ram |
> > |---------------|---------------|---------------|---------------|---------------|---------------|
> > |Baseline |10343 ms |161 ms |41007.00 mbps |51583797 kb |102400520 kb |
> > |---------------|---------------|---------------|---------------|---------------|---------------|
> > |DSA offload |9535 ms |135 ms |46554.40 mbps |53947545 kb |102400520 kb |
> > |---------------|---------------|---------------|---------------|---------------|---------------|
> >
> > Total time is 8% faster and down time is 16% faster.
> >
> > B) Scenario 2: 100% (100GB) zero pages on a 100GB VM.
> >
> > CPU usage
> > |---------------|---------------|---------------|---------------|
> > | |comm |runtime(msec) |totaltime(msec)|
> > |---------------|---------------|---------------|---------------|
> > |Baseline |live_migration |4860.718 | |
> > | |multifdsend_0 |748.875 | |
> > | |multifdsend_1 |898.498 | |
> > | |multifdsend_2 |787.456 | |
> > | |multifdsend_3 |764.537 | |
> > | |multifdsend_4 |785.687 | |
> > | |multifdsend_5 |756.941 | |
> > | |multifdsend_6 |774.084 | |
> > | |multifdsend_7 |782.900 |11154 |
> > |---------------|---------------|---------------|---------------|
> > |DSA offloading |live_migration |3846.976 | |
> > | |multifdsend_0 |191.880 | |
> > | |multifdsend_1 |166.331 | |
> > | |multifdsend_2 |168.528 | |
> > | |multifdsend_3 |197.831 | |
> > | |multifdsend_4 |169.580 | |
> > | |multifdsend_5 |167.984 | |
> > | |multifdsend_6 |198.042 | |
> > | |multifdsend_7 |170.624 | |
> > | |dsa_completion |3428.669 |8700 |
> > |---------------|---------------|---------------|---------------|
> >
> > Baseline total runtime is 11154 msec and DSA offloading total runtime is
> > 8700 msec. That is 22% CPU savings.
> >
> > Latency
> > |---------------|---------------|---------------|---------------|---------------|------------|
> > | |total time |down time |throughput |transferred-ram|total-ram |
> > |---------------|---------------|---------------|---------------|---------------|------------|
> > |Baseline |4867 ms |20 ms |1.51 mbps |565 kb |102400520 kb|
> > |---------------|---------------|---------------|---------------|---------------|------------|
> > |DSA offload |3888 ms |18 ms |1.89 mbps |565 kb |102400520 kb|
> > |---------------|---------------|---------------|---------------|---------------|------------|
> >
> > Total time is 20% faster and down time is 10% faster.
> >
> > * Testing:
> >
> > 1. Added unit tests to cover the added code paths in dsa.c.
> > 2. Added integration tests to cover multifd live migration using DSA
> > offloading.
> >
> > Hao Xiang (10):
> > meson: Introduce new instruction set enqcmd to the build system.
> > util/dsa: Implement DSA device start and stop logic.
> > util/dsa: Implement DSA task enqueue and dequeue.
> > util/dsa: Implement DSA task asynchronous completion thread model.
> > util/dsa: Implement zero page checking in DSA task.
> > util/dsa: Implement DSA task asynchronous submission and wait for
> > completion.
> > migration/multifd: Add new migration option for multifd DSA
> > offloading.
> > migration/multifd: Enable DSA offloading in multifd sender path.
> > util/dsa: Add unit test coverage for Intel DSA task submission and
> > completion.
> > migration/multifd: Add integration tests for multifd with Intel DSA
> > offloading.
> >
> > Yichen Wang (1):
> > util/dsa: Add idxd into linux header copy list.
> >
> > Yuan Liu (1):
> > migration/doc: Add DSA zero page detection doc
> >
> > .../migration/dsa-zero-page-detection.rst | 290 +++++
> > docs/devel/migration/features.rst | 1 +
> > hmp-commands.hx | 2 +-
> > include/qemu/dsa.h | 188 +++
> > meson.build | 14 +
> > meson_options.txt | 2 +
> > migration/migration-hmp-cmds.c | 19 +-
> > migration/multifd-zero-page.c | 129 +-
> > migration/multifd.c | 29 +-
> > migration/multifd.h | 5 +
> > migration/options.c | 30 +
> > migration/options.h | 1 +
> > qapi/migration.json | 32 +-
> > scripts/meson-buildoptions.sh | 3 +
> > scripts/update-linux-headers.sh | 2 +-
> > tests/qtest/migration-test.c | 80 +-
> > tests/unit/meson.build | 6 +
> > tests/unit/test-dsa.c | 503 ++++++++
> > util/dsa.c | 1112 +++++++++++++++++
> > util/meson.build | 3 +
> > 20 files changed, 2427 insertions(+), 24 deletions(-)
> > create mode 100644 docs/devel/migration/dsa-zero-page-detection.rst
> > create mode 100644 include/qemu/dsa.h
> > create mode 100644 tests/unit/test-dsa.c
> > create mode 100644 util/dsa.c
>
> Hi, take a look at make check, there are some tests failing.
>
> Summary of Failures:
>
> 16/474 qemu:qtest+qtest-x86_64 / qtest-x86_64/test-hmp ERROR 0.86s killed by signal 6 SIGABRT
> 18/474 qemu:qtest+qtest-ppc64 / qtest-ppc64/test-hmp ERROR 0.93s killed by signal 6 SIGABRT
> 20/474 qemu:qtest+qtest-aarch64 / qtest-aarch64/test-hmp ERROR 1.30s killed by signal 6 SIGABRT
> 21/474 qemu:qtest+qtest-s390x / qtest-s390x/test-hmp ERROR 0.76s killed by signal 6 SIGABRT
> 22/474 qemu:qtest+qtest-riscv64 / qtest-riscv64/test-hmp ERROR 0.60s killed by signal 6 SIGABRT
>
> Looks like a double-free due to glib autofree. Here's one sample:
>
> #0 __GI_abort () at abort.c:49
> #1 0x00007ffff5899c87 in __libc_message (action=do_abort, fmt=0x7ffff59c3138 "%s\n") at ../sysdeps/posix/libc_fatal.c:155
> #2 0x00007ffff58a1d2a in malloc_printerr (str=0x7ffff59c0e0e "free(): invalid pointer") at malloc.c:5347
> #3 0x00007ffff58a37d4 in _int_free (av=<optimized out>, p=<optimized out>, have_lock=0) at malloc.c:4173
> #4 0x00007ffff78c5639 in g_free (mem=0x5555561200f1 <qemu_mutex_unlock_impl+96>) at ../glib/gmem.c:199
> #5 0x0000555555bdd527 in g_autoptr_cleanup_generic_gfree (p=0x7fffffffd568) at /usr/include/glib-2.0/glib/glib-autocleanups.h:28
> #6 0x0000555555bdfabc in hmp_migrate_set_parameter (mon=0x7fffffffd6f0, qdict=0x555558554560) at ../migration/migration-hmp-cmds.c:577
> #7 0x0000555555c1a231 in handle_hmp_command_exec (mon=0x7fffffffd6f0, cmd=0x5555571e7450 <hmp_cmds+4560>, qdict=0x555558554560) at ../monitor/hmp.c:1106
> #8 0x0000555555c1a470 in handle_hmp_command (mon=0x7fffffffd6f0, cmdline=0x5555577ec2f6 "xbzrle-cache-size 64k") at ../monitor/hmp.c:1158
> #9 0x0000555555c1c40e in qmp_human_monitor_command (command_line=0x5555577ec2e0 "migrate_set_parameter xbzrle-cache-size 64k", has_cpu_index=false, cpu_index=0, errp=0x7fffffffd800)
> at ../monitor/qmp-cmds.c:181
> #10 0x00005555560c7eb6 in qmp_marshal_human_monitor_command (args=0x7fffe000ac00, ret=0x7ffff4d25da8, errp=0x7ffff4d25da0) at qapi/qapi-commands-misc.c:347
> #11 0x000055555610e7a4 in do_qmp_dispatch_bh (opaque=0x7ffff4d25e40) at ../qapi/qmp-dispatch.c:128
> #12 0x000055555613a1b9 in aio_bh_call (bh=0x7fffe0004050) at ../util/async.c:172
> #13 0x000055555613a2d5 in aio_bh_poll (ctx=0x5555573df400) at ../util/async.c:219
> #14 0x000055555611b8cd in aio_dispatch (ctx=0x5555573df400) at ../util/aio-posix.c:424
> #15 0x000055555613a712 in aio_ctx_dispatch (source=0x5555573df400, callback=0x0, user_data=0x0) at ../util/async.c:361
> #16 0x00007ffff78bf82b in g_main_dispatch (context=0x5555573e3440) at ../glib/gmain.c:3381
> #17 g_main_context_dispatch (context=0x5555573e3440) at ../glib/gmain.c:4099
> #18 0x000055555613bdae in glib_pollfds_poll () at ../util/main-loop.c:287
> #19 0x000055555613be28 in os_host_main_loop_wait (timeout=0) at ../util/main-loop.c:310
> #20 0x000055555613bf2d in main_loop_wait (nonblocking=0) at ../util/main-loop.c:589
> #21 0x0000555555bb455c in qemu_main_loop () at ../system/runstate.c:835
> #22 0x00005555560594d1 in qemu_default_main () at ../system/main.c:37
> #23 0x000055555605950c in main (argc=18, argv=0x7fffffffdc18) at ../system/main.c:48
Fixed. Interesting that our g_auto(GStrv) macro won't work in
switch-case statements. I switched back to the original char ** with
g_strfreev() to get it working.
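
For reference, here is a minimal reduction of the pitfall (illustrative
code, not the actual migration-hmp-cmds.c change): a g_auto variable
declared under a case label without braces stays in scope for the later
labels, and jumping to one of them bypasses the initialization, so the
automatic cleanup at scope exit frees indeterminate stack contents,
which matches the invalid free() in the backtrace above.

    #include <glib.h>

    /* Buggy pattern: cleanup can run on an uninitialized pointer. */
    static int count_fields(int kind, const char *s)
    {
        switch (kind) {
        case 0: ;
            g_auto(GStrv) strv = g_strsplit(s, ",", -1);
            return g_strv_length(strv);
        case 1:
            /* 'strv' is in scope here but never initialized; its
             * g_strfreev() cleanup still runs when we return. */
            return 0;
        }
        return -1;
    }

    /* The fix: explicit ownership inside a braced case scope. */
    static int count_fields_fixed(int kind, const char *s)
    {
        switch (kind) {
        case 0: {
            char **strv = g_strsplit(s, ",", -1);
            int n = g_strv_length(strv);
            g_strfreev(strv);
            return n;
        }
        case 1:
            return 0;
        }
        return -1;
    }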