* [PATCH v2 1/2] qtest/migration/rdma: Enforce RLIMIT_MEMLOCK >= 128MB requirement
@ 2025-05-09 1:42 Li Zhijian via
2025-05-09 1:42 ` [PATCH v2 2/2] qtest/migration/rdma: Add test for rdma migration with ipv6 Li Zhijian via
2025-05-09 15:33 ` [PATCH v2 1/2] qtest/migration/rdma: Enforce RLIMIT_MEMLOCK >= 128MB requirement Peter Xu
0 siblings, 2 replies; 5+ messages in thread
From: Li Zhijian via @ 2025-05-09 1:42 UTC (permalink / raw)
To: Peter Xu, Fabiano Rosas, qemu-devel
Cc: Laurent Vivier, Paolo Bonzini, Li Zhijian
Ensure successful migration over RDMA by verifying that RLIMIT_MEMLOCK is
set to at least 128MB. This limit is necessary because the test needs to pin
significant portions of guest memory, typically exceeding 100MB, while the
remainder is transmitted as compressed zero pages.
Otherwise, it will fail with:
stderr:
qemu-system-x86_64: cannot get rkey
qemu-system-x86_64: error while loading state section id 2(ram)
qemu-system-x86_64: load of migration failed: Operation not permitted
qemu-system-x86_64: rdma migration: recv polling control error!
qemu-system-x86_64: RDMA is in an error state waiting migration to abort!
qemu-system-x86_64: failed to save SaveStateEntry with id(name): 2(ram): -1
qemu-system-x86_64: Channel error: Operation not permitted
Reported-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
---
tests/qtest/migration/precopy-tests.c | 34 +++++++++++++++++++++++++++
1 file changed, 34 insertions(+)
diff --git a/tests/qtest/migration/precopy-tests.c b/tests/qtest/migration/precopy-tests.c
index 02465c20ae..4e32e61053 100644
--- a/tests/qtest/migration/precopy-tests.c
+++ b/tests/qtest/migration/precopy-tests.c
@@ -101,6 +101,35 @@ static void test_precopy_unix_dirty_ring(void)
#ifdef CONFIG_RDMA
+#include <sys/resource.h>
+
+/*
+ * During migration over RDMA, it will try to pin portions of guest memory,
+ * typically exceeding 100MB in this test, while the remainder will be
+ * transmitted as compressed zero pages.
+ *
+ * REQUIRED_MEMLOCK_SZ indicates the minimal mlock size in the current context.
+ */
+#define REQUIRED_MEMLOCK_SZ (128 << 20) /* 128MB */
+
+/* check 'ulimit -l' */
+static bool mlock_check(void)
+{
+ uid_t uid;
+ struct rlimit rlim;
+
+ uid = getuid();
+ if (uid == 0) {
+ return true;
+ }
+
+ if (getrlimit(RLIMIT_MEMLOCK, &rlim) != 0) {
+ return false;
+ }
+
+ return rlim.rlim_cur >= REQUIRED_MEMLOCK_SZ;
+}
+
#define RDMA_MIGRATION_HELPER "scripts/rdma-migration-helper.sh"
static int new_rdma_link(char *buffer)
{
@@ -136,6 +165,11 @@ static void test_precopy_rdma_plain(void)
{
char buffer[128] = {};
+ if (!mlock_check()) {
+ g_test_skip("'ulimit -l' is too small, require >=128M");
+ return;
+ }
+
if (new_rdma_link(buffer)) {
g_test_skip("No rdma link available\n"
"# To enable the test:\n"
--
2.41.0
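For anyone who hits the new skip locally, the limit the test checks can be
inspected from a shell before running migration-test. A minimal sketch (on
Linux, 'ulimit -l' reports the limit in KiB; the 128MB threshold matches
REQUIRED_MEMLOCK_SZ above):

```shell
# REQUIRED_MEMLOCK_SZ (128MB) expressed in KiB, the unit 'ulimit -l' uses.
required_kb=$((128 * 1024))

current=$(ulimit -l)
if [ "$current" = "unlimited" ] || [ "$current" -ge "$required_kb" ]; then
    echo "memlock limit is sufficient: $current"
else
    # Raising the hard limit needs privileges (see /etc/security/limits.conf);
    # 'ulimit -l' here only changes the soft limit for the current shell.
    echo "memlock limit too small ($current KiB); try: ulimit -l $required_kb"
fi
```

Note the test itself skips this check for root (uid 0), since root is
typically not subject to the memlock limit.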
^ permalink raw reply related [flat|nested] 5+ messages in thread
* [PATCH v2 2/2] qtest/migration/rdma: Add test for rdma migration with ipv6
2025-05-09 1:42 [PATCH v2 1/2] qtest/migration/rdma: Enforce RLIMIT_MEMLOCK >= 128MB requirement Li Zhijian via
@ 2025-05-09 1:42 ` Li Zhijian via
2025-05-09 15:32 ` Peter Xu
2025-05-09 15:33 ` [PATCH v2 1/2] qtest/migration/rdma: Enforce RLIMIT_MEMLOCK >= 128MB requirement Peter Xu
1 sibling, 1 reply; 5+ messages in thread
From: Li Zhijian via @ 2025-05-09 1:42 UTC (permalink / raw)
To: Peter Xu, Fabiano Rosas, qemu-devel
Cc: Laurent Vivier, Paolo Bonzini, Li Zhijian, Jack Wang,
Michael R . Galaxy, Yu Zhang
Recently, we removed the ipv6 restriction[0] from RDMA migration; add a
test for it.
[0] https://lore.kernel.org/qemu-devel/20250326095224.9918-1-jinpu.wang@ionos.com/
Cc: Jack Wang <jinpu.wang@ionos.com>
Cc: Michael R. Galaxy <mrgalaxy@nvidia.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Yu Zhang <yu.zhang@ionos.com>
Reviewed-by: Jack Wang <jinpu.wang@ionos.com>
Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
---
V2:
- Collect Reviewed-by
- quote the whole string to adapt to newer bash # Fedora 40+
---
scripts/rdma-migration-helper.sh | 26 +++++++++++++++++++++++---
tests/qtest/migration/precopy-tests.c | 21 +++++++++++++++++----
2 files changed, 40 insertions(+), 7 deletions(-)
diff --git a/scripts/rdma-migration-helper.sh b/scripts/rdma-migration-helper.sh
index a39f2fb0e5..9fb7a12274 100755
--- a/scripts/rdma-migration-helper.sh
+++ b/scripts/rdma-migration-helper.sh
@@ -8,6 +8,15 @@ get_ipv4_addr()
head -1 | tr -d '\n'
}
+get_ipv6_addr() {
+ ipv6=$(ip -6 -o addr show dev "$1" |
+ sed -n 's/.*[[:blank:]]inet6[[:blank:]]*\([^[:blank:]/]*\).*/\1/p' |
+ head -1 | tr -d '\n')
+
+ [ -n "$ipv6" ] || return
+ echo -n "[$ipv6%$1]"
+}
+
# existing rdma interfaces
rdma_interfaces()
{
@@ -20,11 +29,16 @@ ipv4_interfaces()
ip -o addr show | awk '/inet / {print $2}' | grep -v -w lo
}
+ipv6_interfaces()
+{
+ ip -o addr show | awk '/inet6 / {print $2}' | sort -u | grep -v -w lo
+}
+
rdma_rxe_detect()
{
for r in $(rdma_interfaces)
do
- ipv4_interfaces | grep -qw $r && get_ipv4_addr $r && return
+ "$IP_FAMILY"_interfaces | grep -qw $r && get_"$IP_FAMILY"_addr $r && return
done
return 1
@@ -32,11 +46,11 @@ rdma_rxe_detect()
rdma_rxe_setup()
{
- for i in $(ipv4_interfaces)
+ for i in $("$IP_FAMILY"_interfaces)
do
rdma_interfaces | grep -qw $i && continue
rdma link add "${i}_rxe" type rxe netdev "$i" && {
- echo "Setup new rdma/rxe ${i}_rxe for $i with $(get_ipv4_addr $i)"
+ echo "Setup new rdma/rxe ${i}_rxe for $i with $(get_"$IP_FAMILY"_addr $i)"
return
}
done
@@ -50,6 +64,12 @@ rdma_rxe_clean()
modprobe -r rdma_rxe
}
+IP_FAMILY=${IP_FAMILY:-ipv4}
+if [ "$IP_FAMILY" != "ipv6" ] && [ "$IP_FAMILY" != "ipv4" ]; then
+ echo "Unknown ip family '$IP_FAMILY', only ipv4 or ipv6 is supported." >&2
+ exit 1
+fi
+
operation=${1:-detect}
command -v rdma >/dev/null || {
diff --git a/tests/qtest/migration/precopy-tests.c b/tests/qtest/migration/precopy-tests.c
index 4e32e61053..fb80c83967 100644
--- a/tests/qtest/migration/precopy-tests.c
+++ b/tests/qtest/migration/precopy-tests.c
@@ -131,12 +131,13 @@ static bool mlock_check(void)
}
#define RDMA_MIGRATION_HELPER "scripts/rdma-migration-helper.sh"
-static int new_rdma_link(char *buffer)
+static int new_rdma_link(char *buffer, bool ipv6)
{
char cmd[256];
bool verbose = g_getenv("QTEST_LOG");
- snprintf(cmd, sizeof(cmd), "%s detect %s", RDMA_MIGRATION_HELPER,
+ snprintf(cmd, sizeof(cmd), "IP_FAMILY=%s %s detect %s",
+ ipv6 ? "ipv6" : "ipv4", RDMA_MIGRATION_HELPER,
verbose ? "" : "2>/dev/null");
FILE *pipe = popen(cmd, "r");
@@ -161,7 +162,7 @@ static int new_rdma_link(char *buffer)
return -1;
}
-static void test_precopy_rdma_plain(void)
+static void __test_precopy_rdma_plain(bool ipv6)
{
char buffer[128] = {};
@@ -170,7 +171,7 @@ static void test_precopy_rdma_plain(void)
return;
}
- if (new_rdma_link(buffer)) {
+ if (new_rdma_link(buffer, ipv6)) {
g_test_skip("No rdma link available\n"
"# To enable the test:\n"
"# Run \'" RDMA_MIGRATION_HELPER " setup\' with root to "
@@ -193,6 +194,16 @@ static void test_precopy_rdma_plain(void)
test_precopy_common(&args);
}
+
+static void test_precopy_rdma_plain(void)
+{
+ __test_precopy_rdma_plain(0);
+}
+
+static void test_precopy_rdma_plain_ipv6(void)
+{
+ __test_precopy_rdma_plain(1);
+}
#endif
static void test_precopy_tcp_plain(void)
@@ -1226,6 +1237,8 @@ static void migration_test_add_precopy_smoke(MigrationTestEnv *env)
#ifdef CONFIG_RDMA
migration_test_add("/migration/precopy/rdma/plain",
test_precopy_rdma_plain);
+ migration_test_add("/migration/precopy/rdma/plain/ipv6",
+ test_precopy_rdma_plain_ipv6);
#endif
}
--
2.41.0
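The address shape the test ends up passing to -incoming rdma:... can be
sketched in isolation. format_rdma_ipv6 below is a hypothetical helper name,
but the output matches what get_ipv6_addr() emits with
echo -n "[$ipv6%$1]": the IPv6 address in brackets with a %<ifname> zone
suffix, which scoped (e.g. link-local) addresses require:

```shell
# Wrap an IPv6 address in brackets with a %<ifname> zone suffix, the form
# get_ipv6_addr() produces for the migration URI.
format_rdma_ipv6() {
    ipv6=$1
    dev=$2
    printf '[%s%%%s]' "$ipv6" "$dev"
}

format_rdma_ipv6 fe80::1 eth0; echo   # → [fe80::1%eth0]
```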
* Re: [PATCH v2 2/2] qtest/migration/rdma: Add test for rdma migration with ipv6
2025-05-09 1:42 ` [PATCH v2 2/2] qtest/migration/rdma: Add test for rdma migration with ipv6 Li Zhijian via
@ 2025-05-09 15:32 ` Peter Xu
2025-05-12 5:49 ` Zhijian Li (Fujitsu) via
0 siblings, 1 reply; 5+ messages in thread
From: Peter Xu @ 2025-05-09 15:32 UTC (permalink / raw)
To: Li Zhijian
Cc: Fabiano Rosas, qemu-devel, Laurent Vivier, Paolo Bonzini,
Jack Wang, Michael R . Galaxy, Yu Zhang
On Fri, May 09, 2025 at 09:42:11AM +0800, Li Zhijian wrote:
> Recently, we removed the ipv6 restriction[0] from RDMA migration; add a
> test for it.
>
> [0] https://lore.kernel.org/qemu-devel/20250326095224.9918-1-jinpu.wang@ionos.com/
>
> Cc: Jack Wang <jinpu.wang@ionos.com>
> Cc: Michael R. Galaxy <mrgalaxy@nvidia.com>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Yu Zhang <yu.zhang@ionos.com>
> Reviewed-by: Jack Wang <jinpu.wang@ionos.com>
> Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
> ---
>
> V2:
> - Collect Reviewed-by
> - quote the whole string to adapt to newer bash # Fedora 40+
> ---
> scripts/rdma-migration-helper.sh | 26 +++++++++++++++++++++++---
> tests/qtest/migration/precopy-tests.c | 21 +++++++++++++++++----
> 2 files changed, 40 insertions(+), 7 deletions(-)
>
> diff --git a/scripts/rdma-migration-helper.sh b/scripts/rdma-migration-helper.sh
> index a39f2fb0e5..9fb7a12274 100755
> --- a/scripts/rdma-migration-helper.sh
> +++ b/scripts/rdma-migration-helper.sh
> @@ -8,6 +8,15 @@ get_ipv4_addr()
> head -1 | tr -d '\n'
> }
>
> +get_ipv6_addr() {
> + ipv6=$(ip -6 -o addr show dev "$1" |
> + sed -n 's/.*[[:blank:]]inet6[[:blank:]]*\([^[:blank:]/]*\).*/\1/p' |
> + head -1 | tr -d '\n')
> +
> + [ $? -eq 0 ] || return
> + echo -n "[$ipv6%$1]"
> +}
> +
> # existing rdma interfaces
> rdma_interfaces()
> {
> @@ -20,11 +29,16 @@ ipv4_interfaces()
> ip -o addr show | awk '/inet / {print $2}' | grep -v -w lo
> }
>
> +ipv6_interfaces()
> +{
> + ip -o addr show | awk '/inet6 / {print $2}' | sort -u | grep -v -w lo
> +}
> +
> rdma_rxe_detect()
> {
> for r in $(rdma_interfaces)
> do
> - ipv4_interfaces | grep -qw $r && get_ipv4_addr $r && return
> + "$IP_FAMILY"_interfaces | grep -qw $r && get_"$IP_FAMILY"_addr $r && return
> done
>
> return 1
> @@ -32,11 +46,11 @@ rdma_rxe_detect()
>
> rdma_rxe_setup()
> {
> - for i in $(ipv4_interfaces)
> + for i in $("$IP_FAMILY"_interfaces)
> do
> rdma_interfaces | grep -qw $i && continue
> rdma link add "${i}_rxe" type rxe netdev "$i" && {
> - echo "Setup new rdma/rxe ${i}_rxe for $i with $(get_ipv4_addr $i)"
> + echo "Setup new rdma/rxe ${i}_rxe for $i with $(get_"$IP_FAMILY"_addr $i)"
> return
> }
> done
> @@ -50,6 +64,12 @@ rdma_rxe_clean()
> modprobe -r rdma_rxe
> }
>
> +IP_FAMILY=${IP_FAMILY:-ipv4}
Does this mean I'll need to set up twice, once for each IP version?
Even if so, I did this:
===8<===
$ sudo ../scripts/rdma-migration-helper.sh setup
Setup new rdma/rxe wlp0s20f3_rxe for wlp0s20f3 with 192.168.68.123
$ sudo IP_FAMILY=ipv6 ../scripts/rdma-migration-helper.sh setup
Setup new rdma/rxe tun0_rxe for tun0 with [fd10:22:88:1::110c%tun0]
$ rdma link
link wlp0s20f3_rxe/1 state ACTIVE physical_state LINK_UP netdev wlp0s20f3
link tun0_rxe/1 state ACTIVE physical_state LINK_UP netdev tun0
===8<===
And it still fails..
===8<===
$ sudo QTEST_QEMU_BINARY=./qemu-system-x86_64 ./tests/qtest/migration-test -p /x86_64/migration/precopy/rdma/plain
TAP version 14
# random seed: R02S778a51bb3555664ae9449bf4fb9e3730
# starting QEMU: exec ./qemu-system-x86_64 -qtest unix:/tmp/qtest-286985.sock -qtest-log /dev/null -chardev socket,path=/tmp/qtest-286985.qmp,id=char0 -mon chardev=char0,mode=control -display none -audio none -machine none -accel qtest
# Start of x86_64 tests
# Start of migration tests
# Start of precopy tests
# Start of rdma tests
# Running /x86_64/migration/precopy/rdma/plain
# Using machine type: pc-q35-10.1
# starting QEMU: exec ./qemu-system-x86_64 -qtest unix:/tmp/qtest-286985.sock -qtest-log /dev/null -chardev socket,path=/tmp/qtest-286985.qmp,id=char0 -mon chardev=char0,mode=control -display none -audio none -accel kvm -accel tcg -machine pc-q35-10.1, -name source,debug-threads=on -m 150M -serial file:/tmp/migration-test-QEB452/src_serial -drive if=none,id=d0,file=/tmp/migration-test-QEB452/bootsect,format=raw -device ide-hd,drive=d0,secs=1,cyls=1,heads=1 -accel qtest
# starting QEMU: exec ./qemu-system-x86_64 -qtest unix:/tmp/qtest-286985.sock -qtest-log /dev/null -chardev socket,path=/tmp/qtest-286985.qmp,id=char0 -mon chardev=char0,mode=control -display none -audio none -accel kvm -accel tcg -machine pc-q35-10.1, -name target,debug-threads=on -m 150M -serial file:/tmp/migration-test-QEB452/dest_serial -incoming rdma:192.168.68.123:29200 -drive if=none,id=d0,file=/tmp/migration-test-QEB452/bootsect,format=raw -device ide-hd,drive=d0,secs=1,cyls=1,heads=1 -accel qtest
ok 1 /x86_64/migration/precopy/rdma/plain
# slow test /x86_64/migration/precopy/rdma/plain executed in 1.46 secs
# Start of plain tests
# Running /x86_64/migration/precopy/rdma/plain/ipv6
# Using machine type: pc-q35-10.1
# starting QEMU: exec ./qemu-system-x86_64 -qtest unix:/tmp/qtest-286985.sock -qtest-log /dev/null -chardev socket,path=/tmp/qtest-286985.qmp,id=char0 -mon chardev=char0,mode=control -display none -audio none -accel kvm -accel tcg -machine pc-q35-10.1, -name source,debug-threads=on -m 150M -serial file:/tmp/migration-test-QEB452/src_serial -drive if=none,id=d0,file=/tmp/migration-test-QEB452/bootsect,format=raw -device ide-hd,drive=d0,secs=1,cyls=1,heads=1 -accel qtest
# starting QEMU: exec ./qemu-system-x86_64 -qtest unix:/tmp/qtest-286985.sock -qtest-log /dev/null -chardev socket,path=/tmp/qtest-286985.qmp,id=char0 -mon chardev=char0,mode=control -display none -audio none -accel kvm -accel tcg -machine pc-q35-10.1, -name target,debug-threads=on -m 150M -serial file:/tmp/migration-test-QEB452/dest_serial -incoming rdma:[fdd3:4fdc:97c9:ca4e:2837:28dd:1ec4:6b5a%wlp0s20f3]:29200 -drive if=none,id=d0,file=/tmp/migration-test-QEB452/bootsect,format=raw -device ide-hd,drive=d0,secs=1,cyls=1,heads=1 -accel qtest
qemu-system-x86_64: -incoming rdma:[fdd3:4fdc:97c9:ca4e:2837:28dd:1ec4:6b5a%wlp0s20f3]:29200: RDMA ERROR: could not rdma_getaddrinfo address fdd3:4fdc:97c9:ca4e:2837:28dd:1ec4:6b5a%wlp0s20f3
Broken pipe
../tests/qtest/libqtest.c:199: kill_qemu() tried to terminate QEMU process but encountered exit status 1 (expected 0)
Aborted
===8<===
It would be great if the setup only needed to be run once, setting up
whatever ipv* is supported, and then the test run would kick off whatever
ipv* is supported and detected.
Would that be possible?
Thanks,
--
Peter Xu
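A combined detection pass along the lines requested here might look roughly
like the following; this is only a sketch, since the helper as posted takes a
single IP_FAMILY per invocation:

```shell
# Try both address families with the existing helper and report what each
# detects; an empty result is shown as <none>.
for family in ipv4 ipv6; do
    addr=$(IP_FAMILY=$family ./scripts/rdma-migration-helper.sh detect 2>/dev/null)
    echo "$family: ${addr:-<none>}"
done
```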
* Re: [PATCH v2 1/2] qtest/migration/rdma: Enforce RLIMIT_MEMLOCK >= 128MB requirement
2025-05-09 1:42 [PATCH v2 1/2] qtest/migration/rdma: Enforce RLIMIT_MEMLOCK >= 128MB requirement Li Zhijian via
2025-05-09 1:42 ` [PATCH v2 2/2] qtest/migration/rdma: Add test for rdma migration with ipv6 Li Zhijian via
@ 2025-05-09 15:33 ` Peter Xu
1 sibling, 0 replies; 5+ messages in thread
From: Peter Xu @ 2025-05-09 15:33 UTC (permalink / raw)
To: Li Zhijian; +Cc: Fabiano Rosas, qemu-devel, Laurent Vivier, Paolo Bonzini
On Fri, May 09, 2025 at 09:42:10AM +0800, Li Zhijian wrote:
> Ensure successful migration over RDMA by verifying that RLIMIT_MEMLOCK is
> set to at least 128MB. This allocation is necessary due to the requirement
> to pin significant portions of guest memory, typically exceeding 100MB
> in this test, while the remainder is transmitted as compressed zero pages.
>
> Otherwise, it will fail with:
> stderr:
> qemu-system-x86_64: cannot get rkey
> qemu-system-x86_64: error while loading state section id 2(ram)
> qemu-system-x86_64: load of migration failed: Operation not permitted
> qemu-system-x86_64: rdma migration: recv polling control error!
> qemu-system-x86_64: RDMA is in an error state waiting migration to abort!
> qemu-system-x86_64: failed to save SaveStateEntry with id(name): 2(ram): -1
> qemu-system-x86_64: Channel error: Operation not permitted
>
> Reported-by: Peter Xu <peterx@redhat.com>
> Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
Thanks, this works for me.
I'll queue this one first separately.
--
Peter Xu
* Re: [PATCH v2 2/2] qtest/migration/rdma: Add test for rdma migration with ipv6
2025-05-09 15:32 ` Peter Xu
@ 2025-05-12 5:49 ` Zhijian Li (Fujitsu) via
0 siblings, 0 replies; 5+ messages in thread
From: Zhijian Li (Fujitsu) via @ 2025-05-12 5:49 UTC (permalink / raw)
To: Peter Xu
Cc: Fabiano Rosas, qemu-devel@nongnu.org, Laurent Vivier,
Paolo Bonzini, Jack Wang, Michael R . Galaxy, Yu Zhang
On 09/05/2025 23:32, Peter Xu wrote:
> Does this mean I'll need to setup twice, one for each v?
>
> Even if so, I did this:
>
> ===8<===
> $ sudo ../scripts/rdma-migration-helper.sh setup
> Setup new rdma/rxe wlp0s20f3_rxe for wlp0s20f3 with 192.168.68.123
> $ sudo IP_FAMILY=ipv6 ../scripts/rdma-migration-helper.sh setup
> Setup new rdma/rxe tun0_rxe for tun0 with [fd10:22:88:1::110c%tun0]
> $ rdma link
> link wlp0s20f3_rxe/1 state ACTIVE physical_state LINK_UP netdev
> wlp0s20f3
> link tun0_rxe/1 state ACTIVE physical_state LINK_UP netdev tun0
> ===8<===
That's because lo/tun/tap are not valid RXE devices for migration; I will
update the script to ignore them.
> rdma:[fdd3:4fdc:97c9:ca4e:2837:28dd:1ec4:6b5a%wlp0s20f3]:29200 -drive
> if=none,id=d0,file=/tmp/migration-test-QEB452/bootsect,format=raw
> -device ide-hd,drive=d0,secs=1,cyls=1,heads=1 -accel qtest
> qemu-system-x86_64: -incoming
> rdma:[fdd3:4fdc:97c9:ca4e:2837:28dd:1ec4:6b5a%wlp0s20f3]:29200: RDMA
> ERROR: could not rdma_getaddrinfo address
> fdd3:4fdc:97c9:ca4e:2837:28dd:1ec4:6b5a%wlp0s20f3
> Broken pipe
> ../tests/qtest/libqtest.c:199: kill_qemu() tried to terminate QEMU
> process but encountered exit status 1 (expected 0)
> Aborted
> ===8<===
>
> It would be great if the setup only needs to be run once, setting up
> whatever ipv* supported, then in the test run kickoff whatever ipv* is
> supported and detected.
>
> Would it be possible?
Sounds good to me; I will update it in the next version.
Thanks
Zhijian
>
> Thanks,
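The interface filtering proposed above could look roughly like this in the
helper script; the function name and exact patterns are assumptions, not the
final change:

```shell
# Drop loopback and tunnel devices (lo, tunN, tapN), which cannot back a
# usable rdma/rxe link for migration; keep everything else.
filter_rxe_candidates() {
    grep -Evx 'lo|tun[0-9]*|tap[0-9]*'
}

printf 'eth0\nlo\ntun0\ntap1\nwlp0s20f3\n' | filter_rxe_candidates
# → eth0
# → wlp0s20f3
```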