ocfs2-devel.oss.oracle.com archive mirror
* [syzbot] [ocfs2?] possible deadlock in ocfs2_read_folio
@ 2025-02-13 14:33 syzbot
  2025-05-28 14:48 ` syzbot
  0 siblings, 1 reply; 2+ messages in thread
From: syzbot @ 2025-02-13 14:33 UTC (permalink / raw)
  To: jlbec, joseph.qi, linux-kernel, mark, ocfs2-devel, syzkaller-bugs

Hello,

syzbot found the following issue on:

HEAD commit:    9946eaf552b1 Merge tag 'hardening-v6.14-rc2' of git://git...
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=165382a4580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=147b7d49d83b8036
dashboard link: https://syzkaller.appspot.com/bug?extid=bd316bb736c7dc2f318e
compiler:       Debian clang version 15.0.6, GNU ld (GNU Binutils for Debian) 2.40

Unfortunately, I don't have any reproducer for this issue yet.

Downloadable assets:
disk image: https://storage.googleapis.com/syzbot-assets/eac613b48ce8/disk-9946eaf5.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/a9e51f9c7777/vmlinux-9946eaf5.xz
kernel image: https://storage.googleapis.com/syzbot-assets/96f75428ab6a/bzImage-9946eaf5.xz

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+bd316bb736c7dc2f318e@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.14.0-rc1-syzkaller-00235-g9946eaf552b1 #0 Not tainted
------------------------------------------------------
syz.2.361/9103 is trying to acquire lock:
ffff88807fd822e0 (&ocfs2_file_ip_alloc_sem_key){++++}-{4:4}, at: ocfs2_read_folio+0x36a/0x980 fs/ocfs2/aops.c:294

but task is already holding lock:
ffff88807fd827e0 (mapping.invalidate_lock#13){.+.+}-{4:4}, at: filemap_invalidate_lock_shared include/linux/fs.h:932 [inline]
ffff88807fd827e0 (mapping.invalidate_lock#13){.+.+}-{4:4}, at: filemap_create_folio mm/filemap.c:2516 [inline]
ffff88807fd827e0 (mapping.invalidate_lock#13){.+.+}-{4:4}, at: filemap_get_pages+0xdc3/0x1fb0 mm/filemap.c:2586

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (mapping.invalidate_lock#13){.+.+}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5851
       down_read+0xb1/0xa40 kernel/locking/rwsem.c:1524
       filemap_invalidate_lock_shared include/linux/fs.h:932 [inline]
       filemap_fault+0x7f8/0x16c0 mm/filemap.c:3435
       ocfs2_fault+0xbb/0x3d0 fs/ocfs2/mmap.c:38
       __do_fault+0x135/0x390 mm/memory.c:4977
       do_read_fault mm/memory.c:5392 [inline]
       do_fault mm/memory.c:5526 [inline]
       do_pte_missing mm/memory.c:4047 [inline]
       handle_pte_fault mm/memory.c:5889 [inline]
       __handle_mm_fault+0x4c44/0x70f0 mm/memory.c:6032
       handle_mm_fault+0x2c1/0x7e0 mm/memory.c:6201
       do_user_addr_fault arch/x86/mm/fault.c:1388 [inline]
       handle_page_fault arch/x86/mm/fault.c:1480 [inline]
       exc_page_fault+0x2b9/0x8b0 arch/x86/mm/fault.c:1538
       asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623

-> #1 (&mm->mmap_lock){++++}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5851
       __might_fault+0xc6/0x120 mm/memory.c:6840
       _inline_copy_to_user include/linux/uaccess.h:192 [inline]
       _copy_to_user+0x2c/0xb0 lib/usercopy.c:26
       copy_to_user include/linux/uaccess.h:225 [inline]
       fiemap_fill_next_extent+0x235/0x410 fs/ioctl.c:145
       ocfs2_fiemap+0x9f1/0xf80 fs/ocfs2/extent_map.c:806
       ioctl_fiemap fs/ioctl.c:220 [inline]
       do_vfs_ioctl+0x1c01/0x2e40 fs/ioctl.c:840
       __do_sys_ioctl fs/ioctl.c:904 [inline]
       __se_sys_ioctl+0x80/0x170 fs/ioctl.c:892
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&ocfs2_file_ip_alloc_sem_key){++++}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3163 [inline]
       check_prevs_add kernel/locking/lockdep.c:3282 [inline]
       validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3906
       __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5228
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5851
       down_read+0xb1/0xa40 kernel/locking/rwsem.c:1524
       ocfs2_read_folio+0x36a/0x980 fs/ocfs2/aops.c:294
       filemap_read_folio+0x148/0x3b0 mm/filemap.c:2390
       filemap_create_folio mm/filemap.c:2525 [inline]
       filemap_get_pages+0x1042/0x1fb0 mm/filemap.c:2586
       filemap_splice_read+0x68e/0xef0 mm/filemap.c:2971
       do_splice_read fs/splice.c:985 [inline]
       splice_direct_to_actor+0x4af/0xc80 fs/splice.c:1089
       do_splice_direct_actor fs/splice.c:1207 [inline]
       do_splice_direct+0x289/0x3e0 fs/splice.c:1233
       do_sendfile+0x564/0x8a0 fs/read_write.c:1363
       __do_sys_sendfile64 fs/read_write.c:1424 [inline]
       __se_sys_sendfile64+0x17c/0x1e0 fs/read_write.c:1410
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  &ocfs2_file_ip_alloc_sem_key --> &mm->mmap_lock --> mapping.invalidate_lock#13

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  rlock(mapping.invalidate_lock#13);
                               lock(&mm->mmap_lock);
                               lock(mapping.invalidate_lock#13);
  rlock(&ocfs2_file_ip_alloc_sem_key);

 *** DEADLOCK ***
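[Editor's note: the scenario above is a lock-order inversion. The sendfile path takes mapping.invalidate_lock and then ip_alloc_sem, while the fiemap path takes ip_alloc_sem and then (transitively, via mmap_lock on the copy_to_user fault) invalidate_lock. The following is a minimal user-space sketch of the pattern lockdep is flagging, not the kernel code; lock names are borrowed from the report and mmap_lock is collapsed out of the middle of the chain for brevity.]

```python
import threading

class OrderTracker:
    """Records pairwise lock-acquisition order and flags a cycle (2-lock case),
    roughly what lockdep's dependency graph does for the report above."""
    def __init__(self):
        self.edges = set()           # (held, acquired) pairs observed so far
        self.lock = threading.Lock()

    def acquire_after(self, held, acquired):
        """Record that `acquired` was taken while `held` was held.
        Returns True if the reverse order has also been seen, i.e. the two
        code paths can deadlock if they run concurrently."""
        with self.lock:
            self.edges.add((held, acquired))
            return (acquired, held) in self.edges

tracker = OrderTracker()

# Path 1 mirrors sendfile/splice: invalidate_lock held, then ip_alloc_sem taken.
cycle1 = tracker.acquire_after("invalidate_lock", "ip_alloc_sem")
# Path 2 mirrors fiemap: ip_alloc_sem held, then invalidate_lock taken
# (via the page fault on copy_to_user).
cycle2 = tracker.acquire_after("ip_alloc_sem", "invalidate_lock")

print(cycle1, cycle2)  # the second acquisition closes the cycle
```

Seeing both orders is enough for lockdep to warn even if no deadlock actually occurred on this run; the warning is about the possibility, which is why neither stack trace alone looks wrong.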

1 lock held by syz.2.361/9103:
 #0: ffff88807fd827e0 (mapping.invalidate_lock#13){.+.+}-{4:4}, at: filemap_invalidate_lock_shared include/linux/fs.h:932 [inline]
 #0: ffff88807fd827e0 (mapping.invalidate_lock#13){.+.+}-{4:4}, at: filemap_create_folio mm/filemap.c:2516 [inline]
 #0: ffff88807fd827e0 (mapping.invalidate_lock#13){.+.+}-{4:4}, at: filemap_get_pages+0xdc3/0x1fb0 mm/filemap.c:2586

stack backtrace:
CPU: 0 UID: 0 PID: 9103 Comm: syz.2.361 Not tainted 6.14.0-rc1-syzkaller-00235-g9946eaf552b1 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_circular_bug+0x13a/0x1b0 kernel/locking/lockdep.c:2076
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2208
 check_prev_add kernel/locking/lockdep.c:3163 [inline]
 check_prevs_add kernel/locking/lockdep.c:3282 [inline]
 validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3906
 __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5228
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5851
 down_read+0xb1/0xa40 kernel/locking/rwsem.c:1524
 ocfs2_read_folio+0x36a/0x980 fs/ocfs2/aops.c:294
 filemap_read_folio+0x148/0x3b0 mm/filemap.c:2390
 filemap_create_folio mm/filemap.c:2525 [inline]
 filemap_get_pages+0x1042/0x1fb0 mm/filemap.c:2586
 filemap_splice_read+0x68e/0xef0 mm/filemap.c:2971
 do_splice_read fs/splice.c:985 [inline]
 splice_direct_to_actor+0x4af/0xc80 fs/splice.c:1089
 do_splice_direct_actor fs/splice.c:1207 [inline]
 do_splice_direct+0x289/0x3e0 fs/splice.c:1233
 do_sendfile+0x564/0x8a0 fs/read_write.c:1363
 __do_sys_sendfile64 fs/read_write.c:1424 [inline]
 __se_sys_sendfile64+0x17c/0x1e0 fs/read_write.c:1410
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f943158cde9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f9432389038 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 00007f94317a6080 RCX: 00007f943158cde9
RDX: 0000000000000000 RSI: 0000000000000004 RDI: 0000000000000008
RBP: 00007f943160e2a0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000fffe83 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f94317a6080 R15: 00007ffd6d546fb8
 </TASK>


---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.

syzbot will keep track of this issue. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.

If the report is already addressed, let syzbot know by replying with:
#syz fix: exact-commit-title

If you want to overwrite report's subsystems, reply with:
#syz set subsystems: new-subsystem
(See the list of subsystem names on the web dashboard)

If the report is a duplicate of another one, reply with:
#syz dup: exact-subject-of-another-report

If you want to undo deduplication, reply with:
#syz undup


* Re: [syzbot] [ocfs2?] possible deadlock in ocfs2_read_folio
  2025-02-13 14:33 [syzbot] [ocfs2?] possible deadlock in ocfs2_read_folio syzbot
@ 2025-05-28 14:48 ` syzbot
  0 siblings, 0 replies; 2+ messages in thread
From: syzbot @ 2025-05-28 14:48 UTC (permalink / raw)
  To: jlbec, joseph.qi, linux-kernel, mark, ocfs2-devel, syzkaller-bugs

syzbot has found a reproducer for the following issue on:

HEAD commit:    c89756bcf406 Merge tag 'pm-6.16-rc1' of git://git.kernel.o..
git tree:       upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=1201f170580000
kernel config:  https://syzkaller.appspot.com/x/.config?x=ded97a85afe9a6c8
dashboard link: https://syzkaller.appspot.com/bug?extid=bd316bb736c7dc2f318e
compiler:       Debian clang version 20.1.6 (++20250514063057+1e4d39e07757-1~exp1~20250514183223.118), Debian LLD 20.1.6
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=113ae6d4580000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=12b01df4580000

Downloadable assets:
disk image (non-bootable): https://storage.googleapis.com/syzbot-assets/d900f083ada3/non_bootable_disk-c89756bc.raw.xz
vmlinux: https://storage.googleapis.com/syzbot-assets/b21d74e73303/vmlinux-c89756bc.xz
kernel image: https://storage.googleapis.com/syzbot-assets/b778ededeb75/bzImage-c89756bc.xz
mounted in repro: https://storage.googleapis.com/syzbot-assets/6ca75df782b2/mount_0.gz
  fsck result: OK (log: https://syzkaller.appspot.com/x/fsck.log?x=14b01df4580000)

IMPORTANT: if you fix the issue, please add the following tag to the commit:
Reported-by: syzbot+bd316bb736c7dc2f318e@syzkaller.appspotmail.com

======================================================
WARNING: possible circular locking dependency detected
6.15.0-syzkaller-03478-gc89756bcf406 #0 Not tainted
------------------------------------------------------
syz-executor341/5408 is trying to acquire lock:
ffff888046e7a2e0 (&ocfs2_file_ip_alloc_sem_key){++++}-{4:4}, at: ocfs2_read_folio+0x353/0x970 fs/ocfs2/aops.c:287

but task is already holding lock:
ffff888046e7a7e0 (mapping.invalidate_lock#3){.+.+}-{4:4}, at: filemap_invalidate_lock_shared include/linux/fs.h:932 [inline]
ffff888046e7a7e0 (mapping.invalidate_lock#3){.+.+}-{4:4}, at: filemap_fault+0x546/0x1200 mm/filemap.c:3391

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (mapping.invalidate_lock#3){.+.+}-{4:4}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5871
       down_read+0x46/0x2e0 kernel/locking/rwsem.c:1524
       filemap_invalidate_lock_shared include/linux/fs.h:932 [inline]
       filemap_fault+0x546/0x1200 mm/filemap.c:3391
       ocfs2_fault+0xa4/0x3f0 fs/ocfs2/mmap.c:38
       __do_fault+0x138/0x390 mm/memory.c:5098
       do_read_fault mm/memory.c:5518 [inline]
       do_fault mm/memory.c:5652 [inline]
       do_pte_missing mm/memory.c:4160 [inline]
       handle_pte_fault mm/memory.c:5997 [inline]
       __handle_mm_fault+0x37c5/0x55e0 mm/memory.c:6140
       handle_mm_fault+0x3f6/0x8c0 mm/memory.c:6309
       faultin_page mm/gup.c:1193 [inline]
       __get_user_pages+0x1a78/0x30c0 mm/gup.c:1491
       populate_vma_page_range+0x26b/0x340 mm/gup.c:1929
       __mm_populate+0x24c/0x380 mm/gup.c:2032
       mm_populate include/linux/mm.h:3487 [inline]
       vm_mmap_pgoff+0x3f0/0x4c0 mm/util.c:584
       ksys_mmap_pgoff+0x51f/0x760 mm/mmap.c:607
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&mm->mmap_lock){++++}-{4:4}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5871
       __might_fault+0xcc/0x130 mm/memory.c:7151
       _inline_copy_to_user include/linux/uaccess.h:192 [inline]
       _copy_to_user+0x2c/0xb0 lib/usercopy.c:26
       copy_to_user include/linux/uaccess.h:225 [inline]
       fiemap_fill_next_extent+0x1c0/0x390 fs/ioctl.c:145
       ocfs2_fiemap+0x888/0xc90 fs/ocfs2/extent_map.c:806
       ioctl_fiemap fs/ioctl.c:220 [inline]
       do_vfs_ioctl+0x16d3/0x1990 fs/ioctl.c:841
       __do_sys_ioctl fs/ioctl.c:905 [inline]
       __se_sys_ioctl+0x82/0x170 fs/ioctl.c:893
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&ocfs2_file_ip_alloc_sem_key){++++}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3168 [inline]
       check_prevs_add kernel/locking/lockdep.c:3287 [inline]
       validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3911
       __lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5240
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5871
       down_read+0x46/0x2e0 kernel/locking/rwsem.c:1524
       ocfs2_read_folio+0x353/0x970 fs/ocfs2/aops.c:287
       filemap_read_folio+0x117/0x380 mm/filemap.c:2401
       filemap_fault+0xb16/0x1200 mm/filemap.c:3495
       ocfs2_fault+0xa4/0x3f0 fs/ocfs2/mmap.c:38
       __do_fault+0x138/0x390 mm/memory.c:5098
       do_read_fault mm/memory.c:5518 [inline]
       do_fault mm/memory.c:5652 [inline]
       do_pte_missing mm/memory.c:4160 [inline]
       handle_pte_fault mm/memory.c:5997 [inline]
       __handle_mm_fault+0x37c5/0x55e0 mm/memory.c:6140
       handle_mm_fault+0x3f6/0x8c0 mm/memory.c:6309
       faultin_page mm/gup.c:1193 [inline]
       __get_user_pages+0x1a78/0x30c0 mm/gup.c:1491
       populate_vma_page_range+0x26b/0x340 mm/gup.c:1929
       __mm_populate+0x24c/0x380 mm/gup.c:2032
       mm_populate include/linux/mm.h:3487 [inline]
       vm_mmap_pgoff+0x3f0/0x4c0 mm/util.c:584
       ksys_mmap_pgoff+0x51f/0x760 mm/mmap.c:607
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  &ocfs2_file_ip_alloc_sem_key --> &mm->mmap_lock --> mapping.invalidate_lock#3

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  rlock(mapping.invalidate_lock#3);
                               lock(&mm->mmap_lock);
                               lock(mapping.invalidate_lock#3);
  rlock(&ocfs2_file_ip_alloc_sem_key);

 *** DEADLOCK ***

1 lock held by syz-executor341/5408:
 #0: ffff888046e7a7e0 (mapping.invalidate_lock#3){.+.+}-{4:4}, at: filemap_invalidate_lock_shared include/linux/fs.h:932 [inline]
 #0: ffff888046e7a7e0 (mapping.invalidate_lock#3){.+.+}-{4:4}, at: filemap_fault+0x546/0x1200 mm/filemap.c:3391

stack backtrace:
CPU: 0 UID: 0 PID: 5408 Comm: syz-executor341 Not tainted 6.15.0-syzkaller-03478-gc89756bcf406 #0 PREEMPT(full) 
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 print_circular_bug+0x2ee/0x310 kernel/locking/lockdep.c:2046
 check_noncircular+0x134/0x160 kernel/locking/lockdep.c:2178
 check_prev_add kernel/locking/lockdep.c:3168 [inline]
 check_prevs_add kernel/locking/lockdep.c:3287 [inline]
 validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3911
 __lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5240
 lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5871
 down_read+0x46/0x2e0 kernel/locking/rwsem.c:1524
 ocfs2_read_folio+0x353/0x970 fs/ocfs2/aops.c:287
 filemap_read_folio+0x117/0x380 mm/filemap.c:2401
 filemap_fault+0xb16/0x1200 mm/filemap.c:3495
 ocfs2_fault+0xa4/0x3f0 fs/ocfs2/mmap.c:38
 __do_fault+0x138/0x390 mm/memory.c:5098
 do_read_fault mm/memory.c:5518 [inline]
 do_fault mm/memory.c:5652 [inline]
 do_pte_missing mm/memory.c:4160 [inline]
 handle_pte_fault mm/memory.c:5997 [inline]
 __handle_mm_fault+0x37c5/0x55e0 mm/memory.c:6140
 handle_mm_fault+0x3f6/0x8c0 mm/memory.c:6309
 faultin_page mm/gup.c:1193 [inline]
 __get_user_pages+0x1a78/0x30c0 mm/gup.c:1491
 populate_vma_page_range+0x26b/0x340 mm/gup.c:1929
 __mm_populate+0x24c/0x380 mm/gup.c:2032
 mm_populate include/linux/mm.h:3487 [inline]
 vm_mmap_pgoff+0x3f0/0x4c0 mm/util.c:584
 ksys_mmap_pgoff+0x51f/0x760 mm/mmap.c:607
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f1267d03dd9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 b1 18 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f1267c97208 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f1267d90718 RCX: 00007f1267d03dd9
RDX: 0000000001000003 RSI: 0000000000b36000 RDI: 0000200000000000
RBP: 00007f1267d90710 R08: 0000000000000006 R09: 0000000000000000
R10: 0000000000028011 R11: 0000000000000246 R12: 00007f1267d5d624
R13: 5bf000f24f5ebbca R14: 0000200000000280 R15: 0000200000000000
 </TASK>


---
If you want syzbot to run the reproducer, reply with:
#syz test: git://repo/address.git branch-or-commit-hash
If you attach or paste a git patch, syzbot will apply it before testing.

