* [PATCH 000 of 5] md: Assorted minor fixes for mainline
From: NeilBrown @ 2006-12-08 1:05 UTC
To: Andrew Morton; +Cc: linux-raid, linux-kernel
Following are 5 patches for md in 2.6.19-rc6-mm2 that are suitable for 2.6.20.
Patch 4 might fix an outstanding bug against md which manifests as an
oops early in boot, but I don't have test results yet.
NeilBrown
[PATCH 001 of 5] md: Remove some old ifdefed-out code from raid5.c
[PATCH 002 of 5] md: Return a non-zero error to bi_end_io as appropriate in raid5.
[PATCH 003 of 5] md: Assorted md and raid1 one-liners
[PATCH 004 of 5] md: Close a race between destroying and recreating an md device.
[PATCH 005 of 5] md: Allow mddevs to live a bit longer to avoid a loop with udev.
* [PATCH 001 of 5] md: Remove some old ifdefed-out code from raid5.c
From: NeilBrown @ 2006-12-08 1:05 UTC
To: Andrew Morton; +Cc: linux-raid, linux-kernel
There are some vestiges of old code that was used for bypassing the
stripe cache on reads in raid5.c. This was never updated after the
change from buffer_heads to bios, but was left as a reminder.
That functionality has now been implemented in a completely different
way, so the old code can go.
Signed-off-by: Neil Brown <neilb@suse.de>
### Diffstat output
./drivers/md/raid5.c | 63 ---------------------------------------------------
1 file changed, 63 deletions(-)
diff .prev/drivers/md/raid5.c ./drivers/md/raid5.c
--- .prev/drivers/md/raid5.c 2006-12-06 14:23:10.000000000 +1100
+++ ./drivers/md/raid5.c 2006-12-06 14:31:26.000000000 +1100
@@ -544,35 +544,7 @@ static int raid5_end_read_request(struct
}
if (uptodate) {
-#if 0
- struct bio *bio;
- unsigned long flags;
- spin_lock_irqsave(&conf->device_lock, flags);
- /* we can return a buffer if we bypassed the cache or
- * if the top buffer is not in highmem. If there are
- * multiple buffers, leave the extra work to
- * handle_stripe
- */
- buffer = sh->bh_read[i];
- if (buffer &&
- (!PageHighMem(buffer->b_page)
- || buffer->b_page == bh->b_page )
- ) {
- sh->bh_read[i] = buffer->b_reqnext;
- buffer->b_reqnext = NULL;
- } else
- buffer = NULL;
- spin_unlock_irqrestore(&conf->device_lock, flags);
- if (sh->bh_page[i]==bh->b_page)
- set_buffer_uptodate(bh);
- if (buffer) {
- if (buffer->b_page != bh->b_page)
- memcpy(buffer->b_data, bh->b_data, bh->b_size);
- buffer->b_end_io(buffer, 1);
- }
-#else
set_bit(R5_UPTODATE, &sh->dev[i].flags);
-#endif
if (test_bit(R5_ReadError, &sh->dev[i].flags)) {
rdev = conf->disks[i].rdev;
printk(KERN_INFO "raid5:%s: read error corrected (%lu sectors at %llu on %s)\n",
@@ -618,14 +590,6 @@ static int raid5_end_read_request(struct
}
}
rdev_dec_pending(conf->disks[i].rdev, conf->mddev);
-#if 0
- /* must restore b_page before unlocking buffer... */
- if (sh->bh_page[i] != bh->b_page) {
- bh->b_page = sh->bh_page[i];
- bh->b_data = page_address(bh->b_page);
- clear_buffer_uptodate(bh);
- }
-#endif
clear_bit(R5_LOCKED, &sh->dev[i].flags);
set_bit(STRIPE_HANDLE, &sh->state);
release_stripe(sh);
@@ -1619,15 +1583,6 @@ static void handle_stripe5(struct stripe
} else if (test_bit(R5_Insync, &dev->flags)) {
set_bit(R5_LOCKED, &dev->flags);
set_bit(R5_Wantread, &dev->flags);
-#if 0
- /* if I am just reading this block and we don't have
- a failed drive, or any pending writes then sidestep the cache */
- if (sh->bh_read[i] && !sh->bh_read[i]->b_reqnext &&
- ! syncing && !failed && !to_write) {
- sh->bh_cache[i]->b_page = sh->bh_read[i]->b_page;
- sh->bh_cache[i]->b_data = sh->bh_read[i]->b_data;
- }
-#endif
locked++;
PRINTK("Reading block %d (sync=%d)\n",
i, syncing);
@@ -1645,9 +1600,6 @@ static void handle_stripe5(struct stripe
dev = &sh->dev[i];
if ((dev->towrite || i == sh->pd_idx) &&
(!test_bit(R5_LOCKED, &dev->flags)
-#if 0
-|| sh->bh_page[i]!=bh->b_page
-#endif
) &&
!test_bit(R5_UPTODATE, &dev->flags)) {
if (test_bit(R5_Insync, &dev->flags)
@@ -1659,9 +1611,6 @@ static void handle_stripe5(struct stripe
/* Would I have to read this buffer for reconstruct_write */
if (!test_bit(R5_OVERWRITE, &dev->flags) && i != sh->pd_idx &&
(!test_bit(R5_LOCKED, &dev->flags)
-#if 0
-|| sh->bh_page[i] != bh->b_page
-#endif
) &&
!test_bit(R5_UPTODATE, &dev->flags)) {
if (test_bit(R5_Insync, &dev->flags)) rcw++;
@@ -2197,15 +2146,6 @@ static void handle_stripe6(struct stripe
} else if (test_bit(R5_Insync, &dev->flags)) {
set_bit(R5_LOCKED, &dev->flags);
set_bit(R5_Wantread, &dev->flags);
-#if 0
- /* if I am just reading this block and we don't have
- a failed drive, or any pending writes then sidestep the cache */
- if (sh->bh_read[i] && !sh->bh_read[i]->b_reqnext &&
- ! syncing && !failed && !to_write) {
- sh->bh_cache[i]->b_page = sh->bh_read[i]->b_page;
- sh->bh_cache[i]->b_data = sh->bh_read[i]->b_data;
- }
-#endif
locked++;
PRINTK("Reading block %d (sync=%d)\n",
i, syncing);
@@ -2224,9 +2164,6 @@ static void handle_stripe6(struct stripe
if (!test_bit(R5_OVERWRITE, &dev->flags)
&& i != pd_idx && i != qd_idx
&& (!test_bit(R5_LOCKED, &dev->flags)
-#if 0
- || sh->bh_page[i] != bh->b_page
-#endif
) &&
!test_bit(R5_UPTODATE, &dev->flags)) {
if (test_bit(R5_Insync, &dev->flags)) rcw++;
* [PATCH 002 of 5] md: Return a non-zero error to bi_end_io as appropriate in raid5.
From: NeilBrown @ 2006-12-08 1:05 UTC
To: Andrew Morton; +Cc: linux-raid, linux-kernel
Currently raid5 depends on clearing the BIO_UPTODATE flag to signal an
error to higher levels. While this should be sufficient, it is safer
to explicitly set the error code as well - less room for confusion.
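For anyone unfamiliar with the 2.6-era convention: bi_end_io takes the
bio, a completed byte count, and an error code. A minimal sketch of the
idiom this patch applies (return_bio_with_status is a hypothetical
helper name - the patch open-codes the same logic at each call site):

	/* Complete a bio, deriving the error code from BIO_UPTODATE.
	 * Hypothetical helper; the patch open-codes this instead. */
	static void return_bio_with_status(struct bio *bi, int bytes)
	{
		bi->bi_size = 0;
		bi->bi_end_io(bi, bytes,
			      test_bit(BIO_UPTODATE, &bi->bi_flags)
			      ? 0 : -EIO);
	}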
Signed-off-by: Neil Brown <neilb@suse.de>
### Diffstat output
./drivers/md/raid5.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
diff .prev/drivers/md/raid5.c ./drivers/md/raid5.c
--- .prev/drivers/md/raid5.c 2006-12-07 15:33:40.000000000 +1100
+++ ./drivers/md/raid5.c 2006-12-07 15:44:41.000000000 +1100
@@ -1818,7 +1818,9 @@ static void handle_stripe5(struct stripe
return_bi = bi->bi_next;
bi->bi_next = NULL;
bi->bi_size = 0;
- bi->bi_end_io(bi, bytes, 0);
+ bi->bi_end_io(bi, bytes,
+ test_bit(BIO_UPTODATE, &bi->bi_flags)
+ ? 0 : -EIO);
}
for (i=disks; i-- ;) {
int rw;
@@ -2359,7 +2361,9 @@ static void handle_stripe6(struct stripe
return_bi = bi->bi_next;
bi->bi_next = NULL;
bi->bi_size = 0;
- bi->bi_end_io(bi, bytes, 0);
+ bi->bi_end_io(bi, bytes,
+ test_bit(BIO_UPTODATE, &bi->bi_flags)
+ ? 0 : -EIO);
}
for (i=disks; i-- ;) {
int rw;
@@ -2859,7 +2863,9 @@ static int make_request(request_queue_t
if ( rw == WRITE )
md_write_end(mddev);
bi->bi_size = 0;
- bi->bi_end_io(bi, bytes, 0);
+ bi->bi_end_io(bi, bytes,
+ test_bit(BIO_UPTODATE, &bi->bi_flags)
+ ? 0 : -EIO);
}
return 0;
}
@@ -3127,7 +3133,9 @@ static int retry_aligned_read(raid5_con
int bytes = raid_bio->bi_size;
raid_bio->bi_size = 0;
- raid_bio->bi_end_io(raid_bio, bytes, 0);
+ raid_bio->bi_end_io(raid_bio, bytes,
+ test_bit(BIO_UPTODATE, &raid_bio->bi_flags)
+ ? 0 : -EIO);
}
if (atomic_dec_and_test(&conf->active_aligned_reads))
wake_up(&conf->wait_for_stripe);
* [PATCH 003 of 5] md: Assorted md and raid1 one-liners
From: NeilBrown @ 2006-12-08 1:05 UTC
To: Andrew Morton; +Cc: linux-raid, linux-kernel
Fix a few bugs that meant that:
- superblocks weren't always written at exactly the right time (this
could show up if the array was not written to - writing to the array
causes lots of superblock updates and so hides these errors).
- restarting device recovery after a clean shutdown (version-1 metadata
only) didn't work as intended (or at all).
1/ Ensure superblock is updated when a new device is added.
2/ Remove an inappropriate test on MD_RECOVERY_SYNC in md_do_sync.
The body of this if takes one of two branches depending on whether
MD_RECOVERY_SYNC is set, so testing it in the condition of the if
is wrong (see the sketch after this list).
3/ Flag superblock for updating after a resync/recovery finishes.
4/ If we find the need to restart a recovery in the middle (version-1
metadata only), make sure a full recovery (not just as guided by
bitmaps) does get done.
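To make point 2 concrete, the control flow before this patch was
roughly as follows (a simplified sketch of the md_do_sync hunk below,
with the branch bodies reduced to comments):

	if (!test_bit(MD_RECOVERY_ERR, &mddev->recovery) &&
	    test_bit(MD_RECOVERY_SYNC, &mddev->recovery) && /* removed */
	    !test_bit(MD_RECOVERY_CHECK, &mddev->recovery) &&
	    mddev->curr_resync > 2) {
		if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery))
			; /* resync: record the resync checkpoint */
		else
			; /* recovery: record each rdev->recovery_offset -
			   * unreachable while the outer condition
			   * insisted on MD_RECOVERY_SYNC */
	}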
Signed-off-by: Neil Brown <neilb@suse.de>
### Diffstat output
./drivers/md/md.c | 3 ++-
./drivers/md/raid1.c | 1 +
2 files changed, 3 insertions(+), 1 deletion(-)
diff .prev/drivers/md/md.c ./drivers/md/md.c
--- .prev/drivers/md/md.c 2006-12-07 15:33:40.000000000 +1100
+++ ./drivers/md/md.c 2006-12-07 15:44:53.000000000 +1100
@@ -3729,6 +3729,7 @@ static int add_new_disk(mddev_t * mddev,
if (err)
export_rdev(rdev);
+ md_update_sb(mddev, 1);
set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
md_wakeup_thread(mddev->thread);
return err;
@@ -5289,7 +5290,6 @@ void md_do_sync(mddev_t *mddev)
mddev->pers->sync_request(mddev, max_sectors, &skipped, 1);
if (!test_bit(MD_RECOVERY_ERR, &mddev->recovery) &&
- test_bit(MD_RECOVERY_SYNC, &mddev->recovery) &&
!test_bit(MD_RECOVERY_CHECK, &mddev->recovery) &&
mddev->curr_resync > 2) {
if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery)) {
@@ -5313,6 +5313,7 @@ void md_do_sync(mddev_t *mddev)
rdev->recovery_offset = mddev->curr_resync;
}
}
+ set_bit(MD_CHANGE_DEVS, &mddev->flags);
skip:
mddev->curr_resync = 0;
diff .prev/drivers/md/raid1.c ./drivers/md/raid1.c
--- .prev/drivers/md/raid1.c 2006-12-07 15:33:40.000000000 +1100
+++ ./drivers/md/raid1.c 2006-12-07 15:44:53.000000000 +1100
@@ -1951,6 +1951,7 @@ static int run(mddev_t *mddev)
!test_bit(In_sync, &disk->rdev->flags)) {
disk->head_position = 0;
mddev->degraded++;
+ conf->fullsync = 1;
}
}
if (mddev->degraded == conf->raid_disks) {
* [PATCH 004 of 5] md: Close a race between destroying and recreating an md device.
From: NeilBrown @ 2006-12-08 1:05 UTC
To: Andrew Morton; +Cc: linux-raid, linux-kernel
For each md device, we need a gendisk. As that gendisk has a name
that gets registered in sysfs, we need to make sure that when an md
device is shut down, we don't create it again until the shutdown is
complete and the gendisk has been deleted.
This patch utilises the disks_mutex to ensure the proper exclusion.
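In outline, the scheme in the hunk below is an "optimistic drop, then
retry under the heavier lock" idiom (generic names in this sketch, not
the actual md.c identifiers):

	if (!atomic_dec_and_lock(&obj->active, &list_lock))
		return;			/* not the last reference */
	atomic_inc(&obj->active);	/* put the ref back... */
	spin_unlock(&list_lock);	/* ...and take the mutex first */

	mutex_lock(&big_mutex);
	if (atomic_dec_and_lock(&obj->active, &list_lock)) {
		/* still the last reference: safe to tear down now */
		list_del(&obj->node);
		spin_unlock(&list_lock);
		teardown(obj);		/* hypothetical cleanup */
	}
	mutex_unlock(&big_mutex);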
Signed-off-by: Neil Brown <neilb@suse.de>
### Diffstat output
./drivers/md/md.c | 25 +++++++++++++++++++++----
1 file changed, 21 insertions(+), 4 deletions(-)
diff .prev/drivers/md/md.c ./drivers/md/md.c
--- .prev/drivers/md/md.c 2006-12-07 15:45:31.000000000 +1100
+++ ./drivers/md/md.c 2006-12-07 21:01:11.000000000 +1100
@@ -222,18 +222,36 @@ static inline mddev_t *mddev_get(mddev_t
return mddev;
}
+static DEFINE_MUTEX(disks_mutex);
static void mddev_put(mddev_t *mddev)
{
+ /* We need to hold disks_mutex to safely destroy the gendisk
+ * info before someone else creates a new gendisk with the same
+ * name, but we don't want to take that mutex just to decrement
+ * the ->active counter. So we first test if this is the last
+ * reference. If it is, we put things back as they were found
+ * and take disks_mutex before trying again.
+ */
if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
return;
+ atomic_inc(&mddev->active);
+ spin_unlock(&all_mddevs_lock);
+
+ mutex_lock(&disks_mutex);
+
+ if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock)) {
+ mutex_unlock(&disks_mutex);
+ return;
+ }
list_del(&mddev->all_mddevs);
spin_unlock(&all_mddevs_lock);
- del_gendisk(mddev->gendisk);
- mddev->gendisk = NULL;
+ if (mddev->gendisk)
+ del_gendisk(mddev->gendisk);
blk_cleanup_queue(mddev->queue);
- mddev->queue = NULL;
kobject_unregister(&mddev->kobj);
+
+ mutex_unlock(&disks_mutex);
}
static mddev_t * mddev_find(dev_t unit)
@@ -2948,7 +2966,6 @@ int mdp_major = 0;
static struct kobject *md_probe(dev_t dev, int *part, void *data)
{
- static DEFINE_MUTEX(disks_mutex);
mddev_t *mddev = mddev_find(dev);
struct gendisk *disk;
int partitioned = (MAJOR(dev) != MD_MAJOR);
* [PATCH 005 of 5] md: Allow mddevs to live a bit longer to avoid a loop with udev.
From: NeilBrown @ 2006-12-08 1:05 UTC
To: Andrew Morton; +Cc: linux-raid, linux-kernel
As md devices are automatically created on first open, and automatically
destroyed on last close if they have no significant state, a loop can
arise with udev.
If you open and close an md device, that will generate add and remove
events to udev. udev will open the device, notice nothing is there,
and close it again ... which generates another pair of add/remove events.
Ad infinitum.
So: Change md to only destroy a device if an explicit MD_STOP was
requested. This means that md devices might hang around longer than
you would like, but it is easy to get rid of them, and that could even
be automated in user-space (e.g. by mdadm --monitor).
Signed-off-by: Neil Brown <neilb@suse.de>
### Diffstat output
./drivers/md/md.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff .prev/drivers/md/md.c ./drivers/md/md.c
--- .prev/drivers/md/md.c 2006-12-07 21:01:11.000000000 +1100
+++ ./drivers/md/md.c 2006-12-08 10:22:46.000000000 +1100
@@ -292,7 +292,7 @@ static mddev_t * mddev_find(dev_t unit)
atomic_set(&new->active, 1);
spin_lock_init(&new->write_lock);
init_waitqueue_head(&new->sb_wait);
- new->dead = 1;
+ new->dead = 0;
new->queue = blk_alloc_queue(GFP_KERNEL);
if (!new->queue) {
* Re: [PATCH 000 of 5] md: Assorted minor fixes for mainline
From: Andrew Morton @ 2006-12-09 0:04 UTC
To: NeilBrown; +Cc: linux-raid, linux-kernel
On Fri, 8 Dec 2006 12:05:24 +1100
NeilBrown <neilb@suse.de> wrote:
> Following are 5 patches for md in 2.6.19-rc6-mm2 that are suitable for 2.6.20.
>
> Patch 4 might fix an outstanding bug against md which manifests as an
> oops early in boot, but I don't have test results yet.
>
> NeilBrown
>
> [PATCH 001 of 5] md: Remove some old ifdefed-out code from raid5.c
> [PATCH 002 of 5] md: Return a non-zero error to bi_end_io as appropriate in raid5.
> [PATCH 003 of 5] md: Assorted md and raid1 one-liners
> [PATCH 004 of 5] md: Close a race between destroying and recreating an md device.
> [PATCH 005 of 5] md: Allow mddevs to live a bit longer to avoid a loop with udev.
md-change-lifetime-rules-for-md-devices.patch still has a cloud over its
head (Jiri Kosina <jikos@jikos.cz>'s repeatable failure), so I staged these
new patches as below:
md-fix-innocuous-bug-in-raid6-stripe_to_pdidx.patch
#
md-conditionalize-some-code.patch
+md-remove-some-old-ifdefed-out-code-from-raid5c.patch
+md-return-a-non-zero-error-to-bi_end_io-as-appropriate-in-raid5.patch
+md-assorted-md-and-raid1-one-liners.patch
md-change-lifetime-rules-for-md-devices.patch
+md-close-a-race-between-destroying-and-recreating-an-md-device.patch
+md-allow-mddevs-to-live-a-bit-longer-to-avoid-a-loop-with-udev.patch
So the last three are maybe-not-for-2.6.20.
Does that sound sane?
* Re: [PATCH 000 of 5] md: Assorted minor fixes for mainline
From: Neil Brown @ 2006-12-09 8:51 UTC
To: Andrew Morton; +Cc: linux-raid, linux-kernel
On Friday December 8, akpm@osdl.org wrote:
>
> md-change-lifetime-rules-for-md-devices.patch still has a cloud over its
> head (Jiri Kosina <jikos@jikos.cz>'s repeatable failure), so I staged these
> new patches as below:
>
>
> md-fix-innocuous-bug-in-raid6-stripe_to_pdidx.patch
> #
> md-conditionalize-some-code.patch
> +md-remove-some-old-ifdefed-out-code-from-raid5c.patch
> +md-return-a-non-zero-error-to-bi_end_io-as-appropriate-in-raid5.patch
> +md-assorted-md-and-raid1-one-liners.patch
> md-change-lifetime-rules-for-md-devices.patch
> +md-close-a-race-between-destroying-and-recreating-an-md-device.patch
> +md-allow-mddevs-to-live-a-bit-longer-to-avoid-a-loop-with-udev.patch
>
> So the last three are maybe-not-for-2.6.20.
>
> Does that sound sane?
Yes, perfectly sane ... though I still hope to nail that bug :-)
Thanks,
NeilBrown