From: Ron Leach <ronleach@tesco.net>
To: linux-raid@vger.kernel.org
Subject: Re: Recovery on new 2TB disk: finish=7248.4min (raid1)
Date: Fri, 28 Apr 2017 08:05:42 +0100
Message-ID: <5902E9C6.3090601@tesco.net>
In-Reply-To: <e76a0a07-2306-e63a-a1f7-48d103eefd62@thelounge.net>
On 27/04/2017 15:43, Reindl Harald wrote:
> [root@rh:~]$ cat /proc/mdstat
> Personalities : [raid10] [raid1]
> md0 : active raid1 sda1[0] sdc1[1] sdb1[3] sdd1[2]
>       511988 blocks super 1.0 [4/4] [UUUU]
>
> md1 : active raid10 sda2[0] sdc2[1] sdd2[2] sdb2[3]
>       30716928 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
>
> md2 : active raid10 sda3[0] sdc3[1] sdd3[2] sdb3[3]
>       3875222528 blocks super 1.1 512K chunks 2 near-copies [4/4] [UUUU]
>       [========>............]  check = 44.4% (1721204032/3875222528)
>       finish=470.9min speed=76232K/sec
>
Those were the sort of times I used to see on this machine. I've
fixed it now, though. There were some clues in syslog: gdm3 was
logging alerts two or three times a second, continually. This was
because I'd taken the server offline and across to a workbench to
change the disk; I'd then restarted the machine, partitioned the new
disk, and issued those --add commands without a screen or keyboard
attached, just over ssh. I hadn't realised that gdm3 would panic when
run headless, causing a couple of acpid messages as well each time.
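For completeness, the sequence was roughly this (device names below
are placeholders rather than the exact ones on this box, and I've
used sfdisk for the partitioning step only as an illustration):

  # copy the partition table from a surviving disk to the new one
  # (sdX = surviving disk, sdY = replacement; both placeholders)
  sfdisk -d /dev/sdX | sfdisk /dev/sdY

  # add each new partition back into its array
  mdadm /dev/md0 --add /dev/sdY1
  mdadm /dev/md1 --add /dev/sdY2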
Someone on the Debian list pointed out that gdm3 is a service and
could be stopped in this circumstance. Doing that seemed to release
mdadm to recover at its normal rate; all the md arrays are fully
replicated now.
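In case it helps anyone searching the archive later, the fix amounted
to something like the following (this assumes systemd; on a sysvinit
setup it would be 'service gdm3 stop' instead):

  # stop the display manager while the machine is headless
  systemctl stop gdm3

  # then watch the resync speed climb back to normal
  watch -n 5 cat /proc/mdstat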
Thanks to folks for contributing their thoughts; some interesting
insights came up as well, which will be useful in the future. This is
quite an old server (still actively used), created before I realised
the drawbacks of having a separate partition for each part of the
filesystem; I don't do this on more recent systems, which look more
like the setup you show here.
Anyway, all seems ok now, and thanks again,
regards, Ron