From: NeilBrown <neilb@suse.de>
To: Killian De Volder <killian.de.volder@megasoft.be>
Cc: linux-raid@vger.kernel.org
Subject: Re: Unable to reduce raid size.
Date: Fri, 18 Jul 2014 19:40:43 +1000
Message-ID: <20140718194043.1c938b2c@notabene.brown>
In-Reply-To: <53C8DFEE.7000000@megasoft.be>
On Fri, 18 Jul 2014 10:50:54 +0200 Killian De Volder
<killian.de.volder@megasoft.be> wrote:
> Hello,
>
> I have a strange issue: I cannot reduce the size of a degraded RAID 5:
>
> strace mdadm -vv --grow /dev/md125 --size=2778726400
>
> Fails with:
> Strace:
> open("/sys/block/md125/md/dev-sdb4/size", O_WRONLY) = 4
> write(4, "2778726400", 10) = -1 EBUSY (Device or resource busy)
> close(4)
This condition isn't treated as an error by mdadm, so it isn't the cause.
Could you post the entire strace? (Really, bytes are cheap; always provide
more detail than you think is needed... though you did provide quite a bit.)
Any kernel messages (dmesg output)?
NeilBrown
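
For anyone wanting to reproduce the failing write outside mdadm, here is a
minimal Python sketch that replays the open/write/close sequence from the
strace above (the sysfs path and value are taken verbatim from the trace;
this illustrates the symptom, not mdadm's actual code path):

    import os

    # Replay the sysfs write that mdadm attempts in the strace above.
    fd = os.open("/sys/block/md125/md/dev-sdb4/size", os.O_WRONLY)
    try:
        os.write(fd, b"2778726400")
    except OSError as e:
        print(e)  # expected on the affected system: [Errno 16] Device or resource busy
    finally:
        os.close(fd)

The complete trace is easiest to capture to a file and attach, e.g.:

    strace -f -o mdadm.strace mdadm -vv --grow /dev/md125 --size=2778726400
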
> Stdout:
> component size of /dev/md125 unchanged at 2858285568K
> Stderr:
> <nothing>
>
>
> Any suggestions?
> Note: I can work around this bug by moving partitions around a bit so the size reduction is no longer needed.
> However, I suspect a bug or an undocumented corner case that should be resolved.
>
>
> Things I tried:
> ---------------
> - Disabled the bcache udev rules (they showed up in every attempt; perhaps they were being triggered during the resize, but that seems not to be the case).
> - Tried the same with loop files -> this works fine, even with bcache (and with the udev rules disabled).
> - Removed the internal write bitmap.
> - Opened the device with os.open("/dev/md125", os.O_RDWR | os.O_EXCL) in Python to test whether it is in use somewhere (the open succeeded); see the sketch after this list.
> - Set the array-size to less than the desired new size (while still not destroying the FS on it).
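
A minimal sketch of that exclusive-open test, assuming /dev/md125 (opening a
block device with O_EXCL, but without O_CREAT, fails with EBUSY if the kernel
considers the device in use, e.g. mounted or claimed by another subsystem):

    import os

    # Raises OSError: [Errno 16] Device or resource busy if the device
    # is already held; success means no one else has it claimed.
    fd = os.open("/dev/md125", os.O_RDWR | os.O_EXCL)
    os.close(fd)
    print("exclusive open succeeded; /dev/md125 is not claimed elsewhere")
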
>
>
> Information:
> ------------
> Kernel version: 3.15.5
> mdadm tools: 3.3-r2
>
> mdadm --detail:
> /dev/md125:
> Version : 1.2
> Creation Time : Wed Apr 16 20:58:09 2014
> Raid Level : raid5
> Array Size : 8283750400 (7900.00 GiB 8482.56 GB)
> Used Dev Size : 2858285568 (2725.87 GiB 2926.88 GB)
> Raid Devices : 4
> Total Devices : 3
> Persistence : Superblock is persistent
>
> Update Time : Fri Jul 18 09:53:08 2014
> State : clean, degraded
> Active Devices : 3
> Working Devices : 3
> Failed Devices : 0
> Spare Devices : 0
>
> Layout : left-symmetric
> Chunk Size : 512K
>
> UUID : 885c588b:c3503d9d:c67b86db:2887f8f7
> Events : 6440
>
>     Number   Major   Minor   RaidDevice  State
>        0       8       36        0       active sync   /dev/sdc4
>        2       0        0        2       removed
>        2       8       20        2       active sync   /dev/sdb4
>        4       8       52        3       active sync   /dev/sdd4
>
Thread overview: 5+ messages
2014-07-18 8:50 Unable to reduce raid size Killian De Volder
2014-07-18 9:40 ` NeilBrown [this message]
2014-07-18 9:58 ` Killian De Volder
2014-07-18 10:48 ` NeilBrown
2014-07-18 11:19 ` Killian De Volder