* mdadm raid5 array - 0 space available but usage is less than capacity
From: Robin Doherty @ 2010-09-23 19:43 UTC
To: linux-raid
I have a RAID5 array of five 1 TB disks that has worked fine for two
years, but it now reports 0 space available even though there is still
space on it. It will let me read but not write. I can delete things
and the usage goes down, but the available space stays at 0.

I can touch but not mkdir:
rob@cholera ~ $ mkdir /share/test
mkdir: cannot create directory `/share/test': No space left on device
rob@cholera ~ $ touch /share/test
rob@cholera ~ $ rm /share/test
rob@cholera ~ $
Output from df -h (/dev/md2 is the problem array):
Filesystem            Size  Used Avail Use% Mounted on
/dev/md1               23G   15G  6.1G  72% /
varrun               1008M  328K 1007M   1% /var/run
varlock              1008M     0 1008M   0% /var/lock
udev                 1008M  140K 1008M   1% /dev
devshm               1008M     0 1008M   0% /dev/shm
/dev/md0              183M   43M  131M  25% /boot
/dev/md2              3.6T  3.5T     0 100% /share
and without the -h:
Filesystem           1K-blocks       Used  Available Use% Mounted on
/dev/md1              23261796   15696564    6392900  72% /
varrun                 1031412        328    1031084   1% /var/run
varlock                1031412          0    1031412   0% /var/lock
udev                   1031412        140    1031272   1% /dev
devshm                 1031412          0    1031412   0% /dev/shm
/dev/md0                186555      43532     133391  25% /boot
/dev/md2            3843709832 3705379188          0 100% /share
Everything looks fine with the mdadm array as far as I can tell from
the following:
rob@cholera /share $ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md2 : active raid5 sda4[0] sde4[4] sdd4[3] sdc4[2] sdb4[1]
3874235136 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
md1 : active raid5 sda3[0] sde3[4] sdd3[3] sdc3[2] sdb3[1]
31262208 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
md0 : active raid1 sda1[0] sde1[4](S) sdd1[3] sdc1[2] sdb1[1]
192640 blocks [4/4] [UUUU]
unused devices: <none>
rob@cholera /share $ sudo mdadm -D /dev/md2
/dev/md2:
Version : 00.90.03
Creation Time : Sat May 3 13:45:54 2008
Raid Level : raid5
Array Size : 3874235136 (3694.76 GiB 3967.22 GB)
Used Dev Size : 968558784 (923.69 GiB 991.80 GB)
Raid Devices : 5
Total Devices : 5
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Wed Sep 22 23:16:06 2010
State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 4387b8c0:21551766:ed750333:824b67f8
Events : 0.651050
    Number   Major   Minor   RaidDevice State
       0       8        4        0      active sync   /dev/sda4
       1       8       20        1      active sync   /dev/sdb4
       2       8       36        2      active sync   /dev/sdc4
       3       8       52        3      active sync   /dev/sdd4
       4       8       68        4      active sync   /dev/sde4
rob@cholera /share $ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md0 level=raid1 num-devices=4
   UUID=a761c788:81771ba6:c983b0fe:7dba32e6
ARRAY /dev/md1 level=raid5 num-devices=5
   UUID=291649db:9f874a3c:def17491:656cf263
ARRAY /dev/md2 level=raid5 num-devices=5
   UUID=4387b8c0:21551766:ed750333:824b67f8
# This file was auto-generated on Sun, 04 May 2008 14:57:35 +0000
# by mkconf $Id$
So maybe this is a file system problem rather than an mdadm problem?
Either way, I've been banging my head against a brick wall for a few
weeks and don't know where to go from here, so any advice would be
appreciated.
Thanks
Rob
* Re: mdadm raid5 array - 0 space available but usage is less than capacity
From: Kaizaad Bilimorya @ 2010-09-23 19:53 UTC
To: Robin Doherty; +Cc: linux-raid
On Thu, 23 Sep 2010, Robin Doherty wrote:
> I have a RAID5 array of five 1 TB disks that has worked fine for two
> years, but it now reports 0 space available even though there is
> still space on it. It will let me read but not write. I can delete
> things and the usage goes down, but the available space stays at 0.
> [...]
> /dev/md2            3843709832 3705379188          0 100% /share
Just a shot in the dark, but I have seen this with Lustre systems.
What does "df -i" show?
thanks
-k
* Re: mdadm raid5 array - 0 space available but usage is less than capacity
From: Robin Doherty @ 2010-09-23 20:12 UTC
To: Kaizaad Bilimorya; +Cc: linux-raid
Well, it's an ext3 file system. Here's the output of df -Ti:

Filesystem    Type     Inodes  IUsed      IFree IUse% Mounted on
/dev/md1      ext3    1466368 215121    1251247   15% /
varrun        tmpfs    257853     85     257768    1% /var/run
varlock       tmpfs    257853      2     257851    1% /var/lock
udev          tmpfs    257853   3193     254660    2% /dev
devshm        tmpfs    257853      1     257852    1% /dev/shm
/dev/md0      ext3      48192     38      48154    1% /boot
/dev/md2      ext3  242147328 151281  241996047    1% /share
Cheers
Rob
On 23 September 2010 20:53, Kaizaad Bilimorya <kaizaad@sharcnet.ca> wrote:
> [...]
> Just a shot in the dark, but I have seen this with Lustre systems.
> What does "df -i" show?
>
> thanks
> -k
* Re: mdadm raid5 array - 0 space available but usage is less than capacity
From: Marcus Kool @ 2010-09-23 20:18 UTC
To: Robin Doherty; +Cc: linux-raid
Robin,
this is normal file system behaviour:
ext2/ext3 file systems reserve a percentage of blocks (5% by default)
for the super-user, partly to keep fragmentation down. Once everything
except the reserve is in use, df reports 0 available and 100% use, and
*only* root can write new files into the remaining 5%; regular users
cannot.
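Your numbers fit this exactly. A rough check, assuming the default 5%
reserve (verify the real value with tune2fs -l):

rob@cholera ~ $ echo $((3843709832 / 20))          # 5% of the FS, in 1K blocks (~192 GB)
192185491
rob@cholera ~ $ echo $((3843709832 - 3705379188))  # actually free, in 1K blocks (~138 GB)
138330644

The ~138 GB that is genuinely free is smaller than the ~192 GB
reserve, so df shows 0 available to ordinary users.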
You need to clean up or insert more disks :-)
Marcus
Robin Doherty wrote:
> I have a RAID5 array of five 1 TB disks that has worked fine for two
> years, but it now reports 0 space available even though there is
> still space on it.
> [...]
> So maybe this is a file system problem rather than an mdadm problem?
* Re: mdadm raid5 array - 0 space available but usage is less than capacity
From: Roman Mamedov @ 2010-09-23 20:18 UTC
To: Robin Doherty; +Cc: linux-raid
On Thu, 23 Sep 2010 20:43:44 +0100
Robin Doherty <rdoherty@gmail.com> wrote:
> So maybe this is a file system problem rather than an mdadm problem?
mkfs.ext3 says:
   -m reserved-blocks-percentage
          Specify the percentage of the filesystem blocks reserved for
          the super-user.  This avoids fragmentation, and allows root-
          owned daemons, such as syslogd(8), to continue to function
          correctly after non-privileged processes are prevented from
          writing to the filesystem.  The default percentage is 5%.
This can be changed on an existing FS using tune2fs.
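For example, a minimal sketch against the /dev/md2 from this thread
(check the current value before changing anything; "Reserved block
count" is the relevant superblock field):

rob@cholera ~ $ sudo tune2fs -l /dev/md2 | grep -i reserved
rob@cholera ~ $ sudo tune2fs -m 1 /dev/md2

Dropping the reserve from 5% to 1% on a file system this size would
hand roughly 150 GB back to ordinary users.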
Also, this is not related to mdadm at all.
--
With respect,
Roman
* Re: mdadm raid5 array - 0 space available but usage is less than capacity
From: Robin Doherty @ 2010-09-23 20:22 UTC
To: Roman Mamedov; +Cc: linux-raid
My apologies. Thanks for the responses.
Rob
On 23 September 2010 21:18, Roman Mamedov <roman@rm.pp.ru> wrote:
> [...]
> This can be changed on an existing FS using tune2fs.
>
> Also, this is not related to mdadm at all.