* Spinning down idle disks?
@ 2013-05-26 10:49 Roy Sigurd Karlsbakk
2013-05-27 6:46 ` Vincent Pelletier
From: Roy Sigurd Karlsbakk @ 2013-05-26 10:49 UTC (permalink / raw)
To: linux-raid
Hi all
Is it possible somehow to have linux spin down idle disks in an MD raid so as to use MD for a MAID (massive array of idle disks)? I tried to monitor an idle raid with blktrace, and it seems the array (and its members) is accessed every two seconds for some reason. The array used for the testing is an idle, degraded raid-5.
roy
root@Mathsterk:~# blktrace -d /dev/md0 -o - | blkparse -i -
9,0 2 1 0.000000000 26793 A W 5368711168 + 8 <- (252,2) 0
9,0 2 2 0.000000879 26793 Q W 5368711168 + 8 [(null)]
9,0 1 3 0.020063084 372 C W 5368711168 [0]
9,0 2 3 0.020113389 26793 A WBS 0 + 0 <- (252,2) 0
9,0 2 4 0.020114118 26793 Q WBS [kworker/u:1]
9,0 0 1 1.999791074 6948 A WBS 0 + 0 <- (252,2) 0
9,0 0 2 1.999792301 6948 Q WBS [kworker/u:2]
9,0 0 3 2.000000284 6948 A W 5368711168 + 8 <- (252,2) 0
9,0 0 4 2.000001050 6948 Q W 5368711168 + 8 [kworker/u:2]
9,0 0 5 2.020065947 6948 A WBS 0 + 0 <- (252,2) 0
9,0 0 6 2.020066632 6948 Q WBS [kworker/u:2]
9,0 1 4 2.020018632 372 C W 5368711168 [0]
9,0 0 7 3.999790266 26793 A WBS 0 + 0 <- (252,2) 0
9,0 0 8 3.999791552 26793 Q WBS [kworker/u:1]
9,0 0 9 4.000000435 26793 A W 5368711168 + 8 <- (252,2) 0
9,0 0 10 4.000001135 26793 Q W 5368711168 + 8 [kworker/u:1]
9,0 0 11 4.020027371 26793 A WBS 0 + 0 <- (252,2) 0
9,0 0 12 4.020028051 26793 Q WBS [kworker/u:1]
9,0 1 5 4.019973885 372 C W 5368711168 [0]
9,0 0 13 5.999788009 6948 A WBS 0 + 0 <- (252,2) 0
9,0 0 14 5.999789265 6948 Q WBS [kworker/u:2]
9,0 0 15 5.999999943 6948 A W 5368711168 + 8 <- (252,2) 0
9,0 0 16 6.000000676 6948 Q W 5368711168 + 8 [kworker/u:2]
9,0 0 17 6.019936327 6948 A WBS 0 + 0 <- (252,2) 0
9,0 0 18 6.019937059 6948 Q WBS [kworker/u:2]
9,0 1 6 6.019883619 372 C W 5368711168 [0]
9,0 2 5 8.019890546 372 C W 5368711168 [0]
9,0 0 19 7.999785408 26793 A WBS 0 + 0 <- (252,2) 0
9,0 0 20 7.999786808 26793 Q WBS [kworker/u:1]
9,0 0 21 7.999997589 26793 A W 5368711168 + 8 <- (252,2) 0
9,0 0 22 7.999998282 26793 Q W 5368711168 + 8 [kworker/u:1]
9,0 0 23 8.019948314 26793 A WBS 0 + 0 <- (252,2) 0
9,0 0 24 8.019949033 26793 Q WBS [kworker/u:1]
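Setting the list question aside, the drive-side half of a MAID setup can be sketched from the shell: standby timers can be set per member device with hdparm. This is only an illustrative sketch (the device name is a placeholder, and it does nothing about the periodic writes shown in the trace above); hdparm's -S values 1..240 encode the timeout in units of 5 seconds.

```shell
# Convert a timeout in minutes to hdparm's -S encoding (units of 5 seconds,
# valid for -S values 1..240; larger values use a different encoding).
minutes_to_hdparm_s() {
  echo $(( $1 * 60 / 5 ))
}
minutes_to_hdparm_s 10   # prints 120

# On real hardware (needs root; /dev/sda is a placeholder member device):
# hdparm -S "$(minutes_to_hdparm_s 10)" /dev/sda   # standby after 10 min idle
# hdparm -y /dev/sda                               # force standby right now
```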
--
Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 98013356
roy@karlsbakk.net
http://blogg.karlsbakk.net/
GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is an elementary imperative for all pedagogues to avoid excessive use of idioms with xenotypic etymology. In most cases, adequate and relevant synonyms exist in Norwegian.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: Spinning down idle disks?
2013-05-26 10:49 Spinning down idle disks? Roy Sigurd Karlsbakk
@ 2013-05-27 6:46 ` Vincent Pelletier
From: Vincent Pelletier @ 2013-05-27 6:46 UTC (permalink / raw)
To: Roy Sigurd Karlsbakk; +Cc: linux-raid
On Sunday, 26 May 2013 at 12:49:54, Roy Sigurd Karlsbakk wrote:
> Is it possible somehow to have linux spin down idle disks in an MD raid so
> as to use MD for a MAID (massive array of idle disks)? I tried to monitor
> an idle raid with blktrace, and it seems the array (and its members) is
> accessed every two seconds for some reason. The array used for the testing
> is an idle, degraded raid-5.
FWIW, there is a small daemon that spins disks down in a - to me - clever way:
only reads from the device reset the spindown timeout; writes are kept in cache
until an explicit flush happens.
http://noflushd.sourceforge.net/
This allows skipping superblock refreshes (both FS- and MD-level).
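The "reads count as activity, writes wait in cache" idea can be approximated by watching the per-device read counter in /proc/diskstats (field 4 is "reads completed"). A rough sketch, demonstrated here against a fabricated stats line rather than a real device:

```shell
# Print the "reads completed" counter (4th field) for a named device.
# Defaults to /proc/diskstats; an alternate file can be passed for testing.
reads_completed() {
  awk -v dev="$1" '$3 == dev { print $4 }' "${2:-/proc/diskstats}"
}

# Fabricated diskstats line for demonstration (sda, 1234 reads completed):
f=$(mktemp)
printf '   8       0 sda 1234 0 5678 0 42 0 0 0 0 0 0\n' > "$f"
reads_completed sda "$f"   # prints 1234
```

A daemon would poll this counter and reset its spindown timer only when the read count changes, ignoring write activity.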
It has a large drawback when used with MD (and other composite devices): it
doesn't look at slave devices, so if you tell it to control sda and sdb and
you have md0 on top of both, it will eventually spin down sda and sdb, only to
flush md0 itself shortly after, spinning both up again.
I've implemented a quick-hack workaround for this:
https://github.com/vpelletier/pynoflushd
Both implementations have the drawback of increasing the frequency of writes to
the actual disk: as the daemon takes over dirty_writeback_centisecs's job using
userspace-available flush methods (mine using the BLKFLSBUF ioctl, the original
using fsync on the block device), something gets flushed which usually would
not be (I wandered a bit in the kernel code without finding out how the
writeback code handles this timeout).
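For reference, the request number involved can be derived by hand: BLKFLSBUF is _IO(0x12, 97) in the kernel headers, and a plain _IO sets no size or direction bits, so it is just the type byte shifted left of the number:

```shell
# _IO(type, nr) == (type << 8) | nr for a plain ioctl with no payload.
BLKFLSBUF=$(( (0x12 << 8) | 97 ))
printf '0x%x\n' "$BLKFLSBUF"   # prints 0x1261

# util-linux exposes the same request from the shell (needs root, real device):
# blockdev --flushbufs /dev/md0
```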
I think it would be nice to have an equivalent of dirty_writeback_centisecs at
device granularity, so that one doesn't have to delegate flushing to a
userspace daemon.
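The existing knob is global and expressed in centiseconds (the kernel default is 500, i.e. 5 seconds); a per-device version would presumably keep the same unit:

```shell
# The global interval is in centiseconds; the default of 500 means 5 seconds.
centisecs_to_secs() { echo $(( $1 / 100 )); }
centisecs_to_secs 500   # prints 5

# Read the live global value (Linux only, commented out here):
# sysctl vm.dirty_writeback_centisecs
```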
Regards,
--
Vincent Pelletier