* xosview
@ 2016-07-29 16:52 Anthony Youngman
  2016-07-29 23:23 ` xosview Glenn Enright
  2016-08-01 18:50 ` xosview Bill Hudacek
  0 siblings, 2 replies; 4+ messages in thread
From: Anthony Youngman @ 2016-07-29 16:52 UTC (permalink / raw)
  To: mdraid; +Cc: mike.romberg

I don't know how many of you use this ancient but nifty utility, but
I've been using it for as long as I can remember. Unfortunately, the
raid monitor code no longer works ... :-(

I emailed the maintainer privately. He was pleased that I'd got in
touch, even though it was with bad news, and he'd like to fix it - but
he has no raid system to test it on.

It works by parsing /proc/mdstat. From what I can see of the code, it
would be very easy to test by pointing it at a fake mdstat. I've got
three arrays on a two-disk mirror, so that setup is easy for me to
test, but I'd like to test it against others.
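
Something like this rough, untested Python sketch is what I have in
mind - it's not xosview's actual C++ code, and all the names are
invented for illustration. The point is simply that the parser takes a
path, so a saved copy of someone else's mdstat can stand in for
/proc/mdstat:

import re

def parse_mdstat(path="/proc/mdstat"):
    """Very rough mdstat parser (assumes active arrays):
    array name -> level, devices, health."""
    arrays = {}
    current = None
    with open(path) as f:
        for line in f:
            # Array header, e.g. "md0 : active raid1 sda1[0] sdb1[1]";
            # a trailing (S) or (F) marks a spare or failed device.
            m = re.match(r"^(md\S*)\s*:\s*\w+\s+(\S+)\s+(.*)$", line)
            if m:
                current = m.group(1)
                devs = [(name, flag or "") for name, flag in
                        re.findall(r"(\w+)\[\d+\](\(\w\))?", m.group(3))]
                arrays[current] = {"level": m.group(2), "devices": devs,
                                   "up": None, "total": None}
                continue
            # Status line, e.g. "4194240 blocks [2/2] [UU]"
            m = re.search(r"\[(\d+)/(\d+)\]\s+\[([U_]+)\]", line)
            if m and current:
                arrays[current]["total"] = int(m.group(1))
                arrays[current]["up"] = int(m.group(2))
    return arrays

if __name__ == "__main__":
    import pprint, sys
    pprint.pprint(parse_mdstat(sys.argv[1] if len(sys.argv) > 1
                               else "/proc/mdstat"))

Run it as "python3 parse_mdstat.py saved-mdstat.txt" to test against a
saved copy instead of the live /proc/mdstat.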

So if people wouldn't mind, could you email your mdstat files? 
Preferably on the list so people can see what has and has not been sent 
- obviously I'd like standard setups like raid10, raid5, raid6, both 
named and numbered. And if people have them, mdstats showing broken 
arrays, rebuilds, complicated setups with lvm, etc.

Dunno about other people, but I have xosview running on my desktop all
the time (I feel naked without it :-), so if it can give me a
continuous monitor of my raid state, that's great. And hopefully, if
other people use it, we might reduce the number of "I didn't realise
my raid was degraded, and then it suffered another failure" emails.

Cheers,
Wol


* Re: xosview
  2016-07-29 16:52 xosview Anthony Youngman
@ 2016-07-29 23:23 ` Glenn Enright
  2016-08-01 18:50 ` xosview Bill Hudacek
  1 sibling, 0 replies; 4+ messages in thread
From: Glenn Enright @ 2016-07-29 23:23 UTC (permalink / raw)
  To: Anthony Youngman; +Cc: mdraid, mike.romberg

On 30 July 2016 at 04:52, Anthony Youngman <antlists@youngman.org.uk> wrote:
>
> So if people wouldn't mind, could you email your mdstat files? Preferably on the list so people can see what has and has not been sent - obviously I'd like standard setups like raid10, raid5, raid6, both named and numbered. And if people have them, mdstats showing broken arrays, rebuilds, complicated setups with lvm, etc.


There are a number of good examples on
https://raid.wiki.kernel.org/index.php/Mdstat

This is from my desktop (boot, root and swap):

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb2[1] sda2[0]
      4194240 blocks [2/2] [UU]

md2 : active raid1 sdb3[1] sda3[0]
      62914496 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      204736 blocks [2/2] [UU]

unused devices: <none>

Best
--Glenn


* Re: xosview
  2016-07-29 16:52 xosview Anthony Youngman
  2016-07-29 23:23 ` xosview Glenn Enright
@ 2016-08-01 18:50 ` Bill Hudacek
  2016-08-01 19:25   ` xosview Wols Lists
  1 sibling, 1 reply; 4+ messages in thread
From: Bill Hudacek @ 2016-08-01 18:50 UTC (permalink / raw)
  To: mdraid

Anthony Youngman wrote on 07/29/2016 12:52 PM:
> So if people wouldn't mind, could you email your mdstat files?
> Preferably on the list so people can see what has and has not been sent
> - obviously I'd like standard setups like raid10, raid5, raid6, both
> named and numbered. And if people have them, mdstats showing broken
> arrays, rebuilds, complicated setups with lvm, etc.
>

RAID 6 across 5 disks, of which 1 is a spare (in external cabinet), 
and two disks in RAID-1 (for OS, inside the tower):

$ cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid6 sdc1[0] sdf1[3] sdd1[1] sdg1[4](S) sde1[2]
       3071737856 blocks super 1.2 level 6, 1024k chunk, algorithm 2 [4/4] [UUUU]
       bitmap: 0/12 pages [0KB], 65536KB chunk

md126 : active raid1 sdb1[1] sda1[0]
       2099136 blocks super 1.0 [2/2] [UU]
       bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid1 sdb2[1] sda2[0]
       234921984 blocks super 1.2 [2/2] [UU]
       bitmap: 1/2 pages [4KB], 65536KB chunk

unused devices: <none>

I don't have any failure mdstat output saved, sorry...

-- 
/Bill



* Re: xosview
  2016-08-01 18:50 ` xosview Bill Hudacek
@ 2016-08-01 19:25   ` Wols Lists
  0 siblings, 0 replies; 4+ messages in thread
From: Wols Lists @ 2016-08-01 19:25 UTC (permalink / raw)
  To: mdraid

Thanks very much to all who replied :-)

The author is planning a rewrite based on /sys, however, so mdstats
are probably no longer required ... the original raid code was
donated, and seems to have bit-rotted :-(
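
From what I understand of sysfs, everything a monitor needs is exposed
as little one-value-per-file attributes under /sys/block/mdX/md/.
Here's a rough, untested Python sketch; the attribute names
(array_state, degraded, sync_action, sync_completed) are the ones the
md documentation describes, but treat the details as my assumption,
not a preview of the author's code:

import glob
import os

def md_arrays_from_sys():
    """Yield (name, state, degraded, sync_action, sync_completed)."""
    for md in sorted(glob.glob("/sys/block/md*/md")):
        name = md.split(os.sep)[3]            # e.g. "md0"

        def read(attr, default=""):
            try:
                with open(os.path.join(md, attr)) as f:
                    return f.read().strip()
            except OSError:                   # attribute not present
                return default

        yield (name,
               read("array_state"),           # "clean", "active", ...
               read("degraded") == "1",       # any members missing?
               read("sync_action", "idle"),   # "idle", "resync", "recover"...
               read("sync_completed"))        # "12345 / 99999" or "none"

if __name__ == "__main__":
    for row in md_arrays_from_sys():
        print(row)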

What I've said I'd like to see is the raid name and status - healthy,
rebuilding or degraded - on the left, with the constituent
drives/partitions listed on the right over colour bars indicating
their status (live, rebuilding, failed or spare). The author would
also like to add a progress bar showing the state of any rebuild.

So a healthy raid would show (in the default colours) green, with
maybe blue for spare drives; a degraded array would show red, and a
rebuilding array yellow.
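
In other words, something like this colour logic - again an untested
Python sketch with made-up names, and on the assumption (per the md
docs, as I read them) that per-device state comes from the
comma-separated flags in /sys/block/mdX/md/dev-*/state:

GREEN, BLUE, YELLOW, RED = "green", "blue", "yellow", "red"

def array_colour(degraded, sync_action):
    """Overall colour for one array's meter."""
    if sync_action in ("resync", "recover", "reshape"):
        return YELLOW             # rebuild (or reshape) in progress
    if degraded:
        return RED                # degraded and nothing rebuilding
    return GREEN

def device_colour(dev_state):
    """Per-device colour from the dev-*/state flag string."""
    if "faulty" in dev_state:
        return RED
    if "spare" in dev_state:
        return BLUE
    if "in_sync" not in dev_state:
        return YELLOW             # still being rebuilt into the array
    return GREEN

def rebuild_fraction(sync_completed):
    """Progress for the rebuild bar (0.0 - 1.0) from md/sync_completed."""
    try:
        done, total = (int(x) for x in sync_completed.split("/"))
        return done / total
    except (ValueError, ZeroDivisionError):
        return 0.0                # "none", or nothing in progress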

Anybody have any other ideas?

I know I'm bad at checking my raid status, and a lot of people
probably let the default install set up raid on a desktop without
configuring notification etc. It's just lovely to have a little
utility like xosview that can sit in the background on your desktop
keeping an eye on things, and that shows up instantly when things
start going wrong. It already keeps an eye on my cpus, memory, swap
space and i/o - it could probably keep an eye on more ...

Cheers,
Wol


