* managing raid on linux
From: devzero @ 2008-04-20 21:51 UTC
To: linux-raid
Hello!
Since we use lots of servers with RAID where I work, for every system I
need to go through the hassle of finding out how to monitor the state of
the RAID array.
If you have more than one brand of RAID controller, you need a large
amount of time to find the proper tool for this; and when you think you
have found it, the link is broken, it's the wrong version, the tool is
outdated, it doesn't work with recent controller revisions, or it is
tricky to set up or difficult to use.
This takes a lot of time and is a major annoyance.
Isn't there a Linux project addressing this? Some site for the sysadmin
to consult? "I have RAID controller xyz, what do I need to monitor the
array's state?"
I would expect the Linux kernel to provide a standardized way to check
the health state of a RAID array, i.e. this should be done completely in
kernel space, as some RAID drivers do.
Instead I need to use a dozen different tools, which are often closed
source, too.
Does anybody else suffer from these headaches?
Regards,
Roland
* Re: managing raid on linux
From: Bill Davidsen @ 2008-04-23 15:00 UTC
To: devzero; +Cc: linux-raid
devzero@web.de wrote:
> Hello!
>
> Since we use lots of servers with RAID where I work, for every system I
> need to go through the hassle of finding out how to monitor the state
> of the RAID array.
>
> If you have more than one brand of RAID controller, you need a large
> amount of time to find the proper tool for this; and when you think you
> have found it, the link is broken, it's the wrong version, the tool is
> outdated, it doesn't work with recent controller revisions, or it is
> tricky to set up or difficult to use.
>
> This takes a lot of time and is a major annoyance.
>
> Isn't there a Linux project addressing this? Some site for the sysadmin
> to consult? "I have RAID controller xyz, what do I need to monitor the
> array's state?"
>
> I would expect the Linux kernel to provide a standardized way to check
> the health state of a RAID array, i.e. this should be done completely
> in kernel space, as some RAID drivers do.
>
> Instead I need to use a dozen different tools, which are often closed
> source, too.
>
> Does anybody else suffer from these headaches?
>
You have that option: set your controllers to JBOD and use software
RAID. Most people don't play "flavor of the month" with hardware, and
those of us who let purchasing alter hardware specs to "save money" use
software RAID.
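For md specifically, the kernel already exposes array state through
/proc/mdstat and sysfs, so a health check is a few lines of script;
mdadm --monitor can also run as a daemon and send mail. A minimal
sketch in Python, assuming the standard /proc layout:

    #!/usr/bin/env python
    # Minimal sketch: flag md arrays with missing or failed members by
    # reading /proc/mdstat. No vendor tools needed; run it from cron.
    import re
    import sys

    def degraded_arrays(path="/proc/mdstat"):
        bad, current = [], None
        status_re = re.compile(r"\[\d+/\d+\]\s+\[([U_]+)\]")
        with open(path) as f:
            for line in f:
                m = re.match(r"(md\S+)\s*:", line)
                if m:
                    current = m.group(1)
                    continue
                m = status_re.search(line)
                if m and current and "_" in m.group(1):
                    # "_" in the [UU_] map means a member is missing/failed
                    bad.append(current)
        return bad

    if __name__ == "__main__":
        bad = degraded_arrays()
        for name in bad:
            print("DEGRADED: %s" % name)
        sys.exit(1 if bad else 0)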
--
Bill Davidsen <davidsen@tmr.com>
"Woe unto the statesman who makes war without a reason that will still
be valid when the war is over..." Otto von Bismarck
* Re: managing raid on linux
From: Ty! Boyack @ 2008-04-23 15:55 UTC
To: linux-raid
I'm not quite "hardware flavor of the month" but I still feel like I've
got all 31 flavors around the shop.
Something you might look into is SMI-S (Storage Management Initiative -
Specification), which is being promoted by SNIA (the Storage Networking
Industry Association):
http://www.snia.org/tech_activities/standards/curr_standards/smi/
The goal is to provide a single management protocol/API for any
traditional storage hardware (including arrays, switches, NAS devices,
tape libraries, etc.). As I understand it, there are hooks for the
standard operations and monitoring, and extensibility for specialized
devices.
While it has been targeted at SAN hardware, it would seem that the array
management features could be used on their own in your case, if your
arrays support this. There are some open source projects working on
providing a management tool for these devices. It's been on my horizon
for a while, but I have not had the chance to really look into it in a
practical sense. So I don't know if it is exactly what you might need,
but it could be worth exploring.
(I know it's WAY outside of the kernel space tools you mentioned, but it
is a mature option)
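To give a feel for what talking to an SMI-S provider can look like,
here is a rough sketch using the open-source pywbem library. The host,
credentials, and namespace are placeholders, and the classes and
namespaces a given array actually exposes vary by vendor, so treat it
only as a starting point:

    #!/usr/bin/env python
    # Rough sketch: query an SMI-S (CIM/WBEM) provider for volume health.
    # Host, credentials, and namespace are placeholders; check the array's
    # documentation for its interop namespace and supported classes.
    import pywbem

    conn = pywbem.WBEMConnection("https://array.example.com:5989",
                                 ("monitor", "secret"),
                                 default_namespace="root/cimv2")

    # OperationalStatus is an array of DMTF codes: 2 = OK, 3 = Degraded,
    # 6 = Error, and so on.
    for vol in conn.EnumerateInstances("CIM_StorageVolume"):
        print(vol["ElementName"], list(vol["OperationalStatus"]))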
-Ty!
Bill Davidsen wrote:
> You have that option: set your controllers to JBOD and use software
> RAID. Most people don't play "flavor of the month" with hardware, and
> those of us who let purchasing alter hardware specs to "save money"
> use software RAID.
--
-===========================-
Ty! Boyack
NREL Unix Network Manager
ty@nrel.colostate.edu
(970) 491-1186
-===========================-
* RE: managing raid on Linux
From: David Lethe @ 2008-04-23 18:17 UTC
To: Ty! Boyack, linux-raid
Here are the dirty-little-secret reasons why an uber-manager does not,
and will not, exist (including the reason why SMI-S is basically limited
to big-box, high-dollar external RAID).
* Several such products were proposed and died either early or deep into
development. DDF, for example, was to be a common API and Disk Data
Format (hence DDF) that a large number of vendors got behind, including
Adaptec, where I worked at the time. We recognized that you couldn't
have consistent monitoring/diagnostics without a common on-disk layout.
DDF would have let you take disks with live data and RAID5/RAID6
structures and plug them into any DDF-compliant adapter. It was a
failure, in my opinion, due to politics among the manufacturers. Not
even Dell and Intel, who were the biggest proponents, could entice the
RAID manufacturers to come together.
* There are some ANSI specs on the T10.org site related to common APIs,
as well as vendors such as Chaparral (now Dot Hill) that published
extremely open APIs. No other vendors adopted them. (I can't comment
on whether the reasons were political or technical ... but as an
engineer who has been involved with controller development and has
analyzed the suitability of the published APIs, I speak for myself when
I say they were unsuitable for the controller architecture I was working
on at the time.)
* SMI-S is a huge pig from the perspective of PC-based RAID controllers,
and it adds significant cost to the controllers in terms of the
necessary CPU power, memory, more sophisticated threading, and all the
integration costs. That is why you'll likely never see SMI-S embedded
in anything that physically plugs into a PCI slot and doesn't cost
thousands of dollars. Aside from the increased hardware cost, consider
the additional RFI, BTUs, and real estate on the board. It isn't
practical.
* One *could* write a general SMI-S agent, but that has a whole other
set of problems I don't need to go into. That's why you don't see SMI-S
agents for all, or perhaps any, of the RAID controllers the OP has.
* As for an uber-manager that covers the common internal cards, i.e.,
Adaptec, LSI, Infortrend, HP, Dell, AMCC/3ware, Promise, HighPoint,
Intel RAID, etc. ... I am aware of one, but it will only support a
subset of these cards. As my company does just what the OP asked for,
here are some reasons why WE don't support everything:
- The APIs for all of these are closed source, so you are limited to
compiled software, which imposes limitations on kernels, platforms,
32/64-bit, etc. A vendor who goes down this path also has to be a
software-only vendor and go through NDAs. Manufacturer "A" isn't going
to give an API to a vendor that potentially sells product from one of
their competitors, so even obtaining the APIs can be difficult.
- Some of the vendor APIs provide source code; others provide object
code and headers that have to be linked into executables. This imposes
additional constraints.
- Firmware, driver, and API updates can break the management software,
so it is a constant battle ... and don't get me started on some changes
to the Linux kernel, made by people reading this list, that break RAID
drivers and management software :)
- Some management functions can block the RAID controller, or can
conflict with currently installed management daemons, drivers, or
application software. The third-party developer and the manufacturer
have to have a clear understanding of the rules around such conflicts.
- Some HBAs (surprisingly) have incredibly weak or limited APIs, and
there isn't much you can do with them. You also have to be really
careful not to use the API to send commands that could break the API,
crash the system, or lock up the RAID.
- Sometimes the RAID vendor doesn't have a published API, or the
documentation is wrong, so the manufacturer has to burn engineering and
FAE time to work with the software vendor, to say nothing of the extra
work for the software vendor.
- Most of these manufacturers have "specials" for OEMs where they tweak
the firmware or change the identification strings, so there are multiple
flavors of the same thing. The API may or may not work, or may have
bugs related to this. Sometimes the OEM doesn't even know about the API
or the compatibility issues, or they farmed out the software, and this
makes it difficult, and sometimes not worth the effort, for a software
vendor to bother with a particular controller.
Well, this is the norm. While some APIs are well done, at the other
extreme vendor xyz farmed out the API/drivers to another company, and
the hardware vendor doesn't have the resources, knowledge, or experience
to provide the information necessary to bring a third-party management
tool to market. I will certainly not reveal anything about the
companies that did farm out their software/firmware to third parties,
and will not tell you whether their names were mentioned in this post.
Suffice it to say that this is a real problem with SOME controllers that
people reading this post have installed in their computers.
- The economics of developing support for a particular controller are a
severe constraint. Consider the reasons above, then add the simple
equipment costs for development and testing, and then support. I'm not
going to add support for product xyz unless I know it will be
profitable, and even a hundred end-user requests won't begin to pay for
such an effort.
- I for one certainly can't keep up with all the new things coming out,
so we choose our battles wisely, based on the market opportunity, the
longevity of the controller platform, the safety/robustness of the API,
and the development and ongoing support expense.
So those are the reasons why the OP can't find anything that supports
everything he has.
Suggestions for the OP:
- awk, /proc, vendor-specific command-line utilities, and a big fat
shell or Perl script (a rough sketch follows).
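To make that concrete, here is the kind of glue I mean; the command
lines and the "unhealthy" strings below are examples only, and you will
need to substitute the utilities your particular controllers ship with
and verify the strings against their real output:

    #!/usr/bin/env python
    # Sketch of a per-host RAID check that shells out to whichever tool
    # applies. Commands and match strings are examples to be adapted;
    # most of these utilities need root.
    import subprocess
    import sys

    CHECKS = [
        # (label, command, string whose presence in the output means trouble)
        ("md",        ["mdadm", "--detail", "/dev/md0"], "degraded"),
        ("3ware c0",  ["tw_cli", "/c0", "show"],         "DEGRADED"),
        ("disk sda",  ["smartctl", "-H", "/dev/sda"],    "FAILED"),
    ]

    failed = []
    for label, cmd, bad_string in CHECKS:
        try:
            out = subprocess.run(cmd, capture_output=True, text=True,
                                 timeout=60).stdout
        except (OSError, subprocess.TimeoutExpired):
            continue  # tool not installed or hung on this host; skip it
        if bad_string in out:
            failed.append(label)

    for label in failed:
        print("RAID CHECK FAILED: %s" % label)
    sys.exit(1 if failed else 0)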
David @ SANtools ^ com
* RE: managing raid on Linux
From: David Lethe @ 2008-04-23 19:21 UTC
To: Keld Jørn Simonsen; +Cc: Ty! Boyack, linux-raid
-----Original Message-----
From: Keld Jørn Simonsen [mailto:keld@dkuug.dk]
Sent: Wednesday, April 23, 2008 1:47 PM
To: David Lethe
Cc: Ty! Boyack; linux-raid@vger.kernel.org
Subject: Re: managing raid on Linux
David, thanks for your explanation of the current situation for making
an übermanager for HW RAID. The prospects seem dark indeed.
Is there then more of a future in doing it the SW RAID way, with the
Linux kernel code we are discussing here on this mailing list?
Is there a chance that this work on Linux RAID can match, or better, the
best of the HW RAID manufacturers on performance, RAID personality
flavours, and management?
Best regards
keld
============================================
Keld -
Well, I don't want to insult the outstanding work that everybody has put into Linux md/raid ... but from my perspective, coming from the hardware RAID and software appliance world, it is a mixed bag.
Pure software RAID, as designed in the Linux kernel, is superior in potential performance to many internal RAID controllers. I would break NDAs if I gave specific reasons, but here is one that is inherent to the difference in architectures.
Consider that the fastest I/O is the one that doesn't have to be performed: kernel-based software RAID has the luxury of not performing an I/O at all if it knows the data already exists in the CPU's L1/L2 cache or in RAM. There an I/O can be "satisfied" in nanoseconds.
If I have to get the data from a card on a bus, or from external storage, it will take milliseconds, even with solid-state disks attached: bus and protocol limitations for one, plus the speed of light is just too darned slow.
But hardware RAID has other things going for it: snapshots, compression, and even performance, depending on the I/O requirements. You can offload parity calculations and smooth and balance I/O easily in some configurations.
No architecture is "best" in all cases, even if price were taken out of the picture.
In terms of reliability, availability, and clustering capability, Linux needs a major redesign in both architecture and design philosophy. I'm not trying to shove ZFS down anybody's throat, but read up on its specs, architecture, and features and then decide for yourself. There are licensing reasons why ZFS isn't available in the kernel today, and philosophical/legal/logistical (but non-technical) reasons holding software-based RAID in Linux back.
As for management: sorry, the economics don't make it possible for Linux to compete, except in point solutions ... not when you have EMC, CA, HP, IBM, MSFT, and all the rest spending billions of dollars on programmers writing management software.
David @ SANtools ^ com
* Re: managing raid on Linux
From: Bill Davidsen @ 2008-04-26 19:39 UTC
To: David Lethe; +Cc: Keld Jørn Simonsen, Ty! Boyack, linux-raid
David Lethe wrote:
> -----Original Message-----
> From: Keld Jørn Simonsen [mailto:keld@dkuug.dk]
> Sent: Wednesday, April 23, 2008 1:47 PM
> To: David Lethe
> Cc: Ty! Boyack; linux-raid@vger.kernel.org
> Subject: Re: managing raid on Linux
>
> David, thanks for your explanation of the current situation for making
> an übermanager for HW RAID. The prospects seem dark indeed.
>
If there were an ubermanager, then products could compete only on price
and performance. Now they can compete on ease of use as well, and that's
not all bad for the customer. Corporations (usually) try to skimp on
system management, and often ease of use is the difference between
mediocre management and totally incompetent management.
> Is there then more of a future in doing it the SW RAID way, with the
> Linux kernel code we are discussing here on this mailing list?
>
The advantage is in the portability; you need not spend top dollar on
hardware to get quite good results.
> Is there a chance that this work on Linux RAID can match, or better,
> the best of the HW RAID manufacturers on performance, RAID personality
> flavours, and management?
>
No, not the best. The reason is that RAID-N, where N>0, writes recovery
information, either mirrors or parity or similar. Software RAID must
issue these as multiple writes, opening a window for consistency
problems and always taking more bus bandwidth. So better than the best
dedicated hardware it is not, and can't be. As good as the "very good"
hardware solutions, yes. And when it comes to flexibility, expanding
with whatever hardware is cost-effective when you need it, SW RAID wins
every time.
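To put rough numbers on the bus-bandwidth point, count the host-side
I/Os for one small random write on the read-modify-write path; a
hardware controller accepts a single write from the host and does the
member I/O behind its own cache. A back-of-the-envelope sketch:

    # Host-side I/Os for ONE small random write (read-modify-write path).
    # Illustrative only: full-stripe writes and controller caches change it.
    HOST_IOS = [
        ("single disk", 1),
        ("md raid1",    2),  # write both mirrors
        ("md raid5",    4),  # read old data + parity, write new data + parity
        ("md raid6",    6),  # read data + two parities, write all three
        ("hw raid5",    1),  # host issues one write; controller does the rest
    ]
    for layout, ios in HOST_IOS:
        print("%-12s %d host I/O(s)" % (layout, ios))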
--
Bill Davidsen <davidsen@tmr.com>
"Woe unto the statesman who makes war without a reason that will still
be valid when the war is over..." Otto von Bismarck