linux-raid.vger.kernel.org archive mirror
* Veritas Volume Manager
@ 2003-05-14  8:52 John Finlay
  0 siblings, 0 replies; 8+ messages in thread
From: John Finlay @ 2003-05-14  8:52 UTC (permalink / raw)
  To: linux-raid

Is the Veritas Volume Manager available for Linux? If so, has anyone 
tried it, and does anyone have comments about it?

Thanks

John



* RE: Veritas Volume Manager
@ 2003-05-14 11:48 Buechler, Mark R
  2003-05-14 17:19 ` John Finlay
  0 siblings, 1 reply; 8+ messages in thread
From: Buechler, Mark R @ 2003-05-14 11:48 UTC (permalink / raw)
  To: 'John Finlay', 'linux-raid@vger.kernel.org'

Yes, it is. However, it is very kernel-version-specific and provides
very little benefit over Linux LVM/MD for the price.

-----Original Message-----
From: linux-raid-owner@vger.kernel.org
[mailto:linux-raid-owner@vger.kernel.org]On Behalf Of John Finlay
Sent: Wednesday, May 14, 2003 4:53 AM
To: linux-raid@vger.kernel.org
Subject: Veritas Volume Manager


Is the Veritas Volume Manager available for Linux? If so, has anyone 
tried it, and does anyone have comments about it?

Thanks

John


* Re: Veritas Volume Manager
  2003-05-14 11:48 Veritas Volume Manager Buechler, Mark R
@ 2003-05-14 17:19 ` John Finlay
  2003-05-14 19:46   ` Jose Luis Domingo Lopez
  0 siblings, 1 reply; 8+ messages in thread
From: John Finlay @ 2003-05-14 17:19 UTC (permalink / raw)
  To: Buechler, Mark R; +Cc: 'linux-raid@vger.kernel.org'

Hi Mark,

Thanks for the info. My impression is that the Veritas VM has a lot more 
features than MD/LVM including management tools, on-line 
reconfiguration, etc. These seem like valuable features. What's the cost 
of VVM?

Thanks

John

Buechler, Mark R wrote:

>Yes, it is. However, it is very kernel-version-specific and provides
>very little benefit over Linux LVM/MD for the price.
>
>-----Original Message-----
>From: linux-raid-owner@vger.kernel.org
>[mailto:linux-raid-owner@vger.kernel.org]On Behalf Of John Finlay
>Sent: Wednesday, May 14, 2003 4:53 AM
>To: linux-raid@vger.kernel.org
>Subject: Veritas Volume Manager
>
>
>Is the Veritas Volume Manager available for Linux? If so, has anyone 
>tried it, and does anyone have comments about it?

* Re: Veritas Volume Manager
  2003-05-14 17:19 ` John Finlay
@ 2003-05-14 19:46   ` Jose Luis Domingo Lopez
  2003-05-14 20:08     ` John Finlay
  0 siblings, 1 reply; 8+ messages in thread
From: Jose Luis Domingo Lopez @ 2003-05-14 19:46 UTC (permalink / raw)
  To: linux-raid

On Wednesday, 14 May 2003, at 10:19:25 -0700,
John Finlay wrote:

> Thanks for the info. My impression is that the Veritas VM has a lot more 
> features than MD/LVM including management tools, on-line 
> reconfiguration, etc. These seem like valuable features. What's the cost 
> of VVM?
> 
You can (and should) always go to the vendor's website and look there,
because this list is not about Veritas products and/or marketing/sales.

-- 
Jose Luis Domingo Lopez
Linux Registered User #189436     Debian Linux Sid (Linux 2.5.69)


* Re: Veritas Volume Manager
  2003-05-14 19:46   ` Jose Luis Domingo Lopez
@ 2003-05-14 20:08     ` John Finlay
  2003-05-14 20:27       ` John DeFranco
  0 siblings, 1 reply; 8+ messages in thread
From: John Finlay @ 2003-05-14 20:08 UTC (permalink / raw)
  To: Jose Luis Domingo Lopez; +Cc: linux-raid

Hi Jose,

I am assuming that this list is about Linux RAID and that the Veritas 
products are RAID products on Linux; I also assume the list is not 
restricted to discussions of free RAID implementations. If I wanted 
marketing fluff I could get it from the Veritas site, but I'm interested 
in why real people would, or would not, use Veritas on Linux. So far the 
cons seem to be the lack of broad support for various Linux kernels and 
the cost (which is not on the web site), and the pro is a professional, 
mature, stable product. The current crop of free Linux RAID tools (MD 
and LVM) seems really primitive and inflexible, and having used Veritas 
in a previous life on Solaris (where it's really expensive), I was 
wondering whether the Linux version is as good and whether people felt 
it was worth spending the money to get the Veritas features. Of course, 
it's possible that Veritas is overkill for the seemingly limited 
capabilities and applications of current Linux PC-type systems, and the 
Veritas products might not cover all the hardware configurations that 
are available for Linux.

Thanks

John

Jose Luis Domingo Lopez wrote:

>On Wednesday, 14 May 2003, at 10:19:25 -0700,
>John Finlay wrote:
>
>>Thanks for the info. My impression is that the Veritas VM has a lot more 
>>features than MD/LVM including management tools, on-line 
>>reconfiguration, etc. These seem like valuable features. What's the cost 
>>of VVM?
>>
>You can (and should) always go to the vendor's website and look there,
>because this list is not about Veritas products and/or marketing/sales.
>

* Re: Veritas Volume Manager
  2003-05-14 20:08     ` John Finlay
@ 2003-05-14 20:27       ` John DeFranco
  2003-05-15  2:47         ` RAID startup problem Maurice Hilarius
  0 siblings, 1 reply; 8+ messages in thread
From: John DeFranco @ 2003-05-14 20:27 UTC (permalink / raw)
  To: John Finlay, Jose Luis Domingo Lopez; +Cc: linux-raid

I actually think it's really (relatively) expensive for Linux as 
well. Veritas is not in the habit of doing things for the good of the 
community. Whether it's worth it or not really depends on what your 
requirements are. Certainly the Veritas product provides a much 
cleaner solution in that it includes the mirroring portion within 
the VM itself (although, depending on which features you want, that 
will be an extra cost as well). This is better than using LVM on top 
of MD (which I do at my site).
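
For illustration, a minimal sketch of that layering -- MD mirroring 
underneath, LVM carving volumes on top (device names and sizes are 
examples only, not my actual site setup):

mdadm -C /dev/md0 -l1 -n2 /dev/sda1 /dev/sdb1   # create the mirror pair
pvcreate /dev/md0             # make the mirror an LVM physical volume
vgcreate vg0 /dev/md0         # build a volume group on the mirror
lvcreate -L 10G -n data vg0   # carve out a logical volume
mkfs.ext3 /dev/vg0/data       # filesystem goes on the logical volume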

These other products (LVM and MD) are free, but that also means that 
if you have a problem, who knows when it will be addressed. The same 
is true, to a certain point, of Veritas as well.

I have done quite a bit of work with VxVM on both Solaris and HP-UX, 
as well as with MD/LVM. Both have their pros and cons. If you don't 
mind the cost and feel the current tools are primitive, then VxVM 
might be worth it. However, if you just want a reasonable 
mirroring/RAID solution, MD/LVM might be just fine.

My $.02
On Wednesday 14 May 2003 13:08, John Finlay wrote:
> Hi Jose,
>
> I am assuming that this list is about Linux RAID and that the
> Veritas products are RAID products on Linux; I also assume the list
> is not restricted to discussions of free RAID implementations. If I
> wanted marketing fluff I could get it from the Veritas site, but I'm
> interested in why real people would, or would not, use Veritas on
> Linux. So far the cons seem to be the lack of broad support for
> various Linux kernels and the cost (which is not on the web site),
> and the pro is a professional, mature, stable product. The current
> crop of free Linux RAID tools (MD and LVM) seems really primitive
> and inflexible, and having used Veritas in a previous life on
> Solaris (where it's really expensive), I was wondering whether the
> Linux version is as good and whether people felt it was worth
> spending the money to get the Veritas features. Of course, it's
> possible that Veritas is overkill for the seemingly limited
> capabilities and applications of current Linux PC-type systems, and
> the Veritas products might not cover all the hardware configurations
> that are available for Linux.
>
> Thanks
>
> John
>
> Jose Luis Domingo Lopez wrote:
> >On Wednesday, 14 May 2003, at 10:19:25 -0700,
> >
> >John Finlay wrote:
> >>Thanks for the info. My impression is that the Veritas VM has a
> >> lot more features than MD/LVM including management tools,
> >> on-line reconfiguration, etc. These seem like valuable features.
> >> What's the cost of VVM?
> >
> >You can (and should) always go to the vendor's website and look
> > there, because this list is not about Veritas products and/or
> > marketing/sales.
>

-- 
==========
Cheers
   -jdf



* RAID startup problem
  2003-05-14 20:27       ` John DeFranco
@ 2003-05-15  2:47         ` Maurice Hilarius
  2003-05-15  3:12           ` Stephen Lee
  0 siblings, 1 reply; 8+ messages in thread
From: Maurice Hilarius @ 2003-05-15  2:47 UTC (permalink / raw)
  To: linux-raid

Hi There!

I just set up a server with 2 x 3Ware cards, 16 IDE disks, and am building 
a number of software md raids using mdadm.

Everything built and synched OK, but the raid1 devices don't seem to come 
up upon reboot.

Any suggestions are appreciated!

Here is some fairly detailed info:


Status upon reboot:

md2 : active raid5 sdl1[2] sde1[1] sdd1[0]
       19550848 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md6 : active raid5 sdl2[2] sde2[1] sdd2[0]
       136745088 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md7 : active raid5 sdn2[2] sdm2[1] sdf2[0]
       136745088 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md8 : active raid5 sdo2[2] sdh2[1] sdg2[0]
       136745088 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>


So then I do this:

mdadm -C /dev/md3 -l1 -n2 /dev/sd[ai]4
mdadm: /dev/sda4 appears to contain an ext2fs file system
     size=73143872K  mtime=Tue May 13 17:47:42 2003
mdadm: /dev/sda4 appears to be part of a raid array:
     level=1 devices=2 ctime=Tue May 13 17:47:40 2003
mdadm: /dev/sdi4 appears to contain an ext2fs file system
     size=73143872K  mtime=Tue May 13 17:47:42 2003
mdadm: /dev/sdi4 appears to be part of a raid array:
     level=1 devices=2 ctime=Tue May 13 17:47:40 2003
Continue creating array? y
mdadm: array /dev/md3 started.

It syncs up and appears to be okay, and then reboot...gone.
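
One thing I have not ruled out is how the arrays get recorded for 
boot-time assembly. A minimal sketch of capturing the currently 
running arrays into the config file, assuming the output of 
"mdadm --detail --scan" can be appended as-is:

mdadm --detail --scan >> /etc/mdadm.conf   # one ARRAY line per running array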

The motherboard is configured under PnP to scan the PCI bus from lowest to 
highest (the default); the option "pci=nosort" was required to make the Linux 
kernel recognize the correct device order.

# partition table of sd-ai
unit: sectors

/dev/sdi1 : start=       63, size=   208782, Id=83
/dev/sdi2 : start=   208845, size=  3919860, Id=82
/dev/sdi3 : start=  4128705, size=  5879790, Id=83
/dev/sdi4 : start= 10008495, size=146287890, Id=fd

# partition table of sd-bcjk
unit: sectors

/dev/sde1 : start=       63, size=  6088572, Id=fd
/dev/sde2 : start=        0, size=        0, Id= 0
/dev/sde3 : start=  6088635, size=150207750, Id=fd
/dev/sde4 : start=        0, size=        0, Id= 0

# partition table of sd-defghlmno
unit: sectors

/dev/sdj1 : start=       63, size= 19551042, Id=fd
/dev/sdj2 : start= 19551105, size=136745280, Id=fd
/dev/sdj3 : start=        0, size=        0, Id= 0
/dev/sdj4 : start=        0, size=        0, Id= 0
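
Disks within each group share an identical layout; a common way to 
clone a partition table from one disk to its partner is sfdisk 
(source and target devices here are just examples):

sfdisk -d /dev/sda | sfdisk /dev/sdi   # dump sda's table, write it to sdi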

### r1-0 sda,sdi:

100M -> 100M /bootone,/boottwo
   2G -> 2G   swap
   3G -> 3G   /rootone,/roottwo
  75G -> 75G  /mnt/r1-0         /dev/md3

p1             1 -   13
p2            14 -  257
p3           258 -  623
p4           624 - 9729

mdadm -C /dev/md3 -l1 -n2 /dev/sd[ai]4
tune2fs -L /mnt/r1-0 /dev/md3


### r1-1 sdb,sdj

3.1G -> 3.1G /tmp              /dev/md0
  75G -> 75G  /mnt/r1-1         /dev/md4

p1             1 -  379
p3           380 - 9729

mdadm -C /dev/md0 -l1 -n2 /dev/sd[bj]1
mdadm -C /dev/md4 -l1 -n2 /dev/sd[bj]3
mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md4
tune2fs -L /tmp              /dev/md0
tune2fs -L /mnt/r1-1         /dev/md4

### r1-2 sdc,sdk:

3.1G -> 3.1G /var              /dev/md1
  75G -> 75G  /mnt/r1-2         /dev/md5

p1             1 -  379
p3           380 - 9729

mdadm -C /dev/md1 -l1 -n2 /dev/sd[ck]1
mdadm -C /dev/md5 -l1 -n2 /dev/sd[ck]3
mkfs.ext3 /dev/md1
mkfs.ext3 /dev/md5
tune2fs -L /var              /dev/md1
tune2fs -L /mnt/r1-2         /dev/md5

###########################################################

### r5-0 sdd,sde,sdl:

10G  -> 10G  -> 10G  /home     /dev/md2
150G -> 150G -> 150G /mnt/r5-0 /dev/md6

p1             1 - 1217
p2          1218 - 9729

mdadm -C /dev/md2 -l5 -n3 /dev/sd[del]1
mdadm -C /dev/md6 -l5 -n3 /dev/sd[del]2
mkfs.ext3 /dev/md2
mkfs.ext3 /dev/md6
tune2fs -L /mnt/r5-0 /dev/md6
tune2fs -L /home     /dev/md2


### r5-1 sdf,sdm,sdn:

10G  -> 10G  -> 10G  blank
150G -> 150G -> 150G /mnt/r5-1 /dev/md7

p1             1 - 1217
p2          1218 - 9729

mdadm -C /dev/md7 -l5 -n3 /dev/sd[fmn]2
mkfs.ext3 /dev/md7
tune2fs -L /mnt/r5-1 /dev/md7

### r5-2 sdg,sdh,sdo:

10G  -> 10G  -> 10G  blank
150G -> 150G -> 150G /mnt/r5-2 /dev/md8

p1             1 - 1217
p2          1218 - 9729

mdadm -C /dev/md8 -l5 -n3 /dev/sd[gho]2
mkfs.ext3 /dev/md8
tune2fs -L /mnt/r5-2 /dev/md8


###
# The following arrays fail to initialize upon reboot
###
mdadm -C /dev/md0 -l1 -n2 /dev/sd[bj]1
mdadm -C /dev/md1 -l1 -n2 /dev/sd[ck]1
mdadm -C /dev/md3 -l1 -n2 /dev/sd[ai]4
mdadm -C /dev/md4 -l1 -n2 /dev/sd[bj]3
mdadm -C /dev/md5 -l1 -n2 /dev/sd[ck]3
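
A sketch of what I check after such a failed reboot, assuming the 
members kept their persistent superblocks (the device name is just an 
example):

mdadm --examine /dev/sdb1   # does this member still carry an md superblock?
cat /proc/mdstat            # which personalities and arrays the kernel sees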


# mdadm configuration file
#
# mdadm will function properly without the use of a configuration file,
# but this file is useful for keeping track of arrays and member disks.
# In general, an mdadm.conf file is created, and updated, after arrays
# are created. This is the opposite behavior of /etc/raidtab which is
# created prior to array construction.
#
#
# the config file takes two types of lines:
#
#	DEVICE lines specify a list of devices to scan for
#	  potential member disks
#
#	ARRAY lines specify information about how to identify arrays
#	  so that they can be activated
#
# You can have more than one DEVICE line and use wild cards. The first
# example includes the first partition of SCSI disks /dev/sdb,
# /dev/sdc, /dev/sdd, /dev/sdj, /dev/sdk, and /dev/sdl. The second
# line looks for array slices on IDE disks.
#
#DEVICE /dev/sd[bcdjkl]1
#DEVICE /dev/hda1 /dev/hdb1
#
# If you mount devfs on /dev, then a suitable way to list all devices is:
#DEVICE /dev/discs/*/*
#
#
#
# ARRAY lines specify an array to assemble and a method of identification.
# Arrays can currently be identified by using a UUID, superblock minor number,
# or a listing of devices.
#
#	super-minor is usually the minor number of the metadevice
#	UUID is the Universally Unique Identifier for the array
# Each can be obtained using
#
# 	mdadm -D <md>
#
#ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
#ARRAY /dev/md1 super-minor=1
#ARRAY /dev/md2 devices=/dev/hda1,/dev/hda2
#
# ARRAY lines can also specify a "spare-group" for each array.  mdadm --monitor
# will then move a spare between arrays in a spare-group if one array has a
# failed drive but no spare.
#ARRAY /dev/md4 uuid=b23f3c6d:aec43a9f:fd65db85:369432df spare-group=group1
#ARRAY /dev/md5 uuid=19464854:03f71b1b:e0df2edd:246cc977 spare-group=group1
#
# When used in --follow (aka --monitor) mode, mdadm needs a
# mail address and/or a program.  This can be given with "mailaddr"
# and "program" lines so that monitoring can be started using
#    mdadm --follow --scan & echo $! > /var/run/mdadm
# If the lines are not found, mdadm will exit quietly
#MAILADDR root@mydomain.tld
#PROGRAM /usr/sbin/handle-mdadm-events
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=81cc8f4f:701e9eac:47773277:18c7216e
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=c447f450:7c969497:4c53bac0:83bca3d4
ARRAY /dev/md5 level=raid1 num-devices=2 UUID=23e25917:0f7657ce:380b4af6:89dd2f77
ARRAY /dev/md4 level=raid1 num-devices=2 UUID=cd2b2762:05779c04:f73ca962:0c74fc7e
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=e27b007f:23eccfec:bbf767e5:9718900e
ARRAY /dev/md2 level=raid5 num-devices=3 UUID=a82a43f3:8b6f37ce:d2dedf15:8ab81c60
ARRAY /dev/md6 level=raid5 num-devices=3 UUID=877f1a1d:1515d873:f957d9d5:24736d93
ARRAY /dev/md7 level=raid5 num-devices=3 UUID=324fc51f:6d7e54b7:277e5673:dea4c603
ARRAY /dev/md8 level=raid5 num-devices=3 UUID=31071bc3:9aa6f86e:45577fcc:f502abc7
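
I notice every DEVICE line above is still commented out. A minimal 
sketch of what I believe boot-time assembly would also need, assuming 
the init scripts run mdadm in assemble mode against this file:

DEVICE /dev/sd*           # uncommented, so mdadm knows which disks to scan
mdadm --assemble --scan   # run at startup to assemble every ARRAY above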

With our best regards,

Maurice W. Hilarius       Telephone: 01-780-456-9771
Hard Data Ltd.               FAX:       01-780-456-9772
11060 - 166 Avenue        mailto:maurice@harddata.com
Edmonton, AB, Canada      http://www.harddata.com/
    T5X 1Y3



* Re: RAID startup problem
  2003-05-15  2:47         ` RAID startup problem Maurice Hilarius
@ 2003-05-15  3:12           ` Stephen Lee
  0 siblings, 0 replies; 8+ messages in thread
From: Stephen Lee @ 2003-05-15  3:12 UTC (permalink / raw)
  To: Raid; +Cc: maurice

On Wed, 2003-05-14 at 19:47, Maurice Hilarius wrote:
> Hi There!
> 
> I just set up a server with 2 x 3Ware cards, 16 IDE disks, and am building 
> a number of software md raids using mdadm.
> 
> Everything built and synched OK, but the raid1 devices don't seem to come 
> up upon reboot.
> 
> Any suggestions are appreciated!
> 
> Here is some fairly detailed info:
> 
> 
> Status upon reboot:
> 
> md2 : active raid5 sdl1[2] sde1[1] sdd1[0]
>        19550848 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
> 
> md6 : active raid5 sdl2[2] sde2[1] sdd2[0]
>        136745088 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
> 
> md7 : active raid5 sdn2[2] sdm2[1] sdf2[0]
>        136745088 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
> 
> md8 : active raid5 sdo2[2] sdh2[1] sdg2[0]
>        136745088 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
> 
> unused devices: <none>

Was your raid1 module loaded before the raid1 devices were brought up
during bootup? What personalities does /proc/mdstat show?
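
If it was not, a quick sketch of how to check and work around it by 
hand (assuming mdadm is on the path and raid1 was built as a module):

cat /proc/mdstat          # the Personalities line should include [raid1]
modprobe raid1            # load the mirroring personality if missing
mdadm --assemble --scan   # then retry assembling the raid1 arrays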

Stephen

Thread overview: 8+ messages
2003-05-14 11:48 Veritas Volume Manager Buechler, Mark R
2003-05-14 17:19 ` John Finlay
2003-05-14 19:46   ` Jose Luis Domingo Lopez
2003-05-14 20:08     ` John Finlay
2003-05-14 20:27       ` John DeFranco
2003-05-15  2:47         ` RAID startup problem Maurice Hilarius
2003-05-15  3:12           ` Stephen Lee
2003-05-14  8:52 Veritas Volume Manager John Finlay
