From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mx3.redhat.com (mx3.redhat.com [172.16.48.32]) by int-mx1.corp.redhat.com (8.11.6/8.11.6) with ESMTP id jBD6mU100461 for ; Tue, 13 Dec 2005 01:48:30 -0500
Received: from wproxy.gmail.com (wproxy.gmail.com [64.233.184.199]) by mx3.redhat.com (8.13.1/8.13.1) with ESMTP id jBD6mHwR022959 for ; Tue, 13 Dec 2005 01:48:18 -0500
Received: by wproxy.gmail.com with SMTP id i4so58723wra for ; Mon, 12 Dec 2005 22:48:17 -0800 (PST)
Message-ID: <40b3fe60512122248y5634c779p993fa953d09640c@mail.gmail.com>
Date: Tue, 13 Dec 2005 12:18:17 +0530
From: neelima dahiya
Subject: [linux-lvm] prob using lvm2
Reply-To: LVM general discussion and development
List-Id: LVM general discussion and development
To: linux-lvm@redhat.com

Hello all,

I've got an 80 GB HD that was partitioned automatically during Fedora Core 4
setup. That is, the disk carries 8 partitions: the first through fourth
(/dev/hda1 to /dev/hda4) are dedicated to the Windows OS, /dev/hda5 is
mounted as /boot, /dev/hda6 as /home, /dev/hda7 as swap, and /dev/hda8 as /.

I have LVM2 installed on my system.

Now, can I use LVM2 on top of all of this, and if yes, how?

When I issued the command:

lvm> pvcreate /dev/hda5

the following message was displayed:

  /var/lock/lvm/P_orphans: open failed: Permission denied
  Can't get lock for orphan PVs

(A sketch of one likely fix for this error appears below, after the quoted
digest.)

Please reply.

Thanks & regards,
neelima

On 12/12/05, linux-lvm-request@redhat.com wrote:
>
> Send linux-lvm mailing list submissions to
>         linux-lvm@redhat.com
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         https://www.redhat.com/mailman/listinfo/linux-lvm
> or, via email, send a message with subject or body 'help' to
>         linux-lvm-request@redhat.com
>
> You can reach the person managing the list at
>         linux-lvm-owner@redhat.com
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of linux-lvm digest..."
>
>
> Today's Topics:
>
>    1. Re: LVM onFly features (Michael Loftis)
>    2. Re: LVM onFly features (Nathan Scott)
>    3. Converting LVM back to Ext2? (Andrey Subbotin)
>    4. Re: Converting LVM back to Ext2? (Anil Kumar Sharma)
>    5. Re: Newbie of LVM (Alasdair G Kergon)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sun, 11 Dec 2005 18:14:39 -0700
> From: Michael Loftis <mloftis@wgops.com>
> Subject: Re: [linux-lvm] LVM onFly features
> To: Nathan Scott <nathans@sgi.com>
> Cc: linux-xfs@oss.sgi.com, linux-lvm@redhat.com
> Message-ID: <9CD94D4B0F3B63057B4C2BC0@dhcp-2-206.wgops.com>
> Content-Type: text/plain; charset=us-ascii; format=flowed
>
>
> --On December 12, 2005 9:15:39 AM +1100 Nathan Scott <nathans@sgi.com>
> wrote:
>
> >> XFS has terrible, unpredictable performance in production. Also it has
> >> very
> >
> > What on earth does that mean? Whatever it means, it doesn't
> > sound right - can you back that up with some data please?
>
> The worst problems we had were most likely related to running out of
> journal transaction space. When XFS was under high transaction load,
> sometimes it would just hang everything syncing metadata.
> From what I understand, this has supposedly been dealt with, but we were
> still having these issues when we decommissioned the last XFS-based
> server a year ago. Another data point is the fact that we primarily
> served via NFS, which XFS (at least at the time) still didn't behave
> great with; I never did see any good answers on that as I recall.
>
> >> bad behavior when recovering from crashes,
> >
> > Details? Are you talking about this post of yours:
> > http://oss.sgi.com/archives/linux-xfs/2003-06/msg00032.html
>
> That particular behavior happened a lot. And it wasn't so much that it
> happened that was annoying, as that it happened after the system claimed
> it was clean. Further, yes, that hardware has been fully checked out.
> There's nothing wrong with the hardware. I wish there was; that'd make me
> feel better, honestly. The only explanation I can come up with is bugs in
> the XFS fsck/repair tools, or *maybe* an interaction between XFS and the
> DAC960 controller, or NFS. The fact that XFS has weird interactions with
> NFS at all bugs me, but I don't understand the code involved well enough.
> There might be a decent reason.
>
> > There have been several fixes in this area since that post.
> >
> >> oftentimes its tools totally fail to clean the filesystem.
> >
> > In what way? Did you open a bug report?
> >
> >> It also needs larger kernel stacks because
> >> of some of the really deep call trees,
> >
> > Those have long since been fixed as far as we are aware. Do you
> > have an actual example where things can fail?
>
> We pulled it out of production and replaced XFS with Reiser. At the time,
> Reiser was far more mature on Linux. The XFS Linux implementation (in
> combination with other work in the block layer, as you mention later) may
> have matured to at least a similar (possibly better) point now. I've just
> personally lost more data to XFS than to Reiser. I've also had problems
> with ext3 in the (now distant) past while it was still teething.
>
> >> so when you use it with LVM or MD it
> >> can oops unless you use the larger kernel stacks.
> >
> > Anything can oops in combination with enough stacked device drivers
> > (although there has been block layer work to resolve this recently,
> > so you should try again with a current kernel...). If you have an
> > actual example of this still happening, please open a bug or at least
> > let the XFS developers know of your test case. Thanks.
>
> That was actually part of the problem. There was no time, and no hardware,
> to try to reproduce the problem in the lab. This isn't an XFS problem
> specifically; this is an open source problem, really. If you encounter a
> bug, and you're unlucky enough to be a bit of an edge case, you had better
> be prepared to pony up hardware and man-hours to diagnose and reproduce it,
> or it might not get fixed. Again, though, this is common to the whole open
> source community, and not specific to XFS, Linux, LVM, or any other project.
>
> Having said that, if you can reproduce it and get good details, the open
> source community has a far better track record of *really* fixing and
> addressing bugs than any commercial software.
>
> >> We also have had
> >> problems with the quota system, but the details on that have faded.
> >
> > Seems like details of all the problems you described have faded.
> > Your mail seems to me like a bit of a troll ... I guess you had a
> > problem or two a couple of years ago (from searching the lists)
> > and are still sore.
> > Can you point me to mailing list reports of
> > the problems you're referring to here, or bug reports you've opened
> > for these issues? I'll let you know if any of them are still
> > relevant.
>
> No, we had dozens, actually. The only ones that were really crippling were
> when XFS would suddenly unmount in the middle of the business day for no
> apparent reason. Without details, bug reports are ignored, and we couldn't
> really provide details or filesystem dumps because there was too much data
> and we had to get it back online. We just moved away from XFS as fast as we
> could. It wasn't just a one-day thing, or a week; there was a trail of
> crashes with XFS at the time. Sometimes the machine was so locked up from
> XFS pulling the rug out that the console was wedged pretty badly too.
>
> I wanted to provide the information as a data point from the other side,
> as it were, not to get into a pissing match with the XFS developers and
> community. XFS is still young, as is ReiserFS, and while Reiser is a
> completely new FS and XFS has roots in IRIX and other implementations,
> their ages are similar, since XFS's Linux implementation is around the
> same age. If the state has changed in the last 6-12 months, then so much
> the better. The facts are that XFS had many problems during operation, and
> still had many unresolved problems when we pulled it out and replaced it
> with ReiserFS. And Reiser has been flawless except for one problem already
> mentioned on linux-lvm, very clearly caused by an external SAN/RAID problem
> which EMC has corrected. (Completely as an aside -- anyone running a CX
> series REALLY needs to be on the latest code rev; you might never run into
> the bug, and I'm still not sure exactly which one we hit -- there were at
> least two that could have caused the data corruption -- but if you do, it
> can be ugly.)
>
> The best guess I have as to why we had such a bad time with XFS is that the
> XFS+NFS interaction, and possibly an old (unknown to me -- this is just a
> guess) bug that created some minor underlying corruption the repair tools
> couldn't fully fix or diagnose, may have caused our continual (seemingly
> random) problems. I don't believe in truly random problems, at least not in
> computers anyway.
>
> > cheers.
> >
> > --
> > Nathan
>
>
> --
> "Genius might be described as a supreme capacity for getting its possessors
> into trouble of all kinds."
> -- Samuel Butler
>
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 12 Dec 2005 13:28:30 +1100
> From: Nathan Scott <nathans@sgi.com>
> Subject: Re: [linux-lvm] LVM onFly features
> To: Michael Loftis <mloftis@wgops.com>
> Cc: linux-xfs@oss.sgi.com, linux-lvm@redhat.com
> Message-ID: <20051212132830.A7432365@wobbly.melbourne.sgi.com>
> Content-Type: text/plain; charset=us-ascii
>
> On Sun, Dec 11, 2005 at 06:14:39PM -0700, Michael Loftis wrote:
> > --On December 12, 2005 9:15:39 AM +1100 Nathan Scott <nathans@sgi.com>
> > The worst problems we had were most likely related to running out
> > of journal transaction space. When XFS was under high transaction load
>
> Can you define "high load" for your scenario?
>
> > sometimes it would just hang everything syncing metadata. From what I
>
> There is no situation in which XFS will "hang everything". A process
> that is modifying the filesystem may be paused briefly waiting for space
> to become available in the log, and that involves flushing the in-core
> log buffers.
> But only processes that need log space will be paused waiting for that
> (relatively small) write to complete. This is also not a behaviour
> peculiar to XFS, and with suitable tuning in terms of mkfs/mount/sysctl
> parameters, it can be completely controlled.
>
> > understand this has supposedly been dealt with, but we were still having
> > these issues when we decommissioned the last XFS-based server a year ago.
>
> I'd like some more information describing your workload there if
> you could provide it. Thanks.
>
> > Another data point is the fact that we primarily served via NFS, which XFS
> > (at least at the time) still didn't behave great with; I never did see any
> > good answers on that as I recall.
>
> Indeed. Early 2.6 kernels did have XFS/NFS interaction problems,
> with NFS using generation number zero as "magic", and XFS using
> that as a valid gen number. That was fixed a long time ago.
>
> > controller, or NFS. The fact that XFS has weird interactions with NFS at
> > all bugs me, but I don't understand the code involved well enough. There
> > might be a decent reason.
>
> No, there's no reason, and XFS does not have "weird interactions"
> with NFS.
>
> > >> It also needs larger kernel stacks because
> > >> of some of the really deep call trees,
> > >
> > > Those have long since been fixed as far as we are aware. Do you
> > > have an actual example where things can fail?
> >
> > We pulled it out of production and replaced XFS with Reiser. At the time,
> > Reiser was far more mature on Linux. The XFS Linux implementation (in
>
> Not because of 4K stacks though, surely? That kernel option wasn't around
> then, I think, and the reiserfs folks have also had a bunch of work to do
> in that area too.
>
> > > Seems like details of all the problems you described have faded.
> > > Your mail seems to me like a bit of a troll ... I guess you had a
> > > problem or two a couple of years ago (from searching the lists)
> > > and are still sore. Can you point me to mailing list reports of
> > > the problems you're referring to here, or bug reports you've opened
> > > for these issues? I'll let you know if any of them are still
> > > relevant.
> >
> > No, we had dozens, actually. The only ones that were really crippling were
> > when XFS would suddenly unmount in the middle of the business day for no
> > apparent reason. Without details, bug reports are ignored, and we couldn't
>
> The NFS issue had the unfortunate side effect of causing filesystem
> corruption, and hence forced filesystem shutdowns would result. There
> were also bugs on that error handling path, so you probably hit two
> independent XFS bugs on a pretty old kernel version.
>
> > I wanted to provide the information as a data point from the other side,
> > as it were, not to get into a pissing match with the XFS developers and
> > community.
>
> You were claiming long-resolved issues that existed in an XFS version
> from an early 2.6 kernel as still relevant. That is quite misleading,
> and doesn't provide useful information to anyone.
>
> cheers.
>
> --
> Nathan
>
>
>
> ------------------------------
>
> Message: 3
> Date: Mon, 12 Dec 2005 15:25:23 +0700
> From: Andrey Subbotin <eploko@gmail.com>
> Subject: [linux-lvm] Converting LVM back to Ext2?
> To: linux-lvm@redhat.com
> Message-ID: <45980936.20051212152523@gmail.com>
> Content-Type: text/plain; charset=us-ascii
>
> Hello all.
>
> I've got a 200GB HD partitioned automatically during Fedora Core 4 setup.
> That is, the disk carries 2 partitions: the first one (/dev/sda1) is
> ext3, mounted as /boot, and the second one (/dev/sda2) is an LVM PV.
>
> That is all clear and fancy, but the problem is that I'm faced with the
> fact I need to migrate the HD to an Ext2 FS, so I can convert it to FAT32
> later, so I can access it from a copy of the Windows OS I've had to boot
> recently to do some work. The LVM on /dev/sda2 is full of data I need to
> save, and the problem is I don't have a spare HD to temporarily copy all
> those 200GB to.
>
> If I had a spare HD, I would simply mount it, make a new Ext2 partition
> on it and then copy all the data from the LogicalVolume to the new
> partition. Then I would fire up fdisk and kill the LVM, thus freeing the
> space on the drive. Then, moving the data back to the first HD would be a
> snap. But without a spare disk I face a real challenge.
>
> My initial idea was to reduce the FS inside the LogicalVolume (it has
> ~40GB of free space), then reduce the size of the LogicalVolume, and then
> reduce the PhysicalVolume /dev/sda2 by the freed number of cylinders.
> Then I would create an ext2 partition over the freed cylinders and move
> some files from the LogicalVolume onto it. I thought I would repeat the
> process several times, effectively migrating my data from the
> ever-shrinking LVM to the ever-growing plain Ext2 FS.
>
> The problem is I have little idea how I can shrink an LVM partition on
> /dev/sda2. And there seems to be very little information on this on the
> net.
>
> So far, I have lvreduce'd the FS inside the LogicalVolume and the
> LogicalVolume itself to 35700000 4k blocks. Now, how do I reduce the
> number of cylinders occupied by the LVM on /dev/sda?
>
> I would really appreciate any help or ideas.
> Thanks a lot in advance.
>
> --
> See you,
> Andrey
>
> ICQ UIN: 114087545
> Journal: http://www.livejournal.com/users/e_ploko/
>
>
>
> ------------------------------
>
> Message: 4
> Date: Mon, 12 Dec 2005 19:50:30 +0530
> From: Anil Kumar Sharma <xplusaks@gmail.com>
> Subject: Re: [linux-lvm] Converting LVM back to Ext2?
> To: Andrey Subbotin <eploko@gmail.com>, LVM general discussion and
>         development <linux-lvm@redhat.com>
> Message-ID:
>         <52fe6b680512120620m2d9d462erdc37b7f3d79183de@mail.gmail.com>
> Content-Type: text/plain; charset="iso-8859-1"
>
> You may reduce the LV size and get some free space, but that space will
> still be lying on the PV, and there is no pvreduce (as far as I know).
>
> So I think (I may be wrong) you are out of luck. Get your luck back:
> arrange some temporary storage for your data so you can resize or convert
> the PV.
>
> You see, LVM is good for multiple partitions and multiple disks --
> everything in multiples. That's the playground for LVM.
> When you (re)install FC4 or FC5, put your Linux space on multiple PVs.
> I would suggest using all 4 primary partitions:
> 1. /boot; 2. dual boot (if required), otherwise a PV; and 3 & 4 also PVs.
> Swap goes in LVM.
> LVM can make them look like one partition, or like whatever partitions
> (LVs) you want, which you can change as your requirements change, even
> for dual boot.
>
> Hard luck with "auto partition" -- it is good for itself! A smart fellow
> that doesn't care about our moods.
>
>
>
>
> On 12/12/05, Andrey Subbotin <eploko@gmail.com> wrote:
> >
> > Hello all.
> >
> > I've got a 200GB HD partitioned automatically during Fedora Core 4 setup.
> > That is, the disk carries 2 partitions: the first one (/dev/sda1) is
> > ext3, mounted as /boot, and the second one (/dev/sda2) is an LVM PV.
> >
> > That is all clear and fancy, but the problem is that I'm faced with the
> > fact I need to migrate the HD to an Ext2 FS, so I can convert it to
> > FAT32 later, so I can access it from a copy of the Windows OS I've had
> > to boot recently to do some work. The LVM on /dev/sda2 is full of data
> > I need to save, and the problem is I don't have a spare HD to
> > temporarily copy all those 200GB to.
> >
> > If I had a spare HD, I would simply mount it, make a new Ext2 partition
> > on it and then copy all the data from the LogicalVolume to the new
> > partition. Then I would fire up fdisk and kill the LVM, thus freeing
> > the space on the drive. Then, moving the data back to the first HD
> > would be a snap. But without a spare disk I face a real challenge.
> >
> > My initial idea was to reduce the FS inside the LogicalVolume (it has
> > ~40GB of free space), then reduce the size of the LogicalVolume, and
> > then reduce the PhysicalVolume /dev/sda2 by the freed number of
> > cylinders. Then I would create an ext2 partition over the freed
> > cylinders and move some files from the LogicalVolume onto it. I thought
> > I would repeat the process several times, effectively migrating my data
> > from the ever-shrinking LVM to the ever-growing plain Ext2 FS.
> >
> > The problem is I have little idea how I can shrink an LVM partition on
> > /dev/sda2. And there seems to be very little information on this on
> > the net.
> >
> > So far, I have lvreduce'd the FS inside the LogicalVolume and the
> > LogicalVolume itself to 35700000 4k blocks. Now, how do I reduce the
> > number of cylinders occupied by the LVM on /dev/sda?
> >
> > I would really appreciate any help or ideas.
> > Thanks a lot in advance.
> >
> > --
> > See you,
> > Andrey
> >
> > ICQ UIN: 114087545
> > Journal: http://www.livejournal.com/users/e_ploko/
> >
> > _______________________________________________
> > linux-lvm mailing list
> > linux-lvm@redhat.com
> > https://www.redhat.com/mailman/listinfo/linux-lvm
> > read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>
>
>
> --
> Anil Kumar Sharma
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: https://www.redhat.com/archives/linux-lvm/attachments/20051212/312e7a1b/attachment.html
>
> ------------------------------
>
> Message: 5
> Date: Mon, 12 Dec 2005 15:21:26 +0000
> From: Alasdair G Kergon <agk@redhat.com>
> Subject: Re: [linux-lvm] Newbie of LVM
> To: LVM general discussion and development <linux-lvm@redhat.com>
> Message-ID: <20051212152126.GA25866@agk.surrey.redhat.com>
> Content-Type: text/plain; charset=us-ascii
>
> On Fri, Dec 09, 2005 at 02:12:43PM -0500, Matthew Gillen wrote:
> > Way Loss wrote:
> >
> > > /dev/md5              153G  119G   27G  82% /www
> >
> > >     My md5 is almost full and I wanna use LVM to merge
> > > my md5 with a new partition from a new hdd. I wanna
> > > ask if it is possible for LVM to merge 2 partitions
> > > together while one of them has data on it? I can't
> > > suffer any data loss and want to make sure that LVM
> > > works perfectly for what I want.
>
> > You're out of luck. You can't take an existing partition and keep the
> > data yet switch it over to LVM.
>
> See also:
>   https://www.redhat.com/archives/linux-lvm/2005-October/msg00110.html
>
> Alasdair
> --
> agk@redhat.com
>
>
>
> ------------------------------
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
>
> End of linux-lvm Digest, Vol 22, Issue 12
> *****************************************
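
The "/var/lock/lvm/P_orphans: open failed: Permission denied" error in the
question above is what LVM2 prints when it cannot create its lock file, most
commonly because the lvm shell was started as an ordinary user rather than
as root. A minimal sketch of the usual sequence, assuming that is the cause;
/dev/hdaN is a placeholder for a spare partition, not /dev/hda5, which
already holds the mounted /boot filesystem:

    # become root so the tools can take their locks under /var/lock/lvm
    su -                        # or prefix each command below with sudo

    # pvcreate wipes the target partition, so point it at spare space only
    pvcreate /dev/hdaN          # label the partition as a physical volume
    vgcreate vg0 /dev/hdaN      # create a volume group "vg0" on it
    lvcreate -L 5G -n data vg0  # carve out a 5 GB logical volume
    mkfs.ext3 /dev/vg0/data     # put a filesystem on the new LV
    mount /dev/vg0/data /mnt    # and mount it

As Alasdair's reply in the quoted digest notes, an existing partition that
already carries data cannot simply be switched over to LVM in place, so
putting LVM2 "on top of all" the current partitions would mean backing the
data up and repartitioning.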
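
Nathan's remark in message 2 that log-space stalls "can be completely
controlled" through mkfs/mount/sysctl tuning refers to knobs such as the
on-disk log size chosen at mkfs time and the in-core log buffer settings
given at mount time. An illustrative sketch only -- the device name is a
placeholder and the values are examples, not recommendations:

    mkfs.xfs -l size=128m /dev/vg0/scratch                  # larger on-disk journal
    mount -o logbufs=8,logbsize=256k /dev/vg0/scratch /mnt  # more/bigger in-core log buffers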
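
For the shrink sequence Andrey describes in message 3 of the digest, the
ordering matters: the filesystem must be reduced to the target size before
the logical volume, never after, or the tail of the filesystem is cut off.
A hedged sketch, assuming the LV holds an ext2/ext3 filesystem as the
Fedora default layout does; VolGroup00/LogVol00 is only the usual Fedora
name, and the sizes are placeholders that must match the size actually
wanted (35700000 4k blocks is the figure from Andrey's mail):

    umount /dev/VolGroup00/LogVol00
    e2fsck -f /dev/VolGroup00/LogVol00            # resize2fs insists on a clean fs
    resize2fs /dev/VolGroup00/LogVol00 35700000   # shrink the fs to 35700000 blocks
    lvreduce -L 140G /dev/VolGroup00/LogVol00     # then shrink the LV, keeping it
                                                  # at least as large as the fs

Shrinking the partition under the physical volume is the step the thread
says had no tool at the time ("there is no pvreduce"), so freeing space
inside the LV does not by itself shrink /dev/sda2.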
