From: Gregory Seidman <gsslist+linuxraid@anthropohedron.net>
To: linux-raid@vger.kernel.org
Subject: Re: Best way to achieve large, expandable, cheap storage?
Date: Thu, 20 Oct 2005 07:19:50 -0400
Message-ID: <20051020111943.GA9757@anthropohedron.net>
In-Reply-To: <43577022.5010306@robinbowes.com>
On Thu, Oct 20, 2005 at 11:23:30AM +0100, Robin Bowes wrote:
} Christopher Smith said the following on 04/10/2005 05:09:
} >Yep, that's pretty much bang on. The only thing you've missed is using
} >pvmove to physically move the data off the soon-to-be-decommissioned
} >PVs (/RAID arrays).
} >
} >Be warned, for those who haven't used it before, pvmove is _very_ slow.
}
} I've just been re-reading this thread.
}
} I'd like to just check if I understand how this will work.
}
} Assume the following setup (hypothetical).
}
} VG:
} big_vg - contains /dev/md1, /dev/md2; 240GB
}
} PV:
} /dev/md1 - 4 x 40GB drives (RAID5 - 120GB total)
} /dev/md2 - 4 x 40GB drives (RAID5 - 120GB total)
You should at least read the following before using RAID5. You can agree or
disagree, but you should take the arguments into account:
http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt
} LV:
} big_lv - in big_vg - 240GB
}
} Filesystems:
} /home - xfs filesystem in big_lv - 240GB
}
} Suppose I then add a new PV:
} /dev/md3 - 4 x 300GB drives (RAID5 - 900GB total)
You use pvcreate and vgextend to do so, incidentally.
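Roughly, assuming the new array really does come up as /dev/md3:
# pvcreate /dev/md3
# vgextend big_vg /dev/md3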
} I want to replace /dev/md1 with /dev/md3
}
} I use pvmove something like this:
}
} # pvmove /dev/md1 /dev/md3
}
} When this finishes, big_vg will contain /dev/md2 + /dev/md3 (1020GB
} total). /dev/md1 will be unused.
/dev/md1 will still be a part of big_vg, but it won't have any data from
any LVs on it. You will need to use vgreduce to remove /dev/md1 from the
VG:
# vgreduce big_vg /dev/md1
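A quick sanity check afterwards is pvs (exact output columns vary a bit
between LVM2 versions); /dev/md1 should then show an empty VG column:
# pvs -o pv_name,vg_name,pv_size,pv_free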
} big_lv will still be using just 240GB of big_vg.
}
} I then use lvextend to increase the size of big_lv
}
} big_lv will now use all 1020GB of big_vg.
}
} However, the /home filesystem will still just use 240GB of big_lv
}
} I can then use xfs_growfs to expand the /home filesystem to use all
} 1020GB of big_lv.
All correct.
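For reference, the grow steps would look something like this (give an
explicit -L size instead if your lvextend doesn't take the %FREE form;
xfs_growfs works on the mounted filesystem, so you point it at the mount
point):
# lvextend -l +100%FREE /dev/big_vg/big_lv
# xfs_growfs /home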
} Have I missed anything?
Just the vgreduce step (and removing the physical drives that make up
/dev/md1).
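Once /dev/md1 is out of the VG, the rest of the tear-down is roughly (the
sdX names below are just placeholders for whatever your four old 40GB
drives actually are):
# pvremove /dev/md1
# mdadm --stop /dev/md1
# mdadm --zero-superblock /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
The last step is optional; it just wipes the old RAID superblocks so the
drives don't get picked up as a stale array later.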
} R.
--Greg