linux-lvm.redhat.com archive mirror
* [linux-lvm] Re: [reiserfs-list] Horrid performance with 2.4.{9,10,12} + LVM + ReiserFS
       [not found] <20011018000006.A22777@jensbenecke.de>
@ 2001-10-17 22:30 ` Andreas Dilger
  2001-10-17 23:00 ` Hans Reiser
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Andreas Dilger @ 2001-10-17 22:30 UTC (permalink / raw)
  To: ReiserFS Mailingliste, LVM List

On Oct 18, 2001  00:00 +0200, Jens Benecke wrote:
> The LVM volume is about 97% full (7.5GB free). In ext2 the rule was not to
> use the last 5% of a disk because it would be too slow. But I think this
> should not be a problem for ReiserFS, at least not with such big disks (?).
> If it is, this would waste over 10GB disk space which I don't think is
> acceptable.

Do you mean that you have 3% of the LOGICAL VOLUME or DISK free, or do you
have 3% of the FILESYSTEM free (i.e. is the free space reported by "pvscan"
or "df")?  If it is the FILESYSTEM with only 3% free, then there is not much
that you can do about the performance problem - fragmentation cannot be
helped, regardless of the filesystem.
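
For what it's worth, a quick way to check both numbers (the VG name and the
mount point below are only placeholders for whatever you actually use):

  # df -h /dat          # free space as the FILESYSTEM sees it
  # vgdisplay vg00      # "Free PE / Size" = space in the VG not yet allocated
  # pvscan              # per-PV allocation across the three disks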

Actually, there is someone testing exactly this issue with reiserfs, and they
report 5-10x slowdown with a highly fragmented filesystem, so your 2.5x
slowdown is in the right range.

> Strangely, on a 'real' partition this is much faster (this was formatted
> recently and with 3.6 format, though):

Yes, well, a new filesystem (especially one that is not yet full) will not
have fragmentation problems.

> Currently I'm thinking of backing up the whole volume and reformatting, but
> I wanted to ask what might cause this first because this would be a major
> PITA for many users here.

Well, it may help for a short time, but at 97% full you will have problems
almost right away again.  I'd suggest adding more space to the LV and growing
the filesystem (without doing the backup/restore); you will naturally get some
"defragmentation" as the new space is used and old space is freed.
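
To sketch what that would look like with LVM 1 and reiserfsprogs (the VG/LV
names and the size below are placeholders, and depending on your kernel you
may need to unmount the filesystem before resizing):

  # lvextend -L +20G /dev/vg00/data
  # resize_reiserfs /dev/vg00/data    # with no -s, grows the fs to fill the LV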

Cheers, Andreas
--
Andreas Dilger  \ "If a man ate a pound of pasta and a pound of antipasto,
                 \  would they cancel out, leaving him still hungry?"
http://www-mddsp.enel.ucalgary.ca/People/adilger/               -- Dogbert


* [linux-lvm] Re: [reiserfs-list] Horrid performance with 2.4.{9,10,12} + LVM + ReiserFS
       [not found] <20011018000006.A22777@jensbenecke.de>
  2001-10-17 22:30 ` [linux-lvm] Re: [reiserfs-list] Horrid performance with 2.4.{9,10,12} + LVM + ReiserFS Andreas Dilger
@ 2001-10-17 23:00 ` Hans Reiser
  2001-10-18  0:48 ` [linux-lvm] " José Luis Domingo López
  2001-10-18  9:17 ` Werner John
  3 siblings, 0 replies; 6+ messages in thread
From: Hans Reiser @ 2001-10-17 23:00 UTC (permalink / raw)
  To: Jens Benecke; +Cc: ReiserFS Mailingliste, LVM List

All filesystems perform better if kept at 85% of disk capacity or less.
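
(For a 210GB volume that works out to keeping roughly 0.15 * 210, or about
31GB, free; right now there is only 7.5GB free.)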

(Maybe v4 will improve this by using a repacker to aggregate the free space
away from stable data; maybe not, we shall see.)

ReiserFS will give you about 6-8% more disk space for the same amount of data
under average usage patterns (I don't remember these numbers precisely, so
don't trust me on them).

Hans

Jens Benecke wrote:
> 
> Hello,
> 
> this is CC'ed to LVM user list and ReiserFS because I don't know who to
> blame currently. ;)
> 
> I have severe performance problems with a 210G ReiserFS volume sitting on a
> LVM (1.0.1rc4) spread over three disks (Maxtor 60, 80, 80GB IDE). The
> system is a Duron 650MHz with via686b chipset and UDMA-100 capable IDE.
> /proc/cpuinfo, /proc/ide/via output see below. The hardware seems
> performant enough to me.
> 
> The LVM volume is about 97% full (7.5GB free). In ext2 the rule was not to
> use the last 5% of a disk because it would be too slow. But I think this
> should not be a problem for ReiserFS, at least not with such big disks (?).
> If it is, this would waste over 10GB disk space which I don't think is
> acceptable.
> 
> The ReiserFS volume has withstood some difficulties in the past (a LOT of
> crashes due to power outages - the reason for me using ReiserFS in the
> first place), a conversion from 3.5, LVM issues (were fixed though), a data
> loss problem (see list archive) fixed by the pre-reiserfsck - the old one
> segfaulted - and it's under constant attack by FTP, NFS and Samba.
> 
> When I copy a file within the LVM volume, I get 212MB (106 read, 106
> written) in 1:20:
> 
> -rw-rw-r--    1 jens     public   106377216 16. Okt 00:06 www2.avi
> server: /dat# nice -n -19 time cp www2.avi www2.avi_
> 0.04user 2.82system 1:17.39elapsed 3%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (135major+16minor)pagefaults 0swaps
> 
> Strangely, on a 'real' partition this is much faster (this was formatted
> recently and with 3.6 format, though):
> 
> server: /home# nice -n -19 time cp www2.avi www2.avi_
> 0.01user 2.41system 0:27.92elapsed 8%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (135major+16minor)pagefaults 0swaps
> 
> which isn't really _fast_ (28sec for 106MB means 5MB/sec) but it's reading
> and writing to the same disk so it's 10MB/sec.
> 
> during which the harddisk LED is on all the time.
> 
> Currently I'm thinking of backing up the whole volume and reformatting, but
> I wanted to ask what might cause this first because this would be a major
> PITA for many users here.
> 
> ---------------------------------------------------------------------------
> hdparm output for the three disks:
> 
> server: /home# hdparm -tT /dev/hd[abcd]
> /dev/hda:
>  Timing buffer-cache reads:   128 MB in  0.86 seconds =148.84 MB/sec
>  Timing buffered disk reads:  64 MB in  2.12 seconds = 30.19 MB/sec
> /dev/hdc:
>  Timing buffer-cache reads:   128 MB in  0.84 seconds =152.38 MB/sec
>  Timing buffered disk reads:  64 MB in  2.28 seconds = 28.07 MB/sec
> /dev/hdd:
>  Timing buffer-cache reads:   128 MB in  0.86 seconds =148.84 MB/sec
>  Timing buffered disk reads:  64 MB in  2.36 seconds = 27.12 MB/sec
> 
> bonnie++ output for the LVM volume: ---------------------------------------
> Version 1.01d       ------Sequential Output------ --Sequential Input- --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> server         256M  5596  97 24539  34  6926   7  5194  87 24662  17 115.1 1
>                     ------Sequential Create------ --------Random Create--------
>                     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16  7965  99 +++++ +++ 10546  99  5458  92 +++++ +++  8214 90
> server,256M,5596,97,24539,34,6926,7,5194,87,24662,17,115.1,1,16,7965,99,+++++,++
> ---------------------------------------------------------------------------
> 
> --
> Jens Benecke ········ http://www.hitchhikers.de/ - Europas Mitfahrzentrale
> 
> Crypto regulations will only hinder criminals who obey the law.


* Re: [linux-lvm] Horrid performance with 2.4.{9,10,12} + LVM + ReiserFS
       [not found] <20011018000006.A22777@jensbenecke.de>
  2001-10-17 22:30 ` [linux-lvm] Re: [reiserfs-list] Horrid performance with 2.4.{9,10,12} + LVM + ReiserFS Andreas Dilger
  2001-10-17 23:00 ` Hans Reiser
@ 2001-10-18  0:48 ` José Luis Domingo López
  2001-10-18  9:17 ` Werner John
  3 siblings, 0 replies; 6+ messages in thread
From: José Luis Domingo López @ 2001-10-18  0:48 UTC (permalink / raw)
  To: LVM List; +Cc: ReiserFS Mailingliste

On Thursday, 18 October 2001, at 00:00:06 +0200,
Jens Benecke wrote:

> Hello,
> [...]
> The LVM volume is about 97% full (7.5GB free). In ext2 the rule was not to
> use the last 5% of a disk because it would be too slow. But I think this
> should not be a problem for ReiserFS, at least not with such big disks (?).
> If it is, this would waste over 10GB disk space which I don't think is
> acceptable.
>
If you mean your ReiserFS partition is 97% full, read:
http://www.namesys.com/faq.html#full-disk

-- 
José Luis Domingo López
Linux Registered User #189436     Debian Linux Woody (P166 64 MB RAM)
 
jdomingo EN internautas PUNTO org  => ¿ Spam ? Atente a las consecuencias
jdomingo AT internautas DOT   org  => Spam at your own risk


* Re: [linux-lvm] Horrid performance with 2.4.{9,10,12} + LVM + ReiserFS
       [not found] <20011018000006.A22777@jensbenecke.de>
                   ` (2 preceding siblings ...)
  2001-10-18  0:48 ` [linux-lvm] " José Luis Domingo López
@ 2001-10-18  9:17 ` Werner John
       [not found]   ` <20011019030045.O25054@jensbenecke.de>
  3 siblings, 1 reply; 6+ messages in thread
From: Werner John @ 2001-10-18  9:17 UTC (permalink / raw)
  To: linux-lvm

Jens Benecke writes:
> Hello,

Hello,

> this is CC'ed to LVM user list and ReiserFS because I don't know who to
> blame currently. ;)
> 
let's see ...
> 
> I have severe performance problems with a 210G ReiserFS volume sitting on a
> LVM (1.0.1rc4) spread over three disks (Maxtor 60, 80, 80GB IDE). The
> system is a Duron 650MHz with via686b chipset and UDMA-100 capable IDE.
> /proc/cpuinfo, /proc/ide/via output see below. The hardware seems
> performant enough to me.
> [...]
> 
> ---------------------------------------------------------------------------
> hdparm output for the three disks:
> 
> server: /home# hdparm -tT /dev/hd[abcd]
> /dev/hda:
>  Timing buffer-cache reads:   128 MB in  0.86 seconds =148.84 MB/sec
>  Timing buffered disk reads:  64 MB in  2.12 seconds = 30.19 MB/sec
> /dev/hdc:
>  Timing buffer-cache reads:   128 MB in  0.84 seconds =152.38 MB/sec
>  Timing buffered disk reads:  64 MB in  2.28 seconds = 28.07 MB/sec
> /dev/hdd:
>  Timing buffer-cache reads:   128 MB in  0.86 seconds =148.84 MB/sec
>  Timing buffered disk reads:  64 MB in  2.36 seconds = 27.12 MB/sec

Well, you have 1 disk on a single channel (hda) and the other two as master
and slave on *one* channel. That's the bottleneck. If the disks are
accessed individually, you get the full performance (more or less). But if
*both* disks have to respond, performance drops horribly.
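
You can see the effect by timing the two drives on the shared channel at the
same time (a rough illustration, not a careful benchmark):

  # hdparm -t /dev/hdc & hdparm -t /dev/hdd & wait

Run together like that, each drive will typically report well under the
~28 MB/sec it manages when tested on its own.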

Best is to get a third IDE controller.

Yours,
	Werner

> 
> 
> bonnie++ output for the LVM volume: ---------------------------------------
> Version 1.01d       ------Sequential Output------ --Sequential Input- --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> server         256M  5596  97 24539  34  6926   7  5194  87 24662  17 115.1 1
>                     ------Sequential Create------ --------Random Create--------
>                     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16  7965  99 +++++ +++ 10546  99  5458  92 +++++ +++  8214 90
> server,256M,5596,97,24539,34,6926,7,5194,87,24662,17,115.1,1,16,7965,99,+++++,++
> ---------------------------------------------------------------------------
> 
> 
> 
> 
> -- 
> Jens Benecke ········ http://www.hitchhikers.de/ - Europas Mitfahrzentrale
> 
> Crypto regulations will only hinder criminals who obey the law.
> 
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@sistina.com
> http://lists.sistina.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://www.sistina.com/lvm/Pages/howto.html


* Re: [linux-lvm] Horrid performance with 2.4.{9,10,12} + LVM + ReiserFS
       [not found]   ` <20011019030045.O25054@jensbenecke.de>
@ 2001-10-19  1:19     ` Eric M. Hopper
  2001-10-19  5:49       ` Werner John
  0 siblings, 1 reply; 6+ messages in thread
From: Eric M. Hopper @ 2001-10-19  1:19 UTC (permalink / raw)
  To: linux-lvm


On Fri, Oct 19, 2001 at 03:00:45AM +0200, Jens Benecke wrote:
> > Well, you have 1 disk on a single channel (hda) and the other two as
> > master and slave on *one* channel. That's the bottleneck. If the disks
> > are accessed individually, you get the full performance (more or less).
> > But if *both* disks have to respond, performance drops horribly.  Best is
> > to get a third IDE controller.
> 
> As the disks aren't interleaved (just appended to each other) I don't think
> this is a problem. I use LVM because I don't want to split up the FTP
> server space with a huge chaos of symlinks and partitions, not because I
> absolutely need RAID performance.

	This could still be a problem.  If you have two LVs on a VG that
spans both disks, one LV sits mainly on one disk, the other sits mainly
on the other, and you end up accessing both filesystems at the same
time, you still get a contention problem.  This is a lot of 'ifs', but
it can still happen.  :-)
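
	If you want to check, "lvdisplay -v" on each LV should show how its
logical extents are spread across the physical volumes (the LV path below
is just a placeholder):

  # lvdisplay -v /dev/vg00/data

If one filesystem lives mostly on hdc and the other mostly on hdd, heavy
access to both at once still funnels through the shared IDE channel.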

	The disk space problem is the much more likely culprit.  *grin*

Have fun (if at all possible),
-- 
"It does me no injury for my neighbor to say there are twenty gods or no God.
It neither picks my pocket nor breaks my leg."  --- Thomas Jefferson
"Go to Heaven for the climate, Hell for the company."  -- Mark Twain
-- Eric Hopper (hopper@omnifarious.org  http://www.omnifarious.org/~hopper) --



* Re: [linux-lvm] Horrid performance with 2.4.{9,10,12} + LVM + ReiserFS
  2001-10-19  1:19     ` Eric M. Hopper
@ 2001-10-19  5:49       ` Werner John
  0 siblings, 0 replies; 6+ messages in thread
From: Werner John @ 2001-10-19  5:49 UTC (permalink / raw)
  To: linux-lvm

Eric M. Hopper writes:
> On Fri, Oct 19, 2001 at 03:00:45AM +0200, Jens Benecke wrote:
> > > Well, you have 1 disk on a single channel (hda) and the other two as
> > > master and slave on *one* channel. That's the bottleneck. If the disks
> > > are accessed individually, you get the full performance (more or less).
> > > But if *both* disks have to respond, performance drops horribly.  Best is
> > > to get a third IDE controller.
> > 
> > As the disks aren't interleaved (just appended to each other) I don't think
> > this is a problem. I use LVM because I don't want to split up the FTP
> > server space with a huge chaos of symlinks and partitions, not because I
> > absolutely need RAID performance.
> 
> 	This could still be a problem.  If you have two LVs on a VG that
> spans both disks, one LV sits mainly on one disk, the other sits mainly
> on the other, and you end up accessing both filesystems at the same
> time, you still get a contention problem.  This is a lot of 'ifs', but
> it can still happen.  :-)

As Eric said, appending disks does not prevent a scenario where some data
spans hdc and hdd. Or just think about the filesystem itself: you never
really know where files end up within the volume...

Yours,
	Werner

> 	The disk space problem is the much more likely culprit.  *grin*
>
> Have fun (if at all possible),
> -- 
> "It does me no injury for my neighbor to say there are twenty gods or no God.
> It neither picks my pocket nor breaks my leg."  --- Thomas Jefferson
> "Go to Heaven for the climate, Hell for the company."  -- Mark Twain
> -- Eric Hopper (hopper@omnifarious.org  http://www.omnifarious.org/~hopper) --

