* Does \"32.1% non-contigunous\" mean severely fragmented?
@ 2007-10-19 1:49 Tetsuo Handa
2007-10-19 18:52 ` Theodore Tso
0 siblings, 1 reply; 8+ messages in thread
From: Tetsuo Handa @ 2007-10-19 1:49 UTC (permalink / raw)
To: linux-fsdevel
Hello.
I ran e2fsck and it reported as follows.
[root@sakura ~]# e2fsck -f /dev/hda1
e2fsck 1.39 (29-May-2006)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/data/VMware: 349/19546112 files (32.1% non-contiguous), 31019203/39072080 blocks
Does non-contiguous mean fragmented?
If so, where is ext3defrag?
Regards.
* Re: Does \"32.1% non-contigunous\" mean severely fragmented?
2007-10-19 1:49 Does \"32.1% non-contigunous\" mean severely fragmented? Tetsuo Handa
@ 2007-10-19 18:52 ` Theodore Tso
2007-10-20 3:39 ` Does "32.1% non-contiguous" " Tetsuo Handa
0 siblings, 1 reply; 8+ messages in thread
From: Theodore Tso @ 2007-10-19 18:52 UTC (permalink / raw)
To: Tetsuo Handa; +Cc: linux-fsdevel
On Fri, Oct 19, 2007 at 10:49:03AM +0900, Tetsuo Handa wrote:
> /data/VMware: 349/19546112 files (32.1% non-contiguous), 31019203/39072080 blocks
>
> Does non-contiguous mean fragmented?
> If so, where is ext3defrag?
Not necessarily; it just means that 32% of your files have at least
one discontinuity. Given the ext3 layout, by definition every 128
megs there will be a discontinuity because of the metadata at the
beginning of every single block group. You have a small number of
files on your system (349) occupying an average of 348 megabytes. So
it's not at all surprising that the non-contiguous percentage is 32%.
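(You can see the block group size with dumpe2fs on the same device you
ran e2fsck on; 32768 blocks per group with 4k blocks is the usual
default, which is where the 128 megs comes from:)
  dumpe2fs -h /dev/hda1 | grep -i "blocks per group"
  # typically reports 32768; 32768 * 4096 bytes = 128 megs per group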
The Flex BG feature that was recently pulled into 2.6.23-git14 for ext4
is designed to avoid this issue, but for most workloads a seek every
128 megs is not a big deal and will hopefully not cause you any
problems.
- Ted
* Re: Does "32.1% non-contiguous" mean severely fragmented?
2007-10-19 18:52 ` Theodore Tso
@ 2007-10-20 3:39 ` Tetsuo Handa
2007-10-20 13:17 ` Theodore Tso
0 siblings, 1 reply; 8+ messages in thread
From: Tetsuo Handa @ 2007-10-20 3:39 UTC (permalink / raw)
To: tytso; +Cc: linux-fsdevel
Hello.
Theodore Tso wrote:
> beginning of every single block group. You have a small number of
> files on your system (349) occupying an average of 348 megabytes. So
> it's not at all surprising that the non-contiguous percentage is 32%.
I see, thank you. Yes, there are many files split into 2GB pieces.
But what is surprising to me is that I have to wait more than
five minutes to save/restore the virtual machine's 512MB RAM image
(usually it takes less than five seconds).
hdparm reports that DMA is on and e2fsck reports no errors,
so I thought the filesystem was severely fragmented.
Maybe I should back up all of the virtual machine data,
reformat the partition, and restore it.
* Re: Does "32.1% non-contiguous" mean severely fragmented?
2007-10-20 3:39 ` Does "32.1% non-contiguous" " Tetsuo Handa
@ 2007-10-20 13:17 ` Theodore Tso
2007-10-22 11:58 ` Tetsuo Handa
0 siblings, 1 reply; 8+ messages in thread
From: Theodore Tso @ 2007-10-20 13:17 UTC (permalink / raw)
To: Tetsuo Handa; +Cc: linux-fsdevel
On Sat, Oct 20, 2007 at 12:39:33PM +0900, Tetsuo Handa wrote:
> Theodore Tso wrote:
> > beginning of every single block group. You have a small number of
> > files on your system (349) occupying an average of 348 megabytes. So
> it's not at all surprising that the non-contiguous percentage is 32%.
> I see, thank you. Yes, there are many files split into 2GB pieces.
>
> But what is surprising to me is that I have to wait more than
> five minutes to save/restore the virtual machine's 512MB RAM image
> (usually it takes less than five seconds).
> hdparm reports that DMA is on and e2fsck reports no errors,
> so I thought the filesystem was severely fragmented.
> Maybe I should back up all of the virtual machine data,
> reformat the partition, and restore it.
Well, that's a little drastic if you're not sure that what is going on
is fragmentation.
Five minutes to save/restore a 512MB RAM image, assuming that you are
saving somewhere around 576 megs of data, means you are writing at less
than 2 megs/second. That seems to point to something fundamentally
wrong, far worse than can be explained by fragmentation.
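(For reference, the arithmetic: 576 megs / 300 seconds is roughly 1.9
megs/second, where a healthy disk should sustain tens of megs/second
for a mostly sequential write.)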
First of all, what does the "filefrag" program (shipped as part of
e2fsprogs, not included in some distributions) say if you run it as
root on your VM data file?
Secondly, what results do you get when you run the command "hdparm -tT
/dev/sda" (or /dev/hda if you are using an IDE disk)?
This kind of performance regression is the sort of thing I see on my
laptop when I compile the kernel with the wrong options, and/or disable
AHCI mode in favor of compatibility mode, such that my laptop's SATA
performance (as measured using hdparm) drops from 50 megs/second to 2
megs/second.
Regards,
- Ted
* Re: Does "32.1% non-contiguous" mean severely fragmented?
2007-10-20 13:17 ` Theodore Tso
@ 2007-10-22 11:58 ` Tetsuo Handa
2007-10-22 13:02 ` Theodore Tso
0 siblings, 1 reply; 8+ messages in thread
From: Tetsuo Handa @ 2007-10-22 11:58 UTC (permalink / raw)
To: tytso; +Cc: linux-fsdevel
Hello.
Theodore Tso wrote:
> Secondly, what results do you get when you run the command "hdparm -tT
> /dev/sda" (or /dev/hda if you are using an IDE disk)?
[root@sakura Ubuntu7.10]# hdparm -tT /dev/hda1
/dev/hda1:
Timing cached reads: 10384 MB in 2.00 seconds = 5196.44 MB/sec
Timing buffered disk reads: 116 MB in 3.02 seconds = 38.36 MB/sec
[root@sakura Ubuntu7.10]# hdparm -tT /dev/hda1
/dev/hda1:
Timing cached reads: 10572 MB in 2.00 seconds = 5291.32 MB/sec
Timing buffered disk reads: 118 MB in 3.04 seconds = 38.83 MB/sec
BIOS setting says it uses AHCI mode.
> First of all, what does the "filefrag" program (shipped as part of
> e2fsprogs, not included in some distributions) say if you run it as
> root on your VM data file?
Here is the result of "filefrag". The *-f???*.vmdk files are split into 2 GB pieces.
[root@sakura Ubuntu7.10]# filefrag *
Ubuntu7.10-0: 1 extent found
Ubuntu7.10-f001.vmdk: 151 extents found, perfection would be 18 extents
Ubuntu7.10-f002.vmdk: 36 extents found, perfection would be 18 extents
Ubuntu7.10-f003.vmdk: 5 extents found, perfection would be 1 extent
Ubuntu7.10.nvram: 1 extent found
Ubuntu7.10.vmdk: 1 extent found
Ubuntu7.10.vmsd: 1 extent found
Ubuntu7.10.vmx: 1 extent found
Ubuntu7.10.vmxf: 1 extent found
Ubuntu7.10.vmx.lck: Not a regular file
Ubuntu7-f001.10-0: 167 extents found, perfection would be 18 extents
Ubuntu7-f002.10-0: 68 extents found, perfection would be 18 extents
Ubuntu7-f003.10-0: 20 extents found, perfection would be 18 extents
Ubuntu7-f004.10-0: 93 extents found, perfection would be 18 extents
Ubuntu7-f005.10-0: 316 extents found, perfection would be 18 extents
Ubuntu7-f006.10-0: 27 extents found, perfection would be 18 extents
Ubuntu7-f007.10-0: 21 extents found, perfection would be 18 extents
Ubuntu7-f008.10-0: 20 extents found, perfection would be 18 extents
Ubuntu7-f009.10-0: 78 extents found, perfection would be 18 extents
Ubuntu7-f010.10-0: 22 extents found, perfection would be 18 extents
Ubuntu7-f011.10-0: 47 extents found, perfection would be 1 extent
vmware-0.log: 4 extents found, perfection would be 1 extent
vmware-1.log: 3 extents found, perfection would be 1 extent
vmware-2.log: 15 extents found, perfection would be 1 extent
vmware.log: 3 extents found, perfection would be 1 extent
Yes, there are some discontiguities, but the ratio is not so high
considering the file sizes.
The 512MB suspend image, however, shows a much higher ratio of
discontiguities, as shown below.
When I just power on the VM and suspend it at the grub prompt, the
extent count is smaller than "perfection"; the image is probably sparse
(memory is allocated but not all of it has been touched yet).
But when I do some work after logging in, it yields far more
discontiguities.
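(A quick way to confirm that the image is sparse, assuming GNU
coreutils, is to compare the allocated size with the apparent size:)
  du -h Ubuntu7.10.vmem                  # allocated size on disk
  du -h --apparent-size Ubuntu7.10.vmem  # nominal file size (512M)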
--- Start VM ---
--- Suspend VM ---
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 1 extent found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 14 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 14 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# sync
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 17 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 17 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# sync
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 17 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 17 extents found, perfection would be 5 extents
--- Resume and poweroff VM ---
--- Start VM ---
--- Suspend VM ---
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 751 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# sync
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 3281 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 3281 extents found, perfection would be 5 extents
--- Resume and poweroff VM ---
What? "sync" yields more discontiguous?
--- Start VM ---
--- Suspend VM ---
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 10 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# sync
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 482 extents found, perfection would be 5 extents
--- Resume and poweroff VM ---
--- Start VM ---
--- Suspend VM ---
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 8 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# sync
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 19 extents found, perfection would be 5 extents
--- Resume and poweroff VM ---
--- Start VM ---
--- Suspend VM ---
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 8 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# sync
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 11 extents found, perfection would be 5 extents
--- Resume and poweroff VM ---
--- Start VM ---
--- Suspend VM ---
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 1 extent found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 1 extent found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# sync
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 3 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 3 extents found, perfection would be 5 extents
--- Resume and poweroff VM ---
--- Start VM ---
--- Suspend VM ---
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 629 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 629 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# sync
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 2715 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 2760 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# sync
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 2769 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 2769 extents found, perfection would be 5 extents
--- Resume and poweroff VM ---
From these repeated runs, it seems that the degree of discontiguity
depends both on where the file's data gets allocated on the filesystem
and on the contents of the VM's memory.
I don't know how many discontiguities there were when suspend/resume
took many minutes, but I guess the ratio was very high.
Your "filefrag" command is a very good tool!
Thank you.
* Re: Does "32.1% non-contiguous" mean severely fragmented?
2007-10-22 11:58 ` Tetsuo Handa
@ 2007-10-22 13:02 ` Theodore Tso
2007-10-23 10:38 ` Tetsuo Handa
0 siblings, 1 reply; 8+ messages in thread
From: Theodore Tso @ 2007-10-22 13:02 UTC (permalink / raw)
To: Tetsuo Handa; +Cc: linux-fsdevel
On Mon, Oct 22, 2007 at 08:58:11PM +0900, Tetsuo Handa wrote:
>
> --- Start VM ---
> --- Suspend VM ---
> [root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
> Ubuntu7.10.vmem: 751 extents found, perfection would be 5 extents
> [root@sakura Ubuntu7.10]# sync
> [root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
> Ubuntu7.10.vmem: 3281 extents found, perfection would be 5 extents
> [root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
> Ubuntu7.10.vmem: 3281 extents found, perfection would be 5 extents
> --- Resume and poweroff VM ---
>
> What? "sync" yields more discontiguous?
What filesystem are you using? ext3? ext4? xfs? And are you using
any non-standard patches, such as some of the delayed allocation
patches that have been floating around? If you're using ext3, that
shouldn't be happening.....
If you use the -v option to filefrag, both before and after the sync,
that might show us what is going on. The other thing is to use
debugfs and its "stat" command to get a detailed breakdown of the block
assignments of the file.
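For example (the in-filesystem path here is just a guess based on your
earlier listing; adjust it to wherever the file lives relative to the
root of /dev/hda1):
  filefrag -v Ubuntu7.10.vmem
  sync
  filefrag -v Ubuntu7.10.vmem
  debugfs -R "stat /Ubuntu7.10/Ubuntu7.10.vmem" /dev/hda1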
Are you sure the file isn't getting written by some background tasks
that you weren't aware of? This seems very strange; what
virtualization software are you using? VMware, Xen, KVM?
- Ted
* Re: Does "32.1% non-contiguous" mean severely fragmented?
2007-10-22 13:02 ` Theodore Tso
@ 2007-10-23 10:38 ` Tetsuo Handa
2007-10-23 12:34 ` Theodore Tso
0 siblings, 1 reply; 8+ messages in thread
From: Tetsuo Handa @ 2007-10-23 10:38 UTC (permalink / raw)
To: tytso; +Cc: linux-fsdevel
Hello.
> What filesystem are you using? ext3? ext4? xfs? And are you using
> any non-standard patches, such as some of the delayed allocation
> patches that have been floating around? If you're using ext3, that
> shouldn't be happening.....
I'm using ext3.
I'm running it on kernel 2.6.18-8.1.14.el5 (CentOS 5) for x86_64.
I don't know whether any of the delayed allocation patches are included
in the 2.6.18-8.1.14.el5 kernel.
> Are you sure the file isn't getting written by some background tasks
> that you weren't aware of? This seems very strange; what
> virtualization software are you using? VMware, Xen, KVM?
I'm using VMware Workstation 6.0.0 build 45731 for x86_64.
It seems that there were some background tasks that delayed the writes.
I tried the following sequence; this time "sync" made no difference.
[root@sakura Ubuntu7.10]# service vmware stop
[root@sakura Ubuntu7.10]# sleep 30
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 9280 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# sync
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 9280 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# service vmware start
[root@sakura Ubuntu7.10]# vmware
[root@sakura Ubuntu7.10]# service vmware stop
[root@sakura Ubuntu7.10]# sleep 30
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 9748 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# sync
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 9748 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# service vmware start
[root@sakura Ubuntu7.10]# vmware
[root@sakura Ubuntu7.10]# service vmware stop
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 9749 extents found, perfection would be 5 extents
[root@sakura Ubuntu7.10]# sync
[root@sakura Ubuntu7.10]# filefrag Ubuntu7.10.vmem
Ubuntu7.10.vmem: 9755 extents found, perfection would be 5 extents
Thank you.
* Re: Does "32.1% non-contiguous" mean severely fragmented?
2007-10-23 10:38 ` Tetsuo Handa
@ 2007-10-23 12:34 ` Theodore Tso
0 siblings, 0 replies; 8+ messages in thread
From: Theodore Tso @ 2007-10-23 12:34 UTC (permalink / raw)
To: Tetsuo Handa; +Cc: linux-fsdevel
On Tue, Oct 23, 2007 at 07:38:20PM +0900, Tetsuo Handa wrote:
> > Are you sure the file isn't getting written by some background tasks
> > that you weren't aware of? This seems very strange; what
> > virtualization software are you using? VMware, Xen, KVM?
> I'm using VMware Workstation 6.0.0 build 45731 for x86_64.
> It seems that there were some background tasks that delayed the writes.
> I tried the following sequence; this time "sync" made no difference.
Or it may be that it takes a while to do a controlled shutdown.
One potential reason for the vmem file being very badly fragmented is
that it might not be getting written in sequential order. If the
writer is writing the file in random order, then unless you have a
filesystem which can do delayed allocation, the blocks will get
allocated in the order in which they are first written; if the writer
is seeking to random locations to do the writes, that's one way you
can end up with a very badly fragmented file.
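A quick way to see the effect (just an illustration with dd, not what
VMware actually does) is to write the same 64 megs of data once
sequentially and once in shuffled chunk order, then compare the
filefrag results:
  # sequential write: should end up as one or a few extents
  dd if=/dev/zero of=seq.img bs=8M count=8 2>/dev/null
  # same data written in shuffled order: blocks are allocated in
  # write order, not in file-offset order
  for off in 5 1 7 3 0 6 2 4; do
      dd if=/dev/zero of=rand.img bs=8M count=1 seek=$off conv=notrunc 2>/dev/null
  done
  sync
  filefrag seq.img rand.img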
Regards,
- Ted
Thread overview: 8+ messages
2007-10-19 1:49 Does "32.1% non-contigunous" mean severely fragmented? Tetsuo Handa
2007-10-19 18:52 ` Theodore Tso
2007-10-20 3:39 ` Does "32.1% non-contiguous" " Tetsuo Handa
2007-10-20 13:17 ` Theodore Tso
2007-10-22 11:58 ` Tetsuo Handa
2007-10-22 13:02 ` Theodore Tso
2007-10-23 10:38 ` Tetsuo Handa
2007-10-23 12:34 ` Theodore Tso