* Block Device Caching
@ 2004-06-29 22:20 Markus Schaber
2004-06-29 22:46 ` Andreas Dilger
2004-06-29 23:40 ` Block Device Caching Timothy Miller
0 siblings, 2 replies; 10+ messages in thread
From: Markus Schaber @ 2004-06-29 22:20 UTC (permalink / raw)
To: Linux Kernel Mailing List
Hello,
During our application testing, we noticed that our application (which
operates directly on an LVM volume) does not seem to get its read data
cached at all.
Now we did some tests using dd with bs=1M count=1000:
Using dd directly on the /dev/daten/testing LVM volume, we read about 95
MB/sec. Issuing multiple dds in sequence gives little variance in I/O
speed (between 90 and 100 MB/sec).
When we create a file system on this volume, mount it, and create a 1 GB
file there, dd gives us the same 95 MB/sec on the first read after the
mount, and approx. 480 MB/sec on subsequent reads.
The machine runs kernel 2.6.5 SMP on a dual-CPU SMT Itanium HP box.
This led us to the conclusion that block devices do not cache, but the
filesystem does. Subsequently, however, I ran some tests on my developer
machine (a Pentium 4 Mobile laptop).
dd'ing 16 MB from a USB 1.1 stick presented as /dev/sda, I got about
900 kB/sec on every read, so this seems to be uncached access, too.
When dd'ing 100 MB from /dev/hda5, the first read gives about
22 MB/sec (which seems okay for a 2.5" IDE disk), but subsequent
reads give about 389 MB/sec (which is impossible to achieve with
such hardware). Interestingly, this only happens while the partition is
mounted; once it is unmounted, caching does not work. For the
/dev/daten/testing volume above, however, mounting a filesystem on it
does not help with caching.
Can anyone give us a hint as to what's happening here, or even tell us how
to use a block device with kernel caching? (Memory mapping does not work
easily, as the production LVM volume is about 600 GB on said 32-bit x86
machine.)
Thanks,
Markus
PS: I subscribed to the list, so no Cc:-ing required.
--
markus schaber | dipl. informatiker
logi-track ag | rennweg 14-16 | ch 8001 zürich
phone +41-43-888 62 52 | fax +41-43-888 62 53
mailto:schabios@logi-track.com | www.logi-track.com
* Re: Block Device Caching
2004-06-29 22:20 Block Device Caching Markus Schaber
@ 2004-06-29 22:46 ` Andreas Dilger
2004-06-29 23:41 ` Timothy Miller
2004-06-30 6:30 ` Markus Schaber
2004-06-29 23:40 ` Block Device Caching Timothy Miller
1 sibling, 2 replies; 10+ messages in thread
From: Andreas Dilger @ 2004-06-29 22:46 UTC (permalink / raw)
To: Markus Schaber; +Cc: Linux Kernel Mailing List
On Jun 30, 2004 00:20 +0200, Markus Schaber wrote:
> [dd caching tests snipped]
> This led us to the conclusion that block devices do not cache, but the
> filesystem does. Subsequently, however, I ran some tests on my developer
> machine (a Pentium 4 Mobile laptop).
>
> When dd'ing 100 MB from /dev/hda5, the first read gives about
> 22 MB/sec (which seems okay for a 2.5" IDE disk), but subsequent
> reads give about 389 MB/sec (which is impossible to achieve with
> such hardware). Interestingly, this only happens while the partition is
> mounted; once it is unmounted, caching does not work. For the
> /dev/daten/testing volume above, however, mounting a filesystem on it
> does not help with caching.
When you close the block device it flushes the cache for that device (inode).
If you kept the device open in some way (e.g. "sleep 10000000 < /dev/hda5")
then it should start caching the data between dd runs.
Cheers, Andreas
--
Andreas Dilger
http://sourceforge.net/projects/ext2resize/
http://members.shaw.ca/adilger/ http://members.shaw.ca/golinux/
* Re: Block Device Caching
2004-06-29 22:20 Block Device Caching Markus Schaber
2004-06-29 22:46 ` Andreas Dilger
@ 2004-06-29 23:40 ` Timothy Miller
2004-06-30 8:29 ` Helge Hafting
1 sibling, 1 reply; 10+ messages in thread
From: Timothy Miller @ 2004-06-29 23:40 UTC (permalink / raw)
To: Markus Schaber; +Cc: Linux Kernel Mailing List
Markus Schaber wrote:
> This led us to the conclusion that block devices do not cache, but the
> filesystem does. Subsequently, however, I ran some tests on my developer
> machine (a Pentium 4 Mobile laptop).
I had kernel experts repeatedly insist to me that block devices were
cached, while all of my tests (using dd to or from, say, /dev/sda1 or
whatever) indicated that there was absolutely no caching whatsoever.
In my experience, reads and writes to block devices are uncached, while
filesystem stuff IS cached.
* Re: Block Device Caching
2004-06-29 22:46 ` Andreas Dilger
@ 2004-06-29 23:41 ` Timothy Miller
2004-06-30 6:30 ` Markus Schaber
1 sibling, 0 replies; 10+ messages in thread
From: Timothy Miller @ 2004-06-29 23:41 UTC (permalink / raw)
To: Andreas Dilger; +Cc: Markus Schaber, Linux Kernel Mailing List
Andreas Dilger wrote:
> When you close the block device it flushes the cache for that device (inode).
> If you kept the device open in some way (e.g. "sleep 10000000 < /dev/hda5")
> then it should start caching the data between dd runs.
Ok, now THIS makes sense.
* Re: Block Device Caching
2004-06-29 22:46 ` Andreas Dilger
2004-06-29 23:41 ` Timothy Miller
@ 2004-06-30 6:30 ` Markus Schaber
2004-07-02 9:21 ` CCISS driver and Caching (was: Block Device Caching) Markus Schaber
1 sibling, 1 reply; 10+ messages in thread
From: Markus Schaber @ 2004-06-30 6:30 UTC (permalink / raw)
To: Linux Kernel Mailing List
Hi,
On Tue, 29 Jun 2004 16:46:22 -0600
Andreas Dilger <adilger@clusterfs.com> wrote:
>>[block device caching problems]
> When you close the block device it flushes the cache for that device
> (inode). If you kept the device open in some way (e.g. "sleep 10000000
> < /dev/hda5") then it should start caching the data between dd runs.
This sounds reasonable, and it works using hda5 on my developer machine:
root@kingfisher:~# dd if=/dev/hda5 of=/dev/null bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 4.644529 seconds (22576584 bytes/sec)
root@kingfisher:~# dd if=/dev/hda5 of=/dev/null bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 4.688006 seconds (22367207 bytes/sec)
root@kingfisher:~# sleep 1000000 </dev/hda5 &
[1] 17321
root@kingfisher:~# dd if=/dev/hda5 of=/dev/null bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 4.662113 seconds (22491433 bytes/sec)
root@kingfisher:~# dd if=/dev/hda5 of=/dev/null bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 0.271807 seconds (385779610 bytes/sec)
And it works on my USB memory stick:
root@kingfisher:~# dd if=/dev/sda of=/dev/null bs=1M count=100
31+1 records in
31+1 records out
32768000 bytes transferred in 36.011661 seconds (909927 bytes/sec)
root@kingfisher:~# dd if=/dev/sda of=/dev/null bs=1M count=100
31+1 records in
31+1 records out
32768000 bytes transferred in 37.133379 seconds (882441 bytes/sec)
root@kingfisher:~# sleep 1000000 </dev/sda &
[1] 17375
root@kingfisher:~# dd if=/dev/sda of=/dev/null bs=1M count=100
31+1 records in
31+1 records out
32768000 bytes transferred in 36.004170 seconds (910117 bytes/sec)
root@kingfisher:~# dd if=/dev/sda of=/dev/null bs=1M count=100
31+1 records in
31+1 records out
32768000 bytes transferred in 0.088144 seconds (371755027 bytes/sec)
But it seems to fail on our production machine LVM volume:
bear:~# dd if=/dev/daten/testing of=/dev/null bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 10.779081 seconds (97278793 bytes/sec)
bear:~# dd if=/dev/daten/testing of=/dev/null bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 10.773274 seconds (97331229 bytes/sec)
bear:~# sleep 1000000 </dev/daten/testing &
[1] 23588
bear:~# dd if=/dev/daten/testing of=/dev/null bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 11.030774 seconds (95059149 bytes/sec)
bear:~# dd if=/dev/daten/testing of=/dev/null bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 11.027699 seconds (95085656 bytes/sec)
top on bear displays:
top - 08:23:31 up 68 days, 5:14, 8 users, load average: 0.08, 0.08, 0.03
Tasks: 81 total, 1 running, 79 sleeping, 0 stopped, 1 zombie
Cpu0 : 0.0% us, 0.0% sy, 0.0% ni, 99.4% id, 0.0% wa, 0.6% hi, 0.0% si
Cpu1 : 0.0% us, 0.3% sy, 0.0% ni, 99.7% id, 0.0% wa, 0.0% hi, 0.0% si
Cpu2 : 0.0% us, 0.0% sy, 0.0% ni, 100.0% id, 0.0% wa, 0.0% hi, 0.0% si
Cpu3 : 0.0% us, 0.0% sy, 0.0% ni, 100.0% id, 0.0% wa, 0.0% hi, 0.0% si
Mem: 3953624k total, 1120508k used, 2833116k free, 800248k buffers
Swap: 1048568k total, 0k used, 1048568k free, 24932k cached
So there clearly is enough free RAM to buffer 1 Gig of data (which, as I
said, works for a file when mounting /dev/daten/testing with ext3).
Did we do something wrong in our LVM setup? Do you need more info (such
as the output of some lv tools or the kernel config)?
Thanks for your efforts,
Markus Schaber
* Re: Block Device Caching
2004-06-29 23:40 ` Block Device Caching Timothy Miller
@ 2004-06-30 8:29 ` Helge Hafting
[not found] ` <20040630123828.7d48c6e6@kingfisher.intern.logi-track.com>
0 siblings, 1 reply; 10+ messages in thread
From: Helge Hafting @ 2004-06-30 8:29 UTC (permalink / raw)
To: Timothy Miller; +Cc: linux-kernel
Timothy Miller wrote:
> Markus Schaber wrote:
>> This led us to the conclusion that block devices do not cache, but the
>> filesystem does. Subsequently, however, I ran some tests on my developer
>> machine (a Pentium 4 Mobile laptop).
>
> I had kernel experts repeatedly insist to me that block devices were
> cached, while all of my tests (using dd to or from, say, /dev/sda1 or
> whatever) indicated that there was absolutely no caching whatsoever.
Well, any cache is dropped when the device is closed. "dd" closes the
device when it finishes.
Try a program that reads the same two blocks (spaced widely apart)
over and over from the same open file descriptor. With _no_ caching,
you'll see the drive seeking all the time. With caching, you won't.
Helge Hafting
* CCISS driver and Caching (was: Block Device Caching)
2004-06-30 6:30 ` Markus Schaber
@ 2004-07-02 9:21 ` Markus Schaber
0 siblings, 0 replies; 10+ messages in thread
From: Markus Schaber @ 2004-07-02 9:21 UTC (permalink / raw)
To: Markus Schaber; +Cc: Linux Kernel Mailing List
Hello,
We did some additional tests, and just learned that the underlying RAID
itself is not cached by the Linux kernel:
bear:~/readcachetest# ./readcache /dev/cciss/c0d0p2 1000
first run needed 16.006950 seconds, this is 62.472863 MBytes/Sec
second run needed 15.919336 seconds, this is 62.816690 MBytes/Sec
third run needed 15.830325 seconds, this is 63.169897 MBytes/Sec
So we now think it's a problem of the cciss driver, and not of LVM.
Weird...
Markus
* Re: Block Device Caching
[not found] ` <40E543D7.9030303@hist.no>
@ 2004-07-02 12:01 ` Markus Schaber
2004-07-02 12:56 ` FabF
0 siblings, 1 reply; 10+ messages in thread
From: Markus Schaber @ 2004-07-02 12:01 UTC (permalink / raw)
To: Helge Hafting, Linux Kernel Mailing List
Hi, Helge,
On Fri, 02 Jul 2004 13:15:35 +0200
Helge Hafting <helge.hafting@hist.no> wrote:
> > [Caching problems]
> > Any ideas?
> Not much. "bear" may have decided to drop cache in one case but not
> in the other - that is valid although strange. Take a look at what
> block size the device uses. A mounted device uses the fs blocksize, a
> partition (or entire disk) may be set to some different blocksize (I
> believe it might be the classical 512 bytes). Caching blocks
> with a size different from the page size (usually 4k) has extra
> overhead and could lead to extra memory pressure and different caching
> decisions.
>
> So I recommend retesting bear with a good blocksize, such as 4k, on
> the device. That also gives somewhat better performance in general
> than 512-byte blocks. There is a syscall for changing the block size
> of a device.
>
> (Block size may also be changed by the following hack: mount a fs
> like ext2 on the device, then umount it. Of course you need to use
> mkfs so this can't be done on a device that contains data.
> "mount" sets the device block size to the size used by the fs.
> After umount, the device is free to be opened directly again, but it
> retains the new blocksize.)
This sounds reasonable, and so I did some tests w.r.t. mounting and memory
pressure:
bear:~/readcachetest# top | head -n 8 | tail -n 2
Mem: 3953624k total, 775776k used, 3177848k free, 458060k buffers
Swap: 1048568k total, 0k used, 1048568k free, 31812k cached
bear:~/readcachetest# ./readcache /dev/daten/testing 1000
first run needed 12.374819 seconds, this is 80.809263 MBytes/Sec
second run needed 12.004670 seconds, this is 83.300915 MBytes/Sec
third run needed 11.898035 seconds, this is 84.047492 MBytes/Sec
bear:~/readcachetest# top | head -n 8 | tail -n 2
Mem: 3953624k total, 775648k used, 3177976k free, 457904k buffers
Swap: 1048568k total, 0k used, 1048568k free, 31900k cached
bear:~/readcachetest# mount /dev/daten/testing /testinmount/ -t ext2
bear:~/readcachetest# top | head -n 8 | tail -n 2
Mem: 3953624k total, 775720k used, 3177904k free, 457944k buffers
Swap: 1048568k total, 0k used, 1048568k free, 31860k cached
bear:~/readcachetest# ./readcache /dev/daten/testing 1000
first run needed 12.199237 seconds, this is 81.972340 MBytes/Sec
second run needed 12.028633 seconds, this is 83.134966 MBytes/Sec
third run needed 12.454502 seconds, this is 80.292251 MBytes/Sec
bear:~/readcachetest# top | head -n 8 | tail -n 2
Mem: 3953624k total, 1125432k used, 2828192k free, 808008k buffers
Swap: 1048568k total, 0k used, 1048568k free, 31792k cached
bear:~/readcachetest# ./readcache /testinmount/test.dump 1000
first run needed 11.985626 seconds, this is 83.433272 MBytes/Sec
second run needed 3.682361 seconds, this is 271.564901 MBytes/Sec
third run needed 3.638563 seconds, this is 274.833774 MBytes/Sec
bear:~/readcachetest# top | head -n 8 | tail -n 2
Mem: 3953624k total, 2150128k used, 1803496k free, 808736k buffers
Swap: 1048568k total, 0k used, 1048568k free, 1055688k cached
bear:~/readcachetest# ./readcache /dev/daten/testing 1000
first run needed 12.418232 seconds, this is 80.526761 MBytes/Sec
second run needed 12.381597 seconds, this is 80.765026 MBytes/Sec
third run needed 12.185159 seconds, this is 82.067046 MBytes/Sec
bear:~/readcachetest# top | head -n 8 | tail -n 2
Mem: 3953624k total, 2149232k used, 1804392k free, 807784k buffers
Swap: 1048568k total, 0k used, 1048568k free, 1055756k cached
bear:~/readcachetest# umount /testinmount/
bear:~/readcachetest# top | head -n 8 | tail -n 2
Mem: 3953624k total, 775728k used, 3177896k free, 457872k buffers
Swap: 1048568k total, 0k used, 1048568k free, 31728k cached
bear:~/readcachetest# ./readcache /dev/daten/testing 1000
first run needed 12.171772 seconds, this is 82.157306 MBytes/Sec
second run needed 11.842943 seconds, this is 84.438471 MBytes/Sec
third run needed 12.167059 seconds, this is 82.189131 MBytes/Sec
bear:~/readcachetest# top | head -n 8 | tail -n 2
Mem: 3953624k total, 775656k used, 3177968k free, 457876k buffers
Swap: 1048568k total, 0k used, 1048568k free, 31860k cached
So files from the mounted fs get cached, but not direct reads from the
block device, although the LVM volume is mounted and enough memory is
available.
I currently suspect this to be a problem with the cciss controller, see
my other mail to the list...
> You may also want to test with a smaller count. Reading directly from
> pagecache should be noticeably faster than reading from the cache on
> the raid controller, although either obviously is way faster than
> reading from actual disks.
bear:~/readcachetest# ./readcache /dev/daten/testing 100
first run needed 1.202330 seconds, this is 83.171841 MBytes/Sec
second run needed 0.349686 seconds, this is 285.970842 MBytes/Sec
third run needed 0.346644 seconds, this is 288.480401 MBytes/Sec
bear:~/readcachetest# mount /dev/daten/testing /testinmount/ -t ext2
bear:~/readcachetest# ./readcache /testinmount/test.dump 100
first run needed 1.205797 seconds, this is 82.932699 MBytes/Sec
second run needed 0.367004 seconds, this is 272.476594 MBytes/Sec
third run needed 0.362783 seconds, this is 275.646874 MBytes/Sec
bear:~/readcachetest# ./readcache /testinmount/test.dump 100
first run needed 0.363097 seconds, this is 275.408500 MBytes/Sec
second run needed 0.363002 seconds, this is 275.480576 MBytes/Sec
third run needed 0.362852 seconds, this is 275.594457 MBytes/Sec
bear:~/readcachetest# umount /testinmount/
bear:~/readcachetest# mount /dev/daten/testing /testinmount/ -t ext2
bear:~/readcachetest# ./readcache /testinmount/test.dump 100
first run needed 1.189227 seconds, this is 84.088235 MBytes/Sec
second run needed 0.367456 seconds, this is 272.141426 MBytes/Sec
third run needed 0.363139 seconds, this is 275.376646 MBytes/Sec
Hmm, I'm somewhat puzzled now, as 275 MB/sec seems rather low for reads
from the page cache on a 2.8 GHz Xeon machine (even my laptop gets about
400 MB/sec with this, and Skate even reaches 3486.385664 MB/sec).
No idea...
Markus.
* Re: Block Device Caching
2004-07-02 12:01 ` Markus Schaber
@ 2004-07-02 12:56 ` FabF
2004-07-05 15:07 ` Markus Schaber
0 siblings, 1 reply; 10+ messages in thread
From: FabF @ 2004-07-02 12:56 UTC (permalink / raw)
To: Markus Schaber; +Cc: Helge Hafting, Linux Kernel Mailing List
On Fri, 2004-07-02 at 14:01, Markus Schaber wrote:
> [Caching problems and test results snipped]
Did you play with hdparm -m <device>?
Regards,
FabF
* Re: Block Device Caching
2004-07-02 12:56 ` FabF
@ 2004-07-05 15:07 ` Markus Schaber
0 siblings, 0 replies; 10+ messages in thread
From: Markus Schaber @ 2004-07-05 15:07 UTC (permalink / raw)
Cc: Linux Kernel Mailing List
Hi, Fabian,
On Fri, 02 Jul 2004 14:56:09 +0200
FabF <fabian.frederick@skynet.be> wrote:
> > [Caching problems]
> Did you play with hdparm -m <device>?
I'm afraid this does not work, as these are not IDE disks:
bear:~# hdparm -m /dev/daten/testing
/dev/daten/testing not supported by hdparm
bear:~# hdparm -m /dev/cciss/c0d0p2
/dev/cciss/c0d0p2:
operation not supported on SCSI disks
Greets,
Markus