From: Alexey Fisher <bug-track@fisher-privat.net>
To: Sitsofe Wheeler <sitsofe@yahoo.com>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: smart cache. ist is possible?
Date: Mon, 16 Mar 2009 08:34:53 +0100 [thread overview]
Message-ID: <49BE011D.5080807@fisher-privat.net> (raw)
In-Reply-To: <20090315232614.GA27452@silver.sucs.org>
Sitsofe Wheeler wrote:
> On Sun, Mar 15, 2009 at 11:06:34PM +0100, Alexey Fisher wrote:
>> That is not what I mean. I know how to clear the cache, but that is
>> exactly what I do not want. I want to use the cache, and it works
>> perfectly with small files.
>
> I meant for timings on the small files, otherwise how do you know
> exactly which pages were floating around the cache?
>
>> But there is a problem with big files. For example, I have 4 GB of
>> RAM; if I read a 4.6 GB file, the cache is useless. The question is:
>> is there any way to work around this, other than more RAM?
>
> I suspect what is happening is that you are cycling the cache. Because
> you can't hold everything and you are reading the file sequentially you
> will successfully have cleared the cache of the start of the file by the
> time you start again (so first bit gets evicted by the time last bit is
> read etc). If you use dd bs=1000M count=1 I think you will find that the
> kernel CAN cache pieces of files but as pointed out elsewhere, without
> knowing the future what do you decide to keep when your cache is full?
>
> At a guess, you either need to provide a hint (e.g. bypassing the
> cache for some of the file so it doesn't become full, or locking
> specific pages into RAM) or create a bigger cache somehow (e.g. by
> buying more RAM).
Just to make sure I understand you.
For example:
I have only enough RAM to cache 5 blocks from the hard drive, and it is
empty:
|0|0|0|0|0|
I read some file consisting of blocks 1-10 ( dd if=somefile ). At the
beginning of the read it will cache the first 5 blocks ( |1|2|3|4|5| ),
and once no space is left in the cache it will replace the old blocks
with the new ones ( |6|7|8|9|10| ).
If I read the same somefile a second time, normally the same thing will
happen. It tries to read block 1, which is not in the cache, so block 1
is cached and block 6 is pushed out ( |1|7|8|9|10| ). So it will
completely replace the entire cache.
Or I could tell it: read somefile, but only blocks 6-10, so I can get
the performance benefit of the cache.
Or the OS would need to get the list of blocks belonging to somefile
and the list of cached blocks, and check whether any of them are
cached. If they are, it should lock the cache, read blocks 1-5 without
caching them, and read blocks 6-10 from the cache, then unlock the
cache afterwards. But this is not possible, because this operation is
too expensive.
Is this what you mean?
Thank you.
Alexey.
Thread overview: 9+ messages
2009-03-15 15:28 smart cache. ist is possible? Alexey Fisher
2009-03-15 18:13 ` Sitsofe Wheeler
2009-03-15 22:06 ` Alexey Fisher
2009-03-15 23:26 ` Sitsofe Wheeler
2009-03-16 7:34 ` Alexey Fisher [this message]
2009-03-16 17:36 ` Sitsofe Wheeler
2009-03-15 22:23 ` Dave Chinner
2009-03-16 13:02 ` Pádraig Brady
2009-03-16 15:15 ` Paulo Marques