linux-lvm.redhat.com archive mirror
* [linux-lvm] Caching policy in machine learning context
@ 2017-02-13 10:58 Jonas Degrave
  2017-02-13 12:55 ` Zdenek Kabelac
  0 siblings, 1 reply; 6+ messages in thread
From: Jonas Degrave @ 2017-02-13 10:58 UTC
  To: linux-lvm

Hi,

We are a group of scientists who work on reasonably sized datasets
(10-100GB). Because we had trouble managing our SSDs (everyone likes to
have their data on the SSD), I set up a caching system in which a 500GB SSD
caches the 4TB HDD. This way, everybody has their data virtually on the
SSD, and only the first pass through a dataset is slow; afterwards the data
is cached and reads are fast.
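
For reference, the cache was created along these lines (the device names
and sizes below are illustrative placeholders, not the exact commands):

  pvcreate /dev/sdb1 /dev/sdc1          # 4TB HDD and 500GB SSD
  vgcreate VG /dev/sdb1 /dev/sdc1
  lvcreate -n lv -L 3.5T VG /dev/sdb1   # origin LV on the HDD
  lvcreate --type cache-pool -n cpool -L 450G VG /dev/sdc1
  lvconvert --type cache --cachepool VG/cpool VG/lv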

I used lvm-cache for this. Yet the (only) smq policy seems very reluctant
to promote data to the cache, whereas what we need is for data to be
promoted basically on the first read: if someone is working on a dataset on
this machine, they will most likely go over it a couple of hundred times in
the following hours.

Right now, after a week of testing lvm-cache with the smq policy, it looks
like this:

jdgrave@kat:~$ sudo ./lvmstats
> start              0
> end                7516192768
> segment_type       cache
> md_block_size      8
> md_utilization     14353/1179648
> cache_block_size   128
> cache_utilization  7208960/7208960
> read_hits          19954892
> read_misses        84623959
> read_hit_ratio     19.08%
> write_hits         672621
> write_misses       7336700
> write_hit_ratio    8.40%
> demotions          151757
> promotions         151757
> dirty              0
> features           1


jdgrave@kat:~$ sudo ./lvmcache-statistics.sh
> -------------------------------------------------------------------------
> LVM [2.02.133(2)] cache report of found device /dev/VG/lv
> -------------------------------------------------------------------------
> - Cache Usage: 100.0% - Metadata Usage: 1.2%
> - Read Hit Rate: 19.0% - Write Hit Rate: 8.3%
> - Demotions/Promotions/Dirty: 151757/151757/0
> - Feature arguments in use: writeback
> - Core arguments in use : migration_threshold 2048 smq 0
>   - Cache Policy: stochastic multiqueue (smq)
> - Cache Metadata Mode: rw
> - MetaData Operation Health: ok
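
(The same counters can also be read straight from the dm-cache status
line. A minimal sketch, assuming the mapped device is named VG-lv:

  sudo dmsetup status VG-lv | awk '{
      split($7, cb, "/")                  # used/total cache blocks
      printf "cache utilization: %.1f%%\n", 100*cb[1]/cb[2]
      printf "read hit ratio:    %.2f%%\n", 100*$8/($8+$9)
      printf "write hit ratio:   %.2f%%\n", 100*$10/($10+$11)
  }'

After the start/length/"cache" prefix and the metadata and cache block
counts, fields $8-$11 are read hits, read misses, write hits and write
misses.)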


The number of promotions has been very low, even though the read hit rate
is low as well. This is with a 450GB cache and currently only 614GB of data
on the cached device. A read hit rate below 20%, when randomly caching
blocks would already have achieved about 73% (450GB/614GB), is not what I
had hoped for.

Is there a way to make the caching much more aggressive? Are there settings
I can tweak?
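
(The only tunable I have spotted so far is the migration_threshold core
argument, reported above as 2048, i.e. 2048 sectors or 1MiB, which
throttles how much data dm-cache may migrate. If this LVM version supports
it, something like

  sudo lvchange --cachesettings 'migration_threshold=16384' VG/lv

should raise that cap, but I do not know whether that alone would make smq
promote on the first read.)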

Yours sincerely,

Jonas


Thread overview: 6+ messages
2017-02-13 10:58 [linux-lvm] Caching policy in machine learning context Jonas Degrave
2017-02-13 12:55 ` Zdenek Kabelac
2017-02-13 14:19   ` Jonas Degrave
2017-02-13 14:33     ` Zdenek Kabelac
2017-02-15 13:30       ` Jonas Degrave
2017-02-16 10:29         ` Zdenek Kabelac
