public inbox for linux-bcache@vger.kernel.org
 help / color / mirror / Atom feed
* Very poor performances with the bcache-for-upstream branch
@ 2013-04-29 15:43 Leslie Basmid
       [not found] ` <CA+XuAnJO4BCE0yj0i_CZ_iQvDj56FHwFZH302XrisD45P5R3Tw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2013-05-02  5:50 ` Gabriel de Perthuis
  0 siblings, 2 replies; 12+ messages in thread
From: Leslie Basmid @ 2013-04-29 15:43 UTC (permalink / raw)
  To: linux-bcache-u79uwXL29TY76Z2rM5mHXA@public.gmane.org

Hi all,

I have chosen to install bcache using the bcache-for-upstream branch
(recompiled this morning for the latest patches). Even though
everything is running smoothly, I am surprised by the very poor
performance I'm obtaining from my setup.

1. Everything is set up on my laptop, which has a 500 GB HD (sda) and a
16 GB SSD (sdb).
2. I have set up LVM above bcache, and /dev/sda4 is the only backing
partition. The whole thing was set up with:

make-bcache -B /dev/sda4 -C /dev/sdb

the LVM is "inside" /dev/sda4
3. I am using fio as a benchmark, and have set up writeback:
# cat /sys/block/bcache0/bcache/writeback_running
1
and I think I have followed every hint I could find about performance tuning.
Yet, when running the ssd fio test suite on a file on a partition that
is not "cached", I am obtaining the following figures:
seq-read: iops=29156
rand-read: iops=291
seq-write: iops=22355
rand-write: iops=260

Running it on a cached file system I'm obtaining:
seq-read: iops=22196
rand-read: iops=330
seq-write: iops=15864
rand-write: iops=387

What am I missing?
The benchmark parameters are:
bs=4k
ioengine=libaio
iodepth=64
size=1g
direct=1
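For reference, these parameters correspond to a fio job file along the following lines; the rw= value and the directory are assumptions (the thread does not include the actual job file), shown here for the rand-write case:

```ini
; Hypothetical reconstruction of the benchmark job -- rw= and directory=
; are assumptions; bs/ioengine/iodepth/size/direct are from the thread.
[global]
bs=4k
ioengine=libaio
iodepth=64
size=1g
direct=1
directory=/path/to/tested/fs

[rand-write]
rw=randwrite
```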

Thanks in advance for your answers,
Leslie.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Very poor performances with the bcache-for-upstream branch
       [not found] ` <CA+XuAnJO4BCE0yj0i_CZ_iQvDj56FHwFZH302XrisD45P5R3Tw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2013-04-29 17:52   ` Kent Overstreet
       [not found]     ` <CAC7rs0t4uKx37i7pxMoMQgVeUT6spDeridfdC+R6mqLbf1dwug-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 12+ messages in thread
From: Kent Overstreet @ 2013-04-29 17:52 UTC (permalink / raw)
  To: Leslie Basmid; +Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA@public.gmane.org

There's documentation for exactly this stuff, linked off the main page
of the wiki:

http://atlas.evilpiepirate.org/git/linux-bcache.git/tree/Documentation/bcache.txt?h=bcache-dev#n126

On Mon, Apr 29, 2013 at 8:43 AM, Leslie Basmid <leslie.basmid-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> Hi all,
>
> I have chosen to install bcache using the bcache-for-upstream branch
> (recompiled this morning for the latest patches). Even though
> everything is running smoothly, I am surprised by the very poor
> performance I'm obtaining from my setup.
>
> 1. Everything is setup on my laptop which has a 500GB HD (sda) and a
> 16 GB SSD (sdb).
> 2. I have setup an LVM above bcache, and /dev/sda4 is the only cache
> partition. The whole thing was setup with:
>
> make-bcache -B /dev/sda4 -C /dev/sdb
>
> the LVM is "inside" /dev/sda4
> 3. I am using fio as a benchmark, have setup writeback
> # cat /sys/block/bcache0/bcache/writeback_running
> 1
> and I think I have followed every hints I could found about performance tuning.
> Yet, when running the ssd fio test suite on a file on partition that
> is not "cached", I am obtaining the following figures:
> seq-read: iops=29156
> rand-read: iops=291
> seq-write: iops=22355
> rand-write: iops=260
>
> Running it on a cached file system I'm obtaining:
> seq-read: iops=22196
> rand-read: iops=330
> seq-write: iops=15864
> rand-write: iops=387
>
> What am I missing ?
> The benchmark parameters are:
> bs=4k
> ioengine=libaio
> iodepth=64
> size=1g
> direct=1
>
> Thanks in advance for your answers,
> Leslie.
> --
> To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 12+ messages in thread

* block activity stats
       [not found]     ` <CAC7rs0t4uKx37i7pxMoMQgVeUT6spDeridfdC+R6mqLbf1dwug-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2013-04-30  3:46       ` matthew patton
       [not found]         ` <1367293610.13617.YahooMailClassic-XYahOdtEMNn35Xbc4wGBzZOW+3bF1jUfVpNB7YpNyf8@public.gmane.org>
  2013-05-01 13:30       ` Very poor performances with the bcache-for-upstream branch Leslie Basmid
  1 sibling, 1 reply; 12+ messages in thread
From: matthew patton @ 2013-04-30  3:46 UTC (permalink / raw)
  To: linux-bcache-u79uwXL29TY76Z2rM5mHXA

Is block activity tracked exclusively by bcache code, or is bcache piggy-backing on already-existing statistics tracking implemented in LVM or the VM or (SCSI?) block layers? If the former, I was wondering how hard it would be to move it into the lower levels and have it turned on/off with sysfs.

Is there a whitepaper that goes into the mechanics of the tracking, the granularity (e.g. block region size), the (de)promotion trigger logic, etc.? Or is this a question best answered by "go read the code"?

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: block activity stats
       [not found]         ` <1367293610.13617.YahooMailClassic-XYahOdtEMNn35Xbc4wGBzZOW+3bF1jUfVpNB7YpNyf8@public.gmane.org>
@ 2013-04-30 20:31           ` Kent Overstreet
       [not found]             ` <20130430203134.GI9931-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
  0 siblings, 1 reply; 12+ messages in thread
From: Kent Overstreet @ 2013-04-30 20:31 UTC (permalink / raw)
  To: matthew patton; +Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA

On Mon, Apr 29, 2013 at 08:46:50PM -0700, matthew patton wrote:
> Is block activity tracked exclusively by Bcache code or is Bcache
> piggy-backing on already existing statistics tracking that is
> implemented in LVM or the VM or (scsi?) block layers? If the first, I
> was wondering how hard it would be to move it into the lower levels
> and turned on/off with sysfs.

What sort of activity? Do you mean which blocks are cached and which
aren't?

That sort of thing is tracked entirely within bcache.

Maybe you could tell me more specifically what you're trying to do?

> Is there a whitepaper that goes into the mechanics of the tracking,
> the granularity (eg. block region size), (de)promotion trigger logic,
> etc? Or is this a question best answered by "go read the code"?

There might be a more concise description of this stuff in the wiki, but
probably the best documentation is going to be in the code - I wrote a
lot of high level documentation and stuck it at the top of various
files:

http://atlas.evilpiepirate.org/git/linux-bcache.git/tree/drivers/md/bcache/bcache.h?h=bcache-dev

http://atlas.evilpiepirate.org/git/linux-bcache.git/tree/drivers/md/bcache/alloc.c?h=bcache-dev

http://atlas.evilpiepirate.org/git/linux-bcache.git/tree/drivers/md/bcache/btree.h?h=bcache-dev

Let me know what's missing!

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: block activity stats
       [not found]             ` <20130430203134.GI9931-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
@ 2013-05-01  1:10               ` Jason Warr
  0 siblings, 0 replies; 12+ messages in thread
From: Jason Warr @ 2013-05-01  1:10 UTC (permalink / raw)
  To: Kent Overstreet; +Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA

Hi Kent,

I'm sort of using this thread as a point to jump into one thing I am
working on.

I am working on a munin plugin and configuration settings to track the
cache and backing device statistics.  What metrics do you feel are
important to track for both the cache and the backing device?  Sysstat
will obviously capture all of the raw device I/O numbers we would need,
but to tell the whole picture I'm sure there are a lot of stats bcache
tracks that would be nice to relate to the raw I/O.

I also plan to write a script that will output raw numbers in a
fashion similar to iostat, so one can get more of an immediate
snapshot with definable intervals.
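A minimal sketch of such an iostat-style poller, assuming the cumulative counters bcache exposes under a backing device's stats_total directory (cache_hits, cache_misses); the directory is a parameter so the sketch is not tied to one device:

```shell
#!/bin/sh
# iostat-style poller for bcache's cumulative hit/miss counters.
# Sketch only: assumes the stats_total directory a bcache backing
# device exposes (cache_hits, cache_misses).

# Read one counter file, treating a missing file as 0.
bcache_stat() {
    cat "$1/$2" 2>/dev/null || echo 0
}

# bcache_poll <stats-dir> <interval-seconds> <sample-count>
bcache_poll() {
    dir=$1; interval=$2; count=$3
    prev_hits=$(bcache_stat "$dir" cache_hits)
    prev_misses=$(bcache_stat "$dir" cache_misses)
    printf '%10s %10s\n' 'hits/s' 'misses/s'
    i=0
    while [ "$i" -lt "$count" ]; do
        sleep "$interval"
        hits=$(bcache_stat "$dir" cache_hits)
        misses=$(bcache_stat "$dir" cache_misses)
        # Per-second rates from the counter deltas.
        printf '%10d %10d\n' \
            $(( (hits - prev_hits) / interval )) \
            $(( (misses - prev_misses) / interval ))
        prev_hits=$hits
        prev_misses=$misses
        i=$((i + 1))
    done
}

# e.g. bcache_poll /sys/block/bcache0/bcache/stats_total 5 10
```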

Thanks,

Jason

On 04/30/2013 03:31 PM, Kent Overstreet wrote:
> On Mon, Apr 29, 2013 at 08:46:50PM -0700, matthew patton wrote:
>> Is block activity tracked exclusively by Bcache code or is Bcache
>> piggy-backing on already existing statistics tracking that is
>> implemented in LVM or the VM or (scsi?) block layers? If the first, I
>> was wondering how hard it would be to move it into the lower levels
>> and turned on/off with sysfs.
> 
> What sort of activity? Do you mean which blocks are cached and which
> aren't?
> 
> That sort of thing is tracked entirely within bcache.
> 
> Maybe you could tell me more specifically what you're trying to do?
> 
>> Is there a whitepaper that goes into the mechanics of the tracking,
>> the granularity (eg. block region size), (de)promotion trigger logic,
>> etc? Or is this a question best answered by "go read the code"?
> 
> There might be a more concise description of this stuff in the wiki, but
> probably the best documentation is going to be in the code - I wrote a
> lot of high level documentation and stuck it at the top of various
> files:
> 
> http://atlas.evilpiepirate.org/git/linux-bcache.git/tree/drivers/md/bcache/bcache.h?h=bcache-dev
> 
> http://atlas.evilpiepirate.org/git/linux-bcache.git/tree/drivers/md/bcache/alloc.c?h=bcache-dev
> 
> http://atlas.evilpiepirate.org/git/linux-bcache.git/tree/drivers/md/bcache/btree.h?h=bcache-dev
> 
> Let me know what's missing!
> --
> To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Very poor performances with the bcache-for-upstream branch
       [not found]     ` <CAC7rs0t4uKx37i7pxMoMQgVeUT6spDeridfdC+R6mqLbf1dwug-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  2013-04-30  3:46       ` block activity stats matthew patton
@ 2013-05-01 13:30       ` Leslie Basmid
       [not found]         ` <CA+XuAnK4nNOG_tSuuteDThbj5L+5xmDsxQLfrHj314z1fr+bUA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  1 sibling, 1 reply; 12+ messages in thread
From: Leslie Basmid @ 2013-05-01 13:30 UTC (permalink / raw)
  To: linux-bcache-u79uwXL29TY76Z2rM5mHXA@public.gmane.org

Hi Kent,

this is exactly the documentation I had followed to try to track down
the bad performance issues I was experiencing.
Redoing a whole run, trying to force everything into writeback mode by
disabling both the sequential bypass
echo 0 > /sys/block/bcache0/bcache/sequential_cutoff
and the congested thresholds
echo 0 > /sys/fs/bcache/<set>/congested_read_threshold_us
echo 0 > /sys/fs/bcache/<set>/congested_write_threshold_us
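For repeatability, those three writes can be wrapped in one helper. This is only a sketch: the function takes the sysfs directories as parameters (defaulting to the paths used in this thread) so it can be exercised against a scratch tree:

```shell
#!/bin/sh
# Apply the writeback-forcing tunings from this thread in one place.
# bcache_force_writeback <bdev-sysfs-dir> <cache-sysfs-root>
bcache_force_writeback() {
    bdev=${1:-/sys/block/bcache0/bcache}
    root=${2:-/sys/fs/bcache}
    # Disable the sequential bypass on the backing device.
    echo 0 > "$bdev/sequential_cutoff"
    # Zero the congested thresholds on every registered cache set
    # (the set directories are named after their UUIDs).
    for cset in "$root"/*; do
        [ -e "$cset/congested_read_threshold_us" ] || continue
        echo 0 > "$cset/congested_read_threshold_us"
        echo 0 > "$cset/congested_write_threshold_us"
    done
}
```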


I am obtaining the following figures, on a cached fs:
seq-read: iops=12188
rand-read: iops=7392
seq-write: iops=430
rand-write: iops=454

I must be missing something, and I would really appreciate any help on
the matter.

Thanks in advance,
Leslie.

On Mon, Apr 29, 2013 at 7:52 PM, Kent Overstreet
<kent.overstreet-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
> There's documentation for exactly this stuff, linked off the main page
> of the wiki:
>
> http://atlas.evilpiepirate.org/git/linux-bcache.git/tree/Documentation/bcache.txt?h=bcache-dev#n126
>
> On Mon, Apr 29, 2013 at 8:43 AM, Leslie Basmid <leslie.basmid-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
>> Hi all,
>>
>> I have chosen to install bcache using the bcache-for-upstream branch
>> (recompiled this morning for the latest patches). Even though
>> everything is running smoothly, I am surprised by the very poor
>> performance I'm obtaining from my setup.
>>
>> 1. Everything is setup on my laptop which has a 500GB HD (sda) and a
>> 16 GB SSD (sdb).
>> 2. I have setup an LVM above bcache, and /dev/sda4 is the only cache
>> partition. The whole thing was setup with:
>>
>> make-bcache -B /dev/sda4 -C /dev/sdb
>>
>> the LVM is "inside" /dev/sda4
>> 3. I am using fio as a benchmark, have setup writeback
>> # cat /sys/block/bcache0/bcache/writeback_running
>> 1
>> and I think I have followed every hints I could found about performance tuning.
>> Yet, when running the ssd fio test suite on a file on partition that
>> is not "cached", I am obtaining the following figures:
>> seq-read: iops=29156
>> rand-read: iops=291
>> seq-write: iops=22355
>> rand-write: iops=260
>>
>> Running it on a cached file system I'm obtaining:
>> seq-read: iops=22196
>> rand-read: iops=330
>> seq-write: iops=15864
>> rand-write: iops=387
>>
>> What am I missing ?
>> The benchmark parameters are:
>> bs=4k
>> ioengine=libaio
>> iodepth=64
>> size=1g
>> direct=1
>>
>> Thanks in advance for your answers,
>> Leslie.
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
>> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Very poor performances with the bcache-for-upstream branch
       [not found]         ` <CA+XuAnK4nNOG_tSuuteDThbj5L+5xmDsxQLfrHj314z1fr+bUA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2013-05-01 15:36           ` matthew patton
       [not found]             ` <1367422594.24900.YahooMailClassic-XYahOdtEMNm2Y7dhQGSVAJOW+3bF1jUfVpNB7YpNyf8@public.gmane.org>
  0 siblings, 1 reply; 12+ messages in thread
From: matthew patton @ 2013-05-01 15:36 UTC (permalink / raw)
  To: linux-bcache-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
	Leslie Basmid

> I am obtaining the following figures, on a cached fs:
> seq-read: iops=12188
> rand-read: iops=7392
> seq-write: iops=430
> rand-write: iops=454

Just what numbers were you expecting to see? A decent 7200 RPM drive can only muster 70 IOPS on a good day. The lies the SSD vendors print in their literature and on the side of the box are almost always measured with a block size of 512 bytes. So if you're doing 4K operations, divide by 8 at least.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Very poor performances with the bcache-for-upstream branch
       [not found]             ` <1367422594.24900.YahooMailClassic-XYahOdtEMNm2Y7dhQGSVAJOW+3bF1jUfVpNB7YpNyf8@public.gmane.org>
@ 2013-05-01 16:56               ` Leslie Basmid
       [not found]                 ` <CA+XuAn+Z-gxkfeE8D-A1-EJ4_QEBUqPJYfc40jevkJFbMzQ+Rw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
  0 siblings, 1 reply; 12+ messages in thread
From: Leslie Basmid @ 2013-05-01 16:56 UTC (permalink / raw)
  To: linux-bcache-u79uwXL29TY76Z2rM5mHXA@public.gmane.org

Hi Matthew,

this is a very good question to start with. I am in fact very
surprised by two things:

1. The results I have on a cached filesystem are not that far away
from those I am getting from a non-cached FS;
2. The write performance I am getting is very far from the figures
posted for a similar benchmark on the bcache front page (which report
tens of thousands of IOPS).

I understand that my benchmark is done on a cached partition set up as
an LVM PV, with the test file laid out on an XFS-formatted volume.
This must have a cost, but is it really this huge?
I also understand that the SSD in my laptop may have poorer
performance than the one used by Kent for his benchmark, yet the
difference is huge (18.5K >> 454). Hence my raised eyebrows...

Cheers,
Leslie.

On Wed, May 1, 2013 at 5:36 PM, matthew patton <pattonme-/E1597aS9LQAvxtiuMwx3w@public.gmane.org> wrote:
>> I am obtaining the following figures, on a cached fs:
>> seq-read: iops=12188
>> rand-read: iops=7392
>> seq-write: iops=430
>> rand-write: iops=454
>
> Just what numbers were you expecting to see? A decent 7200RPM drive can only muster 70 IOPs on a good day. The lies the SSD vendors print in their literature and on the side of the box are almost always done with a blocksize of 512 bytes. So if you're doing 4K operations, divide by 8 at least.

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Very poor performances with the bcache-for-upstream branch
       [not found]                 ` <CA+XuAn+Z-gxkfeE8D-A1-EJ4_QEBUqPJYfc40jevkJFbMzQ+Rw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2013-05-01 18:56                   ` Kent Overstreet
       [not found]                     ` <20130501185638.GB4057-jC9Py7bek1znysI04z7BkA@public.gmane.org>
  0 siblings, 1 reply; 12+ messages in thread
From: Kent Overstreet @ 2013-05-01 18:56 UTC (permalink / raw)
  To: Leslie Basmid; +Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA@public.gmane.org

On Wed, May 01, 2013 at 06:56:41PM +0200, Leslie Basmid wrote:
> Hi Matthew,
> 
> this is a very good question to start with. I am in fact very
> surprised by two things:
> 
> 1. The results I have on a cached filesystem are not that far away
> from those I am getting from a not-cached FS;

> 2. The results I am getting as write performance seems very far from
> those that are exposed for a similar benchmark
> on bcache front page (accounting for tens of thousand IOPS).

Your read numbers are much better than any rotating disk will give
you - and as for the write numbers, you're still in writethrough mode.
The docs have the command you want:

# echo writeback > cache_mode

> I understand that my benchmark is done on a cached partition set up as
> a LVM, and on a file laid out on a XFS formatted VG. This must have a
> cost, but this huge ?
> I also understand that the SSD on my laptop may have poorer
> performances than the one used by Kent for his benchmark, yet the
> difference is huge (18.5K >> 454). Hence my eyebrows rising...
> 
> Cheers,
> Leslie.
> 
> On Wed, May 1, 2013 at 5:36 PM, matthew patton <pattonme-/E1597aS9LQAvxtiuMwx3w@public.gmane.org> wrote:
> >> I am obtaining the following figures, on a cached fs:
> >> seq-read: iops=12188
> >> rand-read: iops=7392
> >> seq-write: iops=430
> >> rand-write: iops=454
> >
> > Just what numbers were you expecting to see? A decent 7200RPM drive can only muster 70 IOPs on a good day. The lies the SSD vendors print in their literature and on the side of the box are almost always done with a blocksize of 512 bytes. So if you're doing 4K operations, divide by 8 at least.
> --
> To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Very poor performances with the bcache-for-upstream branch
       [not found]                     ` <20130501185638.GB4057-jC9Py7bek1znysI04z7BkA@public.gmane.org>
@ 2013-05-01 19:50                       ` Leslie Basmid
  0 siblings, 0 replies; 12+ messages in thread
From: Leslie Basmid @ 2013-05-01 19:50 UTC (permalink / raw)
  To: linux-bcache-u79uwXL29TY76Z2rM5mHXA@public.gmane.org

Hi Kent,

thank you for your answer. I did not restate it, but I am in
writeback mode. I ran the command you quoted, and the following
commands are supposed to confirm that it was correctly taken into
account (and give additional details):

# cat /sys/block/bcache0/bcache/cache_mode
writethrough [writeback] writearound none
# cat /sys/block/bcache0/bcache/writeback_running
1
# cat /sys/block/bcache0/bcache/writeback_percent
10
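The active mode is the bracketed token in that listing; a small hypothetical helper (not part of bcache) to extract it for use in scripts:

```shell
#!/bin/sh
# Extract the active (bracketed) mode from a bcache cache_mode file.
# Convenience sketch, not part of bcache itself.
active_cache_mode() {
    sed -n 's/.*\[\([^]]*\)\].*/\1/p' \
        "${1:-/sys/block/bcache0/bcache/cache_mode}"
}
```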

I just tried to reset the cache_mode value (to writeback); it did not
change anything significant.

Please tell me what data I can provide, debugging information or
whatever. I am still using the bcache-for-upstream branch.

Leslie.

On Wed, May 1, 2013 at 8:56 PM, Kent Overstreet <koverstreet-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org> wrote:
> On Wed, May 01, 2013 at 06:56:41PM +0200, Leslie Basmid wrote:
>> Hi Matthew,
>>
>> this is a very good question to start with. I am in fact very
>> surprised by two things:
>>
>> 1. The results I have on a cached filesystem are not that far away
>> from those I am getting from a not-cached FS;
>
>> 2. The results I am getting as write performance seems very far from
>> those that are exposed for a similar benchmark
>> on bcache front page (accounting for tens of thousand IOPS).
>
> Your read numbers are much better than any rotating disk will give
> you - and as for the write numbers, you're still in writethrough mode.
> The docs have the command you want:
>
> # echo writeback > cache_mode
>
>> I understand that my benchmark is done on a cached partition set up as
>> a LVM, and on a file laid out on a XFS formatted VG. This must have a
>> cost, but this huge ?
>> I also understand that the SSD on my laptop may have poorer
>> performances than the one used by Kent for his benchmark, yet the
>> difference is huge (18.5K >> 454). Hence my eyebrows rising...
>>
>> Cheers,
>> Leslie.
>>
>> On Wed, May 1, 2013 at 5:36 PM, matthew patton <pattonme-/E1597aS9LQAvxtiuMwx3w@public.gmane.org> wrote:
>> >> I am obtaining the following figures, on a cached fs:
>> >> seq-read: iops=12188
>> >> rand-read: iops=7392
>> >> seq-write: iops=430
>> >> rand-write: iops=454
>> >
>> > Just what numbers were you expecting to see? A decent 7200RPM drive can only muster 70 IOPs on a good day. The lies the SSD vendors print in their literature and on the side of the box are almost always done with a blocksize of 512 bytes. So if you're doing 4K operations, divide by 8 at least.
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
>> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Very poor performances with the bcache-for-upstream branch
  2013-04-29 15:43 Very poor performances with the bcache-for-upstream branch Leslie Basmid
       [not found] ` <CA+XuAnJO4BCE0yj0i_CZ_iQvDj56FHwFZH302XrisD45P5R3Tw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
@ 2013-05-02  5:50 ` Gabriel de Perthuis
  2013-05-02 16:13   ` Leslie Basmid
  1 sibling, 1 reply; 12+ messages in thread
From: Gabriel de Perthuis @ 2013-05-02  5:50 UTC (permalink / raw)
  To: linux-bcache-u79uwXL29TY76Z2rM5mHXA

> 2. I have setup an LVM above bcache, and /dev/sda4 is the only cache
> partition. The whole thing was setup with:
> 
> make-bcache -B /dev/sda4 -C /dev/sdb
> 
> the LVM is "inside" /dev/sda4
> 3. I am using fio as a benchmark, have setup writeback
> # cat /sys/block/bcache0/bcache/writeback_running
> 1
> and I think I have followed every hints I could found about performance tuning.
> Yet, when running the ssd fio test suite on a file on partition that
> is not "cached", I am obtaining the following figures:
> seq-read: iops=29156
> rand-read: iops=291
> seq-write: iops=22355
> rand-write: iops=260
> 
> Running it on a cached file system I'm obtaining:
> seq-read: iops=22196
> rand-read: iops=330
> seq-write: iops=15864
> rand-write: iops=387
> 
> What am I missing ?

Outside of the make-bcache, none of the commands you give in the thread
prove that the cache is assembled (it's actually possible to have
writeback_running = 1 with a detached bdev, though that's a bug in my
opinion).  What does `ls -d /sys/fs/bcache/*/bdev*` show?
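A slightly more thorough check can be scripted; the sketch below assumes the per-bdev `state` file described in bcache.txt, which reads clean/dirty when attached and "no cache" when detached:

```shell
#!/bin/sh
# Enumerate attached backing devices and their state. An attached bdev
# appears as bdev<N> under its cache set; its `state` file (per
# bcache.txt) reads clean/dirty rather than "no cache".
check_attached() {
    root=${1:-/sys/fs/bcache}
    found=0
    for bdev in "$root"/*/bdev*; do
        [ -e "$bdev" ] || continue
        found=1
        printf '%s -> %s\n' "$bdev" \
            "$(cat "$bdev/state" 2>/dev/null || echo unknown)"
    done
    [ "$found" -eq 1 ] || echo "no attached backing devices under $root"
}
```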

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: Very poor performances with the bcache-for-upstream branch
  2013-05-02  5:50 ` Gabriel de Perthuis
@ 2013-05-02 16:13   ` Leslie Basmid
  0 siblings, 0 replies; 12+ messages in thread
From: Leslie Basmid @ 2013-05-02 16:13 UTC (permalink / raw)
  To: linux-bcache-u79uwXL29TY76Z2rM5mHXA@public.gmane.org

Hello Gabriel,

thank you for your interest in my problem!

I'm quite sure that my cache is assembled: omitting to register my two
devices in linuxrc leads to my LVM not being found (which is quite
logical, I think):

echo /dev/sda4 > /sys/fs/bcache/register
echo /dev/sdb > /sys/fs/bcache/register
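A more defensive form of that linuxrc step might look as follows; the retry loop and its length are assumptions about how long the composite device takes to appear:

```shell
#!/bin/sh
# Register backing and cache devices, then wait for the composite
# device node to appear before LVM scans run.
# bcache_register <node-to-wait-for> <device>...
bcache_register() {
    node=$1; shift
    for dev in "$@"; do
        # Re-registering an already-known device fails; ignore that.
        echo "$dev" 2>/dev/null > /sys/fs/bcache/register || true
    done
    i=0
    while [ "$i" -lt 10 ]; do
        [ -e "$node" ] && return 0
        sleep 1
        i=$((i + 1))
    done
    echo "$node did not appear" >&2
    return 1
}
# e.g. bcache_register /dev/bcache0 /dev/sda4 /dev/sdb
```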

Anyway, here is the result of the command you requested:

$ ls -d /sys/fs/bcache/*/bdev*
/sys/fs/bcache/< cache set >/bdev0

I'd be glad to provide more info as needed.
Thanks in advance,

Leslie.

On Thu, May 2, 2013 at 7:50 AM, Gabriel de Perthuis <g2p.code-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org> wrote:
>> 2. I have setup an LVM above bcache, and /dev/sda4 is the only cache
>> partition. The whole thing was setup with:
>>
>> make-bcache -B /dev/sda4 -C /dev/sdb
>>
>> the LVM is "inside" /dev/sda4
>> 3. I am using fio as a benchmark, have setup writeback
>> # cat /sys/block/bcache0/bcache/writeback_running
>> 1
>> and I think I have followed every hints I could found about performance tuning.
>> Yet, when running the ssd fio test suite on a file on partition that
>> is not "cached", I am obtaining the following figures:
>> seq-read: iops=29156
>> rand-read: iops=291
>> seq-write: iops=22355
>> rand-write: iops=260
>>
>> Running it on a cached file system I'm obtaining:
>> seq-read: iops=22196
>> rand-read: iops=330
>> seq-write: iops=15864
>> rand-write: iops=387
>>
>> What am I missing ?
>
> Outside of the make-bcache, none of the commands you give in the thread
> prove that the cache is assembled (it's actually possible to have
> writeback_running = 1 with a detached bdev, though that's a bug in my
> opinion).  What does `ls -d /sys/fs/bcache/*/bdev*` show?
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-bcache" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2013-05-02 16:13 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2013-04-29 15:43 Very poor performances with the bcache-for-upstream branch Leslie Basmid
     [not found] ` <CA+XuAnJO4BCE0yj0i_CZ_iQvDj56FHwFZH302XrisD45P5R3Tw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-04-29 17:52   ` Kent Overstreet
     [not found]     ` <CAC7rs0t4uKx37i7pxMoMQgVeUT6spDeridfdC+R6mqLbf1dwug-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-04-30  3:46       ` block activity stats matthew patton
     [not found]         ` <1367293610.13617.YahooMailClassic-XYahOdtEMNn35Xbc4wGBzZOW+3bF1jUfVpNB7YpNyf8@public.gmane.org>
2013-04-30 20:31           ` Kent Overstreet
     [not found]             ` <20130430203134.GI9931-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
2013-05-01  1:10               ` Jason Warr
2013-05-01 13:30       ` Very poor performances with the bcache-for-upstream branch Leslie Basmid
     [not found]         ` <CA+XuAnK4nNOG_tSuuteDThbj5L+5xmDsxQLfrHj314z1fr+bUA-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-05-01 15:36           ` matthew patton
     [not found]             ` <1367422594.24900.YahooMailClassic-XYahOdtEMNm2Y7dhQGSVAJOW+3bF1jUfVpNB7YpNyf8@public.gmane.org>
2013-05-01 16:56               ` Leslie Basmid
     [not found]                 ` <CA+XuAn+Z-gxkfeE8D-A1-EJ4_QEBUqPJYfc40jevkJFbMzQ+Rw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2013-05-01 18:56                   ` Kent Overstreet
     [not found]                     ` <20130501185638.GB4057-jC9Py7bek1znysI04z7BkA@public.gmane.org>
2013-05-01 19:50                       ` Leslie Basmid
2013-05-02  5:50 ` Gabriel de Perthuis
2013-05-02 16:13   ` Leslie Basmid

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox