* backporting bcache-testing to kernel 3.4
@ 2013-04-22 19:21 Dongsu Park
From: Dongsu Park @ 2013-04-22 19:21 UTC (permalink / raw)
To: linux-bcache-u79uwXL29TY76Z2rM5mHXA
Hi,
Has anyone succeeded in backporting the bcache(-testing) branch to kernel 3.4?
I tried to get it working by applying commits from the bcache-testing branch,
as well as Kent's block-layer patches, on top of kernel 3.4.23.
* writeback mode
* backing device: MD-RAID0, or a HDD
* cache device: loop device on tmpfs (for testing)
* testing command:
dd if=/dev/zero of=/dev/bcache0 bs=4K count=1M oflag=sync
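Roughly, the setup looks like this (paths and sizes are examples, not the exact
devices used; the `<cset-uuid>` placeholder is the cache set UUID printed by
make-bcache):

```shell
# cache device: a loop device backed by tmpfs (test only -- contents vanish on reboot)
mount -t tmpfs -o size=1G tmpfs /mnt/tmpfs
dd if=/dev/zero of=/mnt/tmpfs/cache.img bs=1M count=1024
losetup /dev/loop0 /mnt/tmpfs/cache.img

# register a backing device (MD-RAID0 in one test) and a cache device
make-bcache -B /dev/md0
make-bcache -C /dev/loop0

# attach the cache set to the backing device, then switch to writeback
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
echo writeback > /sys/block/bcache0/bcache/cache_mode
```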
Everything works fine except for a problem with sync operations:
every write request is immediately flushed to the backing device,
which is exactly what writeback caching is supposed to avoid.
Observing block I/O with iostat, the odd thing is that no data is actually
written to the backing device despite the high IOPS, e.g.:
Device:  rrqm/s  wrqm/s  r/s   w/s      rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
vdb      0.00    0.00    0.00  2550.00  0.00   0.00   0.00      2.05      0.81   0.00     0.81     0.26   67.00
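A quick check of those numbers: 2550 write requests per second with 0 MB/s
written means the average request carries no payload, which would be consistent
with a stream of empty cache-flush requests rather than real writeback:

```shell
# average payload per write request, from the vdb line above:
# (wMB/s * 1024 * 1024) / (w/s)
awk 'BEGIN { printf "%.1f\n", (0.00 * 1024 * 1024) / 2550.00 }'
# prints 0.0 -> each write request carries no data
```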
If I test the bcache-testing branch on kernel 3.9-rc3, i.e. without
backporting, it works well, without these massive bogus syncs.
bcache-3.2 is also fine. So I suspect some change in the block layer
between 3.4 and 3.9 is making bcache behave this strangely.
Any idea?
Dongsu
* Re: backporting bcache-testing to kernel 3.4
@ 2013-04-24 20:41 ` Kent Overstreet
From: Kent Overstreet @ 2013-04-24 20:41 UTC (permalink / raw)
To: Dongsu Park; +Cc: linux-bcache-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
This sounds like the bug I _think_ I just fixed -- do you suppose you
could try again with the current bcache branch?
On Mon, Apr 22, 2013 at 12:21 PM, Dongsu Park
<dongsu.park-EIkl63zCoXaH+58JC4qpiA@public.gmane.org> wrote:
> Hi,
>
> Has anyone succeeded in backporting the bcache(-testing) branch to kernel 3.4?
>
> I tried to get it working by applying commits from the bcache-testing branch,
> as well as Kent's block-layer patches, on top of kernel 3.4.23.
>
> * writeback mode
> * backing device: MD-RAID0, or a HDD
> * cache device: loop device on tmpfs (for testing)
> * testing command:
> dd if=/dev/zero of=/dev/bcache0 bs=4K count=1M oflag=sync
>
> Everything works fine except for a problem with sync operations:
> every write request is immediately flushed to the backing device,
> which is exactly what writeback caching is supposed to avoid.
> Observing block I/O with iostat, the odd thing is that no data is actually
> written to the backing device despite the high IOPS, e.g.:
>
> Device:  rrqm/s  wrqm/s  r/s   w/s      rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
> vdb      0.00    0.00    0.00  2550.00  0.00   0.00   0.00      2.05      0.81   0.00     0.81     0.26   67.00
>
> If I test the bcache-testing branch on kernel 3.9-rc3, i.e. without
> backporting, it works well, without these massive bogus syncs.
> bcache-3.2 is also fine. So I suspect some change in the block layer
> between 3.4 and 3.9 is making bcache behave this strangely.
>
> Any idea?
>
> Dongsu
>