* (unknown),
@ 2013-01-23 12:37 Nitin Kshirsagar
0 siblings, 0 replies; 56+ messages in thread
From: Nitin Kshirsagar @ 2013-01-23 12:37 UTC (permalink / raw)
To: linux-bcache-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
kent.overstreet-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org
Cc: Amit Kale, Sanoj Unnikrishnan
Hello Kent,
I have set up bcache and run fio data verification tests on writeback and writethrough caches.
The fio tests passed; however, I found the following issues while using bcache.
Issue1: Cache is not created as per user specified options
--------------------------------------------------------------------------------------------------------
Steps:
1. Create a cache, specifying writeback mode and the fifo cache replacement policy:
[root@annu bcache]# make-bcache --cache /dev/sdc --bdev /dev/sdd --writeback --cache_replacement_policy=fifo
UUID: e25f2840-f02b-46af-81e7-28948b2737cc
Set UUID: 68da5b89-1e87-457a-80c7-2c822737f969
nbuckets: 2048
block_size: 1
bucket_size: 1024
nr_in_set: 1
nr_this_dev: 0
first_bucket: 1
UUID: a3ce52e6-631b-4c74-afa2-9f8b0088c7f4
Set UUID: 68da5b89-1e87-457a-80c7-2c822737f969
nbuckets: 20480
block_size: 1
bucket_size: 1024
nr_in_set: 1
nr_this_dev: 0
first_bucket: 1
[root@annu bcache]# echo /dev/sdc > /sys/fs/bcache/register
[root@annu bcache]# echo /dev/sdd > /sys/fs/bcache/register
2. The cache mode should be "writeback", but it shows "writethrough":
[root@annu bcache]# cat /sys/block/bcache2/bcache/cache_mode
[writethrough] writeback writearound none
[root@annu bcache]# cat /sys/block/bcache2/bcache/writeback_running
1
3. The cache replacement policy should be "fifo", but it shows "lru":
[root@annu ~]# cat /sys/block/bcache2/bcache/cache/cache0/cache_replacement_policy
[lru] fifo random
Issue2: Cache dirty data value should not be negative.
-------------------------------------------------------------------------------------------------------
Steps:
1. Create a cache, specifying writeback mode and the fifo cache replacement policy.
2. Register the devices to make them known to the kernel:
[root@annu bcache]# echo /dev/sdc > /sys/fs/bcache/register
[root@annu bcache]# echo /dev/sdd > /sys/fs/bcache/register
3. Create a filesystem on the cached device /dev/bcacheN and mount it.
4. Create a data set on the mount point using fio or dd.
5. Change the cache mode from "writethrough" to "writeback":
[root@annu ~]# echo writeback > /sys/block/bcache2/bcache/cache_mode
[root@annu ~]# cat /sys/block/bcache2/bcache/cache_mode
writethrough [writeback] writearound none
6. Check the dirty data value; it should not be negative:
[root@annu ~]# cat /sys/block/bcache2/bcache/dirty_data
-9.4M
--
Thanks & Regards
Nitin Kshirsagar
Software Engr, QA
Cell 997.566.3985
STEC india private Limited, Pune | The SSD Company TM
NASDAQ STEC • Web www.stec-inc.com
^ permalink raw reply [flat|nested] 56+ messages in thread
* (unknown),
@ 2013-05-17 14:47 sheng qiu
0 siblings, 0 replies; 56+ messages in thread
From: sheng qiu @ 2013-05-17 14:47 UTC (permalink / raw)
To: linux-bcache-u79uwXL29TY76Z2rM5mHXA
subscribe linux-bcache
--
Sheng Qiu
Texas A & M University
Room 332B Wisenbaker
email: herbert1984106-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org
College Station, TX 77843-3259
^ permalink raw reply [flat|nested] 56+ messages in thread
* (unknown)
@ 2013-08-30 4:17 Peter Kieser
0 siblings, 0 replies; 56+ messages in thread
From: Peter Kieser @ 2013-08-30 4:17 UTC (permalink / raw)
To: linux-bcache-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: Type: text/plain, Size: 26 bytes --]
subscribe linux-bcache
[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 4504 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* (unknown),
@ 2014-08-06 12:06 Daniel Smedegaard Buus
0 siblings, 0 replies; 56+ messages in thread
From: Daniel Smedegaard Buus @ 2014-08-06 12:06 UTC (permalink / raw)
To: linux-bcache
Hi :)
I just tried upgrading a server of mine from mainline kernel 3.15.4 to
3.16, and upon reboot there was no bcache device in sight. Grepping
dmesg for bcache yielded empty output, and besides that, dmesg was full
of oopses.
This is a live server, so I immediately removed kernel 3.16, and upon
reboot bcache was back and happy.
I know this isn't very useful for debugging, but as a general
question: is there something I should have done prior to upgrading, or
might this be a bug of some sort?
Cheers,
Daniel
^ permalink raw reply [flat|nested] 56+ messages in thread
* (unknown)
2014-10-13 13:30 ` Zheng Liu
@ 2014-10-13 18:13 ` Eric Wheeler
0 siblings, 0 replies; 56+ messages in thread
From: Eric Wheeler @ 2014-10-13 18:13 UTC (permalink / raw)
To: linux-bcache
unsubscribe linux-bcache
^ permalink raw reply [flat|nested] 56+ messages in thread
* (unknown),
@ 2016-09-17 19:57 Chris Clemons
0 siblings, 0 replies; 56+ messages in thread
From: Chris Clemons @ 2016-09-17 19:57 UTC (permalink / raw)
To: linux-bcache
subscribe bcachefs
^ permalink raw reply [flat|nested] 56+ messages in thread
* (unknown),
@ 2016-09-17 20:00 Chris Clemons
0 siblings, 0 replies; 56+ messages in thread
From: Chris Clemons @ 2016-09-17 20:00 UTC (permalink / raw)
To: linux-bcache
subscribe
^ permalink raw reply [flat|nested] 56+ messages in thread
* (unknown),
@ 2017-09-27 17:41 Michael Lyle
2017-09-27 17:41 ` [PATCH v3 1/5] bcache: don't write back data if reading it failed Michael Lyle
` (4 more replies)
0 siblings, 5 replies; 56+ messages in thread
From: Michael Lyle @ 2017-09-27 17:41 UTC (permalink / raw)
To: linux-bcache
Hey everyone---
After the review comments from last night, I'm back to try again :)
Thanks everyone for your help-- comments on what's changed and how
#4 helps with future work (why it's slightly more complicated) are
below.
Mike
Changes from last night:
- Changed lots of comment formatting to match the rest of bcache-style.
- Fixed a bug noticed by Tang Junhui where contiguous I/O would not be
dispatched together.
- Changed the magic number '5' and '5000' to the macros
MAX_WRITEBACKS_IN_PASS and MAX_WRITESIZE_IN_PASS
- Slight improvements to patch logs.
The net result of all these changes is better IO utilization during
writeback. More contiguous I/O happens (whether during idle times or
when there is more activity). Contiguous I/O is sent in proper order
to the backing device. The control system picks better writeback
rate targets and the system can better hit them.
This is what I plan to work on next, in subsequent patches:
- Add code to skip doing small I/Os when A) there are larger I/Os in
the set, and B) the end of disk wasn't reached when scanning. In
other words, try writing out the bigger contiguous chunks of writeback
first; give the other blocks time to end up with a larger extent next
to them. This depends on patch 4, because it understands the true
contiguous backing I/O size and isn't fooled by smaller extents.
- Adjust bch_next_delay to store the reciprocal of what it currently
does, and remove the bounds on maximum-sleep-time. Instead, enforce
a maximum sleep time at the writeback loop. This will allow us to go
a long time (hundreds of seconds) without writing to the disk at all,
while still being ready to respond quickly to any increases in requested
writeback rate. This depends on patch 4, which slightly changes the
formulation of the delay.
- Add a "fast writeback" mode, that is for use when the disk is idle.
If enabled, and there has been no I/O, it will issue one (contiguous)
write at a time at IOPRIO_CLASS_IDLE, with no delay in between (bypassing
the control system). The fact that there is only one I/O and they are
at minimum IOPRIO means that good latency for the first user I/O request
will be maintained-- because they only need to compete with one writeback
I/O in the queue which is set to low priority. This depends on patch 4 in
order to correctly merge contiguous requests in this mode.
- Add code to plug the backing device when there are more contiguous
requests coming. This requires patch 4 (to be able to mark requests
to expect additional contiguous requests after them) and patch 5
(to properly order the I/O for the backing device). This will help
ensure the scheduler will properly merge operations (it usually works
now, but not always).
- Add code to lower writeback IOPRIO when the rate is easily being met,
so that end-user IO requests "win".
^ permalink raw reply [flat|nested] 56+ messages in thread
* [PATCH v3 1/5] bcache: don't write back data if reading it failed
2017-09-27 17:41 (unknown), Michael Lyle
@ 2017-09-27 17:41 ` Michael Lyle
2017-09-27 17:41 ` [PATCH v3 2/5] bcache: implement PI controller for writeback rate Michael Lyle
` (3 subsequent siblings)
4 siblings, 0 replies; 56+ messages in thread
From: Michael Lyle @ 2017-09-27 17:41 UTC (permalink / raw)
To: linux-bcache; +Cc: Michael Lyle
If an IO operation fails, and we didn't successfully read data from the
cache, don't writeback invalid/partial data to the backing disk.
Signed-off-by: Michael Lyle <mlyle@lyle.org>
Reviewed-by: Coly Li <colyli@suse.de>
---
drivers/md/bcache/writeback.c | 20 ++++++++++++++------
1 file changed, 14 insertions(+), 6 deletions(-)
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index e663ca082183..5e65a392287d 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -179,13 +179,21 @@ static void write_dirty(struct closure *cl)
struct dirty_io *io = container_of(cl, struct dirty_io, cl);
struct keybuf_key *w = io->bio.bi_private;
- dirty_init(w);
- bio_set_op_attrs(&io->bio, REQ_OP_WRITE, 0);
- io->bio.bi_iter.bi_sector = KEY_START(&w->key);
- bio_set_dev(&io->bio, io->dc->bdev);
- io->bio.bi_end_io = dirty_endio;
+ /*
+ * IO errors are signalled using the dirty bit on the key.
+ * If we failed to read, we should not attempt to write to the
+ * backing device. Instead, immediately go to write_dirty_finish
+ * to clean up.
+ */
+ if (KEY_DIRTY(&w->key)) {
+ dirty_init(w);
+ bio_set_op_attrs(&io->bio, REQ_OP_WRITE, 0);
+ io->bio.bi_iter.bi_sector = KEY_START(&w->key);
+ bio_set_dev(&io->bio, io->dc->bdev);
+ io->bio.bi_end_io = dirty_endio;
- closure_bio_submit(&io->bio, cl);
+ closure_bio_submit(&io->bio, cl);
+ }
continue_at(cl, write_dirty_finish, io->dc->writeback_write_wq);
}
--
2.11.0
^ permalink raw reply related [flat|nested] 56+ messages in thread
* [PATCH v3 2/5] bcache: implement PI controller for writeback rate
2017-09-27 17:41 (unknown), Michael Lyle
2017-09-27 17:41 ` [PATCH v3 1/5] bcache: don't write back data if reading it failed Michael Lyle
@ 2017-09-27 17:41 ` Michael Lyle
2017-10-08 4:22 ` Coly Li
2017-09-27 17:41 ` [PATCH v3 3/5] bcache: smooth writeback rate control Michael Lyle
` (2 subsequent siblings)
4 siblings, 1 reply; 56+ messages in thread
From: Michael Lyle @ 2017-09-27 17:41 UTC (permalink / raw)
To: linux-bcache; +Cc: Michael Lyle
bcache uses a control system to attempt to keep the amount of dirty data
in cache at a user-configured level, while not responding excessively to
transients and variations in write rate. Previously, the system was a
PD controller; but the output from it was integrated, turning the
Proportional term into an Integral term, and turning the Derivative term
into a crude Proportional term. Performance of the controller has been
uneven in production, and it has tended to respond slowly, oscillate,
and overshoot.
This patch set replaces the current control system with an explicit PI
controller and tuning that should be correct for most hardware. By
default, it attempts to write at a rate that would retire 1/40th of the
current excess blocks per second. An integral term in turn works to
remove steady state errors.
IMO, this yields benefits in simplicity (removing weighted average
filtering, etc) and system performance.
Another small change: a tunable parameter is introduced to let the
user specify a minimum rate at which dirty blocks are retired.
There is a slight difference from earlier versions of the patch in
integral handling to prevent excessive negative integral windup.
Signed-off-by: Michael Lyle <mlyle@lyle.org>
Reviewed-by: Coly Li <colyli@suse.de>
---
drivers/md/bcache/bcache.h | 9 ++---
drivers/md/bcache/sysfs.c | 18 +++++----
drivers/md/bcache/writeback.c | 91 ++++++++++++++++++++++++-------------------
3 files changed, 66 insertions(+), 52 deletions(-)
diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index 2ed9bd231d84..eb83be693d60 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -265,9 +265,6 @@ struct bcache_device {
atomic_t *stripe_sectors_dirty;
unsigned long *full_dirty_stripes;
- unsigned long sectors_dirty_last;
- long sectors_dirty_derivative;
-
struct bio_set *bio_split;
unsigned data_csum:1;
@@ -362,12 +359,14 @@ struct cached_dev {
uint64_t writeback_rate_target;
int64_t writeback_rate_proportional;
- int64_t writeback_rate_derivative;
+ int64_t writeback_rate_integral;
+ int64_t writeback_rate_integral_scaled;
int64_t writeback_rate_change;
unsigned writeback_rate_update_seconds;
- unsigned writeback_rate_d_term;
+ unsigned writeback_rate_i_term_inverse;
unsigned writeback_rate_p_term_inverse;
+ unsigned writeback_rate_minimum;
};
enum alloc_reserve {
diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c
index 104c57cd666c..eb493814759c 100644
--- a/drivers/md/bcache/sysfs.c
+++ b/drivers/md/bcache/sysfs.c
@@ -81,8 +81,9 @@ rw_attribute(writeback_delay);
rw_attribute(writeback_rate);
rw_attribute(writeback_rate_update_seconds);
-rw_attribute(writeback_rate_d_term);
+rw_attribute(writeback_rate_i_term_inverse);
rw_attribute(writeback_rate_p_term_inverse);
+rw_attribute(writeback_rate_minimum);
read_attribute(writeback_rate_debug);
read_attribute(stripe_size);
@@ -130,15 +131,16 @@ SHOW(__bch_cached_dev)
sysfs_hprint(writeback_rate, dc->writeback_rate.rate << 9);
var_print(writeback_rate_update_seconds);
- var_print(writeback_rate_d_term);
+ var_print(writeback_rate_i_term_inverse);
var_print(writeback_rate_p_term_inverse);
+ var_print(writeback_rate_minimum);
if (attr == &sysfs_writeback_rate_debug) {
char rate[20];
char dirty[20];
char target[20];
char proportional[20];
- char derivative[20];
+ char integral[20];
char change[20];
s64 next_io;
@@ -146,7 +148,7 @@ SHOW(__bch_cached_dev)
bch_hprint(dirty, bcache_dev_sectors_dirty(&dc->disk) << 9);
bch_hprint(target, dc->writeback_rate_target << 9);
bch_hprint(proportional,dc->writeback_rate_proportional << 9);
- bch_hprint(derivative, dc->writeback_rate_derivative << 9);
+ bch_hprint(integral, dc->writeback_rate_integral_scaled << 9);
bch_hprint(change, dc->writeback_rate_change << 9);
next_io = div64_s64(dc->writeback_rate.next - local_clock(),
@@ -157,11 +159,11 @@ SHOW(__bch_cached_dev)
"dirty:\t\t%s\n"
"target:\t\t%s\n"
"proportional:\t%s\n"
- "derivative:\t%s\n"
+ "integral:\t%s\n"
"change:\t\t%s/sec\n"
"next io:\t%llims\n",
rate, dirty, target, proportional,
- derivative, change, next_io);
+ integral, change, next_io);
}
sysfs_hprint(dirty_data,
@@ -213,7 +215,7 @@ STORE(__cached_dev)
dc->writeback_rate.rate, 1, INT_MAX);
d_strtoul_nonzero(writeback_rate_update_seconds);
- d_strtoul(writeback_rate_d_term);
+ d_strtoul(writeback_rate_i_term_inverse);
d_strtoul_nonzero(writeback_rate_p_term_inverse);
d_strtoi_h(sequential_cutoff);
@@ -319,7 +321,7 @@ static struct attribute *bch_cached_dev_files[] = {
&sysfs_writeback_percent,
&sysfs_writeback_rate,
&sysfs_writeback_rate_update_seconds,
- &sysfs_writeback_rate_d_term,
+ &sysfs_writeback_rate_i_term_inverse,
&sysfs_writeback_rate_p_term_inverse,
&sysfs_writeback_rate_debug,
&sysfs_dirty_data,
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 5e65a392287d..cac8678da5d0 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -25,48 +25,62 @@ static void __update_writeback_rate(struct cached_dev *dc)
bcache_flash_devs_sectors_dirty(c);
uint64_t cache_dirty_target =
div_u64(cache_sectors * dc->writeback_percent, 100);
-
int64_t target = div64_u64(cache_dirty_target * bdev_sectors(dc->bdev),
c->cached_dev_sectors);
- /* PD controller */
-
+ /*
+ * PI controller:
+ * Figures out the amount that should be written per second.
+ *
+ * First, the error (number of sectors that are dirty beyond our
+ * target) is calculated. The error is accumulated (numerically
+ * integrated).
+ *
+ * Then, the proportional value and integral value are scaled
+ * based on configured values. These are stored as inverses to
+ * avoid fixed point math and to make configuration easy-- e.g.
+ * the default value of 40 for writeback_rate_p_term_inverse
+ * attempts to write at a rate that would retire all the dirty
+ * blocks in 40 seconds.
+ *
+ * The writeback_rate_i_inverse value of 10000 means that 1/10000th
+ * of the error is accumulated in the integral term per second.
+ * This acts as a slow, long-term average that is not subject to
+ * variations in usage like the p term.
+ */
int64_t dirty = bcache_dev_sectors_dirty(&dc->disk);
- int64_t derivative = dirty - dc->disk.sectors_dirty_last;
- int64_t proportional = dirty - target;
- int64_t change;
-
- dc->disk.sectors_dirty_last = dirty;
-
- /* Scale to sectors per second */
-
- proportional *= dc->writeback_rate_update_seconds;
- proportional = div_s64(proportional, dc->writeback_rate_p_term_inverse);
-
- derivative = div_s64(derivative, dc->writeback_rate_update_seconds);
-
- derivative = ewma_add(dc->disk.sectors_dirty_derivative, derivative,
- (dc->writeback_rate_d_term /
- dc->writeback_rate_update_seconds) ?: 1, 0);
-
- derivative *= dc->writeback_rate_d_term;
- derivative = div_s64(derivative, dc->writeback_rate_p_term_inverse);
-
- change = proportional + derivative;
+ int64_t error = dirty - target;
+ int64_t proportional_scaled =
+ div_s64(error, dc->writeback_rate_p_term_inverse);
+ int64_t integral_scaled, new_rate;
+
+ if ((error < 0 && dc->writeback_rate_integral > 0) ||
+ (error > 0 && time_before64(local_clock(),
+ dc->writeback_rate.next + NSEC_PER_MSEC))) {
+ /*
+ * Only decrease the integral term if it's more than
+ * zero. Only increase the integral term if the device
+ * is keeping up. (Don't wind up the integral
+ * ineffectively in either case).
+ *
+ * It's necessary to scale this by
+ * writeback_rate_update_seconds to keep the integral
+ * term dimensioned properly.
+ */
+ dc->writeback_rate_integral += error *
+ dc->writeback_rate_update_seconds;
+ }
- /* Don't increase writeback rate if the device isn't keeping up */
- if (change > 0 &&
- time_after64(local_clock(),
- dc->writeback_rate.next + NSEC_PER_MSEC))
- change = 0;
+ integral_scaled = div_s64(dc->writeback_rate_integral,
+ dc->writeback_rate_i_term_inverse);
- dc->writeback_rate.rate =
- clamp_t(int64_t, (int64_t) dc->writeback_rate.rate + change,
- 1, NSEC_PER_MSEC);
+ new_rate = clamp_t(int64_t, (proportional_scaled + integral_scaled),
+ dc->writeback_rate_minimum, NSEC_PER_MSEC);
- dc->writeback_rate_proportional = proportional;
- dc->writeback_rate_derivative = derivative;
- dc->writeback_rate_change = change;
+ dc->writeback_rate_proportional = proportional_scaled;
+ dc->writeback_rate_integral_scaled = integral_scaled;
+ dc->writeback_rate_change = new_rate - dc->writeback_rate.rate;
+ dc->writeback_rate.rate = new_rate;
dc->writeback_rate_target = target;
}
@@ -499,8 +513,6 @@ void bch_sectors_dirty_init(struct bcache_device *d)
bch_btree_map_keys(&op.op, d->c, &KEY(op.inode, 0, 0),
sectors_dirty_init_fn, 0);
-
- d->sectors_dirty_last = bcache_dev_sectors_dirty(d);
}
void bch_cached_dev_writeback_init(struct cached_dev *dc)
@@ -514,10 +526,11 @@ void bch_cached_dev_writeback_init(struct cached_dev *dc)
dc->writeback_percent = 10;
dc->writeback_delay = 30;
dc->writeback_rate.rate = 1024;
+ dc->writeback_rate_minimum = 1;
dc->writeback_rate_update_seconds = 5;
- dc->writeback_rate_d_term = 30;
- dc->writeback_rate_p_term_inverse = 6000;
+ dc->writeback_rate_p_term_inverse = 40;
+ dc->writeback_rate_i_term_inverse = 10000;
INIT_DELAYED_WORK(&dc->writeback_rate_update, update_writeback_rate);
}
--
2.11.0
^ permalink raw reply related [flat|nested] 56+ messages in thread
* [PATCH v3 3/5] bcache: smooth writeback rate control
2017-09-27 17:41 (unknown), Michael Lyle
2017-09-27 17:41 ` [PATCH v3 1/5] bcache: don't write back data if reading it failed Michael Lyle
2017-09-27 17:41 ` [PATCH v3 2/5] bcache: implement PI controller for writeback rate Michael Lyle
@ 2017-09-27 17:41 ` Michael Lyle
2017-09-27 17:41 ` [PATCH v3 4/5] bcache: writeback: collapse contiguous IO better Michael Lyle
2017-09-27 17:41 ` [PATCH v3 5/5] bcache: writeback: properly order backing device IO Michael Lyle
4 siblings, 0 replies; 56+ messages in thread
From: Michael Lyle @ 2017-09-27 17:41 UTC (permalink / raw)
To: linux-bcache; +Cc: Michael Lyle
This works in conjunction with the new PI controller. Currently, in
real-world workloads, the rate controller attempts to write back 1
sector per second. In practice, these minimum-rate writebacks are
between 4k and 60k in test scenarios, since bcache aggregates and
attempts to do contiguous writes and because filesystems on top of
bcache typically write 4k or more.
Previously, bcache used to guarantee to write at least once per second.
This means that the actual writeback rate would exceed the configured
amount by a factor of 8-120 or more.
This patch adjusts to be willing to sleep up to 2.5 seconds, and to
target writing 4k/second. On the smallest writes, it will sleep 1
second like before, but many times it will sleep longer and load the
backing device less. This keeps the loading on the cache and backing
device related to writeback more consistent when writing back at low
rates.
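The sleep bounds described above can be modeled in user space. This is a hedged sketch with hypothetical names (struct ratelimit_model, model_next_delay); it mirrors the clamping in the bch_next_delay() hunk in this patch, but it is not the kernel code:

```c
#include <assert.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000ULL

/* Hypothetical user-space model of the ratelimit clamping: 'next' is
 * the absolute time (ns) at which the next IO is permitted, 'rate' is
 * in sectors per second, 'done' is the number of sectors just written. */
struct ratelimit_model {
	uint64_t next;
	uint64_t rate;
};

static uint64_t model_next_delay(struct ratelimit_model *d,
				 uint64_t done, uint64_t now)
{
	d->next += done * NSEC_PER_SEC / d->rate;

	/* Never sleep more than 2.5 s (so a rate increase from the
	 * control system is noticed promptly), and never fall more
	 * than 2 s behind (no unrecoverable backlog). */
	if (now + NSEC_PER_SEC * 5 / 2 < d->next)
		d->next = now + NSEC_PER_SEC * 5 / 2;
	if (d->next + NSEC_PER_SEC * 2 < now)
		d->next = now - NSEC_PER_SEC * 2;

	return d->next > now ? d->next - now : 0;
}
```

With the new minimum rate of 8 sectors/sec, a 4k (8-sector) writeback yields a 1-second sleep as before, while a 60k (120-sector) writeback would want ~15 seconds and is clamped to 2.5.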
Signed-off-by: Michael Lyle <mlyle@lyle.org>
Reviewed-by: Coly Li <colyli@suse.de>
---
drivers/md/bcache/util.c | 10 ++++++++--
drivers/md/bcache/writeback.c | 2 +-
2 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c
index 176d3c2ef5f5..4dbe37e82877 100644
--- a/drivers/md/bcache/util.c
+++ b/drivers/md/bcache/util.c
@@ -232,8 +232,14 @@ uint64_t bch_next_delay(struct bch_ratelimit *d, uint64_t done)
d->next += div_u64(done * NSEC_PER_SEC, d->rate);
- if (time_before64(now + NSEC_PER_SEC, d->next))
- d->next = now + NSEC_PER_SEC;
+ /* Bound the time. Don't let us fall further than 2 seconds behind
+ * (this prevents unnecessary backlog that would make it impossible
+ * to catch up). If we're ahead of the desired writeback rate,
+ * don't let us sleep more than 2.5 seconds (so we can notice/respond
+ * if the control system tells us to speed up!).
+ */
+ if (time_before64(now + NSEC_PER_SEC * 5 / 2, d->next))
+ d->next = now + NSEC_PER_SEC * 5 / 2;
if (time_after64(now - NSEC_PER_SEC * 2, d->next))
d->next = now - NSEC_PER_SEC * 2;
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index cac8678da5d0..8deb721c355e 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -526,7 +526,7 @@ void bch_cached_dev_writeback_init(struct cached_dev *dc)
dc->writeback_percent = 10;
dc->writeback_delay = 30;
dc->writeback_rate.rate = 1024;
- dc->writeback_rate_minimum = 1;
+ dc->writeback_rate_minimum = 8;
dc->writeback_rate_update_seconds = 5;
dc->writeback_rate_p_term_inverse = 40;
--
2.11.0
^ permalink raw reply related [flat|nested] 56+ messages in thread
* [PATCH v3 4/5] bcache: writeback: collapse contiguous IO better
2017-09-27 17:41 (unknown), Michael Lyle
` (2 preceding siblings ...)
2017-09-27 17:41 ` [PATCH v3 3/5] bcache: smooth writeback rate control Michael Lyle
@ 2017-09-27 17:41 ` Michael Lyle
2017-09-27 17:41 ` [PATCH v3 5/5] bcache: writeback: properly order backing device IO Michael Lyle
4 siblings, 0 replies; 56+ messages in thread
From: Michael Lyle @ 2017-09-27 17:41 UTC (permalink / raw)
To: linux-bcache; +Cc: Michael Lyle
Previously, there was some logic that attempted to immediately issue
writeback of backing-contiguous blocks when the writeback rate was
fast.
The previous logic did not have any limits on the aggregate size it
would issue, nor the number of keys it would combine at once. It
would also discard the chance to do a contiguous write when the
writeback rate was low-- e.g. at "background" writeback of target
rate = 8, it would not combine two adjacent 4k writes and would
instead seek the disk twice.
This patch imposes limits and explicitly understands the size of
contiguous I/O during issue. It also will combine contiguous I/O
in all circumstances, not just when writeback is requested to be
relatively fast.
It is a win on its own, but also lays the groundwork for skipping writes
to short keys to make the I/O more sequential/contiguous. It also gets
ready to start using blk_*_plug, and to allow issuing of non-contig
I/O in parallel if requested by the user (to make use of disk
throughput benefits available from higher queue depths).
This patch fixes a previous version where the contiguous information
was not calculated properly.
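The gathering limits described above can be sketched as a stand-alone loop. This is a simplified model under stated assumptions: model_key uses start offsets (the real keybuf_key carries a bkey whose offset semantics differ), and model_gather/model_contiguous are hypothetical names:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_WRITEBACKS_IN_PASS 5
#define MAX_WRITESIZE_IN_PASS 5000	/* in 512-byte sectors */

/* Simplified key: inode, start offset, and size, all in sectors. */
struct model_key {
	unsigned int inode;
	unsigned long start;
	unsigned int size;
};

static bool model_contiguous(const struct model_key *a,
			     const struct model_key *b)
{
	return a->inode == b->inode && b->start == a->start + a->size;
}

/* Gather keys starting at keys[from]: stop after
 * MAX_WRITEBACKS_IN_PASS keys, once MAX_WRITESIZE_IN_PASS sectors are
 * accumulated, or at the first non-contiguous key.  Returns the
 * number of keys gathered for one writeback pass. */
static int model_gather(const struct model_key *keys, int n, int from)
{
	size_t size = 0;
	int nk = 0;

	while (from + nk < n) {
		if (nk >= MAX_WRITEBACKS_IN_PASS)
			break;
		if (size >= MAX_WRITESIZE_IN_PASS)
			break;
		if (nk && !model_contiguous(&keys[from + nk - 1],
					    &keys[from + nk]))
			break;
		size += keys[from + nk].size;
		nk++;
	}
	return nk;
}
```

Note that the contiguity test is applied in all circumstances, so even at a background rate of 8 two adjacent 4k writes are combined into one pass rather than two seeks.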
Signed-off-by: Michael Lyle <mlyle@lyle.org>
---
drivers/md/bcache/bcache.h | 6 --
drivers/md/bcache/writeback.c | 133 ++++++++++++++++++++++++++++++------------
drivers/md/bcache/writeback.h | 3 +
3 files changed, 98 insertions(+), 44 deletions(-)
diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index eb83be693d60..da803a3b1981 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -321,12 +321,6 @@ struct cached_dev {
struct bch_ratelimit writeback_rate;
struct delayed_work writeback_rate_update;
- /*
- * Internal to the writeback code, so read_dirty() can keep track of
- * where it's at.
- */
- sector_t last_read;
-
/* Limit number of writeback bios in flight */
struct semaphore in_flight;
struct task_struct *writeback_thread;
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 8deb721c355e..13c2142ea82f 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -232,10 +232,25 @@ static void read_dirty_submit(struct closure *cl)
continue_at(cl, write_dirty, io->dc->writeback_write_wq);
}
+static inline bool keys_contiguous(struct cached_dev *dc,
+ struct keybuf_key *first, struct keybuf_key *second)
+{
+ if (KEY_INODE(&second->key) != KEY_INODE(&first->key))
+ return false;
+
+ if (KEY_OFFSET(&second->key) !=
+ KEY_OFFSET(&first->key) + KEY_SIZE(&first->key))
+ return false;
+
+ return true;
+}
+
static void read_dirty(struct cached_dev *dc)
{
unsigned delay = 0;
- struct keybuf_key *w;
+ struct keybuf_key *next, *keys[MAX_WRITEBACKS_IN_PASS], *w;
+ size_t size;
+ int nk, i;
struct dirty_io *io;
struct closure cl;
@@ -246,45 +261,87 @@ static void read_dirty(struct cached_dev *dc)
* mempools.
*/
- while (!kthread_should_stop()) {
-
- w = bch_keybuf_next(&dc->writeback_keys);
- if (!w)
- break;
-
- BUG_ON(ptr_stale(dc->disk.c, &w->key, 0));
-
- if (KEY_START(&w->key) != dc->last_read ||
- jiffies_to_msecs(delay) > 50)
- while (!kthread_should_stop() && delay)
- delay = schedule_timeout_interruptible(delay);
-
- dc->last_read = KEY_OFFSET(&w->key);
-
- io = kzalloc(sizeof(struct dirty_io) + sizeof(struct bio_vec)
- * DIV_ROUND_UP(KEY_SIZE(&w->key), PAGE_SECTORS),
- GFP_KERNEL);
- if (!io)
- goto err;
-
- w->private = io;
- io->dc = dc;
-
- dirty_init(w);
- bio_set_op_attrs(&io->bio, REQ_OP_READ, 0);
- io->bio.bi_iter.bi_sector = PTR_OFFSET(&w->key, 0);
- bio_set_dev(&io->bio, PTR_CACHE(dc->disk.c, &w->key, 0)->bdev);
- io->bio.bi_end_io = read_dirty_endio;
-
- if (bio_alloc_pages(&io->bio, GFP_KERNEL))
- goto err_free;
-
- trace_bcache_writeback(&w->key);
+ next = bch_keybuf_next(&dc->writeback_keys);
+
+ while (!kthread_should_stop() && next) {
+ size = 0;
+ nk = 0;
+
+ do {
+ BUG_ON(ptr_stale(dc->disk.c, &next->key, 0));
+
+ /*
+ * Don't combine too many operations, even if they
+ * are all small.
+ */
+ if (nk >= MAX_WRITEBACKS_IN_PASS)
+ break;
+
+ /*
+ * If the current operation is very large, don't
+ * further combine operations.
+ */
+ if (size >= MAX_WRITESIZE_IN_PASS)
+ break;
+
+ /*
+ * Operations are only eligible to be combined
+ * if they are contiguous.
+ *
+ * TODO: add a heuristic willing to fire a
+ * certain amount of non-contiguous IO per pass,
+ * so that we can benefit from backing device
+ * command queueing.
+ */
+ if (nk != 0 && !keys_contiguous(dc, keys[nk-1], next))
+ break;
+
+ size += KEY_SIZE(&next->key);
+ keys[nk++] = next;
+ } while ((next = bch_keybuf_next(&dc->writeback_keys)));
+
+ /* Now we have gathered a set of 1..5 keys to write back. */
+
+ for (i = 0; i < nk; i++) {
+ w = keys[i];
+
+ io = kzalloc(sizeof(struct dirty_io) +
+ sizeof(struct bio_vec) *
+ DIV_ROUND_UP(KEY_SIZE(&w->key), PAGE_SECTORS),
+ GFP_KERNEL);
+ if (!io)
+ goto err;
+
+ w->private = io;
+ io->dc = dc;
+
+ dirty_init(w);
+ bio_set_op_attrs(&io->bio, REQ_OP_READ, 0);
+ io->bio.bi_iter.bi_sector = PTR_OFFSET(&w->key, 0);
+ bio_set_dev(&io->bio,
+ PTR_CACHE(dc->disk.c, &w->key, 0)->bdev);
+ io->bio.bi_end_io = read_dirty_endio;
+
+ if (bio_alloc_pages(&io->bio, GFP_KERNEL))
+ goto err_free;
+
+ trace_bcache_writeback(&w->key);
+
+ down(&dc->in_flight);
+
+ /* We've acquired a semaphore for the maximum
+ * simultaneous number of writebacks; from here
+ * everything happens asynchronously.
+ */
+ closure_call(&io->cl, read_dirty_submit, NULL, &cl);
+ }
- down(&dc->in_flight);
- closure_call(&io->cl, read_dirty_submit, NULL, &cl);
+ delay = writeback_delay(dc, size);
- delay = writeback_delay(dc, KEY_SIZE(&w->key));
+ while (!kthread_should_stop() && delay) {
+ schedule_timeout_interruptible(delay);
+ delay = writeback_delay(dc, 0);
+ }
}
if (0) {
diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h
index e35421d20d2e..efee2be88df9 100644
--- a/drivers/md/bcache/writeback.h
+++ b/drivers/md/bcache/writeback.h
@@ -4,6 +4,9 @@
#define CUTOFF_WRITEBACK 40
#define CUTOFF_WRITEBACK_SYNC 70
+#define MAX_WRITEBACKS_IN_PASS 5
+#define MAX_WRITESIZE_IN_PASS 5000 /* *512b */
+
static inline uint64_t bcache_dev_sectors_dirty(struct bcache_device *d)
{
uint64_t i, ret = 0;
--
2.11.0
^ permalink raw reply related [flat|nested] 56+ messages in thread
* [PATCH v3 5/5] bcache: writeback: properly order backing device IO
2017-09-27 17:41 (unknown), Michael Lyle
` (3 preceding siblings ...)
2017-09-27 17:41 ` [PATCH v3 4/5] bcache: writeback: collapse contiguous IO better Michael Lyle
@ 2017-09-27 17:41 ` Michael Lyle
4 siblings, 0 replies; 56+ messages in thread
From: Michael Lyle @ 2017-09-27 17:41 UTC (permalink / raw)
To: linux-bcache; +Cc: Michael Lyle
Writeback keys are presently iterated and dispatched for writeback in
order of the logical block address on the backing device. Multiple keys
may be read from the cache device in parallel and then written back
(especially when the I/O is contiguous).
However-- there was no guarantee with the existing code that the writes
would be issued in LBA order, as the reads from the cache device are
often re-ordered. In turn, when writing back quickly, the backing disk
often has to seek backwards-- this slows writeback and increases
utilization.
This patch introduces an ordering mechanism that guarantees that the
original order of issue is maintained for the write portion of the I/O.
Performance for writeback is significantly improved when there are
multiple contiguous keys or high writeback rates.
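The ordering mechanism can be sketched as a gate on a shared sequence counter. This is a hedged model with hypothetical names (sequence_next, try_dispatch) of the writeback_sequence_next logic in the diff, minus the closure wait/wakeup machinery:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Each dirty_io is tagged with a sequence number when its read is
 * issued; the reads may complete out of order, but writes may only
 * be dispatched to the backing device in sequence order. */
static atomic_uint sequence_next;

static bool try_dispatch(unsigned int sequence)
{
	if (atomic_load(&sequence_next) != sequence)
		return false;	/* not our turn: wait for a wakeup */

	/* ...issue the backing-device write for this sequence here... */

	atomic_store(&sequence_next, sequence + 1);
	return true;		/* the next waiter may now proceed */
}
```

A completion that arrives early is refused and must retry after the in-order predecessor finishes, which is what keeps the backing disk from seeking backwards.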
Signed-off-by: Michael Lyle <mlyle@lyle.org>
---
drivers/md/bcache/bcache.h | 8 ++++++++
drivers/md/bcache/writeback.c | 29 +++++++++++++++++++++++++++++
2 files changed, 37 insertions(+)
diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h
index da803a3b1981..df0b2ccfca8d 100644
--- a/drivers/md/bcache/bcache.h
+++ b/drivers/md/bcache/bcache.h
@@ -328,6 +328,14 @@ struct cached_dev {
struct keybuf writeback_keys;
+ /*
+ * Order the write-half of writeback operations strongly in dispatch
+ * order. (Maintain LBA order; don't allow reads completing out of
+ * order to re-order the writes...)
+ */
+ struct closure_waitlist writeback_ordering_wait;
+ atomic_t writeback_sequence_next;
+
/* For tracking sequential IO */
#define RECENT_IO_BITS 7
#define RECENT_IO (1 << RECENT_IO_BITS)
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index 13c2142ea82f..d63356a60001 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -114,6 +114,7 @@ static unsigned writeback_delay(struct cached_dev *dc, unsigned sectors)
struct dirty_io {
struct closure cl;
struct cached_dev *dc;
+ uint16_t sequence;
struct bio bio;
};
@@ -192,6 +193,27 @@ static void write_dirty(struct closure *cl)
{
struct dirty_io *io = container_of(cl, struct dirty_io, cl);
struct keybuf_key *w = io->bio.bi_private;
+ struct cached_dev *dc = io->dc;
+
+ uint16_t next_sequence;
+
+ if (atomic_read(&dc->writeback_sequence_next) != io->sequence) {
+ /* Not our turn to write; wait for a write to complete */
+ closure_wait(&dc->writeback_ordering_wait, cl);
+
+ if (atomic_read(&dc->writeback_sequence_next) == io->sequence) {
+ /*
+ * Edge case-- it happened in indeterminate order
+ * relative to when we were added to wait list..
+ */
+ closure_wake_up(&dc->writeback_ordering_wait);
+ }
+
+ continue_at(cl, write_dirty, io->dc->writeback_write_wq);
+ return;
+ }
+
+ next_sequence = io->sequence + 1;
/*
* IO errors are signalled using the dirty bit on the key.
@@ -209,6 +231,9 @@ static void write_dirty(struct closure *cl)
closure_bio_submit(&io->bio, cl);
}
+ atomic_set(&dc->writeback_sequence_next, next_sequence);
+ closure_wake_up(&dc->writeback_ordering_wait);
+
continue_at(cl, write_dirty_finish, io->dc->writeback_write_wq);
}
@@ -253,7 +278,10 @@ static void read_dirty(struct cached_dev *dc)
int nk, i;
struct dirty_io *io;
struct closure cl;
+ uint16_t sequence = 0;
+ BUG_ON(!llist_empty(&dc->writeback_ordering_wait.list));
+ atomic_set(&dc->writeback_sequence_next, sequence);
closure_init_stack(&cl);
/*
@@ -314,6 +342,7 @@ static void read_dirty(struct cached_dev *dc)
w->private = io;
io->dc = dc;
+ io->sequence = sequence++;
dirty_init(w);
bio_set_op_attrs(&io->bio, REQ_OP_READ, 0);
--
2.11.0
^ permalink raw reply related [flat|nested] 56+ messages in thread
* (unknown),
@ 2017-09-29 18:01 clasico082
0 siblings, 0 replies; 56+ messages in thread
From: clasico082 @ 2017-09-29 18:01 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: 587629173792972.zip --]
[-- Type: application/zip, Size: 7177 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [PATCH v3 2/5] bcache: implement PI controller for writeback rate
2017-09-27 17:41 ` [PATCH v3 2/5] bcache: implement PI controller for writeback rate Michael Lyle
@ 2017-10-08 4:22 ` Coly Li
2017-10-08 4:57 ` Michael Lyle
0 siblings, 1 reply; 56+ messages in thread
From: Coly Li @ 2017-10-08 4:22 UTC (permalink / raw)
To: Michael Lyle; +Cc: linux-bcache, linux-block@vger.kernel.org
On 2017/9/28 1:41 AM, Michael Lyle wrote:
> bcache uses a control system to attempt to keep the amount of dirty data
> in cache at a user-configured level, while not responding excessively to
> transients and variations in write rate. Previously, the system was a
> PD controller; but the output from it was integrated, turning the
> Proportional term into an Integral term, and turning the Derivative term
> into a crude Proportional term. Performance of the controller has been
> uneven in production, and it has tended to respond slowly, oscillate,
> and overshoot.
>
> This patch set replaces the current control system with an explicit PI
> controller and tuning that should be correct for most hardware. By
> default, it attempts to write at a rate that would retire 1/40th of the
> current excess blocks per second. An integral term in turn works to
> remove steady state errors.
>
> IMO, this yields benefits in simplicity (removing weighted average
> filtering, etc) and system performance.
>
> Another small change is that a tunable parameter is introduced to
> allow the user to specify a minimum rate at which dirty blocks are
> retired.
>
> There is a slight difference from earlier versions of the patch in
> integral handling to prevent excessive negative integral windup.
>
> Signed-off-by: Michael Lyle <mlyle@lyle.org>
> Reviewed-by: Coly Li <colyli@suse.de>
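The control law quoted above can be sketched numerically. This is a hedged user-space model with hypothetical names (struct pi_model, pi_update); the constants match the patch (p_term_inverse = 40, i_term_inverse = 10000, update every 5 seconds), while the anti-windup guard is simplified relative to the kernel's:

```c
#include <assert.h>
#include <stdint.h>

struct pi_model {
	int64_t integral;	/* accumulated error, sector-seconds */
	int64_t p_inverse;	/* 40: retire 1/40 of the excess per second */
	int64_t i_inverse;	/* 10000 */
	int64_t update_seconds;	/* 5 */
	int64_t minimum;	/* floor on the computed rate */
};

/* Recompute the writeback rate (sectors/sec) from the dirty excess. */
static int64_t pi_update(struct pi_model *m, int64_t dirty, int64_t target)
{
	int64_t error = dirty - target;
	int64_t proportional = error / m->p_inverse;
	int64_t rate;

	m->integral += error * m->update_seconds;
	if (m->integral < 0)	/* simplified negative-windup guard */
		m->integral = 0;

	rate = proportional + m->integral / m->i_inverse;
	return rate > m->minimum ? rate : m->minimum;
}
```

The proportional term alone retires 1/40th of the excess per second; the integral term accumulates whatever error the proportional term leaves, removing steady-state offset.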
Hi Mike,
I have been testing all 5 patches these days for writeback performance.
I find that when the dirty number is much smaller than the dirty target,
the writeback rate is still at its maximum of 488.2M/sec.
Here is part of the output:
rate: 488.2M/sec
dirty: 91.7G
target: 152.3G
proportional: -1.5G
integral: 10.9G
change: 0.0k/sec
next io: 0ms
rate: 488.2M/sec
dirty: 85.3G
target: 152.3G
proportional: -1.6G
integral: 10.6G
change: 0.0k/sec
next io: -7ms
rate: 488.2M/sec
dirty: 79.3G
target: 152.3G
proportional: -1.8G
integral: 10.1G
change: 0.0k/sec
next io: -26ms
rate: 488.2M/sec
dirty: 73.1G
target: 152.3G
proportional: -1.9G
integral: 9.7G
change: 0.0k/sec
next io: -1ms
rate: 488.2M/sec
dirty: 66.9G
target: 152.3G
proportional: -2.1G
integral: 9.2G
change: 0.0k/sec
next io: -66ms
rate: 488.2M/sec
dirty: 61.1G
target: 152.3G
proportional: -2.2G
integral: 8.7G
change: 0.0k/sec
next io: -6ms
rate: 488.2M/sec
dirty: 55.6G
target: 152.3G
proportional: -2.4G
integral: 8.1G
change: 0.0k/sec
next io: -5ms
rate: 488.2M/sec
dirty: 49.4G
target: 152.3G
proportional: -2.5G
integral: 7.5G
change: 0.0k/sec
next io: 0ms
rate: 488.2M/sec
dirty: 43.1G
target: 152.3G
proportional: -2.7G
integral: 7.0G
change: 0.0k/sec
next io: -1ms
rate: 488.2M/sec
dirty: 37.3G
target: 152.3G
proportional: -2.8G
integral: 6.3G
change: 0.0k/sec
next io: -2ms
rate: 488.2M/sec
dirty: 31.7G
target: 152.3G
proportional: -3.0G
integral: 5.6G
change: 0.0k/sec
next io: -17ms
The backing cached device size is 7.2TB, the cache device is 1.4TB, and
the block size is only 8kB. I wrote 700G (50% of the cache device size)
of dirty data onto the cache device, and started writeback by echoing 1
to the writeback_running file.
In my test, writeback spent 89 minutes decreasing the dirty number from
700G to 147G (the dirty target is 152G). At that moment the writeback
rate was still displayed as 488.2M/sec, and after another 22 minutes the
writeback rate jumped to 4.0k/sec. During those 22 minutes, (147-15.8=)
131.2G of dirty data was written out.
Is this as expected?
Thanks.
--
Coly Li
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [PATCH v3 2/5] bcache: implement PI controller for writeback rate
2017-10-08 4:22 ` Coly Li
@ 2017-10-08 4:57 ` Michael Lyle
2017-10-08 5:08 ` Coly Li
0 siblings, 1 reply; 56+ messages in thread
From: Michael Lyle @ 2017-10-08 4:57 UTC (permalink / raw)
To: Coly Li; +Cc: linux-bcache, linux-block@vger.kernel.org
Coly--
On 10/07/2017 09:22 PM, Coly Li wrote:
[snip]
> rate: 488.2M/sec
> dirty: 91.7G
> target: 152.3G
> proportional: -1.5G
> integral: 10.9G
> change: 0.0k/sec
> next io: 0ms
[snip]
> The backing cached device size is 7.2TB, cache device is 1.4TB, block
> size is 8kB only. I write 700G (50% of cache device size) dirty data
> onto the cache device, and start writeback by echo 1 to
> writeback_running file.
>
> In my test, writeback spent 89 minutes to decrease dirty number from
> 700G to 147G (dirty target number is 152G). At this moment writeback
> rate was still displayed as 488.2M/sec. And after 22 minutes writeback
> rate jumped to 4.0k/sec. During the 22 minutes, (147-15.8=) 131.2G dirty
> data written out.
I see it-- if we can write faster than 488MB/sec, we inappropriately
clamp the write rate to 488MB/sec-- this is from the old code. In turn,
if we're keeping up at that speed, the integral term can wind up. I
will fix this and clean up a couple related things.
> Is it as expected ?
It is supposed to overshoot the target, but not by this much.
I think the implementation must have changed at some point in the past
for bch_ratelimit, because the clamping doesn't match. bch_ratelimit
really needs a rewrite for other reasons, but I'll send the minimal fix now.
>
> Thanks.
>
Mike
^ permalink raw reply [flat|nested] 56+ messages in thread
* Re: [PATCH v3 2/5] bcache: implement PI controller for writeback rate
2017-10-08 4:57 ` Michael Lyle
@ 2017-10-08 5:08 ` Coly Li
0 siblings, 0 replies; 56+ messages in thread
From: Coly Li @ 2017-10-08 5:08 UTC (permalink / raw)
To: Michael Lyle; +Cc: linux-bcache, linux-block@vger.kernel.org
On 2017/10/8 12:57 PM, Michael Lyle wrote:
> Coly--
>
>
> On 10/07/2017 09:22 PM, Coly Li wrote:
> [snip]
>> rate: 488.2M/sec
>> dirty: 91.7G
>> target: 152.3G
>> proportional: -1.5G
>> integral: 10.9G
>> change: 0.0k/sec
>> next io: 0ms
> [snip]
>
>> The backing cached device size is 7.2TB, cache device is 1.4TB, block
>> size is 8kB only. I write 700G (50% of cache device size) dirty data
>> onto the cache device, and start writeback by echo 1 to
>> writeback_running file.
>>
>> In my test, writeback spent 89 minutes to decrease dirty number from
>> 700G to 147G (dirty target number is 152G). At this moment writeback
>> rate was still displayed as 488.2M/sec. And after 22 minutes writeback
>> rate jumped to 4.0k/sec. During the 22 minutes, (147-15.8=) 131.2G dirty
>> data written out.
>
> I see it-- if we can write faster than 488MB/sec, we inappropriately
> clamp the write rate to 488MB/sec-- this is from the old code. In turn,
> if we're keeping up at that speed, the integral term can wind up. I
> will fix this and clean up a couple related things.
>
>> Is it as expected ?
>
> It is supposed to overshoot the target, but not by this much.
>
> I think the implementation must have changed at some point in the past
> for bch_ratelimit, because the clamping doesn't match. bch_ratelimit
> really needs a rewrite for other reasons, but I'll send the minimal fix
> now.
Hi Mike,
Copied, I will continue my test, and when the new version comes I will
run the same workload again to confirm.
Thanks.
--
Coly Li
^ permalink raw reply [flat|nested] 56+ messages in thread
* (unknown),
@ 2017-10-08 14:15 clasico082
0 siblings, 0 replies; 56+ messages in thread
From: clasico082 @ 2017-10-08 14:15 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: 01777909.zip --]
[-- Type: application/zip, Size: 7221 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* (unknown),
@ 2017-10-09 7:37 Michael Lyle
0 siblings, 0 replies; 56+ messages in thread
From: Michael Lyle @ 2017-10-09 7:37 UTC (permalink / raw)
To: linux-bcache, linux-block; +Cc: colyli
[PATCH v2 1/2] bcache: writeback rate shouldn't artificially clamp
[PATCH v2 2/2] bcache: rearrange writeback main thread ratelimit
This is a reroll of the previous "don't clamp" patch. It corrects
type issues where negative numbers were handled badly (mostly for
display in writeback_rate_debug).
Additionally, there is a new, related patch: during scanning for dirty
blocks, don't reset the ratelimiting counter. This can prevent
undershoots/overshoots of the target rate related to scanning.
Mike
^ permalink raw reply [flat|nested] 56+ messages in thread
* (unknown),
@ 2017-10-15 18:29 clasico082
0 siblings, 0 replies; 56+ messages in thread
From: clasico082 @ 2017-10-15 18:29 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: 951043539.zip --]
[-- Type: application/zip, Size: 2779 bytes --]
^ permalink raw reply [flat|nested] 56+ messages in thread
* (unknown)
@ 2020-03-04 23:30 Maria Alessandra Filippi
0 siblings, 0 replies; 56+ messages in thread
From: Maria Alessandra Filippi @ 2020-03-04 23:30 UTC (permalink / raw)
Hello,
I am Mrs. Maria Elisabeth Schaeffler, a German business magnate, investor and philanthropist. I am the chairman of Wipro Limited. I have spent 25 percent of my personal fortune on charitable causes, and I have also promised to give away the remaining 25% to individuals this year, 2020. I have decided to donate 1,000,000.00 euros to you. If you are interested in my donation, contact me for further information.
You can also read more about me via the link below
https://en.wikipedia.org/wiki/Maria-Elisabeth_Schaeffler
Warm regards
Managing Director, Wipro Limited
Maria-Elisabeth_Schaeffler
E-Mail: mrsmariaelisabethschaeffler11@gmail.com
^ permalink raw reply [flat|nested] 56+ messages in thread
* (unknown)
@ 2020-04-23 23:06 Azim Hashim Premji
0 siblings, 0 replies; 56+ messages in thread
From: Azim Hashim Premji @ 2020-04-23 23:06 UTC (permalink / raw)
--
Hello,
I am Azim Hashim Premji, an Indian business magnate, investor
and philanthropist. I am the chairman of Wipro Limited. I have
given away 25 percent of my personal fortune to charitable causes,
and I have also pledged to give away the remaining 25% to
individuals this year, 2020, as COVID-19 financial relief. I have
decided to donate 2,000,000 euros to you. If you are interested in
my donation, contact me for further information.
You can also read more about me via the link below
http://en.wikipedia.org/wiki/Azim_Premji
Warm regards
CEO Wipro Limited
Azim Hashim Premji
E-Mail: azimhashim011@gmail.com
^ permalink raw reply [flat|nested] 56+ messages in thread
* (unknown)
@ 2020-05-08 23:51 Barbara D Wilkins
0 siblings, 0 replies; 56+ messages in thread
From: Barbara D Wilkins @ 2020-05-08 23:51 UTC (permalink / raw)
Hello, we are a Christian organization founded to help people who need help, such as financial assistance. So if you are having financial difficulties or are in a financial mess and need money to start your own business, or if you need a loan to settle your debts or pay off your bills, to start a good business, or find it hard to obtain a capital loan from local banks, contact us today by e-mail. Do not let this opportunity pass you by, because Jesus is the same yesterday, today and forever. Please, this is for serious and God-fearing people. Your name: Loan amount: Loan duration: Valid
mobile number: Thank you for your understanding; we await your contact: mrsbarbarawilkinsfunds.usagmail.com Regards, Administration
^ permalink raw reply [flat|nested] 56+ messages in thread
end of thread, other threads:[~2020-05-09 0:01 UTC | newest]
Thread overview: 56+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-09-27 17:41 (unknown), Michael Lyle
2017-09-27 17:41 ` [PATCH v3 1/5] bcache: don't write back data if reading it failed Michael Lyle
2017-09-27 17:41 ` [PATCH v3 2/5] bcache: implement PI controller for writeback rate Michael Lyle
2017-10-08 4:22 ` Coly Li
2017-10-08 4:57 ` Michael Lyle
2017-10-08 5:08 ` Coly Li
2017-09-27 17:41 ` [PATCH v3 3/5] bcache: smooth writeback rate control Michael Lyle
2017-09-27 17:41 ` [PATCH v3 4/5] bcache: writeback: collapse contiguous IO better Michael Lyle
2017-09-27 17:41 ` [PATCH v3 5/5] bcache: writeback: properly order backing device IO Michael Lyle
-- strict thread matches above, loose matches on Subject: below --
2020-05-08 23:51 (unknown) Barbara D Wilkins
2020-04-23 23:06 (unknown) Azim Hashim Premji
2020-04-23 23:06 (unknown) Azim Hashim Premji
2020-03-04 23:30 (unknown) Maria Alessandra Filippi
2017-10-15 18:29 (unknown), clasico082
2017-10-09 7:37 (unknown), Michael Lyle
2017-10-08 14:15 (unknown), clasico082
2017-09-29 18:01 (unknown), clasico082
2017-09-01 4:05 (unknown), andrewf
2017-08-15 1:55 (unknown), richard
2017-08-11 17:28 (unknown), rhsinfo
2017-08-10 3:32 (unknown), kholloway
2017-08-10 0:03 (unknown), michele
2017-08-08 21:31 (unknown), michele
2017-08-02 0:36 (unknown), richard
2017-07-31 11:33 (unknown), rhsinfo
2017-07-19 11:11 (unknown), rhsinfo
2017-07-17 17:30 (unknown), richard
2017-07-05 0:06 (unknown), michele
2017-06-29 13:46 (unknown), kholloway
2017-06-26 15:03 (unknown), richard
2017-06-25 10:21 (unknown), richard
2017-06-24 19:38 (unknown), richard
2017-06-23 17:22 (unknown), richard
2017-06-21 4:40 (unknown), kholloway
2017-06-17 22:46 (unknown), rhsinfo
2017-06-14 19:31 (unknown), kholloway
2017-06-09 8:02 (unknown), kholloway
2017-06-09 0:34 (unknown), richard
2017-05-21 8:42 (unknown), brucet
2017-05-20 12:27 (unknown), ajae
2017-05-14 3:19 (unknown), unixkeeper
2017-01-25 9:27 (unknown), clasico082
2016-12-22 9:16 (unknown), rhsinfo
2016-12-11 18:14 (unknown), kholloway
2016-11-30 9:15 (unknown), ajae
2016-10-22 21:34 (unknown), richard
2016-10-22 10:32 (unknown), brucet
2016-09-17 20:00 (unknown), Chris Clemons
2016-09-17 19:57 (unknown), Chris Clemons
2016-05-13 13:33 (unknown), rhsinfo
[not found] <1570038211.167595.1414613146892.JavaMail.yahoo@jws10056.mail.ne1.yahoo.com>
[not found] ` <1835234304.171617.1414613165674.JavaMail.yahoo@jws10089.mail.ne1.yahoo.com>
[not found] ` <1938862685.172387.1414613200459.JavaMail.yahoo@jws100180.mail.ne1.yahoo.com>
[not found] ` <705402329.170339.1414613213653.JavaMail.yahoo@jws10087.mail.ne1.yahoo.com>
[not found] ` <760168749.169371.1414613227586.JavaMail.yahoo@jws10082.mail.ne1.yahoo.com>
[not found] ` <1233923671.167957.1414613439879.JavaMail.yahoo@jws10091.mail.ne1.yahoo.com>
[not found] ` <925985882.172122.1414613520734.JavaMail.yahoo@jws100207.mail.ne1.yahoo.com>
[not found] ` <1216694778.172990.1414613570775.JavaMail.yahoo@jws100152.mail.ne1.yahoo.com>
[not found] ` <1213035306.169838.1414613612716.JavaMail.yahoo@jws10097.mail.ne1.yahoo.com>
[not found] ` <2058591563.172973.1414613668636.JavaMail.yahoo@jws10089.mail.ne1.yahoo.com>
[not found] ` <1202030640.175493.1414613712352.JavaMail.yahoo@jws10036.mail.ne1.yahoo.com>
[not found] ` <1111049042.175610.1414613739099.JavaMail.yahoo@jws100165.mail.ne1.yahoo.com>
[not found] ` <574125160.175950.1414613784216.JavaMail.yahoo@jws100158.mail.ne1.yahoo.com>
[not found] ` <1726966600.175552.1414613846198.JavaMail.yahoo@jws100190.mail.ne1.yahoo.com>
[not found] ` <976499752.219775.1414613888129.JavaMail.yahoo@jws100101.mail.ne1.yahoo.com>
[not found] ` <1400960529.171566.1414613936238.JavaMail.yahoo@jws10059.mail.ne1.yahoo.com>
[not found] ` <1333619289.175040.1414613999304.JavaMail.yahoo@jws100196.mail.ne1.yahoo.com>
[not found] ` <1038759122.176173.1414614054070.JavaMail.yahoo@jws100138.mail.ne1.yahoo.com>
[not found] ` <1109995533.176150.1414614101940.JavaMail.yahoo@jws100140.mail.ne1.yahoo.com>
[not found] ` <809474730.174920.1414614143971.JavaMail.yahoo@jws100154.mail.ne1.yahoo.com>
[not found] ` <1234226428.170349.1414614189490.JavaMail.yahoo@jws10056.mail.ne1.yahoo.com>
[not found] ` <1122464611.177103.1414614228916.JavaMail.yahoo@jws100161.mail.ne1.yahoo.com>
[not found] ` <1350859260.174219.1414614279095.JavaMail.yahoo@jws100176.mail.ne1.yahoo.com>
[not found] ` <1730751880.171557.1414614322033.JavaMail.yahoo@jws10060.mail.ne1.yahoo.com>
[not found] ` <642429550.177328.1414614367628.JavaMail.yahoo@jws100165.mail.ne1.yahoo.com>
[not found] ` <1400780243.20511.1414614418178.JavaMail.yahoo@jws100162.mail.ne1.yahoo.com>
[not found] ` <2025652090.173204.1414614462119.JavaMail.yahoo@jws10087.mail.ne1.yahoo.com>
[not found] ` <859211720.180077.1414614521867.JavaMail.yahoo@jws100147.mail.ne1.yahoo.com>
[not found] ` <258705675.173585.1414614563057.JavaMail.yahoo@jws10078.mail.ne1.yahoo.com>
[not found] ` <1773234186.173687.1414614613736.JavaMail.yahoo@jws10078.mail.ne1.yahoo.com>
[not found] ` <1132079010.173033.1414614645153.JavaMail.yahoo@jws10066.mail.ne1.yahoo.com>
[not found] ` <1972302405.176488.1414614708676.JavaMail.yahoo@jws100166.mail.ne1.yahoo.com>
[not found] ` <1713123000.176308.1414614771694.JavaMail.yahoo@jws10045.mail.ne1.yahoo.com>
[not found] ` <299800233.173413.1414614817575.JavaMail.yahoo@jws10066.mail.ne1.yahoo.com>
[not found] ` <494469968.179875.1414614903152.JavaMail.yahoo@jws100144.mail.ne1.yahoo.com>
[not found] ` <2136945987.171995.1414614942776.JavaMail.yahoo@jws10091.mail.ne1.yahoo.com>
[not found] ` <257674219.177708.1414615022592.JavaMail.yahoo@jws100181.mail.ne1.yahoo.com>
[not found] ` <716927833.181664.1414615075308.JavaMail.yahoo@jws100145.mail.ne1.yahoo.com>
[not found] ` <874940984.178797.1414615132802.JavaMail.yahoo@jws100157.mail.ne1.yahoo.com>
[not found] ` <1283488887.176736.1414615187657.JavaMail.yahoo@jws100183.mail.ne1.yahoo.com>
[not found] ` <777665713.175887.1414615236293.JavaMail.yahoo@jws10083.mail.ne1.yahoo.com>
[not found] ` <585395776.176325.1414615298260.JavaMail.yahoo@jws10033.mail.ne1.yahoo.com>
[not found] ` <178352191.221832.1414615355071.JavaMail.yahoo@jws100104.mail.ne1.yahoo.com>
[not found] ` <108454213.176606.1414615522058.JavaMail.yahoo@jws10053.mail.ne1.yahoo.com>
[not found] ` <1617229176.177502.1414615563724.JavaMail.yahoo@jws10030.mail.ne1.yahoo.com>
[not found] ` <324334617.178254.1414615625247.JavaMail.yahoo@jws10089.mail.ne1.yahoo.com>
[not found] ` <567135865.82376.1414615664442.JavaMail.yahoo@jws100136.mail.ne1.yahoo.com>
[not found] ` <764758300.179669.1414615711821.JavaMail.yahoo@jws100107.mail.ne1.yahoo.com>
[not found] ` <1072855470.183388.1414615775798.JavaMail.yahoo@jws100147.mail.ne1.yahoo.com>
[not found] ` <2134283632.173314.1414615831322.JavaMail.yahoo@jws10094.mail.ne1.yahoo.com>
[not found] ` <1454491902.178612.1414615875076.JavaMail.yahoo@jws100209.mail.ne1.yahoo.com>
[not found] ` <1480763910.146593.1414958012342.JavaMail.yahoo@jws10033.mail.ne1.yahoo.com>
2014-11-02 19:54 ` (unknown) MRS GRACE MANDA
2014-10-13 3:03 How to invalidate all cache Zheng Liu
2014-10-13 3:16 ` Slava Pestov
2014-10-13 3:45 ` Zheng Liu
2014-10-13 3:43 ` Re[2]: " Pavel Goran
2014-10-13 13:30 ` Zheng Liu
2014-10-13 18:13 ` (unknown) Eric Wheeler
2014-08-06 12:06 (unknown), Daniel Smedegaard Buus
2013-08-30 4:17 (unknown) Peter Kieser
2013-05-17 14:47 (unknown), sheng qiu
2013-01-23 12:37 (unknown), Nitin Kshirsagar
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).