public inbox for linux-bcache@vger.kernel.org
From: Coly Li <colyli@suse.de>
To: mingzhe.zou@easystack.cn
Cc: dongsheng.yang@easystack.cn, linux-bcache@vger.kernel.org,
	zoumingzhe@qq.com
Subject: Re: [PATCH v3] bcache: dynamic incremental gc
Date: Thu, 12 May 2022 21:41:52 +0800	[thread overview]
Message-ID: <ecce38e7-8ba0-5fbf-61a6-2dfc21c7793d@suse.de> (raw)
In-Reply-To: <20220511073903.13568-1-mingzhe.zou@easystack.cn>

On 5/11/22 3:39 PM, mingzhe.zou@easystack.cn wrote:
> From: ZouMingzhe <mingzhe.zou@easystack.cn>
>
> Currently, GC runs no more than 100 times, with at least
> 100 nodes each time; the maximum number of nodes per run
> is not limited.
>
> ```
> static size_t btree_gc_min_nodes(struct cache_set *c)
> {
>          ......
>          min_nodes = c->gc_stats.nodes / MAX_GC_TIMES;
>          if (min_nodes < MIN_GC_NODES)
>                  min_nodes = MIN_GC_NODES;
>
>          return min_nodes;
> }
> ```
>
> According to our test data, when nvme is used as the cache,
> it takes about 1ms for GC to handle each node (block 4k and
> bucket 512k). This means that the latency during GC is at
> least 100ms. During GC, IO performance would be reduced by
> half or more.
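[The 100ms figure follows from the helper quoted above; a small standalone model of that arithmetic, where MAX_GC_TIMES and MIN_GC_NODES mirror the bcache macros and the 1ms per-node cost is the measured number from this mail, not a kernel constant:]

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the numbers quoted above. */
#define MAX_GC_TIMES 100
#define MIN_GC_NODES 100
#define NODE_COST_MS 1	/* measured ~1ms per node on NVMe (mail's figure) */

/* Same clamping logic as the quoted btree_gc_min_nodes(), on a plain count. */
static size_t min_nodes_per_pass(size_t total_nodes)
{
	size_t min_nodes = total_nodes / MAX_GC_TIMES;

	if (min_nodes < MIN_GC_NODES)
		min_nodes = MIN_GC_NODES;
	return min_nodes;
}

/* Lower bound on how long one GC pass stalls foreground IO. */
static size_t min_gc_pause_ms(size_t total_nodes)
{
	return min_nodes_per_pass(total_nodes) * NODE_COST_MS;
}
```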
>
> I want to optimize IOPS and latency under high pressure.
> This patch holds the inflight peak: when the IO depth reaches
> its maximum, GC processes only a few (10) nodes, then sleeps
> immediately so these requests can be handled.
>
> bch_bucket_alloc() may wait for bch_allocator_thread() to
> wake up, and bch_allocator_thread() needs to wait for gc
> to complete; in that case gc needs to end quickly. So, v3
> adds bucket_alloc_inflight to cache_set.
>
> ```
> long bch_bucket_alloc(struct cache *ca, unsigned int reserve, bool wait)
> {
>          ......
>          do {
>                  prepare_to_wait(&ca->set->bucket_wait, &w,
>                                  TASK_UNINTERRUPTIBLE);
>
>                  mutex_unlock(&ca->set->bucket_lock);
>                  schedule();
>                  mutex_lock(&ca->set->bucket_lock);
>          } while (!fifo_pop(&ca->free[RESERVE_NONE], r) &&
>                   !fifo_pop(&ca->free[reserve], r));
>          ......
> }
>
> static int bch_allocator_thread(void *arg)
> {
> 	......
> 	allocator_wait(ca, bch_allocator_push(ca, bucket));
> 	wake_up(&ca->set->btree_cache_wait);
> 	wake_up(&ca->set->bucket_wait);
> 	......
> }
>
> static void bch_btree_gc(struct cache_set *c)
> {
> 	......
> 	bch_btree_gc_finish(c);
> 	wake_up_allocators(c);
> 	......
> }
> ```
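[A minimal sketch of how such an inflight counter could gate incremental GC. bucket_alloc_inflight is the field named above; the struct and helper here are hypothetical illustrations, not the patch's actual code:]

```c
#include <assert.h>

/* Hypothetical model of the v3 idea: bch_bucket_alloc() bumps a
 * counter while it waits, and incremental GC finishes its current
 * round quickly whenever an allocator is blocked on it. */
struct gc_model {
	long bucket_alloc_inflight;	/* waiters in bch_bucket_alloc() */
};

/* GC should stop batching and wake the allocators as soon as
 * someone is waiting for a free bucket. */
static int gc_should_finish_quickly(const struct gc_model *c)
{
	return c->bucket_alloc_inflight > 0;
}
```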
>
> With this patch, each GC round may process only very few
> nodes, so GC would last a long time if it slept 100ms each
> time. The sleep time should therefore be calculated
> dynamically based on gc_cost.
> 
> At the same time, I added some cost statistics to gc_stat,
> hoping to provide useful information for future work.
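[One way a cost-proportional sleep might look, as a sketch only: gc_cost stands for the time spent on the slice just processed, and the 100ms cap is an assumption for illustration, not a value taken from the patch:]

```c
#include <assert.h>

/* Hypothetical: sleep roughly as long as the last GC slice cost, so
 * total GC time only about doubles instead of being dominated by a
 * fixed 100ms sleep after every tiny slice. */
#define GC_SLEEP_MAX_MS 100	/* assumed cap, not from the patch */

static unsigned int gc_sleep_ms(unsigned int gc_cost_ms)
{
	return gc_cost_ms < GC_SLEEP_MAX_MS ? gc_cost_ms : GC_SLEEP_MAX_MS;
}
```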


Hi Mingzhe,

At first glance, I feel this change may delay the small GC 
periods and finally result in a large GC period, which is not expected.

But it is possible that my feeling is incorrect. Do you have detailed 
performance numbers for both I/O latency and GC period? Then I can 
better understand this effort.

BTW, I will add this patch to my testing set and try it myself.


Thanks.


Coly Li




[snipped]




Thread overview: 8+ messages
2022-05-11  7:39 [PATCH v3] bcache: dynamic incremental gc mingzhe.zou
2022-05-12 13:41 ` Coly Li [this message]
2022-05-20  8:22   ` Zou Mingzhe
2022-05-20 18:24     ` Eric Wheeler
2022-05-23  2:52       ` Zou Mingzhe
2022-05-23 12:54         ` Zou Mingzhe
2022-05-23 17:55           ` Eric Wheeler
2022-05-24  2:47             ` Zou Mingzhe
