From: Michele Brodoloni <mik.linux@gmail.com>
To: linux-bcache@vger.kernel.org
Subject: Re: bcache: bch_btree_gc() gc failed!
Date: Fri, 7 Oct 2016 12:55:50 +0000 (UTC)
Message-ID: <nt860l$r23$1@blaine.gmane.org>
In-Reply-To: <nt7uef$u76$1@blaine.gmane.org>

Hi,
I tried to reboot the machine, but bcache is still dead.
/sys/block/bcache0/bcache/state reports "clean" (RAID5)
/sys/block/bcache1/bcache/state reports "no cache" (SAN RAID10)
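
For reference, I'm reading those states straight from sysfs, like this:

# cat /sys/block/bcache0/bcache/state
clean
# cat /sys/block/bcache1/bcache/state
no cache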

I did not mention before that the SAN is used like a DAS, and it isn't 
shared with other machines. There's just one server accessing it.

Regards,
Michele

On Fri, 07 Oct 2016 10:46:39 +0000, Michele Brodoloni wrote:

> Hello,
> I have bcache running on a Debian 8.0 x86_64 with kernel 4.4.16.
> I have 2x Samsung 850 PRO 250 GB SSDs in hardware RAID0 acting as the cache
> device for 1) a local RAID5 volume and 2) a SAN RAID10 volume on an
> active/backup multipath Fibre Channel link.
> 
> I noticed today that the SAN backing device got detached from the cache:
> 
> # bcache-super-show /dev/sdc1
> sb.magic		ok
> sb.first_sector		8 [match]
> sb.csum			2D71F678442855F6 [match]
> sb.version		3 [cache device]
> dev.label		(empty)
> dev.uuid		c3dd7b4e-04e0-4578-a0ce-b35a5745e459
> dev.sectors_per_block	1
> dev.sectors_per_bucket	1024
> dev.cache.first_sector	1024
> dev.cache.cache_sectors	629144576
> dev.cache.total_sectors	629145600
> dev.cache.ordered	yes
> dev.cache.discard	no
> dev.cache.pos		0
> dev.cache.replacement	0 [lru]
> cset.uuid		7eb257b3-940d-42ca-ab23-52752f8b17f8
> 
> # bcache-super-show /dev/sdd1
> sb.magic		ok
> sb.first_sector		8 [match]
> sb.csum			514C0F59BC7C1938 [match]
> sb.version		1 [backing device]
> dev.label		(empty)
> dev.uuid		904aaaa4-473a-446d-aad5-4e55cde972a8
> dev.sectors_per_block	1
> dev.sectors_per_bucket	1024
> dev.data.first_sector	16
> dev.data.cache_mode	0 [writethrough]
> dev.data.cache_state	0 [detached]
> cset.uuid		00000000-0000-0000-0000-000000000000
> 
> # bcache-super-show /dev/sde1
> sb.magic		ok
> sb.first_sector		8 [match]
> sb.csum			514C0F59BC7C1938 [match]
> sb.version		1 [backing device]
> dev.label		(empty)
> dev.uuid		904aaaa4-473a-446d-aad5-4e55cde972a8
> dev.sectors_per_block	1
> dev.sectors_per_bucket	1024
> dev.data.first_sector	16
> dev.data.cache_mode	0 [writethrough]
> dev.data.cache_state	0 [detached]
> cset.uuid		00000000-0000-0000-0000-000000000000
> 
> (Obviously /dev/sdd1 and /dev/sde1 are the same device, just seen through
> different multipath paths.)
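> 
> The matching dev.uuid in the output above confirms it, e.g.:
> 
> # bcache-super-show /dev/sdd1 | grep dev.uuid
> dev.uuid		904aaaa4-473a-446d-aad5-4e55cde972a8
> # bcache-super-show /dev/sde1 | grep dev.uuid
> dev.uuid		904aaaa4-473a-446d-aad5-4e55cde972a8
> 
> (multipath -ll should also list both sdd and sde under the same map.)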
> 
> Another side effect is that my syslog and kern.log logfiles are eating all
> the space on my root partition with messages like this:
> 
> Oct  7 12:27:31 lnx kernel: [2300151.278097] bcache: bch_btree_gc() gc
> failed!
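> 
> To give a sense of the volume, something like this shows how many of these
> messages there are and how big the logs have grown (paths assume the
> standard Debian /var/log layout):
> 
> # grep -c 'bch_btree_gc() gc failed' /var/log/kern.log
> # du -sh /var/log/kern.log /var/log/syslog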
> 
> 
> What can be done to troubleshoot this issue?
> 
> Regards,
> Michele

Thread overview: 3+ messages
2016-10-07 10:46 bcache: bch_btree_gc() gc failed! Michele Brodoloni
2016-10-07 12:55 ` Michele Brodoloni [this message]
2016-10-07 18:01   ` Michele Brodoloni
