linux-bcache.vger.kernel.org archive mirror
From: Kent Overstreet <kent.overstreet@gmail.com>
To: Christopher James Halse Rogers <chris@cooperteam.net>
Cc: Martin McClure <martin.mcclure@gemtalksystems.com>,
	linux-bcache@vger.kernel.org
Subject: Re: bcachefs with cache device and backing device
Date: Tue, 17 May 2016 21:41:24 -0800
Message-ID: <20160518054124.GA1354@moria.home.lan>
In-Reply-To: <1463547999.21115.1@mail.cooperteam.net>

On Wed, May 18, 2016 at 03:06:39PM +1000, Christopher James Halse Rogers wrote:
> On Wed, May 18, 2016 at 2:01 PM, Kent Overstreet <kent.overstreet@gmail.com>
> wrote:
> > On Tue, May 17, 2016 at 07:46:33PM -0700, Martin McClure wrote:
> > >  On 05/12/2016 09:36 PM, Kent Overstreet wrote:
> > >  >
> > >  > Yeah - tiering replaces cache/backing devices
> > >  >
> > >  > IIRC,
> > >  >
> > >  > bcache format --tier 0 -C <SSD> --tier 1 -C <spinning rust>
> > >  >
> > >  > (the -C is going to go away at some point)
> > >  >
> > > 
> > >  Had a chance to play with this some more, but still not getting it to
> > >  work...
> > > 
> > >  Formatting seems to work, and once I do this:
> > > 
> > >    echo /dev/sdd1 > /sys/fs/bcache/register
> > >    echo /dev/nvme0n1 > /sys/fs/bcache/register
> > >    echo 1 > /sys/fs/bcache/<set-uuid>/blockdev_volume_create
> > > 
> > >  a /dev/bcache0 has been created. However, if I try to mount it:
> > > 
> > >    mount -t bcache /dev/bcache0 /mnt
> > > 
> > >  it says:
> > > 
> > >    mount: No such file or directory
> > > 
> > >  with a return code of 32, which is documented as "mount failure".
> > > 
> > >  At this point I reach the limit of my current understanding, but would
> > >  like to understand more.
> > 
> > The intended mount path for multi-device filesystems is currently
> > broken... Chris got it working (to my surprise) by - I believe -
> > registering all the devices via /sys/fs/bcache/register, and then
> > mounting just one of the block devices - Chris, is that correct?
> 
> That is indeed correct. Once the volume has all its components registered,
> it can be mounted by any of the block devices.
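
So, pieced together from this thread, that recipe would look roughly like
the following - the device names are just examples taken from Martin's
setup (NVMe as tier 0, spinning disk as tier 1), and it assumes the
filesystem is mounted via one of the member devices rather than a
blockdev volume:

    bcache format --tier 0 -C /dev/nvme0n1 --tier 1 -C /dev/sdd1
    echo /dev/nvme0n1 > /sys/fs/bcache/register
    echo /dev/sdd1 > /sys/fs/bcache/register
    # mount via any one of the registered member devices
    mount -t bcache /dev/sdd1 /mnt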

Actually, I just tested and mounting multiple devices directly _does_ work - you
just pass a colon-separated list of devices to mount:

mount -t bcache /dev/sdb:/dev/sdc /mnt

I could have sworn this was broken, but it worked just now...
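
With the same example devices as above, the direct route would then be
roughly:

    bcache format --tier 0 -C /dev/nvme0n1 --tier 1 -C /dev/sdd1
    # pass all member devices as a colon-separated list
    mount -t bcache /dev/nvme0n1:/dev/sdd1 /mnt

with, apparently, no separate registration step needed first.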

Thread overview: 10+ messages
2016-05-13  1:50 bcachefs with cache device and backing device Martin McClure
2016-05-13  4:36 ` Kent Overstreet
2016-05-13  4:37   ` Martin McClure
2016-05-18  2:46   ` Martin McClure
2016-05-18  4:01     ` Kent Overstreet
2016-05-18  5:06       ` Christopher James Halse Rogers
2016-05-18  5:41         ` Kent Overstreet [this message]
2016-05-20  0:41           ` Martin McClure
2016-05-20  0:44             ` Christopher James Halse Rogers
2016-05-20  1:18               ` Martin McClure
