From: Wei Liu <wei.liu2@citrix.com>
To: Ian Campbell <ian.campbell@citrix.com>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
Jim Fehlig <jfehlig@suse.com>, Ken Johnson <ken@suse.com>,
Wei Liu <wei.liu2@citrix.com>,
xen-devel <xen-devel@lists.xen.org>
Subject: Re: [RFC] support more qdisk types
Date: Wed, 3 Feb 2016 11:08:14 +0000
Message-ID: <20160203110814.GI23178@citrix.com>
In-Reply-To: <1454497504.25207.63.camel@citrix.com>
On Wed, Feb 03, 2016 at 11:05:04AM +0000, Ian Campbell wrote:
> On Wed, 2016-02-03 at 10:55 +0000, Wei Liu wrote:
> > On Wed, Feb 03, 2016 at 10:51:27AM +0000, Ian Campbell wrote:
> > > On Wed, 2016-02-03 at 10:35 +0000, Wei Liu wrote:
> > > > > Ok. So in your opinion, even if any new disk config is encoded in
> > > > > 'target=', libxlu should split that up into (new) members of
> > > > > libxl_device_disk, not just plop it into libxl_device_disk.pdev_path?
> > > > >
> > > >
> > > > No, not necessarily. I didn't look closely at the code yesterday
> > > > when replying, sorry.
> > > >
> > > > If target= has always been shoveled into pdev_path, using that
> > > > would be fine. We already have a mechanism to parse target= outside
> > > > of libxl, in the hotplug scripts.
> > > >
> > > > Are you aware of all those hotplug scripts living under
> > > > tools/hotplug? Does using a hotplug script sound plausible to you?
> > > >
> > > > Currently the hotplug script for QEMU is broken and needs fixing,
> > > > but I'm sure we can figure it out.
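To make the "shoveled into pdev_path" idea concrete, here is a minimal C
sketch of what carrying target= through unchanged could look like on the
libxl side. This is not the actual libxlu parsing code: the helper name
fill_disk_from_target() and the rbd: example target are made up purely
for illustration; only the libxl_device_disk fields themselves
(pdev_path, vdev, backend, format) are real.

    /* Hypothetical sketch, not the actual libxlu parser: if target= is
     * simply shoveled into pdev_path, a qdisk network target string is
     * carried through verbatim rather than split into new fields. */
    #include <string.h>
    #include <libxl.h>

    static void fill_disk_from_target(libxl_device_disk *disk,
                                      const char *target)
    {
        libxl_device_disk_init(disk);
        disk->backend   = LIBXL_DISK_BACKEND_QDISK;
        disk->format    = LIBXL_DISK_FORMAT_RAW;  /* example format */
        disk->vdev      = strdup("xvda");         /* example vdev   */
        /* e.g. target = "rbd:pool/image" taken from the disk spec */
        disk->pdev_path = strdup(target);
    }

Whether anything else then needs to interpret that string is exactly the
hotplug-script question discussed below.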
> > >
> > > How do hotplug scripts factor into this?
> > >
> >
> > Only if supporting all such block devices requires presenting a block
> > device to QEMU. If QEMU handles them directly, then the hotplug script
> > is not in the picture.
>
> Perhaps I've misunderstood what this thread is about. I thought it was
> about exposing all the various backends which qdisk supports natively,
> like Ceph, Sheepdog, iSCSI, NBD, etc.
>
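For concreteness, the target strings for those qdisk-native backends
would be QEMU block-layer URIs along the lines of the examples below.
The hosts, ports, pool and export names are placeholders, and the exact
syntax should be checked against the QEMU documentation for the version
in use.

    /* Illustrative target= values only; all names are placeholders. */
    static const char *const example_qdisk_targets[] = {
        "rbd:pool/image",                     /* Ceph RBD */
        "sheepdog://localhost:7000/vdiname",  /* Sheepdog */
        "iscsi://192.0.2.1/iqn.2016-02.com.example:disk0/1", /* iSCSI */
        "nbd://localhost:10809/export",       /* NBD */
    };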
Good point. I'm the one who is confused. Hotplug is not in the picture
then.
Wei.
> Ian.
Thread overview: 25+ messages
2016-01-26 0:25 [RFC] support more qdisk types Jim Fehlig
2016-01-27 18:32 ` Konrad Rzeszutek Wilk
2016-01-27 20:25 ` Doug Goldstein
2016-01-27 21:09 ` Konrad Rzeszutek Wilk
2016-01-28 2:42 ` Jim Fehlig
2016-01-29 14:07 ` Konrad Rzeszutek Wilk
2016-01-29 17:18 ` Jim Fehlig
2016-01-29 17:59 ` Konrad Rzeszutek Wilk
2016-01-28 2:37 ` Jim Fehlig
2016-01-29 14:21 ` Doug Goldstein
2016-01-28 2:27 ` Jim Fehlig
2016-02-02 14:59 ` Wei Liu
2016-02-02 22:06 ` Jim Fehlig
2016-02-03 9:56 ` Ian Campbell
2016-02-04 2:53 ` Jim Fehlig
2016-02-04 10:16 ` Ian Campbell
2016-02-09 0:54 ` Jim Fehlig
2016-02-09 9:35 ` Ian Campbell
2016-02-09 10:58 ` Ian Jackson
2016-02-03 10:35 ` Wei Liu
2016-02-03 10:51 ` Ian Campbell
2016-02-03 10:55 ` Wei Liu
2016-02-03 11:05 ` Ian Campbell
2016-02-03 11:08 ` Wei Liu [this message]
2016-02-03 11:15 ` Roger Pau Monné