cluster-devel.redhat.com archive mirror
 help / color / mirror / Atom feed
From: david m. richter <richterd@gmail.com>
To: cluster-devel.redhat.com
Subject: [Cluster-devel] gfs uevent and sysfs changes
Date: Fri, 5 Dec 2008 12:35:06 -0500	[thread overview]
Message-ID: <1d07ca700812050935q2d37c53lda8ad5ce4af6459a@mail.gmail.com> (raw)
In-Reply-To: <1228470705.3579.12.camel@localhost.localdomain>

On Fri, Dec 5, 2008 at 4:51 AM, Steven Whitehouse <swhiteho@redhat.com> wrote:
> Hi,
>
> On Thu, 2008-12-04 at 16:38 -0600, David Teigland wrote:
>> On Thu, Dec 04, 2008 at 04:59:23PM -0500, david m. richter wrote:
>> > ah, so just to make sure i'm with you here: (1) gfs_controld is
>> > generating this "id"-which-is-the-mountgroup-id, and (2) gfs_kernel
>> > will no longer receive this in the hostdata string, so (3) i can just
>> > rip out my in-kernel hostdata-parsing gunk and instead send in the
>> > mountgroup id on my own (i have my own up/downcall channel)?  if i've
>> > got it right, then everything's a cinch and i'll shut up :)
>>
>> Yep.  Generally, the best way to uniquely identify and refer to a gfs
>> filesystem is using the fsname string (specified during mkfs with -t and
>> saved in the superblock).  But, sometimes it's just a lot easier have a
>> numerical identifier instead.  I expect this is why you're using the id,
>> and it's why we were using it for communicating about plocks.
>>
>> In cluster1 and cluster2 the cluster infrastructure dynamically selected a
>> unique id when needed, and it never worked great.  In cluster3 the id is
>> just a crc of the fsname string.
>>
>> Now that I think about this a bit more, there may be a reason to keep the
>> id in the string.  There was some interest on linux-kernel about better
>> using the statfs fsid field, and this id is what gfs should be putting
>> there.
>>
> In that case gfs2 should be able to generate the id itself from the
> fsname and it still doesn't need it passed in, even if it continues to
> expose the id in sysfs.
>
> Perhaps better still, it should be possible for David to generate the id
> directly if he really needs it from the fsname.
>
> Since we also have a UUID now, for recently created filesystems, it
> might be worth exposing that via sysfs and/or uevents too.
>
>> > say, one tangential question (i won't be offended if you skip it -
>> > heh): is there a particular reason that you folks went with the uevent
>> > mechanism for doing upcalls?  i'm just curious, given the
>> > seeming-complexity and possible overhead of using the whole layered
>> > netlink apparatus vs. something like Trond Myklebust's rpc_pipefs
>> > (don't let the "rpc" fool you; it's a barebones, dead-simple pipe).
>> > -- and no, i'm not selling anything :)  my boss was asking for a list
>> > of differences between rpc_pipefs and uevents and the best i could
>> > come up with is the former's bidirectional.  Trond mentioned the
>> > netlink overhead and i wondered if that was actually a significant
>> > factor or just lost in the noise in most cases.
>>
>> The uevents looked pretty simple when I was initially designing how the
>> kernel/user interactions would work, and they fit well with sysfs files
>> which I was using too.  I don't think the overhead of using uevents is too
>> bad.  Sysfs files and uevents definately don't work great if you need any
>> kind of sophisticated bi-directional interface.
>>
>> Dave
>>
> I think uevents are a reasonable choice as they are easy enough to parse
> that it could be done by scripts, etc and easy to extend as well. We do
> intend to use netlink in the future (bz #337691) for quota messages, but
> in that case we would be using an existing standard for sending those
> messages.
>
> Netlink can be extended fairly easily, but you do have to be careful
> when designing the message format. I've not come across rpc_pipefs
> before, so I can't comment on that yet. I don't think we need to worry
> about overhead on sending the messages (if you have so much recovery
> message traffic that its a problem, you probably have bigger things to
> worry about!), and I don't see that netlink should have any more
> overhead than any other method of sending messages.

thanks!  again, i appreciate learning from other peoples' experiences.
 fyi, the rpc_pipefs stuff is only currently used in two places, i
believe -- by rpc.idmapd and rpc.gssd; just another of a surprisingly
wide variety of ways to do kernel<->userland communication.

thanks again, d.
>
> Steve.
>
>
>



  parent reply	other threads:[~2008-12-05 17:35 UTC|newest]

Thread overview: 11+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2008-12-01 17:31 [Cluster-devel] gfs uevent and sysfs changes David Teigland
2008-12-02 14:02 ` Steven Whitehouse
2008-12-04 18:32 ` david m. richter
2008-12-04 21:07   ` David Teigland
2008-12-04 21:59     ` david m. richter
2008-12-04 22:38       ` David Teigland
2008-12-05  9:51         ` Steven Whitehouse
2008-12-05 14:52           ` David Teigland
2008-12-05 15:03             ` David Teigland
2008-12-05 17:35           ` david m. richter [this message]
2008-12-05 17:31         ` david m. richter

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=1d07ca700812050935q2d37c53lda8ad5ce4af6459a@mail.gmail.com \
    --to=richterd@gmail.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).