From: Ram Pai <linuxram@us.ibm.com>
To: Miklos Szeredi <miklos@szeredi.hu>
Cc: Andrew Morton <akpm@osdl.org>,
	viro@parcelfarce.linux.theplanet.co.uk,
	Avantika Mathur <mathurav@us.ibm.com>,
	mike@waychison.com, janak@us.ibm.com,
	linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/7] shared subtree
Date: Sat, 30 Jul 2005 17:45:05 -0700
Message-ID: <1122770705.6956.23.camel@localhost>
In-Reply-To: <E1Dyk4A-0006IL-00@dorka.pomaz.szeredi.hu>

On Fri, 2005-07-29 at 22:39, Miklos Szeredi wrote:
> > > static struct vfsmount *propagation_next(struct vfsmount *p,
> > > 					 struct vfsmount *base)
> > > {
> > > 	/* first iterate over the slaves */
> > > 	if (!list_empty(&p->mnt_slave_list))
> > > 		return first_slave(p);
> > 
> > I think this code should be
> > 		if (!list_empty(&p->mnt_slave))
> > 			return next_slave(p);
> > 
> > Right? I think I get the idea. 
> 
> This is a depth-first search, so first_slave() is right.

Ok. I have started implementing your idea, but the implementation is not
simple; it becomes a complex mess. At least in the pnode data structure
implementation, the propagation logic was all abstracted out and
concentrated in the pnode data structure.
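
To make the comparison concrete: in the pnode version, the propagation
state lives in its own structure, roughly along these lines (the field
names here are illustrative, not copied verbatim from my patch):

struct vfspnode {
        struct list_head pnode_vfs;         /* peer vfsmounts in this pnode */
        struct list_head pnode_slave_pnode; /* slave pnodes hanging off us */
        struct vfspnode *pnode_master;      /* pnode we receive events from */
};

All the propagation walks touch only this structure, so struct vfsmount
itself stays ignorant of propagation.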

Here is a sample implementation of do_make_slave() based on your idea.

static int do_make_slave(struct vfsmount *mnt)
{
        int err = 0;
        struct vfsmount *peer_mnt;

        spin_lock(&vfspnode_lock);
        if (!IS_MNT_SHARED(mnt)) {
                spin_unlock(&vfspnode_lock);
                err = -EINVAL;
                goto out;
        }

        /* pick any peer of mnt; NULL if mnt is the only member */
        peer_mnt = list_entry(mnt->mnt_share.next, struct vfsmount,
                              mnt_share);
        if (peer_mnt == mnt)
                peer_mnt = NULL;

        list_del_init(&mnt->mnt_share);
        if (peer_mnt) {
                /*
                 * move the slave list to the peer_mnt, and make mnt
                 * itself a slave of that peer (the _init variant keeps
                 * mnt's own slave head in a sane state after the splice)
                 */
                list_splice_init(&mnt->mnt_slave, &peer_mnt->mnt_slave);
                list_add(&mnt->mnt_slave_list, &peer_mnt->mnt_slave);
                set_mnt_slave(mnt);
        } else {
                /* no peers left: release all of mnt's slaves */
                struct vfsmount *slave_mnt, *t_slave_mnt;
                list_for_each_entry_safe(slave_mnt, t_slave_mnt,
                                &mnt->mnt_slave, mnt_slave_list) {
                        CLEAR_MNT_SLAVE(slave_mnt);
                        list_del_init(&slave_mnt->mnt_slave_list);
                }
        }
        list_del_init(&mnt->mnt_slave);
        mnt->mnt_master = peer_mnt;
        spin_unlock(&vfspnode_lock);
out:
        return err;
}
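
For completeness, the caller I have in mind is the --make-slave path of
the mount(2) handling, along these lines (a hypothetical call site, just
to show how the helper is driven; the flag name is an assumption):

        /* hypothetical: in the mount flag dispatch */
        if (flags & MS_SLAVE)
                err = do_make_slave(mnt);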

Do you still believe that your idea is simpler? 

The most difficult part is attaching a shared vfs tree, which needs to be
attached at some other shared mount point. The problem here is that while
traversing the propagation tree, one has to build another, similar
propagation tree for the new child mount. It is a two-dimensional tree
walk: one walk is along the vfs tree, and the other walk is along the
pnode tree for each mount. It is much easier to abstract out the pnode
operations and concentrate on them separately than to mix the
functionality of the vfs and the pnode in a single vfs data structure.

In my code, I have abstracted out the pnode tree traversal with a single
iterator function, pnode_next().
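
To illustrate, every consumer of the propagation tree in my code reduces
to a loop of this shape (a sketch; only the name pnode_next() comes from
my patch, the signature and the rest are illustrative):

        struct vfspnode *pnode;

        /*
         * visit every pnode that receives propagation from 'start',
         * depth first; pnode_next() returns NULL when the walk is done
         */
        for (pnode = start; pnode != NULL; pnode = pnode_next(pnode, start)) {
                /* act on each member mount of this pnode here */
        }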

Think about it, and let me know if it is worth the effort of changing
the implementation. I sincerely feel it just shifts complexity around
instead of reducing it. I can eventually come up with a fully tested
implementation using your idea, but I am still not convinced that it
reduces complexity.
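
For concreteness, this is how I read the intended use of your two
functions quoted below; the driving loop is my sketch, not code from
your mail (handle_propagation() is a stand-in for the per-mount work):

        struct vfsmount *start, *m;

        /* move the starting point onto a slave link or the root pnode */
        start = propagation_first(mnt);

        /* visit every mount the event propagates to, depth first */
        for (m = start; m != NULL; m = propagation_next(m, start))
                handle_propagation(m);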


RP

> 
> Here's a less buggy (and even more simplified) version of the
> function.  Note: it must be called with 'origin' either on a slave
> list or at the root pnode.  That's because the function checks whether
> all vfsmounts in a pnode have been traversed by looking at the
> emptiness of mnt_slave.  So if origin was in a slave pnode, but is not
> the actual slave link, the algorithm will go off the starting pnode
> and up to its master.
> 
> So here's a preparation function that finds the right place to start
> the propagation.
> 
> static struct vfsmount *propagation_first(struct vfsmount *p)
> {
> 	struct vfsmount *q = p;
> 
> 	while (list_empty(&q->mnt_slave)) {
> 		q = next_shared(q);
> 		if (q == p)
> 			break;
> 	}
> 	return q;
> }
> 
> static struct vfsmount *propagation_next(struct vfsmount *p,
> 					 struct vfsmount *origin)
> {
> 	/* are there any slaves of this mount? */
> 	if (!list_empty(&p->mnt_slave_list))
> 		return first_slave(p);
> 
> 	while (1) {
> 		/* if p->mnt_share is empty, this is a no-op */
> 		p = next_shared(p);
> 
> 		/* finished traversing? */
> 		if (p == origin)
> 			break;
> 
> 		/* more vfsmounts belong to the pnode? */
> 		if (list_empty(&p->mnt_slave))
> 			return p;
> 		
> 		/* more slaves? */
> 		if (p->mnt_slave.next != &p->mnt_master->mnt_slave_list)
> 			return next_slave(p);
> 
> 		/* back at master */
> 		p = p->mnt_master;
> 	}
> 
> 	return NULL;
> }
> 

