From: Herbert Poetzl <herbert@13thfloor.at>
To: "Serge E. Hallyn" <serue@us.ibm.com>
Cc: Andi Kleen <ak@suse.de>,
"Eric W. Biederman" <ebiederm@xmission.com>,
dev@sw.ru, linux-kernel@vger.kernel.org, sam@vilain.net,
xemul@sw.ru, haveblue@us.ibm.com, clg@fr.ibm.com,
frankeh@us.ibm.com
Subject: Re: [PATCH 7/7] uts namespaces: Implement CLONE_NEWUTS flag
Date: Fri, 5 May 2006 08:44:45 +0200 [thread overview]
Message-ID: <20060505064445.GA3437@MAIL.13thfloor.at> (raw)
In-Reply-To: <20060503161143.GA18576@sergelap.austin.ibm.com>
On Wed, May 03, 2006 at 11:11:43AM -0500, Serge E. Hallyn wrote:
> Quoting Andi Kleen (ak@suse.de):
> > On Tuesday 02 May 2006 19:20, Serge E. Hallyn wrote:
> > > Quoting Andi Kleen (ak@suse.de):
> > > > Have a proxy structure which has pointers to the many name spaces and a bit
> > > > mask for "namespace X is different".
> > >
> > > different from what?
> >
> > From the parent.
>
> ...
>
> > > Oh, you mean in case we want to allow cloning a namespace outside of
> > > fork *without* cloning the nsproxy struct?
> >
> > Basically every time any name space changes you need a new nsproxy.
>
> But, either the nsproxy is shared between tasks and you need to copy
> youself a new one as soon as any ns changes, or it is not shared, and
> you don't need that info at all (just make the change in the nsproxy
> immediately)
>
> What am I missing?
>
> Should we talk about this on irc someplace? Perhaps drag in Eric as
> well?
good idea, feel free to use #vserver (irc.oftc.net) for that
> > > > This structure would be reference
> > > > counted. task_struct has a single pointer to it.
> > >
> > > If it is reference counted, that implies it is shared between some
> > > processes. But namespace pointers themselves are shared between some of
> > > these nsproxy's. The lifetime mgmt here is one reason I haven't tried a
> > > patch to do this.
> >
> > The livetime management is no different from having individual pointers.
>
> That's true if we have one nsproxy per process or thread, which I didn't
> think was the case. Are you saying not to share nsproxy's among
> processes which share all namespaces?
>
> > > > With many name spaces you would have smaller task_struct, less cache
> > > > foot print, better cache use of task_struct because slab cache colouring
> > > > will still work etc.
> > >
> > > I suppose we could run some performance tests with some dummy namespace
> > > pointers? 9 void *'s directly in the task struct, and the same inside a
> > > refcounted container struct. The results might add some urgency to
> > > implementing the struct nsproxy.
> >
> > Not sure you'll notice too much difference on the beginning. I am just
>
> 9 void*'s is probably more than we'll need, though, so it's not "the
> beginning". Eric previously mentioned uts, sysvipc, net, pid, and uid,
> to which we might add proc, sysctl, and signals, though those are
> probably just implied through the others.
> What others do you see us needing?
the 'container', as well as accounting and resource limits
but they are not required in the beginning either
> If the number were more likely to be 50, then in the above experiment
> use 50 instead - the point was to see the performance implications
> without implementing the namespaces first.
>
> Anyway I guess I'll go ahead and queue up some tests.
good!
best,
Herbert
> > the opinion memory/cache bloat needs to be attacked at the root, not
> > when it's too late.
>
> -serge
next prev parent reply other threads:[~2006-05-05 6:44 UTC|newest]
Thread overview: 27+ messages / expand[flat|nested] mbox.gz Atom feed top
2006-05-01 19:53 [PATCH 2/7] uts namespaces: switch to using uts namespaces Serge E. Hallyn
2006-05-01 19:53 ` [PATCH 3/7] uts namespaces: use init_utsname when appropriate Serge E. Hallyn
2006-05-01 19:53 ` [PATCH 1/7] uts namespaces: introduce temporary helpers Serge E. Hallyn
2006-05-01 19:53 ` [PATCH 4/7] uts namespaces: implement utsname namespaces Serge E. Hallyn
2006-05-01 19:53 ` [PATCH 5/7] uts namespaces: sysctl hack Serge E. Hallyn
2006-05-01 19:53 ` [PATCH 6/7] uts namespaces: remove system_utsname Serge E. Hallyn
2006-05-01 19:53 ` [PATCH 7/7] uts namespaces: Implement CLONE_NEWUTS flag Serge E. Hallyn
2006-05-01 20:28 ` Dave Hansen
2006-05-01 21:11 ` Serge E. Hallyn
2006-05-01 21:58 ` Dave Hansen
2006-05-02 17:32 ` Serge E. Hallyn
2006-05-02 8:55 ` Eric W. Biederman
2006-05-02 6:55 ` Andi Kleen
2006-05-02 8:03 ` Eric W. Biederman
2006-05-02 8:17 ` Andi Kleen
2006-05-02 8:48 ` Eric W. Biederman
2006-05-02 17:20 ` Serge E. Hallyn
2006-05-02 17:30 ` Andi Kleen
2006-05-03 16:11 ` Serge E. Hallyn
2006-05-03 16:19 ` Serge E. Hallyn
2006-05-05 6:44 ` Herbert Poetzl [this message]
2006-05-05 12:17 ` Serge E. Hallyn
2006-05-05 11:02 ` Andi Kleen
2006-05-05 11:43 ` Serge E. Hallyn
2006-05-05 14:31 ` Andi Kleen
2006-05-05 15:55 ` Eric W. Biederman
-- strict thread matches above, loose matches on Subject: below --
2006-05-01 19:53 [PATCH 0/7] uts namespaces: Introduction Serge E. Hallyn
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=20060505064445.GA3437@MAIL.13thfloor.at \
--to=herbert@13thfloor.at \
--cc=ak@suse.de \
--cc=clg@fr.ibm.com \
--cc=dev@sw.ru \
--cc=ebiederm@xmission.com \
--cc=frankeh@us.ibm.com \
--cc=haveblue@us.ibm.com \
--cc=linux-kernel@vger.kernel.org \
--cc=sam@vilain.net \
--cc=serue@us.ibm.com \
--cc=xemul@sw.ru \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox