From: Simon Horman <horms@verge.net.au>
To: Hans Schillstrom <hans.schillstrom@ericsson.com>
Cc: "lvs-devel@vger.kernel.org" <lvs-devel@vger.kernel.org>,
"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
"netfilter-devel@vger.kernel.org"
<netfilter-devel@vger.kernel.org>, "ja@ssi.bg" <ja@ssi.bg>,
"wensong@linux-vs.org" <wensong@linux-vs.org>,
"daniel.lezcano@free.fr" <daniel.lezcano@free.fr>
Subject: Re: [RFC PATCH 0/9] ipvs network name space (netns) aware
Date: Wed, 20 Oct 2010 11:17:59 +0200
Message-ID: <20101020091759.GC19121@verge.net.au>
In-Reply-To: <201010181355.59763.hans.schillstrom@ericsson.com>
On Mon, Oct 18, 2010 at 01:55:58PM +0200, Hans Schillstrom wrote:
> On Sunday 17 October 2010 08:47:31 Simon Horman wrote:
> > On Fri, Oct 08, 2010 at 01:16:36PM +0200, Hans Schillstrom wrote:
> > > This patch series adds network name space (netns) support to the LVS.
> > >
> > > REVISION
> > >
> > > This is version 1
> > >
> > > OVERVIEW
> > >
> > > The patch doesn't remove or add any functionality except for netns.
> > > For users who don't use network name spaces (netns), this patch is
> > > completely transparent.
> > >
> > > Now it's possible to run LVS in a Linux container (see lxc-tools),
> > > i.e. lightweight virtualization. For example, it's possible to run
> > > one or several LVS instances on a real server, each in its own network name space.
> > > From the LVS point of view it looks like it runs on its own machine.
> > >
> > > IMPLEMENTATION
> > > Basic requirements for netns awareness
> > > - Global variables have to be moved to dynamically allocated memory.
> > >
> > > Most global variables now reside in a struct ipvs { } in netns/ip_vs.h.
> > > What is moved and what is not?
> > >
> > > Some cache-aligned locks are still global, as are the module init params and debug_level.
> > >
> > > The algorithm files are untouched.
> > >
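For anyone following along, the grouping described above looks roughly
like the sketch below. The field names are only examples and not the
actual contents of the patch; the point is that each struct net carries
its own instance of this state instead of file-scope globals.

/* rough sketch of the per-netns container in netns/ip_vs.h */
struct netns_ipvs {
	/* formerly file-scope globals in ip_vs_ctl.c etc. */
	int			sysctl_drop_rate;
	struct list_head	svc_list;	/* per-netns services */
	rwlock_t		svc_lock;	/* protects svc_list */
};
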
> > > QUESTIONS
> > > Should the drop rate in ip_vs_ctl be per netns or a grand total?
>
> This is a tricky one (I think):
> if the interface is shared with the root name-space and/or other name-spaces,
> - use the grand total;
> if it's an "own interface",
> - the drop rate can/should be per netns...
I hadn't thought about shared devices - yes that is tricky.
> > My gut-feeling is that per netns makes more sense.
> >
> > > Should more lock variables be moved (or fewer)?
> >
> > I'm unsure what you are asking here but I will make a general statement
> > about locking in IPVS: it needs work.
>
> Some locks still reside as global variables, and others are in the netns_ipvs struct.
> Since you have a lot of experience with IPVS locks,
> you might have ideas about what to move and what not to.
My basic thought is that locks tend to relate either to a connection
or to the configuration of a service. And it seems to me that if you
have a per-namespace connection hash table then both of these categories
of locks are good candidates to be made per-namespace.
Do you have any particular locks that you are worried about?
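To make that concrete, the sketch below shows roughly how I would expect
the per-namespace state, including locks that used to be global, to be
set up and looked up through the generic pernet infrastructure. The names
continue the netns_ipvs sketch above and are illustrative only, not taken
from your patch.

#include <net/net_namespace.h>
#include <net/netns/generic.h>
/* struct netns_ipvs as sketched above, in netns/ip_vs.h */

static int ip_vs_net_id __read_mostly;

static int __net_init ip_vs_net_init(struct net *net)
{
	struct netns_ipvs *ipvs = net_generic(net, ip_vs_net_id);

	rwlock_init(&ipvs->svc_lock);		/* was a global rwlock */
	INIT_LIST_HEAD(&ipvs->svc_list);
	return 0;
}

static void __net_exit ip_vs_net_exit(struct net *net)
{
	/* flush this namespace's services and connections here */
}

static struct pernet_operations ip_vs_net_ops = {
	.init	= ip_vs_net_init,
	.exit	= ip_vs_net_exit,
	.id	= &ip_vs_net_id,
	.size	= sizeof(struct netns_ipvs),
};

/*
 * Module init would call register_pernet_subsys(&ip_vs_net_ops), and
 * code that used to take the global lock would instead do something like:
 *
 *	struct netns_ipvs *ipvs = net_generic(net, ip_vs_net_id);
 *	write_lock_bh(&ipvs->svc_lock);
 */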
> > > PATCH SET
> > > This patch set is based upon net-next-2.6 (2.6.36-rc3) from 4 Oct 2010
> > > and [patch v4] ipvs: IPv6 tunnel mode
> > >
> > > Note: ip_vs_xmit.c will not work without "[patch v4] ipvs: IPv6 tunnel mode"
> >
> > Unfortunately the patches don't apply on top of the persistence engine
> > patches, which were recently merged into nf-next-2.6 (although
> > "[patch v4.1 ]ipvs: IPv6 tunnel mode" is still unmerged).
> >
> I do have a patch based on the nf-next without the SIP/PE patch
>
> > I'm happy to work with you to make the required changes there.
>
> I would appreciate that.
No problem. I am a bit busy this week as I am attending the Netfilter
Workshop. But I will try to find some time to rebase your changes soon.
> > (I realise those patches weren't merged when you made your post.
> > But regardless, either you or I will need to update the patches).
> >
> > Another issue is that your patches seem to be split in a way
> > where the build breaks along the way. E.g. after applying
> > patch 1, the build breaks. Could you please split things up
> > so that this doesn't happen? The reason is that
> > it breaks bisection.
> >
> Hmm, Daniel also pointed this out.
> The patch is quite large, and will become even larger with PE and SIP.
> My idea was to review the patch in pieces and put it together in one or two large patches when submitting it.
> I don't know, that might be stupid?
> It's hard to break it up; making the code reentrant causes changes everywhere.
>
> Daniel L. had another approach: break it into many, many tiny patches.
I would prefer the tiny patch approach.
> > Lastly, could you provide a unique subject for each patch?
> > I know it's a bit tedious, but it does make a difference when
> > browsing the changelog.
> >
> Yepp, no problem