From: jamal <hadi@cyberus.ca>
To: Jesse Gross <jesse@nicira.com>
Cc: Ben Pfaff <blp@nicira.com>, netdev@vger.kernel.org, ovs-team@nicira.com
Subject: Re: openvswitch/flow WAS ( Re: [rfc] Merging the Open vSwitch datapath
Date: Sat, 16 Oct 2010 07:35:59 -0400
Message-ID: <1287228959.3664.72.camel@bigi>
In-Reply-To: <AANLkTim2LsFUOVSVss6HqGTxuyGH6eBKQdYabjMoWiaB@mail.gmail.com>
Jesse,

I re-added the other address Ben included earlier in case you
missed it.

Yes, I have heard of TL;DR, but unlike Alan Cox I find it hard to
make a point in a three-word sentence - so please bear with me
and read on.
On Fri, 2010-10-15 at 14:35 -0700, Jesse Gross wrote:
>
> You're right, at a high level, it appears that there is a bit of an
> overlap between bridging, tc, and Open vSwitch.
It looks like Open vSwitch rides on top of OpenFlow, correct?
Earlier I was looking at openflow/datapath, but from a read of
openvswitch/datapath it still looks conceptually the same
at the lower level.
> However, in reality each is targeting a pretty different use case.
Sure, but differences in use cases typically map either to policy
or to the extension/addition of a new mechanism.
To clarify - you have the following approach per VM:
-->ingress port --> filter match --> actions
Did I get this right?
You have a classifier that matches on 10 or so tuples. I could
replicate it with the u32 classifier - though it could be argued
that a brand new "hard-coded" classifier would be needed.
You have a series of actions like redirect/mirror to port, drop, etc.
I can do most of these with existing tc actions, and could probably
replicate most of the rest (the VLAN, MAC address, checksum, etc.
rewrites) with the pedit action - though it could be argued that one
or more new tc actions are needed.
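As a rough illustration of the point above, a single OVS-style flow
(match on IP source and protocol, then redirect to another port) can be
approximated today with the u32 classifier and the mirred action. This
is a sketch only - the device names and address below are made up for
illustration, and the commands need root plus real interfaces to run:

```shell
# Sketch: approximating one per-VM flow with existing tc primitives.
# vnet0/vnet1 and 10.0.0.2 are hypothetical, for illustration only.
tc qdisc add dev vnet0 ingress
tc filter add dev vnet0 parent ffff: protocol ip prio 1 u32 \
    match ip src 10.0.0.2/32 \
    match ip protocol 6 0xff \
    action mirred egress redirect dev vnet1
```

A userspace daemon could emit rules like this per flow, keeping the
flow-table policy out of the kernel.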
Note: in Linux, the above ingress port could be replaced with an
egress port instead. Bridging and L3 come after the actions in
the ingress path; and after that we have exactly the same
port->filter->action approach.
> Given that the design
> goals are not aligned, keeping separate things separate actually helps
> with overall simplicity.
In general I would agree with the simplicity sentiment - but I fail to
see it so far.
A lot of the complexity, such as your own proprietary headers for flows
+actions, doesn't need to sit in the kernel.
IOW, the semantics of OpenFlow already exist, albeit with a different
syntax. You can map the syntax to the semantics in user space. This
adheres to the principle of a simple kernel and external policy.
I am sure that's what you would need to do with OpenFlow on top of an
ASIC chip, for example, no? I can see from the website you already run
on top of Broadcom and Marvell...
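To make the syntax-to-semantics mapping concrete, here is a minimal
sketch of such a userspace translator. The flow dict and its field
names are hypothetical (loosely modeled on OpenFlow match fields); it
just turns a flow description into tc u32 filter arguments:

```python
# Sketch: translate an OpenFlow-style flow match (a dict with
# hypothetical field names) into tc(8) u32/mirred arguments in user
# space, keeping the kernel side generic.
def flow_to_u32(flow):
    args = []
    if "nw_src" in flow:
        args += ["match", "ip", "src", flow["nw_src"]]
    if "nw_proto" in flow:
        args += ["match", "ip", "protocol", str(flow["nw_proto"]), "0xff"]
    if "output_port" in flow:
        args += ["action", "mirred", "egress", "redirect",
                 "dev", flow["output_port"]]
    return args

print(flow_to_u32({"nw_src": "10.0.0.2/32", "nw_proto": 6,
                   "output_port": "vnet1"}))
```

The same mapping layer is what you would need anyway to program a
hardware flow table, which is why it belongs in user space.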
> Where there is overlap, I am certainly happy
> to see common functionality reused: for example, Open vSwitch uses tc
> for its QoS capabilities.
Refer to above.
> In the future, I expect there to be an even clearer delineation
> between the various components. One of the primary use cases of Open
> vSwitch at the moment is for virtualized data center networking but a
> few of the other potential uses that have been brought up include
> security processing (involving sending traffic of interest to
> userspace) and configuring SR-IOV NICs (to appropriately program rules
> in hardware). You can see how each of these makes sense in the
> context of a virtual switch datapath but less so as a set of tc
> actions.
Unless I am misunderstanding - these are clearly control extensions,
but I don't see any of it needing to be in the kernel. It is all
control-path stuff.
I.e. something in user space (maybe even in a hypervisor) that is aware
of the virtualization creates, destroys and manages the VMs (SR-IOV
etc.) and then configures per-VM flows, whether directly in the kernel
or via some ethtool or other interface to the NIC.
> So, in short, I don't see this as something lacking in Linux, just
> complementary functionality.
Like I said above, I don't see the complementary part.
cheers,
jamal