From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: Xen developer discussion <xen-devel@lists.xenproject.org>,
netdev@vger.kernel.org
Subject: Re: Layer 3 (point-to-point) netfront and netback drivers
Date: Mon, 19 Sep 2022 17:41:05 -0400
Message-ID: <Yyjh+EfCbiAI4vqi@itl-email>
In-Reply-To: <YyjVQxmIujBMzME3@mattapan.m5p.com>
On Mon, Sep 19, 2022 at 01:46:59PM -0700, Elliott Mitchell wrote:
> On Sun, Sep 18, 2022 at 08:41:25AM -0400, Demi Marie Obenour wrote:
> > How difficult would it be to provide layer 3 (point-to-point) versions
> > of the existing netfront and netback drivers? Ideally, these would
> > share almost all of the code with the existing drivers, with the only
> > difference being how they are registered with the kernel. Advantages
> > compared to the existing drivers include less attack surface (since the
> > peer is no longer network-adjacent), slightly better performance, and no
> > need for ARP or NDP traffic.
>
> I've actually been wondering about a similar idea. How about breaking
> the entire network stack off and placing /that/ in a separate VM?
This is going to be very hard to do without sweeping, difficult changes
to applications. Switching to layer 3 links is a much smaller change
that should be transparent to applications.
> One use for this is a VM could be constrained to *exclusively* have
> network access via Tor. This would allow a better hidden service as it
> would have no network topology knowledge.
That is great in theory, but in practice programs will expect to use
network protocols to connect to Tor. Whonix already implements this
with the current Xen netfront/netback.
> The other use is network cards, which are increasingly able to handle
> more of the network stack. The Linux network team has been resistant
> to allowing more offloading, so perhaps it is time to break
> *everything* off.
Do you have any particular examples? The only one I can think of is
that Linux refuses to support TCP offload engines.
> I'm unsure the benefits would justify the effort, but I keep thinking of
> this as the solution to some interesting issues. Filtering becomes more
> interesting, but BPF could work across VMs.
Classic BPF perhaps, but eBPF's attack surface is far too large for this
to be viable. Unprivileged eBPF is already disabled by default.
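(For reference, by classic BPF I mean the small, fixed cBPF instruction
set, attached for instance with the long-standing SO_ATTACH_FILTER
socket option. A minimal sketch that accepts only IPv4 frames on a
packet socket; the helper name attach_ipv4_only is mine:)

#include <linux/filter.h>	/* struct sock_filter, struct sock_fprog */
#include <linux/if_ether.h>	/* ETH_P_IP */
#include <sys/socket.h>		/* setsockopt, SO_ATTACH_FILTER */

/* Attach a cBPF filter passing only IPv4 frames; returns -1 on error. */
static int attach_ipv4_only(int sock)
{
	struct sock_filter code[] = {
		/* Load the EtherType halfword at offset 12. */
		BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 12),
		/* If it is ETH_P_IP, fall through to accept; else drop. */
		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, ETH_P_IP, 0, 1),
		BPF_STMT(BPF_RET | BPF_K, 0xffffffff),	/* accept packet */
		BPF_STMT(BPF_RET | BPF_K, 0),		/* drop packet */
	};
	struct sock_fprog prog = {
		.len    = sizeof(code) / sizeof(code[0]),
		.filter = code,
	};

	return setsockopt(sock, SOL_SOCKET, SO_ATTACH_FILTER,
			  &prog, sizeof(prog));
}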
--
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab