BPF List
From: Dust Li <dust.li@linux.alibaba.com>
To: Cong Wang <xiyou.wangcong@gmail.com>, lsf-pc@lists.linux-foundation.org
Cc: bpf <bpf@vger.kernel.org>, Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
	"a.mehrab@bytedance.com" <a.mehrab@bytedance.com>
Subject: Re: [LSF/MM/BPF TOPIC] Inter-VM Shared Memory Communications with eBPF
Date: Mon, 4 Mar 2024 17:59:47 +0800	[thread overview]
Message-ID: <20240304095947.GB123222@linux.alibaba.com> (raw)
In-Reply-To: <CAM_iQpXzAYFES62Cbj8PoGqr_OW=R+Y-ac=6s3kmp5373R7RzQ@mail.gmail.com>

On Fri, Feb 23, 2024 at 03:05:59PM -0800, Cong Wang wrote:

Hi Cong,

This is a good topic!
We have proposed another solution, based on SMC-D + virtio-ism, to
transparently accelerate inter-VM TCP/IP communication within the same host:
https://lists.oasis-open.org/archives/virtio-comment/202212/msg00030.html

I wonder: can we do better with your proposal?

Best regards,
Dust


>Hi, all
>
>We would like to discuss our inter-VM shared memory communications
>proposal with the BPF community.
>
>First, VMM (virtual machine monitor) technology offers significant
>advantages over native machines when VMs co-resident on the same
>physical host do not compete for network and computing resources.
>However, VM performance degrades significantly compared to native
>machines when co-resident VMs compete for resources under high
>workload demands, owing to the high overhead of switches and events
>across the host, guest, and VMM domains. Second, the communication
>overhead between co-resident VMs can be as high as that between VMs
>on separate physical machines, because the VM abstraction provided by
>VMM technology does not distinguish whether a data request comes from
>a co-resident VM. More importantly, when TCP/IP is used as the
>communication method, the overhead of the Linux networking stack
>itself is also significant.
>
>Although vsock already offers an optimized alternative for inter-VM
>communication, we argue that its lack of transparency to applications
>is the reason vsock has not been widely adopted. Instead of
>introducing yet another socket family, we propose a novel solution
>that uses shared memory with eBPF to bypass the TCP/IP stack
>completely and transparently, bringing co-resident VM communication
>close to optimal.
>
>We would like to discuss:
>- How to design a new eBPF map type based on IVSHMEM (Inter-VM Shared
>Memory)?
>- How to reuse the existing eBPF ring buffer? (see the first sketch
>below)
>- How to leverage the socket map to replace tcp_sendmsg() and
>tcp_recvmsg() with shared-memory logic? (see the second sketch, after
>the quoted message)
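
For the ring buffer question: the producer-side API already exists, so I
assume the idea is to back the same interface with IVSHMEM pages. Here is a
minimal sketch of today's BPF-side producer; the event layout, map name, and
the tcp_sendmsg kprobe attach point are just illustrative choices, not taken
from your proposal:

/* SPDX-License-Identifier: GPL-2.0 */
/* Minimal producer-side sketch of the existing BPF ring buffer.
 * Event layout, map name, and attach point are illustrative only. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

struct event {
	__u32 pid;
	__u64 ts_ns;
};

struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 1 << 20);	/* size in bytes; power-of-two pages */
} events SEC(".maps");

SEC("kprobe/tcp_sendmsg")
int trace_tcp_sendmsg(void *ctx)
{
	struct event *e;

	/* Reserve a record directly in the shared buffer (no extra copy). */
	e = bpf_ringbuf_reserve(&events, sizeof(*e), 0);
	if (!e)
		return 0;

	e->pid = bpf_get_current_pid_tgid() >> 32;
	e->ts_ns = bpf_ktime_get_ns();

	/* Commit; the consumer is notified via poll on the map fd. */
	bpf_ringbuf_submit(e, 0);
	return 0;
}

An IVSHMEM-backed variant would presumably keep the same reserve/submit
contract while placing the buffer pages in the region shared with the peer VM.
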
>
>
>Thanks.
>Cong
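
On the sockmap point: the existing sk_msg machinery already takes over the
send path once a socket is added to a BPF_MAP_TYPE_SOCKMAP. Below is a
minimal sketch of an SK_MSG verdict program; the single-slot map layout is
purely illustrative, and presumably your proposal would substitute
shared-memory logic for the plain socket-to-socket redirect:

/* SPDX-License-Identifier: GPL-2.0 */
/* Minimal SK_MSG verdict sketch over a sockmap.
 * The single-slot layout is illustrative only. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

struct {
	__uint(type, BPF_MAP_TYPE_SOCKMAP);
	__uint(max_entries, 2);
	__uint(key_size, sizeof(int));
	__uint(value_size, sizeof(int));
} sock_map SEC(".maps");

SEC("sk_msg")
int redirect_msg(struct sk_msg_md *msg)
{
	int key = 0;	/* illustrative: the peer socket lives in slot 0 */

	/* Hand the payload to the peer socket's receive queue,
	 * skipping the sender's TCP transmit path entirely. */
	return bpf_msg_redirect_map(msg, &sock_map, key, BPF_F_INGRESS);
}

Userspace attaches the program with bpf_prog_attach(prog_fd, map_fd,
BPF_SK_MSG_VERDICT, 0) and inserts established sockets into sock_map; from
then on, sendmsg() on those sockets is short-circuited by the verdict above.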


Thread overview: 5+ messages
2024-02-23 23:05 [LSF/MM/BPF TOPIC] Inter-VM Shared Memory Communications with eBPF Cong Wang
2024-03-04  9:59 ` Dust Li [this message]
2024-03-08  3:52   ` Cong Wang
2024-03-11  9:54     ` Dust Li
2024-05-07 17:18       ` Cong Wang
