From: "Michael S. Tsirkin" <mst@redhat.com>
To: Srivatsa Vaddagiri <vatsa@codeaurora.org>
Cc: Jan Kiszka <jan.kiszka@siemens.com>,
Will Deacon <will@kernel.org>,
konrad.wilk@oracle.com, jasowang@redhat.com,
stefano.stabellini@xilinx.com, iommu@lists.linux-foundation.org,
virtualization@lists.linux-foundation.org,
virtio-dev@lists.oasis-open.org, tsoni@codeaurora.org,
pratikp@codeaurora.org, christoffer.dall@arm.com,
alex.bennee@linaro.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC/PATCH 1/1] virtio: Introduce MMIO ops
Date: Thu, 30 Apr 2020 15:34:24 -0400
Message-ID: <20200430152808-mutt-send-email-mst@kernel.org>
In-Reply-To: <20200430133321.GC3204@quicinc.com>
On Thu, Apr 30, 2020 at 07:03:21PM +0530, Srivatsa Vaddagiri wrote:
> * Jan Kiszka <jan.kiszka@siemens.com> [2020-04-30 14:59:50]:
>
> > >I believe ivshmem2_virtio requires the hypervisor to support PCI device emulation
> > >(for life-cycle management of VMs), which our hypervisor may not support. A
> > >simple shared memory and doorbell or message-queue based transport will work
> > >for us.

PCI is mostly just two registers: one selects the affected device, the other
carries the data to read or write.
> >
> > As written in our private conversation, a mapping of the ivshmem2 device
> > discovery to platform mechanism (device tree etc.) and maybe even the
> > register access for doorbell and life-cycle management to something
> > hypercall-like would be imaginable. What would count more from virtio
> > perspective is a common mapping on a shared memory transport.
>
> Yes that sounds simpler for us.
>
> > That said, I also warned about all the features that PCI already defined
> > (such as message-based interrupts) which you may have to add when going a
> > different way for the shared memory device.
>
> Is it really required to present this shared memory as belonging to a PCI
> device?

But then you will go on and add MSI, and NUMA, and security, and so on ...
> I would expect the device-tree to indicate the presence of this shared
> memory region, which we should be able to present to ivshmem2 as shared memory
> region to use (along with some handles for doorbell or message queue use).
>
> I understand the usefulness of modeling the shared memory as part of device so
> that hypervisor can send events related to peers going down or coming up. In our
> case, there will be other means to discover those events and avoiding this
> requirement on hypervisor (to emulate PCI) will simplify the solution for us.
>
> Any idea when we can expect virtio over ivshmem2 to become available?!
Check out the virtio spec. Right at the beginning it states:

	These devices are found in virtual environments, yet by design they
	look like physical devices to the guest within the virtual machine -
	and this document treats them as such. This similarity allows the
	guest to use standard drivers and discovery mechanisms.
> --
> QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member
> of Code Aurora Forum, hosted by The Linux Foundation
Thread overview: 16+ messages
2020-04-30 10:02 [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO Srivatsa Vaddagiri
2020-04-30 10:02 ` [RFC/PATCH 1/1] virtio: Introduce MMIO ops Srivatsa Vaddagiri
2020-04-30 10:14 ` Will Deacon
2020-04-30 10:34 ` Srivatsa Vaddagiri
2020-04-30 10:41 ` Will Deacon
2020-04-30 11:11 ` Srivatsa Vaddagiri
2020-04-30 12:59 ` Jan Kiszka
[not found] ` <7bf8bffe-267b-6c66-86c9-40017d3ca4c2-kv7WeFo6aLtBDgjK7y7TUQ@public.gmane.org>
2020-04-30 13:33 ` Srivatsa Vaddagiri
2020-04-30 19:34 ` Michael S. Tsirkin [this message]
2020-04-30 10:07 ` [RFC/PATCH 0/1] virtio_mmio: hypervisor specific interfaces for MMIO Michael S. Tsirkin
2020-04-30 10:40 ` Srivatsa Vaddagiri
2020-04-30 10:56 ` Jason Wang
2020-04-30 10:08 ` Will Deacon
2020-04-30 10:29 ` Srivatsa Vaddagiri
2020-04-30 10:39 ` Will Deacon
2020-04-30 11:02 ` Srivatsa Vaddagiri