From: Gustavo Romero <gustavo.romero@linaro.org>
To: Markus Armbruster <armbru@redhat.com>
Cc: qemu-devel@nongnu.org, philmd@linaro.org, thuth@redhat.com,
	lvivier@redhat.com, qemu-arm@nongnu.org, alex.bennee@linaro.org,
	pbonzini@redhat.com, anton.kochkov@proton.me,
	richard.henderson@linaro.org, peter.maydell@linaro.org,
	Bill Mills <bill.mills@linaro.org>
Subject: Re: [PATCH 0/6] Add ivshmem-flat device
Date: Mon, 22 Apr 2024 13:47:07 -0300
Message-ID: <a28f3657-c827-7a0d-a8da-b82d17d17577@linaro.org>
In-Reply-To: <87wmqp3xug.fsf@pond.sub.org>

Hi Markus,

Thanks for your interest in the ivshmem-flat device.

Bill Mills (cc'ed) is the best person to answer your question,
so please find his answer below.

On 2/28/24 3:29 AM, Markus Armbruster wrote:
> Gustavo Romero <gustavo.romero@linaro.org> writes:
> 
> [...]
> 
>> This patchset introduces a new device, ivshmem-flat, which is similar to the
>> current ivshmem device but does not require a PCI bus. It implements the ivshmem
>> status and control registers as MMRs and the shared memory as a directly
>> accessible memory region in the VM memory layout. It's meant to be used on
>> machines like those with Cortex-M MCUs, which usually lack a PCI bus, e.g.,
>> lm3s6965evb and mps2-an385. Additionally, it has the benefit of requiring a tiny
>> 'device driver,' which is helpful on some RTOSes, like Zephyr, that run on
>> memory-constrained targets.
>>
>> The patchset includes a QTest for the ivshmem-flat device; however, it's also
>> possible to experiment with it in two ways:
>>
>> (a) using two Cortex-M VMs running Zephyr; or
>> (b) using one aarch64 VM running Linux with the ivshmem PCI device and another
>>      arm (Cortex-M) VM running Zephyr with the new ivshmem-flat device.
>>
>> Please note that for running the ivshmem-flat QTests the following patch, which
>> is not committed to the tree yet, must be applied:
>>
>> https://lists.nongnu.org/archive/html/qemu-devel/2023-11/msg03176.html
> 
> What problem are you trying to solve with ivshmem?
> 
> Shared memory is not a solution to any communication problem, it's
> merely a building block for building such solutions: you invariably have
> to layer some protocol on top.  What do you intend to put on top of
> ivshmem?

Actually, ivshmem is shared memory plus bi-directional notifications (in this case a doorbell register and an IRQ).
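
For concreteness, here is a minimal C sketch of that register block as laid
out in QEMU's ivshmem device specification; the struct and helper names are
mine, purely for illustration:

  /* ivshmem control/status registers, per QEMU's ivshmem device
   * specification; struct and helper names are illustrative. */
  #include <stdint.h>

  struct ivshmem_regs {
      volatile uint32_t intr_mask;    /* 0x00: interrupt mask (legacy INTx) */
      volatile uint32_t intr_status;  /* 0x04: interrupt status (legacy INTx) */
      volatile uint32_t iv_position;  /* 0x08: this peer's ID, read-only */
      volatile uint32_t doorbell;     /* 0x0c: doorbell, write-only */
  };

  /* Notify a peer: the high 16 bits of a doorbell write select the
   * target peer ID, the low 16 bits select one of its vectors. */
  static inline void ivshmem_notify(struct ivshmem_regs *regs,
                                    uint16_t peer_id, uint16_t vector)
  {
      regs->doorbell = ((uint32_t)peer_id << 16) | vector;
  }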

These are the fundamental requirements for many types of communication, but our interest is in the OpenAMP project [1].

All the OpenAMP project's communication is based on shared memory and bi-directional notifications.  Often this is on an AMP SoC with Cortex-As and Cortex-Ms or -Rs.  However, we are now expanding into PCIe-based AMP.  One example is an x86 host computer and a PCIe card with an Arm SoC.  Another is two systems, each with its own PCIe root complex, connected via a non-transparent bridge.

The existing PCI-based ivshmem lets us model these types of systems in a simple, generic way without worrying about the details of the RC/EP relationship or of a specific non-transparent bridge.  In fact, to the two (or more) systems, ivshmem looks like a non-transparent bridge with its own memory (and no other memory access is allowed).
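
Such a two-VM setup can be tried today with QEMU's ivshmem-doorbell device
and the ivshmem-server tool from QEMU's contrib/; a sketch, where the socket
path, memory size, and vector count are illustrative (check ivshmem-server -h
for the exact flags):

  # One server owns the shared memory and distributes doorbell eventfds.
  ivshmem-server -S /tmp/ivshmem_socket -l 1M -n 1

  # Each of the two VMs then connects through a socket chardev.
  qemu-system-aarch64 ... \
      -chardev socket,path=/tmp/ivshmem_socket,id=ivsh \
      -device ivshmem-doorbell,chardev=ivsh,vectors=1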

Right now we are testing this with RPMsg between two QEMU systems, where both are Cortex-A53 and both run Zephyr [2].
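
On the Zephyr side, both halves (shared memory and notifications) are
reachable through the ivshmem driver API in
zephyr/drivers/virtualization/ivshmem.h.  A rough sketch, assuming exactly
two peers and a devicetree node with the qemu,ivshmem compatible:

  /* Rough sketch against Zephyr's ivshmem driver API; the two-peer
   * assumption and the peer-ID arithmetic are illustrative. */
  #include <zephyr/kernel.h>
  #include <zephyr/device.h>
  #include <zephyr/drivers/virtualization/ivshmem.h>

  static void ivshmem_demo(void)
  {
      const struct device *ivshmem = DEVICE_DT_GET_ONE(qemu_ivshmem);
      uintptr_t shmem;
      size_t size = ivshmem_get_mem(ivshmem, &shmem); /* shared region */
      uint32_t my_id = ivshmem_get_id(ivshmem);       /* this peer's ID */

      /* ... place an RPMsg/vring payload in the region at 'shmem' ... */

      /* Ring vector 0 of the other peer (peer IDs 0 and 1 assumed). */
      ivshmem_int_peer(ivshmem, my_id ^ 1, 0);
      (void)size;
  }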

We will expand this by switching one of the QEMU systems to either arm64 Linux or x86 Linux.

We (and others) are also working on a generic virtio transport that will work between any two systems as long as they have shared memory and bi-directional notifications.

Now for ivshmem-flat.  We want to expand this model to include MCU-like CPUs and RTOSes that don't have PCIe.  We focus on Cortex-M because every open source RTOS already has a port for one of the Cortex-M machines in QEMU; however, they don't normally pick the same one.  If we added our own custom machine for this, the QEMU project would push back, and even if it were accepted we would have to do a port for each RTOS, which would mean testing fewer RTOSes.

ivshmem-flat is actually a good model for what a Cortex-M-based PCIe card would look like.  The host system would see the connection as PCIe, but to the Cortex-M it would just appear as memory, MMRs for the doorbell, and an IRQ.
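
To make that concrete, a hypothetical bare-metal view from the Cortex-M
side; both base addresses below are invented for illustration (in practice
they depend on where the device is mapped into the machine's address space),
and the register offsets follow the ivshmem layout sketched above:

  /* Hypothetical Cortex-M firmware view of ivshmem-flat; both base
   * addresses are made up for illustration. */
  #include <stdint.h>

  #define IVSHMEM_MMR_BASE  0x400ff000UL  /* control/status MMRs (assumed) */
  #define IVSHMEM_SHM_BASE  0x90000000UL  /* shared memory window (assumed) */

  #define IVSHMEM_IVPOSITION (*(volatile uint32_t *)(IVSHMEM_MMR_BASE + 0x08))
  #define IVSHMEM_DOORBELL   (*(volatile uint32_t *)(IVSHMEM_MMR_BASE + 0x0c))

  /* Ring vector 0 of peer 0; incoming doorbells from the peer arrive
   * as an ordinary NVIC interrupt wired by the device. */
  static void notify_peer0(void)
  {
      IVSHMEM_DOORBELL = (0u << 16) | 0u;
  }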

So even after we have a "roll your own machine definition from a file" mechanism, I expect ivshmem and ivshmem-flat to still be very useful.

[1] https://www.openampproject.org/
[2] Work in progress here: https://github.com/OpenAMP/openamp-system-reference/tree/main/examples/zephyr/dual_qemu_ivshmem


Cheers,
Gustavo

