From: Stefan Hajnoczi <stefanha@redhat.com>
To: Viresh Kumar <viresh.kumar@linaro.org>
Cc: qemu-devel@nongnu.org, virtio-dev@lists.oasis-open.org,
"Michael S. Tsirkin" <mst@redhat.com>,
"Vincent Guittot" <vincent.guittot@linaro.org>,
"Alex Bennée" <alex.bennee@linaro.org>,
stratos-dev@op-lists.linaro.org,
"Oleksandr Tyshchenko" <olekstysh@gmail.com>,
xen-devel@lists.xen.org,
"Andrew Cooper" <andrew.cooper3@citrix.com>,
"Juergen Gross" <jgross@suse.com>,
"Sebastien Boeuf" <sebastien.boeuf@intel.com>,
"Liu Jiang" <gerry@linux.alibaba.com>,
"Mathieu Poirier" <mathieu.poirier@linaro.org>
Subject: Re: [PATCH V2] docs: vhost-user: Add Xen specific memory mapping support
Date: Tue, 7 Mar 2023 11:22:37 -0500
Message-ID: <20230307162237.GI124259@fedora>
In-Reply-To: <20230307054336.uvky5d7q2qqlxdcv@vireshk-i7>
On Tue, Mar 07, 2023 at 11:13:36AM +0530, Viresh Kumar wrote:
> On 06-03-23, 10:34, Stefan Hajnoczi wrote:
> > On Mon, Mar 06, 2023 at 04:40:24PM +0530, Viresh Kumar wrote:
> > > +Xen mmap description
> > > +^^^^^^^^^^^^^^^^^^^^
> > > +
> > > ++-------+-------+
> > > +| flags | domid |
> > > ++-------+-------+
> > > +
> > > +:flags: 64-bit bit field
> > > +
> > > +- Bit 0 is set for Xen foreign memory mapping.
> > > +- Bit 1 is set for Xen grant memory mapping.
> > > +- Bit 2 is set if the back-end can directly map additional memory (like
> > > +  descriptor buffers or indirect descriptors, which aren't part of already
> > > +  shared memory regions) without the front-end having to send an additional
> > > +  memory region first.
> >
> > I don't understand what Bit 2 does. Can you rephrase this? It's unclear
> > to me how additional memory can be mapped without a memory region
> > (especially the fd) being sent first.
>
> I (somehow) assumed we would be able to use the same file descriptor
> that was shared for the virtqueues' memory regions, and yes, I can see
> now why that wouldn't work, or would create problems.
>
> And I now need a suggestion on how to make this work.
>
> With Xen grants, the front-end receives grant addresses from the guest
> kernel; they aren't physical addresses, but behave somewhat like
> IOMMU-translated addresses.
>
> Initially, the back-end gets access only to the memory regions of the
> virtqueues. When the back-end gets a request, it reads the descriptor
> and finds the buffer address, which isn't part of the already shared
> regions. The same happens for descriptor addresses if the indirect
> descriptor feature is negotiated.
>
> At this point I was thinking that maybe the back-end could simply call
> mmap or the relevant ioctl to map the memory, using the file descriptor
> used for the virtqueues.
>
> How else can we make this work? We also need to unmap/remove the memory
> region as soon as the buffer is processed, as the grant address won't
> be relevant for any subsequent request.
>
> Should I use VHOST_USER_IOTLB_MSG for this? I did look at it and wasn't
> convinced it was an exact fit. For example, it says that a memory
> address reported with a miss/access failure should be part of an
> already sent memory region, which isn't the case here.
VHOST_USER_IOTLB_MSG probably isn't necessary because address
translation is not required. It would also reduce performance by adding
extra communication.
Instead, you could change the 1 memory region : 1 mmap relationship that
existing non-Xen vhost-user back-end implementations have. In Xen
vhost-user back-ends, the memory region details (including the file
descriptor and Xen domain id) would be stashed away in the back-end when
the front-end adds memory regions. No mmap would be performed upon
VHOST_USER_ADD_MEM_REG or VHOST_USER_SET_MEM_TABLE.
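Roughly, the stashing could look like the sketch below. The flag bits
mirror the proposed Xen mmap description; the struct and helper names
are made up for illustration, not part of the vhost-user spec:

#include <stddef.h>
#include <stdint.h>

/* Hypothetical flag bits, mirroring the proposed Xen mmap description. */
#define XEN_MMAP_FLAG_FOREIGN (1ULL << 0)  /* Xen foreign memory mapping */
#define XEN_MMAP_FLAG_GRANT   (1ULL << 1)  /* Xen grant memory mapping   */

/*
 * Per-region state recorded when VHOST_USER_ADD_MEM_REG /
 * VHOST_USER_SET_MEM_TABLE arrives.  Note: no mmap() happens here.
 */
struct xen_mem_region {
    uint64_t guest_addr;    /* guest address of the region             */
    uint64_t size;          /* region size in bytes                    */
    uint64_t mmap_offset;   /* offset into the fd                      */
    int      fd;            /* fd passed with the vhost-user message   */
    uint64_t xen_flags;     /* foreign vs. grant mapping (bits above)  */
    uint32_t domid;         /* Xen domain id from the mmap description */
};

/* The back-end just records the region; mapping is deferred until use. */
static void add_mem_region(struct xen_mem_region *table, size_t *count,
                           const struct xen_mem_region *reg)
{
    table[(*count)++] = *reg;
}

The point is simply that adding a memory region only records state;
nothing is mapped until the region is actually needed.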
Whenever the back-end needs to do DMA, it looks up the memory region and
performs the mmap + Xen-specific calls (see the sketch after this list):
- A long-lived mmap of the vring is set up when
  VHOST_USER_SET_VRING_ENABLE is received.
- Short-lived mmaps of the indirect descriptors and memory pointed to by
  the descriptors are set up by the virtqueue processing code.
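A rough sketch of the short-lived case, building on the hypothetical
xen_mem_region above; find_region() is an assumed lookup helper, and a
real Xen back-end would use the grant/foreign mapping primitive selected
by the region's flags rather than necessarily a plain mmap() of the fd:

#include <stdint.h>
#include <sys/types.h>
#include <sys/mman.h>

/* Assumed lookup into the stashed region table (see sketch above). */
struct xen_mem_region *find_region(uint64_t addr, uint64_t len);

/*
 * Map a buffer only for the duration of one request, then unmap it.
 * Page alignment of the offset/length is omitted for brevity.
 */
static int with_buffer_mapped(uint64_t addr, uint64_t len,
                              int (*process)(void *buf, uint64_t len))
{
    struct xen_mem_region *reg = find_region(addr, len);
    if (!reg)
        return -1;

    off_t off = (off_t)(reg->mmap_offset + (addr - reg->guest_addr));
    void *map = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
                     reg->fd, off);
    if (map == MAP_FAILED)
        return -1;

    int ret = process(map, len);  /* e.g. walk indirect descriptors */

    munmap(map, len);             /* the grant address won't be reused */
    return ret;
}

The mapping lives only as long as one request, which matches the point
above that a grant address won't be relevant for subsequent requests.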
Does this sound workable to you?
Stefan