From: Srivatsa Vaddagiri <quic_svaddagi@quicinc.com>
To: Yongji Xie <xieyongji@bytedance.com>
Cc: Jason Wang <jasowang@redhat.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
<virtio-dev@lists.linux.dev>, <virtualization@lists.linux.dev>,
<quic_mnalajal@quicinc.com>, <quic_eberman@quicinc.com>,
<quic_pheragu@quicinc.com>, <quic_pderrin@quicinc.com>,
<quic_cvanscha@quicinc.com>, <quic_pkondeti@quicinc.com>,
<quic_tsoni@quicinc.com>
Subject: Re: [RFC] vduse config write support
Date: Fri, 26 Jul 2024 12:36:09 +0530 [thread overview]
Message-ID: <20240726070609.GB723942@quicinc.com> (raw)
In-Reply-To: <CACycT3sGqd7jT2--Srt2de0gDPs6+AE9vMgX-ObBgKoZoqoBJw@mail.gmail.com>
* Yongji Xie <xieyongji@bytedance.com> [2024-07-26 10:37:40]:
> On Wed, Jul 24, 2024 at 11:38 AM Srivatsa Vaddagiri
> <quic_svaddagi@quicinc.com> wrote:
> >
> > Currently vduse does not seem to support configuration space writes
> > (vduse_vdpa_set_config does nothing). Is there any plan to lift that
> > limitation? I am aware of the discussions that took place here:
> >
>
> The problem is that the current virtio code does not allow a config write to fail.
Ok got it.
> > We will however likely need vduse to support configuration writes (a guest VM
> > updating configuration space, for example writing to the 'events_clear' field
> > in the case of virtio-gpu). Would vduse maintainers be willing to accept
> > config_write support for select devices/features (as long as the writes don't
> > violate any safety concerns we may have)?
> >
>
> It would be easier to support it if the config write just triggers an
> async operation on the device side, e.g. a doorbell. That means we
> can't ensure that any required internal actions on the device side
> triggered by the config write have been completed after the driver
> gets a successful return. But I'm not sure if this is your case.
Yes, a posted write should be fine, as long as the guest issues a read after it
as a fence; that read then serves as the sync point. As discussed in my earlier
reply, we can explore injecting a surprise-removal event into the guest when the
VDUSE daemon does not respond within a timeout.
Thanks
vatsa
Thread overview: 14+ messages
2024-07-24 3:38 [RFC] vduse config write support Srivatsa Vaddagiri
2024-07-26 2:37 ` Yongji Xie
2024-07-26 7:06 ` Srivatsa Vaddagiri [this message]
2024-07-26 2:47 ` Jason Wang
2024-07-26 5:15 ` Michael S. Tsirkin
2024-07-29 2:06 ` Jason Wang
2024-07-26 7:03 ` Srivatsa Vaddagiri
2024-07-26 7:29 ` Michael S. Tsirkin
2024-07-29 2:16 ` Jason Wang
2024-07-29 6:02 ` Srivatsa Vaddagiri
2024-07-30 3:06 ` Jason Wang
2024-07-30 3:10 ` Jason Wang
2024-07-26 12:42 ` Srivatsa Vaddagiri
2024-07-30 2:53 ` Jason Wang