From: Raphael Norwitz <raphael.norwitz@nutanix.com>
To: "mst@redhat.com" <mst@redhat.com>
Cc: "raphael.s.norwitz@gmail.com" <raphael.s.norwitz@gmail.com>,
"david@redhat.com" <david@redhat.com>,
"mst@redhat.com" <mst@redhat.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"stefanha@redhat.com" <stefanha@redhat.com>,
"marcandre.lureau@redhat.com" <marcandre.lureau@redhat.com>
Subject: Re: [RFC 0/5] Clean up error handling in libvhost-user memory mapping
Date: Tue, 4 Jan 2022 15:46:37 +0000
Message-ID: <20220104154630.GA26497@raphael-debian-dev>
In-Reply-To: <20211215222939.24738-1-raphael.norwitz@nutanix.com>
Ping

On Wed, Dec 15, 2021 at 10:29:46PM +0000, Raphael Norwitz wrote:
> Hey Stefan, Marc-Andre, MST, David -
>
> As promised here is a series cleaning up error handling in the
> libvhost-user memory mapping path. Most of these cleanups are
> straightforward and have been discussed on the mailing list in threads
> [1] and [2]. Hopefully there is nothing super controversial in the first
> 4 patches.
>
> I am concerned about patch 5 “libvhost-user: handle removal of
> identical regions”. From my reading of Stefan's comments in [1], the
> proposal seemed to be to remove any duplicate regions. I’d prefer to
> prevent duplicate regions from being added in the first place. Thoughts?
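>
> To make that concrete, here is a rough sketch of what I have in mind.
> The reg_exists() helper is hypothetical and only illustrative; the
> field names follow the VuDevRegion and VhostUserMemoryRegion
> definitions in libvhost-user.h:
>
> static bool
> reg_exists(VuDev *dev, VhostUserMemoryRegion *r)
> {
>     unsigned int i;
>
>     /* Treat a region as a duplicate only if every field matches. */
>     for (i = 0; i < dev->nregions; i++) {
>         VuDevRegion *cur = &dev->regions[i];
>
>         if (cur->gpa == r->guest_phys_addr &&
>             cur->size == r->memory_size &&
>             cur->qva == r->userspace_addr &&
>             cur->mmap_offset == r->mmap_offset) {
>             return true;
>         }
>     }
>     return false;
> }
>
> Then vu_add_mem_reg() would reject the message before mmap()ing
> anything, e.g.:
>
> if (reg_exists(dev, msg_region)) {
>     vu_panic(dev, "Duplicate region added");
>     return false;
> }
>
> Rejecting duplicates at add time would also keep the
> VHOST_USER_REM_MEM_REG handling simple, since at most one region
> could ever match a removal request.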
>
> [1] https://lore.kernel.org/qemu-devel/20211018143319.GA11006@raphael-debian-dev/
> [2] https://lore.kernel.org/qemu-devel/9391f500-70be-26cf-bcfc-591d3ee84d4e@redhat.com/
>
> Sorry for the delay,
> Raphael
>
> David Hildenbrand (1):
> libvhost-user: Simplify VHOST_USER_REM_MEM_REG
>
> Raphael Norwitz (4):
> libvhost-user: Add vu_rem_mem_reg input validation
> libvhost-user: Add vu_add_mem_reg input validation
> libvhost-user: prevent over-running max RAM slots
> libvhost-user: handle removal of identical regions
>
> subprojects/libvhost-user/libvhost-user.c | 52 +++++++++++++++--------
> 1 file changed, 34 insertions(+), 18 deletions(-)
>
> --
> 2.20.1