From: Raphael Norwitz <raphael.norwitz@nutanix.com>
To: "stefanha@redhat.com" <stefanha@redhat.com>,
"marcandre.lureau@redhat.com" <marcandre.lureau@redhat.com>,
"mst@redhat.com" <mst@redhat.com>,
"david@redhat.com" <david@redhat.com>
Cc: "raphael.s.norwitz@gmail.com" <raphael.s.norwitz@gmail.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
Raphael Norwitz <raphael.norwitz@nutanix.com>
Subject: [PATCH v3 5/6] libvhost-user: prevent over-running max RAM slots
Date: Mon, 17 Jan 2022 04:12:34 +0000
Message-ID: <20220117041050.19718-6-raphael.norwitz@nutanix.com>
In-Reply-To: <20220117041050.19718-1-raphael.norwitz@nutanix.com>

When VHOST_USER_PROTOCOL_F_CONFIGURE_MEM_SLOTS support was added to
libvhost-user, no guardrails were added to protect against QEMU
attempting to hot-add too many RAM slots to a VM with a libvhost-user
based backend attached.

This change adds the missing error handling by introducing a check on
the number of RAM slots the device has available before proceeding to
process the VHOST_USER_ADD_MEM_REG message.

Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Raphael Norwitz <raphael.norwitz@nutanix.com>
---
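As a side note for reviewers, below is a rough standalone sketch of the
pattern the new hunk implements: refuse the hot-add when the region table
is already full, and close the fd carried by the message so it is not
leaked. Only VHOST_USER_MAX_RAM_SLOTS, the nregions count, and the fds[0]
handling are taken from the patch; the struct and function names, and the
value 32, are illustrative assumptions here rather than libvhost-user API.

/*
 * Standalone sketch of the capacity guard added by this patch.
 * VHOST_USER_MAX_RAM_SLOTS, nregions and fds[0] mirror the patch;
 * everything else (FakeDev, FakeMsg, can_add_mem_region, main) is an
 * illustrative stand-in, not libvhost-user API.
 */
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

#define VHOST_USER_MAX_RAM_SLOTS 32   /* assumed fixed size of the region table */

typedef struct {
    unsigned int nregions;            /* RAM slots currently in use */
} FakeDev;

typedef struct {
    int fds[1];                       /* fd received with the ADD_MEM_REG message */
} FakeMsg;

/*
 * Returns false (and consumes the fd) when the region table is already
 * full, mirroring the check this patch adds at the top of
 * vu_add_mem_reg() before the message is processed any further.
 */
static bool can_add_mem_region(FakeDev *dev, FakeMsg *msg)
{
    if (dev->nregions == VHOST_USER_MAX_RAM_SLOTS) {
        /* Close the received fd so a rejected hot-add does not leak it. */
        close(msg->fds[0]);
        fprintf(stderr, "no free ram slots available\n");
        return false;
    }
    return true;
}

int main(void)
{
    FakeDev dev = { .nregions = VHOST_USER_MAX_RAM_SLOTS };  /* table already full */
    FakeMsg msg = { .fds = { dup(STDIN_FILENO) } };          /* stand-in fd */

    if (!can_add_mem_region(&dev, &msg)) {
        printf("hot-add rejected: backend already has %u regions\n",
               dev.nregions);
    }
    return 0;
}

Doing the check before any region bookkeeping keeps the error path simple:
at that point the only resource to release is the single fd carried by the
message.
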
 subprojects/libvhost-user/libvhost-user.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/subprojects/libvhost-user/libvhost-user.c b/subprojects/libvhost-user/libvhost-user.c
index 3f4d7221ca..2a1fa00a44 100644
--- a/subprojects/libvhost-user/libvhost-user.c
+++ b/subprojects/libvhost-user/libvhost-user.c
@@ -705,6 +705,14 @@ vu_add_mem_reg(VuDev *dev, VhostUserMsg *vmsg) {
         return false;
     }
 
+    if (dev->nregions == VHOST_USER_MAX_RAM_SLOTS) {
+        close(vmsg->fds[0]);
+        vu_panic(dev, "failing attempt to hot add memory via "
+                      "VHOST_USER_ADD_MEM_REG message because the backend has "
+                      "no free ram slots available");
+        return false;
+    }
+
     /*
      * If we are in postcopy mode and we receive a u64 payload with a 0 value
      * we know all the postcopy client bases have been received, and we
--
2.20.1
Thread overview: 12+ messages
2022-01-17 4:12 [PATCH v3 0/6] Clean up error handling in libvhost-user memory mapping Raphael Norwitz
2022-01-17 4:12 ` [PATCH v3 1/6] libvhost-user: Add vu_rem_mem_reg input validation Raphael Norwitz
2022-01-17 8:19 ` David Hildenbrand
2022-01-17 4:12 ` [PATCH v3 2/6] libvhost-user: Add vu_add_mem_reg " Raphael Norwitz
2022-01-17 8:19 ` David Hildenbrand
2022-01-17 4:12 ` [PATCH v3 3/6] libvhost-user: Simplify VHOST_USER_REM_MEM_REG Raphael Norwitz
2022-01-17 4:12 ` [PATCH v3 4/6] libvhost-user: fix VHOST_USER_REM_MEM_REG not closing the fd Raphael Norwitz
2022-01-17 4:12 ` Raphael Norwitz [this message]
2022-01-17 8:20 ` [PATCH v3 5/6] libvhost-user: prevent over-running max RAM slots David Hildenbrand
2022-01-17 12:32 ` Philippe Mathieu-Daudé via
2022-01-17 4:12 ` [PATCH v3 6/6] libvhost-user: handle removal of identical regions Raphael Norwitz
2022-01-17 8:21 ` David Hildenbrand